http://quant.stackexchange.com/questions/3660/derivation-of-formula-for-portfolio-skewness-and-kurtosis/3662
# derivation of formula for portfolio skewness and kurtosis
Where can I find a derivation of the formulas for portfolio skewness and kurtosis? I can find the formulas everywhere, but not their derivations. For example, the portfolio variance formula is well known and I can find its derivation in a lot of books, but I can't find anything on the portfolio skewness and kurtosis formulas. They are just given as they are. I'm not strong enough at probability theory to derive the formulas from the expectation operator. Who was the first person to derive them? Where were they first published?
-
## 1 Answer
What is the data basis that you start from? If you just have the covariance matrix, then you can only calculate portfolio variance or volatility by $$w^T \Sigma w$$ where $w$ are the portfolio weights and $\Sigma$ is the covariance matrix. If you have the individual asset continuously compounded returns $r^j_t$ where $j$ indexes assets, $j=1,\ldots,N$, and $t$ stands for time, $t=1,\ldots,T$, then you can also calculate the portfolio returns for each point in time $$r_t = \sum_{j=1}^N w_j r^j_t$$ and then apply the standard variance estimator on $(r_t)_{t=1}^T$. Coming back to your question, having $(r_t)_{t=1}^T$ you can calculate skewness and kurtosis on this sample. You can find the formulas on Wikipedia.
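Not part of the original answer, but the recipe above is easy to check numerically. A minimal Python sketch with simulated returns (the weights, sample size, and seed are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: T = 1000 periods, N = 3 assets.
R = rng.normal(size=(1000, 3))      # r_t^j: rows index time t, columns index assets j
w = np.array([0.5, 0.3, 0.2])       # portfolio weights summing to 1

r = R @ w                           # portfolio return r_t = sum_j w_j r_t^j
Sigma = np.cov(R, rowvar=False)     # sample covariance matrix of asset returns
var_quad = w @ Sigma @ w            # w^T Sigma w
var_direct = r.var(ddof=1)          # same number, computed from the portfolio series

z = (r - r.mean()) / r.std()        # standardized portfolio returns
skew = (z**3).mean()                # sample skewness (0 for symmetric data)
kurt = (z**4).mean()                # sample kurtosis (about 3 for normal data)
```

Note that the sample variance of $(r_t)$ agrees exactly with $w^T \Sigma w$ when both use the same degrees-of-freedom correction, by bilinearity of the sample covariance; the skewness and kurtosis are then just the standard estimators applied to the portfolio series.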
-
I know how to calculate them when I have the formulae, but the derivation of the given formulas is what I need. You can't just state the formula as it is without any proof or derivation; that is what I need. – mary Jun 23 '12 at 18:38
@mary When you go to the level of portfolio returns then the standard estimators for skewness and kurtosis apply. So any derivation of these estimators holds as a general proof. The first formula above, e.g., is just algebra and the standard estimator. In the case that we assume iid returns (w.r.t. time) we just work with a sample and the portfolio case is not different from the standard case. – Richard Aug 20 '12 at 20:11
http://mathoverflow.net/questions/26695?sort=newest
## Slick verification of the model category axioms for Spaces and SSets with the q-model structure?
We choose our category of spaces to be compactly generated weak Hausdorff spaces for convenience, denoted $CGWH$.
Questions:
1.) Is there any sort of slick argument to verify that CGWH with the Quillen model structure is a right-proper (closed) model category?
2.) If we give the following presentation of the model structure on SSet:
Cofibrations are monomorphisms
Fibrations have the RLP with respect to all horn inclusions $\Lambda^n_i \subseteq \Delta^n$ for $0\leq i \leq n$.
Or instead of the characterization of cofibrations, we could instead give:
Trivial fibrations have the RLP with respect to all boundary inclusions $\partial\Delta^n \subseteq \Delta^n$.
(The point of picking a nice presentation is that the (morally) right choice of definition often simplifies a proof.)
Is there any way to verify the model category axioms more easily? The proofs I've seen appeal to all of the hard work done in question 1. It seems like one should be able to verify the axioms for SSet more easily than the case of CGWH spaces.
-
Why does it seem like SSet should be easier? SSet has all sorts of problems that CGWH doesn't, like the fact that you can't compose homotopies in general. – Mike Shulman Jun 1 2010 at 17:22
There are proofs for SSet that don't go via CGWH, though; there's one in some notes that I can't remember where to find, and another one in the forthcoming sequel to "A concise course in algebraic topology." – Mike Shulman Jun 1 2010 at 17:23
I believe that there's also a forthcoming book by Joyal and Tierney that contains a combinatorial verification of the model category axioms for SSet. Tom Fiore once told me he saw a draft online, but I don't know where, and I can't find it. (These might be the notes Mike mentions?) – Dan Ramras Jun 1 2010 at 20:26
Your second characterization in 2. is not sufficient to determine the q-model structure in sSet---it simply determines the cofibrations, but there are numerous model structures with fewer weak equivalences than the q-model structure but the same cofibrations. One example is the Joyal model structure. What steps of the usual proofs do you think are "hard"? Simplicial sets are convenient for many arguments, but Top has many great properties that sSet does not. – Sam Isaacson Jun 2 2010 at 1:45
@Harry, sorry I misread your question. By Cisinski's work, you can easily construct a "minimal" model structure on sSet (one with the fewest weak equivalences given that Cof = Mono) and then left Bousfield localize at the horn inclusions. But then checking that weak homotopy equivalences $X \to Y$ are the same as those maps inducing isos $Ho(Y, Z) \to Ho(X, Z)$ for all Kan $Z$ requires some combinatorics. Incidentally, the fact that you detect all acyclic cofs just with inner horn inclusions is fairly special. I think Cisinski's proof is beautiful, but not concise. – Sam Isaacson Jun 2 2010 at 4:25
## 1 Answer
The Joyal and Tierney notes contain a combinatorial proof as Dan says. They are available here. (Wanted to post this in the comments, but it seems impossible to do without sufficient reputation.) I should also mention that it is possible to give a reasonably slick proof of the model category axioms for simplicial sets by using the Cisinski machinery (see his monograph: Astérisque vol. 308).
-
Is there any way to get a copy of Cisinki's book without paying 90-some Euros? To be honest, I'd rather do something like pay him directly for an electronic edition instead of shelling out the 20 USD for shipping from France. – Harry Gindi Jun 1 2010 at 21:50
I'd have thought that most university libraries should have a copy of the Astérisque volumes. That said, a fair amount of the same material (as far as the model structure on simplicial sets is concerned) can be found in his paper "Théories homotopiques dans les topos" which is available on his website. – Michael A Warren Jun 1 2010 at 22:09
What do you know, they did have it, and it looks brand new! – Harry Gindi Jun 1 2010 at 23:07
If you're in North America and you want to own your own copy, you can buy Astérisque volumes directly from the AMS (but Cisinski's book is 107 USD). You can download a PDF from his website: www-math.univ-paris13.fr/~cisinski/ast.pdf – Sam Isaacson Jun 2 2010 at 0:41
Thanks for the link to the Joyal-Tierney notes! – Dan Ramras Jun 3 2010 at 4:14
http://math.stackexchange.com/questions/252684/integration-with-trig-function/252685
# integration with trig function
I am trying to do some practice problems for which there aren't any posted solutions, and since I am stuck I thought I should ask for help.
$$\iint_R\cos\left(\frac\pi 2x^2\right)\,dx\,dy,$$ where $R$ is the triangle enclosed by the line $y=x$, the vertical line $x=1$ and the $x$-axis.
How I set this integral up is:
$$\int_0^1\int_y^1\cos\left(\frac\pi 2x^2\right)\,dx\,dy,$$
(outer integral: $y$ runs from the lower limit $0$ to the upper limit $1$)
then once I integrated with respect to $x$ I got $\cfrac{\sin\left(\frac\pi 2x^2\right)}{\pi x}$ which gets messy once you plug in upper and lower limit of $x$. The part I am stuck at is how to proceed with integrating with respect to $y$...
-
## 1 Answer
Hint: Integrate first with respect to $y$. Nothing much happens: when you do the usual substitution of endpoints, you get $x\cos\left(\dfrac{\pi x^2}{2}\right)$.
Then integrate with respect to $x$. There is an obvious substitution $u=\dfrac{\pi x^2}{2}$.
Remark: Sometimes in a double integral, integrating in one order may be extremely painful, while integrating in the other order may be straightforward.
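Not from the thread, but the hint is easy to verify numerically: with the suggested order and substitution, the exact value of the double integral works out to $\frac{1}{\pi}\sin\left(\frac{\pi x^2}{2}\right)\Big|_0^1 = \frac{1}{\pi}$. A quick check with scipy (assuming scipy is available) agrees:

```python
import numpy as np
from scipy.integrate import dblquad

# Region: 0 <= y <= x <= 1. In dblquad the inner variable is the first
# argument of the integrand, so integrate y from 0 to x, then x from 0 to 1.
val, err = dblquad(lambda y, x: np.cos(np.pi * x**2 / 2),
                   0, 1,                # x limits
                   lambda x: 0.0,       # inner lower limit for y
                   lambda x: x)         # inner upper limit for y
# Exact value: integral of x*cos(pi*x^2/2) over [0,1] = sin(pi/2)/pi = 1/pi
```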
-
@André Nicolas thanks a lot, I think that will get me moving! With these functions, when integrating cos I always just think of what I would do if I differentiated and do the opposite, so instead of multiplying by $\pi x$ I divide by $\pi x$ – Raynos Dec 7 '12 at 0:16
Yes, your integration of $\cos(\pi x^2/2)$ was not right. You can check by differentiating. In fact the integration of $\cos(\pi x^2/2)$ is "impossible" (in terms of elementary functions). But you will I hope find the integration of $x\cos(\pi x^2/2)$ easy, letting $u$ be as suggested in the answer. – André Nicolas Dec 7 '12 at 0:31
http://mathhelpforum.com/differential-geometry/143031-gamma-function-residue.html
1. ## Gamma function, residue
Show that for $m \geq 0$, the residue of $\Gamma(z)$ at $z = -m$ is $\frac{(-1)^m}{m!}$.
$\Gamma(z)$ is the gamma function. The gamma function is meromorphic. It is defined in the right half-plane by $\Gamma(z)= \int_0^{\infty} e^{-t}t^{z-1}dt$ for $\text{Re}(z)>0$. There is also another representation of $\Gamma(z)=\frac{\Gamma(z+m)}{(z+m-1) \cdots (z+1)z}$ where the right-hand side is defined and meromorphic for $\text{Re}(z)>-m$ with simple poles at $z=0, -1, \ldots, -m+1$. However, I still do not see how to prove this. I need help with this. Thank you.
2. In general, the residue of $f(z)$ at $z=z_0$ is the residue of $f(z+z_0)$ at $z=0$. Now we have $\Gamma(z)=(z-1)\dots(z-m)\Gamma(z-m)$. Therefore what you are looking for is the residue of $\frac{\Gamma(z)}{(z-1)\dots(z-m)}$ at $z=0$. The denominator is analytic at $z=0$ and its value is $(-1)^mm!$; so the solution will be $\frac{(-1)^mR}{m!}$ where $R$ is the residue of $\Gamma(z)$ at $z=0$... can you finish now?
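As a sanity check (not part of the thread), the claimed residues can be approximated numerically, since $(z+m)\,\Gamma(z)\to\frac{(-1)^m}{m!}$ as $z\to -m$. A small Python sketch:

```python
from math import gamma, factorial

def gamma_residue(m, eps=1e-7):
    # Approximate Res_{z=-m} Gamma(z) by evaluating (z + m) * Gamma(z) near z = -m.
    z = -m + eps
    return (z + m) * gamma(z)

# Compare against the claimed closed form (-1)^m / m! for the first few poles.
for m in range(5):
    expected = (-1) ** m / factorial(m)
    assert abs(gamma_residue(m) - expected) < 1e-5
```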
http://mathhelpforum.com/number-theory/136036-modular-arithmetic.html
1. ## Modular Arithmetic
I'm having a hard time understanding modular numbers and arithmetic.
If I have $r=a(\mod P)$, then $r-a=bP$ where $b$ is an integer. So if I want to find 23 in $\mod 12$, I write $r=23(\mod 12)$ which should turn out to be 11 right? So 23 is equal to 11 in $\mod 12$.
So if I want to find:
$r=26(\mod 12)$
I need to find r such that:
$r-26=b12$
where $b$ is an integer and $r$ is the smallest possible positive number satisfying the equation. Is this correct? 26 hours would bring us to 2 o'clock, since 2 is the smallest positive integer such that $2-26=b12$ where $b$ is an integer. So here, $b=-2$.
-----------------------------------------------------------------
One more example with a different mod that isn't so familiar:
$r=3(\mod 7)$
$r-3=b7$
So the smallest possible number seems to be 10 with $b=1$.
$10=3(\mod 7)$
where b is an integer.
------------------------------------------------------------------
Is my reasoning correct?
2. Originally Posted by adkinsjr
I'm having a hard time understanding modular numbers and arithmetic.
If I have $r=a(\mod P)$, then $r-a=bP$ where $b$ is an integer. So if I want to find 23 in $\mod 12$, I write $r=23(\mod 12)$ which should turn out to be 11 right? So 23 is equal to 11 in $\mod 12$.
So if I want to find:
$r=26(\mod 12)$
I need to find r such that:
$r-26=b12$
where $b$ is an integer and $r$ is the smallest possible positive number satisfying the equation. Is this correct? 26 hours would bring us to 2 o'clock, since 2 is the smallest positive integer such that $2-26=b12$ where $b$ is an integer. So here, $b=-2$.
-----------------------------------------------------------------
$23\equiv 11\pmod{12}$ since $23\pmod{12}\equiv11+12\pmod{12}\equiv11+0\pmod{12} \equiv11\pmod{12}$
and
$26\equiv 2\pmod{12}$ since $26\equiv 24+2\pmod{12}\equiv 2(12)+2\pmod{12}\equiv 0+2\pmod{12}\equiv 2\pmod{12}$
One more example with a different mod that isn't so familiar:
$r=3(\mod 7)$
$r-3=b7$
So the smallest possible number seems to be 10 with $b=1$.
$10=3(\mod 7)$
where b is an integer.
------------------------------------------------------------------
In this case, $r=3\equiv3\pmod{7}$ is the smallest positive value. Can you try to reason out why?
3. Originally Posted by Chris L T521
$23\equiv 11\pmod{12}$ since $23\pmod{12}\equiv11+12\pmod{12}\equiv11+0\pmod{12} \equiv11\pmod{12}$
and
$26\equiv 2\pmod{12}$ since $26\equiv 24+2\pmod{12}\equiv 2(12)+2\pmod{12}\equiv 0+2\pmod{12}\equiv 2\pmod{12}$
In this case, $r=3\equiv3\pmod{7}$ is the smallest positive value. Can you try to reason out why?
I'm not understanding this at all:
$23\equiv 11\pmod{12}$ since $\underbrace{23\pmod{12}\equiv11+12\pmod{12}\equiv11+0\pmod{12}\equiv11\pmod{12}}$
You're going through all of those steps I underbraced, whereas I'm saying that $23=11(\mod 12)$ because 23 is the smallest number such that $23-11=b12$, where $b$ is an integer. In this case, $b=-1$. Any number in $0\leq x\leq 23$ will not give an integral multiple of twelve when I subtract 11.
In this case, $r=3\equiv3\pmod{7}$ is the smallest positive value. Can you try to reason out why?
No, my reasoning wouldn't work here then. For $r=a(\mod P)$, I find $r$ such that $r$ is the smallest number such that $r-a=bP$
So if $3=3(\mod 7)$, then $3-3=b7$ therefore $b=0$. So if I have $r=11 (\mod 12)$, it seems that I could just as easily claim that this is equal to 11 since $11-11=b12$ with $b=0$ again.
4. I think you've got your definition backwards. You say "23= 11 (mod 12) because 23 is the smallest number such that 23- 11= b12, where b is an integer." No, 23= 11 (mod 12) because 11 is the smallest number such that 23- 11 is a multiple of 12. And, of course, 11= 11 (mod 12) because 11 is the smallest number such that 11- 11 is a multiple of 12.
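For what it's worth, the convention described here (the least nonnegative $r$ such that $r-a$ is a multiple of $P$) is exactly what the `%` operator computes in Python for a positive modulus. A quick check of the thread's examples:

```python
# Python's % operator returns the least nonnegative residue for a positive
# modulus, i.e. the r described above: r - a is a multiple of P and 0 <= r < P.
assert 23 % 12 == 11   # 23 - 11 = 12, a multiple of 12
assert 26 % 12 == 2    # 26 - 2 = 24 = 2 * 12, matching the "2 o'clock" reasoning
assert 3 % 7 == 3      # 3 - 3 = 0 = 0 * 7, so r = 3, not 10
```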
http://unapologetic.wordpress.com/2010/09/02/bounded-linear-transformations/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician
## Bounded Linear Transformations
In the context of normed vector spaces we have a topology on our spaces and so it makes sense to ask that maps $f:V\to W$ between them be continuous. In the finite-dimensional case, all linear functions are continuous, so this hasn’t really come up before in our study of linear algebra. But for functional analysis, it becomes much more important.
Now, really we only need to require continuity at one point — the origin, to be specific — because if it’s continuous there then it’ll be continuous everywhere. Indeed, continuity at $v'$ means that for any $\epsilon>0$ there is a $\delta>0$ so that $\lVert v-v'\rVert_V<\delta$ implies $\lVert f(v)-f(v')\rVert_W=\lVert f(v-v')\rVert_W<\epsilon$. In particular, if $v'=0$, then this means $\lVert v\rVert_V<\delta$ implies $\lVert f(v)\rVert_W<\epsilon$. Clearly if this holds, then the general version also holds.
But it turns out that there’s another equivalent condition. We say that a linear transformation $f:V\to W$ is “bounded” if there is some $M>0$ such that $\lVert f(v)\rVert_W\leq M\lVert v\rVert_V$ for all $v\in V$. That is, the factor by which $f$ stretches the length of a vector is bounded. By linearity, we only really need to check this on the unit sphere $\lVert v\rVert_V=1$, but it’s often just as easy to test it everywhere.
Anyway, I say that a linear transformation is continuous if and only if it’s bounded. Indeed, if $f:V\to W$ is bounded, then we find
$\displaystyle M\lVert h \rVert_V\geq\lVert f(h)\rVert_W=\lVert f(v+h)-f(v)\rVert_W$
so as we let $h$ approach $0$ — as $v+h$ approaches $v$ — the difference between $f(v+h)$ and $f(v)$ approaches zero as well. And so $f$ is continuous.
Conversely, if $f$ is continuous, then it is bounded. Since it’s continuous, we let $\epsilon=1$ and find a $\delta$ so that $\lVert f(h)\rVert_W<1$ for all vectors $h$ with $\lVert h\rVert_V<\delta$. Thus for all nonzero $v\in V$ we find
$\displaystyle\begin{aligned}\lVert f(v)\rVert_W&=\left\lVert\frac{\lVert v\rVert_V}{\delta}f\left(\delta\frac{v}{\lVert v\rVert_V}\right)\right\rVert_W\\&=\frac{\lVert v\rVert_V}{\delta}\left\lVert f\left(\delta\frac{v}{\lVert v\rVert_V}\right)\right\rVert_W\\&\leq\frac{\lVert v\rVert_V}{\delta}\cdot 1\\&=\frac{1}{\delta}\lVert v\rVert_V\end{aligned}$
Thus we can use $M=\frac{1}{\delta}$ and conclude that $f$ is bounded.
The least such $M$ that works in the condition for $f$ to be bounded is called the “operator norm” of $f$, which we write as $\lVert f\rVert_\text{op}$. It’s straightforward to verify that $\lVert cf\rVert_\text{op}=\lvert c\rvert\lVert f\rVert_\text{op}$, and that $\lVert f\rVert_\text{op}=0$ if and only if $f$ is the zero operator. It remains to verify the triangle inequality.
Let’s say that we have bounded linear transformations $f:V\to W$ and $g:T\to W$ with operator norms $M=\lVert f\rVert_\text{op}$ and $N=\lVert g\rVert_\text{op}$, respectively. We will show that $M+N$ works as a bound for $f+g$, and thus conclude that $\lVert f+g\rVert_\text{op}\leq\lVert f\rVert_\text{op}+\lVert g\rVert_\text{op}$. Indeed, we check that
$\displaystyle\begin{aligned}\lVert[f+g](v)\rVert_W&=\lVert f(v)+g(v)\rVert_W\\&\leq\lVert f(v)\rVert_W+\lVert g(v)\rVert_W\\&\leq M\lVert v\rVert_V+N\lVert v\rVert_V\\&=(M+N)\lVert v\rVert_V\end{aligned}$
and our assertion follows. In particular, when our base field is itself a normed linear space (like $\mathbb{C}$ or $\mathbb{R}$ itself) we can conclude that the “continuous dual space” $V'$ consisting of bounded linear functionals $\Lambda:V\to\mathbb{F}$ is a normed linear space using the operator norm on $V'$.
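None of this is in the post, but in finite dimensions the operator norm is concrete: for a matrix acting between Euclidean spaces it equals the largest singular value. A small numpy sketch (the matrices and seed are arbitrary) illustrating both the bound $\lVert f(v)\rVert\leq\lVert f\rVert_\text{op}\lVert v\rVert$ and the triangle inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))   # any linear map on R^3; finite-dimensional, hence bounded
B = rng.normal(size=(3, 3))

def op_norm(M):
    # Operator norm w.r.t. Euclidean norms = largest singular value of M.
    return np.linalg.norm(M, 2)

# ||A v|| <= ||A||_op * ||v|| for every v (small tolerance for rounding)
for _ in range(100):
    v = rng.normal(size=3)
    assert np.linalg.norm(A @ v) <= op_norm(A) * np.linalg.norm(v) + 1e-12

# Triangle inequality for the operator norm
assert op_norm(A + B) <= op_norm(A) + op_norm(B) + 1e-12
```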
http://mathoverflow.net/questions/22462/what-are-some-examples-of-interesting-uses-of-the-theory-of-combinatorial-species/22564
## What are some examples of interesting uses of the theory of combinatorial species?
This is a question I've asked myself a couple of times before, but its appearance on MO is somewhat motivated by this thread, and sigfpe's comment to Pete Clark's answer.
I've often heard it claimed that combinatorial species are wonderful and prove that category theory is also useful for combinatorics. I'd like to be talked out of my skepticism!
I haven't read Joyal's original 82-page paper on the subject, but browsing a couple of books hasn't helped me see what I'm missing. The Wikipedia page, which is surely an unfair gauge of the theory's depth and uses, reinforces my skepticism more than anything.
As a first step in my increasing appreciation of categorical ideas in fields familiar to me (logic may be next), I'd like to hear about some uses of combinatorial species to prove things in combinatorics.
I'm looking for examples where there is a clear advantage to their use. To someone whose mother tongue is not category theory, it is not helpful to just say that "combinatorial structures are functors, because permuting the elements of a set A gives a permutation of the partial orders on A". This is like expecting baseball analogies to increase a Brazilian guy's understanding of soccer. In fact, if randomly asked on the street, I would sooner use combinatorial reasoning to understand finite categories than use categories of finite sets to understand combinatorics.
Added for clarification: In my (limited) reading of combinatorial species, there is quite a lot going on there that is combinatorial. The point of my question is to understand how the categorical part is helping.
-
+1 for a really well-written question (not just "why X?" or "what is X?" or "someone told me X is cool, please tell me more") which will hopefully admit informative answers. – Yemon Choi Apr 24 2010 at 23:42
This is slightly tangential, but what convinced me even more than André Joyal's paper is Andreas Blass's Seven Trees in One, which is a perfect illustration of how categorical thinking can lead to surprising combinatorial insight - ams.org/mathscinet-getitem?mr=1354064 – François G. Dorais♦ Apr 24 2010 at 23:53
I've been told that the combinatorial interpretation of the composition of generating functions was only made fully rigorous by species theory, but I can't actually back up that claim. – Qiaochu Yuan Apr 25 2010 at 3:17
@Yemon: I wonder if we can make this a sample question for how to ask this sort of question on MO. – Harry Gindi Apr 25 2010 at 3:18
## 8 Answers
Composition of species is closely related to the composition of symmetric collections of vector spaces ("S-modules"), which is a remarkable example of a monoidal category everyone who had ever encountered operads necessarily used. Applying ideas coming from this monoidal category interpretation has various consequences for combinatorics as well. For example, look at papers of Bruno Vallette on partition posets (here and here): I believe that already the description of the $S_n$ action on the top homology of the usual partition lattice was hard to explain from the combinatorics point of view - and for many other lattices would be impossible without the Koszul duality viewpoint.
-
That's very interesting, Vladimir, thank you. Something like this is precisely what I was looking for: an application to "honest" combinatorics, like posets, and one where CT displayed a clear advantage. If MO will forgive an opinion, an uncomfortable proportion of the applications I had previously witnessed of CT to subjects S amounted to setting up categories in S, then proceeding to ask CT-questions inside of S, without regard to what the S-questions were. – Pietro KC Apr 25 2010 at 23:50
I'm afraid I'm currently on the wrong side of Gowers's cohomology divide to properly appreciate what is being said about "top homology". But I'll try to understand at least one of the papers you mentioned and hopefully be the wiser for it. – Pietro KC Apr 25 2010 at 23:52
Pietro, for an introduction to this area written by combinatorics people I suggest you have a look at the paper "Introduction to Cohen-Macaulay posets" by Bjorner, Garsia and Stanley. The reasons to study the group action on top homology of a CM poset are, for example, briefly discussed in Sec. 6b of that paper. – Vladimir Dotsenko Apr 26 2010 at 8:11
I just saw this comment. Thanks for the pointer, Vladimir. Just a heads-up for anyone else reading this: it's much easier to find the paper if you replace "posets" with "partially ordered sets" in a Google search! – Pietro KC May 16 2010 at 1:48
Let's start from the beginning. The main textbook on species is this one by Bergeron, Labelle, and Leroux, all major experts in the field. Even if you don't want to read the book, read the introduction by G.-C. Rota (which is interesting, enlightening, very short, and downloadable). In there, Rota writes:
"I dare make a prediction on the future acceptance of this book. At first, the old fogies will pretend the book does not exist. This pretense will last until sufficiently many younger combinatorialists publish papers in which interesting problems are solved using the theory of species. Eventually, a major problem will be solved in the language of species, and from that time on everyone will have to take notice."
It has been 13 years since these words were written (almost to the day), so it is perhaps time to revisit this prediction. The "major problem" part clearly did not work out. One can argue that it's too soon to judge. Maybe. Maybe not. The first part, on "sufficiently many younger combinatorialists", is more interesting and probably arguable. There are sufficiently many people using and referencing the book - it has over 300 citations on GoogleScholar. And if you look at these citations, it becomes clear that the book has an extended influence over a large range of fields - the language and philosophy of species are clearly useful.
On the other hand, in comparison with the "competition", the theory of species is clearly not doing so well. Goulden & Jackson's "Combinatorial Enumeration" and Stanley's "Enumerative Combinatorics" have been cited about 1,000 times each (yes, both are older, but still). If you go a bit further away from the field, Alon & Spencer's "Probabilistic Method" has been cited over 3,000 times...
My conclusions: the answer to your question is both Yes and No. The theory of species is clearly useful, but more like a good language to use, or a guiding principle of which roads to take and which to stay clear of. When it comes to explicitly stated "practical" problems, people seem to prefer more directly applicable tools. I would compare this phenomenon to the influence of complexity theory on enumerative combinatorics: whatever you are enumerating, even if your problems are non-algorithmic and in a very classical combinatorial setting, it is still useful to know what is #P-completeness, simply because this gives you a different point of view on the objects you are enumerating, and sometimes it can also save you a bit of time by suggesting that the problem might be too general and thus have no explicit solution.
-
Also, I read Rota's intro before asking the question. I plan to look at this book on Monday. I must say I don't find this kind of generality terribly convincing. Defenses of "new mathematics", without caveats, fail to take into account the limitless possibilities of mathematics, and our finite lifetimes. Of course one can't expect Rota to go into examples in a one-page introduction! Coming to MO, I was half-expecting someone to post a concrete, pretty application that made use of a lot of categorical ideas. Anyway, thanks again for your great answer! – Pietro KC Apr 25 2010 at 7:27
If we're counting Google Scholar citations, Flajolet and Sedgewick's Analytic Combinatorics, published fifteen months ago, has 526. – Michael Lugo Apr 25 2010 at 13:38
@Pietro No, I don't mean a close analogy with complexity - just on the level of heuristic and occasional formal application. I wanted to make this comparison since I think CS ideas are by far much better known now, and often taken for granted. When you realize that many CS ideas are less than 30 years old, you can see a profound but often very informal influence they had over our thinking. I think comb-sp influence is often just as informal, but on a much smaller scale, of course. – Igor Pak Apr 25 2010 at 19:19
@Michael Right, this is actually another very good "competitor"... But "15 months ago" is a bit misleading, I think. Versions of the book have been available on the web for maybe 10 years. Speaking of citations and to follow up on my CS comparisons, Garey & Johnson's 1979 book has been cited over 35,000 times! – Igor Pak Apr 25 2010 at 19:28
Some chapters of Flajolet-Sedgewick were already available in 1994 (as they were writing them, I know Philippe would put them up on his web page, even back then.) [I was hosted in Flajolet's group 1993-1996] – Jacques Carette Apr 26 2010 at 11:57
Before I learned about species I didn't understand why you sometimes use exponential generating functions and sometimes ordinary ones. Now I understand: for species you should use exponential generating functions!
Noah, I myself find gf's to be somewhat magical objects. It certainly never occurred to me to string out a sequence of numbers in a power series, until I saw it done. I guess this would have been a more natural thing to try for Euler. But knowing that they are worthy of study, I find the series of developments up to exponential gf's natural. It is sort of what comes out if you try to make the product of gf's mean something nice, and you get added benefits in the end. – Pietro KC Apr 25 2010 at 8:32
I found that Stanley's books give a very good understanding and motivation for different gf's, including "denominators different from n!". Perhaps I'll glean a different perspective on Monday. Thanks for your input! – Pietro KC Apr 25 2010 at 8:33
@Noah: my impression is to use exponential generating functions when objects are labeled (e.g. labeled trees, permutations, set partitions,...) and ordinary generating functions for indistinguishable objects (unlabeled trees, number partitions,...) This impression came from a mixture of Stanley's book/classes and also the theory of species. – Patricia Hersh Oct 22 at 17:39
First of all I need to say that I know nothing about the achievements of category theory. So far, I couldn't get myself to discover any theorem that proves something in, say, combinatorics, using category theory in an essential way. (Hints were given in some answers to this question, but I didn't have time to follow them yet.)
Furthermore, I have to admit that I don't know enough of the history, so I cannot claim that the following items are really achievements of a "categorical" point of view on combinatorial objects. One may argue that one doesn't need categorical language to phrase these concepts, and it seems to me that Bergeron, Labelle and Leroux have consciously avoided it. However, I think the "origin" of the ideas is of "categorical" spirit.
1) I'd say that the concept of equality of ordinary species, and, related to that, their molecular decomposition is something very important in its own right, simply because it's beautiful. I'm not sure whether this concept has been fully exploited yet in a practical sense. Possibly it's hard to exploit because very often we encounter structures which are really unlabelled. I read about the desire to classify objects counted by the Catalan numbers every so often: there is little that species can do here, because their defining equation is algebraic. (I guess this doesn't preclude the existence of an interesting labelled object with isomorphism types being counted by the Catalan numbers; if you know of one, please share!)
2) Tightly connected with the previous item is the very structured way to go about counting under group action. Nicolaas de Bruijn wrote "this kind of enumeration theory is a matter of exposition and organisation of things which are in essence trivial". Here I think that in many cases, especially multivariate species are just the right tool. A concrete example: "how many possibilities are there to put two red, two blue and four green balls into a round and three square boxes?" Yes, it is easy to do that, but with species it is trivial: we want the coefficient of $[R^2B^2G^4]$ in the isomorphism type series corresponding to $E_1(E(R+B+G)) E_3(E(R+B+G))$. Again: I would like to emphasise that species give the problem structure, and it would seem to me that this is at the heart of "categorical thinking", even if no category theory is involved.
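For readers who want to verify the count without species machinery, here is a brute-force enumeration in Python (not from the original answer; the helper names `compositions` and `count_configurations` are ad hoc): place the balls into four labeled boxes and identify placements that differ only by permuting the three square boxes.

```python
def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegative integers."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def count_configurations():
    # Box 0 is the round box; boxes 1-3 are the three interchangeable square boxes.
    # Canonicalize a placement by sorting the square boxes' contents, then count
    # the distinct canonical forms (one per isomorphism type).
    seen = set()
    for red in compositions(2, 4):          # 2 red balls
        for blue in compositions(2, 4):     # 2 blue balls
            for green in compositions(4, 4):  # 4 green balls
                round_box = (red[0], blue[0], green[0])
                squares = tuple(sorted(zip(red[1:], blue[1:], green[1:])))
                seen.add((round_box, squares))
    return len(seen)

print(count_configurations())
```

The same number can be obtained by Burnside's lemma averaged over the $S_3$ action on the square boxes, which is exactly the bookkeeping the isomorphism type series automates.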
3) there is at least one useful operation on species, which comes about very naturally, namely functorial composition.
4) this item is rather a question than an answer: operads have already been mentioned, does somebody know what Monoidal functors, species and Hopf algebras by Marcelo Aguiar and Swapneel Mahajan is about? Maybe that would "really" answer the original question...
One further line of response would again invoke Rota:
"What can you prove with exterior algebra that you cannot prove without it?" Whenever you hear this question raised about some new piece of mathematics, be assured that you are likely to be in the presence of something important. In my time, I have heard it repeated for random variables, Laurent Schwartz' theory of distributions, ideles and Grothendieck's schemes, to mention only a few. A proper retort might be: "You are right. There is nothing in yesterday's mathematics that could not also be proved without it. Exterior algebra is not meant to prove old facts, it is meant to disclose a new world. Disclosing new worlds is as worthwhile a mathematical enterprise as proving old conjectures." (Indiscrete Thoughts, p. 48, Birkhauser, 1997)
For a couple of new worlds made possible by the species concept see:
1) M. Fiore, N. Gambino, M. Hyland and G. Winskel. The cartesian closed bicategory of generalised species of structures. Journal of the London Mathematical Society, 77(2) (2008), 203-220.
2) J. Baez et al. on stuff types (note, they call species 'structure types').
While I'm fond of that Rota quote, I'm not sure it really addresses the original question. The same book, to my slight amusement, later has a brief remark approving of some people disparaging the theory of distributions (quoting Calderon) – Yemon Choi Apr 26 2010 at 8:37
No, it doesn't answer the original question, but it seems to me to be valid to respond to a question by suggesting a larger context. If you're not interested in the larger context, then the response won't be of interest to you. And you're also right to find Rota's utterances problematic. It is very reasonable to ask of something what can be proved with it which can't be proved without it. Of course, Rota only lists concepts which later turned out to be very valuable. There must be a heap of forgotten concepts which turned out to be of little value. – David Corfield Apr 26 2010 at 10:34
I have a particular interest in this issue having looked into, and written about, the reception of the groupoid concept, which went through a period of "what can I do with them which I can't do with mere groups?" interrogation. – David Corfield Apr 26 2010 at 10:36
Pietro, if you haven't done so by now, you really, really ought to read Joyal's paper. (I can't understand why you would express skepticism before you'd even looked at the primary sources!)
If there is a single application of species to be singled out from this wonderful article, it is Joyal's proof of Cayley's theorem. (This proof was highlighted in Proofs from THE BOOK.) But this is only one of the treasures in the paper that await those who take the trouble to read it.
Much of the art of combinatorial thinking (at least in enumerative combinatorics) is knowing how to draw the correct pictures, and the theory of species can be seen as a step toward turning that art into a science, by formalizing directly the operations on structures which are implicitly coded by generating function techniques. In different words, the basic functional operations on exponential generating functions are lifted to functorial operations on species. At some level such insights must have been known to combinatorialists, but the theory of species serves to formalize them in the light of day, and no less a combinatorialist than Zeilberger has found species a significant source of inspiration.
+1; Joyal's proof of Cayley's theorem is one of my favorite proofs. When I first saw it, I think I actually laughed out loud. – Qiaochu Yuan Mar 23 2011 at 1:40
Yes, the proof, once understood, is unforgettable. – Todd Trimble Mar 23 2011 at 1:52
The categorical perspective tells you why egf's have $n!$s in the denominator! A species can be thought of as describing a "graded groupoid," where the grade of degree $n$ is the groupoid consisting of the action of $S_n$ on the corresponding sets. The groupoid cardinality of a finite group $G$ acting on a finite set $X$ is just $\frac{|X|}{|G|}$, so the "graded groupoid cardinality" of a species is precisely its egf.
In cases like the Catalan numbers where the natural generating function is ordinary, what happens is that the action of $S_n$ is free. For example, the species corresponding to Catalan numbers really corresponds to labeled rooted binary trees, and $S_n$ acts on the labels. The resulting quotient counts unlabeled rooted binary trees, so the generating function appears ordinary.
In "labeled rooted binary trees" you would probably add "planar"... – Vladimir Dotsenko Apr 25 2010 at 22:02
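The free-action claim can be checked numerically: the number of planar rooted binary trees with n labeled leaves satisfies the recursion T(n) = sum over k of C(n,k) T(k) T(n-k), since a tree splits into a labeled left subtree and a labeled right subtree, and dividing by n! recovers the Catalan numbers. A sketch (function names are ad hoc):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def labeled_planar_trees(n):
    """Number of planar (ordered) rooted binary trees with n labeled leaves."""
    if n == 1:
        return 1
    # Choose which k labels go to the left subtree, then recurse on both sides.
    return sum(math.comb(n, k) * labeled_planar_trees(k) * labeled_planar_trees(n - k)
               for k in range(1, n))

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

# S_n acts freely on the labels, so dividing by n! recovers the unlabeled count:
for n in range(1, 9):
    assert labeled_planar_trees(n) == catalan(n - 1) * math.factorial(n)
```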
Since the theory of species is also about relabelling, and relabellings form permutation groups, the theory of species is very closely related to the theory of permutation groups. Is the theory of species a part of group theory? No. And if it were, someone should take it out of there and make it a standalone theory.
Imagine that someone wants to write a book containing equations like Part = E(E+), or that amazing proof by Joyal of Cayley's formula. Then the author must write a lot of material on permutations, perhaps leaving the formulas for the last chapter, and confusing the reader: is this book about permutations or about something else?
The functorial definition avoids all the permutation trouble that would otherwise be involved, including the definition of species on empty sets and the notion of copies of the empty set. A simple object like an empty box is not naturally described by the mathematical empty set. In any case, if there are still troubles with empty sets and void permutations, these are less visible in categorical language.
Hence I think the main concern of the authors was not to reload permutation group theory, and this is highly understandable. All the pieces of this mega combinatorial puzzle were already known: Burnside rings, the wreath product, the fixed points of an element, Pólya's polynomials on symmetries, and exponential generating functions. I also think that any presentation of species should somehow emphasize the magic of egf calculus, which is for combinatorics what the 0-and-1 calculus is for true-and-false logic.
Today, with many egf's listed, anyone can observe some "mysterious" relationships between them and build at least some mnemonic meanings that bring order to huge lists of formulas. The theory of species tries to give a scientific basis to this collection of mnemonic meanings. It is as if someone invented classical synthetic geometry after two thousand years of analytic Cartesian geometry and then tried to give it a proper foundation.
Bibliography: http://www.math.sinica.edu.tw/www/file_upload/mayeh/1989Therelationsbetweenpermutation.pdf (Labelle and Yeh, on permutation groups and species), recommended to all species fans like me. I am also watching the talk page on wikipedia: http://en.wikipedia.org/wiki/Talk:Combinatorial_species
http://en.wikipedia.org/wiki/Black%E2%80%93Scholes
# Black–Scholes
The Black–Scholes model [1] or Black–Scholes–Merton is a mathematical model of a financial market containing certain derivative investment instruments. From the model, one can deduce the Black–Scholes formula, which gives the price of European-style options. The formula led to a boom in options trading and legitimised scientifically the activities of the Chicago Board Options Exchange and other options markets around the world.[2] It is widely used by options market participants.[3]:751 Many empirical tests have shown the Black–Scholes price is "fairly close" to the observed prices, although there are well-known discrepancies such as the "option smile".[3]:770–771
The model was first articulated by Fischer Black and Myron Scholes in their 1973 paper, "The Pricing of Options and Corporate Liabilities", published in the Journal of Political Economy. They derived a partial differential equation, now called the Black–Scholes equation, which governs the price of the option over time. The key idea behind the derivation was to hedge perfectly the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk". This hedge is called delta hedging and is the basis of more complicated hedging strategies such as those engaged in by Wall Street investment banks. The hedge implies there is only one right price for the option and it is given by the Black–Scholes formula.
Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the options pricing model and coined the term Black–Scholes options pricing model. Merton and Scholes received the 1997 Nobel Prize in Economics (The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel) for their work. Though ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the Swedish academy.[4]
## Assumptions
The Black–Scholes model of the market for a particular stock makes the following explicit assumptions:
• There is no arbitrage opportunity (i.e., there is no way to make a riskless profit).
• It is possible to borrow and lend cash at a known constant risk-free interest rate.
• It is possible to buy and sell any amount, even fractional, of stock (this includes short selling).
• The above transactions do not incur any fees or costs (i.e., frictionless market).
• The stock price follows a geometric Brownian motion with constant drift and volatility.
• The underlying security does not pay a dividend.[Notes 1]
From these assumptions, Black and Scholes showed that “it is possible to create a hedged position, consisting of a long position in the stock and a short position in the option, whose value will not depend on the price of the stock.”[5]
Several of these assumptions of the original model have been removed in subsequent extensions of the model. Modern versions account for changing interest rates (Merton, 1976)[citation needed], transaction costs and taxes (Ingersoll, 1976)[citation needed], and dividend payout.[6]
## Notation
Let
$S$ be the price of the stock (note the notational inconsistencies discussed below).
$V(S, t)$, the price of a derivative as a function of time and stock price.
$C(S, t)$ the price of a European call option and $P(S, t)$ the price of a European put option.
$K$, the strike of the option.
$r$, the annualized risk-free interest rate, continuously compounded (the force of interest).
$\mu$, the drift rate of $S$, annualized.
$\sigma$, the volatility of the stock's returns; this is the square root of the quadratic variation of the stock's log price process.
$t$, a time in years; we generally use: now=0, expiry=T.
$\Pi$, the value of a portfolio.
Finally we will use $N(x)$ which denotes the standard normal cumulative distribution function,
$N(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{1}{2}z^2}\, dz$
$N'(x)$ which denotes the standard normal probability density function,
$N'(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}$
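Both functions have standard-library expressions via the error function; a minimal Python sketch (Python is used here purely for illustration; the names `norm_cdf` and `norm_pdf` are chosen for this sketch and reused in the examples below):

```python
import math

def norm_cdf(x):
    """Standard normal CDF N(x), written in terms of the error function:
    N(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density N'(x) = exp(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
```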
### Inconsistencies
The reader is warned of the inconsistent notation that appears in this article. Thus the letter $S$ is used as:
1. a constant denoting the current price of the stock
2. a real variable denoting the price at an arbitrary time
3. a random variable denoting the price at maturity
4. a stochastic process denoting the price at an arbitrary time
It is also used in the meaning of (4) with a subscript denoting time, but here the subscript is merely a mnemonic.
In the partial derivatives, the letters in the numerators and denominators are, of course, real variables, and the partial derivatives themselves are, initially, real functions of real variables. But after the substitution of a stochastic process for one of the arguments they become stochastic processes.
The Black–Scholes PDE is, initially, a statement about the stochastic process $S$, but when $S$ is reinterpreted as a real variable, it becomes an ordinary PDE. It is only then that we can ask about its solution.
The parameter $u$ that appears in the discrete-dividend model and the elementary derivation is not the same as the parameter $\mu$ that appears elsewhere in the article. For the relationship between them see Geometric Brownian motion.
## The Black–Scholes equation
Simulated geometric Brownian motions with parameters from market data
As above, the Black–Scholes equation is a partial differential equation, which describes the price of the option over time. The key idea behind the equation is that one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk". This hedge, in turn, implies that there is only one right price for the option, as returned by the Black–Scholes formula given in the next section. The equation is:
$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$
### Derivation
The following derivation is given in Hull's Options, Futures, and Other Derivatives.[7]:287–288 That, in turn, is based on the classic argument in the original Black–Scholes paper.
Per the model assumptions above, the price of the underlying asset (typically a stock) follows a geometric Brownian motion. That is
$\frac{dS}{S} = \mu \,dt + \sigma \,dW\,$
where W is Brownian motion. Note that W, and consequently its infinitesimal increment dW, represents the only source of uncertainty in the price history of the stock. Intuitively, W(t) is a process that "wiggles up and down" in such a random way that its expected change over any time interval is 0. (In addition, its variance over time T is equal to T; see Wiener process: Basic properties); a good discrete analogue for W is a simple random walk. Thus the above equation states that the infinitesimal rate of return on the stock has an expected value of μ dt and a variance of $\sigma^2 dt$.
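The discrete analogue mentioned above can be simulated directly; over a step dt the exact log-normal increment is S * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z) with Z standard normal. A Python sketch (function name is ad hoc, parameters arbitrary):

```python
import math
import random

def simulate_gbm(s0, mu, sigma, T, steps, seed=42):
    """One path of geometric Brownian motion, using the exact log-normal step
    S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z),  Z ~ N(0, 1)."""
    rng = random.Random(seed)
    dt = T / steps
    path = [s0]
    for _ in range(steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One year of daily steps with drift 5% and volatility 20%:
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, T=1.0, steps=252)
```

Because each step multiplies by a strictly positive factor, the simulated price can never go negative, consistent with the log-normal model.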
The payoff of an option $V(S,T)$ at maturity is known. To find its value at an earlier time we need to know how $V$ evolves as a function of $S$ and $t$. By Itō's lemma for two variables we have
$dV = \left(\mu S \frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right)dt + \sigma S \frac{\partial V}{\partial S}\,dW$
Now consider a certain portfolio, called the delta-hedge portfolio, consisting of being short one option and long $\frac{\partial V}{\partial S}$ shares at time $t$. The value of these holdings is
$\Pi = -V + \frac{\partial V}{\partial S}S$
Over the time period $[t,t+\Delta t]$, the total profit or loss from changes in the values of the holdings is:
$\Delta \Pi = -\Delta V + \frac{\partial V}{\partial S}\,\Delta S$
Now discretize the equations for dS/S and dV by replacing differentials with deltas:
$\Delta S = \mu S \,\Delta t + \sigma S\,\Delta W\,$
$\Delta V = \left(\mu S \frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right)\Delta t + \sigma S \frac{\partial V}{\partial S}\,\Delta W$
and appropriately substitute them into the expression for $\Delta \Pi$:
$\Delta \Pi = \left(-\frac{\partial V}{\partial t} - \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right)\Delta t$
Notice that the $\Delta W$ term has vanished. Thus uncertainty has been eliminated and the portfolio is effectively riskless. The rate of return on this portfolio must be equal to the rate of return on any other riskless instrument; otherwise, there would be opportunities for arbitrage. Now assuming the risk-free rate of return is $r$ we must have over the time period $[t,t+\Delta t]$
$r\Pi\,\Delta t = \Delta \Pi$
If we now equate our two formulas for $\Delta\Pi$ we obtain:
$\left(-\frac{\partial V}{\partial t} - \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}\right)\Delta t = r\left(-V + S\frac{\partial V}{\partial S}\right)\Delta t$
Simplifying, we arrive at the celebrated Black–Scholes partial differential equation:
$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$
With the assumptions of the Black–Scholes model, this second order partial differential equation holds for any type of option as long as its price function $V$ is twice differentiable with respect to $S$ and once with respect to $t$. Different pricing formulae for various options will arise from the choice of payoff function at expiry and appropriate boundary conditions.
## Black–Scholes formula
A European call valued using the Black-Scholes pricing equation for varying asset price S and time-to-expiry T. In this particular example, the strike price is set to unity.
The Black–Scholes formula calculates the price of European put and call options. This price is consistent with the Black–Scholes equation as above; this follows since the formula can be obtained by solving the equation for the corresponding terminal and boundary conditions.
The value of a call option for a non-dividend-paying underlying stock in terms of the Black–Scholes parameters is:
$\begin{align} C(S, t) &= N(d_1)S - N(d_2) Ke^{-r(T - t)} \\ d_1 &= \frac{1}{\sigma\sqrt{T - t}}\left[\ln\left(\frac{S}{K}\right) + \left(r + \frac{\sigma^2}{2}\right)(T - t)\right] \\ d_2 &= \frac{1}{\sigma\sqrt{T - t}}\left[\ln\left(\frac{S}{K}\right) + \left(r - \frac{\sigma^2}{2}\right)(T - t)\right] \\ &= d_1 - \sigma\sqrt{T - t} \end{align}$
The price of a corresponding put option based on put-call parity is:
$\begin{align} P(S, t) &= Ke^{-r(T - t)} - S + C(S, t) \\ &= N(-d_2) Ke^{-r(T - t)} - N(-d_1) S \end{align}\,$
For both, as above:
• $N(\cdot)$ is the cumulative distribution function of the standard normal distribution
• $T - t$ is the time to maturity
• $S$ is the spot price of the underlying asset
• $K$ is the strike price
• $r$ is the risk free rate (annual rate, expressed in terms of continuous compounding)
• $\sigma$ is the volatility of returns of the underlying asset
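As a numerical illustration (not part of the original article), the call formula and the put obtained from put-call parity can be coded directly; the parameter values below are an arbitrary at-the-money example:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call; tau = T - t is time to maturity."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return norm_cdf(d1) * S - norm_cdf(d2) * K * math.exp(-r * tau)

def bs_put(S, K, r, sigma, tau):
    """European put via put-call parity: P = C - S + K * exp(-r * tau)."""
    return bs_call(S, K, r, sigma, tau) - S + K * math.exp(-r * tau)

# Example: at-the-money one-year option
c = bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, tau=1.0)
p = bs_put(S=100.0, K=100.0, r=0.05, sigma=0.2, tau=1.0)
```

For S = K = 100, r = 5%, σ = 20% and τ = 1 year these come out near the standard textbook values of about 10.45 for the call and 5.57 for the put.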
### Alternative formulation
Introducing some auxiliary variables allows the formula to be simplified and reformulated in a form that is often more convenient (this is a special case of the Black '76 formula):
$\begin{align} C(F, \tau) &= D \left( N(d_+) F - N(d_-) K \right) \\ d_\pm &= \frac{1}{\sigma\sqrt{\tau}}\left[\ln\left(\frac{F}{K}\right) \pm \frac{1}{2}\sigma^2\tau\right] \\ d_\pm &= d_\mp \pm \sigma\sqrt{\tau} \end{align}$
The auxiliary variables are:
• $\tau = T - t$ is the time to expiry (remaining time, backwards time)
• $D = e^{-r\tau}$ is the discount factor
• $F = e^{r\tau} S = \frac{S}{D}$ is the forward price of the underlying asset, and $S = DF$
with d+ = d1 and d− = d2 to clarify notation.
Given put-call parity, which is expressed in these terms as:
$C - P = D(F - K) = S - D K$
the price of a put option is:
$P(F, \tau) = D \left[ N(-d_-) K - N(-d_+) F \right]$
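A quick check, using nothing beyond the formulas above, that the forward-price parametrization gives exactly the same call price as the spot form (since d+ equals d1 once F = S/D is substituted):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_spot(S, K, r, sigma, tau):
    """Call price in the spot parametrization, C = N(d1) S - N(d2) K e^{-r tau}."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1) * S - norm_cdf(d1 - sigma * math.sqrt(tau)) * K * math.exp(-r * tau)

def call_forward(F, K, sigma, tau, D):
    """Same price in the forward / discount-factor parametrization,
    C = D (N(d+) F - N(d-) K)."""
    d_plus = (math.log(F / K) + 0.5 * sigma ** 2 * tau) / (sigma * math.sqrt(tau))
    d_minus = d_plus - sigma * math.sqrt(tau)
    return D * (norm_cdf(d_plus) * F - norm_cdf(d_minus) * K)

S, K, r, sigma, tau = 100.0, 110.0, 0.03, 0.25, 0.5
D = math.exp(-r * tau)
F = S / D  # forward price of the underlying
assert abs(call_spot(S, K, r, sigma, tau) - call_forward(F, K, sigma, tau, D)) < 1e-12
```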
### Interpretation
The Black–Scholes formula can be interpreted fairly easily, with the main subtlety the interpretation of the $N(d_\pm)$ (and a fortiori $d_\pm$) terms, particularly $d_+$ and why there are two different terms.[8]
The formula can be interpreted by first decomposing a call option into the difference of two binary options: an asset-or-nothing call minus a cash-or-nothing call (long an asset-or-nothing call, short a cash-or-nothing call). A call option exchanges cash for an asset at expiry, while an asset-or-nothing call just yields the asset (with no cash in exchange) and a cash-or-nothing call just yields cash (with no asset in exchange). The Black–Scholes formula is a difference of two terms, and these two terms equal the value of the binary call options. These binary options are much less frequently traded than vanilla call options, but are easier to analyze.
Thus the formula:
$C = D \left[ N(d_+) F - N(d_-) K \right]$
breaks up as:
$C = D N(d_+) F - D N(d_-) K$
where $D N(d_+) F$ is the present value of an asset-or-nothing call and $D N(d_-) K$ is the present value of a cash-or-nothing call. The D factor is for discounting, because the expiration date is in future, and removing it changes present value to future value (value at expiry). Thus $N(d_+) ~ F$ is the future value of an asset-or-nothing call and $N(d_-) ~ K$ is the future value of a cash-or-nothing call. In risk-neutral terms, these are the expected value of the asset and the expected value of the cash in the risk-neutral measure.
The naive, and not quite correct, interpretation of these terms is that $N(d_+) F$ is the probability of the option expiring in the money $N(d_+)$, times the value of the underlying at expiry F, while $N(d_-) K$ is the probability of the option expiring in the money $N(d_-),$ times the value of the cash at expiry K. This is obviously incorrect, as either both binaries expire in the money or both expire out of the money (either cash is exchanged for asset or it is not), but the probabilities $N(d_+)$ and $N(d_-)$ are not equal. In fact, $d_\pm$ can be interpreted as measures of moneyness (in standard deviations) and $N(d_\pm)$ as probabilities of expiring ITM (percent moneyness), in the respective numéraire, as discussed below. Simply put, the interpretation of the cash option, $N(d_-) K$, is correct, as the value of the cash is independent of movements of the underlying, and thus can be interpreted as a simple product of "probability times value", while the $N(d_+) F$ is more complicated, as the probability of expiring in the money and the value of the asset at expiry are not independent.[8] More precisely, the value of the asset at expiry is variable in terms of cash, but is constant in terms of the asset itself (a fixed quantity of the asset), and thus these quantities are independent if one changes numéraire to the asset rather than cash.
If one uses spot S instead of forward F, in $d_\pm$ instead of the $\frac{1}{2}\sigma^2$ term there is $\left(r \pm \frac{1}{2}\sigma^2\right)\tau,$ which can be interpreted as a drift factor (in the risk-neutral measure for appropriate numéraire). The use of d− for moneyness rather than the standardized moneyness $m = \frac{1}{\sigma\sqrt{\tau}}\ln\left(\frac{F}{K}\right)$ – in other words, the reason for the $\frac{1}{2}\sigma^2$ factor – is due to the difference between the median and mean of the log-normal distribution; it is the same factor as in Itō's lemma applied to geometric Brownian motion. In addition, another way to see that the naive interpretation is incorrect is that replacing N(d+) by N(d−) in the formula yields a negative value for out-of-the-money call options.[8]:6
In detail, the terms $N(d_1), N(d_2)$ are the probabilities of the option expiring in-the-money under the equivalent exponential martingale probability measure (numéraire=stock) and the equivalent martingale probability measure (numéraire=risk free asset), respectively.[8] The risk neutral probability density for the stock price $S_T \in (0, \infty)$ is
$p(S, T) = \frac{N^\prime [d_2(S_T)]}{S_T \sigma\sqrt{T}}$
where $d_2 = d_2(K)$ is defined as above.
Specifically, $N(d_2)$ is the probability that the call will be exercised provided one assumes that the asset drift is the risk-free rate. $N(d_1)$, however, does not lend itself to a simple probability interpretation. $SN(d_1)$ is correctly interpreted as the present value, using the risk-free interest rate, of the expected asset price at expiration, given that the asset price at expiration is above the exercise price.[9] For related discussion – and graphical representation – see section "Interpretation" under Datar–Mathews method for real option valuation.
The equivalent martingale probability measure is also called the risk-neutral probability measure. Note that both of these are probabilities in a measure theoretic sense, and neither of these is the true probability of expiring in-the-money under the real probability measure. To calculate the probability under the real ("physical") probability measure, additional information is required—the drift term in the physical measure, or equivalently, the market price of risk.
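The probabilistic reading of the two terms can be illustrated by simulation: under the risk-neutral drift r, the fraction of terminal prices above K approaches N(d2), and the discounted average of S_T restricted to that event approaches S N(d1). A sketch (parameter values arbitrary, not from the article):

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

S, K, r, sigma, tau = 100.0, 105.0, 0.05, 0.2, 1.0
sqrt_tau = math.sqrt(tau)
d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt_tau)
d2 = d1 - sigma * sqrt_tau

rng = random.Random(0)
n = 200_000
hits = 0          # paths finishing in the money
asset_leg = 0.0   # accumulates S_T over in-the-money paths
for _ in range(n):
    z = rng.gauss(0.0, 1.0)
    # Terminal price under the RISK-NEUTRAL measure: drift is r, not mu.
    s_T = S * math.exp((r - 0.5 * sigma ** 2) * tau + sigma * sqrt_tau * z)
    if s_T > K:
        hits += 1
        asset_leg += s_T
prob_itm = hits / n
asset_leg *= math.exp(-r * tau) / n  # discounted E[S_T * 1{S_T > K}]

# N(d2): risk-neutral probability of finishing in the money.
print(prob_itm, norm_cdf(d2))
# S N(d1): discounted risk-neutral expectation of S_T on that event.
print(asset_leg, S * norm_cdf(d1))
```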
### Derivation
We now show how to get from the general Black–Scholes PDE to a specific valuation for an option. Consider as an example the Black–Scholes price of a call option, for which the PDE above has boundary conditions
$\begin{align} C(0, t) &= 0\text{ for all }t \\ C(S, t) &\rightarrow S\text{ as }S \rightarrow \infty \\ C(S, T) &= \max\{S - K, 0\} \end{align}$
The last condition gives the value of the option at the time that the option matures. The solution of the PDE gives the value of the option at any earlier time; equivalently, it is the discounted risk-neutral expectation $e^{-r(T-t)}\,\mathbb{E}\left[\max\{S_T - K, 0\}\right]$. To solve the PDE we recognize that it is a Cauchy–Euler equation which can be transformed into a diffusion equation by introducing the change-of-variable transformation
$\begin{align} \tau &= T - t \\ u &= Ce^{r\tau} \\ x &= \ln\left(\frac{S}{K}\right) + \left(r - \frac{1}{2}\sigma^2\right)\tau \end{align}$
Then the Black–Scholes PDE becomes a diffusion equation
$\frac{\partial u}{\partial\tau} = \frac{1}{2}\sigma^{2}\frac{\partial^2 u}{\partial x^2}$
The terminal condition $C(S, T) = \max\{S - K, 0\}$ now becomes an initial condition
$u(x, 0) = u_0(x) \equiv K(e^{\max\{x, 0\}} - 1)$
Using the standard method for solving a diffusion equation we have
$u(x, \tau) = \frac{1}{\sigma\sqrt{2\pi\tau}}\int_{-\infty}^{\infty}{u_0 [y]\exp{\left[-\frac{(x - y)^2}{2\sigma^2 \tau}\right]}}\,dy$
which, after some manipulations, yields
$u(x, \tau) = Ke^{x + \frac{1}{2}\sigma^2 \tau}N(d_1) - KN(d_2)$
where
$\begin{align} d_1 &= \frac{1}{\sigma\sqrt{\tau}} \left[\left(x + \frac{1}{2} \sigma^{2}\tau\right) + \frac{1}{2} \sigma^2 \tau\right] \\ d_2 &= \frac{1}{\sigma\sqrt{\tau}} \left[\left(x + \frac{1}{2} \sigma^{2}\tau\right) - \frac{1}{2} \sigma^2 \tau\right] \end{align}$
Reverting $u, x, \tau$ to the original set of variables yields the above stated solution to the Black–Scholes equation.
#### Other derivations
See also: Martingale pricing
Above we used the method of arbitrage-free pricing ("delta-hedging") to derive the Black–Scholes PDE, and then solved the PDE to get the valuation formula. The Feynman-Kac formula says that the solution to this type of PDE, when discounted appropriately, is actually a martingale. Thus the option price is the expected value of the discounted payoff of the option. Computing the option price via this expectation is the risk neutrality approach and can be done without knowledge of PDEs.[8] Note the expectation of the option payoff is not done under the real world probability measure, but an artificial risk-neutral measure, which differs from the real world measure. For the underlying logic see section "risk neutral valuation" under Rational pricing as well as section "Derivatives pricing: the Q world" under Mathematical finance; for detail, once again, see Hull.[7]:307–309
## The Greeks
"The Greeks" measure the sensitivity to change of the option price under a slight change of a single parameter while holding the other parameters fixed. Formally, they are partial derivatives of the option price with respect to the independent variables (technically, one Greek, gamma, is a partial derivative of another Greek, called delta).
The Greeks are not only important for the mathematical theory of finance, but also for those actively involved in trading. Financial institutions will typically set limits for the Greeks that their traders cannot exceed. Delta is the most important Greek and traders will zero their delta at the end of the day. Gamma and vega are also important, but not as closely monitored.
The Greeks for Black–Scholes are given in closed form below. They can be obtained by straightforward differentiation of the Black–Scholes formula.[10]
| Greek | Calls | Puts |
| --- | --- | --- |
| Delta $\frac{\partial C}{\partial S}$ | $N(d_1)$ | $-N(-d_1) = N(d_1) - 1$ |
| Gamma $\frac{\partial^{2} C}{\partial S^{2}}$ | $\frac{N'(d_1)}{S\sigma\sqrt{T - t}}$ | same as calls |
| Vega $\frac{\partial C}{\partial \sigma}$ | $S N'(d_1) \sqrt{T-t}$ | same as calls |
| Theta $\frac{\partial C}{\partial t}$ | $-\frac{S N'(d_1) \sigma}{2 \sqrt{T - t}} - rKe^{-r(T - t)}N(d_2)$ | $-\frac{S N'(d_1) \sigma}{2 \sqrt{T - t}} + rKe^{-r(T - t)}N(-d_2)$ |
| Rho $\frac{\partial C}{\partial r}$ | $K(T - t)e^{-r(T - t)}N(d_2)$ | $-K(T - t)e^{-r(T - t)}N(-d_2)$ |
Note that the gamma and vega formulas are the same for calls and puts. This can be seen directly from put–call parity, since the difference of a put and a call is a forward, which is linear in S and independent of σ (so the gamma and vega of a forward vanish).
In practice, some sensitivities are usually quoted in scaled-down terms, to match the scale of likely changes in the parameters. For example, rho is often reported multiplied by 10,000 (1bp rate change), vega by 100 (1 vol point change), and theta by 365 or 252 (1 day decay based on either calendar days or trading days per year).
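The closed forms above are straightforward to transcribe, and finite differences of the price give a cheap consistency check. An illustrative sketch (function and variable names are ours, not the article's):

```python
import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def npdf(x):
    """Standard normal density N'(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bs_call(S, K, r, sigma, tau):
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / sq
    return S * N(d1) - K * math.exp(-r * tau) * N(d1 - sq)

def call_greeks(S, K, r, sigma, tau):
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / sq
    d2 = d1 - sq
    return {
        "delta": N(d1),
        "gamma": npdf(d1) / (S * sq),
        "vega": S * npdf(d1) * math.sqrt(tau),
        "theta": -S * npdf(d1) * sigma / (2.0 * math.sqrt(tau))
                 - r * K * math.exp(-r * tau) * N(d2),
        "rho": K * tau * math.exp(-r * tau) * N(d2),
    }

# Cross-check delta and vega against central finite differences of the price.
S, K, r, sigma, tau = 100.0, 100.0, 0.05, 0.2, 1.0
g = call_greeks(S, K, r, sigma, tau)
h = 1e-5
assert abs(g["delta"] - (bs_call(S + h, K, r, sigma, tau) - bs_call(S - h, K, r, sigma, tau)) / (2 * h)) < 1e-6
assert abs(g["vega"] - (bs_call(S, K, r, sigma + h, tau) - bs_call(S, K, r, sigma - h, tau)) / (2 * h)) < 1e-6
```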
## Extensions of the model
The above model can be extended for variable (but deterministic) rates and volatilities. The model may also be used to value European options on instruments paying dividends. In this case, closed-form solutions are available if the dividend is a known proportion of the stock price. American options and options on stocks paying a known cash dividend (in the short term, more realistic than a proportional dividend) are more difficult to value, and a choice of solution techniques is available (for example lattices and grids).
### Instruments paying continuous yield dividends
For options on indexes, it is reasonable to make the simplifying assumption that dividends are paid continuously, and that the dividend amount is proportional to the level of the index.
The dividend payment paid over the time period $[t, t + dt)$ is then modelled as
$qS_t\,dt$
for some constant $q$ (the dividend yield).
Under this formulation the arbitrage-free price implied by the Black–Scholes model can be shown to be
$C(S_0, t) = e^{-r(T - t)}[FN(d_1) - KN(d_2)]\,$
and
$P(S_0, t) = e^{-r(T - t)}[KN(-d_2) - FN(-d_1)]\,$
where now
$F = S_0 e^{(r - q)(T - t)}\,$
is the modified forward price that occurs in the terms $d_1, d_2$:
$d_1 = \frac{1}{\sigma\sqrt{T - t}}\left[\ln\left(\frac{F}{K}\right) + \frac{1}{2}\sigma^2(T - t)\right]$
and
$d_2 = d_1 - \sigma\sqrt{T - t}$
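A quick sketch of the dividend-yield formula above (illustrative; with $q = 0$ it should collapse to the standard no-dividend price, which makes a handy sanity check):

```python
import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_with_yield(S0, K, r, q, sigma, tau):
    """e^{-r tau} [F N(d1) - K N(d2)] with forward F = S0 e^{(r-q) tau}."""
    F = S0 * math.exp((r - q) * tau)
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * tau) / sq
    d2 = d1 - sq
    return math.exp(-r * tau) * (F * N(d1) - K * N(d2))
```

A continuous yield lowers the forward and hence the call value, as one would expect.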
See "Extending the Black Scholes formula" for the adjustment for payouts of the underlying.[11]
### Instruments paying discrete proportional dividends
It is also possible to extend the Black–Scholes framework to options on instruments paying discrete proportional dividends. This is useful when the option is struck on a single stock.
A typical model is to assume that a proportion $\delta$ of the stock price is paid out at pre-determined times $t_1, t_2, \ldots$. The price of the stock is then modelled as
$S_t = S_0(1 - \delta)^{n(t)}e^{ut + \sigma W_t}$
where $n(t)$ is the number of dividends that have been paid by time $t$.
The price of a call option on such a stock is again
$C(S_0, T) = e^{-rT}[FN(d_1) - KN(d_2)]\,$
where now
$F = S_{0}(1 - \delta)^{n(T)}e^{rT}\,$
is the forward price for the dividend paying stock.
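A sketch of this case (illustrative; it assumes $d_1, d_2$ are computed from the modified forward exactly as in the dividend-yield case above, and with $\delta = 0$ it reduces to the no-dividend price):

```python
import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_discrete_div(S0, K, r, sigma, T, delta, n_divs):
    """e^{-rT}[F N(d1) - K N(d2)] with F = S0 (1 - delta)^{n(T)} e^{rT}."""
    F = S0 * (1.0 - delta) ** n_divs * math.exp(r * T)
    sq = sigma * math.sqrt(T)
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / sq
    d2 = d1 - sq
    return math.exp(-r * T) * (F * N(d1) - K * N(d2))
```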
### American options
The problem of finding the price of an American option is related to the optimal stopping problem of finding the time to execute the option. Since the American option can be exercised at any time before the expiration date, the Black–Scholes equation becomes an inequality of the form
$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV \leq 0$[12]
With the terminal and (free) boundary conditions $V(S, T) = H(S)$ and $V(S, t) \geq H(S)$, where $H(S)$ denotes the payoff at stock price $S$.
In general this inequality does not have a closed form solution, though an American call with no dividends is equal to a European call and the Roll-Geske-Whaley method provides a solution for an American call with one dividend.[13][14]
Barone-Adesi and Whaley[15] is a further approximation formula. Here, the stochastic differential equation (which is valid for the value of any derivative) is split into two components: the European option value and the early exercise premium. With some assumptions, a quadratic equation that approximates the solution for the latter is then obtained. This solution involves finding the critical value, $S^*$, such that one is indifferent between early exercise and holding to maturity.[16][17]
Bjerksund and Stensland[18] provide an approximation based on an exercise strategy corresponding to a trigger price. Here, if the underlying asset price is greater than or equal to the trigger price it is optimal to exercise, and the value must equal $S - X$, otherwise the option "boils down to: (i) a European up-and-out call option… and (ii) a rebate that is received at the knock-out date if the option is knocked out prior to the maturity date." The formula is readily modified for the valuation of a put option, using put call parity. This approximation is computationally inexpensive and the method is fast, with evidence indicating that the approximation may be more accurate in pricing long dated options than Barone-Adesi and Whaley.[19]
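Since no closed form exists in general, a lattice method (see the binomial options model under "See also") is the usual workhorse rather than the approximations above. A minimal Cox–Ross–Rubinstein sketch for an American put (illustrative and unoptimized; not one of the methods cited here):

```python
import math

def american_put_crr(S, K, r, sigma, T, steps=500):
    """Cox-Ross-Rubinstein binomial tree with early exercise at every node."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
    # option values at maturity
    values = [max(K - S * u ** j * d ** (steps - j), 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):    # roll back through the tree
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            values[j] = max(cont, K - S * u ** j * d ** (i - j))
    return values[0]
```

Because early exercise is checked at every node, the result is bounded below by the European put value obtained from put–call parity.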
## Black–Scholes in practice
The normality assumption of the Black–Scholes model does not capture extreme movements such as stock market crashes.
The Black–Scholes model disagrees with reality in a number of ways, some significant. It is widely employed as a useful approximation, but proper application requires understanding its limitations – blindly following the model exposes the user to unexpected risk.
Among the most significant limitations are:
• the underestimation of extreme moves, yielding tail risk, which can be hedged with out-of-the-money options;
• the assumption of instant, cost-less trading, yielding liquidity risk, which is difficult to hedge;
• the assumption of a stationary process, yielding volatility risk, which can be hedged with volatility hedging;
• the assumption of continuous time and continuous trading, yielding gap risk, which can be hedged with Gamma hedging.
In short, while in the Black–Scholes model one can perfectly hedge options by simply Delta hedging, in practice there are many other sources of risk.
Results using the Black–Scholes model differ from real world prices because of the simplifying assumptions of the model. One significant limitation is that in reality security prices do not follow a strict stationary log-normal process, nor is the risk-free interest rate actually known (or constant over time). The variance has been observed to be non-constant, leading to models such as GARCH to model volatility changes. Pricing discrepancies between empirical prices and the Black–Scholes model have long been observed in options that are far out-of-the-money, corresponding to extreme price changes; such events would be very rare if returns were lognormally distributed, but are observed much more often in practice.
Nevertheless, Black–Scholes pricing is widely used in practice,[3]:751[20] for it is:
• easy to calculate
• a useful approximation, particularly when analyzing the direction in which prices move when crossing critical points
• a robust basis for more refined models
• reversible, as the model's original output, price, can be used as an input and one of the other variables solved for; the implied volatility calculated in this way is often used to quote option prices (that is, as a quoting convention)
The first point is self-evidently useful. The others can be further discussed:
Useful approximation: although volatility is not constant, results from the model are often helpful in setting up hedges in the correct proportions to minimize risk. Even when the results are not completely accurate, they serve as a first approximation to which adjustments can be made.
Basis for more refined models: The Black–Scholes model is robust in that it can be adjusted to deal with some of its failures. Rather than considering some parameters (such as volatility or interest rates) as constant, one considers them as variables, and thus added sources of risk. This is reflected in the Greeks (the change in option value for a change in these parameters, or equivalently the partial derivatives with respect to these variables), and hedging these Greeks mitigates the risk caused by the non-constant nature of these parameters. Other defects cannot be mitigated by modifying the model, however, notably tail risk and liquidity risk, and these are instead managed outside the model, chiefly by minimizing these risks and by stress testing.
Explicit modeling: this feature means that, rather than assuming a volatility a priori and computing prices from it, one can use the model to solve for volatility, which gives the implied volatility of an option at given prices, durations and exercise prices. Solving for volatility over a given set of durations and strike prices one can construct an implied volatility surface. In this application of the Black–Scholes model, a coordinate transformation from the price domain to the volatility domain is obtained. Rather than quoting option prices in terms of dollars per unit (which are hard to compare across strikes and tenors), option prices can thus be quoted in terms of implied volatility, which leads to trading of volatility in option markets.
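A sketch of this inversion (illustrative; bisection is a deliberately simple choice — it works because the call value is monotone increasing in $\sigma$):

```python
import math

def N(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    sq = sigma * math.sqrt(tau)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / sq
    return S * N(d1) - K * math.exp(-r * tau) * N(d1 - sq)

def implied_vol(price, S, K, r, tau, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert the price for sigma by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, tau) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: price an option at sigma = 0.2, then recover the vol.
p = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(implied_vol(p, 100.0, 100.0, 0.05, 1.0), 6))  # prints 0.2
```

In production code a Newton iteration seeded with vega converges faster, but the monotonicity argument is the same.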
### The volatility smile
Main article: Volatility smile
One of the attractive features of the Black–Scholes model is that the parameters in the model (other than the volatility) — the time to maturity, the strike, the risk-free interest rate, and the current underlying price – are unequivocally observable. All other things being equal, an option's theoretical value is a monotonic increasing function of implied volatility.
By computing the implied volatility for traded options with different strikes and maturities, the Black–Scholes model can be tested. If the Black–Scholes model held, then the implied volatility for a particular stock would be the same for all strikes and maturities. In practice, the volatility surface (the 3D graph of implied volatility against strike and maturity) is not flat.
The typical shape of the implied volatility curve for a given maturity depends on the underlying instrument. Equities tend to have skewed curves: compared to at-the-money, implied volatility is substantially higher for low strikes, and slightly lower for high strikes. Currencies tend to have more symmetrical curves, with implied volatility lowest at-the-money, and higher volatilities in both wings. Commodities often have the reverse behavior to equities, with higher implied volatility for higher strikes.
Despite the existence of the volatility smile (and the violation of all the other assumptions of the Black–Scholes model), the Black–Scholes PDE and Black–Scholes formula are still used extensively in practice. A typical approach is to regard the volatility surface as a fact about the market, and use an implied volatility from it in a Black–Scholes valuation model. This has been described as using "the wrong number in the wrong formula to get the right price."[21] This approach also gives usable values for the hedge ratios (the Greeks).
Even when more advanced models are used, traders prefer to think in terms of volatility as it allows them to evaluate and compare options of different maturities, strikes, and so on.
### Valuing bond options
Black–Scholes cannot be applied directly to bond securities because of pull-to-par. As the bond reaches its maturity date, all of the prices involved with the bond become known, thereby decreasing its volatility, and the simple Black–Scholes model does not reflect this process. A large number of extensions to Black–Scholes, beginning with the Black model, have been used to deal with this phenomenon.[22] See Bond option: Valuation.
### Interest-rate curve
In practice, interest rates are not constant – they vary by tenor, giving an interest rate curve which may be interpolated to pick an appropriate rate to use in the Black–Scholes formula. Another consideration is that interest rates vary over time. This volatility may make a significant contribution to the price, especially of long-dated options. This is analogous to the inverse relationship between interest rates and bond prices.
### Short stock rate
It is not free to take a short stock position. Similarly, it may be possible to lend out a long stock position for a small fee. In either case, this can be treated as a continuous dividend for the purposes of a Black–Scholes valuation, provided that there is no glaring asymmetry between the short stock borrowing cost and the long stock lending income.[citation needed]
## Criticism
Espen Gaarder Haug and Nassim Nicholas Taleb argue that the Black–Scholes model merely recast existing widely used models in terms of practically impossible "dynamic hedging" rather than "risk", to make them more compatible with mainstream neoclassical economic theory.[23] They also assert that Boness in 1964 had already published a formula that is "actually identical" to the Black–Scholes call option pricing equation.[24] Similar arguments were made in an earlier paper by Emanuel Derman and Nassim Taleb.[25] In response, Paul Wilmott has defended the model.[20][26]
## See also
• Binomial options model, which is a discrete numerical method for calculating option prices.
• Black model, a variant of the Black–Scholes option pricing model.
• Black Shoals, a financial art piece.
• Brownian model of financial markets
• Financial mathematics, which contains a list of related articles.
• Heat equation, to which the Black–Scholes PDE can be transformed.
• Jump diffusion
• Monte Carlo option model, using simulation in the valuation of options with complicated features.
• Real options analysis
• Stochastic volatility
## Notes
1. Although the original model assumed no dividends, trivial extensions to the model can accommodate a continuous dividend yield factor.
## References
1. "Scholes". Retrieved March 26, 2012.
2. MacKenzie, Donald (2006). An Engine, Not a Camera: How Financial Models Shape Markets. Cambridge, MA: MIT Press. ISBN 0-262-13460-8.
3. ^ a b c Bodie, Zvi; Alex Kane, Alan J. Marcus (2008). Investments (7th ed.). New York: McGraw-Hill/Irwin. ISBN 978-0-07-326967-2.
4. "Nobel prize foundation, 1997 Press release". October 14, 1997. Retrieved March 26, 2012.
5. Black, Fischer; Scholes, Myron. "The Pricing of Options and Corporate Liabilities". Journal of Political Economy 81 (3): 637–654.
6. Merton, Robert. "Theory of Rational Option Pricing". Bell Journal of Economics and Management Science 4 (1): 141–183. doi:10.2307/3003143.
7. ^ a b Hull, John C. (2008). Options, Futures and Other Derivatives (7 ed.). Prentice Hall. ISBN 0-13-505283-1.
8. Nielsen, Lars Tyge (1993). "Understanding N(d1) and N(d2): Risk-Adjusted Probabilities in the Black-Scholes Model". Revue Finance (Journal of the French Finance Association) 14 (1): 95–106. Retrieved December 8, 2012; earlier circulated as INSEAD Working Paper 92/71/FIN (1992).
9. Don Chance (June 3, 2011). "Derivation and Interpretation of the Black–Scholes Model" (PDF). Retrieved March 27, 2012.
10. Although with significant algebra; see, for example, Hong-Yi Chen, Cheng-Few Lee and Weikang Shih (2010). Derivations and Applications of Greek Letters: Review and Integration, Handbook of Quantitative Finance and Risk Management, III:491–503.
11. André Jaun. "The Black-Scholes equation for American options". Retrieved May 5, 2012.
12. Bernt Ødegaard (2003). "Extending the Black Scholes formula". Retrieved May 5, 2012.
13. Don Chance (2008). "Closed-Form American Call Option Pricing: Roll-Geske-Whaley". Retrieved May 16, 2012.
14. Giovanni Barone-Adesi and Robert E Whaley (June 1987). "Efficient analytic approximation of American option values". Journal of Finance. 42 (2): 301–20.
15. Bernt Ødegaard (2003). "A quadratic approximation to American prices due to Barone-Adesi and Whaley". Retrieved June 25, 2012.
16. Don Chance (2008). "Approximation Of American Option Values: Barone-Adesi-Whaley". Retrieved June 25, 2012.
17. ^ a b
18. Riccardo Rebonato (1999). Volatility and correlation in the pricing of equity, FX and interest-rate options. Wiley. ISBN 0-471-89998-4.
19. Kalotay, Andrew (November 1995). "The Problem with Black, Scholes et al." (PDF). Derivatives Strategy.
20. Espen Gaarder Haug and Nassim Nicholas Taleb (2011). Option Traders Use (very) Sophisticated Heuristics, Never the Black–Scholes–Merton Formula. Journal of Economic Behavior and Organization, Vol. 77, No. 2, 2011
21. Boness, A James, 1964, Elements of a theory of stock-option value, Journal of Political Economy, 72, 163-175.
22. Emanuel Derman and Nassim Taleb (2005). The illusions of dynamic replication, Quantitative Finance, Vol. 5, No. 4, August 2005, 323–326
23. See also: Doriana Ruffinno and Jonathan Treussard (2006). Derman and Taleb’s The Illusions of Dynamic Replication: A Comment, WP2006-019, Boston University - Department of Economics.
### Primary references
• Black, Fischer; Myron Scholes (1973). "The Pricing of Options and Corporate Liabilities". Journal of Political Economy 81 (3): 637–654. doi:10.1086/260062. [1] (Black and Scholes' original paper.)
• Merton, Robert C. (1973). "Theory of Rational Option Pricing". Bell Journal of Economics and Management Science (The RAND Corporation) 4 (1): 141–183. doi:10.2307/3003143. JSTOR 3003143. [2]
• Hull, John C. (1997). Options, Futures, and Other Derivatives. Prentice Hall. ISBN 0-13-601589-1.
### Historical and sociological aspects
• Bernstein, Peter (1992). Capital Ideas: The Improbable Origins of Modern Wall Street. The Free Press. ISBN 0-02-903012-9.
• MacKenzie, Donald (2003). "An Equation and its Worlds: Bricolage, Exemplars, Disunity and Performativity in Financial Economics". Social Studies of Science 33 (6): 831–868. doi:10.1177/0306312703336002. [3]
• MacKenzie, Donald; Yuval Millo (2003). "Constructing a Market, Performing Theory: The Historical Sociology of a Financial Derivatives Exchange". American Journal of Sociology 109 (1): 107–145. doi:10.1086/374404. [4]
• MacKenzie, Donald (2006). An Engine, not a Camera: How Financial Models Shape Markets. MIT Press. ISBN 0-262-13460-8.
### Further reading
• Haug, E. G (2007). "Option Pricing and Hedging from Theory to Practice". Derivatives: Models on Models. Wiley. ISBN 978-0-470-01322-9. The book gives a series of historical references supporting the theory that option traders use much more robust hedging and pricing principles than the Black, Scholes and Merton model.
• Triana, Pablo (2009). Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets?. Wiley. ISBN 978-0-470-40675-5. The book takes a critical look at the Black, Scholes and Merton model.
http://mathhelpforum.com/math-software/115790-please-help-me-construct-graph.html
# Thread:
1. ## Please help me construct a graph...
I have no experience with this sort of problem.
I've only ever made simple graphs before. This one requires special computer software.
For now, the problem is simple...
I have derived a formula for a simple model of a star:
$P=P_0 (r/r_0)^a$ ; $T=T_0 (r/r_0)^b$
I need to make a graph of a vs b.
This is where I'm begging you to help me. What (free) program could I use? Could anyone provide instructions on how to graph this with the program?
I thank you very very much in advance!
2. Originally Posted by elninio
I have no experience with this sort of problem.
I've only ever made simple graphs before. This one requires special computer software.
For now, the problem is simple...
I have derived a formula for a simple model of a star:
$P=P_0 (r/r_0)^a$ ; $T=T_0 (r/r_0)^b$
I need to make a graph of a vs b.
This is where I'm begging you to help me. What (free) program could I use? Could anyone provide instructions on how to graph this with the program?
I thank you very very much in advance!
Plot what against what? Are you sure you want a against b?
CB
3. Positive. In these equations, r is radius, and r0, P0, and T0 are all positive constants. This is keeping in mind that the temperature, pressure, and density all decrease with radius.
After I graph it, I'll have to point out three regions of the graph (convectively stable, unstable and non-physical)
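One free route (an editorial suggestion, not from the thread) is Python: classify points of the (a, b) plane and color the result with matplotlib. The sketch below assumes the Schwarzschild criterion with $\nabla = d\ln T/d\ln P = b/a$ and the ideal-monatomic-gas adiabatic value 0.4, and treats $a \geq 0$ or $b \geq 0$ (pressure or temperature rising outward) as non-physical — check both assumptions against your model:

```python
# Classify a point of the (a, b) plane for power-law profiles
# P ~ r^a, T ~ r^b, where dlnT/dlnP = b/a.

def classify(a, b, grad_ad=0.4):
    if a >= 0 or b >= 0:
        return "non-physical"
    return "stable" if b / a < grad_ad else "unstable"

print(classify(-4.0, -1.0))  # prints stable
print(classify(-1.0, -1.0))  # prints unstable
```

With matplotlib installed, evaluating `classify` on a grid and coloring it with `plt.pcolormesh` gives the requested a-vs-b region plot.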
http://mathhelpforum.com/calculus/172484-find-equation-level-surface-through-1-2-1-a.html
# Thread:
1. ## Find the equation of the level surface through (-1,2,-1)
Hi,
The question is: Find the eq. of the level surface through point p = (-1,2,-1) and find the eq. of the straight line in R3 which is normal to this surface at p
Is finding the equation of the level surface to f(x,y,z) = x^2 - y^2 + z^2 at point (-1,2,-1) the same as finding the equation of the tangent plane to the surface at that point?
Thanks for any help!
2. Originally Posted by Ant
The question is: Find the eq. of the level surface through point p = (-1,2,-1) and find the eq. of the straight line in R3 which is normal to this surface at p
Is finding the equation of the level surface to f(x,y,z) = x^2 - y^2 + z^2 at point (-1,2,-1) the same as finding the equation of the tangent plane to the surface at that point?
No it is not. It simply means writing the equation $f(x,y,z)=K$ where $K$ is a constant.
So in this case we have $f(x,y,z)=f(-1,2,-1)$.
3. So here we have $K = -2$ ?
4. I also have that the equation to of the straight line normal to this surface at p is:
$t (2i -4j -2k)$ where t is a scalar variable.
Can anyone say if i'm on the right track here? Thanks
5. Originally Posted by Ant
I also have that the equation to of the straight line normal to this surface at p is:
$t (2i -4j -2k)$ where t is a scalar variable.
Can anyone say if i'm on the right track here? Thanks
You forgot the point: $\langle -1, 2, -1\rangle + t\,\langle -2, -4, -2\rangle$
6. Of course! Thanks again
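An editorial numerical check of the thread's conclusion (not part of the original posts):

```python
# Level surface of f(x, y, z) = x^2 - y^2 + z^2 through p = (-1, 2, -1),
# and the normal-line direction grad f(p).
def f(x, y, z):
    return x * x - y * y + z * z

def grad_f(x, y, z):
    return (2 * x, -2 * y, 2 * z)  # (f_x, f_y, f_z)

p = (-1, 2, -1)
K = f(*p)
print(K)           # prints -2: the level surface is x^2 - y^2 + z^2 = -2
print(grad_f(*p))  # prints (-2, -4, -2): direction of the normal line at p
# Parametric normal line: (x, y, z) = p + t * grad_f(p)
```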
http://math.stackexchange.com/questions/76807/structure-of-k-as-kx-module?answertab=oldest
# Structure of $k$ as $k[X]$-module
Let $k$ be a field. There are several ways to make $k$ into a $k[X]$-module; for example, by fixing $\alpha \in k$ and taking $P(X) \cdot t := P(\alpha)t$ for all $t \in k$. Are there other obvious ways to make $k$ into a $k[X]$-module? Can we actually give a nice descriptions of all possible such structures?
What can be said about the $k[X_1,X_2,\cdots,X_n]$-module structures on $k$ for $n \geq 2$?
-
What can be said about background and context? – Phira Oct 28 '11 at 23:23
It's a logical question to ask... there is not much more to say, really. – Evariste Oct 28 '11 at 23:26
You really don't think that "obvious" does not depend on what you already know? – Phira Oct 28 '11 at 23:28
Of course it does, but... I don't see your point, sorry. – Evariste Oct 28 '11 at 23:29
## 1 Answer
All interesting $k[x]$-module structures on $k$ can be given by a pair $(\phi,\alpha)$ where $\phi$ is an automorphism of $k$ and $\alpha\in k$. Then the module action is determined by:
$$P(x)\cdot t = \phi(P(\phi^{-1}(\alpha)))t$$
If you want $k[x]$ to act on $k$ consistent with the "obvious" action of $k$ on $k$, then $\phi = id_k$ and arbitrary $\alpha$ give you all the possible solutions.
This doesn't give all module structures. Let $k_0$ be the least subfield of $k$ containing $0$ and $1$. So $k_0\cong\mathbb Q$ or $k_0\cong\mathbb Z_p$. Let $T$ be an invertible linear operator on the field $k$ when viewed as a vector space over $k_0$. Then you can get an even more general operator given $(\phi,\alpha,T)$:
$$P(x)\cdot t = T^{-1} \phi(P(\phi^{-1}(\alpha)))(Tt)$$
-
Hm, interesting... – Evariste Oct 28 '11 at 23:41
Yes, I see where your operator $T$ comes from! – Evariste Oct 28 '11 at 23:47
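As an editorial aside (not part of the thread), the evaluation action $P(x)\cdot t = P(\alpha)t$ from the question can be sanity-checked as a module structure over a small field; a sketch over $k = \mathbb{Z}_7$ with $\alpha = 3$ (all helper names are ours):

```python
import random

p, alpha = 7, 3  # field Z_7, evaluation point alpha

def ev(poly, t):
    """Action P(x).t = P(alpha)*t; poly is a coefficient list, low degree first."""
    val = sum(c * pow(alpha, i, p) for i, c in enumerate(poly)) % p
    return (val * t) % p

def add(P, Q):
    n = max(len(P), len(Q))
    return [((P[i] if i < len(P) else 0) + (Q[i] if i < len(Q) else 0)) % p
            for i in range(n)]

def mul(P, Q):
    out = [0] * (len(P) + len(Q) - 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            out[i + j] = (out[i + j] + a * b) % p
    return out

random.seed(0)
for _ in range(100):
    P = [random.randrange(p) for _ in range(3)]
    Q = [random.randrange(p) for _ in range(3)]
    s, t = random.randrange(p), random.randrange(p)
    assert ev(add(P, Q), t) == (ev(P, t) + ev(Q, t)) % p    # (P+Q).t = P.t + Q.t
    assert ev(mul(P, Q), t) == ev(P, ev(Q, t))              # (PQ).t = P.(Q.t)
    assert ev(P, (s + t) % p) == (ev(P, s) + ev(P, t)) % p  # P.(s+t) = P.s + P.t
```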
http://mathoverflow.net/questions/64750/tangent-and-cotangent-bundle/64766
## tangent and cotangent bundle
Hi, I am reading "Introduction to Symplectic Topology" by McDuff and Salamon. At some point I can't go further. My question is: let $(M,g)$ be a Riemannian manifold and consider the cotangent bundle $T^{\star}M$. How can one show that there is a bundle isomorphism $\varphi : TM \oplus T^{\star}M \rightarrow T(T^{\star}M)$, s.t. $d\pi \circ \varphi$ is the identity on $TM$ and it preserves the symplectic forms? In the book there is a hint given, saying that one can choose $\varphi$ via a connection on the cotangent bundle.
-
What is $T^M$?? – Olivier Bégassat May 12 2011 at 6:18
Sorry I did some mistake. Its the cotangent bundle. Now I have changed it. – marco May 12 2011 at 7:27
The proposed isomorphism can't be right. Let $M$ be $n$-dimensional; then $\mathrm T M$ is rank-$n$, as is $\mathrm T^\ast M$, so $\mathrm T \oplus \mathrm T^\ast$ is rank $n+n$, so the total space of $\mathrm TM \oplus_M \mathrm T^\ast M$ has dimension $3n$. On the other hand, the total space of $\mathrm T(\mathrm T^\ast M)$ has dimension $2(2n) = 4n > 3n$ if $n\neq 0$. I agree that there is a map, however. (Side remark, at the risk of being a little rude: please correct spelling, capitalization, and punctuation; MO should be more professional than emails.) – Theo Johnson-Freyd May 12 2011 at 16:28
## 2 Answers
Your Maths does not add up. Let $n$ be the dimension of $M$. The LHS is a vector bundle of dimension $2n$ on $M$. The RHS is a vector bundle of dimension $2n$ on $T^*M$. If you just consider it as a bundle on $M$, its dimension is $3n$, so no iso possible.
-
But is it possible to decompose $T(T^{\star}M)$ in such a way ? – marco May 12 2011 at 10:47
I guess what you want to prove is the following: for a given vector bundle $E$ over $M$ there is a canonical subbundle, the vertical bundle $\mathrm{Ver}(E) = \ker T\pi \subseteq TE$ of the tangent bundle of the total space, where $\pi\colon E \longrightarrow M$ is the bundle projection. Then the choice of a connection gives a complementary subbundle $\mathrm{Hor}(E)$, the horizontal bundle. Depending on your favorite definition of a connection, this is just the definition ;) Now what is true is that the horizontal bundle is isomorphic to the pull-back bundle $\pi^\sharp TM$ of the tangent bundle of the base $M$. This is intrinsically defined as a vector bundle over $E$ but it is not a subbundle of $TE$ in an intrinsic way. Here you need the connection. But then you have $\pi^\sharp TM \oplus \ker T\pi \cong TE$.
Now you can unwind this for $E = T^*M$ and get your result (stated correctly). I hope that helps.
EDIT: for $T^*M$ one then has a canonical pairing of the horizontal and vertical spaces at every point. This you can either symmetrize to get a pseudo-riemannian metric or antisymmetrize to get a symplectic form (guess which). Interestingly enough, the metric depends on the connection while the antisymmetric form does not (you may need torsion-free, I forgot). Also as a side remark: the Laplacian (better d'Alembertian) of the metric plays an important role in quantization theory when passing from standard to Weyl ordering on a cotangent bundle.
You can find some background info in the book of Yano&Ishihara or, if you prefer german, in Sect 5.4 of my book ;)
-
http://physics.stackexchange.com/questions/16339/force-as-gradient-of-scalar-potential-energy
# Force as gradient of scalar potential energy
My text book reads
If a particle is acted upon by forces which are conservative; that is, if the forces are derivable from a scalar potential energy function in the manner $F=-\nabla V$.
I was just wondering what the criteria may be for a force to be expressible as the negative gradient of a scalar potential energy, and HOW DO WE PROVE IT?
-
## 3 Answers
Your Question all but includes the right search term for an Answer from Wikipedia, "Conservative Forces", which gets you to http://en.wikipedia.org/wiki/Conservative_Forces. There's even what you ask for, a proof. There's also another link to http://en.wikipedia.org/wiki/Conservative_vector_field, which gives some quite good visualizations that will probably help. Loosely, there mustn't be any vortices in the force field for there to be a scalar potential energy $\phi(x)$ that generates the force vector field as $-\nabla\phi(x)$.
-
The force field indeed must be conservative, and this is the criterion for your ability to express force in terms of potential energy. To test this, let a force field (in 2D for simplicity) be given by $$F=f\,\hat{i}+g\,\hat{j}$$ where $f$ and $g$ are functions of $x$ and $y$; then the partial derivative of $f$ with respect to $y$ must be equal to the partial derivative of $g$ with respect to $x$. This is a general result in mathematics for testing for conservative vector fields (on a simply connected domain).
In the case of a simple Newtonian gravitational field (attractive, so directed toward the origin), $$f=-GmMx/(x^2+y^2)^{3/2}$$
Since $F=-\nabla E_p$, integrating $-f$ with respect to $x$ yields $-GmM/(x^2+y^2)^{1/2}+C(y)$, where $C(y)$ is the constant of integration. Differentiating this result with respect to $y$ yields $GmMy/(x^2+y^2)^{3/2}+dC(y)/dy$. Equating this to $-g$ gives $dC(y)/dy=0$ and therefore $C(y)=C$, where $C$ is a constant independent of $x$ or $y$.
Hence the potential energy is given by $$Ep=-GmM/(x^2+y^2)^{1/2}+C$$
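As a quick check of the argument above, here is a small SymPy sketch (my own, not from the answer) that verifies the cross-partial test and recovers the potential energy, taking the attractive sign convention $f=-GmMx/(x^2+y^2)^{3/2}$ and $g=-GmMy/(x^2+y^2)^{3/2}$:

```python
import sympy as sp

x, y, G, m, M = sp.symbols('x y G m M', positive=True)
r = sp.sqrt(x**2 + y**2)

# Attractive gravitational force components (pointing toward the origin)
f = -G * m * M * x / r**3
g = -G * m * M * y / r**3

# Cross-partial test for a conservative 2D field: df/dy must equal dg/dx
assert sp.simplify(sp.diff(f, y) - sp.diff(g, x)) == 0

# Recover the potential energy via Ep = -integral of f dx (C(y) is 0 here)
Ep = -sp.integrate(f, x)
assert sp.simplify(Ep - (-G * m * M / r)) == 0   # Ep = -GmM/r

# Verify F = -grad(Ep) component by component
assert sp.simplify(-sp.diff(Ep, x) - f) == 0
assert sp.simplify(-sp.diff(Ep, y) - g) == 0
```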
-
That is the definition of a force. In my opinion, we assume Energy, Space, Momentum and Time as fundamental and build the theory of mechanics based on these quantities.
-
No, that is not the definition of force. Only certain forces can be represented as the gradient of a potential. That's what the question was about. – Mark Eichenlaub Oct 30 '11 at 10:51
http://mathematica.stackexchange.com/questions/7902/matrix-multiplication-in-block-form-symbolic-calculation-by-mathematica
# Matrix multiplication in Block Form symbolic calculation by Mathematica
I have a problem which requires taking the product of two $10\times10$ matrices. I would like to do it by considering both matrices as $5\times5$ matrices such that each entry of both matrices is actually a $2\times2$ matrix; moreover, there are variables, so this is a symbolic calculation. (Side remark: matrix multiplication is not a commutative operation, i.e. $AB\neq BA$ in general.) Does anyone know how to perform this block-form multiplication in Mathematica, or using any other software?
-
Welcome! I added $\LaTeX$ markup to your question to make it read better. In the future, you can do the same by wrapping inline equations with `$`, and using `$$` to provide block-style equations. If you have any questions about the operation of this site, please check out the FAQ, and the meta site. – rcollyer Jul 5 '12 at 2:17
Got you! Thanks! – Danny Jul 5 '12 at 2:54
Take a look at the (free) NCAlgebra package, see e.g. here: mathematica.stackexchange.com/questions/8/… You'll need it when doing more general operation on the block form of matrices. I'll post an answer if/when I have the time. – Szabolcs Jul 5 '12 at 7:27
@Szabolcs: Thanks! I will bear this package in mind. Right now I am trying the function blockMultiply – Danny Jul 5 '12 at 16:39
## 2 Answers
If I understand correctly, the main issue in your question is how to make a `Dot` product of two block matrices such that the result preserves the order of the factors in the resulting block matrix, because the entries are non-commuting matrices themselves.
The problem is that the result of `Dot` has multiplications of the matrix components in it, and this corresponds to the operation `Times` which is orderless. `Dot` preserves the order of its factors, but `Times` always sorts its factors lexicographically, i.e., in a standard sorting order so that `z*b` becomes `b*z` and `m[2]*m[1]` becomes `m[1]*m[2]`, etc.
If one were to use the `Dot` function for the matrix multiplication, one would have to track in what way the order of the input factors is changed when brought into the lexicographical order of `Times`, and then undo that sorting.
Edit
As Mr. Wizard pointed out, it is best to use a generalization of `Dot` that doesn't apply `Times` to the components at all:
````blockMultiply[mats__] := Inner[Dot, mats]
````
End edit
To show how this works, let's first define two $5\times 5$ matrices called `smallMatrix[[1]]` and `smallMatrix[[2]]`. I define them in one go, and show them afterwards:
````smallMatrix = Table[Array[{"A", "B"}[[i]], {5, 5}], {i, 2}];
MatrixForm[smallMatrix[[1]]]
````
$\left( \begin{array}{ccccc} \text{A}(1,1) & \text{A}(1,2) & \text{A}(1,3) & \text{A}(1,4) & \text{A}(1,5) \\ \text{A}(2,1) & \text{A}(2,2) & \text{A}(2,3) & \text{A}(2,4) & \text{A}(2,5) \\ \text{A}(3,1) & \text{A}(3,2) & \text{A}(3,3) & \text{A}(3,4) & \text{A}(3,5) \\ \text{A}(4,1) & \text{A}(4,2) & \text{A}(4,3) & \text{A}(4,4) & \text{A}(4,5) \\ \text{A}(5,1) & \text{A}(5,2) & \text{A}(5,3) & \text{A}(5,4) & \text{A}(5,5) \\ \end{array} \right)$
````MatrixForm[smallMatrix[[2]]]
````
$\left( \begin{array}{ccccc} \text{B}(1,1) & \text{B}(1,2) & \text{B}(1,3) & \text{B}(1,4) & \text{B}(1,5) \\ \text{B}(2,1) & \text{B}(2,2) & \text{B}(2,3) & \text{B}(2,4) & \text{B}(2,5) \\ \text{B}(3,1) & \text{B}(3,2) & \text{B}(3,3) & \text{B}(3,4) & \text{B}(3,5) \\ \text{B}(4,1) & \text{B}(4,2) & \text{B}(4,3) & \text{B}(4,4) & \text{B}(4,5) \\ \text{B}(5,1) & \text{B}(5,2) & \text{B}(5,3) & \text{B}(5,4) & \text{B}(5,5) \\ \end{array} \right)$
Now I multiply these matrices under the assumption that each of their entries is itself a (so far unspecified) matrix:
````productAB = blockMultiply[smallMatrix[[1]], smallMatrix[[2]]];
productBA = blockMultiply[smallMatrix[[2]], smallMatrix[[1]]];
````
If you inspect these result matrices you'll see that the order of the factors is correct, and each element is a sum of (matrix) `Dot` products. The results are too large to display here.
Another way to check that this works is to insert an actual pair of $10\times 10$ matrices by writing them as block matrices. I first define the big two-dimensional matrices and then use `Partition` to subdivide them into blocks of size $2\times 2$:
````bigMatrix = Table[Array[{"a", "b"}[[i]], {10, 10}], {i, 2}];
blockMatrix = Table[Partition[bigMatrix[[i]], {2, 2}], {i, 2}];
MatrixForm[blockMatrix[[1]]]
````
and similarly for `MatrixForm[blockMatrix[[2]]]`.
Now we use these big matrices in the results obtained above with `blockMultiply`:
````AB = Flatten[
productAB /. Thread[
Flatten[smallMatrix] -> Flatten[blockMatrix, {{1, 2, 3}}]
],
{{1, 3}, {2, 4}}];
BA = Flatten[
productBA /. Thread[
Flatten[smallMatrix] -> Flatten[blockMatrix, {{1, 2, 3}}]
],
{{1, 3}, {2, 4}}];
FullSimplify[AB == bigMatrix[[1]].bigMatrix[[2]]]
(* ==> True *)
FullSimplify[BA == bigMatrix[[2]].bigMatrix[[1]]]
(* ==> True *)
````
This says that the block multiplications yield the same result as doing the matrix products directly (as in `bigMatrix[[1]].bigMatrix[[2]]`). And the order of the multiplications is correctly captured.
The `Flatten` commands appearing in the definition of `AB` and `BA` (for the two different orders of the factors) are perhaps a little hard to see through. With a command like `Flatten[blockMatrix, {{1, 2, 3}}]` one gets a list in which the sub-blocks of the `blockMatrix` appear flattened, so that they can be used in the `Thread` of the `->` which replaces the small symbolic block matrices by the blocks of the big matrix. The `Flatten[ ..., {{1, 3}, {2, 4}}]` removes the block matrix level and creates a $10\times 10$ matrix from the $2\times 2$ blocks.
The function `blockMultiply` is intended to work for any number of arguments in a matrix multiplication, and also for any dimension as long as all adjacent factor share a common dimension as required by `Dot`. So you could also repeat the above tests by splitting up the two matrices in `bigMatrix` into $5\times 5$ blocks, for example.
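For comparison, the same block-multiplication idea can be checked numerically outside Mathematica. The following is a NumPy sketch (helper names like `to_blocks` are mine, not from the answer): it partitions two random $10\times10$ matrices into $2\times2$ blocks, multiplies blockwise with the order-preserving rule $(AB)_{ij}=\sum_k A_{ik}B_{kj}$, and confirms the result agrees with the full product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
B = rng.standard_normal((10, 10))

def to_blocks(M, b):
    """Partition an (n*b) x (n*b) matrix into an n x n grid of b x b blocks."""
    n = M.shape[0] // b
    return M.reshape(n, b, n, b).swapaxes(1, 2)   # shape (n, n, b, b)

def block_multiply(Ab, Bb):
    """Blockwise product: C[i,j] = sum_k Ab[i,k] @ Bb[k,j], order preserved."""
    n = Ab.shape[0]
    Cb = np.zeros_like(Ab)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Cb[i, j] += Ab[i, k] @ Bb[k, j]
    return Cb

def from_blocks(Mb):
    """Reassemble the block grid into an ordinary matrix."""
    n, _, b, _ = Mb.shape
    return Mb.swapaxes(1, 2).reshape(n * b, n * b)

Ab, Bb = to_blocks(A, 2), to_blocks(B, 2)
assert np.allclose(from_blocks(block_multiply(Ab, Bb)), A @ B)
assert np.allclose(from_blocks(block_multiply(Bb, Ab)), B @ A)
```

The same check goes through for $5\times5$ blocks, mirroring the last remark above.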
-
Jens, why not `{a, b} = Table[Array[{"A", "B"}[[i]], {5, 5}], {i, 2}]; Inner[Dot, a, b]`? – Mr.Wizard♦ Jul 5 '12 at 7:02
@Mr.Wizard Because of the lateness of the hour, I guess. Thanks for the simplification! – Jens Jul 5 '12 at 7:19
Thanks for the credit in the edit. IMHO it would be better to leave out the long `blockMultiply` code as I don't think it is instructive. – Mr.Wizard♦ Jul 5 '12 at 7:34
@Mr.Wizard You're right - all those fireworks messed with my brain. – Jens Jul 5 '12 at 14:33
the Pyrotechnic Mitigation Plea -- granted. ;^) – Mr.Wizard♦ Jul 5 '12 at 14:44
You are talking about tensors, right? Such a tensor can be created with e.g.
````example=Array[e,{5,5,2,2}];
example//MatrixForm
````
All you have to do now is to take from the example `e[1,1,1,1]` etc. and replace it with the variables you want to use. The same can be done with a second exampleTensor, so that you can work with both of them (multiply etc.).
I hope this helps.
-
Sorry that my post is ambiguous. This is not about tensor - tensor of two $10\times 10$ matrices would be a $100\times 100$ matrix. What I meant was as Jens pointed out in his first two paragraphs. – Danny Jul 5 '12 at 16:48
No problem! Thanks for the feedback! – partial81 Jul 5 '12 at 18:49
http://physics.stackexchange.com/questions/21710/how-does-such-strange-microscopic-behavior-at-the-atomic-level-quantum-mechanic/21713
# How does such strange microscopic behavior at the atomic level (quantum mechanics) lead to the macroscopic behavior at our level?
So, I'm only a high school student researching quantum physics, and I find it very interesting. However, there's one question that keeps nagging at me in the back of my head. How exactly do odd behaviors like quantum parallelism that occur on the atomic level lead to the behaviors that we consider normal at everyday sizes and scales? That is, what is it about having so many atoms together (classical physics) that makes them behave so very differently from the way a single atom behaves (quantum physics)?
Sorry if it seems like I don't know what I'm talking about... because I may not! So, if there are any misconceptions on my behalf, please tell me so I can actually learn something... :)
Thanks in advance!
-
I can recommend relatively simple sources--- Everett's PhD thesis, reprinted in "The Many Worlds Interpretation of Quantum Mechanics" edited by DeWitt is the original, and still essential. In conjunction with the philosophical discussion in "The Mind's I" by Dennett and Hofstadter, and an article by Douglas Hofstadter on the Many-Worlds interpretation reprinted in "Metamagical Themas" (all these are completely accessible to any high school student versed in Dirac's book), you can understand the whole classical limit. It's mostly simple physics, only thorny philosophy. – Ron Maimon Mar 2 '12 at 3:23
Tangentially, if you want to have a quite humorous account of what the macroscopic world would be like if these effects were visible, read The Adventures of Mr. Tompkins, the author, George Gamow, was a great physicist. – recipriversexclusion Mar 2 '12 at 16:48
## 6 Answers
There is a phenomenon called decoherence in quantum mechanics which is largely responsible for this. Basically (the following is a simplification), all the strange behavior that occurs in QM tends to happen when the wavefunctions of different particles are in phase. Decoherence occurs when the phases are randomized, so there's no special correlation between different particles. In that case, the properties of the different particles tend to just combine the way we'd expect them to classically.
A decent (but very basic) analogy for this would be like having a bunch of identical cars whose drivers all turn their turn signals on at the very same time. The turn signals would be blinking together, so we'd say they are in phase. But on a real road, that's not the case at all; different drivers turn their turn signals on at different, pretty much random times. And besides that, there are many different models of car whose turn signals blink at different rates. For both those reasons, the turn signals on a real road are not in phase. That's kind of like decoherence.
The reason I bring this up is that I've posted an answer about it which you might be interested to read. The gist of that answer is that when you have a small system like a single particle, any interaction makes a big difference to the system's momentum. But the same interaction will make only a little difference to a system which contains a large number of particles with partially uncorrelated momenta, like a measuring device. Now, in the paragraphs above, I talked about phase, whereas my other answer talks about momentum, but the idea involved is similar in both cases.
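A toy numerical illustration of the phase idea (mine, not from the answer): summing $N$ unit amplitudes with equal phases gives an intensity $\left|\sum e^{i\theta}\right|^2 \sim N^2$, while random phases give only $\sim N$, i.e. intensities add classically once coherence is lost:

```python
import cmath
import random

random.seed(0)
N = 10_000

# All phases aligned: amplitudes add, so intensity |sum|^2 = N^2 (coherent)
coherent = abs(sum(cmath.exp(1j * 0.0) for _ in range(N))) ** 2

# Random phases: the cross terms average away, intensity ~ N (incoherent)
incoherent = abs(sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
                     for _ in range(N))) ** 2

print(coherent / N**2)   # exactly 1.0
print(incoherent / N)    # an O(1) number: intensities, not amplitudes, add
```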
-
Thank you so much for your answer and for your haste. Your simplification is greatly appreciated. I did research decoherence as it plays a big part in my narrower research topic of quantum mechanics and had an idea it had to do with the answer to my question. Thanks for clarifying! – Brandon Mar 2 '12 at 3:25
Brandon, the simple truth is that you have just asked one of the hardest and least understood questions in all of physics. So, don't feel bad if you don't understand it very well, because, er... no one else really does either?
It's not that we can't model this stuff mathematically. Shoot, Richard Feynman's version of something called Quantum Electrodynamics (QED), which is sort of quantum mechanics merged with Einstein's theory of special relativity, is arguably the most accurately predictive theory in all of physics. (Or was; I haven't kept track lately.) The problem is that whenever we use such precise theories, we can't help but toss in a bit of everyday life in the mix, sort of like a salad in which we mix things more by taste than by precise rules.
So, for example, Feynman's QED theory is incredibly precise in predicting how an electron in one place and state (e.g., velocity) gets to some other place and state. However, to set up the electron in a real experiment -- to create the location and state you are describing in the QED problem setup -- you must use real-world equipment. And that is the fly in the ointment (or is it the secret ingredient in that salad?): The real-world setup for any physics problem is unavoidably embedded at some points in everyday physics concepts like "ordinary," or irreversible time. Once you toss something like ordinary time into the mix, all the nicely reversible properties of time at the atomic scale no longer apply, at least not for the experiment as a whole. Or stated a bit differently: Everyday physics seems to beget more everyday physics. That is the flaw you will find at some level in every single experiment looking at the physics of very tiny scales. It has to be that way, since otherwise how would we as large-scale creatures every find know about the result in the first place?
So, as the amazing physicist John Bell once said while mulling over pretty much the same question you just asked (he could never really answer it; that's how hard it is!), folks who do experimental physicists just sort of develop a "feel" for when you stop applying quantum physics and start applying everyday (or "classical") physics. Time is a very big part of the transition: If time is reversible, it's almost certainly quantum, and if it's not, it's probably better treated as everyday (or classical). Size is less reliable, but for most phenomena at ordinary temperatures, classical physics starts to kick in at roughly the size of a medium-sized molecule, say a buckyball. That metric is very unreliable overall, though, since things as ordinary as a reflection off of a piece of silver are deeply quantum events that cannot be modeled using only classical physics. Shoot, size is a deeply quantum phenomenon, and so is chemistry. Without quantum mechanics stepping in, we'd just be part of some huge big black whole, and so would not be having this conversation.
I'll end by recommending a book: Richard Feynman's "QED: The Strange Theory of Light and Matter." It's paperback, cheap, uses almost no math, yet provides profound and accurate insights into that very precise quantum theory I mentioned above. I won't say it will answer your question, but at least it will present the remarkably non-intuitive features of quantum mechanics about as sharply and starkly as possible.
Good luck!
-
That is, what is it about having so many atoms together (classical physics) that makes them behave so very differently from the way a single atom behaves (quantum physics)?
This difference is still not completely understood, and not for lack of trying. I disagree with Terry that we can model this in a mathematically correct way, and I would even go so far as to say that it cannot be solved by a deeper understanding of the microscopics. David pointed out decoherence, which explains some properties under the condition that there are no special correlations, but in real materials the interesting bits are often caused by those correlations.
We would not have things like magnetism, superconductivity, spintronics and other phenomena without correlations. Part of this area of physics is summed up by P.W. Anderson: "More is different" (Science 177, 4047, 1972).
A first approach to these problems is mean field theory. You model a single particle together with a mean field in which the particle is embedded. At the same time this field is caused by the sum of all other surrounding particles. This simple model explains some parts but often breaks down completely. One modern approach to these phenomena is renormalization group theory, where you first try to model a microscopic system and then try to scale this up repeatedly to understand the macroscopic behaviours.
The correlations are the key reason why quantum effects can be seen in real-life materials and why many particles can behave totally differently than a single atom does.
-
Macroscopic objects have a large enough mass that their uncertainty becomes negligible. Let's take a look at Heisenberg's equation: $$\Delta x\cdot\Delta p\geq \hbar/2$$ $p$ is momentum. If the mass is large, the LHS becomes large, as $\Delta p=m\Delta v$. Planck's constant has an order of $10^{-34}$, so the uncertainties in position and velocity can be pretty small once an observation is made. No uncertainty $\implies$ no quantum effects. The size of the wavefunction is tied down by uncertainty (one can also say that it is tied down by its de Broglie wavelength, which comes out to be nearly the same thing)
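To put numbers on this, here is a small sketch (constants are the CODATA values; the scenario is purely illustrative) comparing the minimum velocity uncertainty for an electron and for a 1 kg ball, each localized to one micrometre:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def min_dv(mass_kg, dx_m):
    """Minimum velocity uncertainty from dx*dp >= hbar/2 with dp = m*dv."""
    return hbar / (2.0 * mass_kg * dx_m)

dx = 1e-6  # localize each object to one micrometre

electron_dv = min_dv(9.109e-31, dx)  # ~ 58 m/s: quantum effects matter
ball_dv = min_dv(1.0, dx)            # ~ 5e-29 m/s: utterly unobservable

print(f"electron: {electron_dv:.2e} m/s, 1 kg ball: {ball_dv:.2e} m/s")
```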
-
Can you clarify the first word of your post? "Mcroscopic" is missing a rather critical vowel! – dotancohen Mar 2 '12 at 9:08
I don't know which level of abstractness and 'deepness' is appropriate, so I will restrict myself to the basics, e. g. "normal" non-relativistic, single-particle quantum mechanics.
In this case the state of the system (e.g. all the data you know about the particle) is described by the wavefunction $\psi$ and the time evolution is given by the Schrödinger equation $i \hbar \frac{\partial}{\partial t} \psi = \hat{H} \psi$, which is more or less the quantum analog of Newton's famous $F = m \cdot a$. So now we ask the question of how we get the classical behaviour as an approximation to the quantum one. This limit is called the "semiclassical approximation" and is often described by $\hbar \rightarrow 0$, which means that all characteristic (energy) scales are big compared to $\hbar$. Inserting the formal ansatz for the wavefunction $\psi = a \cdot e^{iS/\hbar}$ into the Schrödinger equation yields in zeroth order (= dropping all terms with $\hbar$) the Hamilton–Jacobi equation $H(x, \frac{\partial S}{\partial x}) = E$. This equation is a (complicated) reformulation of Newton's law.
So the important point is: If the particles have high enough energy their quantum behavior is nearly equal to the classical one. (This is not only some mathematical trick, but really observed in our labs, e.g. the higher states of the hydrogen atom are only separated by a very small energy and so are nearly continuous instead of being discrete states).
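To make the leading-order step explicit, here is a sketch of the computation for a stationary state in one dimension, assuming $\hat H = -\frac{\hbar^2}{2m}\partial_x^2 + V(x)$ (my notation, not the answerer's):

```latex
\begin{align*}
\psi &= a(x)\, e^{iS(x)/\hbar}
\quad\Longrightarrow\quad
\psi'' = \left[-\frac{(S')^2}{\hbar^2}\,a
  + \frac{i}{\hbar}\bigl(2a'S' + aS''\bigr) + a''\right] e^{iS/\hbar},\\
0 &= -\frac{\hbar^2}{2m}\psi'' + (V-E)\psi
  = \underbrace{\left[\frac{(S')^2}{2m} + V - E\right]}_{\text{order }\hbar^0}
    a\, e^{iS/\hbar} + O(\hbar).
\end{align*}
```

Setting the $\hbar^0$ term to zero gives $\frac{(S')^2}{2m}+V=E$, i.e. $H(x,\partial S/\partial x)=E$, the stationary Hamilton–Jacobi equation.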
-
You asked, what is it about having so many atoms together (classical physics) that makes them behave so very differently from the way a single atom behaves (quantum physics)?
The answer to your question is the law of large numbers. It accounts for the fact that a large aggregate of atoms behaves much more regularly than tiny systems. Since almost everything accessible to our senses is composed of a truly huge number of atoms, typically $>10^{20}$ of them, our senses just notice the mean behavior, which is much more regular than the detailed behavior of individual atoms.
The law of large numbers is the most basic principle of probability theory, and it is the basis of all of statistics. In physics, the discipline of statistical mechanics treats the cooperative effects that follow from the law of large numbers applied to huge collections of atoms. In fact, it usually deals with the so-called thermodynamic limit, which is the idealization that the number of particles is infinite.
In the thermodynamic limit, the probabilistic predictions of the law of large numbers turn into certainties, and the microscopic laws allow one to derive the macroscopic laws of thermodynamics. The latter govern the behavior of macroscopic bodies and fluids, giving rise to the laws governing hydrodynamics, elasticity theory, or phase equilibria. They also give the mass action law governing chemical reactions.
Of course, the thermodynamic limit is an approximation only, but for macroscopic objects an extremely good one. This is why one has to go to tiny scales to discover the less intuitive behavior of quantum systems.
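A quick simulation of the law of large numbers (my own illustration, not part of the answer): the spread of the sample mean of $n$ independent draws shrinks like $1/\sqrt{n}$, so for $n\sim 10^{20}$ atoms the mean behavior is deterministic for all practical purposes:

```python
import random
import statistics

random.seed(1)

def mean_spread(n, trials=2000):
    """Standard deviation of the sample mean of n uniform(0,1) draws."""
    means = [statistics.fmean(random.random() for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

# A single uniform draw has sigma = sqrt(1/12) ~ 0.289; the mean of n draws
# fluctuates by roughly sigma/sqrt(n), vanishing as n grows
for n in (10, 100, 1000):
    print(n, mean_spread(n))
```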
If you want to read more along these lines, look at Part II: Statistical Mechanics of Classical and Quantum Mechanics via Lie algebras.
-
http://math.stackexchange.com/questions/125217/group-theory-isomorphism-g-10-and-mathbbz-10?answertab=oldest
# Group Theory Isomorphism $|G|=10$ and $\mathbb{Z}_{10}$
I think I just didn't get the core of group theory, although it makes sense to me to follow the regular steps to solve problems of group theory. For example: a group of order $10$ is isomorphic to $\mathbb{Z}_{10}$. To prove this, the standard solution suggests that we have to suppose that there are $2$ elements $x,y$, where $x$ has order $5$ and $y$ has order $2$, to begin with. And finally conclude that it's isomorphic to $\mathbb{Z}_{2}\times\mathbb{Z}_{5}$ and then isomorphic to $\mathbb{Z}_{10}$.
I am confused: isn't it obvious that a group of order $10$ is isomorphic to $\mathbb{Z}_{10}$? Both of them have $10$ elements. We can simply project them one-by-one, like $1$ to $x$; $2$ to $e$; $3$ to $y$; and so on.
-
It is not true that a group of order 10 must be cyclic: the group of rigid motions of the regular pentagon has order $10$ and is not abelian. – Arturo Magidin Mar 27 '12 at 21:37
## 3 Answers
It is false that a group of order $10$ must be cyclic.
There are two isomorphism types of groups of order $10$: an abelian group, which is indeed cyclic, and a nonabelian group.
The nonabelian group of order $10$ is the dihedral group of degree $5$ (you may see it denoted as either $D_5$ or $D_{10}$). It can be realized as the group of rigid motions of a regular pentagon. It has presentation: $$\Bigl\langle r,s\;\Bigm|\; r^5 = s^2 = 1, sr=r^{4}s\Bigr\rangle.$$
As for your final paragraph: it is not enough for them to have the same number of elements. For example, $\mathbf{Z}_2\times\mathbf{Z}_2$ and $\mathbf{Z}_4$ both have 4 elements, are both abelian, but they are not isomorphic, because the latter group has only two solutions to $x+x=0$ (namely, $x=0$ and $x=2$), but the former group has four solutions to that equation (every element is a solution). So you don't know ahead of time that every group of order $10$ (or even that every abelian group of order $10$) must be cyclic.
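The counting argument for $\mathbf{Z}_4$ versus $\mathbf{Z}_2\times\mathbf{Z}_2$ can be checked mechanically; here is a small Python sketch (not part of the answer):

```python
from itertools import product

# Solutions of x + x = 0 in Z4
z4_solutions = [x for x in range(4) if (x + x) % 4 == 0]
assert z4_solutions == [0, 2]          # exactly two solutions

# Solutions of x + x = 0 in Z2 x Z2 (componentwise addition mod 2)
z2z2_solutions = [p for p in product(range(2), repeat=2)
                  if ((p[0] + p[0]) % 2, (p[1] + p[1]) % 2) == (0, 0)]
assert len(z2z2_solutions) == 4        # every element is a solution
```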
-
It seems like you're confused as to what an isomorphism actually is. Surely two sets of order 10 are isomorphic as sets, but as the other answers have said, this doesn't necessarily make them isomorphic as groups. For two groups to have a (group) isomorphism you need a bijective (group) homomorphism to exist between them. This is why your projection idea won't work (it isn't enough for a group isomorphism); try it on the dihedral group of degree 5 and the cyclic group of order 10: none of the bijections will be homomorphisms.
-
Thanks! I forgot the definition that every "cyclic group of order n" is isomorphic to Z10. And of course, it can't always guarantee the homomorphism – Jinji Mar 28 '12 at 1:14
@Jinji: It is false that "every cyclic group of order n" is isomorphic to $Z_{10}$. Every cyclic group of order 10 is isomorphic to $Z_{10}$, but not every cyclic group of order $n$. – Arturo Magidin Mar 28 '12 at 4:45
The idea behind an isomorphism isn't just that the groups have the same number of elements, but also that they work the same. You can surely make a bijective mapping $\phi:G\rightarrow \mathbb{Z}_{10}$ for any group $G$ of order $10$, but to be isomorphic, you must have that $\phi$ is a homomorphism. Recall the definition: $\phi:G\rightarrow H$ is a homomorphism if $\phi(xy)=\phi(x)\phi(y)$ for all $x,y\in G$. Think for a moment about the intuitive meaning of that statement.
• $\phi$ is a mapping which associates elements in $G$ to elements in $H$.
• $xy\in G$ is the product of two elements in $G$; the resulting element is determined by the algebraic structure of $G$, the way its elements interact with one another.
• $\phi(x)\phi(y)$ is the product of the elements associated with $x$ and $y$ under the mapping $\phi$. The product takes place in $H$, so the resulting element depends on the algebraic structure of $H$.
• If $\phi$ is bijective, and if $\phi(xy)=\phi(x)\phi(y)$ no matter which $x$ and $y$ you pick, then given any product $xy$ in $G$, the corresponding product in $H$ works the same as the one in $G$. Thus the groups work the same, so they are the same in the eyes of a group theorist.
If such a mapping does not exist, then at least some part of the structures of $G$ and $H$ is irreconcilably different, so the groups themselves are different.
In the example of $\mathbb{Z}_{10}$ vs $D_{10}$, we can prove that no isomorphism exists by noticing that $D_{10}$ has no element of order $10$, while $Z_{10}$ does - an essential difference in structure which would have to be preserved by an isomorphism. (It is easy to prove that isomorphisms preserve element orders; you should try it.)
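The order argument is easy to verify by brute force. The sketch below (my own; it models $D_{10}$ as the maps $x\mapsto \pm x+b \pmod 5$) lists the element orders of both groups:

```python
def compose(f, g):
    """Compose two elements of D10, written as maps x -> s*x + b (mod 5)."""
    (sf, bf), (sg, bg) = f, g
    return (sf * sg, (sf * bg + bf) % 5)

def order(el, identity, op):
    """Smallest k >= 1 with el^k = identity."""
    k, acc = 1, el
    while acc != identity:
        acc = op(acc, el)
        k += 1
    return k

# D10 (dihedral of order 10): 5 rotations (s=+1) and 5 reflections (s=-1)
d10 = [(s, b) for s in (1, -1) for b in range(5)]
d10_orders = {order(el, (1, 0), compose) for el in d10}

# Z10 under addition mod 10
z10_orders = {order(a, 0, lambda u, v: (u + v) % 10) for a in range(10)}

print(sorted(d10_orders))  # [1, 2, 5] -- no element of order 10
print(sorted(z10_orders))  # [1, 2, 5, 10] -- e.g. 1 has order 10
```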
-
http://mathhelpforum.com/advanced-algebra/121965-distinct-equivalent-classes-null-set.html
# Thread:
## Distinct equivalence classes and the null set
Let R be an equivalence relation on a set S. Let E and F be two distinct equivalence classes of R. Prove that E and F = null set.
Do I show that it's transitive? I'm a bit stuck. Help much appreciated, thanks.
2. Originally Posted by adam_leeds
Let R be an equivalence relation on a set S. Let E and F be two distinct equivalence classes of R. Prove that E and F = null set.
Do I show that it's transitive? I'm a bit stuck. Help much appreciated, thanks.
Do you mean $E \cap F = \emptyset$?
Assume there exists an element in $E \cap F$, call it $x$. Then you can apply transitivity as every element of $E$ is equivalent to $x$, as is every element of $F$. $e \sim x$ and $x \sim f \Rightarrow e \sim f \Rightarrow E = F$, a contradiction.
3. Originally Posted by Swlabr
Do you mean $E \cap F = \emptyset$?
Assume there exists an element in $E \cap F$, call it $x$. Then you can apply transitivity as every element of $E$ is equivalent to $x$, as is every element of $F$. $e \sim x$ and $x \sim f \Rightarrow e \sim f \Rightarrow E = F$, a contradiction.
Yep thats what i meant thanks.
4. Originally Posted by Swlabr
Do you mean $E \cap F = \emptyset$?
Assume there exists an element in $E \cap F$, call it $x$. Then you can apply transitivity as every element of $E$ is equivalent to $x$, as is every element of $F$. $e \sim x$ and $x \sim f \Rightarrow e \sim f \Rightarrow E = F$, a contradiction.
Actually, I'm a bit confused: how is this a contradiction? It doesn't show it's in the null set, just that e is in f?
5. Originally Posted by adam_leeds
Actually, I'm a bit confused: how is this a contradiction? It doesn't show it's in the null set, just that e is in f?
You assumed that there is some element in $E\cap F$. As Swlabr showed, this implies $E=F$; however, you took E, F to be distinct equivalence classes, i.e. $E \neq F$, so this is a contradiction.
6. Originally Posted by adam_leeds
Let R be an equivalence relation on a set S. Let E and F be two distinct equivalence classes of R. Prove that E and F = null set.
Do i show that itys transitive im a bit stuck. Help much appreciated thanks.
More of a forward-knowledge looking back approach (since you need to do this problem to prove what I'm about to say), but an equivalence relation $R$ on $S$ induces a partition $\pi$ of $S$ where the blocks are the equivalence classes. If that is a definition in your book the answer follows immediately.
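As a concrete sanity check of this partition statement (the relation below, congruence mod 3 on a small set, is my own illustrative choice, not from the thread), a few lines of Python confirm that distinct classes are disjoint and that together they cover the set:

```python
# Equivalence classes of x R y iff x ≡ y (mod 3) on S = {0, ..., 11}.
S = range(12)
classify = lambda x: x % 3  # invariant that determines the class of x

# Build each equivalence class as a fibre of `classify`.
classes = {}
for x in S:
    classes.setdefault(classify(x), set()).add(x)
blocks = list(classes.values())

# Distinct classes intersect in the empty set...
assert all(a & b == set() for a in blocks for b in blocks if a is not b)
# ...and together they partition S, as in post #6.
assert set().union(*blocks) == set(S)
```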
http://mathoverflow.net/questions/769/exhibit-an-explicit-bijection-between-irreducible-polynomials-over-finite-fields/1800
## Exhibit an explicit bijection between irreducible polynomials over finite fields and Lyndon words.
Let $q$ be a power of a prime. It's well-known that the function $B(n, q) = \frac{1}{n} \sum_{d | n} \mu \left( \frac{n}{d} \right) q^d$ counts both the number of irreducible polynomials of degree $n$ over $\mathbb{F}_q$ and the number of Lyndon words of length $n$ over an alphabet of size $q$. Does there exist an explicit bijection between the two sets?
-
## 4 Answers
In Reutenauer's "Free Lie Algebras", section 7.6.2:
A direct bijection between primitive necklaces of length $n$ over $F$ and the set of irreducible polynomials of degree $n$ in $F[x]$ may be described as follows: let $K$ be the field with $q^n$ elements; it is a vector space of dimension $n$ over $F$, so there exists in $K$ an element $\theta$ such that the set $\{\theta, \theta^q, \dots, \theta^{q^{n-1}}\}$ is a linear basis of $K$ over $F$.
With each word $w = a_0 \dots a_{n-1}$ of length $n$ on the alphabet $F$, associate the element $\beta$ of $K$ given by $\beta = a_0\theta + a_1\theta^q + \dots + a_{n-1}\theta^{q^{n-1}}$. It is easily shown that to conjugate words $w, w'$ correspond conjugate elements $\beta, \beta'$ in the field extension $K/F$, and that $w \mapsto \beta$ is a bijection. Hence, to a primitive conjugation class corresponds a conjugation class of cardinality $n$ in $K$; to the latter corresponds a unique irreducible polynomial of degree $n$ in $F[x]$. This gives the desired bijection.
-
See section 38.10 "Generating irreducible polynomials from Lyndon words" of http://www.jjj.de/fxt/#fxtbook
-
I believe such a bijection is presented in
S. Golomb. Irreducible polynomials, synchronizing codes, primitive necklaces and cyclotomic algebra. In Proc. Conf Combinatorial Math. and Its Appl., pages 358– 370, Chapel Hill, 1969. Univ. of North Carolina Press.
but I don't have immediate access to this paper - I'm pretty sure it's in there though.
-
I don't seem to have access to it either, but at least one other paper (Berstel and Reutenauer, jstor.org/stable/2001573) suggests that this is an open problem. Indeed I have essentially the same motivation as them for asking this question, so I suppose I should've read this paper more carefully. – Qiaochu Yuan Oct 16 2009 at 18:43
A tiny bit of additional evidence (still not conclusive): springerlink.com/index/P6X9P2BV73L2X2GG.pdf "As there exists a bijection between Lyndon words over an alphabet of cardinality k and irreducible polynomials over Fk [10]..." where the reference [10] is to Golomb's paper. – Alon Amit Oct 16 2009 at 19:12
The correspondence invented by Golomb relies on the choice of a primitive element $a$ in the field of order $q^n$. Then, to each Lyndon word $L=(l_0,l_1,\dots,l_{n-1})$ one assigns the primitive polynomial having as a root the element $a^{m(L)}$, where $m(L)$ is the integer $\sum_{i=0}^{n-1} l_i q^i$.
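For readers who want to experiment, here is a short Python sketch (my own illustration, not taken from Golomb's paper) that evaluates the counting formula $B(n,q)=\frac{1}{n}\sum_{d\mid n}\mu(n/d)q^d$ from the question and checks it against a brute-force enumeration of Lyndon words:

```python
from itertools import product

def mobius(n):
    # Möbius function via trial factorization.
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def necklace_count(n, q):
    # B(n, q) = (1/n) * sum over d | n of mu(n/d) * q^d
    return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

def count_lyndon_brute(n, q):
    # A word is Lyndon iff it is strictly smaller than every proper rotation.
    return sum(
        1
        for w in product(range(q), repeat=n)
        if all(w < w[i:] + w[:i] for i in range(1, n))
    )

# e.g. necklace_count(4, 2) == 3: the binary Lyndon words 0001, 0011, 0111
```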
-
http://unapologetic.wordpress.com/2011/08/18/stokes-theorem-proof-part-1/?like=1&source=post_flair&_wpnonce=bcc517a348
# The Unapologetic Mathematician
## Stokes’ Theorem (proof part 1)
We turn now to the proof of Stokes’ theorem.
$\displaystyle\int\limits_cd\omega=\int\limits_{\partial c}\omega$
We start by considering the case where $c$ is the standard cube $[0,1]^k\subseteq\mathbb{R}^k$. Whipping out the definition of the boundary operator $\partial$, the integral on the right proceeds as follows:
$\displaystyle\int\limits_{\partial([0,1]^k)}\omega=\sum\limits_{j,\alpha}(-1)^{j+\alpha}\int\limits_{[0,1]^{k-1}}\left[{I^k_{j,\alpha}}^*\omega\right]\left(\frac{\partial}{\partial u^1},\dots,\frac{\partial}{\partial u^{k-1}}\right)$
Now any $k-1$-form $\omega$ can be written out as
$\displaystyle\omega=\sum\limits_{i=1}^kf^idu^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k$
where each term omits exactly one of the basic $1$-forms. Since everything in sight — the differential operator and both integrals — is $\mathbb{R}$-linear, we can just use one of these terms. And so we can calculate the pullbacks:
$\displaystyle\begin{aligned}\left[{I^k_{j,\alpha}}^*f^idu^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\right]\left(\frac{\partial}{\partial u^1},\dots,\frac{\partial}{\partial u^{k-1}}\right)&=\\(f^i\circ I^k_{j,\alpha})du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\left({I^k_{j,\alpha}}_*\frac{\partial}{\partial u^1},\dots,{I^k_{j,\alpha}}_*\frac{\partial}{\partial u^{k-1}}\right)&=\\(f^i\circ I^k_{j,\alpha})\det\left(\frac{\partial(v^p\circ I^k_{j,\alpha})}{\partial u^q}\right)&\end{aligned}$
It takes a bit of juggling with the definition of $I^k_{j,\alpha}$, but we can see that this determinant is $1$ if $j=i$ and $0$ otherwise. Roughly this is because $I^k_{j,\alpha}$ takes the $k-1$ basis vector fields of $\mathcal{T}[0,1]^{k-1}$ and turns them into all of the basis vector fields of $\mathcal{T}[0,1]^k$ except the $j$-th one. If $i\neq j$ then some basis $1$-form has to line up against some basis vector field with a different index and everything goes to zero, while if $i=j$ then they can all pair off in exactly one way.
The upshot is that only the two faces of the cube in the $i$ direction contribute anything at all to the boundary integral, and we find
$\displaystyle\begin{aligned}\int\limits_{\partial([0,1]^k)}\omega=&(-1)^{i+1}\int\limits_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},1,u^i,\dots,u^{k-1})\,d(u^1,\dots,u^{k-1})\\&+(-1)^i\int\limits_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},0,u^i,\dots,u^{k-1})\,d(u^1,\dots,u^{k-1})\end{aligned}$
On the other side, we can calculate the differential of $\omega$:
$\displaystyle\begin{aligned}d\left(f^idu^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\right)&=df^i\wedge du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\\&=\left(\sum\limits_{j=1}^k\frac{\partial f^i}{\partial u^j}du^j\right)\wedge du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\\&=\left(\frac{\partial f^i}{\partial u^i}du^i\right)\wedge du^1\wedge\dots\wedge\widehat{du^i}\wedge\dots\wedge du^k\\&=(-1)^{i-1}\frac{\partial f^i}{\partial u^i}du^1\wedge\dots\wedge du^k\end{aligned}$
The tricky bit here is that when $j\neq i$ there’s nowhere to put this brand new $du^j$, since it must collide with one of the other basic $1$-forms in the wedge. But when $j=i$ then it can slip right into the “hole” where we’ve left out $du^i$, at a cost of a factor of $(-1)^{i-1}$ to pull the $du^i$ across the first $i-1$ terms in the wedge.
With this result in hand, we calculate the interior integral:
$\displaystyle\int_{[0,1]^k}d\omega=(-1)^{i-1}\int_{[0,1]^k}\frac{\partial f^i}{\partial u^i}\,d(u^1,\dots,u^k)$
We can turn this into an iterated integral, which Fubini’s theorem tells us we can evaluate in any order we want:
$\displaystyle\begin{aligned}\int_{[0,1]^k}d\omega=&(-1)^{i-1}\int_{[0,1]^{k-1}}\int\limits_0^1\frac{\partial f^i}{\partial u^i}\,du^i\,d(u^1,\dots,\widehat{u^i},\dots,u^k)\\=&(-1)^{i-1}\int_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},1,u^{i+1},\dots,u^k)\,d(u^1,\dots,\widehat{u^i},\dots,u^k)\\&-(-1)^{i-1}\int_{[0,1]^{k-1}}f^i(u^1,\dots,u^{i-1},0,u^{i+1},\dots,u^k)\,d(u^1,\dots,\widehat{u^i},\dots,u^k)\end{aligned}$
which it should be clear is the same as our answer for the boundary integral above. Thus Stokes’ theorem holds for the standard cube.
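As a quick numerical sanity check of this boundary/interior equality (my own addition, with the arbitrary illustrative choices $k=2$, $\omega = f\,du^2$, and $f(x,y)=x^2y$), the midpoint rule reproduces both sides:

```python
import numpy as np

# Stokes' theorem on the unit square for w = f du^2 with f(x, y) = x^2 * y:
# dw = (df/dx) du^1 ^ du^2, and only the two faces in the first direction
# contribute, so the theorem reduces to
#   integral over [0,1]^2 of df/dx  ==  integral over [0,1] of f(1,y) - f(0,y).
f = lambda x, y: x**2 * y
dfdx = lambda x, y: 2 * x * y

n = 400
t = (np.arange(n) + 0.5) / n      # midpoint-rule nodes on [0, 1]
X, Y = np.meshgrid(t, t)

interior = dfdx(X, Y).mean()               # integral of df/dx over the square
boundary = (f(1.0, t) - f(0.0, t)).mean()  # the two contributing faces

assert abs(interior - boundary) < 1e-9
```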
Posted by John Armstrong | Differential Topology, Topology
http://math.stackexchange.com/questions/102736/gauss-type-sums-for-cube-roots-of-primes?answertab=oldest
# Gauss-type sums for cube roots of primes
(Quadratic) Gauss sums express square root of any integer as a sum of roots of unity (or of cosines of rational multiples of $2\pi$, if you will) with rational coefficients.
But Kronecker-Weber guarantees that any root of any integer can be expressed as a sum of that kind. Is there a corresponding formula for, say, $\sqrt[3]{p}$?
Upd. I'm sorry, but original question doesn't make much sense. The question I, perhaps, meant to ask is (as Matt E kindly points out) discussed in David Speyer's answer.
-
## 1 Answer
The Kronecker--Weber theorem does not guarantee what you claim. Indeed, $\mathbb Q(p^{1/3})$ is not Galois over $\mathbb Q$, and its Galois closure is equal to $\mathbb Q(\sqrt{-3},p^{1/3})$, which is an $S_3$ (and hence non-abelian) extension of $\mathbb Q$. It is an abelian extension of $\mathbb Q(\sqrt{-3})$, but not all extensions of this field are cyclotomic; rather, they are described by the theory of complex multiplication (Kronecker's Jugendtraum).
-
(Actually, first I wanted to ask about "Gauss-sum-type" formulas for roots of a cubic polynomial with perfect square discriminant -- but then thought that just cube roots should be more tractable.) – Grigory M Jan 26 '12 at 20:19
http://mathhelpforum.com/trigonometry/104104-solved-trig-functions-exponents-trig-functions-tring-stunt-variables.html
# Thread:
1. ## [SOLVED] Trig functions, exponents of trig functions, and trig stunt variables
Here I go...
What's the difference between sin^2(x), (sinx)^2 and sin(x)^2?
What does it mean when the 2 is in each one of these positions?
Is this right: sinx * sinx = (sinx)^2? Or is it sin^2x?
Why is it that we can simplify some trig functions like
sin(2x)/(2x) into sin(y)/y, but when we put a radian value in for x in the first expression it doesn't equal the same thing when we put that value in for y? I think this is using a stunt variable, but I'm unsure why we can use these as true equations to find other answers (I hope this question made a little bit of sense)?
Also what's the difference between F(x) and f(x)?
2. Ok, let me address these one at a time:
What's the difference between sin^2(x), (sinx)^2 and sin(x)^2?
$\sin^2(x) = [\sin(x)]^2$
$\sin(x)^2 = \sin(x^2)$
Why is it that we can simplify some trig functions like
sin(2x)/(2x) into sin(y)/y, but when we put a radian value in for x in the first expression it doesn't equal the same thing when we put that value in for y? I think this is using a stunt variable, but I'm unsure why we can use these as true equations to find other answers (I hope this question made a little bit of sense)?
I'm not exactly sure what you're saying, we CAN say that:
$\frac{\sin(2x)}{2x} = \frac{\sin(y)}{y}$
BECAUSE x is in radians: the argument of sine must be dimensionless, and radians, like real numbers, carry no units. In this case, y MUST equal 2x, or whatever you choose to put into the sine function instead of 2x. y is a dummy variable to make manipulation easier, or to make it easier to understand.
Also what's the difference between F(x) and f(x)?
There is no difference except maybe what the function actually is. There are many labels for functions, f(x), g(x), h(x), I(x), etc. The names may mean something, or may not mean anything more than a function of x. If it means anything at all, it will be specified in the problem you are working on or in the course you are in.
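A two-line numerical check of both answers above (the sample point x = 0.7 is my own arbitrary choice, not from the thread):

```python
import math

x = 0.7  # arbitrary sample point, purely illustrative

# sin^2(x) means [sin(x)]^2, which is not the same as sin(x^2) in general:
assert math.sin(x) ** 2 != math.sin(x ** 2)

# The dummy-variable point: substituting y = 2x leaves the value unchanged,
# so sin(2x)/(2x) and sin(y)/y agree -- but only because y equals 2x.
y = 2 * x
assert math.sin(2 * x) / (2 * x) == math.sin(y) / y
```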
3. Thanks for the really great answer!
http://mathhelpforum.com/advanced-algebra/122098-linear-functionals.html
# Thread:
1. ## Linear functionals
Let V be a finite-dimensional vector space over a field F and let v belong to V, v not equal to 0. Then there exists some f in V* such that f(v) is not equal to 0.
(where V* is dual space of V)
2. Originally Posted by math.dj
Let V be a finite-dimensional vector space over a field F and let v belong to V, v not equal to 0. Then there exists some f in V* such that f(v) is not equal to 0.
(where V* is dual space of V)
There is a basis $\{v, e_2,\ldots,e_n\}$ of V with v ($=e_1$) as its first element. Every element x of V has a unique expression $x = \sum_{i=1}^n\alpha_ie_i$ as a linear combination of the basis vectors. Define $f(x) = \alpha_1$. Then $f$ is linear and $f(v) = 1 \neq 0$.
3. Originally Posted by math.dj
Let V be a finite-dimensional vector space over a field F and let v belong to V, v not equal to 0. Then there exists some f in V* such that f(v) is not equal to 0.
(where V* is dual space of V)
f is a mapping $f: V \rightarrow F$, isn't it? It is termed a linear functional if it's a linear mapping from V into F.
If $\{v_1,\dots,v_n\}$ is a basis of $V$ over $F$, let $f_1, \dots, f_n \in V^*$ be the linear functionals defined by Kronecker delta notation:
$f_i(v_j) = \delta_{ij} =\left \{\begin{matrix} 1, & \mbox{if } i=j \\ 0, & \mbox{if } i \ne j \end{matrix}\right.$
Then the set $\{f_1,\dots,f_n\}$ is a basis for $V^*$. And those linear functionals $f_i$ are unique, since they are defined on a basis of $V$.
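To see the construction in coordinates, here is a small NumPy sketch for $V = \mathbb{R}^3$ (my own illustrative basis, not from the thread): writing the basis vectors as the columns of a matrix $B$, the rows of $B^{-1}$ are exactly the dual functionals, because the $(i,j)$ entry of $B^{-1}B$ is $f_i(v_j)=\delta_{ij}$.

```python
import numpy as np

# Columns of B form a basis {v_1, v_2, v_3} of R^3 (illustrative choice).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

dual = np.linalg.inv(B)  # row i of B^{-1} is the dual functional f_i

# f_i(v_j) = delta_ij, so {f_1, f_2, f_3} is the dual basis of (R^3)*.
assert np.allclose(dual @ B, np.eye(3))

# The functional from post #2: f reads off the v_1-coordinate, so f(v_1) = 1 != 0.
v, f = B[:, 0], dual[0]
assert np.isclose(f @ v, 1.0)
```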
http://math.stackexchange.com/questions/216858/positive-operator-is-bounded
# Positive operator is bounded
For a real Banach space $X$ let $A:X\rightarrow X^*$ be a positive operator in the sense that $(Ax)(x)\geq 0$ for all $x\in X$. Show that $A$ is bounded.
I don't know how to do that, maybe it's an application of the closed graph theorem?
-
## 1 Answer
It's indeed an application of the closed graph theorem. First, take a sequence $\{x_n\}\subset X$ which converges to $x$ and such that $Ax_n\to l$, where $l\in X^*$. Since $\{Ax_n\}$ is bounded in $X^*$ and $x_n\to x$, we have $\langle Ax_n,x_n\rangle\to l(x)$. For $y\in X$, we have $$0\leq \langle Ax_n-Ay,x_n-y\rangle=\langle Ax_n,x_n\rangle-\langle Ay,x_n\rangle-\langle Ax_n,y\rangle+\langle Ay,y\rangle.$$ Taking the limit as $n\to +\infty$, we get $$0\leq l(x)-\langle Ay,x\rangle-l(y)+\langle Ay,y\rangle,$$ which gives $$l(y-x)\leq \langle Ay,y-x\rangle.$$ Let $y=z+x$. Then $$l(z)\leq \langle Az,z\rangle+\langle Ax,z\rangle.$$ Replacing $z$ by $az$ with $a>0$ and letting $a\to 0$, we get $l(z)\leq \langle Ax,z\rangle$. Replacing $z$ by $-z$ gives the reverse inequality, so $l=Ax$.
As $X$ and $X^*$ are Banach spaces, we conclude by closed graph theorem.
-
http://cms.math.ca/cjm/onlinefirst/
Canadian Mathematical Society
www.cms.math.ca
Online First
The following papers are the latest research papers available from the Canadian Journal of Mathematics.
The papers below are all fully peer-reviewed and we vouch for the research inside. Some items are labelled Author's Draft, and others are identified as Published.
As a service to our readers, we post new papers as soon as the science is right, but before official publication; these are the papers marked Author's Draft. When our copy editing process is complete and the paper now has our official form, we replace the Author's Draft with the Published version.
All the papers below are scheduled for inclusion in a Print issue. When that issue goes to press, the paper is moved from this Online First web page over to the main CJM Digital Archive.
Abdesselam, Abdelmalek; Chipalkatti, Jaydeep
Published: 2012-11-17
Let $F$ denote a binary form of order $d$ over the complex numbers. If $r$ is a divisor of $d$, then the Hilbert covariant $\mathcal{H}_{r,d}(F)$ vanishes exactly when $F$ is the perfect power of an order $r$ form. In geometric terms, the coefficients of $\mathcal{H}$ give defining equations for the image variety $X$ of an embedding $\mathbf{P}^r \hookrightarrow \mathbf{P}^d$. In this paper we describe a new construction of the Hilbert covariant; and simultaneously situate it into a wider class of covariants called the Göttingen covariants, all of which vanish on $X$. We prove that the ideal generated by the coefficients of $\mathcal{H}$ defines $X$ as a scheme. Finally, we exhibit a generalisation of the Göttingen covariants to $n$-ary forms using the classical Clebsch transfer principle.
Adamus, Janusz; Randriambololona, Serge; Shafikov, Rasul
Published: 2012-07-16
Given a real analytic set $X$ in a complex manifold and a positive integer $d$, denote by $\mathcal A^d$ the set of points $p$ in $X$ at which there exists a germ of a complex analytic set of dimension $d$ contained in $X$. It is proved that $\mathcal A^d$ is a closed semianalytic subset of $X$.
Aguiar, Marcelo; Mahajan, Swapneel
Published: 2013-03-08
Combinatorial structures that compose and decompose give rise to Hopf monoids in Joyal's category of species. The Hadamard product of two Hopf monoids is another Hopf monoid. We prove two main results regarding freeness of Hadamard products. The first one states that if one factor is connected and the other is free as a monoid, their Hadamard product is free (and connected). The second provides an explicit basis for the Hadamard product when both factors are free.
The first main result is obtained by showing the existence of a one-parameter deformation of the comonoid structure and appealing to a rigidity result of Loday and Ronco that applies when the parameter is set to zero. To obtain the second result, we introduce an operation on species that is intertwined by the free monoid functor with the Hadamard product. As an application of the first result, we deduce that the Boolean transform of the dimension sequence of a connected Hopf monoid is nonnegative.
Aholt, Chris; Sturmfels, Bernd; Thomas, Rekha
Published: 2012-07-19
Multiview geometry is the study of two-dimensional images of three-dimensional scenes, a foundational subject in computer vision. We determine a universal Gröbner basis for the multiview ideal of $n$ generic cameras. As the cameras move, the multiview varieties vary in a family of dimension $11n-15$. This family is the distinguished component of a multigraded Hilbert scheme with a unique Borel-fixed point. We present a combinatorial study of ideals lying on that Hilbert scheme.
Bailey, Michael
Published: 2013-03-20
We answer the natural question: when is a transversely holomorphic symplectic foliation induced by a generalized complex structure? The leafwise symplectic form and transverse complex structure determine an obstruction class in a certain cohomology, which vanishes if and only if our question has an affirmative answer. We first study a component of this obstruction, which gives the condition that the leafwise cohomology class of the symplectic form must be transversely pluriharmonic. As a consequence, under certain topological hypotheses, we infer that we actually have a symplectic fibre bundle over a complex base. We then show how to compute the full obstruction via a spectral sequence. We give various concrete necessary and sufficient conditions for the vanishing of the obstruction. Throughout, we give examples to test the sharpness of these conditions, including a symplectic fibre bundle over a complex base which does not come from a generalized complex structure, and a regular generalized complex structure which is very unlike a symplectic fibre bundle, i.e., for which nearby leaves are not symplectomorphic.
Berg, Chris; Bergeron, Nantel; Saliola, Franco; Serrano, Luis; Zabrocki, Mike
We introduce a new basis of the algebra of non-commutative symmetric functions whose images under the forgetful map are Schur functions when indexed by a partition. Dually, we build a basis of the quasi-symmetric functions which expand positively in the fundamental quasi-symmetric functions. We then use the basis to construct a non-commutative lift of the Hall-Littlewood symmetric functions with similar properties to their commutative counterparts.
Bernard, P.; Zavidovique, M.
Published: 2013-01-23
We expose different methods of regularizations of subsolutions in the context of discrete weak KAM theory. They allow to prove the existence and the density of $C^{1,1}$ subsolutions. Moreover, these subsolutions can be made strict and smooth outside of the Aubry set.
Birth, Lidia; Glöckner, Helge
Published: 2012-10-03
For a Lie group $G$, we show that the map $C^\infty_c(G)\times C^\infty_c(G)\to C^\infty_c(G)$, $(\gamma,\eta)\mapsto \gamma*\eta$ taking a pair of test functions to their convolution is continuous if and only if $G$ is $\sigma$-compact. More generally, consider $r,s,t \in \mathbb{N}_0\cup\{\infty\}$ with $t\leq r+s$, locally convex spaces $E_1$, $E_2$ and a continuous bilinear map $b\colon E_1\times E_2\to F$ to a complete locally convex space $F$. Let $\beta\colon C^r_c(G,E_1)\times C^s_c(G,E_2)\to C^t_c(G,F)$, $(\gamma,\eta)\mapsto \gamma *_b\eta$ be the associated convolution map. The main result is a characterization of those $(G,r,s,t,b)$ for which $\beta$ is continuous. Convolution of compactly supported continuous functions on a locally compact group is also discussed, as well as convolution of compactly supported $L^1$-functions and convolution of compactly supported Radon measures.
Broussous, P.
Let $G$ be the group $\mathrm{GL}(N,F)$, where $F$ is a locally compact non-archimedean field. Using the theory of simple types of Bushnell and Kutzko, as well as an original idea of Henniart, we construct explicit pseudo-coefficients for the discrete series representations of $G$. As an application, we deduce new formulas for the value of the Harish-Chandra character of certain such representations at certain regular elliptic elements.
Caillat-Gibert, Shanti; Matignon, Daniel
This paper concerns the problem of existence of taut foliations among $3$-manifolds. Since the contribution of David Gabai, we know that closed $3$-manifolds with non-trivial second homology group admit a taut foliation. The essential part of this paper focuses on Seifert fibered homology $3$-spheres. The result is quite different if they are integral or rational but non-integral homology $3$-spheres. Concerning integral homology $3$-spheres, we can see that all but the $3$-sphere and the Poincaré $3$-sphere admit a taut foliation. Concerning non-integral homology $3$-spheres, we prove there are infinitely many which admit a taut foliation, and infinitely many without taut foliation. Moreover, we show that the geometries do not determine the existence of taut foliations on non-integral Seifert fibered homology $3$-spheres.
Cho, Peter J.; Kim, Henry H.
Published: 2012-09-08
We construct unconditionally several families of number fields with the largest possible class numbers. They are number fields of degree 4 and 5 whose Galois closures have the Galois group $A_4, S_4$ and $S_5$. We first construct families of number fields with smallest regulators, and by using the strong Artin conjecture and applying zero density result of Kowalski-Michel, we choose subfamilies of $L$-functions which are zero free close to 1. For these subfamilies, the $L$-functions have the extremal value at $s=1$, and by the class number formula, we obtain the extreme class numbers.
Choiy, Kwangho
Published: 2013-02-21
Let $F$ be a $p$-adic field of characteristic $0$, and let $M$ be an $F$-Levi subgroup of a connected reductive $F$-split group such that $\Pi_{i=1}^{r} SL_{n_i} \subseteq M \subseteq \Pi_{i=1}^{r} GL_{n_i}$ for positive integers $r$ and $n_i$. We prove that the Plancherel measure for any unitary supercuspidal representation of $M(F)$ is identically transferred under the local Jacquet-Langlands type correspondence between $M$ and its $F$-inner forms, assuming a working hypothesis that Plancherel measures are invariant on a certain set. This work extends the result of Muić and Savin (2000) for Siegel Levi subgroups of the groups $SO_{4n}$ and $Sp_{4n}$ under the local Jacquet-Langlands correspondence. It can be applied to a simply connected simple $F$-group of type $E_6$ or $E_7$, and a connected reductive $F$-group of type $A_{n}$, $B_{n}$, $C_n$ or $D_n$.
Chu, C-H.; Velasco, M. V.
Published: 2012-11-13
We introduce the concept of a rare element in a non-associative normed algebra and show that the existence of such element is the only obstruction to continuity of a surjective homomorphism from a non-associative Banach algebra to a unital normed algebra with simple completion. Unital associative algebras do not admit any rare element and hence automatic continuity holds.
Cruz, Victor; Mateu, Joan; Orobitg, Joan
Published: 2013-02-06
Our goal in this work is to present some function spaces on the complex plane $\mathbb C$, $X(\mathbb C)$, for which the quasiregular solutions of the Beltrami equation, $\overline\partial f (z) = \mu(z) \partial f (z)$, have first derivatives locally in $X(\mathbb C)$, provided that the Beltrami coefficient $\mu$ belongs to $X(\mathbb C)$.
De Bernardi, Carlo Alberto
Published: 2012-12-29
We prove that the set of all support points of a nonempty closed convex bounded set $C$ in a real infinite-dimensional Banach space $X$ is $\mathrm{AR}(\sigma\text{-compact})$ and contractible. Under suitable conditions, similar results are proved also for the set of all support functionals of $C$ and for the domain, the graph and the range of the subdifferential map of a proper convex l.s.c. function on $X$.
Delanoë, Philippe; Rouvière, François
Published: 2012-07-16
The squared distance curvature is a kind of two-point curvature whose sign turned out to be crucial for the smoothness of optimal transportation maps on Riemannian manifolds. Positivity properties of this new curvature have been established recently for all the simply connected compact rank one symmetric spaces, except the Cayley plane. Direct proofs were given for the sphere, and an indirect one (via the Hopf fibrations) for the complex and quaternionic projective spaces. Here, we present a direct proof of a property implying all the preceding ones, valid on every positively curved Riemannian locally symmetric space.
Durand-Cartagena, E.; Ihnatsyeva, L.; Korte, R.; Szumańska, M.
Published: 2013-02-08
We study approximately differentiable functions on metric measure spaces admitting a Cheeger differentiable structure. The main result is a Whitney-type characterization of approximately differentiable functions in this setting. As an application, we prove a Stepanov-type theorem and consider approximate differentiability of Sobolev, $BV$ and maximal functions.
Eilers, Søren; Restorff, Gunnar; Ruiz, Efren
Let $\mathfrak{A}$ be a $C^{*}$-algebra with real rank zero which has the stable weak cancellation property. Let $\mathfrak{I}$ be an ideal of $\mathfrak{A}$ such that $\mathfrak{I}$ is stable and satisfies the corona factorization property. We prove that $0 \to \mathfrak{I} \to \mathfrak{A} \to \mathfrak{A} / \mathfrak{I} \to 0$ is a full extension if and only if the extension is stenotic and $K$-lexicographic. As an immediate application, we extend the classification result for graph $C^*$-algebras obtained by Tomforde and the first named author to the general non-unital case. In combination with recent results by Katsura, Tomforde, West and the first author, our result may also be used to give a purely $K$-theoretical description of when an essential extension of two simple and stable graph $C^*$-algebras is again a graph $C^*$-algebra.
Elekes, Márton; Steprāns, Juris
Published: 2013-02-06
A subset $X$ of a Polish group $G$ is called Haar null if there exists a Borel set $B \supset X$ and Borel probability measure $\mu$ on $G$ such that $\mu(gBh)=0$ for every $g,h \in G$. We prove that there exist a set $X \subset \mathbb R$ that is not Lebesgue null and a Borel probability measure $\mu$ such that $\mu(X + t) = 0$ for every $t \in \mathbb R$. This answers a question from David Fremlin's problem list by showing that one cannot simplify the definition of a Haar null set by leaving out the Borel set $B$. (The answer was already known assuming the Continuum Hypothesis.)
This result motivates the following Baire category analogue. It is consistent with $ZFC$ that there exist an abelian Polish group $G$ and a Cantor set $C \subset G$ such that for every non-meagre set $X \subset G$ there exists a $t \in G$ such that $C \cap (X + t)$ is relatively non-meagre in $C$. This essentially generalises results of Bartoszyński and Burke-Miller.
Forrest, Brian; Miao, Tianxuan
Let $G$ be a locally compact group. Let $A_{M}(G)$ (resp. $A_{0}(G)$) denote the closure of $A(G)$, the Fourier algebra of $G$, in the space of bounded (resp. completely bounded) multipliers of $A(G)$. We call a locally compact group M-weakly amenable if $A_M(G)$ has a bounded approximate identity. We will show that when $G$ is M-weakly amenable, the algebras $A_{M}(G)$ and $A_{0}(G)$ have properties that are characteristic of the Fourier algebra of an amenable group. Along the way we show that the sets of topologically invariant means associated with these algebras have the same cardinality as those of the Fourier algebra.
Fuller, Adam Hanley
Published: 2012-11-17
Let $\mathcal{S}$ be the semigroup $\mathcal{S}=\bigoplus_{i=1}^{k}\mathcal{S}_i$, where for each $i=1,\ldots,k$, $\mathcal{S}_i$ is a countable subsemigroup of the additive semigroup $\mathbb{R}_+$ containing $0$. We consider representations of $\mathcal{S}$ as contractions $\{T_s\}_{s\in\mathcal{S}}$ on a Hilbert space with the Nica-covariance property: $T_s^*T_t=T_tT_s^*$ whenever $t\wedge s=0$. We show that all such representations have a unique minimal isometric Nica-covariant dilation.
This result is used to help analyse the nonself-adjoint semicrossed product algebras formed from Nica-covariant representations of the action of $\mathcal{S}$ on an operator algebra $\mathcal{A}$ by completely contractive endomorphisms. We conclude by calculating the $C^*$-envelope of the isometric nonself-adjoint semicrossed product algebra (in the sense of Kakariadis and Katsoulis).
Garcés, Jorge J.; Peralta, Antonio M.
Published: 2013-02-06
We introduce generalised triple homomorphism between Jordan Banach triple systems as a concept which extends the notion of generalised homomorphism between Banach algebras given by K. Jarosz and B.E. Johnson in 1985 and 1987, respectively. We prove that every generalised triple homomorphism between JB$^*$-triples is automatically continuous. When particularised to C$^*$-algebras, we rediscover one of the main theorems established by B.E. Johnson. We shall also consider generalised triple derivations from a Jordan Banach triple $E$ into a Jordan Banach triple $E$-module, proving that every generalised triple derivation from a JB$^*$-triple $E$ into itself or into $E^*$ is automatically continuous.
Giambruno, Antonio; Mattina, Daniela La; Zaicev, Mikhail
Published: 2013-05-15
Let $\mathcal{V}$ be a variety of associative algebras generated by an algebra with $1$ over a field of characteristic zero. This paper is devoted to the classification of the varieties $\mathcal{V}$ which are minimal of polynomial growth (i.e., their sequence of codimensions grows like $n^k$ but any proper subvariety grows like $n^t$ with $t\lt k$). These varieties are the building blocks of general varieties of polynomial growth.
It turns out that for $k\le 4$ there are only finitely many varieties of polynomial growth $n^k$, but for each $k \gt 4$ the number of minimal varieties is at least $|F|$, the cardinality of the base field, and we give a recipe for constructing them.
Goulden, I. P.; Guay-Paquet, Mathieu; Novak, Jonathan
Published: 2012-10-30
Hurwitz numbers count branched covers of the Riemann sphere with specified ramification data, or equivalently, transitive permutation factorizations in the symmetric group with specified cycle types. Monotone Hurwitz numbers count a restricted subset of these branched covers related to the expansion of complete symmetric functions in the Jucys-Murphy elements, and have arisen in recent work on the asymptotic expansion of the Harish-Chandra-Itzykson-Zuber integral. In this paper we begin a detailed study of monotone Hurwitz numbers. We prove two results that are reminiscent of those for classical Hurwitz numbers. The first is the monotone join-cut equation, a partial differential equation with initial conditions that characterizes the generating function for monotone Hurwitz numbers in arbitrary genus. The second is our main result, in which we give an explicit formula for monotone Hurwitz numbers in genus zero.
Grandjean, Vincent
Published: 2012-12-29
Given a non-oscillating gradient trajectory $|\gamma|$ of a real analytic function $f$, we show that the limit $\nu$ of the secants at the limit point $\mathbf{0}$ of $|\gamma|$ along the trajectory $|\gamma|$ is an eigenvector of the limit of the direction of the Hessian matrix $\operatorname{Hess} (f)$ at $\mathbf{0}$ along $|\gamma|$. The same holds true at infinity if the function is globally sub-analytic. We also deduce some interesting estimates along the trajectory. Away from the ends of the ambient space, this property is of metric nature and still holds in a general Riemannian analytic setting.
Grigor'yan, Alexander; Hu, Jiaxin
Published: 2013-02-06
We prove that, in a setting of local Dirichlet forms on metric measure spaces, a two-sided sub-Gaussian estimate of the heat kernel is equivalent to the conjunction of the volume doubling property, the elliptic Harnack inequality and a certain estimate of the capacity between concentric balls. The main technical tool is the equivalence between the capacity estimate and the estimate of a mean exit time in a ball, which uses two-sided estimates of a Green function in a ball.
Guardo, Elena; Harbourne, Brian; Van Tuyl, Adam
Published: 2012-11-13
Recent work of Ein-Lazarsfeld-Smith and Hochster-Huneke raised the problem of which symbolic powers of an ideal are contained in a given ordinary power of the ideal. Bocci-Harbourne developed methods to address this problem, which involve asymptotic numerical characters of symbolic powers of the ideals. Most of the work up to now has been done for ideals defining 0-dimensional subschemes of projective space. Here we focus on certain subschemes given by a union of lines in $\mathbb{P}^3$ which can also be viewed as points in $\mathbb{P}^1 \times \mathbb{P}^1$. We also obtain results on the closely related problem, studied by Hochster and by Li-Swanson, of determining situations for which each symbolic power of an ideal is an ordinary power.
Guitart, Xavier; Quer, Jordi
Published: 2012-11-13
The main result of this paper is a characterization of the abelian varieties $B/K$ defined over Galois number fields with the property that the $L$-function $L(B/K;s)$ is a product of $L$-functions of non-CM newforms over $\mathbb Q$ for congruence subgroups of the form $\Gamma_1(N)$. The characterization involves the structure of $\operatorname{End}(B)$, isogenies between the Galois conjugates of $B$, and a Galois cohomology class attached to $B/K$.
We call the varieties having this property strongly modular. The last section is devoted to the study of a family of abelian surfaces with quaternionic multiplication. As an illustration of the ways in which the general results of the paper can be applied we prove the strong modularity of some particular abelian surfaces belonging to that family, and we show how to find nontrivial examples of strongly modular varieties by twisting.
Harris, Adam; Kolář, Martin
Published: 2012-12-04
This article establishes a sufficient condition for Kobayashi hyperbolicity of unbounded domains in terms of curvature. Specifically, when $\Omega\subset{\mathbb C}^{n}$ corresponds to a sub-level set of a smooth, real-valued function $\Psi$, such that the form $\omega = {\bf i}\partial\bar{\partial}\Psi$ is Kähler and has bounded curvature outside a bounded subset, then this domain admits a hermitian metric of strictly negative holomorphic sectional curvature.
He, Jianxun; Xiao, Jinsen
Published: 2012-12-04
Let $F_{2n,2}$ be the free nilpotent Lie group of step two on $2n$ generators, and let $\mathbf P$ denote the affine automorphism group of $F_{2n,2}$. In this article the theory of the continuous wavelet transform on $F_{2n,2}$ associated with $\mathbf P$ is developed, and a class of radial wavelets is constructed. Secondly, the Radon transform on $F_{2n,2}$ is studied and two equivalent characterizations of the range of the Radon transform are given. Several kinds of inversion formulae for the Radon transform are established. One is obtained from the Euclidean Fourier transform; the others come from the group Fourier transform. By using the wavelet transform we deduce an inversion formula for the Radon transform, which does not require smoothness of functions if the wavelet satisfies the differentiability property. In particular, if $n=1$, then $F_{2,2}$ is the $3$-dimensional Heisenberg group $H^1$, and the inversion formula of the Radon transform associated with the sub-Laplacian on $F_{2,2}$ is valid. This result cannot be extended to the case $n\geq 2$.
Holmes, Mark; Salisbury, Thomas S.
We study the asymptotic behaviour of random walks in i.i.d. random environments on $\mathbb{Z}^d$. The environments need not be elliptic, so some steps may not be available to the random walker. We prove a monotonicity result for the velocity (when it exists) for any 2-valued environment, and show that this does not hold for 3-valued environments without additional assumptions. We give a proof of directional transience and the existence of positive speeds under strong, but non-trivial conditions on the distribution of the environment. Our results include generalisations (to the non-elliptic setting) of 0-1 laws for directional transience, and in 2-dimensions the existence of a deterministic limiting velocity.
Hrušák, Michael; van Mill, Jan
Published: 2013-03-08
We study separable metric spaces with few types of countable dense sets. We present a structure theorem for locally compact spaces having precisely $n$ types of countable dense sets: such a space contains a subset $S$ of size at most $n{-}1$ such that $S$ is invariant under all homeomorphisms of $X$ and $X\setminus S$ is countable dense homogeneous. We prove that every Borel space having fewer than $\mathfrak{c}$ types of countable dense sets is Polish. The natural question of whether every Polish space has either countably many or $\mathfrak{c}$ many types of countable dense sets is shown to be closely related to the Topological Vaught Conjecture.
Hu, Shengda; Santoprete, Manuele
Published: 2012-12-29
In this paper we regularize the Kepler problem on $S^3$ in several different ways. First, we perform a Moser-type regularization. Then, we adapt the Ligon-Schaaf regularization to our problem. Finally, we show that the Moser regularization and the Ligon-Schaaf map we obtained can be understood as the composition of the corresponding maps for the Kepler problem in Euclidean space and the gnomonic transformation.
Hu, Zhiguo; Neufang, Matthias; Ruan, Zhong-Jin
Published: 2012-09-10
We study locally compact quantum groups $\mathbb{G}$ through the convolution algebras $L_1(\mathbb{G})$ and $(T(L_2(\mathbb{G})), \triangleright)$. We prove that the reduced quantum group $C^*$-algebra $C_0(\mathbb{G})$ can be recovered from the convolution $\triangleright$ by showing that the right $T(L_2(\mathbb{G}))$-module $\langle K(L_2(\mathbb{G})) \triangleright T(L_2(\mathbb{G}))\rangle$ is equal to $C_0(\mathbb{G})$. On the other hand, we show that the left $T(L_2(\mathbb{G}))$-module $\langle T(L_2(\mathbb{G}))\triangleright K(L_2(\mathbb{G}))\rangle$ is isomorphic to the reduced crossed product $C_0(\widehat{\mathbb{G}}) \,_r\!\ltimes C_0(\mathbb{G})$, and hence is a much larger $C^*$-subalgebra of $B(L_2(\mathbb{G}))$.
We establish a natural isomorphism between the completely bounded right multiplier algebras of $L_1(\mathbb{G})$ and $(T(L_2(\mathbb{G})), \triangleright)$, and settle two invariance problems associated with the representation theorem of Junge-Neufang-Ruan (2009). We characterize regularity and discreteness of the quantum group $\mathbb{G}$ in terms of continuity properties of the convolution $\triangleright$ on $T(L_2(\mathbb{G}))$. We prove that if $\mathbb{G}$ is semi-regular, then the space $\langle T(L_2(\mathbb{G}))\triangleright B(L_2(\mathbb{G}))\rangle$ of right $\mathbb{G}$-continuous operators on $L_2(\mathbb{G})$, which was introduced by Bekka (1990) for $L_{\infty}(G)$, is a unital $C^*$-subalgebra of $B(L_2(\mathbb{G}))$. In the representation framework formulated by Neufang-Ruan-Spronk (2008) and Junge-Neufang-Ruan, we show that the dual properties of compactness and discreteness can be characterized simultaneously via automatic normality of quantum group bimodule maps on $B(L_2(\mathbb{G}))$. We also characterize some commutation relations of completely bounded multipliers of $(T(L_2(\mathbb{G})), \triangleright)$ over $B(L_2(\mathbb{G}))$.
Iglesias-Zemmour, Patrick
Published: 2012-12-29
We establish the formula for the variation of integrals of differential forms on cubic chains, in the context of diffeological spaces. We then establish the diffeological version of Stokes' theorem, and apply it to get the diffeological variant of the Cartan-Lie formula. Still in the context of Cartan-De Rham calculus in diffeology, we construct a Chain-Homotopy Operator $\mathbf K$, which we apply here to get the homotopic invariance of De Rham cohomology for diffeological spaces. This is the Chain-Homotopy Operator used in symplectic diffeology to construct the Moment Map.
Iovanov, Miodrag Cristian
Published: 2013-02-06
"Co-Frobenius" coalgebras were introduced as dualizations of Frobenius algebras. We previously showed that they admit left-right symmetric characterizations analogue to those of Frobenius algebras. We consider the more general quasi-co-Frobenius (QcF) coalgebras; the first main result in this paper is that these also admit symmetric characterizations: a coalgebra is QcF if it is weakly isomorphic to its (left, or right) rational dual $Rat(C^*)$, in the sense that certain coproduct or product powers of these objects are isomorphic. Fundamental results of Hopf algebras, such as the equivalent characterizations of Hopf algebras with nonzero integrals as left (or right) co-Frobenius, QcF, semiperfect or with nonzero rational dual, as well as the uniqueness of integrals and a short proof of the bijectivity of the antipode for such Hopf algebras all follow as a consequence of these results. This gives a purely representation theoretic approach to many of the basic fundamental results in the theory of Hopf algebras. Furthermore, we introduce a general concept of Frobenius algebra, which makes sense for infinite dimensional and for topological algebras, and specializes to the classical notion in the finite case. This will be a topological algebra $A$ that is isomorphic to its complete topological dual $A^\vee$. We show that $A$ is a (quasi)Frobenius algebra if and only if $A$ is the dual $C^*$ of a (quasi)co-Frobenius coalgebra $C$. We give many examples of co-Frobenius coalgebras and Hopf algebras connected to category theory, homological algebra and the newer q-homological algebra, topology or graph theory, showing the importance of the concept.
Jonsson, Jakob
Published: 2013-03-20
For $\delta \ge 1$ and $n \ge 1$, consider the simplicial complex of graphs on $n$ vertices in which each vertex has degree at most $\delta$; we identify a given graph with its edge set and admit one loop at each vertex. This complex is of some importance in the theory of semigroup algebras. When $\delta = 1$, we obtain the matching complex, for which it is known that there is $3$-torsion in degree $d$ of the homology whenever $\frac{n-4}{3} \le d \le \frac{n-6}{2}$. This paper establishes similar bounds for $\delta \ge 2$. Specifically, there is $3$-torsion in degree $d$ whenever $\frac{(3\delta-1)n-8}{6} \le d \le \frac{\delta (n-1) - 4}{2}$. The procedure for detecting torsion is to construct an explicit cycle $z$ that is easily seen to have the property that $3z$ is a boundary. Defining a homomorphism that sends $z$ to a non-boundary element in the chain complex of a certain matching complex, we obtain that $z$ itself is a non-boundary. In particular, the homology class of $z$ has order $3$.
Josuat-Vergès, Matthieu
Published: 2012-11-13
The $q$-semicircular distribution is a probability law that interpolates between the Gaussian law and the semicircular law. There is a combinatorial interpretation of its moments in terms of matchings, where $q$ counts the number of crossings, whereas for the free cumulants one has to restrict the enumeration to connected matchings. The purpose of this article is to describe combinatorial properties of the classical cumulants. We show that like the free cumulants, they are obtained by an enumeration of connected matchings, the weight being now an evaluation of the Tutte polynomial of a so-called crossing graph. The case $q=0$ of these cumulants was studied by Lassalle using symmetric functions and hypergeometric series. We show that the underlying combinatorics is explained through the theory of heaps, which is Viennot's geometric interpretation of the Cartier-Foata monoid. This method also gives a general formula for the cumulants in terms of free cumulants.
Kalantar, Mehrdad; Neufang, Matthias
Published: 2013-02-06
In this paper we use the recent developments in the representation theory of locally compact quantum groups, to assign, to each locally compact quantum group $\mathbb{G}$, a locally compact group $\tilde {\mathbb{G}}$ which is the quantum version of point-masses, and is an invariant for the latter. We show that "quantum point-masses" can be identified with several other locally compact groups that can be naturally assigned to the quantum group $\mathbb{G}$. This assignment preserves compactness as well as discreteness (hence also finiteness), and for large classes of quantum groups, amenability. We calculate this invariant for some of the most well-known examples of non-classical quantum groups. Also, we show that several structural properties of $\mathbb{G}$ are encoded by $\tilde {\mathbb{G}}$: the latter, despite being a simpler object, can carry very important information about $\mathbb{G}$.
Kawabe, Hiroko
Published: 2012-12-29
Guest-Ohnita and Crawford have shown the path-connectedness of the space of harmonic maps from $S^2$ to $\mathbf{C} P^n$ of a fixed degree and energy. It is well known that the $\partial$ transform is defined on this space. In this paper, we will show that the space decomposes into mutually disjoint connected subspaces on which $\partial$ is a homeomorphism.
Kellerhals, Ruth; Kolpakov, Alexander
Published: 2013-02-13
Due to work of W. Parry it is known that the growth rate of a hyperbolic Coxeter group acting cocompactly on ${\mathbb H^3}$ is a Salem number. In this arithmetic situation, we prove that the simplex group (3,5,3) has the smallest growth rate among all cocompact hyperbolic Coxeter groups, and that it is as such unique. Our approach provides a different proof for the analogous situation in ${\mathbb H^2}$, where E. Hironaka identified Lehmer's number as the minimal growth rate among all cocompact planar hyperbolic Coxeter groups and showed that it is (uniquely) achieved by the Coxeter triangle group (3,7).
Kim, Sun Kwang; Lee, Han Ju
Published: 2013-04-02
A new characterization of the uniform convexity of a Banach space is obtained in the sense of the Bishop-Phelps-Bollobás theorem. It is also proved that the pair of Banach spaces $(X,Y)$ has the Bishop-Phelps-Bollobás property for every Banach space $Y$ when $X$ is uniformly convex. As a corollary, we show that the Bishop-Phelps-Bollobás theorem holds for bilinear forms on $\ell_p\times \ell_q$ ($1\lt p, q\lt \infty$).
Kuo, Wentang; Liu, Yu-Ru; Zhao, Xiaomei
Published: 2013-05-07
Let $\mathbb{F}_q[t]$ denote the polynomial ring over the finite field $\mathbb{F}_q$. We employ Wooley's new efficient congruencing method to prove certain multidimensional Vinogradov-type estimates in $\mathbb{F}_q[t]$. These results allow us to apply a variant of the circle method to obtain asymptotic formulas for a system of equations connected to the problem of linear spaces lying on hypersurfaces defined over $\mathbb{F}_q[t]$.
Levandovskyy, Viktor; Shepler, Anne V.
We consider finite groups acting on quantum (or skew) polynomial rings. Deformations of the semidirect product of the quantum polynomial ring with the acting group extend symplectic reflection algebras and graded Hecke algebras to the quantum setting over a field of arbitrary characteristic. We give necessary and sufficient conditions for such algebras to satisfy a Poincaré-Birkhoff-Witt property using the theory of noncommutative Gröbner bases. We include applications to the case of abelian groups and the case of groups acting on coordinate rings of quantum planes. In addition, we classify graded automorphisms of the coordinate ring of quantum 3-space. In characteristic zero, Hochschild cohomology gives an elegant description of the PBW conditions.
Mashreghi, J.; Shabankhah, M.
Published: 2013-02-13
We study the image of the model subspace $K_\theta$ under the composition operator $C_\varphi$, where $\varphi$ and $\theta$ are inner functions, and find the smallest model subspace which contains the linear manifold $C_\varphi K_\theta$. Then we characterize the case when $C_\varphi$ maps $K_\theta$ into itself. This case leads to the study of the inner functions $\varphi$ and $\psi$ such that the composition $\psi\circ\varphi$ is a divisor of $\psi$ in the family of inner functions.
Mendonça, Bruno; Tojeiro, Ruy
Published: 2013-02-21
We give a complete classification of umbilical submanifolds of arbitrary dimension and codimension of $\mathbb{S}^n\times \mathbb{R}$, extending the classification of umbilical surfaces in $\mathbb{S}^2\times \mathbb{R}$ by Souam and Toubiana as well as the local description of umbilical hypersurfaces in $\mathbb{S}^n\times \mathbb{R}$ by Van der Veken and Vrancken. We prove that, besides small spheres in a slice, up to isometries of the ambient space they come in a two-parameter family of rotational submanifolds whose substantial codimension is either one or two and whose profile is a curve in a totally geodesic $\mathbb{S}^1\times \mathbb{R}$ or $\mathbb{S}^2\times \mathbb{R}$, respectively, the former case arising in a one-parameter family. All of them are diffeomorphic to a sphere, except for a single element that is diffeomorphic to Euclidean space. We obtain explicit parametrizations of all such submanifolds. We also study more general classes of submanifolds of $\mathbb{S}^n\times \mathbb{R}$ and $\mathbb{H}^n\times \mathbb{R}$. In particular, we give a complete description of all submanifolds in those product spaces for which the tangent component of a unit vector field spanning the factor $\mathbb{R}$ is an eigenvector of all shape operators. We show that surfaces with parallel mean curvature vector in $\mathbb{S}^n\times \mathbb{R}$ and $\mathbb{H}^n\times \mathbb{R}$ having this property are rotational surfaces, and use this fact to improve some recent results by Alencar, do Carmo, and Tribuzy. We also obtain a Dajczer-type reduction of codimension theorem for submanifolds of $\mathbb{S}^n\times \mathbb{R}$ and $\mathbb{H}^n\times \mathbb{R}$.
Rotger, Victor; de Vera-Piquero, Carlos
The purpose of this note is to introduce a method for proving the non-existence of rational points on a coarse moduli space $X$ of abelian varieties over a given number field $K$, in cases where the moduli problem is not fine and points in $X(K)$ may not be represented by an abelian variety (with additional structure) admitting a model over the field $K$. This is typically the case when the abelian varieties that are being classified have even dimension. The main idea, inspired by the work of Ellenberg and Skinner on the modularity of $\mathbb{Q}$-curves, is that to a point $P=[A]\in X(K)$ represented by an abelian variety $A/\bar K$ one may still attach a Galois representation of $\operatorname{Gal}(\bar K/K)$ with values in the quotient group $\operatorname{GL}(T_\ell(A))/\operatorname{Aut}(A)$, provided $\operatorname{Aut}(A)$ lies in the centre of $\operatorname{GL}(T_\ell(A))$. We exemplify our method in the cases where $X$ is a Shimura curve over an imaginary quadratic field or an Atkin-Lehner quotient over $\mathbb{Q}$.
Sambou, Diomba
Published: 2013-02-06
We consider the perturbations $H := H_{0} + V$ and $D := D_{0} + V$ of the free Pauli Hamiltonian $H_{0}$ and the free Dirac Hamiltonian $D_{0}$ in dimension 3 with a nonconstant magnetic field, where $V$ is an electric potential that decays super-exponentially in the direction of the magnetic field. We show that, in suitable Banach spaces, the resolvents of $H$ and $D$ defined on the upper half-plane admit meromorphic continuations. We define the resonances of $H$ and $D$ as the poles of these meromorphic extensions. On the one hand, we study the distribution of the resonances of $H$ near the origin $0$ and, on the other hand, that of the resonances of $D$ near $\pm m$, where $m$ is the mass of a particle. In both cases, we first obtain upper bounds on the number of resonances in small domains near $0$ and $\pm m$. Under additional hypotheses, we obtain asymptotic expansions of the number of resonances, which imply their accumulation near the thresholds $0$ and $\pm m$. In particular, for a perturbation $V$ of definite sign, we obtain information on the distribution of the eigenvalues of $H$ and $D$ near $0$ and $\pm m$, respectively.
Shen, Yibing; Zhao, Wei
Published: 2012-10-30
In this paper, we establish a universal volume comparison theorem for Finsler manifolds and give the Berger-Kazdan inequality and Santaló's formula in Finsler geometry. Based on these, we derive a Berger-Kazdan type comparison theorem and a Croke type isoperimetric inequality for Finsler manifolds.
Thompson, Alan
We consider threefolds that admit a fibration by K3 surfaces over a nonsingular curve, equipped with a divisorial sheaf that defines a polarisation of degree two on the general fibre. Under certain assumptions on the threefold we show that its relative log canonical model exists and can be explicitly reconstructed from a small set of data determined by the original fibration. Finally we prove a converse to the above statement: under certain assumptions, any such set of data determines a threefold that arises as the relative log canonical model of a threefold admitting a fibration by K3 surfaces of degree two.
Vandenbergen, Nicolas
In this paper, we study the reduced loci of special cycles on local models of the Shimura variety for $\operatorname{GU}(1,n-1)$. Those special cycles are defined by Kudla and Rapoport. We explicitly compute the irreducible components of the reduced locus of a single special cycle, as well as of an arbitrary intersection of special cycles, and their intersection behaviour in terms of Bruhat-Tits theory. Furthermore, as an application of our results, we prove the connectedness of arbitrary intersections of special cycles, as conjectured by Kudla and Rapoport.
Vaz, Pedro; Wagner, Emmanuel
We prove that the 2-variable BMW algebra embeds into an algebra constructed from the HOMFLY-PT polynomial. We also prove that the $\mathfrak{so}_{2N}$-BMW algebra embeds in the $q$-Schur algebra of type $A$. We use these results to suggest a schema providing categorifications of the $\mathfrak{so}_{2N}$-BMW algebra.
Vitagliano, Luca
Published: 2012-12-29
We define partial differential (PD in the following), i.e., field theoretic analogues of Hamiltonian systems on abstract symplectic manifolds and study their main properties, namely, PD Hamilton equations, PD Noether theorem, PD Poisson bracket, etc. Unlike in the standard multisymplectic approach to Hamiltonian field theory, in our formalism the geometric structure (kinematics) and the dynamical information on the "phase space" appear as just different components of one single geometric object.
Wang, Liping; Zhao, Chunyi
Published: 2012-12-29
We consider the following prescribed boundary mean curvature problem in $\mathbb B^N$ with the Euclidean metric: $\begin{cases} \displaystyle -\Delta u =0,\quad u\gt 0 &\text{in }\mathbb B^N, \\[2ex] \displaystyle \frac{\partial u}{\partial\nu} + \frac{N-2}{2} u =\frac{N-2}{2} \widetilde K(x) u^{2^\#-1} \quad & \text{on }\mathbb S^{N-1}, \end{cases}$ where $\widetilde K(x)$ is positive and rotationally symmetric on $\mathbb S^{N-1}, 2^\#=\frac{2(N-1)}{N-2}$. We show that if $\widetilde K(x)$ has a local maximum point, then the above problem has infinitely many positive solutions that are not rotationally symmetric on $\mathbb S^{N-1}$.
Wu, Xinfeng
In this paper, we introduce weighted Carleson measure spaces associated with different homogeneities and prove that these spaces are the dual spaces of weighted Hardy spaces studied in a forthcoming paper. As an application, we establish the boundedness of composition of two Calderón-Zygmund operators with different homogeneities on the weighted Carleson measure spaces; this, in particular, provides the weighted endpoint estimates for the operators studied by Phong-Stein.
http://physics.stackexchange.com/questions/51444/block-on-a-block-problem-with-friction?answertab=oldest
# Block on a block problem, with friction
Consider two blocks, one on top of the other on a frictionless table, with masses $m_1$ and $m_2$ respectively. There is appreciable friction between the blocks, with coefficients $\mu_s$ and $\mu_k$ for static and kinetic respectively. I'm considering the fairly routine problem of determining the maximum horizontal force $F$ (say, to the right) that can be applied to the top block so that the two blocks accelerate together.
The problem is not hard to solve symbolically. If the two blocks move together, their accelerations are the same, and the top block doesn't move with respect to the bottom block, so only static friction is in play. In a standard coordinate system (with $x$ oriented to the right), the sum of horizontal forces for the top block is
$$F-F_{sf}=m_1a$$
and for the bottom block
$$F_{sf}=m_2a$$
where $F_{sf}$ is the force of static friction. Solving for $a$ in these two expressions, and then equating them, gives
$$F=\frac{(m_1+m_2)F_{sf}}{m_2}$$
The maximum such force will therefore be achieved when $F_{sf}$ is maxed out at $\mu_s m_1g$, so
$$F_{max}=\frac{m_1}{m_2}\mu_s(m_1+m_2)g$$
I understand this solution, but conceptually I don't have a response to the following nagging question: $F_{max}$ is clearly larger than the max static friction force $\mu_sm_1g$ (because $\frac{m_1+m_2}{m_2}>1$), so why doesn't the application of a force of magnitude $F_{max}$ to the top block cause kinetic friction to take over? This line of reasoning would suggest that applying a force $F$ of magnitude greater than $\mu_sm_1g$ would cause the top block to start moving with respect to the bottom block (in which case the blocks no longer accelerate together, as in the above solution). I'm at a loss, conceptually, to say what's wrong here. I suspect it has something to do with being careful about reference frames, but a clear explanation would be much appreciated.
That would be true if the bottom block were held fixed. But in the end you're applying the force to both blocks (although it's really touching the top block), and the bottom block does accelerate. So when you look in that reference frame, you're still below the maximum static friction. – Chris Gerig Jan 17 at 3:26
Can you elaborate a bit, or spell this out exactly? I don't know what it means to say "when you look in that reference frame, you're still below the max static friction." I'm thinking about free-body diagrams. Consider the opposite case, where a force $F$ of magnitude less than the max static friction force is applied. Then the sum of horizontal forces on the top block gives zero acceleration, but the reaction friction force on the bottom block gives a positive acceleration -- & yet shouldn't the blocks be accelerating together in that case? I want a way to be totally rigorous about this. – symplectomorphic Jan 17 at 3:31
## 3 Answers
The key is that the bottom block is actually moving and is not held fixed like the ground typically is (here I am assuming $F$ is applied to the top block).
Elaborating on my comment: Your acceleration "$a$" is with respect to the ground. The equation $F-F_{sf}=m_1a$ shows that the reason you can accelerate is because $F_{max}>F_{sf}$, and this is accelerating with respect to ground, not with respect to the bottom block.
In particular, if you held the bottom block in place (treating it as the ground), then yes kinetic friction would kick in, but now your equations would be different (because $a_{bottom}=0$ and $a_{top}>0$).
I'm still confused, because of the opposite case I mentioned in my comment above. Is my intuition wrong that the two blocks should still accelerate together when $F$ is less than the max static force? If not, then I don't understand how the accelerations in the equations could both be with respect to the ground. That's why I ask. – symplectomorphic Jan 17 at 3:46
Ah be careful, you solved for what $F_{sf}$ will be (assuming both blocks accelerate at the same rate, which is what we are considering): $F_{sf}=m_2F/(m_1+m_2)$. Thus $F-F_{sf} \ne 0$. – Chris Gerig Jan 17 at 3:57
@Chris Gerig Does this imply that no force applied to the top block in the same horizontal direction as the defined x-axis will cause the top block to move with respect to the bottom block? Intuitively I feel that enough impulse would achieve this, is that not good intuition? – Leonardo Jan 17 at 4:23
@Leonardo: no, just apply a force of magnitude greater than what I derived for $F_{max}$. Then the top block will move with respect to the bottom block and kinetic friction takes over. – symplectomorphic Jan 17 at 4:25
@symplectomorphic In the case that the surface under the bottom block is frictionless? What force is stopping the bottom block from moving with the top block no matter what force is being applied (to the top block) in the horizontal? – Leonardo Jan 17 at 4:26
You can also think of your problem in a slightly different way, which may (or may not) help you see things in a different light.
1. You apply your force $F$, and it accelerates the top block at a rate $a_1$.
2. You can consider the acceleration as an inertial force of $-m_1\cdot a_1$, so there is a surplus force $F-m_1\cdot a_1$ left over.
3. This surplus force will be used to accelerate the second block at a rate $a_2$.
4. Since the table is frictionless, there can be no surplus, and all the force must be consumed in accelerating the block, so $F-m_1\cdot a_1-m_2\cdot a_2 = 0$.
So far everything holds no matter what the nature of the interfacial contact between the blocks, which we must introduce to solve for $a_1$ and $a_2$:
1. If both blocks move together, then $a_1 = a_2 = a$, and the inequality for static friction applies, $F - m_1\cdot a \leq m_1 g \mu_s$. This, together with the equation of (4) above gives you an acceleration of $a=F/(m_1+m_2)$ for forces $F \leq \frac{m_1}{m_2} (m_1+m_2) g \mu_s$.
2. If $a_1 \neq a_2$ then we have relative motion, and then we know what the interfacial force is exactly, and we can calculate $a_1 = F / m_1 - g \mu_k$ and $a_2 = g \mu_k m_1 /m_2$.
But, to answer your question, why doesn't a force larger than the maximum static friction applied to the top block cause slipping, and thus dynamic friction? Because part of the force is used up in accelerating the top block, and it is only the surplus that is available to try to overcome friction between the blocks.
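To make the two regimes concrete, here is a small numeric sketch (the masses and friction coefficients are made-up illustrative values, not from the thread):

```python
# Two stacked blocks on a frictionless table; horizontal force F on the TOP block.
# Illustrative values only.
MU_S, MU_K, G = 0.4, 0.3, 9.81   # static/kinetic friction coefficients, g (m/s^2)
m1, m2 = 2.0, 3.0                # top and bottom block masses (kg)

def accelerations(F):
    """Return (a_top, a_bottom) for a horizontal force F applied to the top block."""
    F_max = (m1 / m2) * MU_S * (m1 + m2) * G   # threshold derived in the question
    if F <= F_max:
        a = F / (m1 + m2)        # blocks move together: one system of mass m1+m2
        return a, a
    # Slipping: kinetic friction mu_k * m1 * g acts at the interface (case 2 above)
    return F / m1 - MU_K * G, MU_K * G * m1 / m2
```

For $F$ below the threshold (about 13.1 N with these numbers) both blocks share $a = F/(m_1+m_2)$; above it the top block pulls ahead, consistent with the point that slipping depends on the surplus force at the interface, not on $F$ alone.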
+1 A nice way to think about the inertial force! – Sankaran Jan 19 at 0:15
The reason you're finding this non-intuitive is because you're assuming that the applied force on the top block will cause the bottom block to experience the same force due to friction which is wrong.
For example, consider two blocks `A` and `B` , sitting side by side on a smooth table. If you apply a force `F` on `A`, then one may incorrectly say that the force that `B` experiences is also `F`. But that would be wrong. It's actually less than `F`.
The same is true for your example. The frictional force experienced by the bottom block is less than the actual force applied on the top block. Hence, for the bottom block to experience the maximum static frictional force, one has to apply a force greater than this. Note that this 'frictional force' is due to the top block itself and hence your example is analogous to the above example I gave.
But why does the bottom block experience lesser force in the first place? This is because the 'rest' of the force (or in more correct terms: the rest of the momentum) is used up to move the top block itself. If the top block is massive, then you will need to exert an even larger force to get the top block to accelerate at the same rate as that of the bottom block. Most of the imparted momentum (force) here is given to the top block while the rest of it is given to the bottom block. If the top block is light, then very little of the momentum imparted (force) is used up to move the top block and the rest of it is given to the bottom block. Hence in this case, the required force will be close to the maximum static frictional force.
You can see this in your expression: $F=\frac{(m_1+m_2)F_{sf}}{m_2}$
as $m_1 \to 0$, the force required approaches the frictional force.
However, in this expression: $F=\frac{m_1}{m_2}\mu_s(m_1+m_2)g$
as $m_1 \to 0$, the force also approaches $0$. This is because this expression incorporates the fact that the maximum frictional force decreases as the mass of the top block decreases (lighter the top block, less the normal force between the two blocks). Hence as the top block's mass approaches $0$, the maximum force you can apply also approaches $0$.
http://www.physicsforums.com/showthread.php?p=4206096
Physics Forums
## How are the gravitational and electric force comparable?
I hear all the time how the electric force is so much stronger than gravity.
I understand both forces are inversely proportional to the distances squared, and that the gravitational constant is roughly 10^20 times greater than the coulomb constant.
But one involves charges, while the other involves mass. To me this makes as much sense as a saying a second is larger than a meter. What am I not understanding?
Quote by aftershock (the original post, quoted in full)
First, the Coulomb constant is approximately 10^20 times greater than the gravitational constant, not the other way around :)
Yes, the quality of the objects is different, but you are comparing the resulting force between them. Even though the forces have different natures, the value you can read on your imaginary force meter when comparing is of the same kind.
I hear all the time how the electric force is so much stronger than gravity.
Try working out the electric and gravitational forces acting on a pair of electrons....
http://www.school-for-champions.com/...ctrostatic.htm
The gravitational attraction between two electrons is only $8.22\times10^{-37}$ of the electrostatic force of repulsion at the same separation.
Dot4: ha, yeah that's what I meant. Switching them was a typo.
Quote by CWatters Try working out the electric and gravitational forces acting on a pair of electrons.... http://www.school-for-champions.com/...ctrostatic.htm
Why is it less valid to give the Earth and moon some small charge and then compare the electric and gravitational forces acting between them?
You would probably say oh well the mass in this case is so much greater than the charge. But that brings me back to my original question of asking how is that different than comparing a second and kilogram.
Quote by aftershock (the two posts above, quoted in full)
seconds and kilograms don't measure the same thing. Electrostatic force is a force and gravitational force is a force --- they are the same thing in that sense. What part of that do you not understand?
Quote by phinds seconds and kilograms don't measure the same thing. Electrostatic force is a force and gravitational force is a force --- they are the same thing in that sense. What part of that do you not understand?
Well what I mean is that coulombs and kilograms don't measure the same thing. Gravity wins in the moon/earth scenario because the mass is so much greater than the charge. That last part compares coulombs and kilograms, right?
Quote by aftershock Well what I mean is that coulombs and kilograms don't measure the same thing. Gravity wins in the moon/earth scenario because the mass is so much greater than the charge. That last part compares coulombs and kilograms, right?
When you think about it, you should realise that both coulombs and kilograms are in the same category of "things", but seconds and kilograms are not.
Mass and charge are properties of elementary particles that are the source of attractive forces. These similarities let you compare the two, by, e.g., looking at the differences in strength of the force they produce, or observing which elementary particles have which property.
You can't draw such parallels with time and mass. They're completely different.
To use an analogy: You might be attracted to person's eyes or singing ability. Hence you can talk about what turns you on more, and which person has got better voice or more charming eyes.
You can't really compare any of those two with days of the week.
Just to add to Bandersnatch's explanation: even in the case of the Earth and the Moon, were both made of the same kind of electricity, you could measure how much bigger the electric force is than the gravitational one. They are both measured using the same properties.
Quote by aftershock But one involves charges, while the other involves mass.
This is the source of your confusion. You should say one involves charge, which is the electromagnetic force, and one involves gravitation, which is the gravitational force.
Now it should be clear that you are comparing forces, and you can proceed from there. A force is that which changes momentum.
When a charge (electromagnetic force) changes the momentum of an object, this momentum change is applied to the mass of the object. You see, an electron is not just about charge, it has mass too. I think that is a good way to think about charge. Charge is not directly relating to the motion of the particle, but rather think of charge in terms of the process which changes the momentum of the particle.
BTW, there are really only four forces in nature, gravitation, electromagnetic, weak force, and strong force (or three if we accept that the electromagnetic and weak force have been unified into the electro-weak force). When we speak of applying a force to an object, say throwing a ball, what force is that? It might at first seem as if this is a different kind of a force, but really it's not. The contact forces between your hand and the ball are electromagnetic. There are also forces from your muscles which are originated from the chemical processes in your body which ultimately are electromagnetic.
Quote by aftershock (the original post, quoted in full)
Imagine a magnet. You can pick up another magnet with it and both will easily overcome gravity. The same thing applies for electric charge. With a very very small portion of an objects electrical charges out of balance you can pick up another electrically charged object. Just a small percentage of charges added or missing from an object completely dominates the force of gravity from the entire Earth! That's what is meant by saying electromagnetism is much stronger than gravity.
Quote by aftershock (the original post, quoted in full)
Are you familiar with the term "active gravitational mass"? It is sometimes referred to as "gravitational charge", and is also called "the standard gravitational parameter". It is defined by Kepler's third law:
$$\mu=4\pi^2R^3T^{-2}$$
Notice that mass is not in there anywhere. It just so happens that $\mu$ and mass are proportional.
What's meant mathematically is that the dimensionless coupling constant describing the electromagnetic force is much larger than that describing the gravitational force. The thing is, we can unambiguously define an electromagnetic coupling constant (the fine structure constant, ~ 1/137), because there is a unique smallest electric charge. There's not a unique smallest mass, as each elementary particle has a different mass which does not appear to be related to any fundamental mass unit. So you get a different gravitational coupling constant depending on which fundamental particle you pick. But regardless of which one you pick, you always get an answer which is many orders of magnitude smaller than the electromagnetic coupling constant.
Quote by Nabeshin So you get a different gravitational coupling constant depending on which fundamental particle you pick.
Wait a minute. Is the gravitational coupling constant actually different per unit of mass, for different particles? It seems strange to me because I normally think of gravitational force in terms of mass, without regard to what kind of mass it is.
Let me rephrase that question. Would a gram of electrons (if they could be held together for a time) have the same gravitation force as a gram of protons, even if both have a different gravitational coupling constant?
Also, in the Wikipedia article about the gravitational coupling constant, it says that the gravitational coupling constant characterizes the gravitational attraction between charged elementary particles having nonzero mass. Can you please explain a little about the relationship with gravitation and charge, because I have previously thought that they were unrelated?
Thanks.
P.S. What about a gram of neutrons? Neutrons are not charged, so they might be excluded from having a gravitational coupling constant, yet they have mass and definitely exert gravitational influence. Perhaps they are able to have a GCC based on the fact that they are composed of quarks, which do have charge?
Quote by aftershock (quoted from the post above)
The electron and the proton are the basic building blocks of all objects in the universe, so the Earth and the Moon are just a bunch of electrons and protons. The difference is that there is positive and negative charge, and so the electric force is screened out over large distances, whereas gravity is not.
Think of the amount of mass required to generate the gravitational pressure needed to overcome the electromagnetic binding force between molecules inside the mass--the equilibrium occurs, basically, when an object in space becomes spherical. This happens at about $10^{20}$ - $10^{21}$ kg. Divided by the mass of a proton, this implies you need about $10^{47}$ atoms to generate the amount of gravitational pressure to break the electromagnetic strength between atoms. This should hopefully demonstrate to you why comparing gravity vs. electromagnetic forces between an electron or proton is not such an arbitrary method of determining the relative strength of the two forces.
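As a rough sanity check on the order of magnitude quoted above (illustrative numbers, not a derivation):

```python
# Number of nucleons in a body just massive enough for self-gravity to
# round it into a sphere (~1e20 kg, the lower end of the quoted range).
m_body   = 1e20       # kg (claimed threshold, see the post above)
m_proton = 1.67e-27   # kg
n_atoms  = m_body / m_proton
print(f"{n_atoms:.1e} nucleons")  # of order 10^46-10^47
```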
Quote by MikeGomez (the post above, quoted in full)
My point is this, the dimensionless gravitational coupling constant is given by
$$\alpha_G = \frac{G m_1 m_2}{\hbar c} = \frac{m_1 m_2}{m_p^2}$$
So while the Newton gravitational constant G is a fundamental constant of nature, $\alpha_G$ depends on $m_1$ and $m_2$. The reason is, as I said, there is no 'fundamental unit of mass' as there is a fundamental unit of charge.
Of course the force of gravity is given by the usual F=GmM/r^2, which doesn't care at all about composition.
I think the only reason they mention charged particles is to compare to the fine structure constant, which is defined of course in terms of electrical charge. I see no reason why you can't have a gravitational coupling constant for neutral particles.
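Plugging approximate constants into the expressions above reproduces the often-quoted numbers. A quick sketch (the rounded SI values below are my own inputs, not from the thread):

```python
import math

# Approximate physical constants (SI, rounded).
G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c    = 2.998e8       # speed of light, m/s
e    = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
m_e  = 9.109e-31     # electron mass, kg
m_p  = 1.673e-27     # proton mass, kg

# Fine structure constant: unambiguous, since e is the unique smallest charge.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)     # ~1/137

# Gravitational coupling depends on which particle masses you insert:
alpha_G_e = G * m_e**2 / (hbar * c)   # two electrons: ~1.8e-45
alpha_G_p = G * m_p**2 / (hbar * c)   # two protons:   ~5.9e-39
```

Either choice of particle leaves gravity weaker by at least ~36 orders of magnitude, which is the thread's point.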
http://mathoverflow.net/questions/26001/are-the-rationals-homeomorphic-to-any-power-of-the-rationals
|
## Are the rationals homeomorphic to any power of the rationals?
I asked myself which spaces have the property that $X^2$ is homeomorphic to $X$. I started to look at some examples like $\mathbb{N}^2 \cong \mathbb{N}$, $\mathbb{R}^2 \ncong \mathbb{R}$, $C^2 \cong C$ (for the Cantor set $C$). And then I got stuck when I considered the rationals. So the question is:
Is $\mathbb{Q}^2$ homeomorphic to $\mathbb{Q}$ ?
## 3 Answers
Yes, Sierpinski proved that every countable metric space without isolated points is homeomorphic to the rationals: http://at.yorku.ca/p/a/c/a/25.htm .
An amusing consequence of Sierpinski's theorem is that $\mathbb{Q}$ is homeomorphic to $\mathbb{Q}$. Of course here one $\mathbb{Q}$ has the order topology, and the other has the $p$-adic topology (for your favourite prime $p$) :-)
Yes, they are homeomorphic. To construct a homeomorphism from $\mathbb Q$ to $\mathbb Q^2$, one can proceed roughly as follows: express $q\in \mathbb Q$ as a continued fraction $[a_0, a_1,a_2,...]$ (of finite length) and associate with it the pair $([a_0,a_2,...], [a_1,a_3,...])$.
Mind that this is a homeomorphism, but not an isometry (cf. the comment on Tom's answer).
I vaguely remember that there is a general theorem in point-set topology stating that all countable topological spaces "of the same kind as $\mathbb Q$" are homeomorphic.
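A minimal sketch of the proposed interleaving map (function names are mine). As the comments point out, finite continued fractions are not unique — every rational has two expansions — so the map as written is ambiguous:

```python
from fractions import Fraction

def cf(q):
    """Finite continued fraction [a0; a1, ...] of a rational q (a Fraction)."""
    terms = []
    while True:
        a = q.numerator // q.denominator   # floor
        terms.append(a)
        q -= a
        if q == 0:
            return terms
        q = 1 / q

def from_cf(terms):
    """Rebuild the rational from its continued fraction terms."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

def interleave(q):
    """Proposed Q -> Q^2: even-index terms to one coordinate, odd-index to the other."""
    t = cf(q)
    return from_cf(t[0::2]), from_cf(t[1::2] or [0])

# Ambiguity: 11/2 = [5; 2] = [5; 1, 1], and the two expansions map to
# different pairs, (5, 2) versus (6, 1).
```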
Why the construction does not work for $\mathbb R$ instead of $\mathbb Q$? – Wadim Zudilin May 26 2010 at 12:40
Each positive rational has two different continued fraction expansions. 5+1/2 = [5,2] maps to ([5],[2]) = (5,2). But 5+1/2 = [5,1,1] also, which maps to ([5,1],[1]) = (6,1). – Gerald Edgar May 26 2010 at 13:20
@Xandi: On the contrary, it works with infinite continued fractions to show that the space of irrationals is homeomorphic to its square. – Gerald Edgar May 26 2010 at 13:22
Sorry, but I was wondering why this is (a) surjective: only $a_0$ might be negative; hence you never get $a_1$ negative and hence the map is not surjective; and (b) continuous: note that $a_n:=[1;n]=1+1/n$ converges to $1$, but $([1],[n])=(1,n)$ diverges. I don't know how to repair this. Still it would be nice to have an explicit homeomorphism. – HenrikRüping May 26 2010 at 14:22
The irrationals have a nice characterisation as well (the rationals are the unique countable metric space without isolated points): the irrationals are the unique 0-dimensional [base of clopen sets] separable metric space that is nowhere locally compact [no non-empty open set has compact closure]. It is a way of showing that the irrationals are homeomorphic to N^N and hence to any finite or countable power of itself. – Henno Brandsma May 26 2010 at 17:55
I don't think so: the completion of $\mathbb{Q}^2$ is $\mathbb{R}^2$, so that a homeomorphism $\mathbb{Q}^2\to\mathbb{Q}$ would give a homeomorphism $\mathbb{R}^2\to\mathbb{R}$?
Your argument only shows that there is no isometry. – Xandi Tuni May 26 2010 at 12:35
You're quite right. I thought it sounded too good to be true... – Tom Smith May 26 2010 at 12:36
This will give a really good exercise for a topology course! – Xandi Tuni May 26 2010 at 12:48
It also shows there is no homeomorphism uniformly continuous in both directions. – Gerald Edgar May 26 2010 at 13:24
http://physics.stackexchange.com/questions/6086/how-does-this-thought-experiment-not-rule-out-black-holes/6327
# How does this thought experiment not rule out black holes?
How does the following brief thought experiment fail to show that general relativity (GR) has a major problem with regard to black holes?
The full thought experiment is in my blog post. The post claims that GR violates its own equivalence principle at the horizon of a black hole. The principle says that the laws of physics in any sufficiently small, freely falling frame are the same as they are in an inertial frame in an idealized, gravity-free universe. Here's a condensed version of the thought experiment:
In an arbitrarily small, freely falling frame X that is falling through the horizon of a black hole, let there be a particle above the horizon that is escaping to infinity. A free-floating rod positioned alongside the particle and straddling the horizon couldn't be escaping to infinity as well, or else it'd be passing outward through the horizon. However, if instead the rod didn't extend as far down as the horizon, then in principle it could be escaping, possibly faster than the particle beside it. **In an inertial frame, unlike in X, a body's freedom of movement (in principle and if only relative to other free objects in the frame) doesn't depend on the body's position or extent.** Then a test of the laws of physics can distinguish X from an inertial frame. If X were equivalent to an inertial frame, I wouldn't be able to tell whether the rod could possibly be passing the particle in the outward direction, by knowing only whether the rod extends as far down as an imaginary boundary (the horizon) within the frame. If X were equivalent to an inertial frame, the rod could in principle be passing the particle in the outward direction regardless of its extent within X.
The thought experiment above takes place completely within X, which is arbitrarily small in spacetime (arbitrarily small both spatially and in duration). That is, the experiment is completely local. That the particle is escaping to infinity is a process occurring within X; it tells us that the particle won't cross the horizon during the lifetime of X. The particle needn't reach infinity before the experiment concludes.
It isn't necessary to be able to detect (by some experiment) that a horizon exists within X. It's a given (from the givens in the thought experiment) that a horizon is there. Likewise, I am free to specify the initial conditions of a particle or rod in relation to the horizon. For example, I am free to specify that the rod straddles the horizon, and draw conclusions from that. The laws of physics in X are affected by the presence and properties of the horizon regardless of whether an observer in that frame detects the horizon.
It seems to me that the only way the equivalence principle is satisfiable in X is when in principle the rod can be escaping to infinity regardless of its initial position or extent in X, which would rule out black holes in a theory of gravity consistent with the principle. Otherwise, it seems the bolded sentence must be incorrect. If so, how? In other words, how can I not tell whether the rod can possibly be passing the particle in the outward direction, by knowing only whether it extends as far down as the horizon?
I'd appreciate hearing from Ted Bunn or other experts on black holes. A barrier to getting a satisfactory answer to this question is that many people believe the tidal force is so strong at the horizon that the equivalence principle can't be tested there except impossibly, within a single point in spacetime. An equation of GR (see my blog post) shows that a horizon isn't a special place with regard to the tidal force, in agreement with many texts including Ted Bunn's Black Hole FAQ. In fact the tidal force can in principle be arbitrarily weak in any size X. To weaken the tidal force in any given size X, just increase the mass of the black hole. (Or they might believe it's fine to test the principle in numerical approximation in a frame larger than a point, but not fine to test it logically in such a frame anywhere. Kip Thorne disagrees, in a reference in my blog post.) Note also that the Chandra X-ray Observatory FAQ tells us that observations of black holes to date aren't confirmations of GR; rather, they actually depend on the theory's validity, which is to say the existence of black holes in nature isn't proven.
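To make the tidal-force claim quantitative: the standard radial tidal acceleration per unit separation near a Schwarzschild hole is $2GM/r^3$, which at the horizon $r = 2GM/c^2$ reduces to $c^6/(4G^2M^2)$ and so falls as the square of the mass. A minimal sketch (the particular masses are just illustrative):

```python
# Radial tidal acceleration per metre of separation at the horizon of a
# Schwarzschild black hole. a_tidal = 2*G*M/r^3 evaluated at r_s = 2*G*M/c^2
# simplifies to c^6 / (4*G^2*M^2): it falls as M^-2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def tidal_at_horizon(M):
    """Tidal acceleration (m/s^2) per metre of radial separation at r = 2GM/c^2."""
    r_s = 2 * G * M / c**2
    return 2 * G * M / r_s**3

print(tidal_at_horizon(M_sun))        # stellar-mass hole: enormous (~1e10)
print(tidal_at_horizon(4e6 * M_sun))  # Sgr A*-scale hole: gentle (below 1e-2)
```

So a frame of any fixed size straddling the horizon can be made as tidally quiet as desired by choosing a large enough hole.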
Edit to add: I put a simple diagram, showing GR's violation of its own EP, at the blog post.
Edit to add: I'm awarding the bounty to dbrane, whose answer will likely retain the lead in votes, even though it's clearly incorrect as I see it. (In short, the correct answer cannot be that an infinitesimally small frame is required to test the EP. It is in fact tested in larger labs. The tidal force need only be small enough that it doesn't affect the outcome. Nor is the horizon a special place with regard to the tidal force, says GR.) I do appreciate the answers. Thanks!
Edit to add: this question hasn't been properly answered. The #1 answer below made a false assumption about the question. I've beefed up the question to address the objections in the answers below. I added my own answer to recap the objections and reach a conclusion. Please read the whole post before answering; I may have already covered your objection. Thanks!
-
The extent of your object is what spoils the freely falling frame. It is not equivalent to an inertial frame. – Raskolnikov Feb 28 '11 at 12:12
Two comments: tidal forces don't have to be large at the horizon; they can be negligible for very large black holes. Also, the evidence for black hole horizons has been discussed here physics.stackexchange.com/questions/3349/… – dbrane Feb 28 '11 at 12:58
@Raskolnikov: In a reference in my blog post, Kip Thorne specifically says that the equivalence principle applies in a small, freely falling frame that is falling through the horizon of a black hole, and says that the laws of physics are testable in such frame, just like they are in any other small, freely falling frame anywhere else. So yes, according to the equivalence principle X should be equivalent to an inertial frame. @dbane is correct that the tidal forces can be mild at the horizon. In principle they can be arbitrarily weak in any given size X. – finbot Mar 2 '11 at 4:20
@finbot: Nobody is going to win the bounty, since many good replies have been given but you just don't understand them. As I said, freely falling frame $\neq$ inertial frame. They are only equal locally. Forget black holes and GR: what you are proposing would be equivalent to saying that because you hurl a stone away during free fall, the stone must absolutely escape because the free fall frame is equivalent to an inertial frame. This is obviously untrue. – Raskolnikov Mar 5 '11 at 12:29
@Raskolnikov: According to bounty rules, half must go to the highest-voted answer. I'll give the other half to the second-highest-voted answer. Yes, they are equivalent only locally, so it's a good thing X is defined to be local, by virtue of being defined to be arbitrarily small. What I'm proposing isn't like your "hurling stone" example at all. The thought experiment concerns whether it's possible for the rod to also be escaping, given how far down within X it extends. The experiment isn't trying to prove that the rod must be escaping. – finbot Mar 8 '11 at 2:13
## 9 Answers
I just read your blog post and it's clear to me where you've gone wrong.
The equivalence principle only allows you to transform to an inertial frame locally. This means that if your spacetime is curved, then the falling observer can only choose Minkowski coordinates for an infinitesimal region around her.
Think of a curved surface and having to choose a very small patch on it for it to appear flat. Clearly, you can't extend that flat patch indefinitely and call it an inertial frame of infinite extent (which you require in order to argue that the frame would allow you to send signals out to infinity).
The horizon is a global object that you realize exists when you patch together all the infinitesimal coordinate systems and examine its causal structure.
So, yes, the falling observer can do experiments to realize the horizon exists, but this does not violate the Equivalence Principle because such experiments are not done locally in an infinitesimal region. This applies to the rod that you seem to want to send away to infinity after crossing the horizon too. The infinitesimal flat patch in which you're allowed to play with the EP does not include infinity (or anything beyond the horizon), so you can't throw things outside of the horizon once you've crossed.
-
+1 @dbrane absolutely spot on. – user1355 Feb 28 '11 at 14:50
If I understand correctly, the gist of it is that you can't fit a rod inside a region of infinitesimal extent. – David Zaslavsky♦ Mar 4 '11 at 0:11
@finbot: but note that I said infinitesimal extent, not arbitrarily small (which would imply some finite size). – David Zaslavsky♦ Mar 4 '11 at 3:21
@finbot: when you're at the event horizon of a black hole, sufficiently small is infinitesimally small. This comment thread is getting kind of long and I'm not inclined to say anything further on the matter. – David Zaslavsky♦ Mar 4 '11 at 4:36
@dbrane--- this answer is not totally wrong, but it's false. The scale at which the equivalence principle holds is determined by the curvature scale, not by the distance to the horizon. – Ron Maimon Aug 20 '11 at 6:28
Dbrane's answer contains the essential points. However I should point out that General Relativity is more sophisticated than your models suggest.
1. The Inertial Frame concept (as used in the Equivalence Principle) is really only valid infinitesimally (whence it matches Minkowski space and the "idealised gravity-free universe"). Some authors have criticized the EP for this, and so have you. Most authors accept this and just present the EP "locally" - with "local" meaning no large deviations via curvature. Near the Event Horizon of a Black Hole is not a good place to find such flatness - especially if the BH is rotating - so we would be dealing with small Frames at best. All this makes "Law K" in your post suspect. (EDIT ADD FOR CLARITY) Thus the Blog phrase "Then law K is false in X" needs to say "Then law K is false in General Relativity".
2. A different problem here is the status of "Event Horizon" (presumed at R=2M in your post). Put simply, Event Horizons are difficult to find for an active Black Hole (one that is still eating up matter): its position is actually mobile until the Black Hole finally settles down (at the end of the Universe). This is a very counterintuitive behaviour of Black Holes and of General Relativity, and arises because the "M" in "R=2M" hasn't been determined until the Black Hole has stopped growing!
3. Concerning this:
In an inertial frame, unlike in X, a body's freedom of movement (in principle and if only relative to other free objects in the frame) doesn't depend on the body's position or extent.
The "freedom of movement" that I think you are referring to whether the object can be accelerated beyond the speed of light, which cannot be done in any Inertial Frame. As it cannot be so accelerated then no physical process in momentarily passing frame X can stop the momentarily straddling rod from entering the Event Horizon (remember that the Frame X is entering the Horizon too).
-
@Roy Simpson: On #1, X is defined to be arbitrarily small, so it meets the condition of local, hence nothing suspect about the laws of physics being tested. On #2, that's a difficulty in practice only, which is irrelevant here. In principle (in theory) we are free to test the laws of physics in X exactly as that frame is defined; Kip Thorne agrees too. On #3, that doesn't answer my question. For the EP to be satisfied in X, I mustn't be able to tell whether the rod can possibly be passing by the particle in the outward direction, by knowing only whether it touches the horizon. – finbot Mar 2 '11 at 4:48
Unfortunately there is a gap between the GR physics in Black Hole books and GR in general. So all we can do is give some pointers and advice: you will have to study up on these answers. My primary advice is to draw a diagram (or 3) of what you think is actually happening in this thought experiment. You will notice that all GR books are filled with diagrams, so must this experiment. When you do that you will have to "draw a border" around X, you will be constrained to put it inside/outside/straddling the EH. You will then add rods, particles, observers and anything else relevant. – Roy Simpson Mar 2 '11 at 17:02
Note that diagrams are usually drawn at a time t, but you might need to consider several such times to get a good picture of these events. In presentation of your arguments ensure that the logic and definitions are watertight. The phrase "freedom of movement" has come from nowhere in this description, so others cannot help here. Nevertheless the 3 points I have made are where there are surprising counterintuitive aspects of GR. For example when the books say "A must happen near a BH" they mean "A would not happen if only it could exceed the speed of light in some inertial frame." – Roy Simpson Mar 2 '11 at 17:12
I have added an EDIT on the Law K aspect, just to focus that point. – Roy Simpson Mar 2 '11 at 18:16
I think a diagram would be superfluous. I think the thought experiment amply shows that I'm able to tell whether the rod can possibly be passing by the particle in the outward direction, by knowing only whether it touches the horizon. Which violates the EP, it seems. – finbot Mar 2 '11 at 22:27
You're choosing a "freely falling" inertial frame. There's a natural set of coordinates for a non-rotating black hole for this, called "Gullstrand-Painlevé" coordinates. They correspond to the natural coordinates for a particle falling into a black hole from infinity. See the wikipedia article.
In these coordinates, the speed of light is different for light trying to move away from the black hole than for light moving towards it. As the little patch enters the black hole, the speed of light moving away from the black hole becomes negative; that is, even light moving away from the black hole still gets sucked into the singularity.
A well-written, sort-of introductory and very intuitive paper you might find enlightening on these coordinates, and on their generalization to a rotating and/or charged black hole, is:
Andrew J. S. Hamilton and Jason P. Lisle, "The river model of black holes", Am. J. Phys. 76:519-532, 2008
http://arxiv.org/abs/gr-qc/0411060
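Concretely, in these coordinates (units $G = c = 1$) a radial light ray obeys $dr/dt = \pm 1 - \sqrt{2M/r}$: the outgoing coordinate speed is positive outside the horizon, zero at $r = 2M$, and negative inside. A minimal numerical sketch of that formula (the sample radii are arbitrary):

```python
import math

def gp_light_speeds(r, M=1.0):
    """Coordinate velocities dr/dt of outgoing and ingoing radial light rays
    in Gullstrand-Painleve coordinates, units G = c = 1."""
    river = math.sqrt(2 * M / r)      # local infall speed of the "river" of space
    return (1.0 - river, -1.0 - river)

for r in (4.0, 2.0, 1.0):             # outside, at, and inside r_s = 2M
    out, ingoing = gp_light_speeds(r)
    print(f"r = {r}: outgoing dr/dt = {out:+.3f}, ingoing dr/dt = {ingoing:+.3f}")
```

The sign flip of the outgoing speed at $r = 2M$ is the whole content of "even outward-directed light gets dragged in".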
-
The other answers(+1) are correct; the problem is that the inertial frame has to be small. I've added this answer in the hope it will give a better intuitive understanding of what one of those small inertial frames looks like. – Carl Brannen Mar 4 '11 at 1:17
@Carl Brannen, frame X is defined to be arbitrarily small. So "it has to be small" isn't an answer to this question. I'm familiar with the river model (it's one of my favorites). In any model, however, the escape velocity above the horizon is less than the speed of light. The horizon is defined as the highest place where light cannot possibly escape. The particle can definitely be escaping by virtue of being above the horizon. The entire thought experiment takes place locally, within an abitrarily small (in spacetime) freely falling frame. No better arena for testing the EP exists. – finbot Mar 4 '11 at 1:49
@Carl Brannen: The question doesn't say "escape to infinity". It says "escaping to infinity". That's different. Escaping to infinity is something that can be happening entirely locally, in an arbitrarily small region of spacetime. Likewise if a thought experiment specifies that a car is moving at 70 kph, it doesn't mean that the car moves 70 km during the experiment. Search above for "The thought experiment above" for a further explanation. – finbot Mar 4 '11 at 3:37
@Carl Brannen: I think "escaping toward infinity" would be ambiguous. Only "escaping to infinity" indicates it's never coming back. I did a web search of .edu sites to see that I'm using the phrase properly. It's okay to talk about something escaping to infinity in terms of a local process. The particle reaches some great distance after the experiment is over. If I say "escaping toward infinity" there's no guarantee the particle won't fall back and cross the horizon during the lifetime of X. It indicates only the direction it's escaping, not what it's escaping. – finbot Mar 4 '11 at 4:37
@Carl Brannen: "Just because a particle is escaping towards infinity doesn't mean it gets there." True. But escaping to infinity is different. In physics that term clearly means it's moving toward infinity and never falling back. And it doesn't mean that the situation can't be analyzed locally. I see 2K+ .edu links mentioning "escaping to infinity" and everywhere I look they're analyzing locally. After all, it would take forever to reach infinity. The mere existence of the word "infinity" in a thought experiment, or even "escaping to infinity", doesn't necessarily mean a global experiment. – finbot Mar 4 '11 at 7:29
The answers to this question all get it wrong. The answer is that an accelerating frame has exactly the same horizon as the black hole, so that the equivalence principle holds. It does not hold merely infinitesimally as you approach the horizon; it holds including the horizon, if you identify the black hole horizon with the Rindler horizon.
The length scale at which the EP fails is the inverse curvature, which is as large as you like compared to the distance to the horizon. So the motion of the particle and the rod is the same in a uniformly accelerated frame as it is next to a black hole.
This type of equivalence principle, with a short distance to the horizon, was never used by Einstein, but it's sort of folklore by now!
LATER EDIT: I see that this answer might be interpreted as lending support to the claimed violation of the equivalence principle in the OP's question. There is absolutely no violation of the equivalence principle, and this can be easily seen.
Given a rigid rod of length L in the horizontal direction, it is impossible to accelerate it horizontally while keeping it rigid with an acceleration any greater than
$a_{\mathrm{max}} = c^2/L$
because then the left-most point would be past the Rindler horizon of the right-most point. If you try to do this to a rod, its proper length increases, because the acceleration at the left point can't keep up (this is easily seen in a space-time diagram). The intuition that fails is that there is such a thing as "uniform acceleration of a rigid rod". So when the rod is longer than the distance to the horizon, it will not be able to pass the particle in the inertial frame before the whole frame reaches the horizon, and the question is moot.
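For a sense of scale, the bound $a_{\mathrm{max}} = c^2/L$ only bites for very long rods; the sample lengths below are chosen purely for illustration:

```python
c = 2.998e8             # speed of light, m/s
LIGHT_YEAR = 9.461e15   # metres

def a_max(L):
    """Maximum acceleration (m/s^2) at which a rod of length L metres can be
    accelerated rigidly: beyond c^2/L the trailing end falls behind the
    Rindler horizon of the leading end."""
    return c**2 / L

print(a_max(1.0))         # 1 m rod: ~9e16 m/s^2
print(a_max(LIGHT_YEAR))  # 1 light-year rod: ~9.5 m/s^2, roughly Earth gravity
```

A one-metre rod can be rigidly pushed at an absurd ~9×10^16 m/s², while a light-year-long rod is already capped near ordinary Earth gravity.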
More generally, it is impossible to find a contradiction between a black hole horizon and the EP, because the near horizon metric is Rindler, up to curvature corrections which are arbitrarily small, so it is equivalent to a flat space, and there is no thought experiment which can refute this in a black hole which doesn't work in flat space just the same.
-
For completeness' sake: this answer of course only talks about large black holes. Smalish ones have arbitrarily large curvature on the horizon. – Marek Aug 20 '11 at 11:46
This answer doesn't answer the question. It doesn't refute the OP's noted difference between X and an inertial frame, a difference that contradicts the EP. An accelerating frame doesn't have the same horizon as a black hole, the OP shows. You can't assume such sameness in the answer, when the OP refutes that sameness. – finbot Aug 23 '11 at 6:07
I just fixed that. The OP doesn't show any violation, but has just noticed the fact that rigid bodies in special relativity can't accelerate rigidly with arbitrarily great acceleration. There is a nice exposition of this by Bell in the special relativity paper in "Speakable and Unspeakable in Quantum Mechanics". – Ron Maimon Aug 23 '11 at 18:18
Nothing is accelerating in the OP, so Bell's spaceship paradox doesn't apply here. The rod is defined to be free-floating, i.e. freely falling. (No gravitational acceleration either, since the tidal force is defined to be negligible. As measured in frame X, the rod's velocity is constant, as is the particle's.) As noted in the OP, the rod cannot be passing the particle in the outward direction, as it could in a true inertial frame. That's a violation of the EP. This violation doesn't require that the rod completely pass the particle. As to your last paragraph see "Rindler" in the blog. – finbot Aug 26 '11 at 8:05
Um--- you set up a thought experiment where one particle is outside, and the other is inside the horizon. Then you look at it in a free-falling frame. In the free-falling frame, the outside-particle must be accelerating fast, and has a Rindler horizon, while the inside-particle passes this Rindler horizon. Whatever happens near a black hole in terms of communication loss happens exactly the same way for a particle accelerated in a freely falling frame. That's because the near-horizon geometry is flat, and you can apply the equivalence principle there, as you say. – Ron Maimon Aug 26 '11 at 8:48
Sagittarius A* has been confirmed to be a black hole, and many others have been discovered: by observing the movement of stars around Sagittarius A* over many years, astronomers have found ballistic trajectories that can only be explained by a deep gravitational well (millions of solar masses) exerting a major influence in what appears to be an empty spot. It is useful to remember that the existence of black holes has been confirmed by astronomers, and that if your thought experiment somehow precludes their existence, the problem lies with your thinking.
I think the biggest issue you're having is with your treatment of this "rigid rod" as something that could actually physically exist. Any rod in this universe is made up of atoms, and its rigidity and elasticity are entirely the result of electromagnetic forces between the atoms in the rod. Therefore, saying that the "rod" is half-inside and half-outside the event horizon is only saying that half of the rod's constituent atoms are inside the event horizon and the other half are outside of it. The rod that you call upon in your thought experiment seems to have properties that are not of this universe.
The EH is not a physical boundary, and if the black hole is large enough, tidal forces will be negligible on infalling matter when crossing the EH. From the reference frame of the infalling matter, it would not experience being instantaneously teleported from one side of the EH to the other nor any of the other effects you've suggested. The matter will continue on a ballistic trajectory orbiting the center of mass; a ballistic trajectory that will never take it outside of the horizon (by definition), sure, but upon crossing the horizon - if our object was a guy in a space suit - he'd have no indication that he'd crossed the event horizon (except perhaps that his radio to home base has stopped working).
Importantly, the astronaut (or the rod) would not be stretched into infinity or torn apart at the EH; a black hole was recently discovered that has a radius of 4 light-days, and a density that is far below the density of Earth's atmosphere at sea level, for example. Our astronaut would pass the horizon of that black hole sedately, and would asphyxiate, die and freeze before ever encountering any forces strong enough to cause even mild discomfort, even at Voyager-1 speeds headed directly at the singularity.
-
As this is a Bounty question (and because my other answer has a long set of comments) I have decided to add another answer. This answer is somewhat different from other answers provided, although it is consistent with them. There has been some challenge to understanding the other answers, and there may be some difficulty in understanding this answer too, but I shall record it here for those interested in this physical scenario.
In short: there is a contradiction in this physical scenario. The nature of this contradiction and its consequences I shall discuss at the end.
Let us consider the two physical assumptions which make up the scenario:
(A) A particle on a trajectory starting just above an Event Horizon, this trajectory being a timelike trajectory leading to timelike infinity (in an asymptotic model, say);
(B) A rigid rod (of length L, say) straddling - i.e. "half-in" and "half-out" of - the Event Horizon at some time t (in some appropriate coordinates).
The scenario continues by discussing Frames and so on, but this answer does not depend on anything else. Let us examine each of these assumptions a little more carefully.
(A) Is it possible for such a trajectory to exist? It depends on the Black Hole metric, but for the Schwarzschild metric, if the particle has energy greater than some minimum E(R), then it may escape the region. If its energy and angular momentum are less than this value, it may orbit the Black Hole, or may directly plunge in. So let us assume that we are dealing with a Schwarzschild metric and that the particle can be given sufficient energy to follow the escaping trajectory.
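The minimum-energy condition comes from the standard Schwarzschild effective potential per unit rest mass, $V(r)^2 = (1 - 2M/r)(1 + L^2/r^2)$, where $L$ here denotes the orbit's specific angular momentum (not the rod length). For purely radial motion ($L = 0$) the potential stays below 1 everywhere outside the horizon, so any conserved energy $E \geq 1$ suffices to escape. A quick numerical check (units $G = c = 1$; the sample radii are arbitrary):

```python
import math

def V_eff(r, M=1.0, L=0.0):
    """Schwarzschild effective potential per unit rest mass, units G = c = 1.
    L is the specific angular momentum of the orbit."""
    return math.sqrt((1 - 2 * M / r) * (1 + L**2 / r**2))

# Radial case (L = 0): the potential rises monotonically toward 1 from below,
# so a particle with conserved energy E >= 1 meets no barrier and escapes.
radii = [2.001, 3.0, 10.0, 100.0, 1e6]
vals = [V_eff(r) for r in radii]
print(vals)
```

With nonzero angular momentum the same function exhibits the familiar centrifugal barrier, which is what the "minimum E(R)" in the text refers to.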
Now let us consider (B) in more detail:
Is (B) possible? I claim that (B) is an inconsistent assumption. I shall outline a proof below.
First we need to return to the trapped surface property of an Event Horizon: a particle P is inside an Event Horizon if every trajectory leads to the Singularity. So now consider the rod. This rod is "rigid" in some sense, although we do not require any properties of "rigid" in this proof - so it could be (normal matter) elastic. However it is simpler to assume a regular model of a rigid rod of length L (much smaller than R=2M, say). Consider two points on the rod at time t: P is inside the Event Horizon and Q is outside. Since Q is outside there exists, by definition, some trajectory $\gamma$ such that $\gamma$ does not lead to the Singularity.
Now let the rod trajectory be such that point Q follows the $\gamma$ trajectory. The point P necessarily will follow a trajectory leading to the Singularity and so the proper distance PQ will extend to at least R = 2M in finite proper time. Thus the rod will break, and so the rod was not rigid as assumed but composed of at least two separate components (this is Hawking radiation!). So we have a contradiction as we assumed that the rod was rigid. So assumption (B) is inconsistent with General Relativity and cannot be used in any thought experiment.
A first objection might be via a "fluid model" intuition of the Event Horizon which allows a rod to be outside, then part straddling, then maybe entirely contained in the "fluid". But this intuition is not valid here: either the rod is or is not in the Event Horizon.
A second objection might be that this implies that the Event Horizon (fluid) has "moved" superluminally, and this is not possible. The explanation is that the Event Horizon is not a local physical object and is not constrained by the restrictions of Special Relativity. In fact it is a Global General Relativity object with counterintuitive properties: discontinuity and achronicity (see Hawking and Ellis 1973) and, as shown here, superluminality.
We can now understand in outline the paradox that the original question identified with Inertial Frames. These are local objects in which a Global GR entity was analysed - but any attempt to account for the behaviour of a global object by purely local analysis will result in the sort of contradictions and paradoxes that the question has uncovered.
EDIT ADD FOR CLARITY:
There is a further objection to the stark conclusion of this answer, that can be understood in terms of further thought experiments. I shall discuss these and how they relate to the original question.
Let us assume that the rod is actually a long spacecraft with a removable capsule at the top. If the lower part of the spacecraft is within the EH, then the capsule might be fired off, escaping to infinity. Thus in this case it is not true that the entire spacecraft either is or is not contained within the EH, and the EH fluid model somewhat applies. Of course then the rod is not actually rigid as we assumed, so our conclusion is still valid, but only just. We could generalise this scenario to a spacecraft with N modules. A further generalisation simply assumes that the matter of the rod is such that at any distance along its length an explosion can occur (perhaps caused by a striking antiparticle) which would cause the rod to split and the top part to escape (to infinity). In this case (and within the modelling approximation) it would be appropriate to consider a fluid-like model for the EH and thus talk about the rod "straddling" the Horizon.
However this "straddling" model assumes a wider physical scenario than did the original thought experiment, which merely considered the rod as an inert object, and certainly did not consider explosions, the quantum matter in the rod, and colliding particles which might happen (occasionally) in realistic physical situations. When these other factors are present one can discuss "straddling" rods: in the bare model as presented the "straddling" concept becomes inconsistent as discussed.
So in this wider sense the answer to the part of the thought experiment about how the upper straddling rod could in principle be detected in frame X apparently escaping to infinity faster than a nearby particle, is that experiments in X will have detected that the rod has been in some form of explosion near the EH (unlike any non-EH straddling rod which happened also to be escaping). However the original thought experiment was so limited (to a rigid rod idealisation) that "explosions" and the like cannot occur; as a consequence "straddling" cannot occur either.
-
It seems that your "the Event Horizon is ... not constrained by the restrictions of Special Relativity" is in contradiction with the EP. The EP does apply to a small, freely falling frame that falls through the horizon of a black hole (SR applies in such frame). You can't refute the thought experiment by noting that the rod can't be passing outward through the horizon. That is a premise of the thought experiment. You can refute the experiment (or any purportedly sound argument) only by showing that either one of its premises are false or that its conclusion doesn't follow from its premises. – finbot Mar 8 '11 at 5:01
Yes the EH is not (local) physical and so the EP indeed does not apply.(In this sense your overall intuition is correct.) Premise (B) is a premise of the thought experiment however, which although one might initially think is allowable, is shown here to be false ie inconsistent with GR. Thus your argument has led from a false (ie unusable) premise to a conclusion and is thus logically invalid. Read the answer again at least once before replying - I am saying some quite surprising things in this answer. – Roy Simpson Mar 8 '11 at 10:53
@finbot : I am not disagreeing with the EP (which is not mentioned in this answer). Indeed the EP applies at the Horizon, but not to the Horizon itself. It is the (dynamic) Horizon itself which could be said to violate SR. In fact I think that your real argument here is that this thought experiment shows that the (dynamical) Horizon violates SR (which is itself surprising) rather than GR. The fact that you don't care about the smallness of the EP Lab also shows that you are really testing SR itself with the Horizon - and finding something wrong (as have I in this answer). – Roy Simpson Mar 8 '11 at 22:30
To take this further you need to understand the difference between SR and GR. Understanding the non-SR behaviour of the Horizon (from other examples which are known) would also help. As a matter of fact I have found a different thought experiment (before this) which also suggests that Black Holes violate SR! – Roy Simpson Mar 8 '11 at 22:33
show 1 more comment
I also would like to point out that the notion of "escaping to infinity" violates the locality of the equivalence principle, as it requires an infinite amount of time. That, too, is not a local probe of the gravitational field, as it depends on all the dynamics of the spacetime between the probe point and the time the particle reaches infinity.
-
I covered that above, and in my answer. The -ing suffix indicates action, which occurs locally too. I can be moving to France even when I haven't yet left Spain. Escaping to infinity is something the particle is doing wholly within X, which is arbitrarily small in spacetime. You'd be correct if the thought experiment instead said "escaped to infinity". – finbot Mar 8 '11 at 1:58
@finbot: how do you detect the "escaping"? You have to allow a sufficient amount of time to pass, and measure a sufficient amount of separation. The weaker you make the local field, the more time it takes to see a signal of size X. Either way, you violate locality. – Jerry Schirmer Mar 8 '11 at 2:52
The "escaping" doesn't have to be detected; it's a given. The particle is let to be escaping to infinity. Anything that is possible in principle can be let in a thought experiment. – finbot Mar 8 '11 at 4:51
@finbot: then, you're sensitive to any variation in the gravitational field that occurs in <b>any</b> finitely sized area, and the only local frame is a single point--you can find gravitational forces on any finite sized object if you're taking your thought experiment to that extreme. – Jerry Schirmer Mar 8 '11 at 14:42
I've covered that objection in comments here and in my answer. While the EP is strictly true only at a point in spacetime, it's testable in a larger frame. It is tested in labs larger than a point, and otherwise it'd be outside the realm of science, for all ideas must be testable in science. There is a tidal force in a frame larger than a point, but that's completely ignorable when it has no effect on the outcome of the experiment. See my comments to Jerry Schirmer in my answer below, to see why the tidal force has no effect on the outcome of the thought experiment. – finbot Mar 8 '11 at 16:09
show 10 more comments
It is impossible to have part of the rigid rod being under the horizon and part above.
Mathematically, on the horizon itself the speed of the rod's movement becomes the speed of light. This means the linear contraction of the rod's length to zero for a stationary observer. Thus the rod crosses the horizon WHOLE AT ONCE, and after this moment its speed becomes greater than the speed of light.
Of course this also means that the rod cannot be made of substance or carry any information, since information cannot be transferred faster than light. (In the actual world the BH will evaporate earlier than any object can approach the horizon; to cross the horizon one would have to move towards it faster than light, with only light rays in theory able to reach the horizon exactly at the last moment of the BH's existence.)
-
Evaporating horizons recede with finite local speed. A simple penrose diagram for a evaporating horizon will show you that the horizon forms a timelike surface in the enveloping spacetime. – Jerry Schirmer Apr 22 '11 at 22:21
@Jerry Schirmer I said exactly the same: the horizon is a timelike surface, so you can only cross it at once with your whole ship, like crossing a moment in time. You cannot have part of your ship under the horizon and part above. – Anixx Apr 22 '11 at 22:28
Sure you can. Just not for an infinite time. The average doorway forms a timelike surface in space, after all. The null expansions or the global structure of the spacetime are the things that make the horizon special. Not the fact that it is a timelike (or null, in the case of non-expanding/contracting horizons) surface. – Jerry Schirmer Apr 22 '11 at 23:01
"Sure you can. Just not for an infinite time" - completely incorrect statement if the spaceship is rigid and forms a connected body. – Anixx Apr 22 '11 at 23:44
@Anixx: solve the geodesic equation in non-singular coordinates yourself. The result is that the tidal force on a finite sized object at the horizon is proportional to $\frac{M}{r^{2}}$, which is finite, and can be made arbitrarily small for an arbitrarily large object. You aren't taking the light-cone tipping effect into consideration, and you really have to in this case. – Jerry Schirmer Apr 23 '11 at 0:22
show 13 more comments
The answer by dbrane nearly says what I am going to say but not as clearly and shortly as I will.
The OP states the Equivalence Principle as
« The principle says that the laws of physics in any sufficiently small, freely falling frame are the same as they are in an inertial frame in an idealized, gravity-free universe. »
This is wrong. This mistake has nothing to do with black holes; it's always wrong for any neighbourhood no matter how small it is. It's only true for a point, not for a neighbourhood. Or, to put it another way, it can be approximately true, up to first order, in a sufficiently small neighbourhood. But it can never be exactly true except at a point (unless the gravitational field is of a rather special kind, and even the Earth's field makes this impossible).
Mathematically, the values of the Christoffel symbols can be zeroed out at one point by an appropriate choice of coordinates, but they cannot be made zero for a neighbourhood no matter how small it is.
What the principle of equivalence says is that you cannot tell the difference between a gravitational field and a pseudo-force due to your choice of coordinates. It does not say you can find coordinates that makes the gravitational force zero. But you can find coordinates that make it zero at one point.
Now although I know nothing about black holes, I have to point out that if you fix what level of approximation you desire, and choose a small neighbourhood which is sufficiently small that within that tolerance there is a frame which is close to being an inertial frame, the required smallness of the neighbourhood might change with time. With very violent dynamics, the needed smallness might shrink indefinitely, and if there were a singularity, it might be the case that no degree of smallness was sufficient for the wished-for tolerance, and this would not violate the EP.
-
-1: I am sorry, but you are repeating the wrong answer by DBrane. The scale of violations of EP is the inverse curvature scale, which can be made arbitrarily large compared to the distance to the horizon. It is not a mistake to apply EP to a patch which includes the horizon, but you need to realize the horizon coordinates make a Rindler horizon in this patch. The "required smallness of the neighborhood" is bigger than the domain of the thought experiment, so the resolution does not come from this line of thinking. I have to downvote, because this is patiently explained in my answer + comments – Ron Maimon Jan 17 '12 at 16:50
I carefully said « might », and my point is that it is incumbent on the OP to do an analysis of the validity of the approximation used in his thought experiment...to check whether this happens or not. It is not incumbent on me to do it for the OP! But it was incumbent on me to point out the mistake about what the EP is and what it is not. – joseph f. johnson Jan 17 '12 at 17:13
@RonMaimon My answer is not a repetition of dbrane's excellent answer, since I isolate the typical student confusion about the EP and correct it more clearly, and since I qualify the other point with a « might.» I answer the OP since obviously the OP's mistake about the EP has to be fixed first, and then the analysis of the limits of the validity of the approximation has to be made...by the OP. – joseph f. johnson Jan 17 '12 at 18:07
DBrane's answer is (atypically) not excellent, it is wrong. It is also absurdly upvoted, and grossly misleading. This is why I had to vote down both (sorry). Saying might doesn't help--- you need to check whether it is or is not to answer the question, or determine that it is unanswerable (it isn't). If you didn't do this homework, the answer is not satisfactory, it doesn't answer the question. Anyway the OP is a crackpot, and there is no point in communication. – Ron Maimon Jan 17 '12 at 21:52
@dbrane I do not know enough about black holes to contradict Mr. Maimon, but I do know that the OP was making a serious mistake about the EP, and that the OP needs to double-check the validity of the usual approximation if they want to extend it to monkeywrenches and event horizons.... – joseph f. johnson Jan 17 '12 at 21:55
show 4 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9455265402793884, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-math-topics/163519-applying-eulers-formula-polyhedron-g-print.html
|
# Applying Euler's Formula to polyhedron G
• November 16th 2010, 09:56 PM
MacstersUndead
Applying Euler's Formula to polyhedron G
Let G be a polyhedron (or polyhedral graph), each of whose faces is bounded by a pentagon or a hexagon.
(i) Use Euler's formula to show that G must have at least 12 pentagonal faces
(ii) Prove, in addition, that if there are exactly three faces meeting at each vertex, then G has exactly 12 pentagonal faces.
--
A corollary to Euler's Formula states that for a polyhedral graph with n vertices, m edges, and f faces, that $n - m + f = 2.$
I also know that since each face is bounded by at least 5 edges, counting up the edges around each face gives $5f \leq 2m$ (each edge bounds two faces), but then I get
$5(2-n+m) \leq 2m$
$10 - 5n + 3m \leq 0$
but I can't see how this helps. Am I doing something wrong? Any help on (i) will be appreciated. After that, I'll try (ii) on my own.
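For concreteness, the counts in part (ii) can be sanity-checked on two standard examples (a numeric check I'm adding, not a proof): in the 3-regular case, combining $3n = 2m$, $n - m + f = 2$ and $2m = 5p + 6h$ (with $p$ pentagons and $h$ hexagons) forces $p = 12$ exactly.

```python
# Sanity check on two concrete 3-regular polyhedra: the regular
# dodecahedron (12 pentagons) and the truncated icosahedron
# ("soccer ball", 12 pentagons + 20 hexagons).
solids = {
    "dodecahedron":          {"p": 12, "h": 0},
    "truncated icosahedron": {"p": 12, "h": 20},
}
for name, faces in solids.items():
    p, h = faces["p"], faces["h"]
    f = p + h                  # total faces
    m = (5 * p + 6 * h) // 2   # each edge borders exactly two faces
    n = 2 * m // 3             # 3-regular: every vertex meets 3 edges
    assert n - m + f == 2      # Euler's formula holds
    # 6f - 2m = (6p + 6h) - (5p + 6h) = p, and Euler forces 6f - 2m = 12:
    assert 6 * f - 2 * m == p == 12
    print(f"{name}: n={n}, m={m}, f={f}, pentagons={p}")
```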
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609054923057556, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/234235/maybe-things-can-be-divided-by-zero/234462
|
# Maybe Things Can be Divided by Zero
In the expression $$\frac{z^2-1}{z-1}$$ $z$ can not be equal to $1$.
However $$\begin{align} \frac{z^2-1}{z-1}&=\frac{(z-1)(z+1)}{z-1}\\ &=(z+1) \end{align}$$ So then if $z$ is equal to $1$ we have $$\frac{z^2-1}{z-1}=2$$
Can someone explain that please?
-
2
You have just solved a limit without even knowing it! – Joel Nov 10 '12 at 23:27
## 8 Answers
The point at $z = 1$ is called a "removable singularity", and this process "removes" it. In general, a function having a limit at a point but not defined there can be extended to a function continuous at that point in only one way. That's what happens here. $\frac{z^2 - 1}{z - 1}$ is defined and continuous everywhere except $z = 1$, and has a limit at $z = 1$. $z + 1$ is defined and continuous everywhere, and equals $\frac{z^2 - 1}{z - 1}$ wherever both are defined. So then it must be that unique extension.
NB: It's worth noting that as other posters have pointed out, this equation is not technically correct since one expression is undefined at the point and the other isn't, but the procedure you used gives the unique extension just mentioned.
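As a numeric illustration of this unique extension (a sketch I'm adding, not part of the original answer):

```python
# Evaluate f(z) = (z**2 - 1)/(z - 1) at points approaching z = 1.
# The values approach 2 -- the value of the unique continuous
# extension z + 1 -- even though f itself is undefined at z = 1.
def f(z):
    return (z**2 - 1) / (z - 1)

for eps in (1e-1, 1e-3, 1e-6):
    print(f(1 + eps), f(1 - eps))   # tends to 2 from both sides

try:
    f(1)                            # the removable singularity itself
except ZeroDivisionError:
    print("f is undefined at z = 1")
```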
-
Let $f(z) = \frac{z^2-1}{z-1}$ and $g(z) = z+1$ then $f$ is analytic on $\mathbb C \setminus \{1\}$ and $g$ on $\mathbb C$.
A theorem of complex analysis says that if two functions coincide on an open set then they are equal everywhere that they are both defined.
This implies that there is a unique way to continue a function (such as $f$) onto a larger domain if it shares some overlap with another function (such as $g$).
This justifies continuing $f$ to a function on the whole of $\mathbb C$ by defining $f(1) = 2$.
-
Can you please explain what "analytic" means in this context? – Gineer Nov 10 '12 at 16:46
@Gineer, it's a particularly good type of function that is studied in complex analysis. The point is, if we are only thinking about these types of functions there is a unique way to define $f(1)$. – sperners lemma Nov 10 '12 at 16:54
@spernerslemma I don't think the OP is referring to complex numbers with the symbol $z$. – Nameless Nov 10 '12 at 17:37
4
@Nameless: Even so, the argument carries over by replacing '$\mathbb{C}$' by '$\mathbb{R}$', 'analytic' by 'continuous' and ignoring the second paragraph (which is far more general a result than is needed). I +1'd this answer because it emphasises that $z \mapsto (z^2-1)/(z-1)$ and $z \mapsto z+1$ are different functions (in intension, at least). – Clive Newstead Nov 10 '12 at 21:33
@CliveNewstead Of course it carries over to the real numbers. My problem is this answer seems to be well above the mathematical maturity level of the OP, so for him it is useless. – Nameless Nov 11 '12 at 12:54
This step $\begin{align} \frac{(z-1)(z+1)}{(z-1)}&=z+1 \end{align}$ is only valid when $z \not=1$.
-
sigh...........?!? I'm sure if I understood math, I could actually love it, but so far it eludes me. :-( – Gineer Nov 10 '12 at 16:20
But I could start with f(x) = x, which is defined everywhere, and expand it by x/x to f(x) = x^2/x. This should suddenly not be defined at x = 0? Isn't it so that a function is defined everywhere when the numerator has a higher power than the denominator? Or the other way round: if I take the limit z -> 1 in the question, I think this would be allowed and lead to the result 2. – Foo Bar Nov 10 '12 at 16:53
3
@Foo Bar If somebody defines a function by saying $f(x) = \frac{x^2}{x}$, then they haven't defined their function carefully because they haven't specified the domain of the function. But assuming they have not specified their domain, the convention is to assume they meant all real numbers where this expression is defined, which would not include $0$. – littleO Nov 10 '12 at 21:35
1
@Gineer think of learning math like learning swimming. You cannot fight the water, you gotta accept it for what it is and work with it. Math has a super steep learning curve and you gotta be patient and work through a lot of problems to get a sense of what might be going on. Solving lots and lots of problems yourself is the only way to learn math. – Amatya Nov 11 '12 at 1:19
Rather than making a statement like \begin{align*} \frac{z^2 - 1}{z-1} = z + 1 \end{align*} without saying what $z$ is, you should make a more careful statement like this:
If $z \in \mathbb{R}$ and $z \neq 1$, then $\frac{z^2 - 1}{z - 1} = z + 1$.
It wouldn't make sense to plug in $z = 1$, because that equation is not even true if $z = 1$.
-
In this case $z = 1$ is a hole!
-
So are you saying the denominator is equal to the numerator and therefore it's just 1? In that case, it's still not equal to 2 though? – Gineer Nov 10 '12 at 16:15
1
@Gineer: No, LJym89 is saying that the function is undefined at $z=1$, so its graph has a hole there. – Brian M. Scott Nov 10 '12 at 16:19
O yea... whole $\ne$ hole. I think it's getting late. Thanks for mentioning it though; it makes sense if I think of it graphically... Is that true though? Because if you simplify the equation, z can be equal to 1. The two equations are equal to each other then. – Gineer Nov 10 '12 at 16:28
In this type of case, for any value of z other than 1, the terms will cancel out and the function will behave like z+1. Notice that when z=1, the function is undefined. That means that that value can't be in the domain. We call it a hole.
To try and understand why you still cannot divide by 0 even in cases like this, try to graph out the function $f(x) = \frac{1}{x}$ and plug in values near 0 from the left and the right. Try values in this interval $0 \leq x \leq 1$ and notice what happens as you get closer to 0. After you check this by hand, go to wolfram or your calculator and look at what happens to verify. I hope this helps.
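That experiment takes only a few lines (a sketch I'm adding):

```python
# Probe f(x) = 1/x at points approaching 0 from the right and the left.
# The magnitudes grow without bound and the signs disagree across 0, so
# no choice of value at 0 could make 1/x continuous -- unlike the
# removable hole of (z**2 - 1)/(z - 1) at z = 1, where both sides
# approach the same finite value 2.
for eps in (0.1, 0.01, 0.001):
    print(f"1/{eps} = {1/eps:8.1f}    1/-{eps} = {1/-eps:9.1f}")
```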
-
1
But it can be in the domain if you just simplify the equation, no? – Gineer Nov 10 '12 at 16:44
1
Well sure, but you have to treat this type of problem as a "I know what you were" I suppose. If I were to allow myself to simply divide it away I would lose the possibility of holes. Also, looking at the undivided part written out it is clear division by 0 could happen for a particular value of z. These types of functions are interesting, therefore we consider them in the way we described where we choose not to forget original candidates for discontinuity. – only_gad Nov 10 '12 at 21:13
Algebraists will talk about things like $\mathbb C(z)$, the "field of rational functions over $\mathbb C$", and in that field it is indeed the case that $\frac{z^2-1}{z-1} = z+1$, but this doesn't say anything about being able to evaluate the function defined by $f(z) = \frac{z^2-1}{z-1}$ at $1$. What it does say is that $z^2-1 = (z-1)(z+1)$.
-
2
It is structures like $\mathbb{C} (z)$ that best capture our intuitions about manipulating algebraic expressions: there is an obvious sense in which $\frac{z^2 - 1}{z-1} = z + 1$ and it should not have to involve taking limits! – Zhen Lin Nov 10 '12 at 20:28
By writing $z-1$ in the denominator you are already implicitly assuming that $z \neq 1$. Even if you derive the expression $z+1$ or anything else from the previous one, the assumption still holds that $z \neq 1$.
What you showed above holds if and only if $z \neq 1$. Conversely, if $z=1$ it does not hold. In the case that $z=1$, the expression $\frac{z^{2}-1}{z-1}$ is undefined, so it cannot be reduced to anything.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547926187515259, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/67123/representation-of-integers-as-the-product-of-linear-forms-in-three-variables
|
## representation of integers as the product of linear forms in three variables
I would like to find all integer triples (x,y,z) such that: $\prod_{\theta}(x + y \theta + z \theta^2)=1$, where $\theta$ runs through the solutions to the cubic $x^3 + x^2 - 2x - 1=0$.
In his book "Diophantine equations" (p. 111-12) Mordell gives the equation
$w^n = \prod_{\theta}(x + y \theta + z \theta^2)$
where $\theta=\theta_1, \theta_2, \theta_3$ are the solutions to a cubic equation with integer coefficients.
Mordell's partial solution is,
$x + y\theta_1 + z \theta_1^2 = (p + q\theta_1 + r\theta_1^2)^n$,
$x + y\theta_2 + z \theta_2^2 = (p + q\theta_2 + r\theta_2^2)^n$,
$x + y\theta_3 + z \theta_3^2 = (p + q\theta_3 + r\theta_3^2)^n$,
$w = \prod_{\theta}(p + q \theta + r \theta^2)$
where $p,q,r$ are arbitrary integers and $n$ runs through the integers.
He continues to say, "the general solution depends upon the theory of algebraic numbers and is connected with the units in an algebraic number field".
-
So: you want to find all triples $(x, y, z)$ of integers such that $\prod_{\theta} (x + y\theta + z \theta^2) = 1$, where $\theta$ runs through the solutions to your cubic? (I think Mordell is trying to do something else much harder -- for him w is not part of the question; he's solving for the quadruple $(x, y, z, w)$.) – David Loeffler Jun 7 2011 at 14:25
David, thank you for your reply. Indeed your restatement of the problem is correct. It may be less difficult than the problem he tries to solve. I looked at Mordell's particular solution as only a particular solution to my problem. – Tom Hunt Jun 7 2011 at 15:10
In your example $1,\theta,\theta^2$ generate the maximal order, so the equation is simply the norm equation whose solutions are exactly the units of the field generated by $\theta$. The field is totally real, hence has unit rank 2, and the group of units is generated by: {$-1,\theta+1,\theta^2-1$}. So the solutions are those such that $p+q\theta+r\theta^2=\pm (\theta+1)^a(\theta^2-1)^b$, for any integers $a,b$. (all computations were done using Sage) – Dror Speiser Jun 7 2011 at 15:29
Dror: You have to restrict to norm 1 units, cf. my answer below, which crossed over with yours. – David Loeffler Jun 7 2011 at 15:45
Dror, thank you for your response. – Tom Hunt Jun 7 2011 at 16:09
show 1 more comment
## 1 Answer
This isn't really a research-level question, and hence belongs more on math.SE than here, but here goes anyway...
Let $K$ be the number field $\mathbb{Q}(\theta)$ where $\theta$ is a root of your cubic $f$. Then the ring of integers $\mathcal{O}_K$ of $K$ is $\mathbb{Z}[\theta]$, and we are reduced to looking for elements $\omega \in \mathcal{O}_K^\times$ such that $\operatorname{Norm}_{K/\mathbb{Q}}(\omega) = 1$. It takes about 0.01 seconds for my computer to tell me that $\mathcal{O}_K^\times$ is isomorphic to $\mathbb{Z}^2 \times \mathbb{Z}/2$, and the index 2 subgroup of units of norm 1 is isomorphic to $\mathbb{Z}^2$, with independent generators $\theta-1$ and $-\theta-1$.
So the general solution is $$x + y \theta + z \theta^2 = (\theta - 1)^a (-\theta-1)^b$$ for any $a, b \in \mathbb{Z}$.
(If you're unfamiliar with the theory of algebraic number fields I'm using here, Stewart and Tall's book is a good reference.)
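As a floating-point cross-check of the generators' norms (my addition; the exact computation above was done with a CAS), one can use the classical fact that the three real roots of this cubic are $2\cos(2\pi k/7)$ for $k=1,2,3$:

```python
import math

# The norm of an element of K is the product of its images under
# theta -> each root of f(x) = x**3 + x**2 - 2x - 1.  Here we verify
# numerically that the claimed generators theta - 1 and -theta - 1
# both have norm 1 (a cross-check, not a replacement for the exact
# computation of the unit group).
roots = [2 * math.cos(2 * math.pi * k / 7) for k in (1, 2, 3)]

# f really vanishes at these points (up to rounding):
assert all(abs(t**3 + t**2 - 2*t - 1) < 1e-12 for t in roots)

norm1 = math.prod(t - 1 for t in roots)    # N(theta - 1)
norm2 = math.prod(-t - 1 for t in roots)   # N(-theta - 1)
print(round(norm1, 10), round(norm2, 10))  # 1.0 1.0
```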
-
David, thank you for the response, I will refer to Stewart and Tall. – Tom Hunt Jun 7 2011 at 16:08
It's not research-level for a number theorist, but any other type of mathematician could come across such a problem, and would likely have no idea how to solve it, and would have difficulty making sense of our literature. – Kevin O'Bryant Jun 12 2011 at 16:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441463947296143, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/experimental-physics+gravity
|
# Tagged Questions
3 answers · 338 views
### How to measure force of impact inside container?
I am in 7th grade and for my science fair project, I need a way to measure the force on a dropped object when it hits the ground. What I am trying to determine is which packing materials provide the ...
2 answers · 178 views
### why there is no accuracy of the measured value of $G$?
With the advancement of Modern Technology still there is no accuracy of the measured value of $G$ Gravitational Constant, why!?
2 answers · 186 views
### How does one get the value of acceleration of gravitation on earth accurately by experiment without electronic device?
How does one get the value of acceleration of gravitation on earth accurately to 5 significant digits by experiment without electronic device?
5 answers · 937 views
### The square in the Newton's law of universal gravitation is really a square?
When I was in the university (in the late 90s, circa 1995) I was told there had been research investigating the $2$ (the square of distance) in the Newton's law of universal gravitation. ...
7 answers · 875 views
### What is the proof that the universal constants ($G$, $\hbar$, $\ldots$) are really constant in time and space?
Cavendish measured the gravitation constant $G$, but actually he measured that constant on the Earth. What’s the proof that the value of the gravitation constant if measured on Neptune would remain ...
3 answers · 181 views
### Could we prove that neutrinos have mass by measuring their gravitational signature?
It is now said that neutrinos have mass. If an object has mass then it also emits a gravitational field. I appreciate the neutrinos mass is predicted to be small, but as there are so many produced ...
1 answer · 78 views
### Gravity waves detectors; are they all similar?
Are the gravity waves detectors all working on the same principle/effect ?
1 answer · 530 views
### Escape velocity of a rocket standing on Ganymede (Moon of Jupiter)
I want to calculate the escape velocity of a rocket, standing on the surface of Ganymede (moon of Jupiter) and trying to leave Ganymede. My thinking was, the kinetic energy $E_{\text{KIN}}$ must be ...
2 answers · 83 views
### experimental bounds on spacetime torsion
Did Gravity Probe B provide any bounds on Einstein-Cartan torsion? is a non-zero torsion value at odds with the results regarding frame-dragging and geodetic effects?
2 answers · 187 views
### Question about gravity probe B
I have a question about the gravity probe B experiment. According to this site: http://science.nasa.gov/science-news/science-at-nasa/2011/04may_epic/ The measurements they made confirm Einsteins ...
2 answers · 642 views
### Active gravitational mass of the electron
In PSE here electrons are added to a sphere and gravitational modifications are expected. My question is: Is there any experiment that show that a negatively charged object is source of a stronger ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292541146278381, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/38730/transmission-of-gaussian-beam-through-graded-index-slab?answertab=oldest
|
# Transmission of Gaussian Beam Through Graded-Index Slab
The $ABCD$ matrix of a glass graded-index slab with refractive index $n(y)=n_0(1-\frac{1}{2}\alpha^{2}y^{2})$ and length $d$ is $A=\cos(\alpha d)$, $B=\frac{1}{\alpha}\sin(\alpha d)$, $C=-\alpha \sin(\alpha d)$, $D=\cos(\alpha d)$ for paraxial rays along the z axis. Usually, $\alpha$ is chosen to be sufficiently small so that $\alpha^{2}y^{2} << 1$. A Gaussian beam of wavelength $\lambda_0$, waist radius $W_0$ in free space, and axis in the z direction enters the slab at its waist. How can I use the $ABCD$ law to get an expression for the beam width in the $y$ direction as a function of $d$?
-
## 1 Answer
The ABCD law can be used for Gaussian beam propagation via the complex beam radius $q$. Defining $\frac{1}{q} = \frac{1}{R}-i\frac{2}{kW^2}$, with $R = R(z)$ being the radius of curvature of the beam, $W = W(z)$ the halfwidth at point $z$, and $k = 2\pi/\lambda_0$, the complex beam radius transforms as $q \to \frac{Aq+B}{Cq+D}$. In your case the waist is at the beginning of the medium, so the radius of curvature there is infinite and $q = ikW_0^2/2$ at the front of the medium.
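Numerically, the recipe can be sketched as follows (a sketch I'm adding; the parameter values are illustrative, and $k = 2\pi/\lambda_0$ follows the convention above). Splitting $1/q_2$ into real and imaginary parts yields the closed form $W(d)^2 = W_0^2\cos^2(\alpha d) + \left(\frac{2\sin(\alpha d)}{kW_0\alpha}\right)^2$, which the code checks against the ABCD result:

```python
import math

# Illustrative parameters (my choice, not from the question).
lam0, W0, alpha = 1.0e-6, 1.0e-4, 50.0   # wavelength, waist, GRIN strength
k = 2 * math.pi / lam0

def width(d):
    """Beam halfwidth W(d) after a distance d in the graded-index slab."""
    A, B = math.cos(alpha * d), math.sin(alpha * d) / alpha
    C, D = -alpha * math.sin(alpha * d), math.cos(alpha * d)
    q1 = 1j * k * W0**2 / 2              # waist at the entrance: R = infinity
    q2 = (A * q1 + B) / (C * q1 + D)     # ABCD law for the complex radius
    # From 1/q2 = 1/R2 - 2i/(k W2**2), the imaginary part gives W2:
    return math.sqrt(-2 / (k * (1 / q2).imag))

# Closed form from splitting 1/q2 into real and imaginary parts:
#   W(d)**2 = W0**2 cos(a d)**2 + (2 sin(a d) / (k W0 a))**2
for d in (0.0, 0.005, 0.013, 0.05):
    exact = math.sqrt((W0 * math.cos(alpha * d))**2
                      + (2 * math.sin(alpha * d) / (k * W0 * alpha))**2)
    assert abs(width(d) - exact) < 1e-12 * max(exact, 1.0)
    print(f"d = {d:5.3f} m  ->  W = {width(d):.6e} m")
```

At $d = 0$ this returns $W_0$, and at a quarter pitch ($\alpha d = \pi/2$) it returns $2/(kW_0\alpha)$, as the closed form predicts.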
-
Thank you for your reply. Wouldn't it be $q1=-kW_0^{2}/(2i)$ though? And this would just give us an expression for q1, from which we can get an expression for q2. However, we still need to substitute out for q2, and to do this, we need to get the radius of curvature for the beam leaving the slab. I'm still not sure how to do this. – John Roberts Oct 1 '12 at 23:57
1
$i$ and $-1/i$ are the same, so both expressions are correct, except for the square of $W_0$ which I missed. Now, you can get $q_2$ from $q_1$ using the ABCD transformation of the complex beam radius as above, and then, using the definition $1/q_2 = 1/R_2 - 2i/(kW_2^2)$, you can determine both the radius of curvature and the beam width leaving the slab. – Ondřej Černotík Oct 2 '12 at 11:41
1
Because all the parameters (except $q$) are real, you can split $1/q_2$ to real and imaginary part. The real part is then equal to $1/R_2$ while the imaginary part is $-2/(kW_2^2)$, which will then be expressed only in terms of $\alpha, d, W_0, \lambda_0$. – Ondřej Černotík Oct 2 '12 at 14:08
1
To get $1/R_2$ in terms of those parameters, you just take the real part of $1/q_2$. – Ondřej Černotík Oct 2 '12 at 15:36
show 3 more comments
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9031400680541992, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/223129/show-there-is-no-measure-on-mathbbn-such-that-mu-0-k-2k-ldots-frac?answertab=votes
|
# Show there is no measure on $\mathbb{N}$ such that $\mu(\{0,k,2k,\ldots\})=\frac{1}{k}$ for all $k\ge 1$
For $k\ge1$, let $A_k=\{0,k,2k,\ldots\}.$ Show that there is no measure $\mu$ on $\mathbb{N}$ satisfying $\mu(A_k)=\frac{1}{k}$ for all $k\ge1$.
What I have done so far:
I am trying to apply the Borel-Cantelli lemma ($\mu(\mathbb{N})=1$). Let $(p_n)_{n\in \mathbb{N}}=(2,3,5,\ldots)$ be the increasing sequence of all prime numbers. It is the case that for all $k\in\mathbb{N}$ and $i_1\lt i_2 \lt \ldots \lt i_k$ we have $\mu\left(A_{p_{i_1}}\cap\ldots\cap A_{p_{i_k}}\right)=\mu\left(A_{p_{i_1}}\right)\ldots\mu\left(A_{p_{i_k}}\right)$, so the $A_{p_n}$ are independent. We know that the sum of the reciprocals of the primes $\sum\limits_{n=1}^\infty \frac{1}{p_n}$ diverges and hence, by the second Borel-Cantelli lemma, $$\mu\left(\bigcap\limits_{n=1}^\infty\bigcup\limits_{m=n}^\infty A_{p_m}\right)=1$$ holds. However, I cannot figure out how to conclude the reasoning. I would appreciate any help.
-
1 Answer
You are basically there. You proved in your last line that $\mu$-a.e. number is divisible by infinitely many primes, which is an obvious contradiction.
-
Oh, ok. I am almost sure I understand. Is the following $\mathbb{N}\subset \bigcup\limits_{n=1}^\infty\bigcap\limits_{m=n}^\infty A_{p_m}^c$ the precise form of what you have just stated? Also it is an easy contradiction, because the whole space cannot be contained in a set of measure $0$. Am I correct? – Kuba Helsztyński Oct 29 '12 at 0:52
Yes, this is correct. – Lukas Geyer Oct 29 '12 at 1:55
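As a finite-window sanity check of the independence property used above (illustration only; natural density on a finite window is not a measure on $\mathbb{N}$, and `N` and the `density` helper are ad hoc choices):

```python
from math import lcm

# Numerical sanity check (not a proof): the natural density of
# A_k = {0, k, 2k, ...} inside {0, ..., N-1} is about 1/k, and for
# coprime moduli the densities multiply -- exactly the independence
# property of the sets A_p used in the Borel-Cantelli argument.
N = 10_000_000

def density(*ks):
    """Fraction of n in {0, ..., N-1} divisible by every k in ks."""
    m = lcm(*ks)
    return ((N - 1) // m + 1) / N   # multiples of m below N, including 0

assert abs(density(2) - 1 / 2) < 1e-6
assert abs(density(2, 3) - density(2) * density(3)) < 1e-6       # "independence"
assert abs(density(2, 3, 5) - density(2) * density(3) * density(5)) < 1e-6
```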
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9589319229125977, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-geometry/178949-if-f-holder-continuous-1-then-f-constant.html
|
# Thread:
## If f is Hölder continuous with a > 1, then f is constant?
Hello
I'm trying to prove that if f satisfies the Hölder condition, i.e.
there exists a constant K such that for all x, y in R: |f(x) - f(y)| <= K(|x - y|^a)
with a > 1,
then f is a constant function.
I assume that the right direction is proving that the derivative of f is constantly zero, but I don't see why f has to be differentiable in the first place...and even assuming that f is differentiable I don't see why the derivative must be zero..
2. Let $x_0\in\mathbb R$. We have $\left|\frac{f(x)-f(x_0)}{x-x_0}\right|\leq K|x-x_0|^{a-1}$ if $x\neq x_0$, and $a-1>0$. Now you can see what the derivative in $x_0$ is and conclude.
3. So would it be correct to take the limit of both sides of the inequality as $x$ goes to $x_0$, and then we get that the absolute value of the derivative at $x_0$ is less than or equal to zero?
4. Yes, it's correct.
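Put together, the whole argument takes two lines:

```latex
% For x \neq x_0, divide the Hölder bound |f(x)-f(y)| \le K|x-y|^a by |x-x_0|:
\left|\frac{f(x)-f(x_0)}{x-x_0}\right| \le K\,|x-x_0|^{a-1}
  \xrightarrow[x \to x_0]{} 0 \qquad (\text{since } a-1 > 0),
```

so $f'(x_0)$ exists and equals $0$ at every $x_0$, and the mean value theorem then forces $f$ to be constant.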
Copyright © 2005-2013 Math Help Forum. All rights reserved.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578112959861755, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/111371/dual-space-is-a-vector-space?answertab=votes
|
# dual space is a vector space
I was wondering if someone could please shed some light on why or how a dual space itself becomes a vector space over the field. The book "Finite-Dimensional Vector Spaces" by Paul Halmos states: "To every vector space V we make correspond the dual space V' consisting of all the linear functionals on V;" The book goes on to present the defining property of a linear functional and the definition of the linear operations for linear functionals.
Also, for the sake of completion, a linear functional is defined by the text as "A linear functional on a vector space V is a scalar-valued function y defined for every vector x, with the property that (identically in the vectors $x_{1}$ and $x_{2}$ and the scalars $\alpha _{1}$ and $\alpha _{2}$)
$y\left( \alpha _{1}x_{1}+\alpha _{2}x_{2}\right) =\alpha _{1}y\left( x_{1}\right) +\alpha _{2}y\left( x_{2}\right)$
Based on these definitions, isn't V' composed of scalar-valued functions with the above property? I fail to see any vectors present in V'. Yet the book later assumes that we must know this and starts defining a dual space V'' of the dual space V' of a vector space V.
Any help would be much appreciated.
-
A vector is an element of a vector space. Thus, if $V'$ has a vector space structure all its elements are vectors. – azarel Feb 20 '12 at 18:53
Halmos surely gives the axioms for a vector space at some point. Check that the "linear operations for linear functionals" make $V'$ into a vector space. – Dylan Moreland Feb 20 '12 at 19:04
## 1 Answer
Let's go back further:
Let $\mathbf{V}$ and $\mathbf{W}$ be any two vector spaces over the same field $\mathbf{F}$. Let $\mathcal{L}(\mathbf{V},\mathbf{W})$ be the set of linear transformations $T\colon \mathbf{V}\to\mathbf{W}$.
We will make $\mathcal{L}(\mathbf{V},\mathbf{W})$ into a vector space over $\mathbf{F}$. In order to do this, we need to define an "addition of linear transformations" and a "scalar multiplication of elements of $\mathbf{F}$ by linear transformations" (that is, our "vectors" will be linear transformations from $\mathbf{V}$ to $\mathbf{W}$; remember that a vector space is just a set with a "vector addition" and a "scalar multiplication" that satisfy certain properties, and we call the elements of the set "vectors"; they don't have to be "tuples" in the usual sense).
So, given two linear transformations $T,U\colon \mathbf{V}\to\mathbf{W}$, we need to define a new linear transformation that is called the "sum of $T$ and $U$". I'm going to write this as $T\oplus U$, to distinguish the "sum of linear transformations" from the sum of vectors. Since we want $T\oplus U$ to be a linear transformation (which is a special kind of function) from $\mathbf{V}$ to $\mathbf{W}$, in order to specify it we need to say what the value of $T\oplus U$ is at every $\mathbf{v}\in \mathbf{V}$. My definition is: $$(T\oplus U)(\mathbf{v}) = T(\mathbf{v}) + U(\mathbf{v}),$$ where the sum on the right is taking place in $\mathbf{W}$. This makes sense, because $T$ and $U$ are already functions from $\mathbf{V}$ to $\mathbf{W}$, so $T(\mathbf{v})$ and $U(\mathbf{v})$ are vectors in $\mathbf{W}$, which we can add.
Is $T\oplus U$ a linear transformation from $\mathbf{V}$ to $\mathbf{W}$? First, it is a function from $\mathbf{V}$ to $\mathbf{W}$. Now, to check that it is a linear transformation, we need to check that for all $\mathbf{v}_1,\mathbf{v}_2\in\mathbf{V}$ and all $\alpha\in \mathbf{F}$, we have $$(T\oplus U)(\mathbf{v}_1+\mathbf{v}_2) = (T\oplus U)(\mathbf{v}_1)+(T\oplus U)(\mathbf{v}_2)\quad\text{and}\quad (T\oplus U)(\alpha\mathbf{v}_1) = \alpha((T\oplus U)(\mathbf{v}_1)).$$ Indeed, since $T$ and $U$ are themselves linear transformations, we have: $$\begin{align*} (T\oplus U)(\mathbf{v}_1+\mathbf{v}_2) &= T(\mathbf{v}_1+\mathbf{v}_2) + U(\mathbf{v}_1+\mathbf{v}_2) &\text{(by definition of }T\oplus U\text{)}\\ &= T(\mathbf{v}_1)+T(\mathbf{v}_2) + U(\mathbf{v}_1)+U(\mathbf{v}_2) &\text{(by linearity of }T\text{ and }U\text{)}\\ &= T(\mathbf{v}_1)+U(\mathbf{v}_1) + T(\mathbf{v}_2)+U(\mathbf{v}_2)\\ &= (T\oplus U)(\mathbf{v}_1) + (T\oplus U)(\mathbf{v}_2) &\text{(by definition of }T\oplus U\text{)}\\ (T\oplus U)(\alpha\mathbf{v}_1) &= T(\alpha\mathbf{v}_1) + U(\alpha\mathbf{v}_1) &\text{(by definition of }T\oplus U\text{)}\\ &= \alpha T(\mathbf{v}_1) + \alpha U(\mathbf{v}_1) &\text{(by linearity of }T\text{ and }U\text{)}\\ &= \alpha(T(\mathbf{v}_1) + U(\mathbf{v}_1))\\ &= \alpha((T\oplus U)(\mathbf{v}_1)) &\text{(by definition of }T\oplus U\text{)} \end{align*}$$ so $T\oplus U$ is indeed an element of $\mathcal{L}(\mathbf{V},\mathbf{W})$.
I'll let you verify that $(S\oplus T)\oplus U = S\oplus (T\oplus U)$ for all $S,T,U\in\mathcal{L}(\mathbf{V},\mathbf{W})$ (since this is an equality of functions, you need to check that they have the same value at every $\mathbf{v}\in \mathbf{V}$). That $T\oplus U=U\oplus T$ for all $T,U\in\mathcal{L}(\mathbf{V},\mathbf{W})$; that if $\mathbf{0}$ is the linear transformation that sends every $\mathbf{v}\in\mathbf{V}$ to $\mathbf{0}\in\mathbf{W}$, then $T\oplus\mathbf{0}=T$ for all $T$; and that given $T\in\mathcal{L}(\mathbf{V},\mathbf{W})$, and we define $-T$ to be the function $(-T)(\mathbf{v}) = -(T(\mathbf{v}))$, then $T\oplus (-T) = \mathbf{0}$.
Now we define a scalar multiplication, which I will denote by $\odot$ (again, to avoid confusion with the scalar multiplication from $\mathbf{V}$ and $\mathbf{W}$). Given $T\colon \mathbf{V}\to\mathbf{W}$ and $\alpha\in\mathbf{F}$, define $(\alpha\odot T)$ to be the function $$(\alpha\odot T)(\mathbf{v}) = \alpha T(\mathbf{v}).$$ I will let you verify that this definition works, in that $\alpha\odot T$ is a linear transformation when $T$ is a linear transformation; and that it satisfies the necessary properties:
• $\alpha\odot(\beta\odot T) = (\alpha\beta)\odot T$;
• $1\odot T = T$;
• $(\alpha + \beta)\odot T = (\alpha\odot T)\oplus (\beta\odot T)$;
• $\alpha\odot(T\oplus U) = (\alpha\odot T)\oplus (\alpha\odot U)$.
So $(\mathcal{L}(\mathbf{V},\mathbf{W}),\oplus,\odot)$ is a vector space over $\mathbf{F}$ whenever $\mathbf{V}$ and $\mathbf{W}$ are vector spaces over $\mathbf{F}$.
So now, dual spaces: Note that $\mathbf{F}$ is always a vector space over itself, by defining vector addition to be the same as the addition of $\mathbf{F}$, and scalar multiplication to be the same as multiplication in $\mathbf{F}$.
So if $\mathbf{V}$ is any vector space over $\mathbf{F}$, then we can consider $\mathcal{L}(\mathbf{V},\mathbf{F})$: this makes sense, because both $\mathbf{V}$ and $\mathbf{F}$ are vector spaces over $\mathbf{F}$; and this is itself a vector space over $\mathbf{F}$ with vector addition $\oplus$ and scalar multiplication $\odot$ as defined above.
This vector space, $\mathcal{L}(\mathbf{V},\mathbf{F})$, is called the dual space of $\mathbf{V}$. We write $\mathbf{V}^*$ instead of $\mathcal{L}(\mathbf{V},\mathbf{F})$, and the elements of $\mathbf{V}^*$ are called "functionals".
By abuse of notation, we usually write $+$ instead of $\oplus$ (just like we use the same symbol for the addition of $\mathbf{V}$ and the addition of $\mathbf{W}$), and $\cdot$ or just juxtaposition instead of $\odot$.
The equation you have, $$y(\alpha_1 x_1 + \alpha_2x_2) = \alpha_1y(x_1) + \alpha_2y(x_2)$$ is just telling you that the function $y$ is a linear transformation from $\mathbf{V}$ to $\mathbf{F}$.
It is traditional to use boldface lower case letters like $\mathbf{f}$, $\mathbf{g}$, $\mathbf{h}$ to represent functionals. This is to remind us that even though they are vectors in the vector space $\mathbf{V}^*$, they are "really" functions (when they are at home).
In fact, you could go back even further. If $\mathbf{W}$ is a vector space over $\mathbf{F}$, and $X$ is any set, then we can look at $$\mathcal{F}(X,\mathbf{W}) = \{f\colon X\to\mathbf{W}\mid f\text{ is a function}\}.$$ Then $\mathcal{F}(X,\mathbf{W})$ is a vector space, with addition $(f\oplus g)(x) = f(x)+g(x)$ and scalar multiplication $(\alpha\odot f)(x) = \alpha f(x)$. The case of $\mathcal{L}(\mathbf{V},\mathbf{W})$ corresponds to looking at a subspace of $\mathcal{F}(\mathbf{V},\mathbf{W})$ consisting of linear transformations.
This is a standard construction in abstract algebra. Whenever $A$ is an algebra (in the sense of General Algebra; a group, semigroup, ring, vector space, lattice, etc), and $X$ is a set, the collection of all functions $f\colon X\to A$ becomes an algebra of the same type under "pointwise operations". In fact, this is nothing more than a "direct power" (a direct product in which every factor is the same) indexed by $X$.
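A throwaway Python sketch of the pointwise operations (illustrative only; linear functionals on $\mathbb{R}^2$ are modelled as plain functions, with `oplus`/`odot` standing in for $\oplus$ and $\odot$):

```python
# Pointwise operations on functionals, as defined above: the "vectors"
# here are functions, added and scaled through their values.
def oplus(T, U):
    """(T ⊕ U)(v) = T(v) + U(v)."""
    return lambda v: T(v) + U(v)

def odot(a, T):
    """(a ⊙ T)(v) = a * T(v)."""
    return lambda v: a * T(v)

# Two linear functionals on R^2 (the field F is R here).
f = lambda v: 2 * v[0] + 3 * v[1]
g = lambda v: v[0] - v[1]

v = (1.0, 4.0)
h = oplus(odot(5, f), g)            # the functional 5 ⊙ f ⊕ g
assert h(v) == 5 * f(v) + g(v)      # = 67.0: the pointwise definition in action
```

The same two definitions, specialised to $\mathbf{W}=\mathbf{F}$, are exactly the vector-space operations on the dual space $\mathbf{V}^*$.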
-
You should really write a book on linear algebra; I'm confident it would become an instant best-seller – ItsNotObvious Feb 20 '12 at 19:28
Wow, Thanks so much for that insight. I think i understand finally what vector spaces are and how generic they are and hence why so many other fields rely on vector spaces. – Hardy Feb 20 '12 at 19:40
A really great answer. Thanks - this really helped me also. – user60088 Jan 29 at 11:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 118, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396796226501465, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/58711/spherical-transforms-and-rotations
|
## Spherical transforms and rotations
Consider a function $f$ on $S^2,$ and its spherical transform $\hat{f}.$ Let $r$ be a rotation by some $\rho \in SO(3).$ Is there some nice formula for $\widehat{f \circ r}?$ I have found some allusions to "Wigner D-matrices"... I am sure there are good references...
-
What is the "spherical transform" of a function $f$? – Deane Yang Mar 17 2011 at 2:41
A decomposition in spherical harmonics (= eigenfunctions of laplacian on $S^2.$) – Igor Rivin Mar 17 2011 at 3:04
## 1 Answer
Suppose your function is $f(s)$ and its spherical transform, as you call it, is $\hat{f}_{lm}=\int_{S^2} Y_{lm}(s)^* f(s)\,\mathrm{d}s$, where the $Y_{lm}(s)$ are spherical harmonics. Then for fixed $l$, the components of $\hat{f}$ will transform under a rotation $r\in SO(3)$ in an irreducible representation of $SO(3)$ of spin $l$: $$\widehat{f\circ r}_{lm} = \sum_{n} D(r)^l_{m,n} \hat{f}_{ln}.$$ The representation maps $D({\cdot})^l_{m,n}$ are presumably what you are looking for and are precisely the Wigner D-matrices (they are matrices in the $m,n$ indices).
An explicit expression for these matrices depends on how you choose to parametrize your rotations $r\in SO(3)$. If you use Euler angles, then explicit expressions are given on the Wikipedia page for Wigner D-matrices. If you prefer a different parametrization of $SO(3)$, then another reference might be more convenient. In that case, please refine your question.
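As a numerical sanity check of the simplest case, a rotation about the $z$-axis (where the D-matrix is diagonal in $m$ with pure phases), here is a small sketch; the grid sizes, the hand-rolled $Y_{1,1}$, and the phase sign convention are choices of this illustration, not fixed by the answer above:

```python
import numpy as np

# Rotating f about the z-axis by alpha sends phi -> phi + alpha; the
# spherical coefficient f_hat_{lm} then just picks up the phase e^{i m alpha},
# i.e. the Wigner D-matrix of a z-rotation is diagonal in m.
theta = np.linspace(0.0, np.pi, 400)                      # colatitude grid
phi = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)    # azimuth grid
TH, PH = np.meshgrid(theta, phi, indexing="ij")

def Y11(th, ph):
    # spherical harmonic Y_{1,1} (Condon-Shortley phase), normalized on S^2
    return -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(th) * np.exp(1j * ph)

def coeff(vals):
    """<Y_{1,1}, vals>: trapezoid rule in theta, rectangle rule in phi."""
    integrand = np.conj(Y11(TH, PH)) * vals * np.sin(TH)
    w = np.ones_like(theta)
    w[0] = w[-1] = 0.5                                    # trapezoid weights
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    return (integrand * w[:, None]).sum() * dth * dph

alpha = 0.7
c0 = coeff(Y11(TH, PH))            # coefficient of f = Y_{1,1}: should be 1
c1 = coeff(Y11(TH, PH + alpha))    # coefficient of f o r for a z-rotation
assert abs(c0 - 1.0) < 1e-3                       # normalization check
assert abs(c1 - np.exp(1j * alpha) * c0) < 1e-9   # phase = diagonal D entry
```

A general rotation mixes different $m$'s, and the diagonal phases are replaced by the full matrix $D(r)^l_{m,n}$.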
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8263458609580994, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/classical-physics%20lagrangian-formalism
|
# Tagged Questions
1answer
134 views
### Generalized momentum conjugate and potential $U(q, \dot q)$
In Goldstein's "Classical Mechanics" (first ed.), I have read that if $q_j$ is a cyclic coordinate, its generalized momentum conjugate $p_j$ is constant. He obtained that starting from Lagrange's ...
2answers
206 views
### Charge, velocity-dependent potentials and Lagrangian
Given an electric charge $q$ of mass $m$ moving at a velocity ${\bf v}$ in a region containing both electric field ${\bf E}(t,x,y,z)$ and magnetic field ${\bf B}(t,x,y,z)$ (${\bf B}$ and ${\bf E}$ are ...
1answer
363 views
### Deriving the action and the Lagrangian for a free particle in Relativistic mechanics
My question relates to Landau, Classical Theory of Field, Chapter 2 - Relativistic Mechanics, paragraph 8 - The principle of least action. As stated there, To determine the action integral for a ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8704911470413208, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/3592/zero-knowledge-auth-schemes-with-weak-secret
|
# Zero Knowledge auth schemes with weak secret
In Zero Knowledge auth schemes the public DH factor of each peer is encrypted with a potentially weak pre-shared secret and the resulting ciphertexts are exchanged over an insecure channel. Why is no attack on the weak secret possible?
I mean, the messages can be sniffed, and with an offline dictionary attack or simple brute force it should be possible to recover the secret.
-
## 1 Answer
I believe that you are talking about one specific version of EKE, which is one of several known Password authenticated key agreement methods (which is the general category of methods that do a key agreement with the property that someone listening into the exchange can't learn anything, and an attacker that poses as one of the two sides can learn no more from a single exchange than whether a specific password was correct).
Now, with that background, here is the answer to your question. With DH-based EKE, yes, both sides exchange their $E_k( g^x )$ and $E_k( g^y)$ values, and yes, the attacker can decrypt both values with a potential password, giving $g^x$ and $g^y$ if his guess is correct, and $D_{k'}(E_k(g^x))$ and $D_{k'}(E_k(g^y))$ if his guess is incorrect. However, EKE uses a special encryption method such that $D_{k'}(E_k(g^x))$ and $D_{k'}(E_k(g^y))$ are also valid public values; possibly $g^z$ and $g^w$, for some $z$ and $w$. This prevents the attacker from learning anything; he cannot test whether a specific $g^x$ and $g^y$ generate the correct shared secret (because the DH problem is hard), and checking whether they are valid public values tells him nothing (because they all are). So, while the attacker can make a list of potential decryptions, there's nothing that distinguishes the correct one from all the wrong ones.
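A toy model of the key point (illustrative only; NOT a secure construction and not the exact EKE encoding): with a mask that maps group elements to group elements, every candidate password "decrypts" the ciphertext to some valid-looking public value.

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime and an ad hoc password-to-mask map.
p = 2**127 - 1
g = 3

def mask(password: str) -> int:
    """Password-derived multiplicative mask in Z_p^* (illustrative KDF)."""
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return h % (p - 1) + 1

x = secrets.randbelow(p - 2) + 1
pub = pow(g, x, p)                        # g^x mod p
ct = (pub * mask("correct horse")) % p    # E_k(g^x)

for guess in ["correct horse", "hunter2", "letmein"]:
    candidate = (ct * pow(mask(guess), -1, p)) % p   # D_{k'}(E_k(g^x))
    # every guess decrypts to *some* element of Z_p^*: nothing distinguishes
    # the right decryption from the wrong ones without solving DH
    assert 1 <= candidate < p

assert (ct * pow(mask("correct horse"), -1, p)) % p == pub
```

Real EKE has to choose the encryption so that wrong-password decryptions are statistically indistinguishable from honest public values; a naive multiplicative mask like the one above leaks structure and is shown only to make the "every decryption looks valid" point.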
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559643864631653, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/841/integral-via-complex-analysis-integral-via-hypercomplex-analysis
|
# Integral via complex analysis. Integral via hypercomplex analysis
If I remember rightly there are some integrals of real functions which are easier to compute by using complex analysis.
Is this because of properties of the particular function or because of a lack of a known real analysis technique?
Are there functions which would require hypercomplex analysis to integrate?
-
This question is extremely vague – Casebash Jul 27 '10 at 23:21
My understanding of the question is: "There are definite integrals over parts of the real line that are easier to compute using techniques from complex analysis. What properties of the function are responsible for this? Are there any integrals that cannot be computed without further extensions with hypercomplex numbers?" This sounds fine to me. – Larry Wang Jul 27 '10 at 23:25
There is a famous anecdote of Feynman in which he recalls competing with other people to evaluate integrals. Feynman would use differentiation under the integral sign, and he claimed that this technique (which his colleagues never used) worked on any integral. But one of his colleagues showed him an integral he could not integrate; the only method his colleague knew that worked was complex analysis. Unfortunately I don't know the integral in question. – Qiaochu Yuan Jul 27 '10 at 23:54
## 2 Answers
I'm pretty sure you mean the evaluation of definite integrals. For instance, $e^{-x^2}$ has no elementary antiderivative, but its definite integral over the real line can be computed explicitly (it's $\sqrt{\pi}$).
You ask what complex analysis has to do with this. Well, the idea is that complex integrals around closed curves (of holomorphic functions) are usually comparatively easy to evaluate: it's because we have the residue theorem as a tool.
However, these real integrals are usually over the whole real line, which doesn't satisfy the hypotheses of the residue theorem: for instance, it's not a finite closed path. However, we can replace the real line by a large semicircular contour (to pick one common example) that goes from $-R$ to $R$ along the axis, then around the upper half-plane from $R$ back to $-R$. This is a closed path, and the residue theorem applies to it. When you let $R$ tend to $\infty$, the integral along the semicircular arc often tends to zero (by direct bounding arguments). So what you're left with is the integral along the real line, and it equals $2\pi i$ times the sum of the residues inside the contour.
Examples of this will be in any complex analysis textbook, e.g. Ahlfors's. The Wikipedia article on the residue theorem (link above) also has examples.
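A concrete instance of the semicircle argument, cross-checked numerically (the integrand, window size, and grid below are just this sketch's choices):

```python
import numpy as np

# Residue computation: 1/(1+z^2) has poles at z = ±i; closing the contour
# in the upper half-plane picks up z = i with residue 1/(2i), so
#   ∫_{-∞}^{∞} dx/(1+x²) = 2πi · (1/(2i)) = π.
# Numerical cross-check on a large finite window (missing tail ≈ 2/R):
R, n = 1000.0, 2_000_001
x = np.linspace(-R, R, n)
y = 1.0 / (1.0 + x * x)
dx = x[1] - x[0]
numeric = (y.sum() - 0.5 * (y[0] + y[-1])) * dx   # trapezoid rule
assert abs(numeric - np.pi) < 5e-3
```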
-
A generic real-valued function carries no additional information other than its values. But many of the common functions encountered in analysis are just the restriction to $\mathbb{R}$ of holomorphic functions, and these functions carry a lot of additional information, namely, their values in $\mathbb{C}$. Unlike real-analytic functions, holomorphic functions are extremely rigid: for example, if two holomorphic functions agree on a set of complex numbers with a limit point, they must be equal everywhere. That means it's possible to deduce information about how a holomorphic function behaves on part of its domain (say, $\mathbb{R}$) from information about how it behaves on other parts of its domain, since the two have to determine each other. This information is extracted using contour integrals.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532616138458252, "perplexity_flag": "head"}
|
http://mathhelpforum.com/advanced-math-topics/1837-making-me-go-1n54n3.html
|
# Thread:
1. ## This is making me go 1N54N3!!
I already asked the following 2 questions but they were not answered, thus I shall ask them again.
1)Are the cardinal numbers countable?
2) Are the cardinal numbers dense? Meaning, for $\aleph_n<\aleph_m$, does there exist an $\aleph_p$ such that $\aleph_n<\aleph_p<\aleph_m$? For example, the rationals and the reals are dense. (Perhaps this is connected to the continuum hypothesis?)
3) Finally we get to a question that is making me go insane! I was thinking about this when I was falling asleep. Consider the set of all FINITE sets. What is the cardinality of this set?!?! I was able to prove (although not formally, but you can consider it to be a proof), using the property of the power set, that I can make the cardinality of this set as large as I like!!!!! Thus, there is no cardinal number for this set!!!! When I was falling asleep a solution entered my mind. Who says that any infinite set must have a cardinal number? Perhaps that reasoning is not true. And this is such a case. Thus, I decided to call this the "super-cardinal number". Just as a cardinal number always exceeds any natural number (the cardinality of finite sets), so too the super-cardinal number exceeds any cardinal number (the cardinality of SOME infinite sets). What happened?
2. Originally Posted by ThePerfectHacker
3) Finally we get to a question that is making me go insane! I was thinking about this when I was falling asleep. Consider the set of all FINITE sets. What is the cardinality of this set?!?!
Try something simpler. What is the Cardinality of all sets with ONE element?
RonL
3. I did, and I came to the same conclusion. This set can be made larger than any cardinal number!
Did I find something new?
4. Originally Posted by ThePerfectHacker
I did, and I came to the same conclusion. This set can be made larger than any cardinal number!
Did I find something new?
I'm being elliptic. I promise to be more direct in future.
When I ask about the cardinality of all sets with one element I am trying to get you to think about "sets of what?". I want you to do this because I suspect you have in mind something like a set of all sets.
The reason this is something to think about is that such an entity is not well defined: it leads to paradoxes.
RonL
5. Originally Posted by ThePerfectHacker
I already asked the following 2 questions but they were not answered, thus I shall ask them again.
1)Are the cardinal numbers countable?
2) Are the cardinal numbers dense (...)
3) Finally we get to a question that is making me go insane! I was thinking about this when I was falling asleep. Consider the set of all FINITE sets. What is the cardinality of this set?!?! I was able to prove (...) that I can make the cardinality of this set as large as I like!!!!! (...)
Ad 2. It is the continuum hypothesis (generalised). It states that there is no cardinal number between $\aleph_n$ and $2^{\aleph_n}$, thus the set of cardinal numbers is not dense.
Ad 1. Yes, if we assume that the generalised continuum hypothesis is true. Let us imagine a sequence whose first element is 0, the second element is $\aleph_0$, the third 1, the fourth $2^{\aleph_0}$ and so on... it is
$x_1 = 0, \quad x_2 = \aleph_0$
$x_n = x_{n-2}+1 \quad \text{for} \quad n = 2k+1 > 2$
$x_n = 2^{x_{n-2}} \quad \text{for} \quad n = 2k > 2,$
$\text{where} \quad k \in \mathbb{N}$
Due to the continuum hypothesis we can't miss any infinite cardinal, and because the natural numbers are countable we can't miss them either.
Ad 3.
And it is very interesting. What is that proof?
Just as the set of all sets does not exist, maybe the set of all finite sets does not exist either. But I cannot imagine that the proof of this fact is like the one Russell gave. Maybe the fact that it does not really have a cardinality could be such a proof. Interesting.
6. Hmm... Looks like you are already at this forum. Greetings to you
7. My proof is not going to be very formal (and I do not need the continuum hypothesis). I am going to work with a set of sets having single elements. This is what CaptainBlack proposed, and I think it makes things easier.
Let $\mathbb{Z^+}=\{1,2,3,...\}$ be the natural numbers. Then $P(\mathbb{Z^+})$ is larger (the property of a power set). Thus $\bigcup\{x\},\ x\in P(\mathbb{Z^+})$ has cardinality greater than $\aleph_0$. But the previous construction is the union of sets having single elements. Further, by the properties of the power set, $P(P(\mathbb{Z^+}))$ is even larger. Thus $\bigcup\{x\},\ x\in P(P(\mathbb{Z^+}))$ is the union of sets having a single element, and it is even larger than the previous one. Thus, using this construction, we can make the cardinality of "the set of all finite sets" as large as we like. Thus there is no such cardinal number. Now the conclusion I draw from this is that some sets can be so large that you cannot use cardinal numbers anymore, and thus there is no problem with this set.
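The power-set blow-up this argument relies on can at least be watched happening for finite sets; a small Python illustration (nothing here touches the infinite case, of course):

```python
from itertools import chain, combinations

# Finite analogue of the power-set jump used above: |P(S)| = 2^|S| > |S|,
# so iterating P() grows past any bound. Frozensets so sets of sets work.
def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

S = frozenset({1, 2, 3})
P1 = powerset(S)        # 2^3 = 8 elements
P2 = powerset(P1)       # 2^8 = 256 elements, each a set of sets
assert len(P1) == 2 ** len(S)
assert len(P2) == 2 ** len(P1)
assert len(S) < len(P1) < len(P2)   # strictly increasing, as Cantor's theorem says
```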
I asked some people about this problem, and they had no idea. I hope someone finds a solution to my problem. Perhaps the problem is that no one has ever considered this set.
CaptainBlack, you mention a "well-defined set". I have heard this term many times before and wanted to ask what it means. Further, I happen to know that the "super-set", the set of all sets, is not well defined and thus we cannot use it. Further, my set is the set of all FINITE sets and it itself is infinite, thus it is not an element of itself. I see no problem with my set.
8. Originally Posted by ThePerfectHacker
My proof is not going to be very formal (and I do not need the countinuum hypothesis ) I am going to be working with a set of sets having single elements. This is what CaptainBlack proposed because I think it would be easier.
Let $\mathbb{Z^+}=\{1,2,3,...\}$ the natural numbers. Thus, $P(\mathbb{Z^+})$ is larger (the property of a power set). Thus, $\bigcup\{x\} ,x\in P(\mathbb{Z^+})$ having the cardinality greater than $\aleph_0$. But the previous demonstration is the the uninon of all sets having single elements. Further, by the properties of the power set we have that $P(P(\mathbb{Z^+}))$ is even larger. Thus, $\bigcup\{x\},x\in P(P(\mathbb{Z^+}))$ is the union of sets having a single element. This is even larger then the previous one. Thus, using this construction we can construct the cardinality of "the sets of all finite sets" to be as large as we like. Thus there is no such cardinal number. Now the conclusion I draw from this that some sets can be so large that you cannot use cardinals numbers anymore, and thus there is no problem with this set.
I asked some people about this problem, and they have no idea. I hope someone finds a solution to my problem. Perhaps, the problem is that no one ever considered this set.
CaptainBlack, you mention a "well-defined set". I have heard this term many times before and wanted to ask what it means. Further, I happen to know that the "Super-set", the set of all sets, is not well defined and thus we cannot use it. Further, my set is the set of all FINITE sets and it itself is infinite, thus it is not an element of itself. I see no problem with my set.
Suppose such a set of all sets exists, then each of its subsets is an element
of it, hence it contains its Power Set. Hence the Cardinality of its Power
Set is less than or equal to its Cardinality.
But you will have seen a proof that the Cardinality of the Power Set of a Set
is strictly greater than the cardinality of the Set itself - Contradiction
RonL
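For reference, the power-set fact this argument leans on is Cantor's theorem; a sketch of the standard diagonal proof:

```latex
% Cantor's theorem: for any set A, card(A) < card(P(A)).
% Sketch of the diagonal argument.
\begin{proof}
The map $a \mapsto \{a\}$ is an injection $A \to \mathcal{P}(A)$,
so $\mathrm{card}(A) \le \mathrm{card}(\mathcal{P}(A))$.
Suppose some $f : A \to \mathcal{P}(A)$ were onto, and let
\[
  D = \{\, a \in A : a \notin f(a) \,\}.
\]
Since $f$ is onto, $D = f(d)$ for some $d \in A$. Then
$d \in D \iff d \notin f(d) = D$, a contradiction.
Hence no surjection $A \to \mathcal{P}(A)$ exists, and
$\mathrm{card}(A) < \mathrm{card}(\mathcal{P}(A))$.
\end{proof}
```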
9. So, it suffices to prove that the set of all one-element sets does not exist. Let $\mathfrak{U}_1$ denote the set of all one-element sets.
By Cantor's theorem, the cardinality of $\mathcal{P}(\mathfrak{U}_1)$ is greater than the cardinality of $\mathfrak{U}_1$.
Suppose that $X \in \mathcal{P}(\mathfrak{U}_1)$. The set $\{X\}$ satisfies $\{X\} \in \mathfrak{U}_1$ by definition, thus $X \mapsto \{X\}$ is an injection from $\mathcal{P}(\mathfrak{U}_1)$ to $\mathfrak{U}_1$. This way $card(\mathcal{P}(\mathfrak{U}_1)) \leq card(\mathfrak{U}_1)$, which contradicts Cantor's theorem; thus the set $\mathfrak{U}_1$ does not exist.
I have heard that there is a solution to your problem. The set of all sets does not exist; however, there exists the class of all sets. Class theory was founded by von Neumann and Bernays, and in the thirties Kurt Gödel created an axiomatic system of class theory. More details I don't know.
10. Originally Posted by CaptainBlack
Suppose such a set of all sets exists, then each of its subsets is an element
of it, hence it contains its Power Set. Hence the Cardinality of its Power
Set is less than or equal to its Cardinality.
But you will have seen a proof that the Cardinality of the Power Set of a Set
is strictly greater than the cardinality of the Set itself - Contradiction
RonL
AH!!!!!!
But you are assuming that it has a CARDINALITY!!!!
It does not necessarily have a cardinal number. Did Cantor prove that each infinite set has a cardinal number???
I demonstrated that this is false by constructing the set of all finite sets.
11. Originally Posted by albi
So, it suffices to prove that the set of all one-element sets does not exist. Let $\mathfrak{U}_1$ denote the set of all one-element sets.
By Cantor's theorem, the cardinality of $\mathcal{P}(\mathfrak{U}_1)$ is greater than the cardinality of $\mathfrak{U}_1$.
Suppose that $X \in \mathcal{P}(\mathfrak{U}_1)$. The set $\{X\}$ satisfies $\{X\} \in \mathfrak{U}_1$ by definition, thus $X \mapsto \{X\}$ is an injection from $\mathcal{P}(\mathfrak{U}_1)$ to $\mathfrak{U}_1$. This way $card(\mathcal{P}(\mathfrak{U}_1)) \leq card(\mathfrak{U}_1)$, which contradicts Cantor's theorem; thus the set $\mathfrak{U}_1$ does not exist.
I have heard that there is a solution to your problem. The set of all sets does not exist; however, there exists the class of all sets. Class theory was founded by von Neumann and Bernays, and in the thirties Kurt Gödel created an axiomatic system of class theory. More details I don't know.
How can a set not exist?
I was reading on Wikipedia that first we had naive set theory, which rested on intuition; then axiomatic set theory came along and removed all the paradoxes in set theory. Thus, I understand that a set (even though undefined) must satisfy some condition to be a set, right? Thus, my set does not satisfy that.
Important: you said my problem was the set of all sets, and I know that no such set exists (called the superset), but my problem is more reasonable: it is the set of all finite sets. Again, I am not speaking about the superset over here; I am talking about a completely different set.
12. Originally Posted by ThePerfectHacker
AH!!!!!!
But you are assuming that it has a CARDINALITY!!!!
It does not necessarily have a cardinal number. Did Cantor prove that each infinite set has a cardinal number???
I demonstrated that this is false by constructing the set of all finite sets.
No I'm not, Cardinality denotes a relationship between sets.
We say set $A$ has a greater than or equal cardinality to set
$B$ if there exists a one-one mapping from $B$ into $A$.
We say set $A$ has a cardinality equal to that of a set $B$ if there
exists a one-one mapping from $B$ into $A$, and there exists a
one-one mapping from $A$ into $B$.
The proof that the Cardinality of the Power Set of a set is strictly greater
than the cardinality of the set itself is a proof that there is a one-one
mapping from the set into the Power Set, but that there is no such one-one
mapping from the Power Set into the Set itself.
Thus our contradiction stands.
RonL
(note into in this context includes onto)
13. Originally Posted by ThePerfectHacker
How can a set not exist?
Important: you said my problem was the set of all sets, and I know that no such set exists (called the superset), but my problem is more reasonable: it is the set of all finite sets. Again, I am not speaking about the superset over here; I am talking about a completely different set.
Read again. I'm not talking about the universum (or superset, as you are saying). I am talking about $\mathfrak{U}_1$, the set of all ONE-ELEMENT sets.
Originally Posted by ThePerfectHacker
AH!!!!!!
(...), did Cantor prove that each infinite set has a cardinal number???
I don't know what Cantor was thinking about cardinal numbers. However, I have seen two (similar!) approaches to this problem in my set theory book.
We can simply introduce an axiom that every set has a cardinal number. (For every set there exists a cardinal number such that $card(A) = card(B)$ if and only if $A$ has the same number of elements as $B$.)
The second one was to introduce something called a "relation type" (a direct translation from Polish; I was unable to find anything about it in Wikipedia or MathWorld).
Axiom. For any relation system $<A, \mathcal{R}>$, $\mathcal{R} \subset A \times A$ there exists "relation type".
Two relation systems $<A, \mathcal{R}>$ and $<B, \mathcal{S}>$ have the same "relation type" iff they are isomorphic.
The cardinal number of the set $A$ is "relation type" of the system $<A, A \times A>$.
14. Originally Posted by CaptainBlack
No I'm not, Cardinality denotes a relationship between sets.
We say set $A$ has a greater than or equal cardinality to set
$B$ if there exists a one-one mapping from $B$ into $A$.
We say set $A$ has a cardinality equal to that of a set $B$ if there
exists a one-one mapping from $B$ into $A$, and there exists a
one-one mapping from $A$ into $B$.
The proof that the Cardinality of the Power Set of a set is strictly greater
than the cardinality of the set itself is a proof that there is a one-one
mapping from the set into the Power Set, but that there is no such one-one
mapping from the Power Set into the Set itself.
Thus our contradiction stands.
RonL
(note into in this context includes onto)
Ah, but then how did Cantor prove the fundamental property of the power set if it fails in this case with the Superset!!!???
15. Originally Posted by ThePerfectHacker
Ah, but then how did Cantor prove the fundamental property of the power set if it fails in this case with the Superset!!!???
Look at the ZF axioms. The axiom of regularity effectively rules out
your "set of all sets": it is not a set in ZF set theory.
RonL
http://physics.stackexchange.com/questions/34947/does-the-uncertainty-principle-apply-to-photons
# Does the uncertainty principle apply to photons?
Wikipedia claims the following:
More generally, the normal concept of a Schrödinger probability wave function cannot be applied to photons. Being massless, they cannot be localized without being destroyed; technically, photons cannot have a position eigenstate and, thus, the normal Heisenberg uncertainty principle does not pertain to photons.
Edit:
We can localize electrons to arbitrarily high precision, but can we do the same for photons? Several sources say "no." See eq. 3.49 for an argument that says, in so many words, that if we could localize photons then we could define a current density which doesn't exist. (Or something like that, I'll admit I don't fully understand.)
It's the above question that I'd like clarification on.
-
– user10001 Aug 26 '12 at 11:01
## 3 Answers
The relation $p={h\over \lambda}$ applies to photons; it has nothing to do with the uncertainty principle. The issue is localizing the photons, finding out where they are at any given time.
The position operator for a photon is not well defined in any usual sense, because the photon position does not evolve causally, the photon can go back in time. The same issue occurs with any relativistic particle when you try to localize it in a region smaller than its Compton wavelength. The Schrodinger position representation is only valid for nonrelativistic massive particles.
There are two resolutions to this, which are complementary. The standard way out is to talk about quantum fields and deal with photons as excitations of the quantum field. Then you never talk about localizing photons in space.
The second method is to redefine the position of a photon in space-time rather than in space at one time, and to define the photon trajectory as a sum over forward and backward in time paths. This definition is fine in perturbation theory, where it is an interpretation of Feynman's diagrams, but it is not clear that it is completely correct outside of perturbation theory. I tend to think it is fine outside of perturbation theory too, but others disagree, and the precise nonperturbative particle formalism is not completely worked out anywhere, and it is not certain that it is fully consistent (but I believe it is).
In the perturbative formalism, to create a space-time localized photon with polarization $\epsilon$, you apply the free photon field operator $\epsilon\cdot A$ at a given space-time point. The propagator is then the sum over all space-time paths of a particle action. This coincidence between two-point functions and particle paths is the Schwinger representation of Feynman's propagator, and it is also implicit in Feynman's original work. This point of view is downplayed in quantum field theory books, which tend to emphasize the field point of view.
-
I'll have to give this a thorough looking. There's a lot of unfamiliar stuff in here. As far as the De Broglie relation, I meant its use in finding $\Delta p$. – Alec S Aug 26 '12 at 5:40
@AlecS: You can find $\Delta p$, but there is no $\Delta X$ because photons don't have a real position operator defined at one time-slice. The photon concept, like all relativistic particle concepts, is a quantum field picture which is justified in perturbation theory, but hard to justify outside it (but it should be possible). – Ron Maimon Aug 26 '12 at 5:47
I wish I could give this comment more of my time because it looks like you put a lot of thought into it. But on the surface it doesn't seem to answer my question. There are a lot of new words here, but not a lot of new explanation. – Alec S Aug 26 '12 at 8:43
@AlecS: ok here's a rephrasing: photons don't have a position wavefunction, so the uncertainty principle can't be formulated. The photon still has a qualitative uncertainty principle, because it has a space-time localization, just not a 3-dimensional uncertainty principle like a nonrelativistic particle. – Ron Maimon Aug 26 '12 at 9:49
The language you are using is a little tough to parse, but it more-or-less agrees with some of the literature I've seen on the topic. – Alec S Aug 26 '12 at 19:42
Absolutely yes, the uncertainty principle applies to photons nearly identically to how it applies to electrons. To see a great example of a localized traveling wave function which could apply to either a photon or an electron, see the wikipedia article on Wave Packets.
The original wikipedia quote is nonsense, and I have modified the original wikipedia article to remove it.
The energy eigenstates of a photon in free space are also eigenstates of momentum and are monochromatic. So at frequency $f$ the energy is $E=hf$ and the momentum is $p=E/c=hf/c$. The correct statement is "a photon in a momentum eigenstate can't be localized." Guess what, neither can an electron in free space in a momentum eigenstate be localized. If momentum is certain, uncertainty in position is infinite, i.e. can't be localized. As with electrons, so with photons. And electrons have a finite rest mass and therefore finite eigenstates.
So how do I localize a photon? Experimentally, I have a light source with a shutter. I can open the shutter for 1 ns; otherwise it is closed. You can be sure when I do that I have a burst of electromagnetic energy of about 30 cm physical extent along the direction of travel. That burst of energy is traveling at 30 cm/ns. So every photon that made it through that open shutter now has finite position uncertainty, even though its expected position is a function of time, just like a car driving down the road at 100 kph has finite position uncertainty even as its position changes with time.
Theoretically, I create a wave packet which Wikipedia describes beautifully. A localized photon, just a like a localized anything, is no longer monochromatic, is no longer an eigenstate of momentum and energy. No difference here between a photon and an electron.
I am shocked the wikipedia article on the photon has such nonsense in it. I went to wikipedia and removed that paragraph from the article and put a comment in the talk section to describe why.
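The wave-packet picture above can be checked numerically. The following is a throwaway sketch (not tied to any physics library): it samples a Gaussian pulse, computes the spread in position and, via FFT, the spread in wavenumber, and verifies the Fourier uncertainty product $\Delta x\,\Delta k = 1/2$ — localizing the packet in space necessarily broadens its momentum content.

```python
import numpy as np

# Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2)) on a grid.
sigma = 1.0
N, L = 4096, 60.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))

def spread(grid, amplitude):
    """Root-mean-square spread of |amplitude|^2 on the given grid."""
    p = np.abs(amplitude) ** 2
    p /= p.sum()
    mean = (grid * p).sum()
    return np.sqrt(((grid - mean) ** 2 * p).sum())

dx_spread = spread(x, psi)                       # position spread, = sigma

# Wavenumber-space amplitude via FFT; k plays the role of p / hbar.
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
phi = np.fft.fftshift(np.fft.fft(psi))
dk_spread = spread(k, phi)                       # wavenumber spread, = 1/(2 sigma)

print(dx_spread * dk_spread)  # ~ 0.5, the minimum-uncertainty value
```

A Gaussian saturates the bound; any other pulse shape gives a strictly larger product.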
-
> The correct statement is "a photon in a momentum eigenstate can't be localized." Of course -- this is first-year quantum mechanics. Thanks for your explanation, I'm curious to hear what comes up in the talk section of wikipedia. – Alec S Aug 26 '12 at 19:15
Sorry, I'm withdrawing my "answered" check for now. According to several sources (pra.aps.org/pdf/PRA/v79/i3/e032112) (docs.google.com/…) (see just below 3.49) photons cannot be sharply localized at ALL -- even with an arbitrarily high uncertainty in the momentum. – Alec S Aug 26 '12 at 19:40
@AlecS I can't read the PRA ref, don't have login. In the google docs ref, what you cite is an unpublished one line quote from a referee of the paper. I call B.S. on the referee, and challenge you or anyone else to find a sensible refutation to what I say about localization. – mwengler Aug 26 '12 at 21:11
Sorry mwengler, but you are simply incorrect. As Ron mentions below, there is no Schrodinger (that is, position operator) basis for photons. Furthermore, I would advise you to revert your wikipedia edit, because that sentence was in fact completely correct, if obtuse. – genneth Aug 26 '12 at 23:30
@genneth can you give me a reference that goes beyond a single sentence saying it can't be done? I'd love to understand how a photon represented by a wave packet is any different from an electron represented by a wave packet. – mwengler Aug 27 '12 at 0:01
In addition to what was discussed already, and besides the fact the Schrödinger formalism is not relevant for photons, a good place to start in my view is in Roy Glauber's work (or some other introductory text to quantum optics). There, you'd see different uncertainties arising, such as between the photon number and phase, etc...
-
Thanks. I'm having a difficult time understanding Ron Maimon's comment -- I probably need a little more background. – Alec S Aug 26 '12 at 7:07
Frankly, I don't think that explaining something by a bunch of other names (i.e. Feynman's propagator etc.) can produce understanding. Maybe this bud of explanation would work: if you understand why photons are relativistic (they do travel fast...) then you would expect a QM description that is also Lorentz invariant (familiar special relativity?). This means that the equation used will be symmetric under translations (time & space) and rotations. Alas, Schrödinger's equation isn't: it has a 1st derivative in time and 2nd derivatives in space. Hence it cannot describe relativistic particles... – natan Aug 26 '12 at 7:23
Wonderful. But why not the Dirac equation, then? – Alec S Aug 26 '12 at 7:34
Why Dirac? why not Klein Gordon to begin with? (then go to dirac...), and then you'll need to describe an eq that captures the QM nature of the electromagnetic field. This time, a gauge symmetry is needed, specifically, the Abelian U(1) symmetry of a complex number, which reflects the ability to vary the phase of a complex number without affecting observables or real valued functions made from it (such as the energy or the Lagrangian). But with these my friend Alec, we already long left the realms of Schrödinger. I think you'd enjoy the graduate courses in physics if you want to dig more... – natan Aug 26 '12 at 7:59
I'll be there soon enough. But I think I'll get a head start in the upcoming months. Your comment was clear, thank you. – Alec S Aug 26 '12 at 8:37
http://mathoverflow.net/questions/4427/what-is-the-conceptual-significance-of-supercommutativity/4466
## What is the conceptual significance of supercommutativity?
A $\mathbb{Z}/2\mathbb{Z}$-graded algebra is said to supercommute if $xy = (-1)^{|x| |y|} yx$; in other words, odd elements anticommute. Why is this the "right" definition of supercommutativity? (Put another way, why is this the natural tensor product structure on super vector spaces?) Answers from both a categorical or physical point of view would be great.
-
## 2 Answers
The categorical answer is that (in characteristic zero) this is the only way that you can make a suitable symmetric tensor category, other than by using group representations. There is a Tannakian theorem of Deligne to this effect in the algebraic setting.
One of the physical answers is equivalent to the categorical answer. "Parastatistics" is the topic of self-consistent linear actions of the symmetric group on identical quantum-mechanical particles. The parastatistics theorem in physics (or theorems or conjectures; the level of rigor of the real point is not entirely clear) is a lot like Deligne's theorem. It says that parastatistical particles come in two kinds, parafermions and parabosons, and that they can all be modeled as fermions and bosons together with internal state spaces which are group representations.
Bosons and fermions may not look exactly the same as commutative or supercommutative algebras. But they are the same topic, because (if you apply second quantization in reverse) the values of their fields commute or anticommute.
For particles in 2D, the correct group action is the braid group, not the symmetric group. So in this case, the parastatistics theorem does not hold and you can have "anyons". Then the allowed statistics is given by a unitary ribbon tensor category. However, since the category in question is no longer symmetric, there is no clear way to define commutativity; at least, nothing that's clearly important.
Note also that it isn't just that the supply of available symmetric tensor categories comes from category theory and is needed in physics. It's also needed in topology. The most traditional supercommutativity in mathematics is cohomology.
To answer Qiaochu's question below, there's nothing wrong with using the standard switching map $v \otimes w \mapsto w \otimes v$ to define commutativity. It shows up all the time. The point is that the signed switching map $v \otimes w \mapsto (-1)^{|v||w|}w \otimes v$ is another valid and inequivalent symmetric monoidal structure. (The symmetric tensor structure is interpreted as what it means to permute factors of a product, of course.) There is nothing to prevent the signed switching map from arising among topological invariants or in physics, so it does arise. The structure theorems say that all "suitable" choices for the switching map are essentially these two, possibly disguised by a restriction to tensors that are invariant under a group action.
For both good and bad reasons, I was deliberately vague about what it means for the symmetric tensor category to be suitable, in the sense that it will satisfy a structure theorem. You want some extra axioms and properties to hold, some of them related to existence of duals and traces. One version of the structure theorem, due to Deligne, is reviewed in this arXiv paper by Etingof and Gelaki. (The theorem cited as [De2] is the relevant one.)
-
Could you clarify what you mean by "suitable"? I don't see anything wrong with the identification of v x w with w x v except that it's more boring; it seems to give a legitimate symmetric monoidal category. – Qiaochu Yuan Nov 7 2009 at 1:29
1
A followup to Greg's answer, and a more precise notion of suitable in the mathematical context can be found here: <arxiv.org/abs/math/0401347>; (one can seek the references for the original source). Deligne's theorem says that every symmetric tensor category which is not obscenely large (meaning that lengths of Jordan-Holder series grow super-exponentially) is equivalent to representations of a super-group G, and in particular, the objects can be viewed as vector spaces (w/ G action) and the braiding given by either v x w --> w x v, or v x w --> (-1)^|v||w| w x v. – David Jordan Nov 9 2009 at 22:18
This is not a satisfying answer to your question, but one observes that the exterior algebra and Clifford algebras have this kind of commutativity, so it certainly arises "naturally".
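The exterior-algebra case can be checked concretely. Below is a minimal, self-contained sketch (the monomial representation is an illustration of my own, not any library's): basis monomials multiplied with the usual sign-of-permutation rule satisfy exactly the supercommutativity relation $xy = (-1)^{|x||y|}yx$.

```python
from itertools import combinations

def wedge(a, b):
    """Multiply two exterior-algebra basis monomials.

    A monomial is (coefficient, tuple of distinct generator indices).
    Repeated generators give 0; otherwise sort the indices, flipping
    the sign once per adjacent transposition.
    """
    ca, ga = a
    cb, gb = b
    gens = list(ga) + list(gb)
    if len(set(gens)) < len(gens):
        return (0, ())
    sign = 1
    for _ in range(len(gens)):              # bubble sort, counting swaps
        for j in range(len(gens) - 1):
            if gens[j] > gens[j + 1]:
                gens[j], gens[j + 1] = gens[j + 1], gens[j]
                sign = -sign
    return (ca * cb * sign, tuple(gens))

# Check xy = (-1)^{|x||y|} yx for all basis monomials on 4 generators.
monomials = [(1, g) for r in range(5) for g in combinations(range(4), r)]
for xm in monomials:
    for ym in monomials:
        cxy, gxy = wedge(xm, ym)
        cyx, gyx = wedge(ym, xm)
        sgn = (-1) ** (len(xm[1]) * len(ym[1]))
        assert gxy == gyx and cxy == sgn * cyx
print("supercommutativity holds on all basis monomials")
```

Swapping two blocks of $|x|$ and $|y|$ generators costs $|x|\,|y|$ adjacent transpositions, which is exactly where the sign $(-1)^{|x||y|}$ comes from.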
-
http://www.cfd-online.com/W/index.php?title=Gradient-based_methods&diff=12595&oldid=12594
# Gradient-based methods
### From CFD-Wiki
## Revision as of 03:12, 24 January 2011
As the name suggests, gradient-based methods need the gradient of the objective function with respect to the design variables. The gradient can be evaluated by the finite difference method, the linearized method, or the adjoint method. Both the finite difference method and the linearized method have a time cost proportional to the number of design variables, so they are not suitable for design optimization with a large number of design variables. Apart from that, the finite difference method has the notorious disadvantage of subtraction cancellation and is not recommended for practical design applications.
Suppose a cost function $J$ is defined as follows,
$J=J(U,\alpha)$
where $U$ and $\alpha$ are the flow variable vector and the design variable vector respectively. $U$ and $\alpha$ are implicitly related through the flow equation, which is represented by a residual function driven to zero.
${R}(U(\alpha),\alpha)=0$
Finite difference method:
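As an illustration of the cost argument above, a minimal central-difference sketch; the quadratic cost here is a made-up stand-in for $J(U(\alpha),\alpha)$, not a real flow solver. Each design variable needs two cost evaluations, so the total cost grows linearly with the number of design variables.

```python
def fd_gradient(J, alpha, h=1e-5):
    """Central-difference gradient of J at alpha: two cost evaluations
    per design variable, i.e. 2 * len(alpha) evaluations in total."""
    grad = []
    for i in range(len(alpha)):
        up = list(alpha); up[i] += h
        dn = list(alpha); dn[i] -= h
        grad.append((J(up) - J(dn)) / (2 * h))
    return grad

# Hypothetical stand-in cost function; its exact gradient is 2 * alpha.
J = lambda a: sum(x * x for x in a)
alpha = [0.5, -1.0, 2.0]
g = fd_gradient(J, alpha)
print(g)  # close to [1.0, -2.0, 4.0]
```

The division of two nearly equal cost values by a small step $h$ is where the subtraction cancellation mentioned above enters: make $h$ too small and rounding error dominates the quotient.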
Linearized method:
Adjoint method:
http://mathhelpforum.com/advanced-statistics/103144-another-problem-related-random-walk.html
# Thread:
1. ## Another problem related to random walk...
Let ${X_n}$ be a symmetric simple random walk.
What is $P(X_7+X_{12} = X_1 + X_{16})$?
I tried to start this problem by setting a new variable equal to $X_7+X_{12}$ and transferring the problem into the probability that two such variables are equal, but that led nowhere. Is there any property that deals with sums of random walk random variables? It would be great if someone could help me.
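One way to build intuition (a sketch, assuming the steps are iid $\pm 1$): rearrange the event as $X_7 - X_1 = X_{16} - X_{12}$. The two increments use disjoint steps, so they are independent, and by symmetry their difference is distributed like a 10-step walk returning to 0, suggesting the count $\binom{10}{5}/2^{10}$. A quick Monte Carlo agrees:

```python
import math
import random

random.seed(1)

def event_holds():
    """One 16-step symmetric simple random walk; True when
    X_7 + X_12 == X_1 + X_16."""
    pos, X = 0, {}
    for t in range(1, 17):
        pos += random.choice((-1, 1))
        X[t] = pos
    return X[7] + X[12] == X[1] + X[16]

trials = 200_000
est = sum(event_holds() for _ in range(trials)) / trials

# Rearranged, the event is (X_7 - X_1) - (X_16 - X_12) == 0: a sum of
# 10 independent +/-1 steps hitting 0, with probability C(10,5) / 2^10.
exact = math.comb(10, 5) / 2**10
print(est, exact)  # both ~ 0.246
```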
http://math.stackexchange.com/questions/249076/vector-space-or-not/249081
# Vector Space or Not?
I have a question. Suppose that $V$ is the set of all real valued functions that attain a relative maximum or relative minimum at $x=0$. Is $V$ a vector space under the usual operations of addition and scalar multiplication? My guess is that it is not a vector space, but I am not able to give a counterexample.
-
When talking about real valued functions, it's difficult to use the terms "relative minimum" and "relative maximum". For these to make sense, you're going to need the curves to be at least $C^1$. – andybenji Dec 2 '12 at 8:32
You have not accepted any answer yet! – Mhenni Benghorbal Dec 2 '12 at 8:34
@andybenji: What definition of local minimum/maximum are you using? – wj32 Dec 2 '12 at 8:35
@wj32 I'm using the typical one from calculus: A local maximum is where the derivative goes from positive to negative and a minimum is the opposite. See my answer below for more details. – andybenji Dec 2 '12 at 8:46
– wj32 Dec 2 '12 at 9:01
## 3 Answers
As I hinted in my comment above, the terms local maximum and local minimum only really make sense when talking about differentiable functions. So here I show that the set of functions with a critical point (not necessarily a local max/min) at 0 (really at any arbitrary point $a \in \mathbb{R}$) is a vector subspace.
Broaden this slightly to say that $V \subset C^1(-\infty,\infty)$ is the set of differentiable functions on the reals that have a critical point at 0 (i.e. $f'(0) = 0$ for all $f \in V$).
Then it's simple to show that this is a vector space.
If $f,g \in V$ (so $f'(0)=g'(0)=0$) and $r \in \mathbb{R}$, then:
1. $(f+g)'(0) = f'(0) + g'(0) = 0 + 0 = 0$.
2. The derivative of the zero function is zero, and hence evaluates to 0 at 0.
3. $(rf)'(0) = r(f'(0))=r0 = 0$.
And together these imply that $V$ is a vector subspace of $C^1(-\infty,\infty)$.
Robert Israel's answer above is a nice example of why we must define our vector space to have a critical point, not just a max/min at 0.
-
Thank you very much.:) – juniven Dec 2 '12 at 10:15
the terms local maximum and local minimum only really make sense when talking about differentiable functions... Where did you see that? This is far from being true. – Did Dec 7 '12 at 18:33
Consider the functions $$f(x)=\cases{x&\text{if }x<0\\0&\text{otherwise}}$$ and $$g(x)=\cases{0&\text{if }x<0\\x&\text{otherwise.}}$$ Each has a relative extremum at $0$, but $(f+g)(x)=x$ has none.
-
For example, $x^2 + x^3$ and $x^2$ both have relative minima at $0$, but $(x^2 + x^3) - x^2$ does not.
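A quick numerical check of this counterexample (using only the functions named in the answer): both summands sit at a relative minimum at $0$, while their difference $x^3$ takes both signs arbitrarily close to $0$.

```python
f = lambda x: x**2 + x**3   # relative minimum at 0
g = lambda x: x**2          # relative minimum at 0
h = lambda x: f(x) - g(x)   # = x**3: no relative extremum at 0

eps = 1e-3
assert f(eps) > f(0) and f(-eps) > f(0)   # f dips to a minimum at 0
assert g(eps) > g(0) and g(-eps) > g(0)   # so does g
assert h(-eps) < h(0) < h(eps)            # but h changes sign through 0
print("the difference has no relative extremum at 0")
```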
-
A very nice example.:) Many Thanks – juniven Dec 2 '12 at 10:17
http://physics.stackexchange.com/questions/24552/how-do-kinetic-energy-and-linear-momentum-relate?answertab=votes
|
# How do kinetic energy and linear momentum relate?
It took me quite a long time to click my gears in place and even then I'm not sure it's completely correct.
The problem is that I need to understand these concepts (physics concepts; not just these two) with intuition, not only mathematical representations. So $(E_k = \frac{1}{2} mv^2)$ and $(P_l = mv)$ don't tell me much.
Hence, here's how I've been viewing them:
1. Linear momentum is the moving version of inertia; how much it could resist change in its non-zero velocity.
2. Kinetic energy is how much a moving object could influence other objects upon contact.
So $P_l$ is how much force an object need/can take while $E_k$ is how much it can give. All for moving objects.
Am I correct in this view?
PS. I'm aware of the similar questions already posted. No, they don't address what I need.
-
## 1 Answer
So $P_l$ is how much force an object need/can take while $E_k$ is how much it can give. All for moving objects.
Momentum and energy are not forces; I believe I understand what you intended, but it's important not to mince words. Momentum and energy are what they are defined to be mathematically and nothing more. Along those lines, most of the intuition you will need about these equations comes directly from the equations themselves and the conservation laws.
As the velocity of an object increases, how does its momentum change? Well, you know $P_l=mv$, so momentum must increase linearly. How does its $E_k$ change? Again go back to the equation... $\frac{1}{2}mv^2$... it increases as the square of the velocity. Then you know that classically, total energy is conserved, and that momentum is conserved in any collision (while kinetic energy is conserved only in elastic ones)... this tells you how momentum and energy behave when objects interact.
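To make the scaling concrete, here is a small numeric comparison (the heavy-slow versus light-fast numbers are just an illustration):

```python
# Momentum grows linearly with v, kinetic energy as v squared.
def momentum(m, v):
    return m * v

def kinetic_energy(m, v):
    return 0.5 * m * v ** 2

# A 1000 kg object at 1 m/s vs. a 1 kg object at 1000 m/s:
assert momentum(1000, 1) == momentum(1, 1000)                     # equal momenta
assert kinetic_energy(1, 1000) == 1000 * kinetic_energy(1000, 1)  # KE ratio 1000:1
```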
But actually, one could write a book on classical mechanics (actually they have, and they're everywhere) and then you could read all of them but that won't give you the intuition that solving problems provides. My suggestion: solve problems, then solve more problems and then find problems you can't solve and get frustrated, get really frustrated and think. That bit where you get frustrated is the most valuable, as this is how you develop intuition. Intuition requires context and context comes from experience.
-
1
I never said they were forces but rather how much force they need done on them for an effect to show. Namely, why a 1000 kg train at 1 m/s is better than a 1 kg ball at 1000 m/s although both would take the same force and time to stop them although the ball would have a much more destructive effect on any object it contacts because its KE is much higher (1:1,000). – Mussri Apr 30 '12 at 19:00
Objects that aren't accelerating don't apply a force, so both of those objects apply zero force until they come into contact with something that has inertia. "Destructive properties" is ambiguous; it depends on several things, like how the energy is dispersed in the collision. For example, if the ball is made of rubber maybe it will heat up and deform greatly upon collision, which would absorb a lot of its energy. Meanwhile, if the train is completely inelastic then when it collides all of its energy will be transferred kinetically. Otherwise though I agree with your last statement. – acadien May 2 '12 at 21:36
http://mathhelpforum.com/advanced-statistics/121852-sampling-replacement-standard-deviation.html
# Thread:
1. ## Sampling with replacement - standard deviation
Can anyone help me with this problem?
Suppose there are N balls in total. At each sampling, a ball is randomly drawn, marked, and put back. If an already-marked ball is drawn again, it is simply put back without doing anything. Let X denote the number of balls that are marked after n repeated samplings. The expectation of X can be calculated as N*(1 - (1 - 1/N)^n). But how to calculate the standard deviation of X?
2. Originally Posted by southliguang
Can anyone help me with this problem?
Suppose there are N balls in total. At each sampling, a ball is randomly drawn, marked, and put back. If an already-marked ball is drawn again, it is simply put back without doing anything. Let X denote the number of balls that are marked after n repeated samplings. The expectation of X can be calculated as N*(1 - (1 - 1/N)^n). But how to calculate the standard deviation of X?
The probability that a given ball is never drawn, and hence unmarked, after $n$ samplings is $p=(1-1/N)^n$ (this matches the stated expectation $N(1-(1-1/N)^n)$ for the marked count). Treating the number of unmarked balls as $B(N,p)$ gives the approximation $\sqrt{Np(1-p)}$ for the standard deviation; note, though, that the "unmarked" events for different balls are negatively correlated, so this binomial value overestimates the true standard deviation. The standard deviation of the number of marked balls is equal to that of the number of unmarked balls.
CB
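For what it's worth, the expectation can be checked by simulation, and the exact variance follows from indicator variables (with $q_1=(1-1/N)^n$ the probability a given ball is unmarked and $q_2=(1-2/N)^n$ the probability two given balls are both unmarked); the binomial value is only an upper bound:

```python
# Monte Carlo check of E[X] = N*(1 - (1 - 1/N)^n), plus the exact standard
# deviation from indicator variables. The binomial approximation ignores
# the negative correlation between balls and so overestimates the SD.
import random
import statistics

def simulate(N, n, trials=20000, seed=1):
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        marked = set()
        for _ in range(n):
            marked.add(rng.randrange(N))   # draw, mark, replace
        counts.append(len(marked))
    return counts

N, n = 50, 30
counts = simulate(N, n)

q1 = (1 - 1 / N) ** n            # P(a given ball is never drawn)
q2 = (1 - 2 / N) ** n            # P(two given balls are never drawn)
expected = N * (1 - q1)
exact_sd = (N * q1 * (1 - q1) + N * (N - 1) * (q2 - q1 ** 2)) ** 0.5
binom_sd = (N * q1 * (1 - q1)) ** 0.5

assert abs(statistics.mean(counts) - expected) < 0.1
assert abs(statistics.stdev(counts) - exact_sd) < 0.1
assert statistics.stdev(counts) < binom_sd   # binomial overestimates the SD
```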
http://stats.stackexchange.com/questions/4131/comparing-model-fits-across-a-set-of-nonlinear-regression-models
Comparing model fits across a set of nonlinear regression models
CONTEXT: I am modelling the relation between time (1 to 30) and a DV for a set of 60 participants. Each participant has their own time series. For each participant I am examining the fit of 5 different theoretically plausible functions within a nonlinear regression framework. One function has one parameter; three functions have three parameters; and one function has five parameters.
I want to use a decision rule to determine which function provides the most "theoretically meaningful" fit. However, I don't want to reward over-fitting.
Over-fitting seems to come in two varieties. One form is the standard sense whereby an additional parameter enables slightly more of the random variance to be explained. A second sense is where there is an outlier or some other slight systematic effect, which is of minimal theoretical interest. Functions with more parameters sometimes seem capable of capturing these anomalies and get rewarded.
I initially used AIC, and I have also experimented with increasing the penalty for parameters. In addition to the standard $2k$ penalty in $\mathit{AIC}=2k + n[\ln(2\pi \mathit{RSS}/n) + 1]$, I've also tried $6k$ (what I call AICPenalised). I have inspected scatter plots with fit lines overlaid and the corresponding recommendations based on AIC and AICPenalised. Both AIC and AICPenalised provide reasonable recommendations. About 80% of the time they agree. However, where they disagree, AICPenalised seems to make recommendations that are more theoretically meaningful.
QUESTION: Given a set of nonlinear regression function fits:
• What is a good criterion for deciding on a best fitting function in nonlinear regression?
• What is a principled way of adjusting the penalty for number of parameters?
-
Quick clarification: Do you want to assume that exactly one functional form should fit all of the participants, or do some participants have one form and some another? If one of the functional form fits them all, do the parameter values get to vary across participants, or not? – conjugateprior Nov 2 '10 at 9:01
@Conjugate prior: No. It is quite clear from the data that the best functional form varies across participants. I am running separate nonlinear regressions. – Jeromy Anglim Nov 2 '10 at 9:07
1 Answer
For each participant, compute the cross-validated (leave one out) prediction error per functional form and assign the participant the form with the smallest one. That should do something to keep the overfitting under control.
That approach ignores higher-level problem structure: the population has groups that are assumed to share a functional form, so data from one participant with a particular form is potentially useful for estimating the parameters of another with the same form. But it's a start, if not a finish, for the analysis.
-
http://mathhelpforum.com/calculus/212952-limit-indeterminate-format-0-infinity.html
# Thread:
1. ## Limit of indeterminate in format 0^infinity.
Hi everyone, we are asked to evaluate $\lim_{x \to 0^+}(\ln(x+1))^{1/x}$. Now what I did is I set $y=(\ln(x+1))^{1/x}$ then took the natural log of both sides to bring down the exponent $1/x$: $\lim_{x \to 0^+}\ln y=\lim_{x \to 0^+}\frac{\ln(\ln(x+1))}{x}$. From there on I can't seem to figure out how to put the function in the form 0/0 or infinity/infinity to use L'Hopital's rule. Any one got any ideas? Thanks y'all in advance!
2. ## Re: Limit of indeterminate in format 0^infinity.
Originally Posted by EliteAndoy
Hi everyone, we are asked to evaluate $\lim_{x \to 0^+}(\ln(x+1))^{1/x}$. Now what I did is I set $y=(\ln(x+1))^{1/x}$ then took the natural log of both sides to bring down the exponent $1/x$: $\lim_{x \to 0^+}\ln y=\lim_{x \to 0^+}\frac{\ln(\ln(x+1))}{x}$. From there on I can't seem to figure out how to put the function in the form 0/0 or infinity/infinity to use L'Hopital's rule.
Well, that's the whole point, isn't it? The denominator goes to 0 but the numerator doesn't! What does the numerator go to? Dividing by smaller and smaller values in the denominator just makes the quotient larger and larger in magnitude.
Any one got any ideas? Thanks y'all in advance!
3. ## Re: Limit of indeterminate in format 0^infinity.
Hi EliteAndoy!
Alternatively, you can write
$$\lim_{x \to 0^+} (\ln(1+x))^{1/x} = \lim_{k \to \infty} \left(\ln(1+10^{-k})\right)^{10^{k}}$$
Furthermore, we have:
$\frac x 2 \le \ln(1+x) \le x \qquad \text{if } 0\le x \le 1$
Can you find an upper and lower bound for the limit using this?
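A numerical check of this squeeze (it suggests the limit is 0, since both bounds collapse):

```python
# Check that (ln(1+x))^(1/x) -> 0 as x -> 0+, squeezed between
# (x/2)^(1/x) and x^(1/x) via the inequality x/2 <= ln(1+x) <= x.
import math

vals = []
for k in range(1, 7):
    x = 10.0 ** (-k)
    val = math.log(1 + x) ** (1 / x)
    lo, hi = (x / 2) ** (1 / x), x ** (1 / x)
    assert lo <= val <= hi     # the squeeze holds numerically
    vals.append(val)

assert vals[-1] < 1e-50        # the sequence is collapsing toward 0
```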
4. ## Re: Limit of indeterminate in format 0^infinity.
Now I get it. By the way, I forgot to say that we are not just asked to evaluate the limit, but rather to prove it. Thanks everyone for the reply, really helped me out understanding this problem.
http://www.physicsforums.com/showthread.php?p=4219653
Physics Forums
Recognitions:
Science Advisor
## Atomic Excitation transition energy
Quote by Sekonda Upon using more accurate values I attain $$n-\delta_{i}=8.33$$
Then you are still not using values which are accurate enough. Using the values given in the exercise I get $$n-\delta_{i}=8.1286$$ which is a rather good match.
Indeed that would be, I could have sworn I was using the exact numbers - it also depends on what you take for Planck's constant. I was using 6.63 - though I'm guessing 6.626 would be better; I'll use a more precise speed of light as well... Thanks man! SK
Using the Planck constant as 6.626*.... and the speed of light as 2.99*.... I get that $$n-\delta_{i}=8.0094$$ so maybe it's a d-orbital of the n=8 level?
Recognitions: Science Advisor I have given you the result above. As you are still off, the values you use are still not exact enough. Maybe your conversion from J to eV for h is not exact? You can use the standard value h = 4.1356675...×10^-15 eV·s directly. Also, there are numerous wavelength-to-energy converters around the internet, so you can crosscheck whether your conversion is good or not. You need at least two significant digits in the outcome to get an accurate result, so it is a good idea to use more than two significant digits for the constants you start with. You will spend much more time tracking down the mistakes from using numbers which are not exact enough than you save by typing one or two digits less.
Yeah I see now, I used more precise values for the constants and got your result. Though I'm not sure we'd get these constants to this degree of accuracy under examination conditions nor have we been told to expect to remember them to this degree! I'm guessing they were probably just looking for the correct application of physics. I guess that'll do! Thanks again, SK
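For reference, the conversion the answer recommends crosschecking can be done directly in eV; the 500 nm test value below is arbitrary, not the exercise's wavelength:

```python
# Photon wavelength -> energy using h in eV*s, avoiding the J -> eV step.
h_eVs = 4.135667696e-15    # Planck constant in eV*s
c = 2.99792458e8           # speed of light in m/s

def photon_energy_eV(wavelength_nm):
    return h_eVs * c / (wavelength_nm * 1e-9)

# Rule of thumb: E[eV] ~ 1239.84 / lambda[nm]
assert abs(photon_energy_eV(500.0) - 1239.84 / 500.0) < 1e-3

# Truncating the constants to three digits already shifts the third
# significant figure, which is what the answer warns about:
rough = 4.14e-15 * 3.0e8 / 500e-9
assert abs(rough - photon_energy_eV(500.0)) > 1e-3
```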
http://physics.stackexchange.com/questions/1906/kubo-formula-for-quantum-hall-effect
Kubo Formula for Quantum Hall Effect
I'm trying to understand the Kubo Formula for the electrical conductivity in the context of the Quantum Hall Effect.
My problem is that several papers, for instance the famous TKNN (1982) paper, or an elaboration by Kohmoto (1984), write the off-diagonal (Hall) entries of the conductivity tensor in the form
$$\sigma_{xy}(\omega \to 0) = \frac{ie^2}{\hbar} \sum_{E^a < E_F < E^b} \frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle - \langle a|v_y|b \rangle \langle b|v_x|a \rangle}{(E^a - E^b)^2} .$$
This is the static limit $\omega\to 0$ and low temperature $T\to 0$. The sum goes over all eigenstates $|a\rangle$ and $|b\rangle$ of the single-particle Hamiltonian. $E_F$ is the Fermi energy. $v_x$ and $v_y$ are the single-particle velocity operators.
However, these papers don't derive this equation, which is unfortunate because the Kubo formula is usually not presented in this form. I have found (and succeeded in rederiving) the following variation instead
$$\sigma_{xy}(\omega+i\eta) = \frac{-ie^2}{V(\omega + i\eta)} \sum_{a,b} f(E^a) \left( \frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle}{\hbar\omega + i\eta + E^a - E^b} + \frac{\langle a|v_y|b \rangle \langle b|v_x|a \rangle}{-\hbar\omega - i\eta + E^a - E^b} \right).$$
This is formula (13.37) from Ashcroft, Mermin, though they don't actually prove it. $f(E)$ is the Fermi distribution. A nice derivation is given in Czycholl (German).
Now, my question is, obviously
How to derive the first formula from the second?
I can see that the first equation arises as the linear term when writing the sum as a power series in $\omega$, but why doesn't the constant term diverge?
-
I'm not at all sure of this, but: I think the issue may be that the Hall conductivity is defined as an antisymmetrized component of the conductivity tensor, i.e. the quantity that the first formula applies to may actually be $\sigma_{xy} - \sigma_{yx}$. Does this sound plausible? – Matt Reece Dec 15 '10 at 6:45
I'm not sure, but a related observation is that the conductivity tensor should probably be antisymmetric in the first place. – Greg Graviton Dec 15 '10 at 8:49
That doesn't sound right to me; there should be ordinary (non-Hall) conductivities on the diagonal. – Matt Reece Dec 15 '10 at 15:03
I have no clue. Could you give an example of a material with diagonal conductivities? Or any general pointers on this stuff? After all, diagonal conductivities are strange because the current flows perpendicular to the applied electric field. – Greg Graviton Dec 15 '10 at 21:08
Found it! A slight variation of an argument by Czycholl can be used to show that the diverging term actually vanishes. I'll write it up soon. – Greg Graviton Jan 13 '11 at 10:58
1 Answer
The first formula indeed follows from the second formula if we let $\omega\to0$. To see that, expand the fractions as
$$\frac1{\pm\hbar\omega + E^a - E^b} = \frac1{E^a-E^b}\left(1 \mp \frac{\hbar\omega}{E^a-E^b}\right) + \mathcal O(\omega^2)$$
to obtain $\sigma_{xy} = \sigma^1 + \sigma^2$ as the sum of a potentially divergent term
$$\sigma^1 = \frac{-ie^2}{V\omega} \sum_{a,b} f(E^a) \frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle + \langle a|v_y|b \rangle \langle b|v_x|a \rangle}{E^a - E^b}$$
and a term that looks like the first formula
$$\sigma^2 = \frac{-ie^2\hbar}{V} \sum_{a,b} f(E^a) \frac{- \langle a|v_x|b \rangle \langle b|v_y|a \rangle + \langle a|v_y|b \rangle \langle b|v_x|a \rangle}{(E^a - E^b)^2} .$$
To see that the first term vanishes instead of diverging, we have to use the Heisenberg equation of motion $v_x = \frac{d}{dt}x = [H_0,x]$ which gives
$$\langle a | v_x | b \rangle = \langle a | H_0 x - x H_0 | b \rangle = (E^a-E^b) \langle a | x | b \rangle$$
and thus
$$\langle a|v_x|b \rangle \langle b|v_y|a \rangle + \langle a|v_y|b \rangle \langle b|v_x|a \rangle = (E^a-E^b) (\langle a|x|b \rangle \langle b|v_y|a \rangle - \langle a|v_y|b \rangle \langle b|x|a \rangle) .$$
The factors $(E^a-E^b)$ cancel and the remaining sum over $b$ becomes a sum over the identity $\sum_b |b\rangle\langle b| = 1$. Thus, we arrive at
$$\sigma^1 = \frac{-ie^2}{V\omega} \sum_{a} f(E^a) \langle a|xv_y - v_yx |a \rangle = 0 .$$
since the commutator $[x,v_y]$ vanishes.
To see that the second term is correct, we have to get the summation indices right. To do that, we have to rearrange the summation to obtain
$$\sigma^2 = \frac{ie^2\hbar}{V} \sum_{a,b} (f(E^a)-f(E^b))\frac{\langle a|v_x|b \rangle \langle b|v_y|a \rangle}{(E^a - E^b)^2} .$$
In the limit $T\to0$, the difference of Fermi-Dirac distributions $f(E^a)-f(E^b)$ will be equal to
• $1$ if $E^a < E_F < E^b$
• $-1$ if $E^b < E_F < E^a$
• $0$ otherwise
Using this and rearranging the summation again gives the Kubo formula in the first form.
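As a small numeric sanity check of the small-$\omega$ expansion used at the start of this derivation (the value standing in for $E^a-E^b$ is arbitrary):

```python
# Check 1/(w + d) = (1/d)*(1 - w/d) + O(w^2) for small w.
d = 0.7                       # stands in for E^a - E^b (arbitrary)
for w in [1e-2, 1e-3, 1e-4]:
    exact = 1.0 / (w + d)
    approx = (1.0 / d) * (1.0 - w / d)
    assert abs(exact - approx) < 2.0 * w ** 2 / d ** 3   # error is O(w^2)
```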
-
http://physics.stackexchange.com/questions/29902/why-is-the-center-of-mass-of-2-bodies-at-the-focus-of-their-elliptical-orbits?answertab=oldest
Why is the center-of-mass of 2 bodies at the focus of their elliptical orbits?
Why is the center-of-mass of 2 bodies (which interact only via Newtonian gravity) located at a focus of each of the elliptical orbits?
I know that when there are no external forces, the center of mass moves at a constant velocity, but that doesn't explain it.
-
2
"they orbit around their center of mass" That's a bit of an oversimplified statement. You do know that orbits are, in general, not circular with a well-defined center point? – leftaroundabout Jun 11 '12 at 13:53
The important thing here is that the center of mass is an invariant in a frame where it is initially at rest. This actually follows from the definition of the thing and Newton's laws, but I've become convinced that the point is subtle and important enough (if basic) to merit a careful answer. – dmckee♦ Jun 11 '12 at 14:00
If the orbit is elliptic, the center of mass will be at one of the ellipse's focuses. But why? – Omer Jun 11 '12 at 14:55
Wait. Is the question about the invariance of the CoM or about the shape of two body orbits? – dmckee♦ Jun 11 '12 at 16:22
1
The question is about the shape of two body orbits - why the CoM is one of the two ellipses' focuses. – Omer Jun 11 '12 at 17:13
2 Answers
I'm going to assume that Omer is specifically asking why the centre of mass is at the focus (well, one of the foci) of the orbits. Omer, if this isn't what you meant please ignore what follows because it's completely irrelevant!
If you have a body moving in a central field (i.e. the force is always pointing towards the centre), and the field is inversely proportional to the square of the distance from the body to the centre, then the orbit is an ellipse with the centre of the field at one of the foci. For now let's just assume this and we can come back to prove it later.
So if we can show that both of the bodies feel a central inverse square force, with the COM at the centre, this guarantees the orbits will be ellipses with a focus at the COM. Given that the force is due to the two bodies attracting each other, and that both bodies are orbiting around, it may seem a bit odd that each body just feels a central inverse square force, but actually this is easy to show.
The picture shows the two masses and the COM. I haven't shown the velocities because it doesn't matter what they are. For now let's just consider $m_1$ and calculate the force on it. By Newton's law this is simply:
$$F_1 = \frac{Gm_1m_2}{(r_1 + r_2)^2}$$
First, is this force central? We know the centre of mass doesn't move. For two bodies this seems obvious to me, but in any case dmckee proved it in his answer. If the COM doesn't move it must lie on the line joining the two masses, otherwise there'd be a net force on it. So the force $F_1$ must always point towards the COM, i.e. the force is central.
Second, is this an inverse square law force, i.e. is $F_1 \propto 1/r_1^2$? Well, the definition of the centre of mass is that:
$$m_1r_1 = m_2r_2$$
or
$$r_2 = r_1 \frac{m_1}{m_2}$$
If we substitute for $r_2$ in the expression for $F_1$ we get:
$$F_1 = \frac{Gm_1m_2}{(r_1 + r_1(m_1/m_2))^2}$$
or with a quick rearrangement:
$$F_1 = \frac{1}{(1 + m_1/m_2)^2} \frac{Gm_1m_2}{r_1^2}$$
and this shows that $F_1$ is inversely proportional to $r_1^2$. I won't work through it, but it should be fairly obvious that exactly the same reasoning applies to $F_2$ so:
$$F_2 = \frac{1}{(1 + m_2/m_1)^2} \frac{Gm_1m_2}{r_2^2}$$
This is the key result. Even though the two bodies are whizzing around each other, each body just behaves as if it were in a static gravity field, but the strength of the field is reduced by a factor of $(1 + m_1/m_2)^2$ for $m_1$ or $(1 + m_2/m_1)^2$ for $m_2$. This applies to all two body systems, even such unequal ones as the Sun and the Earth (ignoring perturbations from Jupiter etc).
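A quick numeric check of the algebra behind this result (sample masses and distances are arbitrary; $G$ in SI units):

```python
# With r2 = r1*m1/m2 (definition of the CoM), the direct two-body force
# G*m1*m2/(r1+r2)^2 equals the effective central-field form
# G*m1*m2/r1^2 * 1/(1 + m1/m2)^2.
G = 6.674e-11  # gravitational constant, SI units

for m1, m2, r1 in [(5.0, 3.0, 2.0), (1.0, 1000.0, 0.5), (7.2, 7.2, 1.3)]:
    r2 = r1 * m1 / m2
    direct = G * m1 * m2 / (r1 + r2) ** 2
    central = G * m1 * m2 / r1 ** 2 / (1 + m1 / m2) ** 2
    assert abs(direct - central) <= 1e-12 * direct
```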
I did start by assuming that a body in a central gravity field orbits in an ellipse with the foci at the centre, but I'm going to wimp out of proving this since it would double the length of this answer and you'd all go to sleep. The proof is easily Googled.
NB this only applies to two body systems. For three or more body systems the orbits are generally not ellipses with the centre of mass at the focus.
-
Thanks! It was exactly what I searched for. – Omer Jun 11 '12 at 18:25
This actually follows from Newton's laws (and it only holds true for an isolated system).
For simplicity we'll consider an isolated system of two bodies on a line. Call their masses $m_1$ and $m_2$ and put them at $x_1$ and $x_2$. Now compute their center of mass: $$X = \frac{\sum_i m_i x_i}{\sum_i m_i} = \frac{1}{M} \sum_i m_i x_i$$
Assume that there is some force, $F$, between them. Newton's third law tells us that the force on body one due to body two, $F_1$, is equal and opposite to that on body two due to body one: $F_2 = -F_1$.
This is enough information to compute the acceleration of the CoM: $$A = X'' = \frac{1}{M} \sum_i m_i a_i = \frac{1}{M} \sum_i m_i \frac{F_i}{m_i},$$ cancel the masses inside the sum and take note of the relation between the forces and you get $$A = 0 .$$
For more bodies and more dimensions the math gets more complicated but the result is the same.
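A quick simulation illustrating the invariance (leapfrog integrator, arbitrary units with $G=1$; the masses and velocities below are chosen so the total momentum is zero):

```python
# Two gravitating bodies integrated with leapfrog: the centre of mass,
# initially at rest, should not move (up to floating-point roundoff).
import numpy as np

G, dt, steps = 1.0, 1e-3, 20000
m = np.array([3.0, 1.0])
x = np.array([[0.0, 0.0], [1.0, 0.0]])     # positions
v = np.array([[0.0, -0.25], [0.0, 0.75]])  # velocities; total momentum = 0

def accel(x):
    r = x[1] - x[0]
    f = G * m[0] * m[1] * r / np.linalg.norm(r) ** 3  # force on body 0
    return np.array([f / m[0], -f / m[1]])            # equal and opposite

com0 = (m[:, None] * x).sum(axis=0) / m.sum()
a = accel(x)
for _ in range(steps):            # kick-drift-kick leapfrog
    v += 0.5 * dt * a
    x += dt * v
    a = accel(x)
    v += 0.5 * dt * a

com = (m[:, None] * x).sum(axis=0) / m.sum()
assert np.allclose(com, com0, atol=1e-8)  # CoM has not moved
```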
-
I think Omer is specifically asking why the centre of mass is at the focus of the orbits – John Rennie Jun 11 '12 at 15:11
@JohnRennie: This is the reason they move "around" the CoM. Any movement away on the part of mass one must be mirrored by a mass-ratio weighted movement away on the part of mass two (as long as only internal forces are acting). And likewise for movements toward, and in all relevant dimensions. I know it seems like a different question, but it's not. That is the subtle point that makes this question interesting and pedagogically important. – dmckee♦ Jun 11 '12 at 16:19
Ah...seeing the OP's comment above maybe I have answered the wrong question. – dmckee♦ Jun 11 '12 at 16:21
http://physics.stackexchange.com/questions/32383/schrodinger-equation-from-klein-gordon
# Schrodinger equation from Klein-Gordon?
One can view QM as a 1+0 dimensional QFT, where the fields depend only on time and so are just operators, and I know a way to derive Schrödinger's equation from the Klein-Gordon one.
Assuming a field $\Phi$ with a low energy $E \approx m$, with $m$ the mass of the particle, by defining $\phi$ such that $\Phi(x,t) = e^{-imt}\phi(x,t)$ and expanding the equation $$(\partial^2 + m^2)\Phi = 0,$$ then neglecting the $\partial_t^2 \phi$ term, one finds the familiar Schrödinger equation: $$i\partial_t\phi=-\frac{\Delta}{2m}\phi.$$ Still, I am not fully satisfied about the transition field $\rightarrow$ wave function, even if we suppose that the number of particles is fixed, and the field now acts on a finite-dimensional Hilbert space (a subpart of the complete Fock space for a specific number of particles). Does someone have a reference to another proposition/argument for this derivation? Thank you.
Edit: for reference, the previous calculations is taken from Zee's QFT book
-
## 1 Answer
I think you are mixing up two different things. Namely:
1. On the one hand, you can see QM as a 0+1 (one temporal dimension) QFT, where the position operators (and their conjugate momenta) in the Heisenberg picture play the role of the fields (and their conjugate momenta) in the QFT. You can check, for instance, that spatial rotational symmetry in the quantum mechanical theory is translated into an internal symmetry in the QFT.
2. On the other hand, you can take the non-relativistic limit (by the way, an ugly name, because Galilean relativity is as relativistic as special relativity) of the Klein-Gordon or Dirac theory to get the Schrödinger QFT, where $\phi$ (in your notation) is a quantum field instead of a wave function. There is a chapter in Srednicki's book where this issue is raised in a simple and nice way. There, you can also read about spin-statistics and the wave function of multi-particle states. Let me add some equations that hopefully clarify (I'm using your notation and of course there can be wrong factors, units, etc.):
The quantum field is: $$\phi \sim \int d^3p \, a_p e^{-i(p^2/(2m) \cdot t - p \cdot x)}$$
The Hamiltonian is:
$$H \sim i\int d^3x \left( \phi^{\dagger}\partial_t \phi - (1/2m)\partial _i \phi ^{\dagger} \partial ^i \phi \right) \, \sim \int d^3p \, p^2/(2m) \,a^{\dagger}_p a_p$$
The evolution of the quantum field is given by:
$$i\partial _t \phi \sim [\phi, H] \sim -\nabla ^2 \phi /(2m)$$
1-particle states are given by:
$$|1p> \sim \int d^3p \, \tilde f(t,p) \, a^{\dagger}_p \, |0>$$
(one can analogously define multi-particle states)
This state verifies the Schrödinger equation:
$$H \, |1p>=i\partial _t \, |1p>$$ iff
$$i\partial _t \, f(t,x) \sim -\nabla ^2 f(t,x) /(2m)$$
where $f(t,x)$ is the spatial Fourier transformed of $\tilde f (t,p)$.
$f(t,x)$ is a wave function, while $\phi (t, x)$ is a quantum field.
This is the free theory, one can add interaction in a similar way.
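As a small consistency check of the dispersion used above (with $\hbar=1$), a plane wave $e^{i(px - p^2t/2m)}$ should satisfy the free Schrödinger equation; the values of $p$ and $m$ below are arbitrary:

```python
# A plane wave exp(i(p*x - p^2 t/(2m))) should satisfy
# i df/dt = -(1/(2m)) d2f/dx2 (hbar = 1); check by finite differences.
import cmath

m, p = 1.0, 0.8            # arbitrary mass and momentum
f = lambda t, x: cmath.exp(1j * (p * x - p * p * t / (2 * m)))

h = 1e-4
t0, x0 = 0.3, 0.7
dfdt = (f(t0 + h, x0) - f(t0 - h, x0)) / (2 * h)
d2fdx2 = (f(t0, x0 + h) - 2 * f(t0, x0) + f(t0, x0 - h)) / h ** 2

# i*df/dt + (1/(2m))*d2f/dx2 should vanish:
assert abs(1j * dfdt + d2fdx2 / (2 * m)) < 1e-5
```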
-
Thanks for the update, but I am specifically after a limiting operation that would lead me to a "first quantization scheme" from a "second quantization scheme"; i.e., is it enough, from the fact that I recover a Schrödinger equation and can then construct a conserved probability current with a positive density component ($j^0$), to reinterpret the field as a wave function whose modulus squared gives probabilities? – toot Jul 19 '12 at 17:41
I'm not sure what you are looking for. $f$ is a wave function that verifies the Schr. equation. The expectation value of $\phi$ is also a function that verifies the Schr. equation. So, as long as you can normalize them you get a quantum mechanical probabilistic interpretation. Have I answered your question? – drake Jul 19 '12 at 22:48
I think you did =) Thank you. – toot Jul 20 '12 at 10:12
@drake: The expectation value of $\phi$ is not the correct way to extract wavefunction from field. The right way is to consider the state $\int \psi(x)\phi^\dagger(x)$ where $\psi$ is a number and $\phi^\dagger(x)$ is the nonrelativistic field. – Ron Maimon Jul 27 '12 at 4:25
@toot: The comment above is the correspondence between nonrelativistic fields and wavefunctions. If you smear a nonrelativistic creation field with a function, you produce a particle with wavefunction $\psi$. – Ron Maimon Jul 27 '12 at 4:26
http://physics.stackexchange.com/questions/54342/historic-derivation-of-wiens-law
# Historic derivation of Wien's law
Every book I've read, including a lot of websites, Wikipedia, etc, say that Wien derived this:
$$\rho_\nu(T)=\rho(\nu,T)=\nu^3f\left(\frac{\nu}{T}\right)$$
where $\rho_\nu(T)$ is the spectral energy density of a black body at a given temperature and electromagnetic wave frequency. And everywhere it's mentioned that he proved this using thermodynamical arguments in a paper from 1893. I haven't been able to find that paper or that thermodynamical argument, which is what I'm interested in. I've been looking for a few days already.
Does anybody know how he did this?
-
See webpages.uidaho.edu/~crepeau/ht2009-88060.pdf, which claims that the distribution law was derived by Wien in tandfonline.com/doi/abs/10.1080/14786449708620983 – joshphysics Feb 18 at 22:54
@joshphysics: Those look strong. To avoid link-only answers, could you fill in the argument as an answer? – Emilio Pisanty Feb 18 at 23:44
(That 1901 picture of Planck is priceless, by the way. A far shout from the usual "statesman of science" pictures.) – Emilio Pisanty Feb 18 at 23:44
@joshphysics Thanks. That article was very interesting. I will try to find that article for free... maybe my university has some copies. Although that article is from 1897, while all sources say the relation was derived in his 1893 paper: Eine neue Beziehung der Strahlung schwarzer Körper zum zweiten Hauptsatz der Wärmetheorie, which is also cited in that article, and which sadly seems to have no translated version. – MyUserIsThis Feb 19 at 7:11
show 4 more comments
## Interpretation and assumptions
I expect you want to see the relation $\rho(\nu) = \frac{8\pi h}{c^3}\nu^3$. The derivation requires the assumption of an ideal black body (total absorption and no reflection) of size $L^3\gg\lambda^3$ and the spectral energy density should be homogeneous and isotropic.
## Derivation
I had a lecture in German which explained this topic very well.
1. We calculate the eigenmodes of a box, where the mode index is $j^2 = j_x^2+j_y^2+j_z^2=\left(\frac{2\nu}{c}L\right)^2$ where we used the condition of resonance.
2. We calculate the number of modes $G(\nu)=2\frac{1}{8}\frac{4\pi}{3}j^3$ in the frequency spectrum between $0$ and $\nu$.
3. We calculate the spectral mode density $g(\nu)=\frac{\partial G(\nu)}{\partial\nu}$. The spectral energy density $u(\nu)$ is now the product of $g(\nu)$ and the energy per mode $\epsilon_{Wien}=h\nu e^{-\frac{h\nu}{k_B T}}$ (from classical Boltzmann statistics) per volume $L^3$.
4. The rest (simple math) is up to you, or see the German reference. You get the relation from above and $$u(\nu)=\rho(\nu)e^{-\frac{h\nu}{k_B T}}\;.$$
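Not the historical argument, but a quick numeric illustration of steps 3 and 4: in the dimensionless variable $y = h\nu/k_BT$, the distribution above is proportional to $y^3 e^{-y}$, whose maximum sits exactly at $y=3$ (setting the derivative $(3y^2-y^3)e^{-y}$ to zero). A crude grid search confirms this:

```python
import math

# Wien's distribution in the dimensionless variable y = h*nu/(k_B*T):
# u is proportional to y^3 * exp(-y).  Its maximum is at y = 3, since
# d/dy [y^3 e^-y] = (3y^2 - y^3) e^-y = 0  =>  y = 3.
def wien(y):
    return y**3 * math.exp(-y)

# crude grid search for the peak over y in (0, 20)
ys = [i * 1e-3 for i in range(1, 20000)]
y_peak = max(ys, key=wien)
print(y_peak)  # ~3.0 (to grid resolution)
```

This is the frequency-space analogue of Wien's displacement law: the peak frequency scales linearly with $T$.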
-
The claim "the energy per mode is $\epsilon=h\nu\exp(-h\nu/kT)$" is much stronger than anything you've assumed and it is essentially equivalent to Planck's energy quantization condition. As such it would have been unknown to Wien, for whom the equipartition theorem would have governed this. – Emilio Pisanty Feb 18 at 23:40
My derivation is different from the traditional one. I thought you were interested in the actual physics and such understanding. It uses basic knowledge about classical Boltzmann statistics. I am not an expert in the history of physics. If you want to derive that too, you may do so, but I won't do it today. Good night. – strpeter Feb 18 at 23:58
No, your derivation is equivalent to the traditional one. Wien's displacement law is much weaker: it does not give a specific result for $\rho_\nu$ but only constrains the form it may take in terms of some unknown function $f$. Finding $f$ means deriving either Wien's distribution law or the full Planck law, both of which are stronger than the question at hand. – Emilio Pisanty Feb 19 at 0:43
Thanks for your answer, although I was more interested in the historical derivation of the weak version of the function as I wrote it in the question. Just the way Wien derived it from maxwell's electromagnetism and classical thermodynamics. – MyUserIsThis Feb 19 at 7:12
http://mathoverflow.net/questions/93002?sort=newest
## Finite sums of prime numbers $\geq x$
Let $S_x$ be the set of finite sums of prime numbers $\geq x$. In other words, let $S_x$ be the submonoid of `$(\mathbf{Z}_{\geq 0},+)$` generated by the set $\mathcal{P}_{\geq x}$ of prime numbers $\geq x$.
It is easy to see that $S_x$ contains every sufficiently large integer. This follows from the classical fact that given two coprime integers $a$ and $b$, every sufficiently large integer, in fact every integer $\geq (a-1)(b-1)$, is of the form $ma+nb$ for some $m, n \in \mathbf{Z}_{\geq 0}$. See for example this page.
Let $N_x$ be the largest integer which is not in $S_x$.
Examples :
If $x=2$ then `$S_2 = \{0\} \cup \mathbf{N}_{\geq 2}$` so that $N_2=1$.
If $x=3$ then `$S_3 = \{0,3\} \cup \mathbf{N}_{\geq 5}$` so that $N_3=4$.
By definition, we have $N_x \geq x-1$ (in fact parity considerations imply that $N_x \geq 2x-2$ for $x \geq 3$).
On the other hand, given that there are at least two primes in the interval $[x,2x]$, the above classical fact implies that $N_x \ll x^2$.
What is the asymptotic behaviour of $N_x$ as $x \to \infty$ ?
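For what it's worth, $N_x$ is easy to compute for small $x$ with a coin-problem style dynamic program; the search bound $4x^2$ is safe by the classical fact quoted above applied to two coprime primes in $[x,2x]$. This sketch (my own, not from the question) reproduces $N_2=1$ and $N_3=4$:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def N(x):
    """Largest integer that is not a finite sum of primes >= x."""
    bound = max(20, 4 * x * x)            # safe by the Frobenius-number bound
    ps = [p for p in primes_up_to(bound) if p >= x]
    reachable = [False] * (bound + 1)
    reachable[0] = True                   # the empty sum
    for n in range(1, bound + 1):
        reachable[n] = any(n >= p and reachable[n - p] for p in ps)
    return max(n for n in range(bound + 1) if not reachable[n])

print(N(2), N(3), N(5))  # 1 4 9
```

This brute force matches the examples in the question and can be used to probe the conjectured $N_x/x \to 3$ behaviour for small $x$.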
-
Interesting question! This is essentially Sloane's A180306, but it does not contain any information on the asymptotics. – Charles Apr 3 2012 at 15:22
@Charles : Thanks for the reference to Sloane's database. @Mark : I don't really know. I suspect that the upper bound I gave is much too high, but I don't know what to expect since I'm not a specialist in additive number theory. – François Brunault Apr 3 2012 at 16:10
Lemma 1 of Erdos's 1974 paper with Benkoski "On weird and pseudoperfect numbers" is: There is an absolute constant $c$ such that every integer $m > c p_k$ is the distinct sum of primes not less than $p_k$. (Here $p_k$ is the $k$th prime, in the usual order.) No proof is given. "The lemma is probably well known and, in any case, easily follows by Brun's method." – Anonymous Apr 4 2012 at 2:08
@Anonymous : Thanks for the reference! It would be interesting to make $c$ explicit. – François Brunault Apr 4 2012 at 8:53
In view of the answers below indicating the link with Goldbach's conjecture, which is out of reach, I would accept any answer showing that $N_x \ll x$. – François Brunault Apr 4 2012 at 9:21
## 3 Answers
According to "The three primes theorem with almost equal summands" by Baker and Harman, every large odd $N$ is a sum of three primes each of size $\sim N/3$. (In fact, within $N^{4/7}$ of $N/3$; this is much closer than we need, so we could use weaker results of earlier authors, if preferred.)
For odd $N$, this gives $N$ as a sum of three primes, each at least $\sim N/3$. If $N$ is even, take a prime $p$ of size $\sim N/4$ (which exists by the prime number theorem), and write $N-p$ as a sum of three primes of size about $N/4$. So we get $N$ as a sum of four primes at least $\sim N/4$.
-
Thanks! This result indeed shows that $N_x/x$ is bounded (and in fact $\operatorname{limsup} N_x/x \leq 4$). – François Brunault Apr 6 2012 at 9:17
Just to spell it out: if Goldbach's conjecture is false, then $N(x)>4x$ at least once, but that seems extremely unlikely. It is certain that $N(x)>3x$ infinitely often, but there is every reason to believe that $N(x)<(3+\epsilon)x$ with finitely many exceptions. If so, then we could equivalently define $N_x$ as the largest odd integer which is neither prime nor a sum of $3$ primes all at least $x$.
It will frequently happen that the next prime after $p$ is at least $p+8.$ In these cases we can write $3p$ as a sum of primes at least $x=p$; however, at least one of the odd numbers $3p+2,3p+4,3p+6$ is not itself prime and hence not a sum of primes all at least $x=p.$ This establishes that $N(x)>3x$ infinitely often.
There is no proof that every even integer is a sum of two primes. Were there a counter-example, $E$, it would need to be a sum of at least 4 primes and hence furnish an example of $N(x)>4x$ where $x$ is $\frac{E}{4}.$ However there is every reason to believe the stronger statement that for every $\epsilon \gt 0$ there is an $M(\epsilon)$ such that $2m$ is a sum of two primes, both larger than $m(1-\epsilon)$, for all $m \gt M(\epsilon).$ If so, then $N(x)<(3+\epsilon)x$ provided that $x$ is reasonably larger than $\frac{M(\epsilon)}{1-\epsilon}.$
-
Using $x = k! + 1$, we can even show that $\limsup_{x \rightarrow \infty} N(x) - 3x = \infty$. In fact, using an estimate of Rankin on large prime gaps, we should get $N(x) - 3x > c\log x \log \log x \log \log \log \log x (\log \log \log x)^{-2}$ infinitely often. – Woett Apr 4 2012 at 4:15
However, does not rule out the possibility that for every $\epsilon \gt 0$ we have $\limsup_{x \to \infty}N(x)-(3+\epsilon)x=-\infty$ – Aaron Meyerowitz Apr 4 2012 at 4:33
Oh, of course not! Perhaps even $N(x) < 3x + x^{\epsilon}$ for large $x$. But this should be really far from what can be proven with current methods :) – Woett Apr 4 2012 at 4:41
True. With current methods we cannot prove Goldbach's conjecture, so we do not know that $N(x) \lt 4x$ is true, although we believe it is. Were I foolish I would predict $(3+c_1/\ln{x})x \lt N(x) \lt (3+c_2/\ln{x})x.$ – Aaron Meyerowitz Apr 4 2012 at 5:08
Thanks Aaron for your answer. I believe your argument shows that for any odd $x$ which is not prime, we have $N(x) \geq 3x$, since $3x$ cannot be written as a sum of primes $\geq x$. – François Brunault Apr 4 2012 at 9:18
(Edited according to the comments below)
In his article titled 'Sums of Distinct Primes', Kløve conjectured, on the basis of computational evidence, that $\displaystyle \lim_{x \rightarrow \infty} \dfrac{N_x}{x} = 3$. This would imply the binary Goldbach conjecture (for large enough $x$) in the following way: if every integer larger than $(3 + \epsilon)x$ can be written as sum of primes, where those primes are all larger than $x$, then, in particular, every even number between $(3 + \epsilon)x$ and $4x$ can be written as a sum of two primes.
-
@Woett: In that paper, the word "prime" does not appear. – Mark Sapir Apr 3 2012 at 16:24
Hmm, maybe I understand the question wrong. Please read page 56 of the article of Erdos and Graham (page 52 of this pdf: math.ucsd.edu/~ronspubs/80_11_number_theory.pdf) – Woett Apr 3 2012 at 16:30
@Woett: Yes, Kløve, Torleiv Sums of distinct primes. Nordisk Mat. Tidskr. 21 (1973), 138–140. According to Math Reviews he conjectured there that the limit is 3. The review does not mention Goldbach. The paper is not on-line, unfortunately. – Mark Sapir Apr 3 2012 at 17:53
Does Vinogradov result that every sufficiently large odd number is a sum of three primes imply that the limit is at most 5 (or at least $< \infty$)? Vinogradov does not claim that his primes are all of approximately the same size, but perhaps his method can be adapted to produce such a result. – Mark Sapir Apr 3 2012 at 18:18
@François: No, since not requiring the primes to be distinct can only make the limit lower, but the limit has to be at least 3 (there is an odd composite in $[3n-5,3n)$). – Emil Jeřábek Apr 4 2012 at 10:18
http://math.stackexchange.com/questions/28194/how-to-show-that-all-dfa-is-in-p/28203
|
# How to show that $ALL_{DFA}$ is in P
How can I show that $ALL_{DFA}$ is in P?
$ALL_{DFA} = \{ \langle A \rangle \mid A \text{ is a DFA and } L(A) = \Sigma^* \}$
-
What is $ALL$? (presumably $DFA$ is determinsitic finite automaton). And then what is their relation in $ALL_{DFA}$? – Mitch Mar 20 '11 at 22:51
@Mitch: Edited. – metdos Mar 21 '11 at 7:14
## 1 Answer
It is easy to see that a DFA accepts $\Sigma^*$ if and only if all reachable states from the start state, $q_0$, are accepting. This can easily be decided in polynomial-time by performing a breadth- or depth-first search on the DFA from $q_0$. If at any time a non-accepting state is visited, reject, otherwise, if only accepting states are found, accept.
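A minimal sketch of this reachability check in Python (the DFA encoding as a transition dict, start state, and accepting set is my own, not from the question):

```python
from collections import deque

def accepts_sigma_star(delta, start, accepting):
    """delta: dict mapping (state, symbol) -> state for a total DFA.
    Returns True iff every state reachable from `start` is accepting,
    i.e. iff L(A) = Sigma*.  Runs in time polynomial in |delta|."""
    seen = {start}
    queue = deque([start])
    while queue:                       # breadth-first search
        q = queue.popleft()
        if q not in accepting:
            return False               # found a reachable non-accepting state
        for (state, _symbol), nxt in delta.items():
            if state == q and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Example over Sigma = {a, b}: when both reachable states accept,
# the DFA accepts Sigma*.
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q0"}
print(accepts_sigma_star(delta, "q0", {"q0", "q1"}))  # True
print(accepts_sigma_star(delta, "q0", {"q0"}))        # False
```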
Note that this problem is much harder for NFAs; $\{ \langle A \rangle \mid A \text{ is an NFA and } L(A) = \Sigma^* \}$ is NP-hard.
-
I have understood the edit part. Apart from that, I could not see how using $\overline{ALL_{DFA}}$ additionally helps. – metdos Mar 21 '11 at 20:33
It is just another school of thought. But you are correct; I didn't use it in my solution. – Zach Langley Mar 21 '11 at 21:42
http://mathoverflow.net/questions/110414/methods-for-proving-non-fo-definability
|
Methods for proving non FO definability
I'm having difficulty proving that the subset of even numbers is not first-order definable in $(\mathbb{N},<)$. Any hint is welcome!
More generally, what are the usual techniques for proving that a subset is not FO-definable? I know one using isomorphisms.
What are other methods?
-
3 Answers
One method is to use quantifier elimination. Note that 0 and $S(x)=x+1$ are definable in $M=(\mathbb N,<)$. The expanded structure $M'=(\mathbb N,<,0,S)$ is a model of the theory $T$ of a discrete linear order with zero and successor. You can show that every formula is in $T$ equivalent to an open formula (it suffices to prove it for formulas consisting of a single existential quantifier followed by a conjunction of atomic formulas and their negations), and this implies that every definable subset of $M$ is finite or cofinite. (And as a bonus, it shows that $T$ is complete.)
Another method is to use compactness. In this simple example, you can just take any elementary extension $M^*\succ M$ and an element $a\in M^*$ satisfying the alleged formula $\phi$ defining parity. The function $f$ which leaves $\mathbb N$ fixed and maps everyone else to its successor is an automorphism of $M^*$, but $\phi$ cannot be satisfied by two successive elements. In more complicated situations, one may need to take $M^*$ e.g. recursively saturated (or saturated in some larger cardinality) to define an automorphism by some sort of a zig-zag construction.
Yet another method is to use Ehrenfeucht–Fraïssé games (these are quite useful for showing undefinability of classes of finite structures, but often come handy in other situations as well). By induction on $k$, show that Duplicator has a winning strategy in $EF_k((M,a_1,\dots,a_n),(M,b_1,\dots,b_n))$ whenever the sequences $\vec a$, $\vec b$ are ordered in the same way, and for each $i,j$, the distances $a_i-a_j$ and $b_i-b_j$ are the same, or they are both large (bigger than $2^k$ or something like that). This implies that a fixed formula $\phi(x)$ cannot distinguish two large enough elements.
-
Another proof is to use the theorem of McNaughton-Schützenberger that a regular language can be defined in $FO(<)$ if and only if its syntactic monoid is aperiodic (satisfies $x^n=x^{n+1}$ for some $n$). $FO(\mathbb N,<)$ is the special case of a unary alphabet. The syntactic monoid of the even numbers is the cyclic group of order 2, which is not aperiodic. More generally, it follows from the McNaughton-Schützenberger theorem that the only definable sets are finite or cofinite. See the book of Straubing on automata, logic and circuit complexity for these kinds of things.
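A small sketch of the aperiodicity test on a finite monoid given by its elements and multiplication map (my own encoding): since powers of an element are eventually periodic with index at most $|M|$, it suffices to check $x^{|M|} = x^{|M|+1}$ for every $x$. The test fails for $\mathbb{Z}/2$, the syntactic monoid of the even numbers over a unary alphabet, and succeeds for the aperiodic two-element monoid $\{1,0\}$ under ordinary multiplication:

```python
def is_aperiodic(elements, mul):
    """True iff for every x there is n with x^n == x^(n+1).
    Checking n = |M| suffices: powers are eventually periodic with
    index at most |M|, and aperiodicity means the period is 1."""
    n = len(elements)
    for x in elements:
        power = x
        for _ in range(n - 1):        # compute x^n
            power = mul(power, x)
        if mul(power, x) != power:    # x^(n+1) != x^n
            return False
    return True

z2 = [0, 1]                            # Z/2 under addition mod 2
print(is_aperiodic(z2, lambda a, b: (a + b) % 2))   # False: powers of 1 alternate
u1 = [1, 0]                            # {1, 0} under ordinary multiplication
print(is_aperiodic(u1, lambda a, b: a * b))         # True
```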
-
General methods
• Ehrenfeucht-Fraisse games
• The theorems of Hanf and Gaifman
You can prove that the even numbers are not first-order definable using Gaifman's theorem: suppose they were. By Gaifman's theorem the definition is a boolean combination of basic local sentences. Choose an even and an odd number far away from 0 in terms of the locality of these sentences. Then the sentences cannot distinguish these two numbers. Contradiction.
-
Locality theorems can be used to prove undefinability (they work basically as ready-made applications of Ehrenfeucht–Fraïssé arguments), but in this case, isn’t the Gaifman graph of $(\mathbb N,<)$ one huge clique? – Emil Jeřábek Oct 23 at 12:59
Oops. I confounded $(\mathbb N,<)$ with $(\mathbb N,S)$ where $S$ is a successor relation. Sorry. – maxky Oct 24 at 8:25
http://mathhelpforum.com/advanced-algebra/178660-matrix-operation.html
|
# Thread:
1. ## Matrix operation
Determine A^18
I don't know what to do besides just computing it directly. But I had a similar question before where I could see it was a rotation matrix, and then it was easy. Is there an easy way for this one too?
Thanks!
2. Decompose
$A=2I+N$
and apply the binomial theorem. The matrix $N$ is nilpotent of order 3.
3. Two comments:
1. Your matrix is in Jordan Normal Form already.
2. Take a look here.
Cheers.
Jordan Normal Form made the job really easy
Cheers!
5. I think FernandoRevilla's method and my proposed method boil down to the same thing (as they should!). The binomial theorem is going to give you those binomial coefficients that are showing up in the Jordan Normal Form power section of the wiki post.
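A hedged numeric check of that equivalence (the original matrix is not reproduced in this thread, so I take $A$ to be the $3\times3$ Jordan block with eigenvalue 2, i.e. $A = 2I + N$ with $N^3 = 0$): since $2I$ and $N$ commute, the binomial theorem gives $A^{18} = \sum_{k=0}^{2}\binom{18}{k}\,2^{18-k}N^k$, which indeed matches direct repeated multiplication:

```python
from math import comb

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, e):
    n = len(X)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(e):
        R = matmul(R, X)
    return R

# Assumed matrix: the 3x3 Jordan block with eigenvalue 2, A = 2I + N.
N = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # nilpotent: N^3 = 0
A = [[2, 1, 0], [0, 2, 1], [0, 0, 2]]

# Binomial theorem (2I and N commute, and N^k = 0 for k >= 3):
# A^18 = sum_{k=0}^{2} C(18, k) * 2^(18-k) * N^k
binom = [[0] * 3 for _ in range(3)]
for k in range(3):
    Nk = matpow(N, k)
    c = comb(18, k) * 2 ** (18 - k)
    for i in range(3):
        for j in range(3):
            binom[i][j] += c * Nk[i][j]

assert binom == matpow(A, 18)
print(binom[0])  # [262144, 2359296, 10027008]
```

The coefficients $2^{18}$, $18\cdot2^{17}$, $\binom{18}{2}2^{16}$ along the rows are exactly the binomial entries that appear in the Jordan-form power formula, as noted in comment 5.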
http://gilkalai.wordpress.com/2009/10/06/the-polynomial-hirsch-conjecture-discussion-thread-continued/?like=1&source=post_flair&_wpnonce=8ac41143de
Gil Kalai’s blog
## The Polynomial Hirsch Conjecture: Discussion Thread, Continued
Posted on October 6, 2009 by
Here is a link for the just-posted paper Diameter of Polyhedra: The Limits of Abstraction by Freidrich Eisenbrand, Nicolai Hahnle, Sasha Razborov, and Thomas Rothvoss.
And here is a link to the paper by Sandeep Koranne and Anand Kulkarni “The d-step Conjecture is Almost true” – most of the discussion so far was in this direction.
We had a long and interesting discussion regarding the Hirsch conjecture and I would like to continue the discussion here.
The way I regard the open collaborative efforts is as an open collective attempt to discuss and make progress on the problem (and to raise more problems), and also as a way to assist people who think or work (or will think or will work) on these problems on their own.
Most of the discussion in the previous thread was not about the various problems suggested there but rather about trying to prove the Hirsch Conjecture precisely! In particular, the approach of Sandeep Koranne and Anand Kulkarni, which attempts to prove the conjecture using “flips” (closely related to Pachner moves, or bistellar operations), was extensively discussed. Here is the link to another paper by Koranne and Kulkarni ”Combinatorial Polytope Enumeration“. There is certainly more to be understood regarding flips, Pachner moves, the diameter, and related notions. For example, I was curious about for which Pachner moves ”vertex decomposability” (a strong form of shellability known to imply the Hirsch bound) is preserved. We also briefly discussed metric aspects of the Hirsch conjecture and random polytopes.
For general background: Here is a chapter that I wrote about graphs, skeleta and paths of polytopes. Some papers on polytopes on Gunter Ziegler’s homepage describe very interesting and possibly relevant current research in this area. Here is a link to Eddie Kim and Francisco Santos’s survey article on the Hirsch Conjecture.
Here is a link from the open problem garden to the continuous analog of the Hirsch conjecture proposed by Antoine Deza, Tamas Terlaky, and Yuriy Zinchenko.
Earlier posts are: The polynomial Hirsch conjecture, a proposal for Polymath 3 , The polynomial Hirsch conjecture, a proposal for Polymath 3 cont. , The polynomial Hirsch conjecture – how to improve the upper bounds .
Here are again some basic problems around the Hirsch Conjecture. When we talk about polytopes we usually mean simple polytopes (although looking at general polytopes may be of interest).
Problem 0: Study various possible approaches for proving the Hirsch conjecture.
We mainly discussed this avenue, which is certainly the most tempting.
Problem 1: Improve the known upper bounds for the diameter of graphs of polytopes, perhaps even finding a polynomial upper bound in terms of the dimension $d$ and the number of facets $n$.
Strategy 1: Study the problem in the purely combinatorial settings studied in the EHRR paper.
Strategy 2: Explore other avenues.
(Nicolai Hahnle remarked that the proof extends to families of monomials.)
Problem 2: Improve the known lower bounds for the problem in the abstract setting.
Strategy 3: Use the argument for upper bounds as some sort of a role model for an example.
Strategy 4: Try to use recursively mesh constructions like those used by EHRR.
Problem 3: What is the diameter of a polytopal digraph for a polytope with n facets in dimension d?
A polytopal digraph is obtained by orienting edges according to some generic linear objective function. This problem can be studied also in the abstract setting of shellability (and even in the context of unique sink orientations).
Problem 4: Find a (possibly randomized) pivot rule for the simplex algorithm which requires, in the worst case, a small number of pivot steps.
A “pivot rule” refers to a rule to walk on the polytopal digraph where each step can be performed efficiently.
Problem 5: Study the diameter of graphs (digraphs) of specific classes of polytopes.
Problem 6: Study these problems in low dimensions.
Problem 7: What can be said about expansion properties of graphs of polytopes?
Problem 8: What is the maximum length of a directed path in a graph of a d-polytope with n facets?
Problem 9: Study (and find further) continuous analogs of the Hirsch conjecture.
Problem 10: Find “high dimensional” analogs for the diameter problem and for shellability.
Problem 11: Find conditions for rapid convergence of a random walk (or of other stochastic processes) on directed acyclic graphs.
Problem 12: Study these problems for random polytopes.
A polynomial upper bound for graphs of polytopes is not known even for random polytopes.
Problem 13: How many dual graphs of simplicial d-spheres with n facets are there?
### Like this:
This entry was posted in Convex polytopes, Open discussion, Open problems and tagged Convex polytopes, Hirsch conjecture. Bookmark the permalink.
### 16 Responses to The Polynomial Hirsch Conjecture: Discussion Thread, Continued
1. Pingback: Polymath3 update « Euclidean Ramsey Theory
2. Kristal Cantwell says:
I have an idea for dealing with this problem. It may take a while for me to develop it since the reference I am basing it on went back to the library yesterday. The idea is that a proof using active facets may use the fact that active facets tend to cluster together. I naively picture a polytope splitting into two hemispheres: one that consists of active facets, the other consisting of the facets that become active when the objective function is replaced by its negative. The proof would use this fact somehow to find paths on both hemispheres that connect, thus having low length. I realize that there would be facets on which the objective function might be constant; again I picture them as possibly forming a band between the two hemispheres. Anyway, that is what I am thinking about now.
3. Kristal Cantwell says:
I have just managed to find the book I was thinking about online and I see there was a problem with the above. A facet is active with respect to a vertex it contains as well as a linear function. So I would need a definition that does not depend on a specific vertex but only on a linear function, and then try to use that to find a limit on graph diameter.
4. Oinky says:
I confess to being confused by vertex decomposability. Is there an easy algorithm to decide if a pure simplicial complex is VD?
5. Gil Kalai says:
Dear Oinky
There is an easy algorithm (but it is exponential in the number of vertices): to check if a complex K is vertex decomposable (VD), go over all its vertices v and check if the antistar of v is VD and the link of v is VD. (If this is true for some vertex then the complex is VD.) The antistar of a vertex v is the complex containing all faces not containing v, and the link of v is the complex obtained from all faces containing v after we delete v from each of them.
6. Oinky says:
Dear Gil,
Thanks. I think my confusion comes from a difference in definitions.
Do you require that the link and the antistar of v remain pure simplicial complexes, as defined here:
http://www.eg-models.de/models/Simplicial_Manifolds/2003.06.001/_preview.html
In another paper, (http://arxiv.org/abs/math/0604018), Lutz defines VD as follows:
A triangulated d-ball or d-sphere is vertex-decomposable if we can remove the star of a vertex v such that the remaining complex is a vertex-decomposable d-ball (with a single d-simplex being vertex-decomposable) and such that the link of v is a vertex-decomposable
(d − 1)-ball or a vertex-decomposable (d − 1)-sphere, respectively.
That doesn’t seem like an equivalent definition. For example, take a hexagon — (a 1-sphere). Number the vertices 1 to 6 counter clockwise. Now delete vertex 1 and vertex 4. The edges (2,3) and (5,6) remain. And clearly that’s not a ball or a sphere since it’s disconnected.
But I would argue that a polygon is vertex VD with your definition.
I apologize for my naivety.
Thanks again.
7. Gil Kalai says:
Oinky, I was a bit careless in the definition.
Yes, you have to demand that a VD complex is pure. Once you make this requirement it follows that the complex is shellable and hence homeomorphic to either a sphere or a ball.
The two definitions you mentioned are equivalent: in the example of a hexagon deleting vertex 1 and vertex 4 will leave us with a pure but a non VD subcomplex (the union of two disjoint edges). There is no way to delete another vertex leaving the antistar pure. But the hexagon itself is VD: you have to delete the vertices in another order.
(One way to think about vertex decomposability is as a strong form of shellability: you can start the shelling by deleting all facets containing some vertex, and this property is hereditary; it applies to the shelling of the facets containing the vertex, and to the shelling of all facets not containing the vertex.)
There is another similar notion called "non-evasiveness". A simplicial complex is non-evasive if it is a single point or if it has a vertex v so that both the link of v and the antistar of v are non-evasive.
(Non-evasiveness is a strong form of "collapsibility" in a similar way that VD is related to "shellability". There is also a more general notion of "nonpure shellability", but I am not sure if the analogous notion to VD or non-evasiveness (which should be wider than both) has been studied.)
8. Oinky says:
Dear Gil,
Thank you very much. My confusion is gone.
9. Anand Kulkarni says:
I’m also extremely interested in seeing whether weak vertex decomposability is preserved under dual Pachner moves. This feels very plausible to me, and the resulting bound of 2(n-d) would also perhaps be concordant with current lower bounds on spheres and unbounded polyhedra — thus removing the need to restrict our attention to polytopality-preserving moves.
However, I still don’t think I understand the operations of vertex deletion or (in the dual setting) facet removal. I gather that facet removal as a dual operation to vertex deletion is distinct from projective facet removal, since in the latter operation objects are guaranteed to remain polytopal even as hyperplanes are removed.
If v maps into facet F in the polar, the dual of antistar(v) is the set containing all faces not contained in the facet F. Is this correct? Thus, the dual of vertex deletion is removing all faces contained in some facet F.
I presume the square is known to be weakly facet-decomposable. However, removing any F from the complex of a square S would entail removing an edge e={1,2} and some vertices {1} and {2} from complex(S). Doesn’t this result in the complex of an unbounded three-facet polyhedron, not in a 2-simplex?
10. Oinky says:
Just some musings I thought I’d share….
It occurred to me that it might be easier to show that the d-step and Hirsch conjectures are true in odd dimensions (if they are true), due to consequences of the Upper Bound Theorem for polytopes, which states that the number of facets of a d-dimensional polytope with n vertices is O(n^floor(d/2)). Consider the number of faces in neighborly simplicial polytopes with 2d vertices:
| dimension | number of faces |
|-----------|-----------------|
| 6         | 112             |
| 7         | 150             |
| 8         | 660             |
| 9         | 858             |
| 10        | 4004            |
| 11        | 5096            |
| 12        | 24752           |
| 13        | 31008           |
Intuitively, in order to obtain a graph with a high diameter, one needs more vertices to work with. The odd dimensions simply do not provide as "many" faces as the even dimensions for their duals to have high diameter.
Odd dimensions also have an interesting bistellar flip that preserves the f-vector of the complex: the m-to-m flip where m = ceil(d/2). If one could show that the m-to-m flip preserves diameter and all neighborly simplicial polytopes in odd dimension are connected via a sequence of m-to-m flips, that would prove the d-step conjecture for a wide class of polytopes.
11. Anand Kulkarni says:
I apologize for my naivety, but I must still admit to some continued confusion regarding vertex decomposability.
Would someone be willing to provide a basic example of the vertex decomposition of a small simplicial polytope? (Or equivalently, the facet decomposition of a simple polytope?)
With an eye to determining whether VD is preserved under Pachner moves, here is a related result by Lee applying this strategy to Hirsch: http://www.springerlink.com/content/w587g7r771p429l4/
12. Gil Kalai says:
Dear Oinky,
(Belated) thanks for your comment. One interesting remark is that the graphs of d-polytopes with n facets that have the maximum number of vertices all have diameter less than $n^6$ (or so). This was my first (and, by far, the hardest) result regarding diameters of graphs of polytopes.
Dear Anand,
Consider the octahedron. The star of a vertex looks like a cone over a square, and so does the antistar. So to vertex-decompose the octahedron we need to see that a cone over a square is vertex decomposable (VD), and that a square itself is VD.
In general, if K is a cone over L with apex v (i.e. the join of L with a new vertex v) then if L is VD so is K since the link and antistar of v in K are both isomorphic to L.
We need to show that the square is VD. Consider a vertex of the square: the link of a vertex is a complex consisting of 2 points. The antistar of the vertex is a complex consisting of two edges. Again, the antistar is a cone over the link so it is enough to demonstrate that the link is VD.
The link of a vertex for a complex consisting of 2 isolated points is the complex consisting of the empty set alone which is VD. The antistar is a single vertex (again a cone over the link) which is also VD.
13. Pingback: Plans for polymath3 « Combinatorics and more
15. Pingback: Subexponential Lower Bound for Randomized Pivot Rules! | Combinatorics and more
http://mathoverflow.net/questions/727/algebraic-group-g-vs-algebraic-stack-bg/746
## algebraic group G vs. algebraic stack BG
I've gathered that it's "common knowledge" (at least among people who think about such things) that studying a (smooth) algebraic group G, as an algebraic group, is in some sense the same as studying BG as an algebraic stack. Can somebody explain why this is true (and to what extent it is true)? I can get as far as seeing that quasi-coherent sheaves on BG are the same as representations of G, but it feels like there's more to it.
In particular, Scott Carnahan mentioned here that deformations of BG as an algebraic stack should correspond exactly to deformations of G as an algebraic group. I assume this means that any deformation of BG must be of the form BG', where G' is a deformation of G (as a group). It's clear to me that such a BG' is a deformation, but why should these be the only deformations?
## 4 Answers
The stack BG only recovers G up to inner automorphisms, not canonically (as suggested by blah); this can lead to serious issues in families, or equivalently over a non-algebraically-closed field, as Shenghao's comment points out. One way to say this is the following: the loops in BG are G/G, the adjoint quotient of G. On the other hand, if you give a map pt --> G then the based loop space (fiber product of pt with itself over BG) is G, so you recover the group canonically.
What about setting $pt \rightarrow G$ to be the unit $e$ of $G$? – Qfwfq Apr 21 2010 at 6:19
Thanks for catching the typo -- it should read $pt \to BG$, i.e. a choice of universal bundle $EG$ (unique up to equivalence).. – David Ben-Zvi Apr 21 2010 at 14:21
If G is a group scheme over k (algebraically closed), then let me talk through how to get G back by looking at the stack BG. The k-points of BG (which form a groupoid) are one point whose automorphisms are the k-points of G. The pullback of this point to Spec A, for any k-algebra A, has automorphisms given by the A-points of G. If you think of BG points as principal bundles, I'm saying the automorphisms of the trivial bundle on Spec A are the A-points of the group.
So what happens if you deform BG? You still have this one point; you can't deform that to anything, so you can only change its morphisms. That's your G' (you get an algebraic group since you can pull back to all the Spec A's). How you see that it's BG' is a little trickier, so maybe I should leave it to a real algebraic geometer, but I think the idea is that BG is distinguished by being the sheafification of the trivial bundles in the smooth/fppf topology, and this won't change when you deform.
That sounds very reasonable, but why couldn't BG deform to a gerbe, which is some kind of a twist of a BG that only has a point locally? The answer is probably, "go learn more about gerbes." – Anton Geraschenko♦ Oct 16 2009 at 16:30
Of course, the other answer might be that Scott and I are a bit off. Though I tend to think of gerbes as "the land where deformations that shouldn't exist live", as in: obstructed deformations really exist if you're willing to think of them as objects over the gerbe corresponding to their obstruction class. Though, maybe that's wrong. – Ben Webster♦ Oct 16 2009 at 16:53
Hello Ben,
a little comment: when you say "G is a group scheme over k", you mean k is a separably closed field, right? Because otherwise the groupoid BG(k) may not have only one isomorphism class of object; the set of isom classes is the Galois cohomology H^1(k,G). Also I got confused by "the pullback of this point". I think one should deform BG along the nilpotent embedding Spec k --> Spec A, rather than considering Spec A --> Spec k...
The structural map BG --> Spec k has a section Spec k --> BG. So maybe one can deform BG --> Spec k together with this section, so that any gerbe becomes trivial.
You should add this as a comment to Ben's answer rather than as a separate answer. Like this: – Anton Geraschenko♦ Oct 16 2009 at 18:05
when you say "G is a group scheme over k", you mean k is a separably closed field, right? Because otherwise the groupoid BG(k) may not have only one isomorphism class of object; the set of isom classes is the Galois cohomology H^1(k,G). Also I got confused by "the pullback of this point". I think one should deform BG along the nilpotent embedding Spec k --> Spec A, rather than considering Spec A --> Spec k. The structural map BG --> Spec k has a section Spec k --> BG. So maybe one can deform BG --> Spec k together with this section, so that any gerbe becomes trivial. – Anton Geraschenko♦ Oct 16 2009 at 18:06
If you actually want to comment on other answers, you can just leave a comment, rather than writing a new answer. I think you have a good point about keeping the section. You might want to try to edit this into an actual answer to the question, since mine may well not be right. For the pullback question, you may have just not understood what I was doing in that section; I was talking through (in part for my own benefit) how to reconstruct G by looking at the stack BG and taking the automorphisms of the trivial bundle on all k-schemes. – Ben Webster♦ Oct 16 2009 at 18:07
i just learned how to use edit/comment. – shenghao Oct 16 2009 at 20:05
what i would expect is that the group G is basically the same thing as the pointed stack BG, where you point it by the trivial G-bundle.
But there's only one k-point (where k is your base field), so what on earth does pointed get you? – Ben Webster♦ Oct 17 2009 at 4:21
there isn't only one point -- there's a connected groupoid of points, with automorphisms G.. if BG were a scheme you'd be right, but it's a stack.. or topologically it has nontrivial pi_1.. – David Ben-Zvi Oct 20 2009 at 1:30
http://mathhelpforum.com/differential-equations/172235-mixed-partial-derivatives.html
# Thread:
1. ## Mixed Partial Derivatives
Solve the equation u_xx - 3u_xt - 4u_tt = 0 subject to the initial conditions u(x,0) = x^2 and u_t(x,t) = e^x.
Hint: consider a change of coordinates.
Having spent a large amount of time getting wrong solutions to the above question, any help that can be offered would be very much appreciated!
Thanks!
2. Hello,
show your working and we'll tell you what's wrong (although I don't think that'll be me ^^)
3. Ok no problem!
I have seen something similar where you factor out the operator: (d_xx - 3d_xt - 4d_tt) = 0
This gives us: (d_t + d_x)(4d_t + d_x)u = 0
I then have attempted to define new coordinates. I was led to believe that I should try, for example: a = x +t and b = -4x + t (though this does not appear to work so perhaps this is where I have gone wrong?).
We then want to find a v(a,b) such that it solves the equation.
We then get;
u_x = v_a - 4v_b
u_t = v_a + v_b
I then continue...but will avoid writing out the rest of it now in case my coordinates are wrong! If anybody can advise me if they are or are not correct then I will post the rest of my solution on that basis!
Many thanks!
4. Originally Posted by Rocky
I have seen something similar where you factor out the operator: (d_xx - 3d_xt - 4d_tt) = 0
This gives us: (d_t + d_x)(4d_t + d_x)u = 0 I think you mean (d_x + d_t)(d_x – 4d_t)u = 0.
I then have attempted to define new coordinates. I was led to believe that I should try, for example: a = x +t and b = -4x + t (though this does not appear to work so perhaps this is where I have gone wrong?).
I get a = x – t and b = 4x + t.
We then want to find a v(a,b) such that it solves the equation.
We then get;
u_x = v_a - 4v_b
u_t = v_a + v_b
I then continue...
Keep going, you're on the right lines. Now use the chain rule to find $u_{xx},\; u_{xt}$ and $u_{tt}$ in terms of $v_{aa},\ v_{ab}$ and $v_{bb}$. For example, $u_{xx} = \frac{\partial}{\partial a}(u_x)\frac{\partial a}{\partial x} + \frac{\partial}{\partial b}(u_x)\frac{\partial b}{\partial x}.$
You should then find that $u_{xx}-3u_{xt}-4u_{tt}$ is some constant times $v_{ab}$ (the terms in $v_{aa}$ and $v_{bb}$ all cancel out). So your equation is equivalent to $v_{ab} = 0$, whose general solution is $v(a,b) = f(a)+g(b)$ for arbitrary (twice-differentiable) functions f, g. So the solution for u(x,t) is $u(x,t) = f(x-t) + g(4x+t)$.
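The reduction can also be confirmed symbolically. Here is a short SymPy sketch (my own check, not part of the thread) verifying that u(x,t) = f(x-t) + g(4x+t) satisfies the PDE for arbitrary twice-differentiable f and g:

```python
import sympy as sp

x, t = sp.symbols('x t')
f, g = sp.Function('f'), sp.Function('g')

# general solution built from the new coordinates a = x - t, b = 4x + t
u = f(x - t) + g(4*x + t)

residual = sp.diff(u, x, 2) - 3*sp.diff(u, x, t) - 4*sp.diff(u, t, 2)
print(sp.simplify(residual))  # 0
```

Each summand cancels on its own: for f(x-t) the three terms contribute 1 + 3 - 4 = 0 copies of f'', and for g(4x+t) they contribute 16 - 12 - 4 = 0 copies of g''.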
5. That is a brilliant help thanks very much! I have now got to this stage...but am now a little confused as to how I take into the account the initial conditions.
I have attempted to somehow adapt d'Alembert's solution to the wave equation (or something along these lines) but the solution I attain does not seem to satisfy the original equation!
In particular I obtained a solution along the lines of:
(1/2) (x-t)^2 + (1/2) (4x+t)^2 + the integral of e^y dy (between the limits (x-t) and (4x+t))
Again any help in finishing this question would be much appreciated! Thanks!
6. Originally Posted by Rocky
Solve the equation u_xx - 3u_xt - 4u_tt = 0 subject to the initial conditions u(x,0) = x^2 and u_t(x,t) = e^x.
Is that last equation correct? I would expect it to be u_t(x,0) = e^x. (if it is an initial condition, then it ought to tell you what happens at time 0.)
In that case, if the general solution is $u(x,t) = f(x-t) + g(4x+t)$ then the initial conditions are
$f(x) + g(4x) = x^2,\quad -f'(x) + g'(4x) = e^x.$
Integrate the second one to get $-f(x) + \frac14g(4x) = e^x$. (There should be a constant of integration somewhere there, but it will cancel out in the eventual solution, so let's ignore it.)
Solve those two simultaneous equations for f(x) and g(4x) to get
$f(x) = \frac15(x^2-4e^x),\quad g(4x) = \frac45(x^2+e^x).$
Therefore
$f(x-t) = \frac15\bigl((x-t)^2-4e^{x-t}\bigr),\quad g(4x+t) = g\bigl(4(x+\frac t4)\bigr) = \frac45\bigl((x+\frac t4)^2+e^{x+(t/4)}).$
Add those together to get the solution for u(x,t).
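Assembling f(x-t) and g(4x+t) as above, the closed-form answer can be checked against the PDE and both (corrected) initial conditions. A SymPy sketch of my own, not from the thread:

```python
import sympy as sp

x, t = sp.symbols('x t')

# u(x,t) = f(x-t) + g(4x+t) with the f and g solved for above
u = sp.Rational(1, 5)*((x - t)**2 - 4*sp.exp(x - t)) \
  + sp.Rational(4, 5)*((x + t/4)**2 + sp.exp(x + t/4))

# the PDE and the two initial conditions
print(sp.simplify(sp.diff(u, x, 2) - 3*sp.diff(u, x, t) - 4*sp.diff(u, t, 2)))  # 0
print(sp.simplify(u.subs(t, 0)))              # x**2
print(sp.simplify(sp.diff(u, t).subs(t, 0)))  # exp(x)
```

Note the dropped constant of integration would have appeared with opposite signs in f and g, which is why it cancels here.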
http://en.wikipedia.org/wiki/Quantum_tunnelling
# Quantum tunnelling
From Wikipedia, the free encyclopedia
Quantum tunnelling refers to the quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount. This plays an essential role in several physical phenomena, such as the nuclear fusion that occurs in main sequence stars like the sun.[1] It has important applications to modern devices such as the tunnel diode[2] and the scanning tunneling microscope. The effect was predicted in the early 20th century and its acceptance, as a general physical phenomenon, came mid-century.[3]
Tunneling is often explained using the Heisenberg uncertainty principle and the wave–particle duality of matter. Purely quantum mechanical concepts are central to the phenomenon, so quantum tunnelling is one of the novel implications of quantum mechanics.
## History
Quantum tunneling was developed from the study of radioactivity,[3] which was discovered in 1896 by Henri Becquerel.[4] Radioactivity was examined further by Marie and Pierre Curie, for which they earned the Nobel Prize in Physics in 1903.[4] Ernest Rutherford and Egon Schweidler studied its nature, which was later verified empirically by Friedrich Kohlrausch. The idea of the half-life and the impossibility of predicting decay was created from their work.[3]
Friedrich Hund was the first to take notice of tunneling in 1927 when he was calculating the ground state of the double-well potential.[4] Its first application was a mathematical explanation for alpha decay, which was done in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon.[5][6][7][8] The two researchers simultaneously solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunneling.
After attending a seminar by Gamow, Max Born recognized the generality of tunnelling. He realized that it was not restricted to nuclear physics, but was a general result of quantum mechanics that applies to many different systems.[3] Shortly thereafter, both groups considered the case of particles tunnelling into the nucleus. The study of semiconductors and the development of transistors and diodes led to the acceptance of electron tunnelling in solids by 1957. The work of Leo Esaki, Ivar Giaever and Brian David Josephson predicted the tunnelling of superconducting Cooper pairs, for which they received the Nobel Prize in Physics in 1973.[3]
## Introduction to the concept
Animation showing the tunnel effect and its application to STM microscope
Quantum tunnelling through a barrier. The energy of the tunnelled particle is the same but the amplitude is decreased.
Quantum tunneling through a barrier. At the origin (x=0), there is a very high, but narrow potential barrier. A significant Tunneling effect can be seen.
Quantum tunnelling falls under the domain of quantum mechanics: the study of what happens at the quantum scale. This process cannot be directly perceived, but much of its understanding is shaped by the macroscopic world, which classical mechanics can not adequately explain. To understand the phenomenon, particles attempting to travel between potential barriers can be compared to a ball trying to roll over a hill; quantum mechanics and classical mechanics differ in their treatment of this scenario. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier will not be able to reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down. Or, lacking the energy to penetrate a wall, it would bounce back (reflection) or in the extreme case, bury itself inside the wall (absorption). In quantum mechanics, these particles can, with a very small probability, tunnel to the other side, thus crossing the barrier. Here, the ball could, in a sense, borrow energy from its surroundings to tunnel through the wall or roll over the hill, paying it back by making the reflected electrons more energetic than they otherwise would have been.[9]
The reason for this difference comes from the treatment of matter in quantum mechanics as having properties of waves and particles. One interpretation of this duality involves the Heisenberg uncertainty principle, which defines a limit on how precisely the position and the momentum of a particle can be known at the same time.[4] This implies that no solution has a probability of exactly zero (or one): if, for example, a particle's position were known with certainty, the uncertainty in its momentum would have to be infinite. Hence, the probability of a given particle's existence on the opposite side of an intervening barrier is non-zero, and such particles will appear, with no indication of physically transiting the barrier, on the 'other' (a semantically difficult word in this instance) side with a frequency proportional to this probability.
An electron wavepacket directed at a potential barrier. Note the dim spot on the right that represents tunnelling electrons.
Quantum tunneling in the phase space formulation of quantum mechanics. Wigner function for tunneling through the potential barrier $U(x)=8e^{-0.25 x^2}$ in atomic units (a.u.). The solid lines represent the level set of the Hamiltonian $H(x,p) = p^2 / 2 + U(x)$.
### The tunneling problem
The wave function of a particle summarizes everything that can be known about a physical system.[10] Therefore, problems in quantum mechanics center around the analysis of the wave function for a system. Using mathematical formulations of quantum mechanics, such as the Schrödinger equation, the wave function can be solved. This is directly related to the probability density of the particle's position, which describes the probability that the particle is at any given place. In the limit of large barriers, the probability of tunneling decreases for taller and wider barriers.
For simple tunneling-barrier models, such as the rectangular barrier, an analytic solution exists. Problems in real life often do not have one, so "semiclassical" or "quasiclassical" methods have been developed to give approximate solutions to these problems, like the WKB approximation. Probabilities may be derived with arbitrary precision, constrained by computational resources, via Feynman's path integral method; such precision is seldom required in engineering practice.
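For the rectangular barrier, the analytic solution gives the textbook transmission coefficient: for a particle of energy E below a barrier of height V0 and width L, T = [1 + V0^2 sinh^2(kL) / (4E(V0 - E))]^(-1) with k = sqrt(2m(V0 - E))/hbar. A small illustrative Python sketch (the function name and the chosen example numbers are mine, not from the article; the constants are standard CODATA values):

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E  = 9.1093837015e-31   # electron mass, kg
EV   = 1.602176634e-19    # one electron volt, J

def transmission(E_eV, V0_eV, width_m, m=M_E):
    """Exact transmission probability through a rectangular barrier, for E < V0."""
    E, V0 = E_eV * EV, V0_eV * EV
    if not 0 < E < V0:
        raise ValueError("this closed form assumes 0 < E < V0")
    kappa = math.sqrt(2 * m * (V0 - E)) / HBAR
    s = math.sinh(kappa * width_m)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4 * E * (V0 - E)))

# a 1 eV electron against a 2 eV barrier: T falls off sharply with width,
# consistent with tunnelling mattering only for barriers of a few nm
for w in (0.5e-9, 1.0e-9, 2.0e-9):
    print(f"{w * 1e9:.1f} nm: T = {transmission(1.0, 2.0, w):.3e}")
```

The roughly exponential decay of T with barrier width is the quantitative content of the "1-3 nm and smaller" figure quoted in the Applications section below.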
## Related phenomena
There are several phenomena that have the same behavior as quantum tunneling, and thus can be accurately described by tunneling. Examples include the evanescent wave coupling (the application of Maxwell's wave-equation to light) and the application of the non-dispersive wave-equation from acoustics applied to "waves on strings". Evanescent wave coupling, until recently, was only called "tunneling" in quantum mechanics; now it is used in other contexts.
These effects are modeled similarly to the rectangular potential barrier. In these cases, there is one transmission medium through which the wave propagates that is the same or nearly the same throughout, and a second medium through which the wave travels differently. This can be described as a thin region of medium B between two regions of medium A. The analysis of a rectangular barrier by means of the Schrödinger equation can be adapted to these other effects provided that the wave equation has travelling wave solutions in medium A but real exponential solutions in medium B.
In optics, medium A is a vacuum while medium B is glass. In acoustics, medium A may be a liquid or gas and medium B a solid. For both cases, medium A is a region of space where the particle's total energy is greater than its potential energy and medium B is the potential barrier. These have an incoming wave and resultant waves in both directions. There can be more mediums and barriers, and the barriers need not be discrete; approximations are useful in this case.
## Applications
Tunnelling occurs with barriers of thickness around 1-3 nm and smaller,[11] but is the cause of some important macroscopic physical phenomena. For instance, tunnelling is a source of current leakage in very-large-scale integration (VLSI) electronics and results in the substantial power drain and heating effects that plague high-speed and mobile technology; it is considered the lower limit on how small computer chips can be made.[12]
### Radioactive decay
Main article: Radioactive decay
Radioactive decay is the process of emission of particles and energy from the unstable nucleus of an atom to form a stable product. This is done via the tunnelling of a particle out of the nucleus (an electron tunnelling into the nucleus is electron capture). This was the first application of quantum tunnelling and led to the first approximations.
### Spontaneous DNA Mutation
Spontaneous mutation of DNA occurs when normal DNA replication takes place after a particularly significant proton has defied the odds in quantum tunnelling, in what is called "proton tunneling".[13] A hydrogen bond joins normal base pairs of DNA. There exists a double-well potential along a hydrogen bond, separated by a potential energy barrier. It is believed that the double-well potential is asymmetric, with one well deeper than the other, so the proton normally rests in the deeper well. For a mutation to occur, the proton must have tunnelled into the shallower of the two potential wells. The movement of the proton from its regular position is called a tautomeric transition. If DNA replication takes place in this state, the base pairing rule for DNA may be jeopardized, causing a mutation.[14] Per-Olov Löwdin was the first to develop this theory of spontaneous mutation within the double helix. Other instances of quantum tunneling-induced mutations in biology are believed to be the cause of aging and cancer.[citation needed]
### Cold emission
Main article: Semiconductor devices
Cold emission of electrons is relevant to semiconductors and superconductor physics. It is similar to thermionic emission, where electrons randomly jump from the surface of a metal to follow a voltage bias because they statistically end up with more energy than the barrier, through random collisions with other particles. When the electric field is very large, the barrier becomes thin enough for electrons to tunnel out of the atomic state, leading to a current that varies approximately exponentially with the electric field.[15] These materials are important for flash memory and for some electron microscopes.
### Tunnel junction
Main article: Tunnel junction
A simple barrier can be created by separating two conductors with a very thin insulator. These are tunnel junctions, the study of which requires quantum tunnelling.[16] Josephson junctions take advantage of quantum tunnelling and the superconductivity of some semiconductors to create the Josephson effect. This has applications in precision measurements of voltages and magnetic fields,[15] as well as the multijunction solar cell.
A working mechanism of a resonant tunnelling diode device, based on the phenomenon of quantum tunneling through the potential barriers.
### Tunnel diode
Main article: Tunnel diode
Diodes are electrical semiconductor devices that allow electric current flow in one direction more than the other. The device depends on a depletion layer between N-type and P-type semiconductors to serve its purpose; when these are very heavily doped the depletion layer can be thin enough for tunnelling. Then, when a small forward bias is applied the current due to tunnelling is significant. This has a maximum at the point where the voltage bias is such that the energy level of the p and n conduction bands are the same. As the voltage bias is increased, the two conduction bands no longer line up and the diode acts typically.[17]
Because the tunnelling current drops off rapidly, tunnel diodes can be created that have a range of voltages for which current decreases as voltage is increased. This peculiar property is used in some applications, like high speed devices where the characteristic tunnelling probability changes as rapidly as the bias voltage.[17]
The resonant tunnelling diode makes use of quantum tunnelling in a very different manner to achieve a similar result. This diode has a resonant voltage for which there is a lot of current that favors a particular voltage, achieved by placing two very thin layers with a high energy conductance band very near each other. This creates a quantum potential well that has a discrete lowest energy level. When this energy level is higher than that of the electrons, no tunnelling will occur, and the diode is in reverse bias. Once the two voltage energies align, the electrons flow like an open wire. As the voltage is increased further, tunnelling becomes improbable and the diode acts like a normal diode again before a second energy level becomes noticeable.[18]
### Tunnelling field effect transistor
A European research project has demonstrated field effect transistors in which the gate (channel) is controlled via quantum tunnelling rather than by thermal injection, reducing gate voltage from ~1 volt to 0.2 volts and reducing power consumption by up to 100×. If these transistors can be scaled up into VLSI chips, they will significantly improve the performance per power of integrated circuits.[19]
### Quantum conductivity
Main article: Classical and quantum conductivity
While the Drude model of electrical conductivity makes excellent predictions about the nature of electrons conducting in metals, it can be furthered by using quantum tunnelling to explain the nature of the electron's collisions.[15] When a free electron wave packet encounters a long array of uniformly spaced barriers the reflected part of the wave packet interferes uniformly with the transmitted one between all barriers so that there are cases of 100% transmission. The theory predicts that if positively charged nuclei form a perfectly rectangular array, electrons will tunnel through the metal as free electrons, leading to an extremely high conductance, and that impurities in the metal will disrupt it significantly.[15]
### Scanning tunnelling microscope
Main article: Scanning tunnelling microscope
The scanning tunnelling microscope (STM), invented by Gerd Binnig and Heinrich Rohrer, allows imaging of individual atoms on the surface of a metal.[15] It operates by taking advantage of the relationship between quantum tunnelling and distance. When the tip of the STM's needle is brought very close to a conduction surface that has a voltage bias, the distance between the needle and the surface can be determined by measuring the current of electrons that are tunnelling between them. By using piezoelectric rods that change in size when voltage is applied to them, the height of the tip can be adjusted to keep the tunnelling current constant. The time-varying voltages that are applied to these rods can be recorded and used to image the surface of the conductor.[15] STMs are accurate to 0.001 nm, or about 1% of an atomic diameter.[18]
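The exponential dependence of tunnelling current on tip–surface distance can be made concrete with a small numerical sketch. The numbers are illustrative assumptions, not values from the article; a barrier height of ~4 eV stands in for a typical metal work function.

```python
import math

# Illustrative values (assumptions, not from the article): a typical metal
# work function of ~4 eV sets the decay constant of the evanescent wave.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # 1 eV in J

phi = 4.0 * EV                            # barrier height ~ work function
kappa = math.sqrt(2 * M_E * phi) / HBAR   # decay constant, 1/m (~1e10)

def relative_current(d):
    """Tunnelling current relative to d = 0: I(d)/I(0) ~ exp(-2*kappa*d)."""
    return math.exp(-2 * kappa * d)

ratio = relative_current(0.1e-9)  # retract the tip by an extra 0.1 nm
```

With these numbers the current falls to roughly 13% of its value for each additional 0.1 nm of tip retraction, which is why holding the current constant yields sub-picometre height resolution.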
## Faster than light
See also: Faster-than-light
It is possible for spin-zero particles to travel faster than the speed of light when tunnelling.[3] This apparently violates the principle of causality, since there will be a frame of reference in which the particle arrives before it has left. However, careful analysis of the transmission of the wave packet shows that there is actually no violation of relativity theory. In 1998, Francis E. Low briefly reviewed the phenomenon of zero-time tunnelling.[20] More recently, experimental tunnelling time data for phonons, photons, and electrons have been published by Günter Nimtz.[21]
## Mathematical discussions of quantum tunnelling
The following subsections discuss the mathematical formulations of quantum tunnelling.
### The Schrödinger equation
The time-independent Schrödinger equation for one particle in one dimension can be written as
$-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \Psi(x) + V(x) \Psi(x) = E \Psi(x)$ or
$\frac{d^2}{dx^2} \Psi(x) = \frac{2m}{\hbar^2} \left( V(x) - E \right) \Psi(x) \equiv \frac{2m}{\hbar^2} M(x) \Psi(x) ,$
where $\hbar$ is the reduced Planck's constant, m is the particle mass, x represents distance measured in the direction of motion of the particle, Ψ is the Schrödinger wave function, V is the potential energy of the particle (measured relative to any convenient reference level), E is the energy of the particle that is associated with motion in the x-axis (measured relative to V), and M(x) is a quantity defined by V(x) - E which has no accepted name in physics.
The solutions of the Schrödinger equation take different forms for different values of x, depending on whether M(x) is positive or negative. When M(x) is constant and negative, then the Schrödinger equation can be written in the form
$\frac{d^2}{dx^2} \Psi(x) = \frac{2m}{\hbar^2} M(x) \Psi(x) = -k^2 \Psi(x),\;\;\;\;\;\; \mathrm{where} \;\;\; k^2=- \frac{2m}{\hbar^2} M.$
The solutions of this equation represent traveling waves, with phase-constant +k or -k. Alternatively, if M(x) is constant and positive, then the Schrödinger equation can be written in the form
$\frac{d^2}{dx^2} \Psi(x) = \frac{2m}{\hbar^2} M(x) \Psi(x) = {\kappa}^2 \Psi(x), \;\;\;\;\;\; \mathrm{where} \;\;\; {\kappa}^2= \frac{2m}{\hbar^2} M.$
The solutions of this equation are rising and falling exponentials in the form of evanescent waves. When M(x) varies with position, the same difference in behaviour occurs, depending on whether M(x) is negative or positive. It follows that the sign of M(x) determines the nature of the medium, with positive M(x) corresponding to medium A as described above and negative M(x) corresponding to medium B. It thus follows that evanescent wave coupling can occur if a region of positive M(x) is sandwiched between two regions of negative M(x), hence creating a potential barrier.
The mathematics of dealing with the situation where M(x) varies with x is difficult, except in special cases that usually do not correspond to physical reality. A discussion of the semi-classical approximate method, as found in physics textbooks, is given in the next section. A full and complicated mathematical treatment appears in the 1965 monograph by Fröman and Fröman noted below. Their ideas have not been incorporated into physics textbooks, but their corrections have little quantitative effect.
### The WKB approximation
Main article: WKB approximation
The wave function is expressed as the exponential of a function:
$\Psi(x) = e^{\Phi(x)}$, where $\Phi''(x) + \Phi'(x)^2 = \frac{2m}{\hbar^2} \left( V(x) - E \right).$
$\Phi'(x)$ is then separated into real and imaginary parts:
$\Phi'(x) = A(x) + i B(x)$, where A(x) and B(x) are real-valued functions.
Substituting the second equation into the first and using the fact that the imaginary part needs to be 0 results in:
$A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2} \left( V(x) - E \right)$.
To solve this equation using the semiclassical approximation, each function must be expanded as a power series in $\hbar$. From the equations, the power series must start with at least an order of $\hbar^{-1}$ to satisfy the real part of the equation; for a good classical limit, starting with the highest possible power of Planck's constant is preferable, which leads to
$A(x) = \frac{1}{\hbar} \sum_{k=0}^\infty \hbar^k A_k(x)$
and
$B(x) = \frac{1}{\hbar} \sum_{k=0}^\infty \hbar^k B_k(x)$,
with the following constraints on the lowest order terms,
$A_0(x)^2 - B_0(x)^2 = 2m \left( V(x) - E \right)$
and
$A_0(x) B_0(x) = 0$.
At this point two extreme cases can be considered.
Case 1: If the amplitude varies slowly compared to the phase, then $A_0(x) = 0$ and
$B_0(x) = \pm \sqrt{ 2m \left( E - V(x) \right) }$
which corresponds to classical motion. Resolving the next order of expansion yields
$\Psi(x) \approx C \frac{ e^{i \int dx \sqrt{\frac{2m}{\hbar^2} \left( E - V(x) \right)} + \theta} }{\sqrt[4]{\frac{2m}{\hbar^2} \left( E - V(x) \right)}}$
Case 2: If the phase varies slowly compared to the amplitude, then $B_0(x) = 0$ and
$A_0(x) = \pm \sqrt{ 2m \left( V(x) - E \right) }$
which corresponds to tunnelling. Resolving the next order of the expansion yields
$\Psi(x) \approx \frac{ C_{+} e^{+\int dx \sqrt{\frac{2m}{\hbar^2} \left( V(x) - E \right)}} + C_{-} e^{-\int dx \sqrt{\frac{2m}{\hbar^2} \left( V(x) - E \right)}}}{\sqrt[4]{\frac{2m}{\hbar^2} \left( V(x) - E \right)}}$
In both cases it is apparent from the denominator that both these approximate solutions are bad near the classical turning points $E = V(x)$. Away from the potential hill, the particle acts similar to a free and oscillating wave; beneath the potential hill, the particle undergoes exponential changes in amplitude. By considering the behaviour at these limits and classical turning points a global solution can be made.
To start, choose a classical turning point, $x_1$ and expand $\frac{2m}{\hbar^2}\left(V(x)-E\right)$ in a power series about $x_1$:
$\frac{2m}{\hbar^2}\left(V(x)-E\right) = v_1 (x - x_1) + v_2 (x - x_1)^2 + \cdots$
Keeping only the first order term ensures linearity:
$\frac{2m}{\hbar^2}\left(V(x)-E\right) = v_1 (x - x_1)$.
Using this approximation, the equation near $x_1$ becomes a differential equation:
$\frac{d^2}{dx^2} \Psi(x) = v_1 (x - x_1) \Psi(x)$.
This can be solved using Airy functions as solutions.
$\Psi(x) = C_A Ai\left( \sqrt[3]{v_1} (x - x_1) \right) + C_B Bi\left( \sqrt[3]{v_1} (x - x_1) \right)$
Taking these solutions for all classical turning points, a global solution can be formed that links the limiting solutions. Given the 2 coefficients on one side of a classical turning point, the 2 coefficients on the other side of a classical turning point can be determined by using this local solution to connect them.
Hence, the Airy function solutions will asymptote into sine, cosine and exponential functions in the proper limits. The relationships between $C,\theta$ and $C_{+},C_{-}$ are
$C_{+} = \frac{1}{2} C \cos{\left(\theta - \frac{\pi}{4}\right)}$
and
$C_{-} = - C \sin{\left(\theta - \frac{\pi}{4}\right)}$
With the coefficients found, the global solution can be found. Therefore, the transmission coefficient for a particle tunnelling through a single potential barrier is
$T = \frac{e^{-2\int_{x_1}^{x_2} dx \sqrt{\frac{2m}{\hbar^2} \left( V(x) - E \right)}}}{ \left( 1 + \frac{1}{4} e^{-2\int_{x_1}^{x_2} dx \sqrt{\frac{2m}{\hbar^2} \left( V(x) - E \right)}} \right)^2}$,
where $x_1,x_2$ are the 2 classical turning points for the potential barrier.
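As a sketch of how this transmission coefficient is evaluated in practice, the following code applies the formula to a hypothetical rectangular barrier (all numerical values are illustrative assumptions, not from the text), approximating the barrier integral with a midpoint rule so that any barrier shape V(x) can be plugged in:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # 1 eV in J

def wkb_transmission(V, E, x1, x2, n=1000):
    """T = exp(-2*g) / (1 + exp(-2*g)/4)^2, where g is the integral of
    sqrt(2m(V(x) - E))/hbar over the barrier, here done by the midpoint rule."""
    dx = (x2 - x1) / n
    g = 0.0
    for i in range(n):
        x = x1 + (i + 0.5) * dx
        g += math.sqrt(2 * M_E * max(V(x) - E, 0.0)) / HBAR * dx
    t = math.exp(-2 * g)
    return t / (1 + t / 4) ** 2

# Hypothetical barrier: 10 eV high, 0.5 nm wide, hit by a 5 eV electron.
T = wkb_transmission(lambda x: 10 * EV, 5 * EV, 0.0, 0.5e-9)
```

For these assumed values T is of order 10⁻⁵: even a sub-nanometre barrier suppresses transmission by many orders of magnitude, consistent with the exponential in the numerator dominating the result.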
## References
1. Serway; Vuille (2008). College Physics 2 (Eighth ed.). Belmont: Brooks/Cole. ISBN 9780495554752.
2. Taylor, J. (2004). Modern Physics for Scientists and Engineers. Prentice Hall. p. 234. ISBN 013805715X.
3. Razavy, Mohsen (2003). Quantum Theory of Tunneling. World Scientific. pp. 4, 462. ISBN 9812564888.
4. Nimtz; Haibel (2008). Zero Time Space. Wiley-VCH. p. 1.
5. Gurney, R. W.; Condon, E. U. (1928). "Quantum Mechanics and Radioactive Disintegration". Nature 122: 439.
6. Gurney, R. W.; Condon, E. U. (1929). "Quantum Mechanics and Radioactive Disintegration". Phys. Rev 33 (2): 127–140. doi:10.1103/PhysRev.33.127.
7. Interview with Hans Bethe by Charles Weiner and Jagdish Mehra at Cornell University, 27 October 1966 accessed 5 April 2010
8. Friedlander, Gerhart; Kennedy, Joseph E.; Miller, Julian Malcolm (1964). Nuclear and Radiochemistry (2nd ed.). New York: John Wiley & Sons. pp. 225–7. ISBN 978-0-471-86255-0.
9. "Quantum Tunneling Time". ASU. Retrieved 2012-01-28.
10. Bjorken and Drell, "Relativistic Quantum Mechanics", page 2. Mcgraw-Hill College, 1965.
11. Lerner; Trigg (1991). Encyclopedia of Physics (2nd ed.). New York: VCH. p. 1308. ISBN 0895737523.
12.
13.
14. Taylor, J. (2004). Modern Physics for Scientists and Engineers. Prentice Hall. p. 479. ISBN 013805715X.
15. Lerner; Trigg (1991). Encyclopedia of Physics (2nd ed.). New York: VCH. pp. 1308–1309. ISBN 0895737523.
16. Krane, Kenneth (1983). Modern Physics. New York: John Wiley and Sons. p. 423. ISBN 0471079634.
17. Knight, R. D. (2004). Physics for Scientists and Engineers: With Modern Physics. Pearson Education. p. 1311. ISBN 0321223691.
18. Ionescu, Adrian M.; Riel, Heike (2011). "Tunnel field-effect transistors as energy-efficient electronic switches". Nature 479 (7373): 329–337. doi:10.1038/nature10679.
19. Low, F. E. (1998). "Comments on apparent superluminal propagation". Annalen der Physik (Leipzig) 7 (7–8): 660–661. doi:10.1002/(SICI)1521-3889(199812)7:7/8<660::AID-ANDP660>3.0.CO;2-0.
20. Nimtz, G. (2011). "Tunneling Confronts Special Relativity". Foundations of Physics 41 (7): 1193–1199. doi:10.1007/s10701-011-9539-2.
## Further reading
• N. Fröman and P.-O. Fröman (1965). JWKB Approximation: Contributions to the Theory. Amsterdam: North-Holland.
• Razavy, Mohsen (2003). Quantum Theory of Tunneling. World Scientific. ISBN 981-238-019-1.
• Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-805326-X.
• James Binney and Skinner, D. (2010). The Physics of Quantum Mechanics: An Introduction (3rd ed.). Cappella Archive. ISBN 1-902918-51-7.
• Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-8714-5.
• Vilenkin, Alexander; Winitzki, Serge (2003). "Particle creation in a tunneling universe". Physical Review D 68 (2): 023520. arXiv:gr-qc/0210034. Bibcode:2003PhRvD..68b3520H. doi:10.1103/PhysRevD.68.023520.
http://en.wikipedia.org/wiki/Kepler_orbit
# Kepler orbit
A diagram of the various forms of the Kepler Orbit and their eccentricities. Blue is a hyperbolic trajectory (e > 1). Green is a parabolic trajectory (e = 1). Red is an elliptical orbit (e < 1). Grey is a circular orbit (e = 0).
For further closely relevant mathematical developments see also Two-body problem, also Gravitational two-body problem, and Kepler problem.
In celestial mechanics, a Kepler orbit describes the motion of an orbiting body as an ellipse, parabola, or hyperbola, which forms a two-dimensional orbital plane in three-dimensional space. (A Kepler orbit can also form a straight line.) It considers only the point-like gravitational attraction of two bodies, neglecting perturbations due to gravitational interactions with other objects, atmospheric drag, solar radiation pressure, a non-spherical central body, and so on. It is thus said to be a solution of a special case of the two-body problem, known as the Kepler problem. As a theory in classical mechanics, it also does not take into account the effects of general relativity. Keplerian orbits can be parametrized into six orbital elements in various ways.
In most applications, there is a large central body, the center of mass of which is assumed to be the center of mass of the entire system. By decomposition, the orbits of two objects of similar mass can be described as Kepler orbits around their common center of mass, their barycenter.
## Introduction
From ancient times until the 16th and 17th centuries, the motions of the planets were believed to follow perfectly circular geocentric paths as taught by the ancient Greek philosophers Aristotle and Ptolemy. Variations in the motions of the planets were explained by smaller circular paths overlaid on the larger path (see epicycle). As measurements of the planets became increasingly accurate, revisions to the theory were proposed. In 1543, Nicolaus Copernicus published a heliocentric model of the solar system, although he still believed that the planets traveled in perfectly circular paths centered on the sun.[citation needed]
### Johannes Kepler
In 1601, Johannes Kepler acquired the extensive, meticulous observations of the planets made by Tycho Brahe. Kepler would spend the next five years trying to fit the observations of the planet Mars to various curves. In 1609, Kepler published the first two of his three laws of planetary motion. The first law states:
"The orbit of every planet is an ellipse with the sun at a focus."
More generally, the path of an object undergoing Keplerian motion may also follow a parabola or a hyperbola, which, along with ellipses, belong to a group of curves known as conic sections. Mathematically, the distance between a central body and an orbiting body can be expressed as:
$r(\nu) = \frac{a(1-e^2)}{1+e\cos(\nu)}$
where:
• $r$ is the distance
• $a$ is the semi-major axis, which defines the size of the orbit
• $e$ is the eccentricity, which defines the shape of the orbit
• $\nu$ is the true anomaly, which is the angle between the current position of the orbiting object and the location in the orbit at which it is closest to the central body (called the periapsis)
Alternatively, the equation can be expressed as:
$r(\nu) = \frac{p}{1+e\cos(\nu)}$
where $p$ is called the semi-latus rectum of the curve. This form of the equation is particularly useful when dealing with parabolic trajectories, for which the semi-major axis is infinite.
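A minimal sketch of evaluating the orbit equation (the element values and units are arbitrary illustrative choices): at periapsis (ν = 0) the radius is a(1 − e), and at apoapsis (ν = π) it is a(1 + e).

```python
import math

def radius(p, e, nu):
    """Orbit equation r = p / (1 + e*cos(nu)) with semi-latus rectum p."""
    return p / (1 + e * math.cos(nu))

# Hypothetical ellipse in arbitrary units: a = 1, e = 0.5, so p = a*(1 - e^2).
a, e = 1.0, 0.5
p = a * (1 - e * e)
r_peri = radius(p, e, 0.0)      # periapsis: a*(1 - e) = 0.5
r_apo = radius(p, e, math.pi)   # apoapsis:  a*(1 + e) = 1.5
```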
Despite developing these laws from observations, Kepler was never able to develop a theory to explain these motions.[1]
### Isaac Newton
Between 1665 and 1666, Isaac Newton developed several concepts related to motion, gravitation and differential calculus. However, these concepts were not published until 1687 in the Principia, in which he outlined his laws of motion and his law of universal gravitation. The second of his three laws of motion states:
The acceleration a of a body is parallel and directly proportional to the net force acting on the body, is in the direction of the net force, and is inversely proportional to the mass of the body:
$\mathbf{F} = m\mathbf{a} = m\frac{d^2\mathbf{r}}{dt^2}$
Where:
• $\mathbf{F}$ is the force vector
• $m$ is the mass of the body on which the force is acting
• $\mathbf{a}$ is the acceleration vector, the second time derivative of the position vector $\mathbf{r}$
Strictly speaking, this form of the equation only applies to an object of constant mass, which holds true based on the simplifying assumptions made below.
The mechanisms of Newton's law of universal gravitation; a point mass m1 attracts another point mass m2 by a force F2 which is proportional to the product of the two masses and inversely proportional to the square of the distance (r) between them. Regardless of masses or distance, the magnitudes of |F1| and |F2| will always be equal. G is the gravitational constant.
Newton's law of gravitation states:
Every point mass attracts every other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between the point masses:
$F = G \frac{m_1 m_2}{r^2}$
where:
• $F$ is the magnitude of the gravitational force between the two point masses
• $G$ is the gravitational constant
• $m_1$ is the mass of the first point mass
• $m_2$ is the mass of the second point mass
• $r$ is the distance between the two point masses
From the laws of motion and the law of universal gravitation, Newton was able to derive Kepler's laws, demonstrating consistency between observation and theory. The laws of Kepler and Newton formed the basis of modern celestial mechanics until Albert Einstein introduced the concepts of special and general relativity in the early 20th century. For most applications, Keplerian motion approximates the motions of planets and satellites to relatively high degrees of accuracy and is used extensively in astronomy and astrodynamics.
## Simplified two body problem
To solve for the motion of an object in a two body system, two simplifying assumptions can be made:
1. The bodies are spherically symmetric and can be treated as point masses.
2. There are no external or internal forces acting upon the bodies other than their mutual gravitation.
The shapes of large celestial bodies are close to spheres. By symmetry, the net gravitational force attracting a mass point towards a homogeneous sphere must be directed towards its centre. The shell theorem (also proven by Isaac Newton) states that the magnitude of this force is the same as if all mass were concentrated in the middle of the sphere, even if the density of the sphere varies with depth (as it does for most celestial bodies). It follows immediately that the attraction between two homogeneous spheres is the same as if each had its mass concentrated at its centre.
Smaller objects, like asteroids or spacecraft, often have a shape strongly deviating from a sphere. But the gravitational forces produced by these irregularities are generally small compared to the gravity of the central body. The difference between an irregular shape and a perfect sphere also diminishes with distance, and most orbital distances are very large when compared with the diameter of a small orbiting body. Thus for some applications, shape irregularity can be neglected without significant impact on accuracy.
Planets rotate at varying rates and thus may take a slightly oblate shape because of the centrifugal force. With such an oblate shape, the gravitational attraction will deviate somewhat from that of a homogeneous sphere. This phenomenon is quite noticeable for artificial Earth satellites, especially those in low orbits. At larger distances the effect of this oblateness becomes negligible. Planetary motions in the Solar System can be computed with sufficient precision if they are treated as point masses.
Two point mass objects with masses $m_1$ and $m_2$ and position vectors $\mathbf{r}_1$ and $\mathbf{r}_2$ relative to some inertial reference frame experience gravitational forces:
$m_1 \ddot{\mathbf{r}}_1 = \frac{-G m_1 m_2}{r^2} \mathbf{\hat{r}}$
$m_2 \ddot{\mathbf{r}}_2 = \frac{G m_1 m_2}{r^2} \mathbf{\hat{r}}$
where $\mathbf{r}$ is the relative position vector of mass 1 with respect to mass 2, expressed as:
$\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$
and $\mathbf{\hat{r}}$ is the unit vector in that direction and $r$ is the length of that vector.
Dividing by their respective masses and subtracting the second equation from the first yields the equation of motion for the acceleration of the first object with respect to the second:
$\ddot{\mathbf{r}} = - \frac{\mu}{r^2} \mathbf{\hat{r}}$
(1)
where $\mu$ is the gravitational parameter and is equal to
$\mu = G(m_1 + m_2)$
In many applications, a third simplifying assumption can be made:
3. When compared to the central body, the mass of the orbiting body is insignificant. Mathematically, m1 >> m2, so μ = G (m1 + m2) ≈ Gm1.
This assumption is not necessary to solve the simplified two body problem, but it simplifies calculations, particularly with Earth-orbiting satellites and planets orbiting the sun. Even Jupiter's mass is less than the sun's by a factor of 1047,[2] which would constitute an error of 0.096% in the value of μ. Notable exceptions include the Earth-moon system (mass ratio of 81.3), the Pluto-Charon system (mass ratio of 8.9) and binary star systems.
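The size of the error introduced by assumption 3 follows from one line of arithmetic, using the Sun/Jupiter mass ratio of about 1047 quoted above:

```python
# Dropping m2 from mu = G*(m1 + m2) changes mu by a fraction m2/(m1 + m2).
# With the Sun/Jupiter mass ratio of ~1047 quoted above:
mass_ratio = 1047.0                        # m1 / m2
relative_error = 1.0 / (mass_ratio + 1.0)  # ~9.5e-4, i.e. about 0.095%
```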
Under these assumptions the differential equation for the two body case can be completely solved mathematically and the resulting orbit which follows Kepler's laws of planetary motion is called a "Kepler orbit". The orbits of all planets are to high accuracy Kepler orbits around the Sun. The small deviations are due to the much weaker gravitational attractions between the planets, and in the case of Mercury, due to general relativity. The orbits of the artificial satellites around the Earth are, with a fair approximation, Kepler orbits with small perturbations due to the gravitational attraction of the sun, the moon and the oblateness of the Earth. In high accuracy applications for which the equation of motion must be integrated numerically with all gravitational and non-gravitational forces (such as solar radiation pressure and atmospheric drag) being taken into account, the Kepler orbit concepts are of paramount importance and heavily used.
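The two-body equation of motion above can also be integrated numerically. The sketch below uses a leapfrog (kick–drift–kick) scheme — an assumed choice, not one prescribed by the text — and checks that the specific angular momentum H = x·v_y − y·v_x, whose constancy the analytic solution below relies on, is indeed conserved.

```python
import math

def integrate_two_body(mu, r0, v0, dt, steps):
    """Leapfrog (kick-drift-kick) integration of r'' = -mu * r/|r|^3 in 2D."""
    x, y = r0
    vx, vy = v0

    def acc(x, y):
        d3 = (x * x + y * y) ** 1.5
        return -mu * x / d3, -mu * y / d3

    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax           # half kick
        vy += 0.5 * dt * ay
        x += dt * vx                  # drift
        y += dt * vy
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax           # half kick
        vy += 0.5 * dt * ay
    return (x, y), (vx, vy)

# Hypothetical units with mu = 1: a circular orbit of radius 1 has speed 1
# and period 2*pi, so 10000 steps of dt = 2*pi/10000 is one full revolution.
mu = 1.0
(x, y), (vx, vy) = integrate_two_body(mu, (1.0, 0.0), (0.0, 1.0),
                                      2 * math.pi / 10000, 10000)
H = x * vy - y * vx  # specific angular momentum, should stay ~1
```

After one revolution the body returns to its starting point; because the kicks are radial and the drifts are along the velocity, the leapfrog scheme conserves H exactly up to rounding, mirroring the conservation argument used in the derivation.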
### Orbital elements
Keplerian orbital elements.
Main article: Orbital elements
Any Keplerian trajectory can be defined by six parameters. The motion of an object moving in three-dimensional space is characterized by a position vector and a velocity vector. Each vector has three components, so the total number of values needed to define a trajectory through space is six. An orbit is generally defined by six elements (known as Keplerian elements) that can be computed from position and velocity, three of which have already been discussed. These elements are convenient in that of the six, five are unchanging for an unperturbed orbit (a stark contrast to two constantly changing vectors). The future location of an object within its orbit can be predicted, and its new position and velocity can be easily obtained from the orbital elements.
Two define the size and shape of the trajectory:
• Semimajor axis ($a\,\!$)
• Eccentricity ($e\,\!$)
Three define the orientation of the orbital plane:
• Inclination ($i\,\!$) defines the angle between the orbital plane and the reference plane.
• Longitude of the ascending node ($\Omega\,\!$) defines the angle between the reference direction and the upward crossing of the orbit on the reference plane (the ascending node).
• Argument of periapsis ($\omega\,\!$) defines the angle between the ascending node and the periapsis.
And finally:
• True anomaly ($\nu$) defines the position of the orbiting body along the trajectory, measured from periapsis. Several alternate values can be used instead of true anomaly, the most common being $M$, the mean anomaly, and $T$, the time since periapsis.
Because $i$, $\Omega$ and $\omega$ are simply angular measurements defining the orientation of the trajectory in the reference frame, they are not strictly necessary when discussing the motion of the object within the orbital plane. They have been mentioned here for completeness, but are not required for the proofs below.
## Mathematical solution of the differential equation (1) above
For the movement under any central force, i.e. a force aligned with $\hat{r}$, the specific relative angular momentum $\bar{H} = \bar{r} \times {\dot{\bar{r}}}$ stays constant:
$\dot {\bar{H}} = \dot {\overbrace{\bar{r} \times {\dot{\bar{r}}}}} = \dot{\bar{r}} \times {\dot{\bar{r}}} + \bar{r} \times {\ddot{\bar{r}}} =\bar{0} + \bar{0} = \bar{0}$
Introducing a coordinate system $\hat{x} \ , \ \hat{y}$ in the plane orthogonal to $\bar{H}$ and polar coordinates
$\bar{r} = r \cdot ( \cos\theta \cdot \hat{x} + \sin \theta \cdot \hat{y})$
we have,
$r^2 \cdot \dot{\theta} = H$
(2)
And the differential equation (1) takes the form (see "Polar coordinates#Vector calculus")
$\ddot{r} - r \cdot {\dot{\theta}}^2 = - \frac {\mu} {r^2}$
(3)
Taking the time derivative of (2) one gets
$\ddot{\theta} = - \frac {2 \cdot H \cdot \dot{r}} {r^3}$
(4)
Using the chain rule for differentiation one gets
$\dot{r} = \frac {dr} {d\theta} \cdot \dot {\theta}$
(5)
$\ddot{r} = \frac {d^2r} {d\theta^2} \cdot {\dot {\theta}}^2 + \frac {dr} {d\theta} \cdot \ddot {\theta}$
(6)
Using the expressions for $\dot{\theta},\ \ddot{\theta},\ \dot{r},\ \ddot{r}$ of equations (2), (4), (5) and (6) all time derivatives in (3) can be replaced by derivatives of r as function of $\theta\,$. After some simplification one gets
$\ddot{r} - r \cdot {\dot{\theta}}^2 = \frac {H^2} {r^4} \cdot \left ( \frac{d^2 r} {d\theta ^2} - 2 \cdot \frac{\left (\frac {dr} {d\theta} \right ) ^2} {r} - r\right )= - \frac {\mu} {r^2}$
(7)
The differential equation (7) can be solved analytically by the variable substitution
$r=\frac{1} {s}$
(8)
Using the chain rule for differentiation one gets:
$\frac {dr} {d\theta} = -\frac {1} {s^2} \cdot \frac {ds} {d\theta}$
(9)
$\frac {d^2r} {d\theta^2} = \frac {2} {s^3} \cdot \left (\frac {ds} {d\theta}\right )^2 - \frac {1} {s^2} \cdot \frac {d^2s} {d\theta^2}$
(10)
Using the expressions (10) and (9) for $\frac {d^2r} {d\theta^2}$ and $\frac {dr} {d\theta}$ one gets
$H^2 \cdot \left ( \frac {d^2s} {d\theta^2} + s \right ) = \mu$
(11)
with the general solution
$s = \frac {\mu} {H^2} \cdot \left ( 1 + e \cdot \cos (\theta-\theta_0)\right )$
(12)
where e and $\theta_0\,$ are constants of integration depending on the initial values for s and $\frac {ds} {d\theta}$.
Instead of using the constant of integration $\theta_0\,$ explicitly one introduces the convention that the unit vectors $\hat{x} \ , \ \hat{y}$ defining the coordinate system in the orbital plane are selected such that $\theta_0\,$ takes the value zero and e is positive. This then means that $\theta\,$ is zero at the point where $s$ is maximal and therefore $r= \frac {1}{s}$ is minimal. Defining the parameter p as $\frac {H^2}{\mu}$ one has that
$r = \frac {1}{s} = \frac {p}{1 + e \cdot \cos \theta}$
(13)
This is the equation in polar coordinates for a conic section with origin in a focal point. The argument $\theta\,$ is called "true anomaly".
For $e\ =\ 0\,$ this is a circle with radius p.
For $0\ < e\ <\ 1\,$ this is an ellipse with
$a = \frac {p}{1-e^2}$
(14)
$b = \frac {p}{\sqrt{1-e^2}} = a \cdot \sqrt{1-e^2}$
(15)
For $e\ =\ 1\,$ this is a parabola with focal length $\frac {p}{2}$
For $e\ >\ 1\,$ this is a hyperbola with
$a = \frac {p}{e^2-1}$
(16)
$b = \frac {p}{\sqrt{e^2-1}} = a \cdot \sqrt{e^2-1}$
(17)
The following image illustrates an ellipse (red), a parabola (green) and a hyperbola (blue)
An elliptic Kepler orbit with an eccentricity of 0.7, a parabolic Kepler orbit and a hyperbolic Kepler orbit with an eccentricity of 1.3. The distance to the focal point is a function of the polar angle relative to the horizontal line as given by the equation (13)
The point on the horizontal line going out to the right from the focal point is the point with $\theta = 0\,$ for which the distance to the focus takes the minimal value $\frac {p}{1 + e}$, the pericentre. For the ellipse there is also an apocentre for which the distance to the focus takes the maximal value $\frac {p}{1 - e}$. For the hyperbola the range for $\theta\,$ is
$\left [ -\cos^{-1}\left(-\frac{1}{e}\right) < \theta < \cos^{-1}\left(-\frac{1}{e}\right)\right ]$
and for a parabola the range is
$\left [ -\pi < \theta < \pi \right ]$
Using the chain rule for differentiation (5), the equation (2) and the definition of p as $\frac {H^2}{\mu}$ one gets that the radial velocity component is
$V_r = \dot{r} = \frac {H}{p} \cdot e \cdot \sin \theta = \sqrt{\frac {\mu}{p}} \cdot e \cdot \sin \theta$
(18)
and that the tangential component (velocity component perpendicular to $V_r$) is
$V_t = r \cdot \dot{\theta} = \frac {H}{r} = \sqrt{\frac {\mu}{p}} \cdot (1 + e \cdot \cos \theta)$
(19)
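These two velocity components can be cross-checked against the vis-viva relation v² = μ(2/r − 1/a), a standard identity not derived here; a quick numerical sketch in arbitrary illustrative units:

```python
import math

def velocity_components(mu, p, e, nu):
    """V_r = sqrt(mu/p)*e*sin(nu), V_t = sqrt(mu/p)*(1 + e*cos(nu))."""
    k = math.sqrt(mu / p)
    return k * e * math.sin(nu), k * (1 + e * math.cos(nu))

# Arbitrary illustrative elements (mu = 1): the speed built from the two
# components must satisfy the vis-viva relation v^2 = mu*(2/r - 1/a).
mu, a, e = 1.0, 1.0, 0.3
p = a * (1 - e * e)
nu = 1.1  # arbitrary true anomaly, radians
vr, vt = velocity_components(mu, p, e, nu)
r = p / (1 + e * math.cos(nu))
vis_viva = mu * (2.0 / r - 1.0 / a)
```

Expanding V_r² + V_t² gives (μ/p)(1 + 2e·cos θ + e²), which equals μ(2/r − 1/a) after substituting r and p = a(1 − e²), so the check holds at every true anomaly.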
The connection between the polar argument $\theta\,$ and time t is slightly different for elliptic and hyperbolic orbits.
For an elliptic orbit one switches to the "eccentric anomaly" E for which
$x = a \cdot (\cos E -e)$
(20)
$y = b \cdot \sin E$
(21)
and consequently
$\dot{x} = -a \cdot \sin E \cdot \dot{E}$
(22)
$\dot{y} = b \cdot \cos E \cdot \dot{E}$
(23)
and the angular momentum H is
$H = x \cdot \dot{y} - y \cdot \dot{x}=a \cdot b \cdot ( 1 - e \cdot \cos E) \cdot \dot{E}$
(24)
Integrating with respect to time t one gets
$H \cdot t = a \cdot b \cdot ( E - e \cdot \sin E)$
(25)
under the assumption that time $t=0$ is selected such that the integration constant is zero.
As by definition of p one has
$H = \sqrt{\mu \cdot p}$
(26)
this can be written
$t = a \cdot \sqrt{\frac{a} {\mu}} ( E - e \cdot \sin E)$
(27)
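The relation above gives t from E in closed form, but going from t back to E (Kepler's equation) must be done numerically. A common approach, sketched here with illustrative values, is Newton's method:

```python
import math

def eccentric_anomaly(t, a, mu, e, tol=1e-12):
    """Invert t = a*sqrt(a/mu)*(E - e*sin(E)) for E by Newton's method."""
    M = t / (a * math.sqrt(a / mu))   # mean anomaly: M = E - e*sin(E)
    E = M if e < 0.8 else math.pi     # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Round trip with illustrative values (mu = a = 1): pick E, compute t, recover E.
a, mu, e = 1.0, 1.0, 0.4
E_true = 2.0
t = a * math.sqrt(a / mu) * (E_true - e * math.sin(E_true))
E_rec = eccentric_anomaly(t, a, mu, e)
```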
For a hyperbolic orbit one uses the hyperbolic functions for the parameterisation
$x = a \cdot (e - \cosh E)$
(28)
$y = b \cdot \sinh E$
(29)
for which one has
$\dot{x} = -a \cdot \sinh E \cdot \dot{E}$
(30)
$\dot{y} = b \cdot \cosh E \cdot \dot{E}$
(31)
and the angular momentum H is
$H = x \cdot \dot{y} - y \cdot \dot{x}=a \cdot b \cdot ( e \cdot \cosh E-1) \cdot \dot{E}$
(32)
Integrating with respect to time t one gets
$H \cdot t= a \cdot b \cdot ( e \cdot \sinh E-E)$
(33)
i.e.
$t = a \cdot \sqrt{\frac{a} {\mu}} (e \cdot \sinh E-E)$
(34)
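As in the elliptic case, recovering E from t requires a numerical inversion; the same Newton iteration works for the hyperbolic relation above (values are illustrative):

```python
import math

def hyperbolic_anomaly(t, a, mu, e, tol=1e-12):
    """Invert t = a*sqrt(a/mu)*(e*sinh(E) - E) for E by Newton's method (e > 1)."""
    M = t / (a * math.sqrt(a / mu))   # here M = e*sinh(E) - E
    E = math.asinh(M / e)             # rough starting guess
    for _ in range(50):
        dE = (e * math.sinh(E) - E - M) / (e * math.cosh(E) - 1.0)
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Round trip with illustrative values, e = 1.3:
a, mu, e = 1.0, 1.0, 1.3
E_true = 1.5
t = a * math.sqrt(a / mu) * (e * math.sinh(E_true) - E_true)
E_rec = hyperbolic_anomaly(t, a, mu, e)
```

The derivative e·cosh E − 1 is bounded below by e − 1 > 0 for a hyperbolic orbit, so the Newton step is always well defined.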
To find the time t that corresponds to a certain true anomaly $\theta\,$ one computes the corresponding parameter E connected to time with relation (27) for an elliptic and with relation (34) for a hyperbolic orbit.
Note that the relations (27) and (34) define a mapping between the ranges
$\left [ -\infty < t < \infty\right ] \longleftrightarrow \left [-\infty < E < \infty \right ]$
## Some additional formulae
See also Equation of the center – Analytical expansions
For an elliptic orbit one gets from (20) and (21) that
$r = a \cdot (1-e \cdot \cos E)$
(35)
and therefore that
$\cos \theta = \frac{x} {r} =\frac{\cos E-e}{1-e \cdot \cos E}$
(36)
From (36) then follows that
$\tan^2 \frac{\theta}{2} = \frac{1-\cos \theta}{1+\cos \theta}= \frac{1-\frac{\cos E-e}{1-e \cdot \cos E}}{1+\frac{\cos E-e}{1-e \cdot \cos E}}= \frac{1-e \cdot \cos E - \cos E+e}{1-e \cdot \cos E + \cos E-e}= \frac{1+e}{1-e} \ \cdot\ \frac{1-\cos E}{1+\cos E}= \frac{1+e}{1-e} \ \cdot\ \tan^2 \frac{E}{2}$
From the geometrical construction defining the eccentric anomaly it is clear that the vectors $(\ \cos E\ ,\ \sin E\ )$ and $(\ \cos \theta\ ,\ \sin \theta\ )$ are on the same side of the x-axis. From this then follows that the vectors $\left( \cos\frac{E}{2}\ ,\ \sin\frac{E}{2} \right)$ and $\left( \cos\frac{\theta}{2}\ ,\ \sin\frac{\theta}{2} \right)$ are in the same quadrant. One therefore has that
$\tan \frac{\theta}{2} = \sqrt{\frac{1+e}{1-e}} \cdot \tan \frac{E}{2}$
(37)
and that
$\theta = 2 \cdot \operatorname{arg}\left(\sqrt{1-e} \ \cdot\ \cos \frac{E}{2}\ ,\ \sqrt{1+e} \ \cdot\ \sin\frac{E}{2}\right)+ n\cdot 2\pi$
(38)
$E = 2 \cdot \operatorname{arg}\left(\sqrt{1+e} \ \cdot\ \cos \frac{\theta}{2}\ ,\ \sqrt{1-e} \ \cdot\ \sin\frac{\theta}{2}\right)+ n\cdot 2\pi$
(39)
where "$\operatorname{arg}(x\ ,\ y)$" is the polar argument of the vector $(\ x\ ,\ y\ )$ and n is selected such that $\left |E -\theta\right| < \pi$
For the numerical computation of $\operatorname{arg}(x\ ,\ y)$ the standard function ATAN2(y,x) (or, in double precision, DATAN2(y,x)), available for example in the programming language FORTRAN, can be used.
Note that this is a mapping between the ranges
$\left [ -\infty < \theta < \infty\right ] \longleftrightarrow \left [-\infty < E < \infty \right ]$
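As a concrete illustration of the ATAN2 remark above, here is a minimal Python sketch (the function name is ours) that maps a true anomaly to the corresponding time for an elliptic orbit: E is computed from the arg-based relation, shifted so that $|E - \theta| < \pi$, and t then follows from Kepler's equation (27).

```python
import math

def time_from_true_anomaly(theta, a, e, mu):
    """Elliptic orbit: time t corresponding to true anomaly theta.

    A minimal sketch. E is obtained from the arg-based relation above,
    with math.atan2 playing the role of ATAN2; the integer n is chosen so
    that |E - theta| < pi; t then follows from Kepler's equation (27).
    """
    # E = 2*arg( sqrt(1+e)*cos(theta/2) , sqrt(1-e)*sin(theta/2) )
    E = 2.0 * math.atan2(math.sqrt(1.0 - e) * math.sin(theta / 2.0),
                         math.sqrt(1.0 + e) * math.cos(theta / 2.0))
    # Shift by n*2*pi so that |E - theta| < pi
    E += 2.0 * math.pi * round((theta - E) / (2.0 * math.pi))
    # Kepler's equation: t = a*sqrt(a/mu)*(E - e*sin(E))
    return a * math.sqrt(a / mu) * (E - e * math.sin(E))
```

For a circular orbit (e = 0) this reduces to $t = a\sqrt{a/\mu}\,\theta$, which gives a quick sanity check.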
For a hyperbolic orbit one gets from (28) and (29) that
$r = a \cdot (e \cdot \cosh E-1)$
(40)
and therefore that
$\cos \theta = \frac{x} {r} =\frac{e-\cosh E}{e \cdot \cosh E-1}$
(41)
As
$\tan^2 \frac{\theta}{2} = \frac{1-\cos\theta}{1+\cos \theta}= \frac{1-\frac{e-\cosh E}{e \cdot \cosh E-1}}{1+\frac{e-\cosh E}{e \cdot \cosh E-1}}= \frac{e \cdot \cosh E - e +\cosh E}{e \cdot \cosh E + e -\cosh E}= \frac{e+1}{e-1}\ \cdot\ \frac{\cosh E-1}{\cosh E+1}= \frac{e+1}{e-1}\ \cdot\ \tanh^2 \frac{E}{2}$
and as $\tan \frac{\theta}{2}$ and $\tanh \frac{E}{2}$ have the same sign it follows that
$\tan \frac{\theta}{2} = \sqrt{\frac{e+1}{e-1}} \cdot \tanh \frac{E}{2}$
(42)
This relation is convenient for passing between "true anomaly" and the parameter E, the latter being connected to time through relation (34). Note that this is a mapping between the ranges
$\left [ -\cos^{-1}\left(-\frac{1}{e}\right) < \theta < \cos^{-1}\left(-\frac{1}{e}\right)\right ] \longleftrightarrow \left [-\infty < E < \infty \right ]$
and that $\frac{E}{2}$ can be computed using the relation
$\tanh ^{-1}x=\frac{1}{2}\ln \left( \frac{1+x}{1-x} \right)$
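The passage between $\theta$ and E for a hyperbolic orbit can be sketched in Python (a minimal illustration; function names are ours, and math.atanh implements the $\tanh^{-1}$ formula just given):

```python
import math

def true_anomaly_from_E(E, e):
    """Forward direction of the tan/tanh relation: theta from E, for e > 1."""
    return 2.0 * math.atan(math.sqrt((e + 1.0) / (e - 1.0)) * math.tanh(E / 2.0))

def E_from_true_anomaly(theta, e):
    """Inverse direction: E from theta, valid for |theta| < acos(-1/e).

    math.atanh computes tanh^-1 x = (1/2) ln((1+x)/(1-x)) as in the text.
    """
    return 2.0 * math.atanh(math.sqrt((e - 1.0) / (e + 1.0)) * math.tan(theta / 2.0))
```

The two functions are mutual inverses on the stated range, which is easy to check by a round trip.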
From relation (27) follows that the orbital period P for an elliptic orbit is
$P = 2\pi \cdot a \cdot \sqrt{\frac{a} {\mu}}$
(43)
As the potential energy corresponding to the force field of relation (1) is
$-\frac {\mu} {r}$
it follows from (13), (14), (18) and (19) that the sum of the kinetic and the potential energy
$\frac{{V_r}^2+{V_t}^2}{2}-\frac {\mu} {r}$
for an elliptic orbit is
$-\frac {\mu} {2 \cdot a}$
(44)
and from (13), (16), (18) and (19) that the sum of the kinetic and the potential energy for a hyperbolic orbit is
$\frac {\mu} {2 \cdot a}$
(45)
Relative to the inertial coordinate system
$\hat{x} \ , \ \hat{y}$
in the orbital plane with $\hat{x}$ towards pericentre one gets from (18) and (19) that the velocity components are
$V_x = \cos \theta \cdot V_r - \sin \theta \cdot V_t = -\sqrt{\frac {\mu}{p}} \cdot \sin \theta$
(46)
$V_y = \sin \theta \cdot V_r + \cos \theta \cdot V_t = \sqrt{\frac {\mu}{p}} \cdot (e +\cos \theta)$
(47)
## Determination of the Kepler orbit that corresponds to a given initial state
This is the "initial value problem" for the differential equation (1) which is a first order equation for the 6-dimensional "state vector" $(\ \bar{r}\ ,\bar{v}\ )$ when written as
$\dot {\bar{v}} = -\mu \cdot \frac {\hat{r}} {r^2}$
(48)
$\dot {\bar{r}} = \bar{v}$
(49)
For any values for the initial "state vector" $(\ \bar{r_0}\ ,\bar{v_0}\ )$ the Kepler orbit corresponding to the solution of this initial value problem can be found with the following algorithm:
Define the orthogonal unit vectors $(\hat{r}\ ,\ \hat{t})$ through
$\bar{r_0} = r \cdot \hat{r}$
(50)
$\bar{v_0} = V_r \cdot \hat{r} + V_t \cdot \hat{t}$
(51)
with $r > 0$ and $V_t > 0$
From (13), (18) and (19) follows that by setting
$p = \frac{{(r \cdot V_t)}^2}{\mu }$
(52)
and by defining $e \ge 0$ and $\theta$ such that
$e \cdot \cos \theta = \frac{V_t} {V_0} - 1$
(53)
$e \cdot \sin \theta = \frac{V_r} {V_0}$
(54)
where
$V_0 = \sqrt{\frac{\mu}{p}}$
(55)
one gets a Kepler orbit that for true anomaly $\theta$ has the same r, $V_r$ and $V_t$ values as those defined by (50) and (51).
If this Kepler orbit also has the same $(\hat{r}\ ,\ \hat{t})$ vectors for this true anomaly $\theta$ as the ones defined by (50) and (51), then the state vector $(\bar{r}\ ,\ \bar{v})$ of the Kepler orbit takes the desired values $(\ \bar{r_0}\ ,\bar{v_0}\ )$ for true anomaly $\theta$.
The standard inertially fixed coordinate system $(\hat{x}\ ,\ \hat{y})$ in the orbital plane (with $\hat{x}$ directed from the centre of the homogeneous sphere to the pericentre) defining the orientation of the conical section (ellipse, parabola or hyperbola) can then be determined with the relation
$\hat{x} = \cos \theta \cdot \hat{r} - \sin \theta \cdot \hat{t}$
(56)
$\hat{y} = \sin \theta \cdot \hat{r} + \cos \theta \cdot \hat{t}$
(57)
Note that relations (53) and (54) have a singularity when $V_r=0$ and
$V_t=V_0=\sqrt{\frac{\mu}{p}}=\sqrt{\frac{\mu}{\frac{{(r \cdot V_t)}^2}{\mu }}}$
i.e.
$V_t=\sqrt{\frac{\mu}{r}}$
(58)
which is the case of a circular orbit fitting the initial state $(\ \bar{r_0}\ ,\bar{v_0}\ )$
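The algorithm above can be sketched in Python for a planar (2-D) state vector. This is a minimal illustration; the vector handling and names are ours, and it follows the relations for $p$, $e\cos\theta$, $e\sin\theta$, $V_0$, $\hat{x}$ and $\hat{y}$ given in this section.

```python
import math

def kepler_orbit_from_state(r_vec, v_vec, mu):
    """Kepler orbit elements from a planar initial state (sketch).

    Returns (p, e, theta, x_hat, y_hat). Assumes a non-degenerate state
    with nonzero transversal velocity; all names are ours.
    """
    r = math.hypot(r_vec[0], r_vec[1])
    r_hat = (r_vec[0] / r, r_vec[1] / r)
    t_hat = (-r_hat[1], r_hat[0])              # r_hat rotated by 90 degrees
    V_r = v_vec[0] * r_hat[0] + v_vec[1] * r_hat[1]
    V_t = v_vec[0] * t_hat[0] + v_vec[1] * t_hat[1]
    if V_t < 0:                                # orient t_hat so that V_t > 0
        t_hat = (-t_hat[0], -t_hat[1])
        V_t = -V_t
    p = (r * V_t) ** 2 / mu                    # p = (r*V_t)^2 / mu
    V0 = math.sqrt(mu / p)                     # V_0 = sqrt(mu/p)
    e_cos = V_t / V0 - 1.0                     # e*cos(theta)
    e_sin = V_r / V0                           # e*sin(theta)
    e = math.hypot(e_cos, e_sin)
    theta = math.atan2(e_sin, e_cos)
    c, s = math.cos(theta), math.sin(theta)
    x_hat = (c * r_hat[0] - s * t_hat[0], c * r_hat[1] - s * t_hat[1])
    y_hat = (s * r_hat[0] + c * t_hat[0], s * r_hat[1] + c * t_hat[1])
    return p, e, theta, x_hat, y_hat
```

A quick check: a state at pericentre of an ellipse with $a=1$, $e=0.5$, $\mu=1$ (so $r = 0.5$, speed $\sqrt{3}$ transversal) should return $p = a(1-e^2) = 0.75$, $e = 0.5$, $\theta = 0$ and $\hat{x} = \hat{r}$.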
## The osculating Kepler orbit
Main article: Osculating orbit
For any state vector $(\bar{r} , \bar{v} )$ the Kepler orbit corresponding to this state can be computed with the algorithm defined above. First the parameters $p , e , \theta$ are determined from $r , V_r , V_t$ and then the orthogonal unit vectors in the orbital plane $\hat{x} , \hat{y}$ using the relations (56) and (57).
If now the equation of motion is
$\ddot {\bar{r}} = \operatorname{\bar{F}}(\bar{r},\dot {\bar{r}},t)$
(59)
where
$\operatorname{\bar{F}}(\bar{r},\dot {\bar{r}},t)$
is a function other than
$-\mu \cdot \frac {\hat{r}} {r^2}$
the resulting parameters
$p ,\, e ,\, \theta,\, \hat{x} ,\, \hat{y}$
defined by $\bar{r},\dot {\bar{r}}$ will all vary with time, as opposed to the case of a Kepler orbit, for which only the parameter $\theta$ varies.
The Kepler orbit computed in this way having the same "state vector" as the solution to the "equation of motion" (59) at time t is said to be "osculating" at this time.
This concept is for example useful in case
$\operatorname{\bar{F}}(\bar{r},\dot {\bar{r}},t)=-\mu \cdot \frac {\hat{r}} {r^2}+\operatorname{\bar{f}}(\bar{r},\dot {\bar{r}},t)$
where
$\operatorname{\bar{f}}(\bar{r},\dot {\bar{r}},t)$
is a small "perturbing force" due to for example a faint gravitational pull from other celestial bodies. The parameters of the osculating Kepler orbit will then only slowly change and the osculating Kepler orbit is a good approximation to the real orbit for a considerable time period before and after the time of osculation.
This concept can also be useful for a rocket during powered flight, as it then tells which Kepler orbit the rocket would continue on if the thrust were switched off.
For a "close to circular" orbit the concept "eccentricity vector" defined as $\bar{e}=e \cdot \hat{x}$ is useful. From (53), (54) and (56) follows that
$\bar{e}=\frac{(V_t-V_0) \cdot \hat{r} - V_r \cdot \hat{t}}{V_0}$
(60)
i.e. $\bar{e}$ is a smooth differentiable function of the state vector $( \bar{r} ,\bar{v} )$, even if this state corresponds to a circular orbit.
## Citations
1. Bate, Mueller, White. pp 177–181
## References
• El'Yasberg "Theory of flight of artificial earth satellites", Israel program for Scientific Translations (1967)
• Bate, Roger; Mueller, Donald; White, Jerry (1971). Fundamentals of Astrodynamics. Dover Publications, Inc., New York. ISBN 0-486-60061-0.
http://math.stackexchange.com/questions/259025/computing-fourier-series
# Computing Fourier Series
$$f(x)=\sin|x|$$
Is it possible to compute the Fourier Series of $f(x)$? It seems that it would admit a Fourier cosine representation because it is even (by looking at the graph), but the periodicity is a little strange. Is this even possible?
– Amzoti Dec 15 '12 at 1:36
On what interval do you want it? Perhaps $[0,2\pi]\,$ ? This is an even function, so its Fourier series shouldn't be that hard to evaluate... – DonAntonio Dec 15 '12 at 1:37
I think $[0, 2\pi]$. – Alex Dec 15 '12 at 1:40
It seems that it'll just be the Fourier series of $\sin(x)$ on $[0,2\pi]$ and the Fourier series of $-\sin(x)$ on $[-2\pi,0]$. Can anyone confirm this? – Alex Dec 15 '12 at 3:12
Why would you do that? The function is even so it's the same in both intervals you mention: no need to work twice – DonAntonio Dec 15 '12 at 20:28
## 1 Answer
$$c_0:=\frac{1}{2\pi}\int_0^{2\pi}\sin x\,dx=\left.-\frac{1}{2\pi}\,(\cos x)\right|_0^{2\pi}=0$$
$$\forall\,\,n\neq \pm \,1\,\,,\,c_n:=\frac{1}{2\pi}\int_0^{2\pi}\sin x\,e^{-inx}\,dx=\frac{1}{4\pi i}\int_0^{2\pi}\left(e^{ix}-e^{-ix}\right)e^{-inx}\,dx=$$
$$=\frac{1}{4\pi i}\int_0^{2\pi}\left(e^{ix(1-n)}-e^{-ix(1+n)}\right)dx=\frac{1}{4\pi i}\left[\frac{1}{i(1-n)}e^{ix(1-n)}+\frac{1}{i(1+n)}e^{ix(1+n)}\right]_0^{2\pi}=$$
$$\frac{1}{4\pi i}\left[\frac{1}{i(1-n)}\left(e^{2\pi i(1-n)}-1\right)+\frac{1}{i(1+n)}\left(e^{2\pi i(1+n)}-1\right)\right]=0$$
$$c_{-1}=\frac{1}{2\pi}\int_0^{2\pi}\sin x\,e^{ix}\,dx=\frac{1}{4\pi i}\int_0^{2\pi}\left(e^{2ix}-1\right)dx=\frac{1}{4\pi i}\left(-2\pi\right)=\frac{i}{2}$$
$$c_1=\frac{1}{2\pi}\int_0^{2\pi}\sin x\,e^{-ix}\,dx=\frac{1}{4\pi i}\int_0^{2\pi}\left(1-e^{-2ix}\right)dx=\frac{1}{4\pi i}2\pi=-\frac{i}{2}$$
Thus, in a pretty expected and even boring way, we get:
$$\sin x=c_{-1}e^{-ix}+c_1e^{ix}=\frac{i}{2}\left(e^{-ix}-e^{ix}\right)=\frac{e^{ix}-e^{-ix}}{2i}$$
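As a quick numerical sanity check of these coefficients, one can approximate $c_n$ by an equispaced Riemann sum (a sketch of ours; for a trigonometric polynomial such a sum is exact up to floating-point error):

```python
import cmath, math

def fourier_coefficient(n, samples=4096):
    """Approximate c_n = (1/(2*pi)) * integral_0^{2*pi} sin(x) e^{-inx} dx.

    Equispaced Riemann sum; since sin(x) e^{-inx} is a trigonometric
    polynomial, the sum reproduces the exact coefficient to rounding error.
    """
    total = 0j
    for k in range(samples):
        x = 2.0 * math.pi * k / samples
        total += math.sin(x) * cmath.exp(-1j * n * x)
    return total / samples
```

This returns $c_1 \approx -i/2$, $c_{-1} \approx i/2$ and (numerically) zero for every other n, matching the computation above.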
http://mathhelpforum.com/discrete-math/141130-analytic-pointset-baire-space.html
# Thread:
1. ## analytic pointset, Baire space
Prove that the inverse image $g^{-1}[A]$ of an analytic pointset $A$ by a continuous function $g : \mathcal{N} \rightarrow \mathcal{N}$ is analytic.
Hint. Aim for an equivalence of the form $y \in g^{-1}[A] \text{ } \Leftrightarrow \text{ }(\exists x) \text{ } [y=f(\rho_1(x))=g(\rho_2(x))]$ where $f$ is continuous and $\rho_n$ are defined by $\rho_n(z) = (i \mapsto z(\rho(n, i)))$, and then use the result that says:
If $f, g : \mathcal{N} \rightarrow \mathcal{N}$ are continuous functions, then the set $E=\{ x | f(x)=g(x) \}$ of points on which they agree is closed.
I am not sure how to prove this. I would appreciate a few hints or suggestions. In our book, $\mathcal{N}$ denotes the Baire space. Also, subsets of the Baire space are called pointsets. Thanks in advance.
2. This is a problem from Moschovakis' book, problem x10.5, p. 153.
In the hint the author refers to the proof of another theorem on p. 153.
The same problem was posted here:
Re: analytic pointset, Baire space (In this thread Henno Brandsma gave a pointer to a proof of this fact in Jech's set theory.)
S.O.S. Mathematics CyberBoard :: View topic - analytic pointset, Baire space
http://www.mymathforum.com/viewtopic...c453858c85d8f6
http://www.mathhelpforum.com/math-he...ire-space.html
Art of Problem Solving • View topic - analytic pointset, Baire space. In this post I tried to explain the hint in the way it is given in the book (since I do not find the context given in the post sufficient).
(I'm posting these links in order to save the time of the helpers - in case the problem will be solved in one of the forums.)
http://stats.stackexchange.com/questions/24681/what-is-the-decision-theoretic-justification-for-bayesian-credible-interval-proc
# What is the decision-theoretic justification for Bayesian credible interval procedures?
(To see why I wrote this, check the comments below my answer to this question.)
## Type III errors and statistical decision theory
Giving the right answer to the wrong question is sometimes called a Type III error. Statistical decision theory is a formalization of decision-making under uncertainty; it provides a conceptual framework that can help one avoid type III errors. The key element of the framework is called the loss function. It takes two arguments: the first is (the relevant subset of) the true state of the world (e.g., in parameter estimation problems, the true parameter value $\theta$); the second is an element in the set of possible actions (e.g., in parameter estimation problems, the estimate $\hat{\theta}$). The output models the loss associated with every possible action with respect to every possible true state of the world. For example, in parameter estimation problems, some well-known loss functions are:
• the absolute error loss $L(\theta, \hat{\theta}) = |\theta - \hat{\theta}|$
• the squared error loss $L(\theta, \hat{\theta}) = (\theta - \hat{\theta})^2$
• Hal Varian's LINEX loss $L(\theta, \hat{\theta}; k) = \exp(k(\theta - \hat{\theta})) - k(\theta - \hat{\theta}) - 1,\text{ } k \ne0$
## Examining the answer to find the question
There's a case one might attempt to make that type III errors can be avoided by focusing on formulating a correct loss function and proceeding through the rest of the decision-theoretic approach (not detailed here). That's not my brief – after all, statisticians are well equipped with many techniques and methods that work well even though they are not derived from such an approach. But the end result, it seems to me, is that the vast majority of statisticians don't know and don't care about statistical decision theory, and I think they're missing out. To those statisticians, I would argue that the reason they might find statistical decision theory valuable for avoiding Type III errors is that it provides a framework in which to ask of any proposed data analysis procedure: what loss function (if any) does the procedure cope with optimally? That is, in what decision-making situation, exactly, does it provide the best answer?
## Posterior expected loss
From a Bayesian perspective, the loss function is all we need. We can pretty much skip the rest of decision theory -- almost by definition, the best thing to do is to minimize posterior expected loss, that is, find the action $a$ that minimizes $\tilde{L}(a) = \int_{\Theta}L(\theta, a)p(\theta|D)d\theta$.
(And as for non-Bayesian perspectives? Well, it is a theorem of frequentist decision theory -- specifically, Wald's Complete Class Theorem -- that the optimal action will always be to minimize Bayesian posterior expected loss with respect to some (possibly improper) prior. The difficulty with this result is that it is an existence theorem that gives no guidance as to which prior to use. But it fruitfully restricts the class of procedures that we can "invert" to figure out exactly which question it is that we're answering. In particular, the first step in inverting any non-Bayesian procedure is to figure out which (if any) Bayesian procedure it replicates or approximates.)
## Hey Cyan, you know this is a Q&A site, right?
Which brings me – finally – to a statistical question. In Bayesian statistics, when providing interval estimates for univariate parameters, two common credible interval procedures are the quantile-based credible interval and the highest posterior density credible interval. What are the loss functions behind these procedures?
Very nice. But are they the only loss functions justifying these procedures? – guest Mar 16 '12 at 5:28
@Cyan>> Thanks for asking and answering the question for me :) I will read all this and upvote whenever possible. – Stéphane Laurent Mar 16 '12 at 10:53
@guest>> Assuming differentiability of $\tilde{L}(a)$, what you're asking, more-or-less, is: what does $\text{"}a \text{ satisfies some condition} \Rightarrow \frac{d}{da}\tilde{L}(a)=0 \text{"}$ imply about $\tilde{L}(a)$? Sad to say, that question is currently above my pay grade. – Cyan Mar 16 '12 at 18:41
Interesting quote from Berger's Statistical decision theory and Bayesian analysis: "we do not view credible sets as having a clear decision-theoretic role, and are therefore leery of 'optimality' approaches to selection of a credible set" – Simon Byrne Mar 18 '12 at 10:40
@Simon Byrne>> 1985 was a long time ago; I wonder if he still thinks that. – Cyan Mar 18 '12 at 17:34
## 1 Answer
In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b),\text{ } a \le b$.
## Highest posterior density intervals
Let the posterior density be $f(\theta)$. The highest posterior density intervals correspond to the loss function that penalizes an interval that fails to contain the true value and also penalizes intervals in proportion to their length:
$L_{HPD}(\theta, (a, b); k) = I(\theta \notin [a, b]) + k(b - a), \text{ } 0 < k \le \max_{\theta} f(\theta)$,
where $I(\cdot)$ is the indicator function. This gives the expected posterior loss
$\tilde{L}_{HPD}((a, b); k) = 1 - \Pr(a \le \theta \le b|D) + k(b - a)$.
Setting $\frac{\partial}{\partial a}\tilde{L}_{HPD} = \frac{\partial}{\partial b}\tilde{L}_{HPD} = 0$ yields the necessary condition for a local optimum in the interior of the parameter space: $f(a) = f(b) = k$, exactly the rule for HPD intervals, as expected.
The form of $\tilde{L}_{HPD}((a, b); k)$ gives some insight into why HPD intervals are not invariant to a monotone increasing transformation $g(\theta)$ of the parameter. The $\theta$-space HPD interval transformed into $g(\theta)$ space is different from the $g(\theta)$-space HPD interval because the two intervals correspond to different loss functions: the $g(\theta)$-space HPD interval corresponds to a transformed length penalty $k(g(b) - g(a))$.
## Quantile-based credible intervals
Consider point estimation with the loss function
$L_q(\theta, \hat{\theta};p) = (1-p)(\hat{\theta} - \theta)I(\theta < \hat{\theta}) + p(\theta - \hat{\theta})I(\theta \ge \hat{\theta}), \text{ } 0 \le p \le 1$
(the check-loss convention: the weight $1-p$ falls on overestimation so that the minimizer comes out as the $p$-quantile below).
The posterior expected loss is
$\tilde{L}_q(\hat{\theta};p)=(1-p)\int_{-\infty}^{\hat{\theta}}(\hat{\theta}-\theta)\,p(\theta|D)\,d\theta + p\int_{\hat{\theta}}^{\infty}(\theta-\hat{\theta})\,p(\theta|D)\,d\theta$.
Setting $\frac{d}{d\hat{\theta}}\tilde{L}_q=0$ (the boundary terms vanish) yields the implicit equation
$\Pr(\theta < \hat{\theta}|D) = p$,
that is, the optimal $\hat{\theta}$ is the $(100p)$% quantile of the posterior distribution, as expected.
Thus to get quantile-based interval estimates, the loss function is
$L_{qCI}(\theta, (a,b); p_L, p_U) = L_q(\theta, a;p_L) + L_q(\theta, b;p_U)$.
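To see these loss functions in action, here is a small Python sketch that brute-forces the HPD expected loss on a grid for an assumed toy posterior, an Exponential(1). The posterior, the grid, the value of k, and all names are illustrative choices of mine, not from the question.

```python
import math

# Assumed toy posterior: Exponential(1), density exp(-t) for t >= 0
def pdf(t): return math.exp(-t)
def cdf(t): return 1.0 - math.exp(-t)

def hpd_interval(k, grid_max=12.0, n=600):
    """Minimize the HPD expected loss 1 - Pr(a <= theta <= b | D) + k*(b - a)
    by exhaustive search over a grid of candidate endpoints (a, b)."""
    grid = [grid_max * i / n for i in range(n + 1)]
    best = (float("inf"), 0.0, 0.0)
    for i, a in enumerate(grid):
        for b in grid[i:]:
            loss = 1.0 - (cdf(b) - cdf(a)) + k * (b - a)
            if loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

# For this monotone decreasing density the interior condition f(a) = f(b) = k
# cannot hold; the optimum sits on the boundary a = 0 with f(b) = k, i.e.
# b = -ln(k). With k = 0.2 that is b ~ 1.609.
a, b = hpd_interval(k=0.2)

# Quantile-based 95% interval for contrast: [F^-1(0.025), F^-1(0.975)],
# with F^-1(p) = -ln(1 - p) for this posterior
q_lo, q_hi = -math.log(1.0 - 0.025), -math.log(1.0 - 0.975)
```

The two intervals differ markedly for this skewed posterior, which is exactly the point of spelling out the loss functions: they answer different decision problems.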
Another way to motivate this is to re-write the loss function as a (weighted) sum of the width of the interval plus the distance, if any, by which the interval fails to cover the true $\theta$. – guest Mar 16 '12 at 5:31
http://mathhelpforum.com/algebra/77769-projectile.html
# Thread:
1. ## projectile
a projectile is shot upwards with an initial velocity of $30m/s$; its height at time t is given by $h=30t-4.9t^2$. During what period of time is the projectile more than $40m$ above the ground? Express your answer rounded to the nearest hundredth.
my thinking: $40=30t-4.9t^2$ but can't go anywhere from there
2. Originally Posted by william
a projectile is shot upwards with an initial velocity of $30m/s$; its height at time t is given by $h=30t-4.9t^2$. During what period of time is the projectile more than $40m$ above the ground? Express your answer rounded to the nearest hundredth.
my thinking: $40=30t-4.9t^2$ but can't go anywhere from there
So $4.9t^2-30t+40=0$
Use the quadratic formula: $t=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$
$a=4.9$ $b=-30$ $c=40$
3. Originally Posted by JeWiSh
So $4.9t^2-30t+40=0$
Use the quadratic formula: $t=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$
$a=4.9$ $b=-30$ $c=40$
so it was above 40m between 1.9 and 4.1 seconds?
4. Originally Posted by william
so it was above 40m between 1.9 and 4.1 seconds?
Correct
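For the record, carrying the quadratic formula through numerically and rounding to the nearest hundredth as the problem asks gives crossing times of 1.96 s and 4.16 s (a quick Python check):

```python
# Solve 4.9 t^2 - 30 t + 40 = 0 (the thread's equation) with the quadratic formula
a, b, c = 4.9, -30.0, 40.0
disc = b * b - 4.0 * a * c                 # discriminant = 116
t1 = (-b - disc ** 0.5) / (2.0 * a)        # projectile rises past 40 m
t2 = (-b + disc ** 0.5) / (2.0 * a)        # projectile falls back below 40 m
print(round(t1, 2), round(t2, 2))          # 1.96 4.16
```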
http://scicomp.stackexchange.com/questions/3528/algorithms-for-large-sparse-integer-matrices
# Algorithms for Large Sparse Integer Matrices
I'm looking for a library that performs matrix operations on large sparse matrices w/o sacrificing numerical stability. Matrices will be 1000+ by 1000+ and values of the matrix will be between 0 and 1000. I will be performing the index calculus algorithm so I will be generating (sparse) row vectors of the matrix serially. As I develop each row, I will need to test for linear independence. Once I fill my matrix with the desired number of linearly independent vectors, I will then need to transform the matrix into reduced row echelon form.
The problem now is that my implementation uses Gaussian elimination to determine linear independence (ensuring row echelon form once all my row vectors have been found). However, given the density and size of the matrix, this means the entries in each new row become exponentially larger over time, as the lcm of the leading entries must be found in order to perform cancellation. Finding the reduced form of the matrix further exacerbates the problem.
So my question is, is there an algorithm, or better yet an implementation, that can test linear independence and solve the reduced row echelon form while keeping the entries as small as possible? An efficient test for linear independence is especially important since in the index calculus algorithm it is performed by far the most.
## 1 Answer
You can work modulo a number of large primes to get the results modulo these primes, then check whether there are rationals with few enough digits satisfying these congruences. If yes, you can check by a matrix-vector multiply whether the approximation found is exact. This can be turned into an exact decision algorithm.
However, if the determinant of the matrix has a size of the order of $10^{1000}$ (quite possible in your scenario), you'll generically not find solutions whose components need fewer than a few thousand digits.
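As a concrete sketch of the modular approach, the linear-independence test can be run as Gaussian elimination over a finite field, which keeps every entry below the prime p. This is an illustrative Python implementation of mine, not a library routine; full rank modulo p certifies independence over the rationals, while a rank drop modulo a single prime must be re-checked with other primes (an unlucky prime can hide independence).

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix modulo the prime p via Gaussian elimination.

    Row reduction mod p avoids the coefficient blow-up of exact integer
    elimination: every intermediate entry stays in [0, p).
    """
    m = [[x % p for x in row] for row in rows]
    rank, ncols = 0, len(m[0]) if m else 0
    for col in range(ncols):
        # Find a row at or below `rank` with a nonzero entry in this column
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], -1, p)       # modular inverse (Python 3.8+)
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(x - f * y) % p for x, y in zip(m[r], m[rank])]
        rank += 1
        if rank == len(m):
            break
    return rank
```

Appending a candidate row and checking whether the rank increases gives exactly the serial independence test described in the question, at mod-p precision.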
http://mathhelpforum.com/discrete-math/98859-combination.html
# Thread:
1. ## combination
"A question paper is divided into two groups, each containing 5 questions. A candidate is required to solve 6 questions in all but he is not permitted to attempt more than 4 questions from each group. Find the number of ways in which the candidate can select the question."
I thought he could either choose 3 from each group, or choose 4 from one group and 2 from the other. (5C3)(5C3)+(5C4)(5C2)=150 but the answer is supposed to be 200. I thought of multiplying 150 by 2 because choosing three from group 1 and three from group 2 is distinct from doing it the other way around. But that's 300, and the answer is supposed to be 200. Please assist.
2. Originally Posted by Pew³
"A question paper is divided into two groups, each containing 5 questions. A candidate is required to solve 6 questions in all but he is not permitted to attempt more than 4 questions from each group. Find the number of ways in which the candidate can select the question."
I thought he could either choose 3 from each group, or choose 4 from one group and 2 from the other. (5C3)(5C3)+(5C4)(5C2)=150 but the answer is supposed to be 200. I thought of multiplying 150 by 2 because choosing three from group 1 and three from group 2 is distinct from doing it the other way around. But that's 300, and the answer is supposed to be 200. Please assist.
The number of choices is ${5\choose 4}{5\choose2}+{5\choose 3}{5\choose 3}+{5\choose 2}{5\choose 4}=200$ since you can choose:
4 Q's from Group 1 and 2 Q's from Group 2 OR
3 Q's from Group 1 and 3 Q's from Group 2 OR
2 Q's from Group 1 and 4 Q's from Group 2
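The three cases above can be checked in one line (Python, using math.comb):

```python
from math import comb  # Python 3.8+

# j questions from Group 1 and 6 - j from Group 2; "at most 4 from each group"
# forces j in {2, 3, 4} -- exactly the three cases listed above
total = sum(comb(5, j) * comb(5, 6 - j) for j in range(2, 5))
print(total)  # 200
```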
http://www.chemeurope.com/en/encyclopedia/Atomic_mass.html
# Atomic mass
The atomic mass (m_a) is the mass of an atom at rest, most often expressed in unified atomic mass units.[1] The atomic mass may be considered to be the total mass of protons, neutrons and electrons in a single atom (when the atom is motionless). The atomic mass is sometimes incorrectly used as a synonym of relative atomic mass, average atomic mass and atomic weight; however, these differ subtly from the atomic mass. The atomic mass is defined as the mass of an atom, which can only be one isotope at a time and is not an abundance-weighted average. In the case of many elements that have one dominant isotope the actual numerical difference between the atomic mass of the most common isotope and the relative atomic mass or standard atomic weights can be very small such that it does not affect most bulk calculations, but such an error can be critical when considering individual atoms. For elements with more than one common isotope the difference even to the most common atomic mass can be half a mass unit or more (e.g. chlorine). The atomic mass of an uncommon isotope can differ from the relative atomic mass or standard atomic weight by several mass units.
The relative atomic mass (A_r) (also known as atomic weight and average atomic mass) is the average of the atomic masses of all the chemical element's isotopes as found in a particular environment, weighted by isotopic abundance.[2] This is frequently used as a synonym for the standard atomic weight, and it is not incorrect to do so since the standard atomic weights are relative atomic masses, although it is less specific. Relative atomic mass also refers to non-terrestrial environments and highly specific terrestrial environments that deviate from the average or have different certainties (number of significant figures) than the standard atomic weights.
The standard atomic weight refers to the mean relative atomic mass of an element in the local environment of the Earth's crust and atmosphere as determined by the IUPAC Commission on Atomic Weights and Isotopic Abundances.[3] These are the values included in a standard periodic table and are what is used in most bulk calculations. An uncertainty in brackets is included which often reflects natural variability in isotopic distribution rather than uncertainty in measurement.[4] For synthetic elements the isotope formed depends on the means of synthesis, so the concept of natural isotope abundance has no meaning. Therefore, for synthetic elements the total nucleon count of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets in place of the standard atomic weight. Lithium represents a unique case where the natural abundances of the isotopes have been perturbed by human activities to the point of affecting the uncertainty in its standard atomic weight, even in samples obtained from natural sources such as rivers.
The relative isotopic mass is the mass of an isotope relative to carbon-12, which is taken as exactly 12. No other isotope has a whole-number mass, due to the different masses of neutrons and protons and to the loss or gain of mass as binding energy. However, since the mass defect due to binding energy is minimal compared to the mass of a nucleon, rounding the atomic mass of an isotope gives the total nucleon count (the mass number). The neutron count can then be derived by subtracting the atomic number.
## Mass defects in atomic masses
The pattern in the amounts by which atomic masses deviate from their mass numbers is as follows: the deviation starts positive at hydrogen-1, becomes negative until a minimum is reached at iron-56, iron-58 and nickel-62, then increases to positive values in the heavy isotopes with increasing atomic number. This corresponds to the following: nuclear fission of an element heavier than iron produces energy, while fission of any element lighter than iron requires energy. The opposite is true of nuclear fusion reactions: fusion of elements lighter than iron produces energy, and fusion of elements heavier than iron requires energy.
## Measurement of atomic masses
Direct comparison and measurement of the masses of atoms is achieved with mass spectrometry.
## Conversion factor between atomic mass units and grams
The standard scientific unit for dealing with atoms in macroscopic quantities is the mole (mol), which is defined as the amount of a substance containing as many atoms or other units as there are atoms in 12 grams of the carbon isotope C-12. The number of atoms in a mole is called Avogadro's number, whose value is approximately 6.022 × 10²³ mol⁻¹. One mole of a substance always has a mass almost exactly equal to its relative atomic mass expressed in grams (this is the concept of molar mass); however, this is almost never true for the atomic mass of a single isotope. For example, the standard atomic weight of iron is 55.847, and therefore one mole of iron as commonly found on Earth has a mass of 55.847 grams. The atomic mass of the 56Fe isotope is 55.935 u, and one mole of 56Fe would in theory weigh 55.935 g, but such amounts of pure 56Fe have never existed.
The formulaic conversion between atomic mass and SI mass in grams for a single atom is:
$m_{\rm{grams}}={m_{\rm{u}} \over N_{A}}$
where $m_{\rm{u}}$ is the atomic mass in unified atomic mass units and $N_A$ is Avogadro's number.
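A minimal numeric sketch of this conversion, assuming the modern defined value of Avogadro's number and reusing the 56Fe mass from the example above:

```python
AVOGADRO = 6.02214076e23  # mol^-1, the defined value of Avogadro's number

def u_to_grams(mass_in_u):
    """Convert an atomic mass in unified atomic mass units to grams.

    Numerically, m_grams = m_u / N_A, since one mole of particles of
    mass m_u (in u) has a mass of m_u grams.
    """
    return mass_in_u / AVOGADRO

mass_fe56 = u_to_grams(55.935)  # mass of a single 56Fe atom
print(mass_fe56)  # roughly 9.29e-23 grams
```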
## Relationship between atomic and molecular masses
Similar definitions apply to molecules. One can compute the molecular mass of a compound by adding the atomic masses of its constituent atoms (nuclides). One can compute the molar mass of a compound by adding the relative atomic masses of the elements given in the chemical formula. In both cases the multiplicity of the atoms (the number of times it occurs) must be taken into account, usually by multiplication of each unique mass by its multiplicity.
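The multiplicity bookkeeping described above can be sketched as follows. The dictionary holds standard atomic weights as tabulated in this article; the function itself is an illustrative sketch, not a library API:

```python
# Standard atomic weights (g/mol), values as in the table of standard atomic weights
ATOMIC_WEIGHTS = {"H": 1.008, "C": 12.01, "N": 14.01, "O": 16.00, "S": 32.07}

def molar_mass(composition):
    """Molar mass of a compound given as {element_symbol: multiplicity}.

    Each unique atomic weight is multiplied by its multiplicity, then summed.
    """
    return sum(ATOMIC_WEIGHTS[element] * count
               for element, count in composition.items())

water = {"H": 2, "O": 1}          # H2O
glucose = {"C": 6, "H": 12, "O": 6}  # C6H12O6
print(molar_mass(water))    # 2*1.008 + 16.00 = 18.016 g/mol
print(molar_mass(glucose))  # 180.156 g/mol with these table values
```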
## History
In the history of chemistry, the first scientists to determine atomic weights were John Dalton, between 1803 and 1805, and Jöns Jakob Berzelius, between 1808 and 1826. Atomic weight was originally defined relative to that of the lightest element, hydrogen, taken as 1.00. In the 1860s, Stanislao Cannizzaro refined atomic weights by applying Avogadro's law. He formulated a law to determine atomic weights of elements: the different quantities of the same element contained in different molecules are all whole multiples of the atomic weight; and he determined atomic weights and molecular weights by comparing the vapor density of a collection of gases with molecules containing one or more of the chemical element in question.[5]
In the early twentieth century, up until the 1960s, chemists and physicists used two different atomic mass scales. Chemists used a scale such that the natural mixture of oxygen isotopes had an atomic mass of 16, while physicists assigned the same number 16 to the atomic mass of the most common oxygen isotope (containing eight protons and eight neutrons). However, because oxygen-17 and oxygen-18 are also present in natural oxygen, this led to two different tables of atomic mass. The unified scale based on carbon-12, 12C, met the physicists' need to base the scale on a pure isotope, while being numerically close to the old chemists' scale.
The term atomic weight is slowly being phased out and replaced by relative atomic mass in most current usage. The history of this shift in nomenclature reaches back to the 1960s and has been the source of much debate in the scientific community. The debate was largely created by the adoption of the unified atomic mass unit and the realization that weight was in some ways an inappropriate term. The argument for keeping the term "atomic weight" was primarily that it was a well-understood term to those in the field, that the term "atomic mass" was already in use (as it is currently defined), and that the term "relative atomic mass" was in some ways redundant. In 1979, as a compromise, the definition was refined and the term "relative atomic mass" was introduced as a secondary synonym. Twenty years later the primacy of these synonyms was reversed, and "relative atomic mass" is now the preferred term; however, the "standard atomic weights" have kept the same name.[6]
## Table of standard atomic weights
Values in square brackets are the mass numbers of the longest-lived isotope of elements with no stable isotopes. (The original table also used colors to indicate element category, state at standard temperature and pressure (0 °C and 1 atm), and origin; that information is not reproduced here.)

Period 1: H 1.008, He 4.003

Period 2: Li 6.941, Be 9.012, B 10.81, C 12.01, N 14.01, O 16.00, F 19.00, Ne 20.18

Period 3: Na 22.99, Mg 24.31, Al 26.98, Si 28.09, P 30.97, S 32.07, Cl 35.45, Ar 39.95

Period 4: K 39.10, Ca 40.08, Sc 44.96, Ti 47.87, V 50.94, Cr 52.00, Mn 54.94, Fe 55.84, Co 58.93, Ni 58.69, Cu 63.55, Zn 65.39, Ga 69.72, Ge 72.61, As 74.92, Se 78.96, Br 79.90, Kr 83.80

Period 5: Rb 85.47, Sr 87.62, Y 88.91, Zr 91.22, Nb 92.91, Mo 95.94, Tc [99], Ru 101.07, Rh 102.91, Pd 106.42, Ag 107.87, Cd 112.41, In 114.82, Sn 118.71, Sb 121.76, Te 127.60, I 126.90, Xe 131.29

Period 6: Cs 132.91, Ba 137.33, La–Lu (lanthanides, below), Hf 178.49, Ta 180.95, W 183.84, Re 186.21, Os 190.23, Ir 192.22, Pt 195.08, Au 196.97, Hg 200.59, Tl 204.38, Pb 207.2, Bi 208.98, Po [209], At [210], Rn [222]

Period 7: Fr [223], Ra [226], Ac–Lr (actinides, below), Rf [263], Db [262], Sg [266], Bh [264], Hs [269], Mt [268], Ds [272], Rg [272], Uub [277], Uut [284], Uuq [289], Uup [288], Uuh [292], Uus [291]‡, Uuo [293]‡

Lanthanides: La 138.91, Ce 140.12, Pr 140.91, Nd 144.24, Pm [145], Sm 150.36, Eu 151.96, Gd 157.25, Tb 158.93, Dy 162.50, Ho 164.93, Er 167.26, Tm 168.93, Yb 173.04, Lu 174.97

Actinides: Ac [227], Th 232.04, Pa 231.04, U 238.03, Np [237], Pu [244], Am [243], Cm [247], Bk [247], Cf [251], Es [252], Fm [257], Md [258], No [259], Lr [262]
## See also
• Tutorial on the concept and measurement of atomic mass
• Atomic Weights and the International Committee — A Historical Review
## References
1. ^ IUPAC Definition of Atomic Mass
2. ^ IUPAC Definition of Relative Atomic Mass
3. ^ IUPAC Definition of Standard Atomic Weight
4. ^ M. E. Wieser, "Atomic Weights of the Elements 2005 (IUPAC Technical Report)", Pure Appl. Chem. 78, 2051 (2006)
5. ^ Andrew Williams, "Origin of the Formulas of Dihydrogen and Other Simple Molecules", Journal of Chemical Education 84 (11), 1779 (2007)
6. ^ P. De Bièvre and H. S. Peiser, "'Atomic Weight': The Name, Its History, Definition, and Units", Pure Appl. Chem. 64, 1535 (1992)
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Atomic_mass". A list of authors is available in Wikipedia.
http://www.chemeurope.com/en/encyclopedia/Atomic_mass.html
http://mathoverflow.net/revisions/55889/list
Neither player has a winning strategy; see below.

However, when the game is restricted so that Q can only play sets with the Baire property, then A has a winning strategy. Note that sets with the Baire property form a $\sigma$-algebra which includes all analytic sets, and assuming the axiom of projective determinacy, all projective sets.

In the restricted game, we can write $A_i=U_i\vartriangle\bigcup_jM_{p(i,j)}$, where $U_i$ is open, $M_k$ are nowhere dense, and $p\colon\omega\times\omega\to\omega$ is a bijective pairing function such that $p(i,j)\ge i,j$. The winning strategy for A is then to maintain a chain of nonempty bounded open intervals $I_n$ so that

• $I_0\supseteq\overline{I_1}\supseteq I_1\supseteq\overline{I_2}\supseteq I_2\supseteq\cdots$,
• $I_i\subseteq U_i$,
• $I_i\cap M_i=\varnothing$.

This can be arranged as follows. Let $Q_i=V\vartriangle M$ be given, where $V$ is open and $M$ is meager. If $I_{i-1}\cap V\ne\varnothing$, A chooses $A_i=Q_i$, otherwise $A_i=Q_i^c$ (note that in the latter case, we will have $I_{i-1}\subseteq U_i$). This determines $U_i$ and $M_{p(i,j)}$, and it remains to find $I_i$. Now, $I_{i-1}\cap U_i$ is a nonempty open set. Moreover, $M_i$ is nowhere dense, hence the open set $I_{i-1}\cap U_i\smallsetminus\overline{M_i}$ is still nonempty, and therefore it contains an interval $I_i$. By shortening it if necessary, we can make sure that $\overline{I_i}\subseteq I_{i-1}$.

When the game finishes using this strategy, $\bigcap_iI_i=\bigcap_i\overline{I_i}$ is nonempty by compactness. Any of its elements is included in every $U_i$ and avoids all $M_{p(i,j)}$, hence it lies in every $A_i$. On the other hand, each $A_i$ contains an interval minus a meager set, hence it cannot itself be meager, and in particular, it has more than one element.

EDIT: The argument above does not require the full axiom of choice; it goes through in ZF + DC. Shelah has shown the relative consistency of ZF + DC + "every set of reals has the Baire property", hence it is consistent with ZF + DC that A has a nondeterministic winning strategy in the unrestricted game.

A also has a winning strategy if Q is restricted to Lebesgue measurable sets. The strategy is to maintain a chain $P_0\supseteq P_1\supseteq P_2\supseteq\cdots$ of perfect sets with positive measure such that $P_i\subseteq A_i$. Given $Q_i$, we choose $A_i$ so that $P_{i-1}\cap A_i$ has positive measure. Using inner regularity, there exists a perfect set $P_i$ of positive measure included in $P_{i-1}\cap A_i$. This strategy guarantees that each $\bigcap_{i<n}A_i$ is uncountable, and $\bigcap_{i\in\omega}A_i\supseteq\bigcap_{i\in\omega}P_i$ is nonempty by compactness.

In ZFC, neither player has a winning strategy in the unrestricted game. For Q, this is shown in Joel David Hamkins' answer.

Theorem. A does not have a winning strategy.

For simplicity, I will consider the game played with the product space $2^\omega$ instead of $\mathbb R$. For $t\in2^{<\omega}$, let $B_t=\{f\in2^\omega\mid f\supseteq t\}$ be a basic clopen set, let $D_n=\{f\in2^\omega\mid f(n)=1\}$, and let $C$ be the Boolean algebra consisting of sets of the form $X\vartriangle Y$, where $X$ is clopen and $Y$ is finite. Given a sequence of sets $Q=\langle Q_0,\dots,Q_n\rangle$ as moves of the first player and a strategy $\sigma$ of A, I will write $\sigma(Q)=A_n\in\{Q_n,Q_n^c\}$ for A's move provided by the strategy, and $Q^\sigma=\bigcap_{i\le n}A_i$. The latter notation can also be used for infinite sequences $Q$. If $Q,R$ are sequences, then $Q\smallfrown R$ is their concatenation, $|Q|$ is the length of $Q$, and $Q\subseteq R$ means that $Q$ is an initial segment of $R$.

Lemma. If A has a winning strategy in the game played on a subset $G\subseteq2^\omega$, then $G$ contains a perfect subset.

Proof: let us consider finite sequences $Q$ consisting of elements of $C$. First, assume that there exists such a $Q$ so that for every finite $R,S\supseteq Q$, $R^\sigma\cap S^\sigma\ne\varnothing$. Then it is easy to see that there exists an ultrafilter $F\subseteq C$ such that $\sigma(R)\in F$ for every $R\supseteq Q$. Let $R$ be the infinite sequence $\langle D_n\mid n\in\omega\rangle$. Then $|(Q\smallfrown R)^\sigma|\le1$. On the other hand, since $\sigma$ is a winning strategy, $(Q\smallfrown R)^\sigma$ is nonempty (and intersects $G$), hence it equals $\{\alpha\}$ for some $\alpha\in2^\omega$. Then either $(Q\smallfrown\{\alpha\})^\sigma=\{\alpha\}$ or $(Q\smallfrown\{\alpha\}\smallfrown R)^\sigma=\varnothing$, contradicting $\sigma$'s being a winning strategy.

Thus, for each $Q$ there exist $R,S\supseteq Q$ such that $R^\sigma\cap S^\sigma=\varnothing$. By induction, we can construct $\{Q_t\mid t\in2^{<\omega}\}$ such that

• $t\subseteq s\Rightarrow Q_t\subseteq Q_s$,
• $Q_{t\smallfrown0}^\sigma\cap Q_{t\smallfrown1}^\sigma=\varnothing$.

Moreover, $Q_t^\sigma\in C$, hence we can write it as $X_t\vartriangle Y_t$ with $X_t$ clopen and $Y_t$ finite. By extending $Q_t$ (at the point where it is being constructed) with $D_i$, $i\le|t|$, we can make sure that

• $Q_t\subseteq B_s$ for some $s\in2^{<\omega}$, $|s|\ge|t|$.

By extending $Q_t$ with $Y_t$, we can make sure that $Q_t^\sigma$ actually equals $X_t\smallsetminus Y_t$. Then $X_{t\smallfrown0}\cap X_{t\smallfrown1}$ is a finite clopen set, hence it is empty, thus:

• $\overline{Q_{t\smallfrown0}^\sigma}\cap\overline{Q_{t\smallfrown1}^\sigma}=\varnothing$.

For every $f\in2^\omega$, let $Q$ be the infinite sequence $\bigcup_{t\subseteq f}Q_t$. Since $\sigma$ is a winning strategy, $Q^\sigma\cap G$ is nonempty, hence it contains an element $\phi(f)$. The properties of $Q_t$ ensure that $\phi\colon2^\omega\to2^\omega$ is injective and continuous, hence its range is a perfect subset of $G$, finishing the proof of the Lemma.

In order to prove the theorem, observe that there are $\mathfrak c=2^\omega$ perfect subsets of $2^\omega$, and each of them has cardinality $2^\omega$, so we can enumerate them as $\{P_\alpha\mid\alpha<\mathfrak c\}$. (This is the place which breaks in ZF + DC: we need $2^\omega$ to be well ordered.) We construct disjoint sequences $\{a_\alpha\mid\alpha<\mathfrak c\},\{b_\alpha\mid\alpha<\mathfrak c\}\subseteq2^\omega$ by induction as follows: each $P_\alpha$ has cardinality $\mathfrak c$, whereas $S=\{a_\beta,b_\beta\mid\beta<\alpha\}$ has smaller cardinality, hence we can choose distinct $a_\alpha,b_\alpha\in P_\alpha\smallsetminus S$. Let $X=\{a_\alpha\mid\alpha<\mathfrak c\}$. By the construction, neither $X$ nor $X^c$ contains a perfect subset. Assume that $\sigma$ is a winning strategy for A in the game played on $2^\omega$. Take $Q_0=X$, and let $A_0=\sigma(Q_0)$. Then $\tau(R):=\sigma(Q_0\smallfrown R)$ defines a winning strategy for A in the game played on $A_0$, hence by the Lemma, $A_0$ has a perfect subset, a contradiction.
http://mathhelpforum.com/advanced-algebra/2790-linear-problems.html
# Thread:
1. ## Linear problems
1.
Solve for (x,y,z)
2x + 5y + 3z = 16
x/5 + y/2 + z/3 = 2
3x - 2y - 4z = -2
State clearly whether this system is consistent or inconsistent. State whether the solution is unique, or if there are an infinite number of solutions, or no solution at all. If the solution is unique, state the solution.
2.
The equations of three planes are:
3x + 2y + 7z = 25 (1)
2x + y + 4z = 14 (2)
5x + 3y + 11z = 39 (3)
Solve the above system of equations. State whether this system is consistent or inconsistent and how many solutions there are.
2. Originally Posted by LilDragonfly
1.
Solve for (x,y,z)
2x + 5y + 3z = 16
x/5 + y/2 + z/3 = 2
3x - 2y - 4z = -2
State clearly whether this system is consistent or inconsistent. State whether the solution is unique, or if there are an infinite number of solutions, or no solution at all. If the solution is unique, state the solution.
I do not know how you solve this, but I look at the determinant, which is approximately 0.633. Since it is non-zero, the solution is unique.
Using my matrix calculator, the solution is
$(x,y,z)=(10,-8,12)$
3. Thank you ThePerfectHacker for that, but is there anyone else that has a greater understanding on this topic to provide a fuller explanation on how to answer these two questions?
4. Originally Posted by LilDragonfly
Thank you ThePerfectHacker for that, but is there anyone else that has a greater understanding on this topic to provide a fuller explanation on how to answer these two questions?
Should have not I explained it with determinants?
Maybe you wanted me to find the inverse of the augmented matrix?
5. I don't think I really explained the question properly, so I have found a few statements on the solutions involved: unique, inconsistent, etc. To be quite honest, I have no idea what it all means, but I hope it is some help for you. I would love to understand how you answer questions one and two, so if you can explain it in layman's terms then I will be a very happy and rather informed girl
- When there is a solution or many solutions to a system of simultaneous equations they are said to be consistent equations. When there is no solution the equations are said to be inconsistent.
When solving simultaneous equations with two unknowns:
- one solutions means the lines intersect
- a true statement (eg 0 = 0) means there is an infinite number of solutions, which means the lines are coincident.
- a false statement means there are no solutions, this means the lines are parallel.
When solving simultaneous equations with three unknowns:
- a false statement from any pair of the three equations means there are no solutions, there are various possibilities for the relative positions of the planes.
- no false statements and no unique solution means there are an infinite number of solutions – here 3 planes intersect on a line or 2 planes are coincident with one intersecting them or 3 coincident planes.
If k = n (that is, the number of equations k equals the number of unknowns n) and the matrix A is non-singular, then the system has a unique solution in the n variables. In particular, there is a unique solution if A has a matrix inverse A^-1. In this case, x = A^-1 b. (http://mathworld.wolfram.com/LinearS...Equations.html)
Thank you!
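The rank-based classification described in this thread (inconsistent, infinitely many solutions, or a unique solution) can be checked mechanically. The following is an editorial sketch, not from the thread: exact Gaussian elimination over rationals, applied to both questions. Note that in question 2, equation (3) is the sum of equations (1) and (2), so that system is consistent with infinitely many solutions.

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows of Fractions) to reduced
    row-echelon form; return (reduced matrix, rank)."""
    M = [row[:] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        M[rank] = [x / M[rank][col] for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return M, rank

def classify(A, b):
    """Classify Ax = b by comparing rank(A) with rank([A|b])."""
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    n = len(A[0])
    _, rank_A = rref(A)
    R, rank_aug = rref([row + [bi] for row, bi in zip(A, b)])
    if rank_aug > rank_A:
        return "inconsistent", None
    if rank_A < n:
        return "infinite", None
    # rank(A) = n: first n rows of the reduced augmented matrix are I | x
    return "unique", [R[i][n] for i in range(n)]

# Question 1: 2x+5y+3z=16, x/5+y/2+z/3=2, 3x-2y-4z=-2
A1 = [[2, 5, 3], [Fraction(1, 5), Fraction(1, 2), Fraction(1, 3)], [3, -2, -4]]
print(classify(A1, [16, 2, -2]))   # unique solution (10, -8, 12)

# Question 2: planes 3x+2y+7z=25, 2x+y+4z=14, 5x+3y+11z=39
A2 = [[3, 2, 7], [2, 1, 4], [5, 3, 11]]
print(classify(A2, [25, 14, 39]))  # consistent, infinitely many solutions
```

This confirms both the unique solution (10, -8, 12) reported above and the dependency in the second system.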
http://math.stackexchange.com/questions/193820/inequality-fracab-sqrtab2c2-fracbc-sqrtbc2a2-fracca-sqrt?answertab=votes
# Inequality.$\frac{ab}{\sqrt{ab+2c^2}}+\frac{bc}{\sqrt{bc+2a^2}}+\frac{ca}{\sqrt{ca+2b^2}} \geq \sqrt{ab+bc+ca}$
Let $a,b,c > 0$. Prove that (using Hölder's inequality):
$$\frac{ab}{\sqrt{ab+2c^2}}+\frac{bc}{\sqrt{bc+2a^2}}+\frac{ca}{\sqrt{ca+2b^2}} \geq \sqrt{ab+bc+ca}.$$
Thanks :)
I tried to apply Hölder's inequality how I apply in this exercise but I didn't obtained anything.
-
## 1 Answer
Using Hölder's inequality, we get $$\left(\sum_{cyc} \frac {ab}{\sqrt{ab + 2c^2}}\right) \left(\sum_{cyc} \frac {ab}{\sqrt{ab + 2c^2}}\right) \left(\sum_{cyc} ab (ab + 2c^2)\right) \geq (ab + bc + ca)^3$$ Since $$\sum_{cyc} ab(ab + 2c^2) = \sum_{cyc} (ab)^2 + 2abc(a+b+c) = (ab + bc + ca)^2,$$ the above inequality can be rewritten as $$\left(\sum_{cyc} \frac {ab}{\sqrt{ab + 2c^2}}\right)^2 (ab + bc +ca)^2 \geq (ab + bc + ca)^3$$ And so $$\sum_{cyc} \frac {ab}{\sqrt{ab + 2c^2}} \geq \sqrt{ab + bc + ca}$$
-
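As a sanity check (not a proof), the inequality can be tested numerically on random positive triples; a minimal sketch in Python:

```python
import math
import random

def lhs(a, b, c):
    # cyclic sum ab/sqrt(ab + 2c^2)
    return (a * b / math.sqrt(a * b + 2 * c * c)
            + b * c / math.sqrt(b * c + 2 * a * a)
            + c * a / math.sqrt(c * a + 2 * b * b))

def rhs(a, b, c):
    return math.sqrt(a * b + b * c + c * a)

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(1e-3, 1e3) for _ in range(3))
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-9 * rhs(a, b, c)
```

Equality holds at $a=b=c$, where both sides equal $a\sqrt{3}$, which matches the equality case of Hölder used above.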
http://topologicalmusings.wordpress.com/2008/02/04/a-few-useful-identities-related-to-indefinite-integrals-part-4/
Todd and Vishal’s blog
Topological Musings
# A few useful identities related to (in)definite integrals (part 4)
February 4, 2008 in Problem Corner | Tags: definite integrals, identities, indefinite integrals, integration bee
Okay, I thought earlier that part 3 of the above series of posts would be my last one. For some reason, this series has turned out to be a somewhat popular one when considering the fact that a big chunk of blog visitors visit it. Probably, this is due to the fact that the 2008 MIT Integration Bee is going to be held sometime soon – I have no idea exactly when – or perhaps, there are other Integration Bees that are going to be held in other colleges/universities sometime soon. If I get some more feedback/interest, then I will consider posting more results/identities/tricks on this same topic.
As you might have noticed from the title of the post, this identity isn’t related to definite integrals; it is related to indefinite integrals. Of course, since definite integrals form a “subset” of indefinite integrals, we can apply this identity to either one of them.
Let me begin by posing two problems, which I ask you to solve in your head. If you are able to do so, then you probably know the trick that is stated below, and hence you may stop reading this post; else, continue reading.
Problem 1: Evaluate $\displaystyle \int e^x (\frac{x-2}{x^3}) \, dx$.
Problem 2: Evaluate $\displaystyle \int e^x (\ln x + \frac1{x}) \, dx$.
And, here’s our identity.
$(4) \displaystyle \int e^x (f(x) + f'(x)) \, dx = e^x f(x) + C$,
where $C$ is the constant of integration.
Proof: We use integration by parts: $\displaystyle \int u\, dv = uv - \int v\, du$. Split the integral as $\displaystyle \int e^x (f(x) + f'(x)) \, dx = \int e^x f(x)\, dx + \int e^x f'(x)\, dx$, and integrate the second term by parts with $u = e^x$ and $dv = f'(x)\,dx$: $\displaystyle \int e^x f'(x)\, dx = e^x f(x) - \int e^x f(x)\, dx$. The two remaining integrals cancel, which leads us to our identity. (Alternatively, simply differentiate: $\frac{d}{dx}\left(e^x f(x)\right) = e^x (f(x) + f'(x))$.)
Now, you should be able to solve the above problems in your head in just a couple of seconds if not less.
Solution 1: $\displaystyle e^x/x^2 + C$.
Solution 2: $e^x \ln x + C$.
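The identity, and Solution 1 with it, is easy to check numerically: a finite-difference derivative of the claimed antiderivative $e^x f(x)$ should match the integrand. A small sketch in Python, using $f(x) = 1/x^2$ as in Problem 1:

```python
import math

# Problem 1 uses f(x) = 1/x^2, so that f(x) + f'(x) = (x - 2)/x^3.
def f(x):
    return 1.0 / x**2

def integrand(x):              # e^x (x - 2)/x^3
    return math.exp(x) * (x - 2) / x**3

def antiderivative(x):         # claimed result: e^x f(x) = e^x / x^2
    return math.exp(x) * f(x)

h = 1e-6
for x in (0.5, 1.0, 2.0, 3.7):
    deriv = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-4   # central difference matches
```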
If you have found this particular series of posts useful, drop me a comment. Doing so will provide me the motivation to post more stuff on this topic in the near future.
http://physics.stackexchange.com/questions/tagged/symmetry-breaking
# Tagged Questions
The symmetry-breaking tag has no wiki summary.
2answers
83 views
### The status of $SU(3)_C$ symmetry in the Standard Model
In the Standard Model of Particle physics the $SU(2)_{EW}$ symmetry and the $SU(2)$ isospin symmetry are broken. What about $SU(3)_C$? Is it broken too? if YES, what breaks the symmetry? If NO, what ...
1answer
131 views
### Early time in the Big Bang
I am not a physicist, so I would really appreciate using a simple language for the explanation of my question. From what I understood at the early Big Bang the four fundamental forces were unified to ...
1answer
78 views
### Symmetry breaking with Lagrangian
I have been studying spontaneous symmetry breaking from Zee (Quantum Field Theory) and found that on page 224 he wrote the Lagrangian as $\mathcal{L}= \frac{1}{2}\{ \lambda (\partial\varphi)^2 + \mu^2\varphi^2 \} - \ldots$
1answer
73 views
### How to find the Higgs coupling with a mixing matrix?
It is known that the couplings to the Higgs are proportional to the mass for fermions; $$g_{hff}=\frac{M_f}{v}$$ where $v$ is the VEV of the Higgs field. I'm trying to figure out why this is true ...
3answers
220 views
### Will Cone standing on its tip, without any other force other than gravity topple?
A cone standing on its tip is considered to be in unstable equilibrium as a slightest force could topple it. So, if the cone is stood on its tip with no other force other than gravity (and the ...
1answer
84 views
### Multiple vacua vs. vev's in qft
Take a (possibly supersymmetric) relativistic quantum field theory: when we construct it, we suppose that there is a unique vacuum state $|0\rangle$ which is Lorentz invariant, vector of some Hilbert ...
0answers
52 views
### Does the Standard Model plasma develop a spontaneous magnetisation at finite temperature?
Reference: arXiv:1204.3604v1 [hep-ph] Long-range magnetic fields in the ground state of the Standard Model plasma. Alexey Boyarsky, Oleg Ruchayskiy, Mikhail Shaposhnikov. The authors of this paper ...
1answer
94 views
### Spontaneous symmetry breaking: How can the vacuum be infinitly degenerate?
In classical field theories, it is with no difficulty to imagine a system to have a continuum of ground states, but how can this be in the quantum case? Suppose a continuous symmetry with charge $Q$ ...
0answers
52 views
### Spontaneous symmetry breaking in the quantum 1D XX model?
The ground states of the quantum 1D Ising and Heisenberg models exhibit spontaneous magnetization. Is this also true for the 1D XX model?
1answer
424 views
### Emergent symmetries
As we know, spontaneous symmetry breaking(SSB) is a very important concept in physics. Loosely speaking, zero temprature SSB says that the Hamiltonian of a quantum system has some symmetry, but the ...
1answer
210 views
### Spontaneous breaking of Lorentz invariance in gauge theories
I was browsing through the hep-th arXiv and came across this article: Spontaneous Lorentz Violation in Gauge Theories. A. P. Balachandran, S. Vaidya. arXiv:1302.3406 [hep-th]. (Submitted on 14 ...
0answers
58 views
### Dimensional transmutation in Gross-Neveu vs others
Firstly I don't know how generic is dimensional transmutation and if it has any general model independent definition. Is dimensional transmutation in Gross-Neveau somehow fundamentally different ...
0answers
29 views
### Residual symmetries of the superposition of two fcc lattices
Fcc lattices are Bravais lattices and so are invariant under a set of discrete translations plus inversions over the 3 axis ($x\rightarrow -x$,$y\rightarrow -y$,$z\rightarrow -z$). When one superposes ...
1answer
90 views
### Chiral perturbation theory: what is the Quark Condensate? why expand in $U$ rather than Goldstone fields?
I'm studying Chiral Perturbation Theory (ChPT) from Scherer's manuscript "Introduction to Chiral Perturbation Theory", available at http://arxiv.org/abs/hep-ph/0210398 What I am currently having ...
1answer
102 views
### What is the meaning of non-compactness in the context of $U(1)$ in gauge theories?
In John Preskill's review of monopoles he states Nowadays, we have another way of understanding why electric charge is quantized. Charge is quantized if the electromagnetic $U(1)_{\rm em}$ gauge group ...
3answers
192 views
### Ising Ferromagnet: Spontaneous symmetry breaking or not?
In explaining/introducing second-order phase transition using Ising system as an example, it is shown via mean-field theory that there are two magnetized phases below the critical temperature. This ...
1answer
148 views
### Dispersion of ferromagnetic ($E\propto k^2$) and antiferromagnetic ($E\propto k$) spin wave
The dispersion of ferromagnetic spin wave at low energy is $E\propto k^2$, while $E\propto k$ for antiferromagnetic case. Is there a simple/physical argument (such as symmetry) for these results? ...
0answers
92 views
1answer
244 views
### Spontaneous symmetry breaking in SU(5) GUT?
At the end of this video lecture about grand unified theories, Prof. Susskind explains that there should be some kind of an additional Higgs mechanism at work, to break the symmetry between the ...
2answers
209 views
### Dynamical supersymmetry breaking and Witten index
Witten index, defined as ${\rm Tr}(-1)^F$, makes know if supersymmetry is spontaneously broken or not for a given model. But it is known that supersymmetry can be also broken dynamically and one can ...
2answers
297 views
### What is the role of the vacuum expectation value in symmetry breaking and the generation of mass?
Consider a theory of one complex scalar field with the following Lagrangian. $$\mathcal{L}=\partial _\mu \phi ^*\partial ^\mu \phi +\mu ^2\phi ^*\phi -\frac{\lambda}{2}(\phi ^*\phi )^2.$$ The ...
0answers
37 views
### Taylor-Slavnov Identity in spontaneously broken gauge theories
Where can I find a list of important Taylor-Slavnov identities in Spontaneously broken gauge theories? I am looking for not just the generating functional form, but rather a list of explicit ones ...
0answers
112 views
### Higgs stability in Standard Model
I am a little unclear on what ramifications a negative quartic at high energies has on our world at low energies. (1) First of all, is it that there is a second, isolated minimum that appears at ...
1answer
316 views
### Time crystals : fake or revolution?
This article about "crystals of time" just appeared on the PRL website. Viewpoint: Crystals of Time (http://physics.aps.org/articles/v5/116) The authors (including famous Frank Wilczek) claim that ...
2answers
856 views
### Norton's dome and its equation
Norton's dome is the curve $$h(r) = -\frac{2}{3g} r ^{3/2}.$$ Where $h$ is the height and $r$ is radial arc distance along the dome. The top of the dome is at $h = 0$. Via Norton's web. If we put ...
0answers
104 views
### Breaking of conformal symmetry
I am wondering something about the breaking of conformal symmetry: I know that it can be broken at the quantum level, anomalously, but I never encountered or heard about a model where it is broken "à ...
1answer
536 views
### Self energy, 1PI, and tadpoles
I'm having a hard time reconciling the following discrepancy: Recall that in passing to the effective action via a Legendre transformation, we interpret the effective action $\Gamma[\phi_c]$ to be ...
3answers
222 views
### Higgs Boson: The Big Picture
First, please pardon the ignorance behind this question. I know a fair amount of math but almost no physics. I'm hoping someone can give me a brief "big picture" explanation of how physicists were ...
1answer
148 views
### What if microstates increase proportional to universe volume?
I am probably a delusional crank with a lot of crazy, overly speculative conjectures. If I am not delusional, than at the very least I've been ahead of the curve, the last 40 or so years. I was a ...
1answer
152 views
### Does spontanous symmetry breaking affect Noethers theorem?
Does spontanous symmetry breaking affect the existence of a conserved charge? And how does depend on whether we look at a classical or a quantum field theory (e.g. the weak interacting theory)? ...
3answers
249 views
### How come a photon acts like it has mass in a superconducting field?
I've heard the Higgs mechanism explained as analogous to the reason that a photon acts like it has mass in a superconducting field. However, that's not too helpful if I don't understand the latter. ...
2answers
127 views
### Do particles gain mass only at energy levels found during the big bang?
I am trying to make sure my understanding is correct. At energies and temperatures found during the big bang (or at CERN recently), the Higgs mechanism comes into effect. When it does, there is a ...
2answers
233 views
### Effects of a non-Lorentz-invariant vacuum state
I'm here asking about real or though experiments (i.e., physical effects) where, at least in principle, one can see some consequence of a non-Lorentz-invariant vacuum state in an otherwise Poincare ...
4answers
2k views
### How does Higgs Boson get the rest mass?
Higgs Boson detected at LHC is massive. It has high relativistic mass means it has non-zero rest mass. Higgs Boson gives other things rest mass. But, how does it get rest mass by itself?
2answers
1k views
### Why do we need Higgs field to re-explain mass, but not charge?
We already had definition of mass based on gravitational interactions since before Higgs. It's similar to charge which is defined based on electromagnetic interactions of particles. Why did Higgs ...
1answer
151 views
### Higgs Boson - only little over GCSE physics? [closed]
According to http://www.guardian.co.uk/science/blog/2012/jul/04/higgs-boson-discovered-live-coverage-cern, the reporter says that higgs boson things are little over GCSE physics. So, English learn a ...
1answer
170 views
### What is Supersymmetry (SuSy)?
In particle physics, supersymmetry (often abbreviated SUSY) is a symmetry that relates elementary particles...etc. what is symmetry breaking? What is supersymmetry (SUSY)? What is spontaneous ...
1answer
104 views
### What kinds of inconsistencies would one get if one starts with Lorentz noninvariant Lagrangian of QFT?
What kinds of inconsistencies would one get if one starts with Lorentz noninvariant Lagrangian of QFT? The question is motivated by this preprint arXiv:1203.0609 by Murayama and Watanabe. Also, what ...
1answer
209 views
### Spontaneous symmetry breaking and 't Hooft and Polyakov monopoles
What is spontaneous symmetry breaking from a classical point of view. Could you give some examples, using classical systems.I am studying about the 't Hooft and Polyakov magnetic monopoles solutions, ...
1answer
1k views
### Spontaneous Time Reversal Symmetry Breaking?
It is known that you can break P spontaneously--- look at any chiral molecule for an example. Spontaneous T breaking is harder for me to visualize. Is there a well known condensed matter system which ...
1answer
155 views
### What is the code distance in quantum information theory?
What is the code distance in quantum information theory? Code distance seems to be a very important concept in fault tolerant quantum computation and topological quantum computation.
10answers
1k views
### What is spontaneous symmetry breaking in QUANTUM systems?
Most descriptions of spontaneous symmetry breaking, even for spontaneous symmetry breaking in quantum systems, actually only give a classical picture. According to the classical picture, spontaneous ...
1answer
72 views
### Can symmetry be restored in high energy scattering?
Suppose you have a field theory with a real scalar field $\phi$ and a potential term of the form $\lambda \phi^4 - \mu \phi^2$ that breaks the symmetry $\phi \to - \phi$ in the ground state. Is this ...
0answers
67 views
### What is the mean field value of a scalar field with spontaneously broken symmetry in a scattering event?
Consider you have a quantum field theory that undergoes spontaneous symmetry breaking at some critical temperature. It doesn't necessarily have to be a continuous symmetry that's broken, I don't think ...
0answers
142 views
### Breaking of Lorentz invariance
Thinking about the concept of symmetry breaking led me to the following question: Let's say that I have a theory described by a Lorentz invariant Lagrangian, and the true vacuum of the theory is not ...
1answer
166 views
### Goldstone's theorem and massless modes for $\phi^4$ theory
Consider a scalar field doublet $(\phi_1, \phi_2)$ with a Mexican hat potential $$V~=~\lambda (\phi_1^2+\phi_2^2-a^2)^2.$$ When $a=0$ this is a quartic potential and the symmetry is not ...
0answers
103 views
### Polyakov action as broken symmetry effective action
I would like to ask if it is possible to regard the Polyakov action as an effective action that describes the broken symmetric phase of a more general model. Could someone draw an analogy with O(N) ...
1answer
220 views
### Where is the “true” higgs if the LHC 125 GeV signal is rather a higher dimensional radion than a SM higgs?
In this article, Lumo introduces and explains the idea (presented by the original authors in this paper) that the LHC signal at about 125 GeV could alternatively be interpreted as a higher ...
1answer
93 views
### Why are we forced to choose a specific value for $\pi$ field in Nambu-Goldstone phenomenon?
In the sigma-model of spontaneous symmetry breaking, we have degenerate vacuum states. But if we don't pick up a particular value of VEV, we won't have any symmetry breaking. As I read from a book, in ...
1answer
174 views
### Does measurement, quantum in particular, always increase the total entropy?
Measurement of a quantum observable (in an appropriate, old-fashioned sense) necessarily involves coupling to a system with a macroscopically large number of degrees of freedom. Entanglement with this ...
http://mathoverflow.net/revisions/100871/list
## Return to Answer
3 fixed a couple of typos
Isn't it the case that, if $C$ and $D$ are equivalent categories and if, in both of these categories, each object is isomorphic to a proper class of other objects, then $C$ and $D$ are isomorphic (assuming global choice)? So, for example, the category of non-trivial commutative rings and the dual of the category of nonempty affine schemes are isomorphic. (I had to exclude the empty scheme, and therefore the trivial ring, because there's only one empty scheme but lots of trivial rings, which would mess up any attempt at an isomorphism.) More generally, if $F:C\to D$ is an equivalence of categories and if, for each object $a$ in $C$, the number of isomorphic copies of $a$ in $C$ equals the number of isomorphic copies of $F(a)$ in $D$, then there should (again with a generous use of choice) be an isomorphism from $C$ to $D$ (that is, furthermore, naturally isomorphic to the given $F$).
EDIT: Martin asked in a comment for a proof; I'll put a proof (or at least a sketch, which I hope will suffice) into the answer because it won't fit into a comment. Suppose $F:C\to D$ is an equivalence and, for each object $a$ of $C$, the isomorphism classes of $a$ and $F(a)$ are the same size. In $C$, choose one representative object from each isomorphism class of objects; write $a^*$ for the representative of the isomorphism class of $a$. Also choose, for each object $a$, an isomorphism $i_a:a\to a^*$, subject to the convention that `$i_{a^*}$` is the identity morphism of `$a^*$`. Do the same in $D$, but, instead of arbitrarily choosing the representative objects, use the objects `$F(a^*)$`; there's exactly one of these in each isomorphism class, because $F$ is an equivalence. But the isomorphisms `$i_b$`, from objects $b$ of $D$ to the representatives, are still chosen arbitrarily except that, as before, for the representatives themselves we use identity morphisms. Now define a new functor $F':C\to D$ as follows. On the representative objects `$a^*$`, it agrees with $F$. On other objects, it acts in such a way that the isomorphism class of any `$a^*$` is mapped bijectively to the isomorphism class of `$F(a^*)$`; this is possible because I assumed that these isomorphism classes have the same size. Finally, if $f:a\to b$ is a morphism in $C$, then $F'$ should send it to the following mess: $$i_{F'(b)}^{-1}\,F(i_b f\, i_a^{-1})\,i_{F'(a)}.$$ In perhaps more understandable language: use `$i_a$` and `$i_b$` to transport $f$ to a morphism from `$a^*$` to `$b^*$`, apply $F$ to that, and then transport the result to a map `$F'(a)\to F'(b)$` via the chosen isomorphisms in $D$. It should be routine to check that this $F'$ is an isomorphism.
http://math.stackexchange.com/questions/11673/ex-1-x-1-x-2-for-independent-gamma-random-variables/12018
|
# $E(X_1 | X_1 + X_2)$ for independent gamma random variables
The following result must be well known, but it is instructive to give it here in view of the recent posts Y= X+N what is E(X|Y=y) and http://mathoverflow.net/questions/47168/ex-1-x-1-x-2-where-x-i-are-integrable-independent-infinitely-divisib/47204#47204. This is also an interesting exercise in its own right.
Suppose that $X_1$ and $X_2$ are independent ${\rm Gamma}(c_i,\lambda)$ rv's, meaning that $X_i$, $i=1,2$, has density function $f_{X_i } (x) = \lambda ^{c_i } {\rm e}^{ - \lambda x} x^{c_i - 1} /\Gamma (c_i )$, $x > 0$ ($c_i$ are positive constants, $\Gamma$ is the gamma function). Show that $${\rm E}(X_1 | X_1 + X_2 = z) = \frac{{c_1 }}{{c_1 + c_2 }} z.$$
-
Of course, you are not allowed to apply the general principle indicated in the aforementioned posts (concerning infinitely divisible distributions). – Shai Covo Nov 24 '10 at 10:17
## 2 Answers
First of all, we note that $X_1 + X_2 \sim {\rm Gamma}(c_1 + c_2 , \lambda)$. Given that $X_1 + X_2 = z$, $X_1$ cannot exceed $z$. We can thus find ${\rm E}(X_1|X_1+X_2=z)$ as follows. $${\rm E}(X_1|X_1 + X_2 = z) = \int_{0}^z {xf_{X_1|X_1 + X_2} (x|z)\,{\rm d}x} = \int_{0}^z {x\frac{{f_{X_1} (x)f_{X_1 + X_2|X_1} (z|x)}}{{f_{X_1 + X_2} (z)}}\, {\rm d}x}.$$ Noting that $f_{X_1 + X_2|X_1} (z|x) = f_{X_2} (z-x)$, it follows after some algebra and a change of variable that the right-hand side integral is equal to $$z\frac{{\Gamma (c_1 + c_2 )}}{{\Gamma (c_1 )\Gamma (c_2 )}}\int_0^1 {x^{c_1 } (1 - x)^{c_2 - 1} \,{\rm d}x} = z\frac{{\Gamma (c_1 + c_2 )}}{{\Gamma (c_1 )\Gamma (c_2 )}}{\rm B}(c_1 + 1,c_2 ),$$ where ${\rm B}(a,b) = \int_0^1 {t^{a - 1} (1 - t)^{b - 1} \,{\rm d}t}$ is the beta function. Finally, from the alternative form ${\rm B}(a,b) = \frac{{\Gamma (a)\Gamma (b)}}{{\Gamma (a + b)}}$ followed by $\Gamma (p+1) = p \Gamma (p)$, we obtain $${\rm E}(X_1|X_1 + X_2 = z) = \frac{{c_1 }}{{c_1 + c_2 }} z.$$
-
another way to get this result is to first recall that $S := X_1+X_2$ and $R := X_2/X_1$ are independent. [this is well-known and can be proved by the usual method of derived distributions for two random variables. the mapping $(X_1,X_2) \to (R,S)$ is 1-1 and easily inverted. this is a standard example or exercise in many intro probability and math stat books - such as here.]
then, noting that $\frac{X_1}{X_1+X_2} = \frac{1}{1+R}$,
$${\mathrm E}\{ X_1 | S\} = {\mathrm E}\{ \frac{S}{1+R} | S\} = S\kern1pt {\mathrm E}\{ \frac{1}{1+R} | S\} = S\kern1pt{\mathrm E}\frac{1}{1+R}. \kern10pt (1)$$
the last equality in (1) follows from independence. also, [on repeating the argument - or just taking unconditional expectations in (1)],
$$c_1 = {\mathrm E}X_1 = {\mathrm E}\frac{S}{1+R} = {\mathrm E}\frac{1}{1+R}\kern2pt {\mathrm E}S = {\mathrm E}\frac{1}{1+R}(c_1+c_2),$$
so
$$\kern84pt {\mathrm E}\frac{1}{1+R} = \frac{c_1}{c_1+c_2}, \kern84pt (2)$$
which can be plugged into (1) to get the result.
[btw, (2) says that
$${\mathrm E}\frac{X_1}{X_1+X_2} = \frac{{\mathrm E}X_1}{{\mathrm E}(X_1+X_2)},$$
a relation which, in general, is a fond wish of students in an intro prob or stats course - and perhaps the instructors too.]
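The result is also easy to check by simulation: since $X_1/(X_1+X_2)$ is independent of the sum, its mean must be $c_1/(c_1+c_2)$. A quick Monte Carlo sketch using Python's standard library (parameter values chosen arbitrarily, function name invented):

```python
import random

def mean_ratio(c1, c2, lam=1.0, n=100_000, seed=42):
    """Monte Carlo estimate of E[X1/(X1+X2)] for independent
    Gamma(c1, lam) and Gamma(c2, lam) random variables."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x1 = rng.gammavariate(c1, 1.0 / lam)   # (shape, scale) parameterization
        x2 = rng.gammavariate(c2, 1.0 / lam)
        total += x1 / (x1 + x2)
    return total / n

# With c1 = 2, c2 = 3 the ratio should average c1/(c1 + c2) = 0.4.
est = mean_ratio(2.0, 3.0)
assert abs(est - 0.4) < 0.01
```

This also illustrates relation (2) above: the ratio $X_1/(X_1+X_2)$ is in fact Beta$(c_1, c_2)$ distributed, with mean $c_1/(c_1+c_2)$.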
-
http://physics.stackexchange.com/questions/tagged/ideal-gas+energy
# Tagged Questions
3answers
184 views
### Understanding mathematically the free expansion process of an ideal gas
I'm trying to understand mathematically that for the free expansion of an ideal gas the internal energy $E$ just depends on temperature $T$ and not volume $V$. In the free expansion process the ...
3answers
2k views
### How to deduce E=(3/2)kT?
It says in my course notes that a particle has so-called "kinetic energy" $E=\frac{3}{2}kT=\frac{1}{2}mv^2$. Where does this formula come from? What is $k$?
2answers
783 views
### Calculating work done on an ideal gas
I am trying to calculate the work done on an ideal gas in a piston set up where temperature is kept constant. I am given the volume, pressure and temperature. I know from Boyle's law that volume is ...
1answer
666 views
### Work on ideal gas by piston
Imagine a thermally insulated cylinder containing a ideal gas closed at one end by a piston. If the piston is moved rapidly, so the gas expands from $V_i$ to $V_f$. The expanding gas will do work ...
3answers
220 views
### Simple question about a gas in a box with a moving wall
David Albert is a philosopher of Science at Columbia. His book "Time and Chance" includes this example (p 36). A gas is confined on one side of a box with a removable wall. "Draw the wall out, ...
http://www.cfd-online.com/W/index.php?title=Structured_mesh_generation&diff=13679&oldid=13669
# Structured mesh generation
### From CFD-Wiki
## Revision as of 09:15, 3 January 2012
The simplest algorithms directly compute nodal placement from some given function. These algorithms are referred to as algebraic algorithms. Many of the algorithms for the generation of structured meshes are descendants of "numerical grid generation" algorithms, in which a differential equation is solved to determine the nodal placement of the grid. In many cases, the system solved is an elliptic system, so these methods are often referred to as elliptic methods. The best basic reference on this topic is the book of Thompson, Warsi, and Mastin[1]. There are more recent texts available, but this is the classic book on the subject, and it is available online here.
## Algebraic Grid Generation
The simplest way to obtain a grid would be to specify the grid coordinates $\vec{x}$ as the result of some vector function, or
$\vec{x}=\vec{x}\left(\vec{\xi}\right),$
where $\vec{\xi}$ is the "index" vector, sometimes referred to as a computational coordinate. For our purposes here the entries of the computational coordinate will range from zero to a maximum. If such a function can be found for a given geometry, then the actual generation of grid points is straightforward. The problem, however, is that the determination of the function is not necessarily that easy. In practice, it is sometimes easier to add an intermediate parametric space, denoted by $\vec{s}$, in between the physical space representation of the grid and the computational space representation of the grid:
$\vec{x}=\vec{x}\left(\vec{s}\left(\vec{\xi}\right)\right).$
The entries of the parametric coordinate are taken from the unit interval. This representation can help simplify matters, especially in the one-dimensional case.
Many mesh generation systems (both structured and unstructured) require the generation of boundary grids before interior cells can be generated. This is an area in which algebraic grid generation is ideal - typically, we want to specify boundary edge point distributions quickly, with a minimum of complexity, and a high degree of repeatability. Consider a line segment joining two points $\vec{x}_1$ and $\vec{x}_2$. The segment can be expressed as the linear form
$\vec{x}(s)=s(\vec{x}_2 - \vec{x}_1) + \vec{x}_1,\ \ s\in [0,1].$
Similar expressions are possible for other curves connecting the two points. Of particular interest is the cubic Bézier curve, which allows the specification of direction and location at both endpoints and can be written as
$\vec{x}(s)=\vec{P}_0 (1-s)^3 + 3\vec{P}_1 s (1-s)^2 + 3 \vec{P}_2 s^2 (1-s) + \vec{P}_3 s^3,$
where the $P_i$'s are the control points and again $s\in [0,1]$.
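As a concrete illustration, both forms above can be evaluated directly. The sketch below (Python with NumPy; the function name `bezier3` is our choice, not from the text) vectorizes the cubic Bézier polynomial over an array of parameter values:

```python
import numpy as np

def bezier3(P0, P1, P2, P3, s):
    """Cubic Bezier curve:
    x(s) = P0 (1-s)^3 + 3 P1 s (1-s)^2 + 3 P2 s^2 (1-s) + P3 s^3."""
    s = np.asarray(s, dtype=float)[..., None]  # broadcast s over point coordinates
    return ((1 - s) ** 3 * np.asarray(P0)
            + 3 * s * (1 - s) ** 2 * np.asarray(P1)
            + 3 * s ** 2 * (1 - s) * np.asarray(P2)
            + s ** 3 * np.asarray(P3))
```

By construction the curve starts at $\vec{P}_0$ ($s=0$) and ends at $\vec{P}_3$ ($s=1$), with the interior control points setting the end tangent directions.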
By changing the functional expression for $s$, we can change the grid distribution along the line segment. These functions are often referred to as stretching functions, and there are many choices available. The simplest choice is a uniform distribution, in which we set
$s(\xi)= \frac{\xi}{I},$
where $\xi\in[0,I]$. For cases in which grid clustering is desired, the hyperbolic trigonometric functions such as the hyperbolic tangent are a popular choice. A simple one-parameter hyperbolic tangent stretching function is defined by
$s\left(\xi\right) = 1 + \frac{\tanh\left[\delta\left(\xi/I-1\right)\right]}{\tanh\left(\delta\right)},$
where $\delta$ is the stretching factor and $\xi\in[0,I]$. This function partitions the unit interval and allows the specification of a single location. This sort of distribution is good for wall-normal grid distribution in viscous flows. This distribution is due to Vinokur [2]. Vinokur's procedure for the determination of the proper stretching factor to obtain desired spacings uses the derivatives of the stretching functions. Suppose we wish for our first grid spacing to be $\Delta s$. This can be taken to mean that $s(1)=\Delta s$ or that $ds/d\xi (1) = \Delta s$. Vinokur's procedure guarantees the latter.
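The one-parameter distribution above is a few lines of NumPy (a sketch; the function name and the check values below are ours). Clustering toward $s=0$ strengthens as $\delta$ grows:

```python
import numpy as np

def tanh_stretch(I, delta):
    """One-sided hyperbolic-tangent stretching on xi = 0, 1, ..., I:
    s(xi) = 1 + tanh(delta*(xi/I - 1)) / tanh(delta),
    so s(0) = 0 and s(I) = 1, with points clustered near s = 0."""
    xi = np.arange(I + 1, dtype=float)
    return 1.0 + np.tanh(delta * (xi / I - 1.0)) / np.tanh(delta)
```

Mapping these $s$ values through the segment form $\vec{x}(s)=s(\vec{x}_2-\vec{x}_1)+\vec{x}_1$ gives a wall-clustered edge distribution.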
A related double-sided stretching function (that gives symmetric spacings about $\xi=I/2$) is given by
$u\left(\xi\right) = \frac{1}{2}\left[1 + \frac{\tanh\left[\delta\left(\xi/I-1/2\right)\right]}{\tanh\left(\delta/2\right)}\right].$
This function is good for duct flows, such as turbulent channel flow. In situations in which different grid spacings are desired, a stretching function can be constructed that has specified spacings at both ends: $\Delta s_1$ and $\Delta s_2$. Vinokur gives such a function, first defining
$A=\frac{\sqrt{\Delta s_2}}{\sqrt{\Delta s_1}},\ \ B=\frac{1}{I\sqrt{\Delta s_2\Delta s_1}}.$
The stretching factor $\delta$ is found from the solution of the transcendental equation
$\frac{\sinh\left(\delta\right)}{\delta} = B.$
The final grid distribution is then given by
$s\left(\xi\right)=\frac{u\left(\xi\right)}{A + (1-A)u\left(\xi\right)}.$
Again, Vinokur's procedure ensures the derivative conditions $ds/d\xi (1) = \Delta s_1$ and $ds/d\xi (I) = \Delta s_2$, rather than the grid spacings obtained by direct evaluation of the stretching function.
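The two-sided construction can be sketched as follows (Python/NumPy; the bisection solver, its tolerances, and the assumption $B>1$ — i.e. both requested end spacings finer than the uniform spacing $1/I$ — are our choices, not part of Vinokur's paper):

```python
import numpy as np

def vinokur_two_sided(I, ds1, ds2):
    """Two-sided Vinokur stretching on xi = 0, 1, ..., I with requested
    end slopes ds/dxi ~ ds1 at the xi = 0 end and ~ ds2 at the xi = I end.
    Assumes B = 1/(I*sqrt(ds1*ds2)) > 1."""
    A = np.sqrt(ds2 / ds1)
    B = 1.0 / (I * np.sqrt(ds1 * ds2))
    # Solve sinh(delta)/delta = B by bisection; sinh(x)/x increases
    # monotonically from 1 as x grows, so a bracket always exists for B > 1.
    lo, hi = 1e-8, 1.0
    while np.sinh(hi) / hi < B:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if np.sinh(mid) / mid < B:
            lo = mid
        else:
            hi = mid
    delta = 0.5 * (lo + hi)
    xi = np.arange(I + 1, dtype=float)
    # Symmetric double-sided function u, then the final distribution s.
    u = 0.5 * (1.0 + np.tanh(delta * (xi / I - 0.5)) / np.tanh(0.5 * delta))
    return u / (A + (1.0 - A) * u)
```

With $\Delta s_1 < \Delta s_2$ the spacing near $\xi=0$ comes out finer than near $\xi=I$, in the ratio set by the end-slope conditions.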
For the generation of interior cells, algebraic techniques are also available, most usually in the form of interpolation between boundary faces.
## Elliptic Grid Generation
The oldest numerical grid generation techniques are based upon the solution of elliptic PDEs. Typically, a Poisson-type equation is solved given the boundary grid distribution to generate interior nodal points. The solution domain is often topologically equivalent to a cube in 3D and a square in 2D. Consider the solution domain shown below with the indicated boundary resolution.
The simplest technique we could use here would be a solution of the Laplace equation using the standard second-order finite difference stencil. This approach simplifies into a form that is easily solved using Jacobi or Gauss-Seidel iterative techniques:
$\nabla^2\vec{x}=0$
with Dirichlet boundary conditions is discretized as
$\vec{x}_{i,j} = \frac{\vec{x}_{i+1,j}+\vec{x}_{i-1,j}+\vec{x}_{i,j+1}+\vec{x}_{i,j-1}}{4}$
An initial grid is shown below in (a), and the resulting grid (after iterations) is shown in (b).
Note that the grid spacing near the curved section increases and then decreases as we move left to right, and that grid lines near the left and right boundaries are not very orthogonal. These issues are reasons that production grid generation techniques are usually more complicated. The addition of control functions allows for better grid clustering properties, which will be necessary for viscous flow simulations.
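The stencil above can be exercised in a few lines of NumPy (a sketch; the array layout, function name, and iteration count are our choices). Because each right-hand side is fully evaluated before assignment, every sweep uses only old neighbor values, i.e. a Jacobi update:

```python
import numpy as np

def laplace_smooth(x, y, n_iter=500):
    """Relax interior nodes of a structured 2-D grid with the standard
    5-point Laplace stencil (Jacobi iteration). x, y are 2-D coordinate
    arrays; boundary entries act as Dirichlet data and are never changed."""
    x, y = x.copy(), y.copy()
    for _ in range(n_iter):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

For a unit square with a uniform boundary distribution, the interior converges to the uniform grid, since each coordinate is harmonic with linear boundary data.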
## References
1. ↑ Thompson, Joe F., Warsi, Z.U.A., and Mastin, C. Wayne, Numerical Grid Generation, Elsevier Science Publishers, 1985
2. ↑ Vinokur, Marcel, On One-Dimensional Stretching Functions for Finite-Difference Calculations, Journal of Computational Physics. 50, 215, 1983.
http://physics.stackexchange.com/questions/tagged/black-holes?page=4&sort=votes&pagesize=50
# Tagged Questions
A black hole is a volume from which photons, or any matter, can not escape. More formally, the coordinate speed of light at the event horizon - the boundary of a black hole - is zero, as measured by a sufficiently separated observer.
1answer
157 views
### Are “typical” black holes rotating, or stationary?
From my (somewhat limited) understanding of GR I know that there are two different kinds of solutions that produce a black hole, some that rotate and some that do not. What I can't figure out from my ...
1answer
168 views
### Will acceleration rate of expansion of space become faster than speed of light?
From watching cosmology lectures, it seems that the space between galaxies is expanding at an accelerating rate, my question is since it is the space that is (acceleratingly expanding), the special ...
2answers
332 views
### Deriving Birkhoff's Theorem
I am trying to derive Birkhoff's theorem in GR as an exercise: a spherically symmetric gravitational field is static in the vacuum area. I managed to prove that $g_{00}$ is independent of t in the ...
1answer
287 views
### Can Black Holes be the Dark Matter?
Seems to fit the definition: interacts with gravity, doesn’t radiate energy (except Hawking Radiation) and could create gravity lensing without absorbing very much of the light. Could 80% of the ...
1answer
494 views
### On black holes, Hawking radiation and gravitational atoms
Over the past hour or so I've been following one of my standard physics-based, wanders-through-the-internet. Specifically, I began by reviewing some details of dark energy theory but soon found myself ...
1answer
179 views
### Can a blackhole eat a blackhole?
I'm not a physicist and I do not understand maths. But I watch documentaries about "how it all began", "the big bang", "What is time", etc etc just really fascinating. I was wondering if a blackhole ...
3answers
626 views
### Black holes and positive/negative-energy particles
I was reading Brian Greene's "Hidden Reality" and came to the part about Hawking Radiation. Quantum jitters that occur near the event horizon of a black hole, which create both positive-energy ...
1answer
108 views
### Hawking radiation: direct matter -> energy conversion?
When a black hole evaporates, does it turn all the matter that has fallen in directly to energy, or will it somehow throw back out the same kind of matter (normal or anti) that went in?
1answer
213 views
### Charged particle close to a charged black hole - what happens?
Let's assume the Reissner–Nordström metric (charged black hole, non-rotating), for simplicity. The black hole is charged with a powerful electric charge. There's a particle nearby, of non-zero mass, ...
2answers
189 views
### Do all of our discoveries of black holes in nature depend on the validity of GR?
In the question Is there a black hole in the centre of the Milky Way? the answer by Motl seems to all but say the existence of that black hole is a fact (see also Evidence for black hole event ...
1answer
469 views
### Time dilation when falling into black hole
I know that if one astronaut falls into a black hole, then a distant observer will see him take an infinite amount of time to reach the event horizon (provided the observer can see light of ...
2answers
1k views
### How long does it take a black hole to eat a star?
I presume the answer is that it depends on the mass and size of the star and black hole, but I was wondering if somebody could provide some rough bounds (e.g. hours vs thousands of years). By ...
1answer
141 views
### Does the curvature of space-time cause objects to look smaller than they really are?
What's the difference between looking at a star from a black hole and looking at it from empty space? My guess is that the curvature of space-time distorts the wavelength of light thus changing the ...
2answers
205 views
### How does the evaporation of a black hole look for a distant observer?
Let's assume an observer looking at a distant black hole that is created by collapsing star. In observer frame of reference time near black hole horizon asymptotically slows down and he never see ...
1answer
87 views
### Where and how is the entropy of a black hole stored?
Where and how is the entropy of a black hole stored? Is it around the horizon? Most of the entanglement entropy across the event horizon lies within Planck distances of it and are short lived. Is ...
1answer
179 views
### What is a virtual photon pair?
When describing a black hole evaporation in the hawking black body radiation it is usually said that is due to a virtual photon pair, is it this what happens? And what is virtual photon pair, does the ...
2answers
167 views
### What is the mechanism for fast scrambling of information by black holes?
Sekino and Susskind have argued that black holes scramble information faster than any quantum field theory in this paper. What is the mechanism for such scrambling?
1answer
250 views
### Can two particles remain entangled even if one is past the event horizon of a black hole?
Can two particles remain entangled even if one is past the event horizon of a black hole? If both particles are in the black hole? What changes occur when the particle(s) crosses(cross) the event ...
1answer
300 views
### How does an object falling into a plain Schwarschild black hole appear from near the black hole?
I know that when viewed from infinity (or from a very large distance from the black hole event horizon), an object that falls into the black hole will appear to slow down and will become more and more ...
1answer
370 views
### Why isn't black hole information loss this easy (am I missing something basic)?
Ok, so on Science channel was a special about Hawking/Susskind debating black holes, which can somehow remove information from the universe. A) In stars, fusion converts 4 hydrogen into 1 helium, ...
1answer
554 views
### Supermassive black holes with the density of the Universe
This question was inspired by the answer to the question "If the universe were compressed into a super massive black hole, how big would it be" Assume that we have a matter with a uniform density ...
2answers
81 views
### What reason(s) exist to suppose that all degeneracy pressures can be overcome in Black-Hole formation?
In models of stellar collapse to a black hole, it is a given that density increases without bound towards a singularity. Electron degeneracy I get. Neutron degeneracy I get. I assume there's some ...
2answers
134 views
### Is Brian Cox right to claim that Gravity is a strong force for large masses, is it wrong, or is it only a matter of interpretation?
I watched a program of his in which it was claimed that since mass bends space in accordance to General Relativity, then in the case of very large stars it becomes a strong force to the point of being ...
2answers
134 views
### What is the motivation for assuming “Page” scrambling for Hawking radiation?
What is the motivation for assuming "Page" scrambling for Hawking radiation? Obviously, at the semiclassical level, we want the outgoing Hawking radiation to look thermal and mixed. However, surely ...
1answer
89 views
### NGCC 1277— a recoil ejection?
Recent calculations agree that a merging pair of supermassive black holes can emit enough gravitational waves to eject themselves from a galaxy. Could NGCC 1277, a small galaxy with a 17 billion ...
2answers
164 views
### Does the Chandrasekhar Limit scale for a Black Hole?
No physicist/astrophysicist I; All I know about the Chandrasekhar limit is that it apparently limits the mass a star may survive, beyond which it degenerates to a neutron star, or a black-hole. Does ...
1answer
127 views
### Is it possible for a black hole to form for an observer at spatial infinity?
To my knowledge if you calculate the coordinate time (time experienced by an observer at spatial infinity) it takes an infinite amount of time for an object to fall past the horizon of a Schwarzschild ...
1answer
193 views
### Falling into a black hole
I've heard it mentioned many times that "nothing special" happens for an infalling observer who crosses the event horizon of a black hole, but I've never been completely satisfied with that statement. ...
8answers
885 views
### Black hole - white hole (collision)
This is it: Nonspinnig, equally massive black hole and white hole experience a direct collision. What shall happen? What shall be the result of such a collision?
0answers
51 views
### Gravitational redshift of Hawking radiation
How can Hawking radiation with a finite (greather than zero) temperature come from the event horizon of a black hole? A redshifted thermal radiation still has Planck spectrum but with the lower ...
0answers
89 views
### Alternate geodesic completions of a Schwarzschild black hole
The Kruskal-Szekeres solution extends the exterior Schwarzschild solution maximally, so that every geodesic not contacting a curvature singularity can be extended arbitrarily far in either direction. ...
0answers
81 views
### Can a nearly-extremal black hole be stable against Schwinger vacuum breakdown?
I was doing some basic algebra to estimate the range of possible masses $M$ and electric charge $Q$ for a nearly extremal Reissner-Nordström black hole. I want to see if the logic is correct the ...
0answers
84 views
### Information scrambling and Hawking non-thermal radiation states
Could a very small black hole where half of its entropy has been radiated, emit Hawking radiation that is macroscopically distinct from being thermal? i.e: not a black body radiator. Or would the ...
0answers
45 views
### Kerr solution for finite collapse time
The Kerr black hole solutions gives an analytic continuation that is asymptotically flat. Some people have argued that this is another universe, but others state that the analytic continuation ...
1answer
139 views
### Do black holes play a role in quantum decoherence?
Sorry for such a vague question but I could have sworn I read somewhere that Hawking proposed the reason we might see a classically appearing universe is due to the possible role of black holes in ...
0answers
170 views
### What is a black hole?
Is there a definition of a black hole in a generic spacetime? In some books, for example Wald's, black holes are defined for asymptotically flat spacetime with strong asymptotic predictability, ...
1answer
106 views
### Rotation of Spacetime => Change in orbit/path
Along the idea of frame-dragging; Will the rotation of a black hole, which has some velocity v and angular momentum, influence its path in 3D space? I've seen the fact that depending on the ...
4answers
386 views
### Can a black hole be explained by newtonian gravity?
In the simple explanation that a black hole appears when a big star collapses under missing internal pressure and huge gravity, I can't see any need to invoke relativity. Is this correct?
3answers
572 views
### Is it possible that QM is just GR?
The more I learn about General Relativity, the more it seems like it isn't fully understood. It seems that before it's full consequences were exhaustively understood, not 10 years after its discovery, ...
2answers
103 views
### Is there something like Hawking radiation that makes protons emit component quarks?
If Hawking radiation can escape from black holes, could quarks perhaps become separated from protons despite it being "impossible" for that to happen?
1answer
381 views
### What happens to a photon in a black hole?
Assume a photon enters the event horizon of a black hole. The gravity of the black hole will draw the photon into the singularity eventually. Doesn't the photon come to rest and therefore lose it's ...
1answer
162 views
### How small are the smallest black holes?
How little mass can a black hole contain and still be a "stable" black hole? What would the diameter be, in terms of the event horizon?
3answers
258 views
### Can we see a spaceship falling into a black hole and entering the event horizon?
Or it pauses in time because the spaceship reaches the speed of light, c?
2answers
129 views
### What would the effect be of a small black hole colliding with the earth?
If a small black hole (say about .1 mm radius or 1% of Earth's mass) came flying along at the speed of a comet or higher and impacted the earth, what would happen? Would it pass through the earth (and ...
1answer
84 views
### Are different frequencies of light lensed differently during gravitational lensing a bit like refraction?
So I was wondering about the event horizon on a black hole. And wondering if the point of no return for radio waves vs gamma rays would be different. I guess the logic being, since gamma rays have ...
2answers
1k views
### Strongest force in nature
Possible Duplicate: What does it mean to say “Gravity is the weakest of the forces”? It is said nuclear force is the strongest force in nature.. But it is not true near a black ...
4answers
471 views
### White Holes and Time-Reversed Oppenheimer-Snyder collapse
So, the canned explanation that I always hear about why the white hole solution of the extended Schwarzschild solution is non-physical is that "The matter distribution cuts off the white hole ...
1answer
153 views
### If a magnetic monopole falls into a schwarzchild black hole, what happens to the magnetic field?
By the no-hair theorem, black holes can only have mass, charge and angular momentum. Does "charge" include "magnetic charge" (such as from a magnetic monopole)? Can black holes have magnetic charge ...
2answers
538 views
### About Susskind's claim “information is indestructible”
I really can't understand what Leonard Susskind means when he says that information is indestructible. Is that information really recoverable? He himself said that entropy is hidden information. ...
2answers
94 views
### Is time going backwards beyond the event horizon of a black hole?
For an outside observer the time seems to stop at the event horizon. My intuition suggests, that if it stops there, then it must go backwards inside. Is this the case? This question is a followup ...
http://www.physicsforums.com/showthread.php?p=3826286
Physics Forums
Thread Closed
## How to solve the Liar Paradox
Quote by sigurdW I think i do...But im not infallible. So far my reaction is superficial like: H is not really a game since it takes an infinity to get it started... I mean its not well defined as a subgame... Its like looking at chessplayers rotating the board and never make a first move.
OK, but if it's not a finite game, then it's not a valid choice for Player 1 to call out. Since Player 1 cannot call out Hypergame, Hypergame always terminates in a finite amount of time, and thus Hypergame is a valid choice for player 1 to call out!
Quote by lugita15 OK, but if it's not a finite game, then it's not a valid choice for Player 1 to call out. Since Player 1 cannot call out Hypergame, Hypergame always terminates in a finite amount of time, and thus Hypergame is a valid choice for player 1 to call out!
If it's infinite then it's finite and vice versa... It is a paradox and I will probably solve it since it seems familiar...
The only paradox yet where I don't find my techniques applicable is the Sorites Paradox.
Tomorrow it is! Good night.
Edit: Now it is tomorrow... The plan is to finish off your subjects one at a time at a leisurely pace.
But I think we'd better spend some time checking my solution of the Liar Paradox;
it's good also for newcomers, who tend to read only the last few entries, thereby missing important information.
1 Sentence 1 is not true. (assumption)
2 Sentence 1 = "Sentence 1 is not true." (Empirical truth from 1 by inspection)
3 Sentence 1 is true. (The negation of 1, by substitution from 2 to 1 and simplifying)
Here the core of the Liar Paradox is exposed!
Informally: if sentence 1 is true then it is not true, and if so then again it's true, and so on.
And since everything is either true or not true then sentence 1 is both true and not true!
This state of affairs contradicts the Law of Contradiction and makes Classical Logic inconsistent! The logicians abandoned Classical Logic and formulated logics that excluded self-referential sentences from the domain of their logic, thereby excluding sentences like: I think this thought, therefore I am!
Formally there is yet no contradiction arrived at, so lets add it:
4 Sentence 1 is not true and sentence 1 is true. (contradiction from 1 and 3)
Here the road to the paradox consists in denying the assumption expressed in sentence 1,
and that results in an affirmation instead of a denial... let us leave the road to defeat and
check the remaining alternative: denying sentence 2!
On denying sentence 2! (Part One)
Let us listen to the opposition: But sigurdW, you yourself affirm that sentence 2 is true, so IF you deny it you are contradicting yourself!
sigurdW: I claim that sentence 2 is an empirically true contradiction! That's to say: sentence 2 is empirically true and logically not true, and that is not to contradict myself!
Proof:
10 Sentence 1 = "Sentence 1 is not true." (ASSUMPTION!)
11 Sentence 1 is true if and only if "Sentence 1 is not true." is true (from 10)
12 Sentence 1 is true if and only if Sentence 1 is not true. (from 11)
Sentence 12 is a contradiction and the assumption in sentence 10 must be denied!
13 It is not true that Sentence 1 = "Sentence 1 is not true." (Logical Truth)
14 Sentence 1 = "Sentence 1 is not true." (Empirical Truth = sentence 2)
The extraordinary fact that an empirical truth and a logical truth contradict each other must be explained...
Edit:
Today I make it simpler:
Definition:
y is a Liar Identity if and only if y is of the form: x = "x is not true",
and if y is true then x is a Liar Sentence defined by y.
THESIS: No liar identity is logically true.
Proof (Based on: (a=b) implies (Ta<-->Tb) )
1. Suppose x="x is not true" (assumption)
2. Then x is true if and only if "x is not true" is true (from 1)
3. And we get: x is true if and only if x is not true (from 2)
4. Sentence 3 contradicts the assumption. (QED)
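For what it's worth, the core of steps 1-4 — that no proposition can be equivalent to its own negation — is machine-checkable; here is a sketch in Lean 4 (our formalization, not sigurdW's):

```lean
-- No proposition is equivalent to its own negation:
-- from h : x ↔ ¬x we first derive ¬x, then the contradiction.
theorem no_liar (x : Prop) : ¬(x ↔ ¬x) := fun h =>
  have hnx : ¬x := fun hx => (h.mp hx) hx
  hnx (h.mpr hnx)
```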
The logical form of the foundation of the Paradox:
1. x is not true.
2. x = "x is not true".
Some values for x makes the liar Identity Empirically true:
1. Sentence 1 is not true. (Liar Sentence)
2. Sentence 1 = " Sentence 1 is not true." (Liar Identity)
To get to the paradox one must produce " 3. Sentence 1 is true." from sentences 1 and 2.
But since sentence 2 is BOTH Empirically true and Logically false, it cannot be a well-formed sentence!
Therefore no paradox can be derived from sentence 1, or any other liar sentence.
And here is how the thread originally started:
Quote by sigurdW
Alfred Tarski diagnosed the Liar Paradox as arising only in languages that are "semantically closed", by which he meant a language in which it is possible for one sentence to predicate truth (or falsehood) of another sentence in the same language (or even of itself). To avoid self-contradiction, Tarski says it is necessary when discussing truth values to envision levels of languages, each of which can predicate truth (or falsehood) only of languages at a lower level. So, when one sentence refers to the truth-value of another, it is semantically higher. The sentence referred to is part of the "object language", while the referring sentence is considered to be a part of a "meta-language" with respect to the object language. It is legitimate for sentences in "languages" higher on the semantic hierarchy to refer to sentences lower in the "language" hierarchy, but not the other way around. This prevents a system from becoming self-referential.
How to prove him wrong? English is a semantically closed language, so let's begin by stating the conditions for the Liar to arise:
1 Sentence 1 is not true.
2 Sentence 1 = "Sentence 1 is not true."
Being careful, I will not accept sentence 2 at its face value; perhaps it's not true? If it IS true then no harm is done if we declare it to be true... so this is what you should work with:
1 Sentence 1 is not true.
2 Sentence 1 = "Sentence 1 is not true."
3 Sentence 2 is true.
Now try to derive the Liar Paradox! (I predict you will fail to do so! Will you prove me wrong?)
Quote by sigurdW
How to prove him wrong? English is a semantically closed language, so let's begin by stating the conditions for the Liar to arise:
1 Sentence 1 is not true.
2 Sentence 1 = "Sentence 1 is not true."
Being careful, I will not accept sentence 2 at its face value; perhaps it's not true? If it IS true then no harm is done if we declare it to be true... so this is what you should work with:
1 Sentence 1 is not true.
2 Sentence 1 = "Sentence 1 is not true."
3 Sentence 2 is true.
Now try to derive the Liar Paradox! (I predict you will fail to do so! Will you prove me wrong?)
I find this thread hard to follow, but I am returning to your first post in this thread to say that what you are trying to solve in the last post of this thread is not the liar's paradox. In sentence 2 you are using the symbol "=" in the normal sense, but at the same time you are using it to mean "if and only if". I think the inconsistent use of the symbol "=" is confusing. Also I don't think what you start with in your final post in this thread:
x if and only if (x is not true)
is the liar's paradox, as self-reference is completely removed.
Quote by John Creighto: I find this thread hard to follow but I am returning to your first post in this thread to say that what you are trying to solve in the last post of this thread is not the liar's paradox. In sentence 2 you are using the symbol "=" in the normal sense but at the same time you are using it to mean "if and only if". I think the inconsistent use of the symbol "=" is confusing.
I know the subject is difficult, so I haven't been surprised that comments are few.
Suppose we have the identity "a=b" then from it we can get the equivalence "a is true if and only if b is true". The identity IMPLIES the equivalence but they are not identical. So you see I am not using the identity sign to mean anything but what it normally means!
Besides:Note that I am analysing what is supposed to be the beginning of a legitimate derivation of the liar paradox:
1 Sentence 1 is not true.
2 Sentence 1 = "Sentence 1 is not true"
3 Sentence 1 is true.
You must take care so you yourself don't solve the paradox by making the derivation of sentence 3 impossible.
(That's my job: showing sentence 3 to be not derivable from sentences 1 and 2.)
I thank you for your interest in this unbelievably (yes, I am NOT joking) difficult matter;
you are mistaken, but you are an adventurous person honestly trying to check my argument.
Don't let my objection to your first attempt stop you from digging deeper into the matter :)
Quote by John Creighto Also I don't think what you start with in your final post in this thread: x if and only if (x is not true) is the liars paradox as self reference is completely removed.
I'm looking for this sentence in my post but I don't find it: "x if and only if (x is not true)"
Perhaps you can quote the post and make the objectionable sentence (if it's there) bold or something? The sentence "x if and only if (x is not true)" is indeed not well-formed, and if I wrote it there's some correcting (and self-flagellation) that needs to be done.
Perhaps you read it while I was still editing the post? That would explain it. Cya ;)
Quote by John Creighto I find this thread hard to follow
I take your comment very seriously! I am addressing a problem that is over two thousand years old, and I should make every effort to present my solution in a clear manner aimed not at the expert but at the general reader. So I will now start again from the beginning: What is the Liar Paradox?
I guess the dictionary will say something like: The Liar Paradox arises when you try to find out whether a sentence that says of itself that it is not true (Liar Sentence) is true or not.
Since, in my view, there IS no Liar Paradox, I define the objects believed to cause the paradox.
My Definition:
y is a Liar Identity if and only if y is of the form: x = "x is not true",
and if y is true then x is a Liar Sentence defined by y.
The most common way to introduce the LP is to start with a "Liar" definition:
A Liar Definition: Let the words "The liar" be a name of the sentence "The liar is not true."
Then it is assumed that:
1 The liar is not true. (Liar Sentence)
And from the definition is gotten:
2 The liar = "The liar is not true." (Liar Identity)
And from the above we deduce:
3 The liar is true.
A formal proof takes a few lines more but its not necessary,
the reader should be able now to foresee the disastrous result that follows!
Sentence 1 shows itself to be true if it is false, and false if it is true .
Thereby showing English together with Classic Logic to be inconsistent.
So what is my solution? Whenever you look at a definition you should ask yourself if it is valid! In this case you should ask: Isn't the definiens in the definiendum?

The General Liar Definition: x = "x is not true."

The proposer of the paradox will then say: The paradox can be produced by other means... the definition is not necessary. And there are other circular definitions accepted by scientists... For example the definition of simultaneity in special relativity. You should not, as a student of Logic, accept that answer... your cool reaction should be: Oh well, let us assume the liar definition is valid; then we get:

1 x = "x is not true"

Since (a=b) implies that (Ta <-> Tb), from sentence 1 we get:

2 x is true if and only if "x is not true" is true

The right side can be simplified and we get a contradiction:

3 x is true if and only if x is not true

Therefore the liar definition is NOT valid after all... By what other means can the paradox be demonstrated, did you say? (But first let us rest, so the eventual reader might catch up, and raise objections!)
Quote by sigurdW: I know the subject is difficult so I haven't been surprised that comments are few. Suppose we have the identity "a=b"; then from it we can get the equivalence "a is true if and only if b is true". The identity IMPLIES the equivalence but they are not identical. So you see I am not using the identity sign to mean anything but what it normally means!
I know what you are doing. You are applying a truth function to each side of the equation. However, I don't think you get a=b. Instead you get A=A which isn't particularly useful.
I thank you for your interest in this unbelievably (yes I am NOT joking.) difficult matter, you are mistaken but you are an adventurous person honestly trying to check my argument.
I actually wanted to start my own thread on this, and there are things I want to say on this topic, but before doing so we need to make the preceding discussion much clearer.
Don't let my objection to your first attempt stop you from digging deeper into the matter :)
It will be easier if the target doesn't keep shifting. I see a lot of very similar posts and it is hard to know which to critique.
I'm looking for this sentence in my post but I don't find it: "x if and only if (x is not true)"
My apologies. This was from sigurdW's post (Post #38).
Quote by John Creighto I know what you are doing. You are applying a truth function to each side of the equation. However, I don't think you get a=b. Instead you get A=A which isn't particularly useful.
Instead of defending proof 1 right now...
You should give your contra argument in better detail; I'm not at all sure why you think I get A=A.
I'll just prove the same thing differently; you can't use the same argument, so what is your next contra?
Proof 2
From the Law of Identity we get:
1 x = x
By Double Negation we get:
2 It is not the case that x = "x is not true"
And here's another one for your third contra.
Proof 3
Suppose:
1 x = "x is not true"
Straight from the definition of truth we get:
2 "x is not true" is true if and only if x is not true
And now a contradiction is derivable:
3 x is true if and only if x is not true
Therefore:
4 It's not true that x = "x is not true"
And to make the fact finally obvious:
Proof 4
Suppose:
1 x = "x is not true"
Let x be "water is wet"; then we get:
2 "water is wet" = ""water is wet" is not true" = "water is not wet"
Now let x be "water is not wet"; then we get:
3 "water is not wet" = ""water is not wet" is not true" = "water is wet"
Neither a true sentence nor a false sentence makes sentence 1 true. Therefore sentence 1 must be a contradiction. QED
PS Make a truth table!
Please state your contra arguments so anyone (including me) can understand them.
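Following the "make a truth table" suggestion above, here is a minimal brute-force check (my illustration, not part of the original post): read the Liar Identity x = "x is not true" as constraining x's truth value to satisfy x ↔ ¬x, and enumerate both possibilities.

```python
# Brute-force "truth table": look for a truth value satisfying x <-> (not x),
# which is the constraint the Liar Identity places on x's truth value.
satisfying = [x for x in (True, False) if x == (not x)]
print(satisfying)  # -> [] : neither True nor False satisfies the Liar Identity
```

Both rows of the table fail, which is the content of Proof 4 above.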
Here's the logical form of The Liar:

1 x is not true
2 x = "x is not true"

You should by now understand what the function of sentence 2 is: it makes a statement identical to its own negation! Only a statement being both true and false can satisfy it. And if sentence 1 gets self-referential it CAN satisfy the Liar Identity. So Logic forbids it.

Proof: Suppose:
1 x = Zx
Then:
2 Zx = ZZx
And the logical conclusion (tautology) is:
3 (x = Zx) implies (Zx = ZZx)
Let:
4 Z = is not true
Then we get:
5 (x = x is not true) implies (x is not true = x is true)
and it's a Logical Demand that:
6 It's not true that x = x is not true

So any time we construct a sentence that says of itself that it is not true, we defy the laws of logic. And this EXPLAINS and SOLVES the Paradox of the Liar.
I want to return to posts #14 to #16 with regards to Prior's solution. I'm not sure that Prior's solution is the correct one. However, I think that by focusing on his solution we may better clarify the rules of logic which we are using. I believe this will help to clarify the posts which have been presented hitherto. I know there are different types of logic, and when approaching such difficult paradoxes, if we aren't explicit about what rules of logic we are using, then any complex derivations will be difficult to follow. Prior's solution defines a rule of logic which isn't universally agreed on. This rule is that any statement implicitly affirms its own truth.
I’m going to quote lugita15 (post #15) as a possible way to apply Prior's rule:
I'm saying that "This statement is false" is the same as saying "This statement is false and "This statement is false" is true."" Or to put in terms of P, P says "P is false", so it's implicitly saying "P is false and "P is false" is true", which is equivalent to saying "P is false and P is true", which is a contradiction.
As this seemed to produce some agreement. Now the criticism given by Wikipedia of Prior’s solution is as follows:
"But the claim that every statement is really a conjunction in which the first conjunct says "this statement is true" seems to run afoul of standard rules of propositional logic, especially the rule, sometimes called Conjunction Elimination, that from a conjunction any of the conjuncts can be derived. Thus, from "This statement is true and this statement is false", it follows that "this statement is false" and so we have, once again, a paradoxical (and non-conjunctive) statement. It seems then that Prior's attempt at resolution requires either a whole new propositional logic or else the postulation that the "and" in "This statement is true and this statement is false" is a special type of conjunctive for which Conjunction Elimination does not apply. But then we need, at least, an expansion of standard propositional logic to account for this new kind of "and"." [6]
...
6- Kirkham, Theories of Truth, chap. 9
http://en.wikipedia.org/wiki/Liar_paradox#Arthur_Prior
My response to the criticism, which is cited from Kirkham, is that the word "this" references the entire construct, which is:
"This statement is true and this statement is false"
And hence direct conjunction elimination is not possible. Now applying Prior's implicit assumption to sigurdW's post #38:
we see that step two is superfluous since it is implied in step 1.
However, sigurdW is using propositional logic, which deals more with reducing logical propositions than with the assertion of truth. In contrast, the laws of thought attempt to get more at the heart of what is true and false in the world.
We can certainly try to use propositional logic to prove a truth value of the liar paradox, but I suspect that, if possible, it will be challenging to do so in a self-consistent way which maintains the self-reference.
For purposes of propositional logic we could distinguish between an "independent And" and a "dependent And", which is analogous to how in statistics we distinguish between independent and dependent random variables. Note also that propositional logic distinguishes the atoms (things which we can assert as true or false) from the propositions.
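Prior's reading discussed above can itself be checked with a two-row truth table. A small sketch (mine, not from the thread): if the liar sentence abbreviates "P is false and P is true", the conjunction fails under either assignment, which is why Prior concludes the sentence is simply false rather than paradoxical.

```python
# Prior's reading: the liar sentence abbreviates "P is false and P is true".
# Check both truth values for P; the conjunction is never satisfied.
for P in (True, False):
    assert ((not P) and P) is False  # contradiction: false in every row
print("the conjunction is false for every P, so on Prior's reading the liar is false")
```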
Recognitions: Gold Member Science Advisor Staff Emeritus

You still really don't get it. First, you don't seem to have any concept of "consistency". Or, more precisely, you seem to be unable to grasp what it would mean for something to be inconsistent. Your entire point seems to be based entirely on an inability to comprehend that, in the presence of inconsistency, one can produce two valid arguments with contradictory conclusions, which has led you to the nonsensical rebuttal "there is no contradiction -- you can't do that because it would lead to a contradiction". To resolve a pseudo-paradox, one must demonstrate that one (or both) of the arguments are flawed. To resolve a true paradox, one must actually abandon the inconsistent theory and create a new one which is free from that contradiction.

Secondly, the problem the liar's paradox brings to light is that various rules for forming formulas conflict with logical semantics. If one is not to alter logic, one must instead alter the grammar by which formulas are constructed. One could take the approach of rejecting formulas based on whether or not they lead to contradictions, but there are two serious flaws with this approach:
- An argument involves many formulas. One needs a rule to decide which of the many formulas are disallowed.
- We can be faced with situations such as the possibility that "P and Q" might be a disallowed formula, even when "P" and "Q" are both allowed. Without rules to guarantee that one is allowed to combine formulas in various ways, it would be nearly impossible to reason at all.

Third, I think it would be interesting to point out that in the logic of computation, there is no problem with there being a sentence P satisfying "P = P is not true". Here's an implementation in Python:

Code:
```python
def P():
    return not P()
```

however, your computer probably cannot do this computation: an equivalent implementation that will not overflow your stack is:

Code:
```python
def P():
    while True:
        pass
```
sigurdW, I re-labeled your statements in the following:
Quote by sigurdW: Proof 3 Suppose: s1: x = "x is not true" Straight from the definition of truth we get: s2: "x is not true" is true if and only if x is not true And now a contradiction is derivable: s3: x is true if and only if x is not true Therefore: s4: It's not true that x = "x is not true"
Now let's try to connect them:
SS1: s1&s2->s3
SS2: Not S3 -> Not S1 or Not S2
We know that Not S2 is false because S2 is true (by the law of identity). S3 appears to violate the law of identity, so it should be false. Consequently Not S3 should be true. Thus Not S1 must be true, and hence S1 must be false.
However, the conclusion we arrive at should be obvious so why did we assume the opposite in the first place? Also what does it have to do with the liar's paradox?
Now, with regards to Hurkyl's post above: I am not sure who he is responding to, but I certainly admit I don't have a good grasp on this stuff. However, I'm not trying to resolve the liar's paradox; rather, I am only trying to see if any of the attempts to do so in this thread have made any sense. I think Hurkyl's following comment sheds the most light on this:
"Secondly, the problem the liar's paradox brings to light is that various rules for forming formulas conflict with logical semantics. If one is not to alter logic, one must instead alter the grammar by which formulas are constructed."
Perhaps this is what the previous posters were trying to do but in doing so then there is no paradox -- and hence there is nothing to prove. However, if we are left with nothing to prove then all attempts to do so are superfluous.
Quote by John Creighto I want to return to posts #14 to # 16 with regards to prior’s solution. I’m not sure that prior solution is the correct solution. However, I think that by focusing on his solution we may better clarify the rules of logic which we are using. I believe this will help to clarify the posts which have been presented hitherto. I know there are different types of logic and when approaching such difficult paradoxes if we aren’t explicit about what rules of logic we are using then any complex derivations will be difficult to follow. Prior’s solution defines a rule of logic which isn’t universally agreed on. This rule is that any statement implicitly affirms its own truth.
I accept that x implies "x is true", but I'm not sure of what that may commit me to :)
I've looked at the discussion between Hurkyl and lugita15, and it seems that Prior's solution may resemble mine... since we both seem to claim a sentence having two truth values is not well formed.
Quote by lugita15 My preferred resolution to the Liar Paradox is Prior's, summarized here. The idea is that the liar sentence, like all sentences, asserts its own truth. So a sentence that asserts both its truth and its falsity must be false.
But I don't get the conclusion that the Liar Sentence is false!! It becomes a sentence function, since its Liar Identity is not well formed!
Quote by John Creighto Now applying prior’s implicit assumption to sigurdW’s post #38: we see that step two is superfluous since it is implied in step 1.
You mean that the sentence (2. Then x is true if and only if "x is not true" is true) is implied by Prior's assumption from (1. Suppose x = "x is not true")?
Or do you mean that (2. Sentence 1 = "Sentence 1 is not true.") is implied by (1. Sentence 1 is not true.)?
My first impression is that I disagree in both cases.
Quote by sigurdW #38: y is a Liar Identity if and only if y is of the form: x = "x is not true", and if y is true then x is a Liar Sentence defined by y.

THESIS: No liar identity is logically true.

Proof (based on: (a=b) implies (Ta<-->Tb)):
1. Suppose x = "x is not true" (assumption)
2. Then x is true if and only if "x is not true" is true (from 1)
Here Prior's rule on 1 will give ""x = "x is not true"" is true", but 2 is an equivalence!
3. And we get: x is true if and only if x is not true (from 2)
4. Sentence 3 contradicts the assumption. (QED)

The logical form of the foundation of the Paradox:
1. x is not true.
2. x = "x is not true".
And here Prior's rule on 1 will give ""x is not true" is true", but 2 is an identity!

Some values for x make the Liar Identity empirically true:
1. Sentence 1 is not true. (Liar Sentence)
2. Sentence 1 = "Sentence 1 is not true." (Liar Identity)
To get to the paradox one must produce "3. Sentence 1 is true." from sentences 1 and 2.

But since sentence 2 is BOTH empirically true and logically false it can not be a well-formed sentence! Therefore no paradox can be derived from sentence 1, or any other liar sentence.
Let's look into the details of my version of the Correspondence Theory of Truth:
Liar Identities are a special case of Referential Identities.
Quote by sigurdW Definition: y is a Referential Identity if and only if y is of the form: x is the object the "x" in the sentence "Zx" refers to. Most referential identities are not sentences; say we have the sentence: The Sun is shining. Then the referential identity contains the words "The Sun" and the object that IS the Sun, and there's a virtual equality sign joining them together. This makes the definition of truth work: The sentence "the Sun is shining." is true if and only if the Sun is shining. This is easier to understand if we only consider the set of self-referential sentences... let's pick one for inspection: 1. Sentence 1 contains five words. Its referential identity is a sentence! 2. Sentence 1 = "Sentence 1 contains five words" And all we have to do is count the words in the quote at the right side of the identity.
So I think any similarities to Prior's theory are superficial, and you'll have to convince me that Prior's assumption is equivalent to my referential identities.
I like talking to you; I need to practise defence, so let's not be in any hurry. Let's face the facts together :)
Quote by John Creighto: sigurdW, I re-labeled your statements in the following: Originally Posted by sigurdW Proof 3 Suppose: s1: x = "x is not true" Straight from the definition of truth we get: s2: "x is not true" is true if and only if x is not true And now a contradiction is derivable: s3: x is true if and only if x is not true Therefore: s4: It's not true that x = "x is not true" Now let's try to connect them: SS1: s1&s2->s3 SS2: Not S3 -> Not S1 or Not S2 We know that Not S2 is false because S2 is true (by the law of identity). S3 appears to violate the law of identity so should be false. Consequently Not S3 should be true. Thus Not S1 must be true and hence S1 must be false. However, the conclusion we arrive at should be obvious, so why did we assume the opposite in the first place? Also, what does it have to do with the liar's paradox?
Let me check if I understand you:
Do you accept that it's not true that x = "x is not true"?
It's the cornerstone of my thinking:
y is a Liar Identity if and only if y is of the form: x = "x is not true",
and if y is true then,and only then, x is a Liar Sentence defined by y.
So if y is not true then x is not a Liar sentence claiming itself to be not true!
And how then can there be a paradox?
1 Sentence 1 is not true (assumed Liar Sentence)
2 Sentence 1 = "Sentence 1 is not true" (logically false and empirically true Liar Identity)
An extraordinary fact is now coming up to the surface!
How CAN a sentence be logically false and empirically true??
Aren't logical truths and falsehoods supposed never to collide with empirical reality? Logic was thought to be barren but it has brought forth a contradiction, Poincaré said... Is this even worse?... Or is there a satisfactory explanation?
Quote by John Creighto Now with regards to Hurkyl above post. I am not sure who he is responding to but I certainly admit I don't have a good grasp on this stuff but I'm not trying to resolve the liar's paradox. Rather I am only trying to see if any of the attempts to do so in this tread have made any sense. I think Hurkly's following comment sheds the most light on this: "Secondly, the problem the liar's paradox brings to light is that various rules for forming formulas conflict with logical semantics. If one is not to alter logic, one must instead alter the grammar by which formulas are constructed." Perhaps this is what the previous posters were trying to do but in doing so then there is no paradox -- and hence there is nothing to prove. However, if we are left with nothing to prove then all attempts to do so are superfluous.
Yes... I have a problem with Hurkyl too; he doesn't seem to back up his cl... Whatever they are.
I think your statement in blue shows unbiased thinking.
Quote by John Creighto "Secondly, the problem the liar's paradox brings to light is that various rules for forming formulas conflict with logical semantics. If one is not to alter logic, one must instead alter the grammar by which formulas are constructed." Perhaps this is what the previous posters were trying to do but in doing so then there is no paradox -- and hence there is nothing to prove. However, if we are left with nothing to prove then all attempts to do so are superfluous.
There is nothing to do in the various forms of logic used today. For example, first-order logic solved the issue by simply disallowing predicates to operate on predicates entirely. The grammar only allows one to evaluate predicates at variable symbols. P(Q), for example, is simply not in the language of well-formed formulas, if P and Q are both predicate symbols.
One can look for other ways to slip self-reference into the logic: this is essentially what a Gödel numbering is, and the liar's paradox becomes Tarski's theorem on the undefinability of truth. (Gödel's first incompleteness theorem is the same idea, but referring to provability rather than truth.)
This continues with higher-order logics. E.g. second-order logic introduces second-order predicates that are allowed to operate upon first-order predicates and variables, but not second-order predicates. Both steps of the usual formal version of the liar's paradox fail:
• We can't define a predicate $\Phi(P) := \neg P(P)$ because P(P) isn't a well-formed formula. (P is a first-order predicate, so we cannot evaluate P at P)
• Even if we could, we can't consider $\Phi(\Phi)$ anyways. ($\Phi$ is a second-order predicate, so we cannot evaluate $\Phi$ at $\Phi$)
In lambda calculus, all of the steps of the usual version of the Liar's paradox can be executed:
$$F := \lambda x. \mathrm{NOT}(x x)$$
$$S := F F$$
it's easy to see that S is a liar sentence:
$$S = FF = (\lambda x. \mathrm{NOT}(x x)) F = \mathrm{NOT}(F F) = \mathrm{NOT\ } S$$
It's also easy to see the right hand sides are both lambda expressions so one cannot weasel out of a paradox by claiming that either F or S is not well-formed. So we are stuck with a lambda expression S with the property that S is neither TRUE nor FALSE.
Fortunately, there are plenty of other things S can be, so there is no paradox.
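The reduction above can be mimicked in Python with Church booleans. This is a rough sketch of mine (not Hurkyl's code, and only an analogue, since Python evaluates eagerly): the liar term S = F F can be written down, but attempting to reduce it recurses forever, matching the claim that S is neither TRUE nor FALSE.

```python
# Church booleans and NOT, mirroring the lambda-calculus terms above.
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
NOT   = lambda p: p(FALSE)(TRUE)
assert NOT(TRUE) is FALSE and NOT(FALSE) is TRUE

F = lambda x: NOT(x(x))   # F := λx. NOT(x x)
try:
    S = F(F)              # S := F F, which would satisfy S = NOT S
except RecursionError:
    print("evaluating S never terminates: S reduces to neither TRUE nor FALSE")
```

Under eager evaluation, computing F(F) first demands x(x) = F(F), so the reduction never bottoms out — the same behaviour as the non-halting `def P()` example earlier in the thread.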
Note that an older form of lambda calculus suffered from the Kleene-Rosser paradox. Stanford's pages state that Curry considered the paradox as analogous to Russell's paradox and the Liar's paradox.
In the theory of computation, the recursion theorem lets us write down a liar Turing machine directly, by the program:
• Let P be my own source code.
• Simulate the execution of P.
• If P returns True, then return False.
• return True
But again, no paradox: this is simply a Turing machine that never halts.
In various modern forms of logic, the Liar's paradox simply isn't paradoxical. Or more precisely, no way is known to construct an inconsistency of logic using the idea of the Liar's paradox. Instead, the idea simply becomes a useful proof by contradiction technique, e.g. to prove in ZFC that the class of all sets is a proper class, or in the theory of computation to demonstrate the halting problem is not computable.
The Liar's paradox only remains a threat of inconsistency when one is trying to devise new logics, trying to understand the semantics of natural languages, or other similar sorts of situations.
Thread Closed
http://mathhelpforum.com/trigonometry/102402-graphing-trigonometric-functions.html
# Thread:
1. ## Graphing trigonometric functions
Okay, so my book spends a whole lot of paper and ink talking about these, but it never goes and explicitly breaks down what's going on.
What would help my understanding immensely is if someone took the following functions and broke down what each variable did on the graph.
$y = c + a\sin\left(\frac{2\pi}{b}(x - d)\right)$
$y = c + a\tan\left(\frac{\pi}{b}(x - d)\right)$
and the equivalents for the secant and cosecant functions. (cos and cot behave the same way as sin and tan, if I'm getting anything at all from the book.) I'm having a lot of trouble because my book and the online video lectures split the sections up differently and emphasize different parts.
2. Hello Wolvenmoon
Originally Posted by Wolvenmoon
Okay, so my book spends a whole lot of paper and ink talking about these, but it never goes and explicitly breaks down what's going on.
What would help my understanding immensely is if someone took the following functions and broke down what each variable did on the graph.
$y = c + a\sin\left(\frac{2\pi}{b}(x - d)\right)$
$y = c + a\tan\left(\frac{\pi}{b}(x - d)\right)$
and the equivalents for the secant and cosecant functions. (cos and cot behave the same way as sin and tan, if I'm getting anything at all from the book.) I'm having a lot of trouble because my book and the online video lectures split the sections up differently and emphasize different parts.
I'll show you how the first one works, and leave you to sort out the second for yourself - it's basically similar.
In the attachments, I've built up the function step by step.
The first shows $y=\sin(2\pi x)$. This is a single cycle of a sine wave for values of $x$ from $0$ to $1$. Note that $y$ has values between $\pm1$.
The second shows $y=\sin\Big(\frac{2\pi}{4}\, x\Big)$; in other words $y=\sin\Big(\frac{2\pi}{b}\, x\Big)$, with $b=4$. You'll see that the difference is that $x$ now needs to take values from $0$ to $4$ to give one complete cycle. So $b$ gives the period (wavelength) of the sine wave.
The third graph shows $y=\sin\Big(\frac{2\pi}{4} (x-0.5)\Big)$. So I've given $d$ the value $0.5$ in your formula $y=c+a\sin\Big(\frac{2\pi}{b} (x-d)\Big)$. If you look closely, you'll see that the graph has been shifted $0.5$ units to the right. So $d$ gives the phase-shift.
Graph #4 is $y=3\sin\Big(\frac{2\pi}{4} (x-0.5)\Big)$. So I've put $a=3$. You'll notice that $y$ now takes values between $\pm3$. So $a$ is the amplitude of the sine wave.
Finally, I've shown you the graph of $y=1+3\sin\Big(\frac{2\pi}{4} (x-0.5)\Big)$. So I've put $c=1$. You'll see that $y$ now takes values between $-2$ and $+4$, the graph having been shifted upwards by $1$ unit. $c$, then, is the vertical shift.
Grandad
Attached Thumbnails
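The step-by-step build-up above can also be verified numerically. A small stdlib-only sketch (my addition, using the same illustrative values $a=3$, $b=4$, $c=1$, $d=0.5$): sample the final function and confirm the amplitude, vertical shift and period.

```python
import math

# y = c + a*sin((2*pi/b)*(x - d)) with the values from the worked example.
a, b, c, d = 3, 4, 1, 0.5           # amplitude, period, vertical shift, phase shift
def y(x):
    return c + a * math.sin((2 * math.pi / b) * (x - d))

xs = [i / 100 for i in range(801)]  # sample x in [0, 8]: two full periods
ys = [y(x) for x in xs]
print(max(ys), min(ys))             # peaks at c + a = 4, troughs at c - a = -2
print(abs(y(2.3 + b) - y(2.3)) < 1e-12)  # shifting x by b repeats the value: True
```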
http://mathhelpforum.com/calculus/188050-equation-tangent-line-inverse-function.html
# Thread:
1. ## Equation for tangent line of inverse function.
Let f(x)=(x^4)+(x^3)+1
Let g(x) be the inverse of f(x) and define F(x)=f(2g(x)). Find an equation for the tangent line to y=F(x) at x=3.
The answer the book gives is y=(88x-89)/7 I just have no clue how to get there! Please help!
2. ## Re: Equation for tangent line of inverse function.
If $y= f(2f^{-1}(x))$, by the chain rule, the derivative is $y'= f'(2f^{-1}(x))\cdot 2(f^{-1})'(x)$. Further, $(f^{-1})'(x)= \frac{1}{f'(y)}$ where $y$ is such that $f(y)= x$. Here, $f(x)= x^4+ x^3+ 1$ so $f'(x)= 4x^3+ 3x^2$. $g(3)$ will be the value $y$ such that $f(y)= y^4+ y^3+ 1= 3$. Fortunately, it is clear that $y= 1$ satisfies that equation, so $g(3)= f^{-1}(3)= 1$.
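The computation can be finished and checked numerically. A minimal pure-Python sketch (Newton's method stands in for the inverse; everything else is from the thread):

```python
def f(x):
    return x**4 + x**3 + 1

def fprime(x):
    return 4 * x**3 + 3 * x**2

def g(target, x0=1.0):
    # invert f numerically with Newton's method (f is increasing for x > 0)
    x = x0
    for _ in range(50):
        x -= (f(x) - target) / fprime(x)
    return x

def F(x):
    return f(2 * g(x))

point = F(3)                               # g(3) = 1, so F(3) = f(2) = 25
h = 1e-6
slope = (F(3 + h) - F(3 - h)) / (2 * h)    # F'(3) = 88/7
# tangent line: y = 25 + (88/7)(x - 3) = (88x - 89)/7, matching the book
```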
http://mathoverflow.net/questions/38185/about-measurable-sets-and-intervals/38427
About measurable sets and intervals
Given a Lebesgue measurable set A with strictly positive measure, can we find an open interval (a,b) such that x belongs to A for almost every x in (a,b)?
Thanks in advance for any comments!
-
the cantor set of positive measure is a counterexample. – rpotrie Sep 9 2010 at 15:12
There's also a measurable set $A\subset\mathbb{R}$ such that for all nonempty open interval $I$ one has $0 < m(A\cap I) < m(I).$ It's a well-known, old exercise. See Rudin's book. – Pietro Majer Sep 9 2010 at 15:38
See, for example en.wikipedia.org/wiki/…, You may be interested in a weaker statement that what you ask for, see en.wikipedia.org/wiki/Lebesgue's_density_theorem – rpotrie Sep 9 2010 at 15:38
The question seems to have been answered. Voting to close as no longer relevant. – Victor Protsak Sep 9 2010 at 16:43
We don't generally close questions on MO just because they have been answered---after all, there are hundreds of non-closed questions here with excellent answers. Rather, I think what should happen is that rpotrie should answer the question with an answer about the fat Cantor set, rather than merely a comment (or someone else should if he doesn't). – Joel David Hamkins Sep 9 2010 at 20:19
3 Answers
The usual Cantor set, constructed by removing middle thirds at each step, is nowhere dense and has measure 0. However, there exist nowhere dense sets which have positive measure. The trick is to remove less: for instance, remove 1/4 from [0,1] during the first step, then 1/16 from each remaining piece, etc...
The resulting set is the fat Cantor set: it is nowhere dense and it has positive measure.
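One can also total up what such a construction removes. A quick sketch with exact rational arithmetic, assuming the common variant in which step $n$ removes $2^{n-1}$ open middle intervals of length $4^{-n}$:

```python
from fractions import Fraction

removed = Fraction(0)
for n in range(1, 60):
    # step n removes 2**(n-1) open intervals, each of length 4**(-n)
    removed += Fraction(2 ** (n - 1), 4 ** n)

# the total removed length converges to sum_{n>=1} 2^(n-1)/4^n = 1/2,
# so the fat Cantor set keeps measure 1/2 while containing no interval
```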
-
I actually worked this problem out during my Measure Theory course a couple years ago.
http://www.austinmohr.com/Work_files/hw2_3.pdf
Therein, you'll find a construction of a Cantor-like set having any measure strictly between 0 and 1. As with the Cantor set, you cannot find an interval contained in this Cantor-like set.
I don't suggest you accept my solution without checking it yourself, as it was written by one still coming to grips with the material. It should give you a good idea of how to construct your proof, however.
-
ThankS Austin Mohr, both your comments and pdf file are very useful. It is really hard for me to choose an answer between you and Alephomega's and also Rpotrie's one. I just chose the one with highest mark. Thank you again. :-) – Anand Sep 15 2010 at 12:06
From the positive side, as I've mentioned on the comments, you should look at Lebesgue's density theorem. It says that you will get intervals where the measure of the set intersected with the interval is as close to full as you like, in fact, this can be done for intervals around almost every point of the set, and it holds in other contexts also.
-
Thanks Rpotrie for your useful comments. :-) – Anand Sep 15 2010 at 12:06
http://mathhelpforum.com/advanced-statistics/108087-discrete-uniform-random-variable-2-a.html
# Thread:
1. ## Discrete Uniform Random Variable 2
I have the question:
Explain why a discrete uniform random variable, X, on [a,b] has the same variance as a discrete uniform random variable, W, on [1,b-a+1].
Use this information to establish that
...I don't even know where to start so any help would be great cheers
2. Originally Posted by sirellwood
I have the question:
Explain why a discrete uniform random variable, X, on [a,b] has the same variance as a discrete uniform random variable, W, on [1,b-a+1].
Use this information to establish that
...I don't even know where to start so any help would be great cheers
[a, b] --> [1, b - a + 1] suggests a horizontal translation of -a + 1: W = X - a + 1. And you should know that the shape of a distribution is unaffected by horizontal translation ....
3. ah ok, so for a discrete uniform on [a,b] we have $Var(X)=\frac{(b-a+1)^2-1}{12}$.
For the first one use a=a, b=b in that formula; for the second one use a=1, b=b-a+1, which leaves $b-a+1$ unchanged; so...
Var(X)=Var(W)?
4. $V(aX+b)=a^2V(X)$
The b, is a shift and that only changes the mean (center) not the variance (spread).
2b or not 2b? oh well it's 2am.
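The shift-invariance of the variance under $W = X - a + 1$ is easy to confirm by brute force. A small pure-Python check (the values of $a$ and $b$ are illustrative):

```python
def pop_var(vals):
    # population variance of a finite list of equally likely values
    vals = list(vals)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

a, b = 3, 10
var_x = pop_var(range(a, b + 1))        # X uniform on {a, ..., b}
var_w = pop_var(range(1, b - a + 2))    # W uniform on {1, ..., b-a+1}
# both equal ((b-a+1)**2 - 1) / 12
```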
http://physics.stackexchange.com/questions/tagged/lagrangian-formalism+terminology
# Tagged Questions
### What makes an equation an 'equation of motion'?
Every now and then, I find myself reading papers/text talking about how this equation is a constraint but that equation is an equation of motion which satisfies this constraint. For example, in the ...
### What is the difference between manifest Lorentz invariance and canonical Lorentz invariance?
I often read that the Lorentz symmetry is manifest in the path integral formulation but is not in the canonical quantization - what does this really mean?
### Is the Lagrangian of a quantum field really a 'functional'?
Weinberg says, page 299, The quantum theory of fields, Vol 1, that The Lagrangian is, in general, a functional $L[\Psi(t),\dot{\Psi}(t)]$, of a set of generic fields $\Psi[x,t]$ and their time ...
http://mathhelpforum.com/calculus/96192-3-body-problem.html
# Thread:
1. ## 3-body problem
There are 3 masses with mass $m_1, m_2, m_3$.
Assume that the mass $m_1$ is very large (like the Sun compared to the Earth and Mars), so that its position remains at the origin.
My question is: what are the paths of $m_2$ and $m_3$? Are the paths also ellipses?
2. Originally Posted by simplependulum
There are 3 masses with mass $m_1, m_2, m_3$.
Assume that the mass $m_1$ is very large (like the Sun compared to the Earth and Mars), so that its position remains at the origin.
My question is: what are the paths of $m_2$ and $m_3$? Are the paths also ellipses?
Perturbed ellipse.
Look at several possibilities:
1. Planet m2 has a circular orbit near m1; planet m3 has a circular orbit far from m1. As m3 orbits [m1;m2] the center of gravity changes, which will modify the orbit of m3.
2. m2 & m3 are of equal mass, orbit their common center of mass, and are far from m1.
2a. same as 2 but both are close to m1.
3. m2 & m3 are of equal mass and at equal distance but on opposite sides of m1.
You'll need to determine the orbital velocity that will maintain the longest stable orbit.
http://mathoverflow.net/questions/79608/conjugation-stable-generating-sets-in-almost-simple-groups
## Conjugation stable generating sets in almost simple groups
Hi everyone!
Given a finite simple group $S$, is it always possible to find two elements $x,y \in S$ with the property that for every $a,b \in S$ we have $\langle x^a,y^b \rangle = S$?
More in general, given a finite almost simple group $X$ with socle $S$, is it possible to find two elements $x,y \in X$ with the property that for every $a,b \in X$ we have $\langle x^a,y^b \rangle \supseteq S$?
I can deal with the alternating case, but I don't know where to look for results about the general case.
Thank you for any contribution.
-
Stupid question: With $x^a$ you mean $a^{-1} x a$ ? – Arno Kret Oct 31 2011 at 10:41
Hei Arno :) yes, I mean that. – Martino Garonzi Oct 31 2011 at 11:28
## 1 Answer
This property is known as "being invariably generated by $x$ and $y$" in the literature (though it has not been so extensively studied). For a nonabelian finite simple group, Kantor, Lubotzky and Shalev have shown that, indeed, such $x$ and $y$ exist. (This is Theorem 1.3 in http://front.math.ucdavis.edu/1010.5722 , which contains a lot of interesting material on similar questions).
-
Great! Thank you ! – Martino Garonzi Oct 31 2011 at 15:28
http://mathoverflow.net/questions/36743?sort=votes
## solving series of linear systems with diagonal perturbations
I would like to solve a series of linear systems Ax=b as quickly as possible. However, the systems are related. Specifically, each matrix A is given by:
cI + E
where E is a fixed sparse, symmetric positive definite real matrix (unchanged in all the linear systems), I is the identity matrix, and c is a varying complex number.
In other words, I am wondering how to quickly solve a series of complex linear systems which are all identical except for complex perturbations along the diagonal. I should say that the resulting matrices are not necessarily Hermitian, so currently I compute the LU decomposition. This works, but given the large number of rather closely related systems to be solved, I wonder if there is a better way to solve the problem, perhaps by using a more expensive (e.g. QR) decomposition up front.
(Edit for Jiahao: Yes, the bs are all the same.) (Edit for J. Mangaldan: The matrices are of order n=10^5 ~ 10^6, with about 10 times that many nonzeros.)
Update:
I'd like to thank everyone here for their suggestions. My implementation is ugly, but in the end interpolation was the key to a reasonable (10x) speedup. Since the c are quite close (imagine a small region of the complex plane, small in the sense that the spectrum of the matrix E is much larger) I could get away with computing solutions for a subset of the values of c and interpolating a solution for a given value of c using the precomputed values. It isn't elegant at all but it's something.
-
I assume the bs are all the same then? – Jiahao Chen Aug 26 2010 at 11:55
Exactly how big are your matrices anyway? This can crucially affect the practicality of proposed solutions. – J. M. Aug 26 2010 at 12:33
$10^6$... you've essentially said none of the proposals thus far are practical. ;) So what you in fact want is the vector `$(E+cI)^{-1}b$` with varying c; I'll check my notes on special methods for sparse matrices and report back. Only one thing: definitely QR decomposition will not be a practical solution to your problem either! – J. M. Aug 26 2010 at 12:51
## 3 Answers
You want the resolvent of $E$ (at $z:=-c$). Recall it's an analytic function of $z$ defined on the resolvent set, $\mathbb{C}\setminus\operatorname{spec}(E)$. According to the complexity of the matrix $E$, and with the number and the location of the $c$ you need to consider, it may be worth computing a power series expansion at various centers so as to cover the set $\{c\}$ of the data. For $|z|$ larger than the spectral radius you have of course the Laurent expansion $(z-E)^{-1}=1/z+ E/z^2+ E^2/z^3+\dots$
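When $|z|$ is safely beyond the spectral radius, even the truncated Laurent series is already a usable solver. A toy 2×2 sketch in pure Python (the matrix, right-hand side and $z$ are illustrative, not from the question):

```python
E = [[0.5, 0.1],
     [0.1, 0.3]]
b = [1.0, 2.0]
z = 5.0  # |z| well above the spectral radius of E (about 0.54 here)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# truncated Laurent series: (zI - E)^{-1} b ~= sum_{k=0}^{K} E^k b / z^{k+1}
x = [0.0, 0.0]
term = b[:]
for k in range(60):
    x = [xi + ti / z ** (k + 1) for xi, ti in zip(x, term)]
    term = matvec(E, term)

Ex = matvec(E, x)
residual = [z * xi - exi - bi for xi, exi, bi in zip(x, Ex, b)]  # ~ 0
```

Only matrix-vector products with the sparse $E$ are needed, so sparsity is preserved; the catch, as noted in the comments, is that convergence fails once the spectral radius of $E/z$ reaches 1.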
-
Well, (E+cI)x=b can indeed be turned into something like (E/c+I)x=b/c, and thus you can use the geometric series technique. Of course, if the spectral radius of E/c is bigger than 1, this idea is shot. – J. M. Aug 26 2010 at 12:29
Is there a method to use the resolvent without computing it explicitly? It seems to use the resolvent would have to be recalculated for all cs, and explicit computation could result in massive fill-in (loss of sparsity) as @J. Mangaldan pointed out above once products like E^2 are computed. – Jiahao Chen Aug 26 2010 at 12:50
Well, one can do it Krylov-style, assuming there is a nice black box for matrix-vector multiplications. Maintain a vector v initialized to the right-hand side, and at every iteration multiply this by E. But again, this is only feasible if it can be assured that the spectral radius of E/c never exceeds unity. – J. M. Aug 26 2010 at 13:00
Yes, as I wrote. Nevertheless, finitely many expansions suffice to cover the set of {c}; whether this approach is efficient depends on the details. – Pietro Majer Aug 26 2010 at 14:22
a) There are formulae such as the Woodbury identity that allow for rank k updates to a previously solved problem, which I think fits your problem nicely.
b) In addition, using a reasonably smart iterative algorithm such as conjugate gradients (or whatever is appropriate for your problem) will also be helpful since you can feed it the solution from your previous problem, and for small perturbations the new solution can be computed very quickly.
In practice I have found it sufficient to use just (b), but it might be worth trying both separately or together.
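For suggestion (b), here is a minimal conjugate-gradient sketch with a warm start, in pure Python. Note the caveat raised elsewhere in the thread: plain CG requires the shifted matrix to be symmetric positive definite, so this toy uses small real positive shifts $c$ rather than complex ones, and the matrix is illustrative:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def cg(A, rhs, x0, tol=1e-12, maxit=200):
    # conjugate gradients for SPD A, started from x0
    x = x0[:]
    r = [bi - ai for bi, ai in zip(rhs, matvec(A, x))]
    p = r[:]
    rs = dot(r, r)
    for _ in range(maxit):
        if rs < tol:
            break
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def shifted(E, c):
    n = len(E)
    return [[E[i][j] + (c if i == j else 0.0) for j in range(n)] for i in range(n)]

E = [[4.0, 1.0], [1.0, 3.0]]
rhs = [1.0, 2.0]
x1 = cg(shifted(E, 0.50), rhs, [0.0, 0.0])  # cold start
x2 = cg(shifted(E, 0.55), rhs, x1)          # warm start from the previous solution
```

For nearby shifts the warm start begins with a small residual, so far fewer iterations are needed than from zero.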
-
(a) does not look useful since the updates are full-rank (b) seems useful only if the $c$s are close. Or am I missing something? – Federico Poloni Aug 26 2010 at 12:14
Sherman-Morrison-Woodbury, as already stated, is of no help since it is intended for low-rank corrections; e.g., corrections expressible in the form $UV^T$, where $U$ and $V$ are rectangular. CG is of no use since he said in the problem statement "I should say that the resulting matrices are not necessarily Hermitian", and computing $(E+cI)^T(E+cI)$ so that you can apply CG can result in a much denser matrix. – J. M. Aug 26 2010 at 12:24
@Frederico: That's true, I forgot about fill-in, since using Woodbury would require the computation of E^-1 . E^-1. I am assuming that the c's are small since the OP did say diagonal perturbations. – Jiahao Chen Aug 26 2010 at 12:54
@J. Mangaldan: that is true, CG is not guaranteed to work. Sometimes it does anyway though! But really without knowing more about the problem it is difficult to recommend any particular iterative solver. Tricks like level shifting (regularizing the matrix A to make it always posdef) can sometimes be useful in using CG. BCG certainly doesn't seem feasible due to fillin. My first thought was regularized CG or GMRES. – Jiahao Chen Aug 26 2010 at 12:54
A method that sometimes works and sometimes doesn't, IMHO, cannot be recommended in good conscience to somebody whose problem's properties still have to be explored fully. Besides, he said c can be complex, and that can play havoc with regularization. – J. M. Aug 26 2010 at 12:57
If you're doing a full LU decomposition and ignoring sparsity, then you could switch to a Schur decomposition (costs $25n^3$ instead of $\frac{2}{3}n^3$, but allows you to solve any of the resulting systems within $O(n^2)$). If you're using sparsity, as far as I know it is an open research problem how to exploit this property fully (see e.g. the rational Krylov method).
-
E might be sparse, the Schur decomposition is definitely dense, and computational time and storage can be prohibitive. I'm assuming that E is a large enough matrix that the "LU decomposition" alluded to in the original post is in fact set to do something like the Cuthill-McKee ordering to maintain sparsity in the triangular factors. – J. M. Aug 26 2010 at 12:32
Unfortunately, I do need to maintain sparsity or the problem becomes too big to handle. (I am using the KLU package, but I expect that most LU solvers would be able to handle the linear systems I have.) – Fumiyo Eda Aug 26 2010 at 12:47
As I said, the reason why those LU solvers can manage your large matrices is that they do a preliminary analysis of sparsity pattern before performing the LU decomposition. Blindly triangularizing or pivoting can result in disastrous fill-in, and thus pattern analysis is a crucial step for these LU solvers. – J. M. Aug 26 2010 at 12:54
http://stats.stackexchange.com/questions/26352/pca-analysis-with-centered-variables
# PCA analysis with centered variables
If I center my variables and then run a PCA analysis, do I need to interpret negative eigenvectors different than positive eigenvectors?
-
What does it mean that a vector is negative? If $\mathbf{x}$ is an eigenvector of the matrix $\mathbf{A}$ then $\mathbf{Ax}=\lambda\mathbf{x}$ for some $\lambda$. But then $A(-\mathbf{x})=-A\mathbf{x}=\lambda(-\mathbf{x})$ so $-\mathbf{x}$ is also an eigenvector. – MånsT Apr 12 '12 at 12:08
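MånsT's point is easy to check numerically: if $\mathbf{x}$ is an eigenvector then so is $-\mathbf{x}$, so the sign a PCA routine reports is pure convention. A tiny pure-Python check on an illustrative symmetric 2×2 matrix:

```python
A = [[2.0, 1.0],
     [1.0, 2.0]]
v = [1.0, 1.0]   # eigenvector of A with eigenvalue 3
lam = 3.0

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

Av = matvec(A, v)                        # equals [lam * vi for vi in v]
Av_neg = matvec(A, [-vi for vi in v])    # equals [lam * (-vi) for vi in v]
```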
I don't mind, thank you @whuber. I think I need to rephrase my question. In my PCA analysis I have in a component both negative and positive variables. This is probably very basic, but I've been told different things about the interpretation of the variables within a component, so I just need some clarification. First; Is it so that the negative values within a component moves in the same direction and the positive in the opposite? Or is it so that I should only look at the absolute values of the variables within a component? I hope this makes sense. – Hanne Apr 12 '12 at 22:30
Ah, so what you mean is how to interpret the relative signs of the coefficients of the eigenvectors. (Even the signs are meaningless because negating an eigenvector will reverse all the signs, but the relative signs do not change, no matter how the eigenvector is scaled, and therefore are meaningful.) Because not everybody reads all comments, it would be good for you to edit your question to make this clear. – whuber♦ Apr 13 '12 at 13:33
## 2 Answers
I think you have it backwards. If the value is positive, then a higher score on that variable is associated with a higher score on the component, if the value is negative, then a higher score implies a lower score on the component.
In addition, people sometimes use PCA to determine whether to keep or combine certain variables for a subsequent analysis. This is not, strictly speaking, an appropriate use of PCA. Factor analysis should be used for this purpose, but at any rate, people do it. In such a case, people will look at the absolute value to see if it is above some arbitrary threshold, such as .5, and if so, retain (or combine), and if not, drop. For what it's worth, I don't recommend this.
Update: I can't tell if I answered the right question or not. @whuber's second comment, in my opinion, is right on the money, and also consistent with my first paragraph above. However, the question is now different than before, and different from how I understand @whuber's comment, so I am a little confused. Essentially, PCA solves for the eigenvectors and eigenvalues. The eigenvalues will not be negative whether or not you centered your variables first: they are the variances along the corresponding eigenvectors, and just as I cannot buy a board -10 feet (i.e., -3 meters) long to build a patio, you cannot have a negative variance. The overall sign of an eigenvector, on the other hand, is arbitrary: you could negate it by multiplying all its entries by -1, but as @whuber notes, that sign flip is meaningless. Once again as @whuber notes, the relative signs are meaningful, and their relation to the component is as I stated in my first paragraph above. That is, the relative signs (negative vs. positive) will denote the same relationship between higher (/ lower) scores on the variable and the component whether the variables were centered first or not.
-
centering your variables shouldn't change the PCA results, as PCA first determines a correlation matrix and goes on from there. The correlations between your variables should be the same regardless, so the PCA results should not be affected by any mean centering you perform.
-
I think in practice that is not always the case. Usually, $\mathbf X \mathbf X'$ is decomposed. If $\mathbf X$ is mean centered, $\mathbf X \mathbf X' = COV (\mathbf X)$, if it is mean centered and variance scaled, then $\mathbf X \mathbf X' = COR (\mathbf X)$. many (most) PCA functions will do the mean centering and/or variance scaling as part of their default pre-processing, but many also allow to switch this off. See e.g. the arguments `center` and `scale.` of `prcomp` in R. – cbeleites Apr 13 '12 at 12:48
@cbeleites that is interesting, thanks. With regard to the original question, the centering should not change the PCA results, unless you specify alternative options when you call the PCA function? – Luke Apr 13 '12 at 12:53
iff the default option is to center, then the results should not change (but they may flip as Måns explained). If you don't know whether your PCA algorithm does the centering and scaling, you can actually test it that way. – cbeleites Apr 13 '12 at 13:14
To be precise: the above formulae are not exactly correct for $COV$ and $COR$ but are what PCA usually does. If you really want to calculate correlation or covariance, you need to divide by the degrees of freedom $COV(\mathbf X) = \frac{1}{df} \mathbf X_{centered} \mathbf X_{centered}^′$; e.g. for the sample covariance $df = n - 1$ unless you have extra knowledge like mean must be zero for theoretical reasons. – cbeleites Apr 13 '12 at 13:23
@whuber: Sure! Depending on the data, the uncentered first PC may actually be very close to the mean vector. However, most PCA functions will do the centering by default, and the results do not change whether the mean centering is done outside the PCA function or inside or both outside and again inside. And it may be quite hidden in the results, e.g. R's `prcomp` has the center but by default doesn't print it. I've also seen it reported it as the "$0^{th}$ component". So it may not be very obvious that it is actually done. – cbeleites Apr 13 '12 at 14:28
http://math.stackexchange.com/questions/58474/prove-convexity-concavity-of-a-complicated-function?answertab=votes
# Prove convexity/concavity of a complicated function
Can anyone help me to prove the convexity/concavity of the following complicated function? I have tried a lot of methods (definition, 1st derivative etc.), but this function is so complicated that I finally couldn't prove it... However, I plotted it with many different parameters, and it always appears concave in $\rho$...
$$f\left( \rho \right) = \frac{1}{\lambda }(M\lambda \phi - \rho (\phi - \Phi )\ln (\rho + M\lambda ) + \frac{1}{{{e^{(\rho + M\lambda )t}}\rho + M\lambda }}\cdot( - (\rho + M\lambda )({e^{(\rho + M\lambda )t}}{\rho ^2}t(\phi - \Phi ) )$$ $$+ M\lambda (\phi + \rho t\phi - \rho t\Phi )) + \rho ({e^{(\rho + M\lambda )t}}\rho + M\rho )(\phi - \Phi )\ln ({e^{(\rho + M\lambda )t}}\rho + M\lambda ))$$
Note that $\rho > 0$ is the variable, and $M>0, \lambda>0, t>0, \phi>0, \Phi>0$ are constants with any possible positive values...
-
Is $f$ the function you wanted to optimize in your previous question? :D Edit: Oh no I didn't realized that $f$ is one-variable here! – Jineon Baek Aug 19 '11 at 9:52
Dear Jineon, you are right, this is a part of the equation in my previous question... and btw, in this function only $\rho$ is variable, others are just any positive values. – Dobby Aug 20 '11 at 4:11
@Dobby: Try separating this function out into a sum of functions $g_i(\rho)$ and see if $g_i$ is convex. – Jacob Oct 24 '11 at 15:11
When $\phi = \Phi$ your function is of the form $A/(B \rho e^{\rho t} + C)$ where $A,B,C,t>0$. This should be convex, not concave. – Robert Israel Mar 22 '12 at 19:21
## 1 Answer
I am a newcomer so would prefer to just leave a comment, but alas I see no "comment" button, so I will leave my suggestion here in the answer box.
I have often used a Nelder-Mead "derivative free" algorithm (fminsearch in matlab) to minimize long and convoluted equations like this one. If you can substitute the constraint equation $g(\rho)$ into $f(\rho)$ somehow, then you can input this as the objective function into the algorithm and get the minimum, or at least a local minimum.
You could also try heuristic methods like simulated annealing or great deluge. Heuristic methods would give you a better chance of getting the global minimum if the solution space has multiple local minima. In spite of their scary name, heuristic methods are actually quite simple algorithms.
As for proving the concavity I don't see the problem. You mention in your other post that both $g(\rho)$ and $f(\rho)$ have first and second derivatives, so it should be straightforward, right?
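For a one-variable objective like $f(\rho)$, even the simplest derivative-free bracketing search (a 1-D cousin of Nelder-Mead) does the job. A golden-section sketch in Python; the quadratic test objective is a stand-in for the $f$ above:

```python
import math

def golden_section(f, a, b, tol=1e-9):
    # derivative-free minimization of a unimodal f on [a, b]
    gr = (math.sqrt(5) - 1) / 2
    c = b - gr * (b - a)
    d = a + gr * (b - a)
    while b - a > tol:
        if f(c) < f(d):       # minimum lies in [a, d]
            b, d = d, c
            c = b - gr * (b - a)
        else:                 # minimum lies in [c, b]
            a, c = c, d
            d = a + gr * (b - a)
    return (a + b) / 2

rho_min = golden_section(lambda r: (r - 2.0) ** 2 + 1.0, 0.0, 5.0)  # -> 2.0
```

To maximize instead (e.g. to locate the peak of a concave $f(\rho)$), minimize $-f$.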
-
Dear, thank you so much for your suggestions, I am trying your methods : ) GL! – Dobby Oct 5 '11 at 6:34
http://mathoverflow.net/revisions/38351/list
## Return to Question
Revision 2: Clarification and addition of hypotheses
# Are vertex and edge-transitive graphs determined by their spectrum?
A graph is called vertex and edge transitive if the automorphism group is transitive on both vertices and edges.
The spectrum of a graph is the collection (with multiplicities) of eigenvalues of the incidence matrix.
Supposedly, it is conjectured that almost all graphs have the property that they are the unique graph with their spectrum (at least, according to MathWorld).
So,
If $\Gamma_1,\Gamma_2$ are two vertex and edge transitive graphs with the same valence which are isospectral (have the same spectrum), then does it follow that $\Gamma_1\cong \Gamma_2$?
Revision 1
# Are half-transitive graphs determined by their spectrum?
A graph is called half-transitive if the automorphism group is transitive on both vertices and edges.
The spectrum of a graph is the collection (with multiplicities) of eigenvalues of the incidence matrix.
Supposedly, it is conjectured that almost all graphs have the property that they are the unique graph with their spectrum (at least, according to MathWorld).
So, are half-transitive graphs determined by spectra? If not, is there some subclass which is, or which is known to be?
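An aside on the question itself: spectra do not determine graphs in general. The star $K_{1,4}$ and the disjoint union $C_4 \cup K_1$ are a classical non-isomorphic cospectral pair (neither graph is both vertex- and edge-transitive, so this does not settle the question above; it only recalls why a transitivity hypothesis is needed). A quick numeric check, assuming NumPy is available:

```python
# The adjacency spectra of K_{1,4} and of C_4 plus an isolated vertex
# both equal {-2, 0, 0, 0, 2}, yet the graphs are not isomorphic.
import numpy as np

star = np.zeros((5, 5))            # K_{1,4}: vertex 0 joined to vertices 1..4
star[0, 1:] = star[1:, 0] = 1

c4k1 = np.zeros((5, 5))            # C_4 on vertices 0..3, vertex 4 isolated
for i in range(4):
    j = (i + 1) % 4
    c4k1[i, j] = c4k1[j, i] = 1

s1 = np.sort(np.linalg.eigvalsh(star))
s2 = np.sort(np.linalg.eigvalsh(c4k1))
```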
http://mathhelpforum.com/discrete-math/17497-mathematical-induction-help.html
# Thread:
1. ## Mathematical Induction help
Hi all, I'm new to the forum and I am having difficulty understanding induction.
I have these 2 examples from the textbook and they give me answers but I was wondering if anyone can help me through it. I keep staring at it and it just doesn't sink in.
1st problem:
Prove that the sum of the first n odd positive integers = n^2
I get the first step: prove that P(1) is true, i.e. the sum of the first odd integer is 1, and 1^2 = 1.
The textbook puts 1+3+5+...+(2k-1)=k^2
I know 2k-1 is an odd integer, but this is what confuses me:
the textbook has 1+3+5+...+(2k-1)+(2k+1) = (k+1)^2
then..
has 1+3+5....+(2k-1)+(2k+1) = [1+3+...(2k-1)] + (2k+1)
=k^2 + (2k+1) ---where did k^2 come from?
=k^2 + 2k + 1
=(k+1)^2
2. Originally Posted by ff4930
Typically in a mathematical induction problem you prove the statement for some specific value of k, then use the theorem itself to simplify your problem.
Specifically in your case we know that the theorem is true for k = 1.
Let me change your variables a bit. (This may make this more transparent, or it may confuse you. I hope the former!) Assume the theorem is true for some k = n. Then
$1 + 3 + ~ ... ~ + (2n - 1) = n^2$
We need to show that the theorem is true for the next value of k, n + 1. That is, we need to show that:
$1 + 3 + ~ ... ~ + (2n - 1) + (2n + 1) = (n + 1)^2$
Break down the sum on the LHS:
$(1 + 3 + ~ ... ~ + (2n - 1)) + (2n + 1) = (n + 1)^2$
The sum in parenthesis is just $n^2$, according to our assumption. So we have:
$n^2 + (2n + 1) = (n + 1)^2$
which is an identity, so the theorem is true for k = 1, thus k = 2, 3, 4, ...
-Dan
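(Not part of the proof, but the identity and the induction step are easy to sanity-check numerically with a throwaway script:)

```python
# Check 1 + 3 + ... + (2n - 1) = n^2, and the induction step
# n^2 + (2n + 1) = (n + 1)^2, for a range of n.
for n in range(1, 50):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2
    assert n ** 2 + (2 * n + 1) == (n + 1) ** 2
```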
3. Thanks for the detailed explanation, but I don't get this part:
We need to show that the theorem is true for the next value of k, n + 1. That is, we need to show that:
why is it 2n+1?
wouldn't the next value be another odd integer, which is (2n-1), and since it's k+1 wouldn't it be (2n-1)+1? Sorry if I'm not seeing it right away.
Ok, is this right?
because they give us that the formula for the nth odd integer is 2n - 1,
so to find the (n+1)th it is 2(n+1) - 1?
4. Originally Posted by ff4930
The formula for the "k"th odd integer is 2k - 1. So the next one will be:
$2(k + 1) - 1 = 2k + 2 - 1 = 2k + 1$
-Dan
5. Thank you for your help.
I've tried applying what I've learned from that to the 2nd example, and if you can help clarify it I'd appreciate it because I'm a little stuck.
prove n < 2^n
for all positive integers n
always have to prove the base step, which is easy: P(1) = 1 < 2^1 = 2
Then I have to prove that it is also correct for k+1 integers,
which is to prove k + 1 < 2^k+1
Here is where I'm stuck: the textbook says add 1 to both sides of k < 2^k, noting that 1 <= 2^k,
which is k+1 < 2^k + 1 <= 2^k + 2^k = 2^k+1
I just don't understand where the 2^k + 2^k came from.
6. Originally Posted by ff4930
One thing that is definitely confusing you is your notation. PLEASE USE PARENTHESIS!!!
You need to show that if
$n < 2^n$
That
$n + 1 < 2^{n + 1}$
(If you can't use the fancy LaTeX, write this as "n + 1 < 2^(n + 1)")
So let's take the hint. Let's look at
$n < 2^n$
and add 1 to both sides:
$n + 1 < 2^n + 1$ <-- "2^n + 1" is NOT "2^(n + 1)" This was an error in your work, I think.
Now, note that
$2^{n + 1} = 2^n \cdot 2 = 2^n + 2^n$
and because $1 \leq n$ we also know that $1 < 2 \leq 2^n$
So
$n + 1 < 2^n + 1 < 2^n + 2^n = 2^{n + 1}$
Thus
$n + 1 < 2^{n + 1}$
(Note: There may be an easier way to do this. This method was the first one that came to me after looking at the hint.)
-Dan
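(Again, the whole inequality chain can be spot-checked numerically; the induction is what proves it for all n, this is just a sanity check:)

```python
# Check n < 2^n and the chain n + 1 < 2^n + 1 <= 2^n + 2^n = 2^(n+1).
for n in range(1, 60):
    assert n < 2 ** n
    assert n + 1 < 2 ** n + 1 <= 2 ** n + 2 ** n == 2 ** (n + 1)
```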
7. Thank you very much for taking time out to explain it.
My problem is grasping the information. I'm going to die when the exam comes if I'm having so much trouble with examples. Hopefully after some practice problems I'll be all right. Thanks again.
8. Originally Posted by ff4930
Worry makes its own problems. Just find as many problems to work as you can and you'll get it down. Practice, practice, practice...
-Dan
http://mathhelpforum.com/advanced-statistics/3173-binomial-hypothesis-tests-hard-one.html
# Thread:
1. ## binomial, hypothesis tests - hard one
I'm really having trouble with this question and it's pretty long too. If anyone can help me figure out the answers, I would really appreciate it.
In a recent survey, 480 of 600 Canadians polled stated that they were dissatisfied with politicians. Of the remaining 120 who were polled, 75% were satisfied with politicians and 25% had no opinion. Assuming that these findings can be generalized to all Canadians.
A) In a random sample of 4 Canadians, what is the probability that no more than 1 would be satisfied with politicians?
Answer is 0.89
B) If two random and independent samples of Canadian's were taken, one consisting of 20 people and the other of 25 people, what is the probability that more than 18 of the sample of 20 or that between 19 and 23 (inclusively) of the sample of 25 would state a definite opinion (for or against) about politicians?
Answer is 0.8302
C) Among people who initially have no opinion, 50% will typically form an opinion (for or against an issue) after reading a relevant news report. In an attempt to get more people to form an opinion and take one side or the other on political issues, a news service has developed a new approach to presenting information on the issues. It tried this out on a random sample of 20 people who initially claimed that they had no opinion about an issue. The news service will conclude that the new approach is more effective if at least 15 of these 20 report a definite opinion about the issue after reading about it. What is the probability that the news will conclude that the new approach is more effective even if it is in fact no better than previous methods?
D) In a second study, the news service tries out the new approach to presenting information on a random sample of 25 people who initially expressed no opinion about an issue. The news service will test the hypothesis that the new approach is more successful than previous methods in getting people to form an opinion using alpha < 0.03. Suppose that the new approach would actually cause 80% of initially no-opinion people to form a definite opinion. Given this, what is the probability that the news service will draw the correct conclusion based upon their second study.
a) 0.89
b) 0.8302
c) 0.021
d) 0.891
2. Originally Posted by swoopesjr01
There are two cases here:
1) All four are dissatisfied.
2) Exactly one is satisfied, i.e. exactly 3 are dissatisfied.
The probability of 1 is binomial probability problem with $p=480/600=.8$
Thus,
${4 \choose 4}(.8)^4(.2)^0=.4096$
The probability of 2 is,
${4 \choose 3}(.8)^3(.2)^1=.4096$
Thus in total we have,
$.8192$
3. Originally Posted by ThePerfectHacker
Here you need to use $p=(120\times 0.75)/600$ as the probability that
someone is satisfied and $q=1-p$ that they were dissatisfied
or expressed no opinion.
RonL
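With this correction, p = (120 x 0.75)/600 = 0.15, part A) comes out to the 0.89 quoted in the question (a quick script to check, using Python's math.comb):

```python
# Part A) with the corrected p = P(satisfied) = (120 * 0.75) / 600 = 0.15:
# P(at most 1 of 4 satisfied) = C(4,0) q^4 + C(4,1) p q^3.
from math import comb

p = (120 * 0.75) / 600
q = 1 - p
prob = sum(comb(4, k) * p ** k * q ** (4 - k) for k in range(2))
# prob = 0.89048125, i.e. the 0.89 quoted in the question
```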
http://math.stackexchange.com/questions/68614/why-do-these-inequalities-in-metric-spaces-hold
# Why do these inequalities in metric spaces hold?
The other day I stumbled across some inequalities regarding properties of metric spaces. I'm curious to see a proof of why it holds.
Suppose $(X,\rho)$ is any metric space. For a given $\epsilon\gt 0$, I let $N(X,\epsilon)$ denote the least $n$ such that $X=\bigcup\limits_{i=1}^n U_i$ where $U_i$ are sets such that $\operatorname{diam}(U_i)\leq 2\epsilon$. I also denote by $M(X,\epsilon)$ the greatest number of $m$ points $x_i$, $1\leq i\leq m$ such that $\rho(x_i,x_j)\gt\epsilon$ whenever $i\neq j$.
With this notation, why is it that $N(X,\epsilon)\leq M(X,\epsilon)$ and $M(X,\epsilon)\leq N(X,\epsilon/2)$? Thanks.
-
## 2 Answers
For given $\epsilon$, pick a set of $M(X,\epsilon)$ points at distances greater than $\epsilon$, and form closed balls with radius $\epsilon$ around them. If there is a point in $X$ that belongs to none of these balls, we can add it to the set, contradicting the maximality of $M(X,\epsilon)$. Thus these $M(X,\epsilon)$ sets of diameter $2\epsilon$ cover $X$, and hence $N(X,\epsilon)\le M(X,\epsilon)$.
For the other direction, note that a set with diameter $2\epsilon/2=\epsilon$ can contain at most one point of a set of $M(X,\epsilon)$ points at distances greater than $\epsilon$; thus we need at least $M(X,\epsilon)$ such sets to cover $X$.
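Both steps of this argument can be illustrated by brute force on a small finite metric space (a sketch, with an illustrative point set on the line; it uses the minimal number of closed $\epsilon$-balls centered at points of $X$ in place of $N(X,\epsilon)$, which suffices for the chain: such a ball has diameter at most $2\epsilon$, the packing points' balls cover $X$, and a set of diameter at most $\epsilon$ contains at most one packing point):

```python
from itertools import combinations

X = [0.0, 1.0, 2.0, 5.0, 6.0, 9.0]        # a small metric space on the line
rho = lambda a, b: abs(a - b)

def packing(eps):
    """M(X, eps): size of a largest subset with pairwise distances > eps."""
    best = 1
    for k in range(1, len(X) + 1):
        for S in combinations(X, k):
            if all(rho(a, b) > eps for a, b in combinations(S, 2)):
                best = max(best, k)
    return best

def ball_cover(eps):
    """Minimal number of closed eps-balls centered at points of X covering X.
    Each such ball has diameter <= 2*eps, so this bounds N(X, eps) above."""
    for k in range(1, len(X) + 1):
        for C in combinations(X, k):
            if all(any(rho(x, c) <= eps for c in C) for x in X):
                return k

eps = 1.5
assert ball_cover(eps) <= packing(eps) <= ball_cover(eps / 2)
```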
-
Let $S$ be a set of the greatest number of points $x_i$, $1 \leq i \leq M(X,\epsilon) =: m$ such that $\rho(x_i, x_j) > \epsilon$ (call this property P).
Define balls centered around $x_i$ with radius $\epsilon$, $B(x_i, \epsilon) = \{ y \in X | \rho(x_i, y) \leq \epsilon \}$
Claim: $\displaystyle X = \bigcup _{1 \leq i \leq m} B(x_i, \epsilon)$
Proof: $\bigcup _{1 \leq i \leq m} B(x_i, \epsilon) \subseteq X$ is obvious.
Consider an arbitrary $x \in X$.
If $\forall i \leq m, \rho (x, x_i) > \epsilon$, then we can add $x$ to $S$ and which will contradict the fact that it is the greatest set with property P. Thus there must exist at least one index $1 \leq j \leq m$ such that $\rho (x, x_j) \leq \epsilon$. But this means $x \in B(x_j,\epsilon)$ and thus $x \in \bigcup _{1 \leq i \leq m} B(x_i, \epsilon)$. Hence $X \subseteq \bigcup _{1 \leq i \leq m} B(x_i, \epsilon)$ and we are done.
Due to the fact that diam$(B(x_i,\epsilon)) \leq 2\epsilon$ and the claim, we have:
$N(X,\epsilon) \leq m$
The idea for the second inequality is to consider the family of sets $T = \{ U_i\}$ for $1 \leq i \leq N(X,\frac{\epsilon}{2})$. Construct an injective map $\phi:S \to T$. $\phi$ maps $x_i$ to a set in the family $T$ that contains $x_i$. Can you complete the argument?
-
http://mathoverflow.net/questions/53431/does-any-research-mathematics-involve-solving-functional-equations/53526
## Does any research mathematics involve solving functional equations?
This is a somewhat frivolous question, so I won't mind if it gets closed. One of the categories of Olympiad-style problems (e.g. at the IMO) is solving various functional equations, such as those given in this handout. While I can see the pedagogical value in doing a few of these problems, I never saw the point in practicing this particular type of problem much, and now that I'm a little older and wiser I still don't see anywhere that problems of this type appear in a major way in modern mathematics.
(There are a few notable exceptions, such as the functional equation defining modular forms, but the generic functional equation problem has much less structure than a group acting via a cocycle. I am talking about a contrived problem like finding all functions $f : \mathbb{R} \to \mathbb{R}$ satisfying
$$f(x f(x) + f(y)) = y + f(x)^2.$$
When would this condition ever appear in "real life"?!)
Is this impression accurate, or are there branches of mathematics where these kinds of problems actually appear? (I would be particularly interested if the condition, like the one above, involves function composition in a nontrivial way.)
Edit: Thank you everyone for all of your answers. As darij correctly points out in the comments, I haven't phrased the question specifically enough. I am aware that there is a lot of interesting mathematics that can be phrased as solving certain nice functional equations; the functional equations I wanted to ask about are specifically the really contrived ones like the one above. The implicit question being: "relative to other types of Olympiad problems, would it have been worth it to spend a lot of time solving functional equations?"
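(For what it's worth, the displayed equation does have solutions: a quick numeric spot check, over a small integer grid only, confirms that $f(x)=x$ and $f(x)=-x$ both satisfy it.)

```python
def satisfies(f, rng=range(-5, 6)):
    """Spot-check f(x f(x) + f(y)) = y + f(x)^2 on a small integer grid."""
    return all(f(x * f(x) + f(y)) == y + f(x) ** 2 for x in rng for y in rng)

assert satisfies(lambda t: t)       # the identity works
assert satisfies(lambda t: -t)      # and so does its negative
```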
-
I'm ambivalent about your actual question, but I would strongly prefer it if you changed your title. Surely you can find an adjective other than "serious" to describe what you mean? – Yemon Choi Jan 27 2011 at 1:19
maybe try searching MathSciNet for all recent papers with primary MSC 39BXX? – Willie Wong Jan 27 2011 at 1:33
@Yemon: changed to "research." Does that sound better? – Qiaochu Yuan Jan 27 2011 at 1:58
It's somewhat strange to exclude any example that involves a group action. If we've learned anything in the last century, it is that a majority of interesting mathematical structure that are worth considering are tied to the actions of groups (or group-like objects)... – Terry Tao Jan 27 2011 at 10:35
I think two questions should be distinguished here: (1) Does any research mathematics involve solving (nontrivial) functional equations (by any method whatsoever)? (2) Does any research mathematics involve the methods usually employed for olympiad-style elementary functional equations? While the answer to (1) is clearly a "yes" (differential equations, difference equations etc.), I believe the answer to (2) is a No in the sense that most methods I have learnt in my olympiad time are utterly useless to me now. Of course, some func. eqn. problems HAVE conceptual solutions (such as finding a ... – darij grinberg Jan 27 2011 at 10:48
## 14 Answers
In additive combinatorics, one often seeks to count patterns such as an arithmetic progression $a, a+r, \ldots, a+(k-1)r$. When doing so, one is naturally led to expressions such as $${\bf E}_{a,r \in G} f_0(a) f_1(a+r) \ldots f_{k-1}(a+(k-1)r)$$ for some finite abelian group $G$ and some complex-valued functions $f_0,\ldots,f_{k-1}$. If these functions are bounded in magnitude by $1$, then the above expression is also bounded in magnitude by one. When does equality hold? Precisely when one has a functional equation $$f_0(a) f_1(a+r) \ldots f_{k-1}(a+(k-1)r) = c$$ for some constant $c$ of magnitude $1$. One can solve this functional equation, and discover that each $f_j$ must take the form $f_j(a) = e^{2\pi i P_j(a)}$ for some polynomial $P_j: G \to {\bf R}/{\bf Z}$ of degree at most $k-2$. This observation can be viewed as the starting point for the study of Gowers uniformity norms, and one can perform a similar analysis to start understanding many other patterns in additive combinatorics.
In ergodic theory, cocycle equations, of which the coboundary equation
$$\rho(x) = F(T(x)) - F(x)$$
is the simplest example, play an important role in the study of extensions of dynamical systems and their cohomology. Despite the apparently algebraic nature of such equations, though, one often solves these equations instead by analytic means (and in particular, not by IMO techniques), for instance by using the spectral theory or mixing properties of the shift $T$, and exploiting the measurability or regularity properties of $\rho$ or $F$. (The solving of such equations, incidentally, is a crucial aspect of the ergodic theory analogue of the study of the Gowers uniformity norms mentioned earlier, as developed by Host-Kra and Ziegler.)
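As a toy illustration of the Fourier-analytic approach to the coboundary equation (a hypothetical worked example, not taken from the answer): for the circle rotation $T(x) = x + \alpha \bmod 1$ and $\rho(x) = \cos(2\pi x)$, dividing Fourier coefficients gives $\hat F(n) = \hat\rho(n)/(e^{2\pi i n\alpha} - 1)$, which here is a finite sum over the two modes $n = \pm 1$:

```python
import cmath, math

alpha = math.sqrt(2) - 1                 # an irrational rotation number

def F(x):
    """Solves F(x + alpha) - F(x) = cos(2 pi x) via Fourier coefficients."""
    total = 0.0 + 0.0j
    for n in (1, -1):
        rho_hat = 0.5                    # Fourier coefficient of cos(2 pi x)
        total += (rho_hat / (cmath.exp(2j * math.pi * n * alpha) - 1)
                  * cmath.exp(2j * math.pi * n * x))
    return total.real

# verify the coboundary equation at a few points
for x in (0.0, 0.1, 0.37, 0.9):
    assert abs((F(x + alpha) - F(x)) - math.cos(2 * math.pi * x)) < 1e-12
```

For a general mean-zero $\rho$ the same division is attempted mode by mode, and the small denominators $e^{2\pi i n\alpha}-1$ are exactly where the analysis becomes delicate.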
Returning to the more "contrived" functional equations of Olympiad type, note that such equations usually use (a) the additive structure of the domain and range, (b) the multiplicative structure of the domain and range, and (c) the fact that the domain and range are identical (so that one can perform compositions such as $f(f(x))$). In most mathematical subjects, at least one of these features is absent or irrelevant, which helps explain why such equations are relatively rare in research mathematics. For instance, in many branches of analysis, the range of functions (typically ${\bf R}$ or ${\bf C}$) usually has no natural reason to be identified with the domain of functions (which may "accidentally" be ${\bf R}$ or ${\bf C}$, but is often more naturally viewed in a more general category, such as that of measure spaces, topological spaces, or manifolds), so (c) is usually absent. Conversely, in dynamics, (c) is prominent, but (a) and (b) are not. The only fields that come to my mind that naturally exhibit all three of (a), (b), (c) (without also automatically exhibiting much richer algebraic structure, such as ring homomorphism structure) are complex dynamics, universal algebra, and certain types of cryptography, but I don't have enough experience in these fields to actually provide some interesting examples.
-
Very good point in the last paragraph. Together with the other answers (especially the one about information theory) my curiosity has pretty much been satisfied. – Qiaochu Yuan Jan 28 2011 at 0:01
1+ Very illuminating – Martin Brandenburg Jan 28 2011 at 9:21
See also math.la.asu.edu/~fpsac01/PROGRAM/bousquet.pdf . – Max Muller Mar 10 2011 at 17:11
It is used a lot in information theory: See this.
-
+1. In fact, a couple of days ago I (re)borrowed from the library the book On Measures of Information and their Characterizations by Aczél and Daróczy, mentioned on the page Ross links to. It devotes many of its pages to... well, characterizing measures of information. For example, it discusses various known characterizations of Shannon entropy, giving theorems of the form "Shannon entropy is the unique function satisfying the following functional equations". – Tom Leinster Jan 27 2011 at 9:00
I don't know if this actually counts (since I don't know if this functional equation is actually useful...), but in the paper
R.M. Dicker, A set of independent axioms for a field and a condition for a group to be the multiplicative group of a field, Proc. London Math. Soc., 18, 1968, p. 114-124
you can find the following theorem:
Let $A$ be an abelian group (written multiplicatively). Adjoin a new element $0$ to get the set $F$. Assume that there is a function $f : F \to F$ such that for all $x,y \in F$ with $y \neq 0$:
1) $f(0) = 1$
2) $f(f(x))=x$
3) $f(f(x) f(y)) = y f(x f(y^{-1}))$
Then $F$ can be made into a field such that $A = F^*$ and $f(x) = 1 - x$.
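The three identities are easy to verify machine-side for a concrete field; here is a spot check for $f(x) = 1 - x$ in $\mathbf{F}_7$ (an illustrative script, with $0$ playing the role of the adjoined element and $y$ ranging over nonzero values):

```python
p = 7                                     # work in the field F_7
f = lambda x: (1 - x) % p
inv = lambda y: pow(y, p - 2, p)          # inverse of a nonzero element

assert f(0) == 1                          # axiom 1
for x in range(p):
    assert f(f(x)) == x                   # axiom 2
    for y in range(1, p):                 # axiom 3, y nonzero
        assert f(f(x) * f(y) % p) == y * f(x * f(inv(y)) % p) % p
```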
Here is another example from algebra which can be found in the article
Zoran Sunik, An Ideal Functional Equation with a Ring, Mathematics Magazine 77 (2004) 4, 310--313.
If $R$ is an integral domain, then the maps $f : R \to R$ solving the functional equation
$f(xz - y) f(x) f(y) + 3 f(0) = 1 + 2 f(0)^2 + f(x) f(y)$
are exactly the characteristic functions of ideals of $R$.
Also functional equations describing subrings and prime ideals are mentioned there.
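Sunik's equation can likewise be spot-checked for the characteristic function of an ideal $n\mathbf{Z} \subset \mathbf{Z}$ (an illustrative check on a small range only; the paper proves these characteristic functions are exactly the solutions):

```python
n = 3
f = lambda t: 1 if t % n == 0 else 0      # characteristic function of 3Z

# verify f(xz - y) f(x) f(y) + 3 f(0) = 1 + 2 f(0)^2 + f(x) f(y)
for x in range(-6, 7):
    for y in range(-6, 7):
        for z in range(-6, 7):
            lhs = f(x * z - y) * f(x) * f(y) + 3 * f(0)
            rhs = 1 + 2 * f(0) ** 2 + f(x) * f(y)
            assert lhs == rhs
```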
-
These are nice examples. I have no idea if the second one is at all useful, but I have seen Connes use the first one in a paper on F_1 (I think). – Qiaochu Yuan Jan 27 2011 at 14:58
I think that the theorem of Ritt referred to in Borger's answer is the following: Let $f(x)$ and $g(x)$ be in $\textbf{C}(x)$, and suppose that they satisfy the functional equation $$f(g(x))=g(f(x)).$$ Then after conjugation by a linear fractional transformation, one of the following is true. (1) $f(x)=ax^n$ and $g(x)=bx^m$. (2) $f$ and $g$ are Chebyshev polynomials. (3) $f$ and $g$ are associated to endomorphisms of an elliptic curve (often called Lattes maps). (4) $f$ and $g$ have a common iterate, i.e., $f^m=g^n$ for some $m,n\ge1$.
The short way to say this is that the solutions to $f(g(x))=g(f(x))$ in rational functions are either associated to an algebraic group ($G_m$ for (1) and (2), an elliptic curve for (3)), or $f$ and $g$ have a common iterate.
-
There is a nice (and maybe more "readable/modern") proof of Ritt's theorem by Eremenko math.purdue.edu/~eremenko/dvi/feq-e.pdf. – BY Jan 27 2011 at 17:02
Dear Joe -- That's actually not the theorem of Ritt's I had in mind, although it is one close to my heart. I've added the theorem I was thinking of to my response. – James Borger Jan 28 2011 at 0:34
Feigenbaum universal transition of a dynamical system (like the logistic map) to chaotic behaviour through period-doubling cascade involves the functional equation $$g(x)=-\frac1\lambda g(g(\lambda x))$$ with the boundary conditions $$g(0)=1,\qquad g'(0)=0,\qquad g''(0)<0.$$ The parameter $\lambda$ for which a solution exists near $x=0$ is the inverse of the Feigenbaum constant.
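Short of solving the functional equation itself, one can watch the universality appear numerically: the superstable parameters $r_n$ of the logistic map $x \mapsto rx(1-x)$ (those for which the critical point $1/2$ is periodic with period $2^n$) have successive gaps shrinking by the period-doubling constant $\delta \approx 4.669$, a companion of the constant above. A rough scan-and-bisect script (all numerical choices are ad hoc):

```python
# Estimate the period-doubling constant delta ~ 4.669 from superstable
# parameters r_n of the logistic map.
def g(r, n):
    """f_r^(2^n)(1/2) - 1/2; its zeros include the superstable parameters."""
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1.0 - x)
    return x - 0.5

def next_root(lo, n, step=1e-4, hi=3.57):
    """Scan upward from lo for a sign change of g(., n), then bisect."""
    a, fa = lo, g(lo, n)
    r = lo + step
    while r < hi:
        fr = g(r, n)
        if fa * fr < 0:
            b = r
            for _ in range(60):                  # plain bisection
                m = 0.5 * (a + b)
                if g(a, n) * g(m, n) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        a, fa = r, fr
        r += step
    raise RuntimeError("no sign change found")

rs = [2.0]                                       # superstable period 1: f(1/2) = 1/2
for n in range(1, 6):
    rs.append(next_root(rs[-1] + 1e-3, n))

deltas = [(rs[i] - rs[i - 1]) / (rs[i + 1] - rs[i]) for i in range(1, len(rs) - 1)]
# the ratios creep toward Feigenbaum's delta = 4.6692...
```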
-
I think it's mainly a problem of most Olympiad-style problems, rather than of functional equations -you may write down bizarre and unreal equations of any type, algebraic, ODE, PDE, integral, etc. Possibly, a reason of the "success" of functional equations in Olympiads is that they are somehow more elementary -no need of derivatives or even calculus, and are therefore more a suitable topic for these competitions. Yet important functional equations appear naturally everywhere; conjugation problems, to start with, in Algebra or in Dynamical Systems.
-
In enumerative combinatorics, you often end up with a functional equation for the generating function of the thing you're trying to count. These can involve compositions of the function with itself (and differentiation, exponential functions, ...) Often these don't have closed-form solutions, and you use them to get recurrences or asymptotics. Try searching for generating functions for labelled and unlabelled trees for some simple examples.
-
Good point, though the question seems to be mostly about closed-form solution. I do have to ask: do you have a simple example of obtaining recurrences this way? I'm not a specialist, and the only examples I can think of off the top of my head go the other way around: finding a functional equation from known recurrences. – Thierry Zell Jan 27 2011 at 15:25
@Thierry: it is often more natural to write down a functional equation than a recurrence. For example, one definition of the Catalan numbers immediately gives C(x) = 1 + x C(x)^2, from which not only the standard recurrence but the standard closed form can be easily derived. There are more sophisticated examples (e.g. involving Lagrange inversion). – Qiaochu Yuan Jan 27 2011 at 15:55
Another nice example is the generating function for trees $A(x) = xE(A(x))$ where $E$ is the species of sets. – Martin Rubey Jan 28 2011 at 20:17
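For instance, reading off the coefficient of $x^{n+1}$ in $C(x) = 1 + xC(x)^2$ gives the Catalan recurrence $c_{n+1} = \sum_{i=0}^{n} c_i c_{n-i}$, which can be iterated directly (a two-line illustration):

```python
# c[0] = 1 and, comparing coefficients of x^(n+1) in C(x) = 1 + x C(x)^2:
# c[n+1] = sum over i of c[i] * c[n-i]
c = [1]
for n in range(10):
    c.append(sum(c[i] * c[n - i] for i in range(n + 1)))
# c is now the sequence of Catalan numbers 1, 1, 2, 5, 14, 42, ...
```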
There is a whole theory of functional equations (or functional identities) in algebras. It was used for instance to obtain solutions to some of Herstein's problems on Lie homomorphisms. An overview of the theory is given in the book
Bresar, Chebotar, Martindale, Functional identities
The area has its own entry in the 2010 MSC (16R60).
-
It seems there's a book on this subject.
-
On page 47 of this pdf (page label 7-4) on complex dynamics, http://zakuski.math.utsa.edu/~jagy/Milnor_1991.pdf Milnor refers to the solution of $$\alpha(f(z)) = 1 + \alpha(z)$$ in Theorem 7.7. In later editions that were published as actual books, he refers to $\alpha$ as a Fatou coordinate.
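A closed-form instance, for the record: for the parabolic Möbius map $f(z) = z/(1-z)$ (so $f(z) = z + z^2 + \cdots$ near $0$), the function $\alpha(z) = -1/z$ satisfies Abel's equation $\alpha(f(z)) = 1 + \alpha(z)$ exactly, and serves as a Fatou coordinate (a quick check at a few points, real and complex):

```python
f = lambda z: z / (1 - z)       # parabolic at 0: f(z) = z + z^2 + ...
alpha = lambda z: -1.0 / z      # a Fatou coordinate for f

# alpha(f(z)) = -(1 - z)/z = -1/z + 1 = 1 + alpha(z), identically
for z in (0.3, -0.7, 2.5, 0.1 + 0.5j):
    assert abs(alpha(f(z)) - (1 + alpha(z))) < 1e-12
```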
-
I'm not sure if this is the kind of thing you want...
Let $\wp$ denote the Weierstrass $\wp$-function with respect to a lattice. Then for any integer $n$, there is a rational function $f(x)\in\mathbf{C}(x)$ (depending on the lattice) such that for all $z\in\mathbf{C}$, we have $\wp(nz)=f(\wp(z))$. This is just a disguised version of the fact that for a point $P$ on an elliptic curve in Weierstrass form, the $x$-coordinate of $nP$ depends only on the $x$-coordinate of $P$. In the "complex multiplication" case, where the ring of endomorphisms of the lattice is bigger than $\mathbf{Z}$ (in which case it's a rank two subring of $\mathbf{C}$), then such $f$'s exist for any endomorphism $n$.
I think Ritt classified the entire functions that admit algebraic functional equations.
[Added: Although it's now clear that this is not the direction the OP wanted to go in, I thought I might add a bit more detail for the record.
The paper of Ritt's I was thinking of is "Periodic functions with a multiplication theorem", Trans. Amer. Math. Soc. 23 (1922), no. 1, 16–25. He restricts himself to periodic functions $g$, but apparently he does allow them to be meromorphic. What he proves is that if a periodic meromorphic function $g$ has a functional equation of the form $g(nz)=f(g(z))$ for some $n\in\mathbf{C}$ and rational function $f$, then $|n|\geq 1$. If $|n|>1$, then $g$ is one of the following: (i) a linear function of a function of the form $\cos(az+b)$, (ii) a linear function of a function of the form $\exp(az)$, (iii) a function of the form $\wp(z+a)$, $\wp'(z+a)$, $\wp''(z+a)$, or $\wp'''(z+a)$. If $|n|=1$, then $g$ must be one of a short list of rational expressions in exponential functions and derivatives of $\wp$-functions.
(NB I haven't thought about the argument. I'm just copying from his paper. He also gives more detail about which possibilities occur when.)
Interestingly, he also mentions a result of Poincare that for any rational $f$ satisfying $f(0)=0$ and $|f'(0)|>1$, there is a meromorphic function $g$ with functional equation $g(f'(0)z)=f(g(z))$.
And last, from what I remember, the theory of Drinfeld modules gives examples of analytic functions (such as the Carlitz exponential) over local fields of nonzero characteristic with similar functional equations. It would be interesting to prove a Ritt-like converse result in that case.]
-
This is a nice example, but it's another description of something highly structured: an algebraic group. Maybe I am more annoyed by the lack of structure in these questions than I am by the fact that they are functional equations... – Qiaochu Yuan Jan 27 2011 at 9:26
I've just been poking around and found that Ritt has other papers with similar themes, such as "Meromorphic functions with addition or multiplication theorems", Trans. Amer. Math. Soc. 29 (1927), no. 2, 341–360. – James Borger Jan 28 2011 at 0:45
Many problems in optimization (such as finding exact constants or maximizers to inequalities for linear operators) are equivalent to finding all solutions to associated Euler-Lagrange equations. Of course, these functional equations will involve the operator we started with, so they aren't elementary enough to be a satisfactory answer. However, sometimes these problems are solved by showing that the solutions (or related functions) must satisfy more elementary looking functional equations. As an example, here are some elementary looking functional equations: Find all complex-valued measurable functions that satisfy (almost everywhere) the equations:
1) $f(x)f(y) = F(|x|^2 +|y|^2, x+y)$ for $x,y \in R^2$,
2) $g(x)g(y) = G(|x|+|y|, x+y)$ for $x,y \in R^3$,
3) $h(x)h(y)h(z) = H(x^2 +y^2+z^2, x+y+z)$ for $x,y,z \in R$.
These and similar problems arise in the work of D. Foschi on maximizers to Strichartz inequalities in low dimensions.
-
A clone (in universal algebra) is a set of functions on a base set A, graded by arity, which is closed under composition and contains the n-ary projection functions $p_i(\bar a) = a_i$. Determining the structure of this clone is like finding out all the functional equations that can be satisfied on A using members of the clone.
If one can determine some such relations as whether one function distributes over another, this sometimes leads to normal forms. One can then build term-rewriting systems to simplify expressions or show some terms are equal to others (unification). In practice, many problems that we want to solve (logic minimization, satisfaction) turn out to be time- or space-intractable, if not undecidable.
I submit that clone theory is the study of a suitable generalization of your question. You might look at some recent papers to see if the field is of further interest to you.
Gerhard "Ask Me About System Design" Paseman, 2011.01.26
-
Since the question is about "really contrived functional equations", you should define what you count as "really contrived". There are some important classes of functional equations.
• Difference equations. They are discrete difference analogs of differential equations
• Iterative equations. They usually can be reduced to difference equations
• Delay differential equations. The combinations of difference and differential equations.
These classes arise in many applied areas, which is why they deserve to be researched. Equations outside these classes are indeed used less frequently in applied mathematics, but they may still be interesting to research. Mathematicians research what they perceive as interesting; some research things that even in theory cannot be used in applied fields.
I also want to add some considerations about the usefulness of solving completely random contrived equations. Such equations may lead to interesting insights into special functions and uncover interesting connections between them. Of course, mathematicians usually want to find a general method applicable not only to a particular equation but to a large class.
-
http://mathoverflow.net/questions/57465/can-we-unify-addition-and-multiplication-into-one-binary-operation-to-what-exten/57489
## Can we unify addition and multiplication into one binary operation? To what extent can we find universal binary operations?
The question is the extent to which we can unify addition and multiplication, realizing them as terms in a single underlying binary operation. I have a number of questions.
1. Is there a binary operation $n\star m$ on the integers $\mathbb{Z}$ such that both addition $+$ and multiplication $\cdot$ can be expressed as specific composition expressions using only $\star$? A more relaxed version of the question would allow constants into the expressions; for example, perhaps we can arrange that $a+b=0\star(a\star b)$ and $ab=1\star(a\star b)$.
One obvious idea is to try somehow to use a pairing function, so that $a\star b$ codes up both $a$ and $b$ into one number, which then can appear as one argument to $\star$, whose other argument will signal whether or not addition or multiplication is desired. But there is the difficulty of making these two tasks not interfere with one another.
Note that if we allow a trinary operation, then we can easily do it simply by defining $\star(0,a,b)=a+b$ and $\star(1,a,b)=ab$ and extending this arbitrarily. Can we get rid of the need for parameters here?
2. Can we prove that there is no associative such binary operation $\star$ on the integers, from which both $+$ and $\cdot$ are expressible by terms?
3. Does every ring have such a binary operation $\star$ from which both addition $+$ and multiplication $\cdot$ are expressible as $\star$-terms? Does it matter if the ring is finite or infinite?
4. Can every countable family $F$ of finitary operations on an infinite set $Z$ be unified by a single binary operation $\star$ on $Z$, so that every function in $F$ is the same function as that induced by some $\star$-term?
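The trinary workaround noted above (before question 2) is trivial to realize; a minimal sketch, with the arbitrary extension taken to be constantly $0$:

```python
# The trinary operation from the remark above: star3(0,a,b) is addition,
# star3(1,a,b) is multiplication, and the extension to other first
# arguments is arbitrary (here: 0).

def star3(s, a, b):
    if s == 0:
        return a + b
    if s == 1:
        return a * b
    return 0

assert star3(0, 4, 7) == 11 and star3(1, 4, 7) == 28
```

The whole difficulty of the question is in collapsing this extra "selector" argument into a genuinely binary operation.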
This question arose from what I found interesting about a recent question on math.SE, asked by someone who was interested in the phenomenon that logical and and or are expressible from nand and also from nor.
-
Very interesting question! (By some coincidence I was wondering about a similar thing myself too...) – ex falso quodlibet Mar 5 2011 at 16:09
In the introductory course I was TA'ing last semester we gave an exercise to define $\le$, $+$ and $\cdot$ from $a\mod b$ (when everything is zero when taken mod 0). – Asaf Karagila Mar 5 2011 at 16:46
Although most probably unrelated to your question, something of this ilk happens in the context of Poisson algebras. In a Poisson algebra there are two operations: a commutative, associative multiplication and a Lie bracket, subject to a compatibility condition. It is possible to unify them into one product whose symmetric part is the commutative multiplication and the antisymmetric part is the Lie bracket. – José Figueroa-O'Farrill Mar 5 2011 at 18:18
There is a notion of universal term. When I get to my copy of ALV, I will look it up. I remember it as an exercise as well as work done by George McNulty. Gerhard "Ask Me About Clone Theory" Paseman, 2011.03.05 – Gerhard Paseman Mar 5 2011 at 18:43
Asaf, indeed, one can easily define the operations by formulas of first order logic as in your exercise. In my question, however, I am seeking a binary operation having mere terms that give rise to the given functions. – Joel David Hamkins Mar 5 2011 at 18:56
## 8 Answers
The answer to Q4 is yes. Let $X$ be any infinite set. Wlog $X= Z\times\mathbb N$, where there is a bijection $i:X\to Z\times \lbrace0\rbrace$. For $x=(z,n)$ write $x+1$ for $(z,n+1)$. [Typo corrected.]
You are given countably many finitary functions $g_1, g_2, \ldots$. We may assume there is a pairing function $x*y$ among them, so we may as well assume that all of them are binary. (Due to Sierpinski, I think. E.g., $g(x,y,z) = h(x*(y*z))$ for some unary $h$.)
Now there is a binary function $f$ satisfying the following for all $x,y\in X$:
1. $f(x,x) = x+1$.
2. $f(x, x+1) = i(x)$.
3. $f(i(x)+k,i(y)) = g_k(x,y)$ for $k=1,2,\ldots$.
Clearly $f$ generates the functions $x+1$, $i(x)$, and $g_k$ for all $k$.
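The three clauses can be checked on a toy model. The sketch below is my own illustration, not part of the answer: it takes $Z=\mathbb{Z}$, represents elements of $X$ as pairs $(z,n)$, uses a simplified (non-bijective) $i$ that just forgets the level, and takes two sample binary operations $g_1={+}$ and $g_2={\times}$ acting on the $Z$-coordinate. It then verifies that the $f$-terms recover $g_k$.

```python
# Toy model of the construction above (illustration only).  Elements of
# X are pairs (z, n) in Z x N; "x + 1" bumps the level n.  The injection
# i is simplified to "forget the level" (a genuine bijection onto
# Z x {0} would need a pairing of (z, n) into Z, omitted here).

GS = [lambda a, b: (a + b, 0),      # sample g_1 = addition
      lambda a, b: (a * b, 0)]      # sample g_2 = multiplication

def i(x):
    return (x[0], 0)                # simplified injection into level 0

def f(x, y):
    zx, nx = x
    zy, ny = y
    if x == y:                           # rule 1: f(x, x) = x + 1
        return (zx, nx + 1)
    if y == (zx, nx + 1):                # rule 2: f(x, x+1) = i(x)
        return i(x)
    if 1 <= nx <= len(GS) and ny == 0:   # rule 3: f(i(x)+k, i(y)) = g_k(x, y)
        return GS[nx - 1](zx, zy)
    return (0, 0)                        # arbitrary elsewhere

def succ(u):                        # the term f(u, u), i.e. u + 1
    return f(u, u)

def embed(u):                       # the term f(u, f(u, u)), i.e. i(u)
    return f(u, succ(u))

def g_term(k, x, y):                # the term f(succ^k(i(x)), i(y)) = g_k(x, y)
    u = embed(x)
    for _ in range(k):
        u = succ(u)
    return f(u, embed(y))

assert g_term(1, (3, 0), (5, 0)) == (8, 0)    # g_1 = addition
assert g_term(2, (3, 0), (5, 0)) == (15, 0)   # g_2 = multiplication
```

The case distinctions in `f` never collide, because the level coordinate keeps rules 1–3 on disjoint inputs.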
-
I forgot to mention that for finite sets there is a 1935 paper of Donald L. Webb (Zentralblatt 0012.00103), pointed out to me by Agnes Szendrei, which shows that there is a single binary operation generating all functions, a generalization of the Sheffer stroke: $f(x,x) = x+1$ modulo $n$, and $f(x,y) = 0$ otherwise. – Goldstern May 12 2011 at 17:25
Great! This seems very clear. (But I think you mean $(z,n+1)$ in the first paragraph.) I am inclined to accept this answer as the most sweeping general solution provided. – Joel David Hamkins May 13 2011 at 13:27
Thanks for the correction. – Goldstern May 13 2011 at 14:43
Reference: Martin Goldstern, A single binary function is enough. Contributions to general algebra 20, 35–37, Heyn, Klagenfurt, 2012. Abstract: (1) Every countable clone (on any base set) is contained in a 1-generated clone. (2) For a countable base set, the local clone generated by a single function f will be the full clone of all operations (this holds for many operations f). (3) But on an uncountable base set a finitely or countably generated local clone will never contain all operations. See Math Reviews MR2908433. – Goldstern Dec 3 at 15:45
With constants, the answer to Q1 is "yes". Let me work on the natural numbers ${\bf N} = \{0,1,2,\ldots\}$ for notational simplicity. The idea is to identify ${\bf N}$ with the shifted naturals $4 + {\bf N}$ using the shift function $f(n) := n+4$ and its partial inverse $g(n) := \hbox{max}(n-4,0)$, and by using some pairing function $\pi: {\bf N}^2 \to {\bf N}$ that is inverted by coordinate functions $c_1, c_2: {\bf N} \to {\bf N}$ (thus $(n,m) = (c_1(\pi(n,m)), c_2(\pi(n,m)))$ for all $n,m$). The point is that the shift creates some "room" in which to store the unary functions. More precisely, define
$$0 \star n := f(n)$$ $$1 \star n := g(n)$$ $$2 \star n := c_1(n)$$ $$3 \star n := c_2(n)$$
for any n, and
$$n \star m := \pi( g(n) + g(m), g(n) * g(m) )$$
for $n,m \geq 4$. Then
$$n + m = c_1( f(n) \star f(m) )$$ and $$n * m = c_2( f(n) \star f(m) )$$ and so one can write both multiplication and addition in terms of composition operations.
Clearly one can make the same idea work on the integers after placing them in one-to-one correspondence with the natural numbers (which distorts the addition and multiplication operations, but no actual properties of these operations were needed in the above construction).
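This construction can be run directly. The sketch below is one concrete instantiation (my choices, not part of the answer: the Cantor pairing function for $\pi$, and the value $0$ for the cases left undefined above):

```python
# One instantiation of the construction above: shift f(n) = n + 4 with
# partial inverse g, and Cantor pairing for pi with inverses c1, c2.

def pair(a, b):                          # Cantor pairing N^2 -> N
    return (a + b) * (a + b + 1) // 2 + b

def unpair(p):                           # its inverse (c1(p), c2(p))
    w = int(((8 * p + 1) ** 0.5 - 1) // 2)
    while (w + 1) * (w + 2) // 2 <= p:   # guard against float rounding
        w += 1
    while w * (w + 1) // 2 > p:
        w -= 1
    b = p - w * (w + 1) // 2
    return (w - b, b)

def star(n, m):
    if n == 0: return m + 4                  # 0 * m = f(m)
    if n == 1: return max(m - 4, 0)          # 1 * m = g(m)
    if n == 2: return unpair(m)[0]           # 2 * m = c1(m)
    if n == 3: return unpair(m)[1]           # 3 * m = c2(m)
    if n >= 4 and m >= 4:                    # n * m = pi(g(n)+g(m), g(n)g(m))
        a, b = n - 4, m - 4
        return pair(a + b, a * b)
    return 0                                 # arbitrary elsewhere

# n + m = 2 * ((0 * n) * (0 * m))  and  n * m = 3 * ((0 * n) * (0 * m)):
def add(n, m): return star(2, star(star(0, n), star(0, m)))
def mul(n, m): return star(3, star(star(0, n), star(0, m)))

assert add(7, 5) == 12 and mul(7, 5) == 35
```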
I would imagine that some clever ad hoc trick would allow one to simulate constants such as 0,1,2,3, for instance by designing the $\star$ operation so that $n \star n$ is usually 0, though I don't see how to simulate four separate constants without using branching, which presumably is not allowed in this exercise.
UPDATE: OK, I found an ad hoc trick to encode constants, again working on the natural numbers for simplicity. The first observation is that one never wants to have a fixed point $n$ where $n \star n = n$, as then any binary operation formed by composition with $\star$ must always map this fixed point to itself. So we do the next best thing, which is to enforce
$$n \star n := 0$$ for non-zero n, and $$0 \star 0 := 1$$ (say). So $n \star n$ is always going to be either 0 or 1. Furthermore, if $n \star n$ is zero, then $(n \star n) \star (n \star n)$ is one, and if $n \star n$ is one, then $(n \star n) \star (n \star n)$ is zero. Hence if we then enforce $$0 \star 1 = 1 \star 0 = 2$$ then we have the identity $$((n \star n) \star (n \star n)) \star (n \star n) = 2$$ for all n, which allows us to define the constant 2 as a composition word from an arbitrary input n. If we then enforce $$0 \star 2 = 1 \star 2 := 3$$ and $$2 \star 0 = 2 \star 1 := 4$$ then we can define the constants 3 and 4 also, since $3 = (n \star n) \star 2$ and $4 = 2 \star (n \star n)$. If we then enforce $$2 \star 3 := 5; 2 \star 4 := 6; 3 \star 2 := 7; 3 \star 4 := 8; 4 \star 2 := 9; 4 \star 3 := 10$$ then we have now made all the constants from 5 to 10 definable, with no constraints as yet as to how $\star$ acts on these constants, other than to annihilate the diagonal ($5 \star 5 = 0$, etc.).
Now we need shift operators, say $f(n) := n+11$ and $g(n) := \max(n-11,0)$, to make room for all the constants that have been created. Encoding $g$ is easy, e.g. we can enforce $$5 \star n := g(n)$$ for all $n$, as this does not conflict with the existing requirement that $5 \star 5 = 0$. Encoding $f$ is slightly trickier. We can write $f$ as a composition $f = h \circ k$, where $k: {\bf N} \to {\bf N}$ is an injective "Hilbert's hotel" map that maps $6$ to $0$ and avoids $7$ in the range, and $h: {\bf N} \to {\bf N}$ is such that $h(g(n)) = n+11$ for all $n$, and $h(7)=0$. Then we can enforce $$6 \star n := k(n)$$ $$7 \star n := h(n)$$ and $f(n)$ is then $f(n) = 7 \star ( 6 \star n )$.
Finally, we can encode pairing and coordinate functions as before: $$8 \star n := c_1(n)$$ $$9 \star n := c_2(n)$$ $$n \star m := \pi( g(n) + g(m), g(n) * g(m) )$$ for $n,m \geq 11$, where we choose the pairing function $\pi$ to only take values $11$ and greater to avoid collision. Then we can recover addition and multiplication as before.
-
Very good. Your new edited version seems like it may provide an attack on questions 3 and 4 as well. – Joel David Hamkins Mar 5 2011 at 19:38
It seems to me that this answer combined with mine will do the job. Terry showed how to get the constant-free operation that encodes two binary operations, and we only need to encode a single binary operation (the partial application of a PCA). – Andrej Bauer Mar 6 2011 at 8:57
Your definition of $n \star n$ for $n \ge 11$ seems to be inconsistent: you want it to be both $0$ and $\pi(g(n)+g(m),g(n)g(m)) \ge 11$. You can probably get around this by encoding more functions with more constants (such as $n\mapsto 2n$ and $m\mapsto 2m+1$, and redefining $\pi$ accordingly), using the same techniques that you've already demonstrated. – zeb May 13 2011 at 16:59
A partial combinatory algebra (PCA) is a set $A$ with a partial binary operation $\bullet$, called application, which is combinatorially complete in the sense that "every polynomial is represented". More precisely, given a term $t(x_1, \ldots, x_n)$ built from the elements of $A$ and variables $x_1, \ldots, x_n$ using the application operation $\bullet$, there exists an element $a \in A$ such that, for all $b_1, \ldots, b_n \in A$, $$a \bullet b_1 \bullet \cdots \bullet b_n \simeq t(b_1, \ldots, b_n),$$ where application associates to the left, $u \bullet v \bullet w = (u \bullet v) \bullet w$, and $\simeq$ is Kleene's equality: if one side is defined then so is the other and they are equal.
For example, in a combinatory algebra there is always an element $a$ such that $a \bullet b \simeq b \bullet b$ for all $b$.
An example of a combinatory algebra is a model of the untyped $\lambda$-calculus, since the element representing $t(x_1, \ldots, x_n)$ is simply $\lambda x_1 x_2 \ldots x_n . t(x_1, \ldots, x_n)$.
A standard theorem about PCA's states that $(A, {\bullet})$ is combinatorially complete if, and only if, it contains elements $k, s \in A$ such that $k \bullet a \bullet b = a$ and $s \bullet a \bullet b \bullet c \simeq (a \bullet c) \bullet (b \bullet c)$, for all $a, b, c \in A$.
The set of natural numbers $\mathbb{N}$ may be equipped with the structure of a PCA, known as Kleene's first algebra, if we define $n \bullet m = \phi_n(m)$, where $\phi$ is a standard enumeration of partial recursive maps. The representable maps in Kleene's first algebra are precisely the partial recursive ones. Therefore, for any binary partial recursive map $f : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ there exists a natural number $c$ such that $c \bullet m \bullet n \simeq f(m, n)$ for all $m, n \in \mathbb{N}$.
This answers your question positively and quite a bit more generally, i.e., there is an operation on $\mathbb{N}$ which generates all partial computable maps (of all arities), provided you are willing to consider partially defined operations. If we want to represent a countable family of possibly non-computable maps, then a suitable variation of Kleene's first algebra over an oracle will accomplish that.
If you are willing to consider non-computable maps we can use a cheap trick to make our operation total. Define a new operation $\star$ on $\mathbb{N}$ by $$m \star n = \begin{cases} m \bullet n & \text{if $m \bullet n$ defined}\\ 42 & \text{otherwise}\end{cases}$$ which has the following property. Given any total computable map $f : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ there is a number $c$ such that $c \star m \star n = f(m,n)$ for all $m, n \in \mathbb{N}$. To see this, just take the $c$ which works for $\bullet$. Since $c \bullet m \bullet n$ is always defined we get $c \star m \star n = c \bullet m \bullet n = f(m, n)$. Note however that $\star$ also represents some non-computable maps.
Partial combinatory algebras are important in certain branches of computability and in theory of programming languages. Unfortunately, we know almost nothing about the general structure of PCAs.
-
Dang it! :-) I was just setting out to write something about combinatory logic. :-) – Todd Trimble Mar 5 2011 at 20:01
Andrej, Thanks! I like this answer very much. But you still seem to need the constants, so it doesn't answer the strong version of question 1. But perhaps one can omit the constants somehow with a complicated term. Another point is that it appears that we can relativize your idea to have an oracle, and thereby get an answer to question 3 in the case of countable rings (using the ring operations as an oracle). With a view to question 4, is there any reason to expect such operations on uncountable sets? – Joel David Hamkins Mar 5 2011 at 20:03
Every powerset is a (total) combinatory algebra because it is a model of $\lambda$-calculus, so at least for powers of $2$ we get a similar result. The real numbers are a power of two (in the insane world of classical mathematics with axiom of choice). – Andrej Bauer Mar 5 2011 at 20:11
Ok, I will edit my answer so that we don't need constants. – Andrej Bauer Mar 5 2011 at 20:12
This will take a while because I also have to put a six-year old to bed (and six-year olds specialize in not going to bed). – Andrej Bauer Mar 5 2011 at 20:32
The answer to Q2 is also yes, at least if no constants are allowed. Suppose that addition and multiplication can be modeled by some words involving an associative operation $\star$. If we then let $A_k := 1 \star \ldots \star 1$ be the $k$-fold iterated $\star$ of $1$, we thus see that $A_1=1$ and $A_k + A_l = A_{ak + bl}$ and $A_k \cdot A_l = A_{ck+dl}$ for some natural numbers $a,b,c,d$ and all $k,l$ (reflecting the number of times $n,m$ appear in the words for $n+m$ and $n \cdot m$ respectively). Thus addition and multiplication starting from the generator 1 have simultaneously been encoded as additive operations, which can be shown to be impossible by a variety of means (for instance, one can use the sum-product theorem of Erdos and Szemeredi to rule this out, although this is something of a sledgehammer and I am sure that there are much more elementary ways to proceed here).
The situation seems more interesting with constants, though. My guess is that because you did not require the $\star$ operation to be invertible in any way, one could still pull off the shift-encoding trick while respecting associativity, though this would require some more ad hoc contortions.
-
Here's a nice simple way for this case - $A_1\cdot A_1=A_1$, so c+d=1, so we either have nm=n for all n,m, or nm=m for all n,m! – Harry Altman May 12 2011 at 22:27
The exercise in Algebras, Lattices, Varieties Volume I by McKenzie, McNulty, and Taylor may be of interest: there is a term t(x,y,z) in three variables in the language of one binary symbol which is universal in the sense that, given an infinite set A and a ternary operation G(x,y,z) on A, there is a binary operation on A such that, when t is constructed out of this binary operation, G(x,y,z) = t(x,y,z) for all triples x,y,z from A. Note that the term t specified is independent of A and G. I do not know if universality is defined for tuples of terms (I would expect that serious side conditions need to be present in order for a tuple of terms to represent uniformly a tuple of operations of the same arity, thus making it not so universal).
For the question of whether a given countable family of operations can be built from one operation, this amounts to asking whether the clone generated by the family of operations is generated by one operation. Given that there are uncountably many clones on a 3-element set, I would say that the question is of interest but unlikely to be solvable uniformly or even nicely. By this I mean: if Q(F) is some way of expressing that the family F generates a finitely generated clone, I can imagine necessary conditions implied by Q(F) and sufficient conditions which could imply Q(F), but nothing which would be equivalent. (The question of whether F is contained in a finitely generated clone might be easier, but I do not know how much easier.)
You probably have better access to references on clone theory than I; you might try pursuing those first before asking me for a further opinion.
Gerhard "Ask Me About System Design" Paseman, 2011.03.05
-
I suspect one can answer Q1 by finding terms without constants by building on the universal term t. For example, if G(a,b,c) returns b+c when a=b and otherwise returns b*c, then addition could be t(a,a,b) and multiplication is close to ( but not quite) t(t(a,a,a),a,b). Perhaps someone quicker than I can find the appropriate arguments in terms of t. Even if I found it though, I can't yet tell you what the binary operation underlying t is. Gerhard "Ask Me About System Design" Paseman, 2011.03.05 – Gerhard Paseman Mar 5 2011 at 21:37
Step 1: Get some room by defining first the diagonal of $*$ (which is mapped to numbers divisible by $5$):
• $z*z := 5z$ for $z > 0$ and $z*z := 5z-5$ for $z \le 0$.
Now we define the meaning of $x*y$ for $x\ne y$ depending on $x \bmod 5$ and $y \bmod 5$.
Step 2: We recode all integers in different ways (to signal if we want to add or multiply)
• $z * (z*z) := 5z+1$ (special $x*y$ for $y = 0 \bmod 5$ and $x \ne y$).
• $(z*z) * (z*(z*z)) := 5z+2$ (special $x*y$ for $x = 0 \bmod 5$ and $y = 1 \bmod 5$).
• $(z*z) * ((z*z) * (z*(z*z))) := 5z+3$ (special $x*y$ for $x = 0 \bmod 5$ and $y = 2 \bmod 5$).
Step 3: We define the addition and the multiplication
• $(y*(y*y)) * ((z*z) * (z*(z*z))) := y + z$ (special $x*y$ for $x = 1 \bmod 5$ and $y = 2 \bmod 5$).
• $(y*(y*y)) * ((z*z) * ((z*z) * (z*(z*z)))) := y \cdot z$ (special $x*y$ for $x = 1 \bmod 5$ and $y = 3 \bmod 5$).
The still undefined values of $x*y$ can be assigned arbitrarily. The last two definitions give the formulas for addition and multiplication.
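The case analysis is concrete enough to execute. Below is one rendering of it (my choice: the still-undefined values of $x*y$ are set to $0$), checking the two closing formulas for addition and multiplication:

```python
# One rendering of the mod-5 construction above.  Python's % returns a
# nonnegative residue for negative integers, matching "mod 5" here.

def d(z):                                    # the diagonal z*z (step 1)
    return 5 * z if z > 0 else 5 * z - 5

def star(a, b):
    if a == b:
        return d(a)
    if b == d(a):                            # z * (z*z) = 5z+1
        return 5 * a + 1
    if a % 5 == 0 and b % 5 == 1:            # (z*z) * (z*(z*z)) = 5z+2
        z = (b - 1) // 5
        if a == d(z):
            return 5 * z + 2
    if a % 5 == 0 and b % 5 == 2:            # (z*z) * ((z*z)*(z*(z*z))) = 5z+3
        z = (b - 2) // 5
        if a == d(z):
            return 5 * z + 3
    if a % 5 == 1 and b % 5 == 2:            # the addition case
        return (a - 1) // 5 + (b - 2) // 5
    if a % 5 == 1 and b % 5 == 3:            # the multiplication case
        return ((a - 1) // 5) * ((b - 3) // 5)
    return 0                                 # still undefined: arbitrary

def enc1(z): return star(z, star(z, z))                  # encodes z as 5z+1
def enc2(z): return star(star(z, z), enc1(z))            # encodes z as 5z+2
def enc3(z): return star(star(z, z), enc2(z))            # encodes z as 5z+3

def add(y, z): return star(enc1(y), enc2(z))
def mul(y, z): return star(enc1(y), enc3(z))

assert add(-3, 7) == 4 and mul(-3, 7) == -21
```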
-
There is still room (and you can create new one with repeating step 1 for another prime) for other operations, so I see no reason why this approach couldn't encode countably many binary operations on $\mathbb{Z}$ answering a special case of your Q4. – Someone May 13 2011 at 16:28
There is an affinity of this idea with Goldstern's solution. If you don't work mod $5$, but replace $\mathbb{Z}$ with $\mathbb{Z}\times\mathbb{Z}$, you'll be able to accommodate infinitely many functions. – Joel David Hamkins May 13 2011 at 16:38
@Joel: Or if I run out of space, I just increase the modulus. – Someone May 13 2011 at 16:41
You may have (in principle) solved the ALV exercise I mentioned in my post. Your term bears some resemblance to the universal term they have. Gerhard "Ask Me About System Design" Paseman, 2011.05.13 – Gerhard Paseman May 13 2011 at 16:49
@Gerhard: I guess you are right. – Someone May 13 2011 at 17:04
Let $R$ be a ring and suppose $j\colon R \to R$ is an injection such that $j(a) \neq a,a+1$ for all $a \in R$. (Such an injection exists iff $R$ has more than two elements: take $j$ to be a translation $x \mapsto x+c$.)
Define $a \star a:=j(a)$ and $j(a)\star b:=a+b$. Since $j(a) \neq a$, this is well defined and $(a \star a) \star b=a+b$.
Now define $(j(a)+1) \star b :=ab$. Again this is well-defined, and $ab=((a \star a)\star 1)\star b$.
-
If I carry out your idea in the integers, using $j(a)=a+2$, then I seem to have $2\star 2=j(2)=4$ by your first requirement, but also $2\star 2=j(0)\star 2=0+2$. So your definition is not well-defined. – Joel David Hamkins Mar 5 2011 at 18:51
What if you take (for the integers) $j$ to be $a \mapsto a^4+2$? Then this $j$ has the property that $\operatorname{im} j + 1$ is disjoint from $\operatorname{im} j$, which seems to cause the trouble. – Guntram Mar 5 2011 at 19:40
That alone won't fix the problem, since your definitions will require that $j(a)\star j(a)$ is both $j(j(a))$ and also $a+j(a)$. – Joel David Hamkins Mar 5 2011 at 19:51
In a sense, addition is a universal operation: as discussed under Hilbert's Thirteenth Problem, any operation $g \colon \mathbb R^n \to \mathbb R$ can be written as a finite combination of several unary functions and the binary operation of addition.
-
Thanks Gerald. I am aware of this theorem, which uses the combination of functions to implement a pairing function---some of the unary functions stretch the reals to make $0$s in every other digit, and then adding them combines two reals into one real whose digits are the two interleaved, and another unary function carries out the desired function on these codes. But this method doesn't answer my question unless you can get it all down to one binary function. – Joel David Hamkins Mar 5 2011 at 18:44
http://math.stackexchange.com/questions/31725/what-does-matrix-multiplication-actually-mean?answertab=oldest
# What does matrix multiplication actually mean?
If I multiply two numbers, say 3 and 5, I know it means add 3 to itself 5 times or add 5 to itself 3 times.
But if I multiply two matrices, what does it mean?
I mean, I can't think of it in terms of repetitive addition.
What is the intuitive way of thinking about multiplication of matrices?
-
Do you think of $3.9289482948290348290 \times 9.2398492482903482390$ as repetitive addition? – lhf Apr 8 '11 at 12:43
It depends on what you mean by 'meaning'. – Mitch Apr 8 '11 at 12:58
I believe a more interesting question is why matrix multiplication is defined the way it is. – M.B. Apr 8 '11 at 13:01
I can't help but read your question's title in Double-Rainbow-man's voice... – Adrian Petrescu Apr 8 '11 at 16:04
## 8 Answers
Matrix "multiplication" is the composition of two linear functions. The composition of two linear functions is a linear function.
If a linear function is represented by $A$ and another by $B$, then $AB$ is their composition; $BA$ is their reverse composition.
That's one way of thinking of it, and it explains why matrix multiplication is the way it is instead of piecewise multiplication.
-
Actually, I think it's the only (sensible) way of thinking of it. Textbooks which only give the definition in terms of coordinates, without at least mentioning the connection with composition of linear maps, (such as my first textbook on linear algebra!) do the student a disservice. – wildildildlife Apr 8 '11 at 16:26
Learners, couple this knowledge with mathinsight.org/matrices_linear_transformations . It may save you a good amount of time. :) – naxa Jan 30 at 16:36
The short answer is that a matrix corresponds to a linear transformation. To multiply two matrices is the same thing as composing the corresponding linear transformations.
The following is covered in a text on linear algebra (such as Hoffman-Kunze):
This makes most sense in the context of vector spaces over a field. You can talk about vector spaces and (linear) maps between them without ever mentioning a basis. When you pick a basis, you can write the elements of your vector space as a sum of basis elements with coefficients in your base field (that is, you get explicit coordinates in terms of, for instance, real numbers). If you want to calculate something, you typically pick bases for your vector spaces. Then you can represent your linear map as a matrix with respect to the given bases, with entries in your base field (see e.g. the above mentioned book for details as to how). We define matrix multiplication precisely so that it corresponds to composition of the linear maps.
Added (Details on the representation of a linear map by a matrix). Let $V$ and $W$ be two vector spaces with ordered bases $e_1,\dots,e_n$ and $f_1,\dots,f_m$ respectively, and $L:V\to W$ a linear map.
First note that since the $e_j$ generate $V$ and $L$ is linear, $L$ is completely determined by the images of the $e_j$ in $W$, that is, $L(e_j)$. Explicitly, note that by the definition of a basis any $v\in V$ has a unique expression of the form $a_1e_1+\cdots+a_ne_n$, and $L$ applied to this pans out as $a_1L(e_1)+\cdots+a_nL(e_n)$.
Now, since $L(e_j)$ is in $W$, it has a unique expression of the form $b_1f_1+\dots+b_mf_m$, and it is clear that the value of $e_j$ under $L$ is uniquely determined by $b_1,\dots,b_m$, the coefficients of $L(e_j)$ with respect to the given ordered basis for $W$. In order to keep track of which $L(e_j)$ the $b_i$ are meant to represent, we write (abusing notation for a moment) $m_{ij}=b_i$, yielding the matrix $(m_{ij})$ of $L$ with respect to the given ordered bases.
This might be enough to play around with why matrix multiplication is defined the way it is. Try for instance a single vector space $V$ with basis $e_1,\dots,e_n$, and calculate the corresponding matrix of the square $L^2=L\circ L$ of a single linear transformation $L:V\to V$, or say, calculate the matrix corresponding to the identity transformation $v\mapsto v$.
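As a small numerical companion to the exercise, the following plain-Python check (with two arbitrary sample $2\times 2$ matrices) confirms that applying two linear maps in succession agrees with applying the single map whose matrix is the product:

```python
# Check that composing linear maps corresponds to multiplying matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1, 2], [3, 4]]      # matrix of a linear map L1 (sample values)
B = [[0, 1], [-1, 2]]     # matrix of a linear map L2 (sample values)

v = [5, -2]
# applying L2 and then L1 equals applying the map with matrix A.B:
assert apply(A, apply(B, v)) == apply(matmul(A, B), v)
```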
-
Apart from the interpretation as the composition of linear functions (which is, in my opinion, the most natural one), another viewpoint is on some occasions useful.
You can view them as something akin to a generalization of elementary row/column operations. If you compute $AB$, then the coefficients in the $j$-th row of $A$ tell you which linear combination of the rows of $B$ you should compute and put into the $j$-th row of the new matrix.
Similarly, you can view $AB$ as taking linear combinations of the columns of $A$, with the coefficients prescribed by the matrix $B$.
With this viewpoint in mind you can easily see that, if you denote by $\vec{a}_1,\dots,\vec a_k$ the rows of the matrix $A$, then the equality $$\begin{pmatrix}\vec a_1\\\vdots\\\vec a_k\end{pmatrix}B= \begin{pmatrix}\vec a_1B\\\vdots\\\vec a_kB\end{pmatrix}$$ holds. (Of course, you can obtain this equality directly from the definition or by many other methods. My intention was to illustrate a situation when familiarity with this viewpoint could be useful.)
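A quick numerical sanity check of this row viewpoint (a sketch of my own; `matmul`, `scale`, and `add` are ad-hoc helpers):

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def scale(c, row):
    return [c * x for x in row]

def add(r1, r2):
    return [x + y for x, y in zip(r1, r2)]

A = [[2, 0], [1, 3]]
B = [[1, 4], [5, 6]]
AB = matmul(A, B)

# Row i of A·B is the linear combination of the rows of B with
# coefficients taken from row i of A:
assert AB[0] == add(scale(2, B[0]), scale(0, B[1]))   # 2·(1,4) + 0·(5,6)
assert AB[1] == add(scale(1, B[0]), scale(3, B[1]))   # 1·(1,4) + 3·(5,6)
```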
-
I came here to talk about composition of linear functions, or elementary row/column operations. But you already said it all. Great job, and +1; you deserve more. – wok Apr 8 '11 at 21:01
Asking why matrix multiplication isn't just componentwise multiplication is an excellent question: in fact, componentwise multiplication is in some sense the most "natural" generalization of real multiplication to matrices: it satisfies all of the axioms you would expect (associativity, commutativity, existence of identity and inverses (for matrices with no 0 entries), distributivity over addition).
The usual matrix multiplication in fact "gives up" commutativity; we all know that in general $AB \neq BA$, while for real numbers $ab = ba$. What do we gain? Invariance with respect to change of basis. If $P$ is an invertible matrix,
$$P^{-1}AP + P^{-1}BP = P^{-1}(A+B)P$$ $$(P^{-1}AP) (P^{-1}BP) = P^{-1}(AB)P$$ In other words, it doesn't matter what basis you use to represent the matrices A and B, no matter what choice you make their sum and product is the same.
It is easy to see by trying an example that the second property does not hold for multiplication defined component-wise. This is because the inverse of a change of basis $P^{-1}$ no longer corresponds to the multiplicative inverse of $P$.
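Such an example is easy to write down; the following is my own sketch (`inv2` hard-codes the 2×2 inverse, and `hadamard` is the componentwise product used for comparison):

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def inv2(m):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def hadamard(a, b):
    # Componentwise product, for comparison.
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
P = [[2, 1], [1, 1]]          # an invertible change of basis (det = 1)
Pi = inv2(P)

conjA = matmul(matmul(Pi, A), P)   # P^-1 A P
conjB = matmul(matmul(Pi, B), P)   # P^-1 B P

# The matrix product commutes with the change of basis...
assert matmul(conjA, conjB) == matmul(matmul(Pi, matmul(A, B)), P)
# ...but the componentwise product does not:
assert hadamard(conjA, conjB) != matmul(matmul(Pi, hadamard(A, B)), P)
```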
-
+1, but I can't help but point out that if componentwise multiplication is the most "natural" generalization, it is also the most boring generalization, in that under componentwise operations, a matrix is just a flat collection of $mn$ real numbers instead of being a new and useful structure with interesting properties. – Rahul Narain Apr 8 '11 at 17:09
You shouldn't try to think in terms of scalars and try to fit matrices into this way of thinking. It's exactly like with real and complex numbers. It's difficult to have an intuition about complex operations if you try to think in terms of real operations.
Scalars are a special case of matrices, just as real numbers are a special case of complex numbers.
So you need to look at it from the other, more abstract side. If you think about real operations in terms of complex operations, they make complete sense (they are a simple case of the complex operations).
And the same is true for Matrices and scalars. Think in terms of matrix operations and you will see that the scalar operations are a simple (special) case of the corresponding matrix operations.
-
One way to try to "understand" it would be to think of two factors in the product of two numbers as two different entities: one is multiplying and the other is being multiplied. For example, in $5\cdot4$, $5$ is the one that is multiplying $4$, and $4$ is being multiplied by $5$. You can think in terms of repetitive addition that you are adding $4$ to itself $5$ times: You are doing something to $4$ to get another number, that something is characterized by the number $5$. We forbid the interpretation that you are adding $5$ to itself $4$ times here, because $5\cdot$ is an action that is related to the number $5$, it is not a number. Now, what happens if you multiply $4$ by $5$, and then multiply the result by $3$? In other words,
$3\cdot(5\cdot4)=?$
or more generally,
$3\cdot(5\cdot x)=?$
when we want to vary the number being multiplied. Think of for example, we need to multiply a lot of different numbers $x$ by $5$ first and then by $3$, and we want to somehow simplify this process, or to find a way to compute this operation fast. Without going into the details, we all know the above is equal to
$3\cdot(5\cdot x)=(3\times 5)\cdot x$,
where $3\times 5$ is just the ordinary product of two numbers, but I used different notation to emphasize that this time we are multiplying "multipliers" together, meaning that the result of this operation is an entity that multiplies other numbers, in contrast to the result of $5\cdot4$, which is an entity that gets multiplied. Note that in some sense, in $3\times5$ the numbers $5$ and $3$ participate on an equal footing, while in $5\cdot4$ the numbers have differing roles. For the operation $\times$ no interpretation is available in terms of repetitive addition, because we have two actions, not an action and a number. In linear algebra, the entities that get multiplied are vectors, the "multiplier" objects are matrices, the operation $\cdot$ generalizes to the matrix-vector product, and the operation $\times$ extends to the product between matrices.
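This "multiplier as action" story can be mimicked with functions: composing two actions corresponds to a single combined multiplier, which is the scalar analogue of the matrix-matrix product (a small sketch of my own; the names are ad hoc):

```python
def multiplier(a):
    """The action 'multiply by a', as a function rather than a number."""
    return lambda x: a * x

times5 = multiplier(5)
times3 = multiplier(3)

# 3·(5·x) = (3×5)·x : composing the actions gives the same result
# as the single combined multiplier 15, for every input x.
for x in range(-10, 11):
    assert times3(times5(x)) == multiplier(15)(x)
```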
-
To get away from how these two kinds of multiplication are implemented (repeated addition vs. row/column dot products) and how they behave symbolically/algebraically (associativity, distributivity over 'addition', the existence of annihilators), I'd like to answer in the manner of 'what are they good for?'
Multiplication of numbers gives area of a rectangle with sides given by the multiplicands (or number of total points if thinking discretely).
Multiplication of matrices (since it involves quite a few more actual numbers than simple multiplication does) is quite a bit more complex. Though the composition of linear functions (as mentioned elsewhere) is the essence of it, that's not the most intuitive description (to someone without the abstract algebraic experience). A more visual intuition is that one matrix, multiplying another, transforms a set of points (the columns of the right-hand matrix) into a new set of points (the columns of the resulting matrix). That is, take a set of $n$ points in $n$-D and put them as columns in an $n \times n$ matrix; if you multiply this from the left by another $n \times n$ matrix, you'll transform that 'cloud' of points into a new cloud.
This transformation isn't some random thing; it might rotate the cloud, it might expand the cloud (it won't 'translate' the cloud), it might collapse the cloud into a line (a lower number of dimensions might be needed). But it's transforming the whole cloud all at once, smoothly (near points get translated to near points).
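This picture can be sketched directly: store the points as columns and multiply from the left. Here is a toy example of mine, using a 90° rotation (the `matmul` helper is ad hoc):

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

R = [[0, -1],          # rotation by 90 degrees counterclockwise
     [1, 0]]
cloud = [[1, 0, 2],    # x-coordinates of the points (1,0), (0,3), (2,1)
         [0, 3, 1]]    # y-coordinates

rotated = matmul(R, cloud)
# Every column (point) is rotated at once:
# (1,0) -> (0,1), (0,3) -> (-3,0), (2,1) -> (-1,2)
assert rotated == [[0, -3, -1],
                   [1, 0, 2]]
```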
So that is one way of getting the 'meaning' of matrix multiplication.
I have a hard time coming up with a good metaphor (any metaphor) connecting matrix multiplication with simple numerical multiplication, so I won't force it - hopefully someone else might be able to come up with a better visualization that shows how they're more alike beyond the similarity of some algebraic properties.
-
First, understand vector multiplication by a scalar.
Then, think of a matrix multiplied by a vector. The matrix is a "vector of vectors".
Finally, matrix × matrix extends the former concept.
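The three steps above can be written out explicitly (a sketch of my own; the function names are not standard):

```python
def scale(c, v):
    """Step 1: a vector multiplied by a scalar."""
    return [c * x for x in v]

def matvec(m, v):
    """Step 2: a matrix (a 'vector of vectors') times a vector."""
    return [sum(r * x for r, x in zip(row, v)) for row in m]

def matmat(a, b):
    """Step 3: matrix times matrix, built by applying step 2
    to each column of b in turn."""
    out_cols = [matvec(a, list(col)) for col in zip(*b)]
    return [list(row) for row in zip(*out_cols)]

assert scale(3, [1, 2]) == [3, 6]
assert matvec([[1, 2], [3, 4]], [5, 7]) == [19, 43]
assert matmat([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```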
-
This is a great answer. Not explained enough, though. It is similar to the way Gilbert Strang 'builds up' matrix multiplication in his video lectures. – Vass May 18 '11 at 15:49
http://mathoverflow.net/questions/91803/definition-of-cw-complexes
## Definition of CW complexes
In Spanier's Algebraic Topology, CW complexes are defined with an additional, seemingly strange condition: the complex must carry the topology coherent with the characteristic maps and the inclusion maps of the cell boundaries. What is the meaning of this condition? Some other authors seem never to use it, and instead require the space to be Hausdorff. Are CW complexes in Spanier's sense Hausdorff? Do the various definitions agree? Thanks!
-
math.stackexchange.com would be a good website for your question. I think it would be a good idea to put a little more effort into it and list the definitions you're referring to. People likely do not have the textbooks you're referring to on hand. – Ryan Budney Mar 21 2012 at 9:27
I thought this had been answered in mathoverflow.net/questions/74863/… – Ronnie Brown Mar 25 2012 at 10:04
## 1 Answer
It basically says that a CW complex has the coherent topology from its closed cells. This should be wrapped up into any definition of a CW complex that you see.
For example in Massey's book it is equivalent to statement (iv):
A subset $A$ is closed if and only if $A \cap \bar{e}^n$ is closed for all $n$-cells $e^n$, $n=0,1,2,\ldots$
-