url | text | metadata
---|---|---|
http://math.stackexchange.com/questions/280909/determining-an-exterior-normal?answertab=oldest
|
Determining an Exterior Normal
Given a surface that can be represented by the equation $F(x,y,z)=0$, how can I determine which of the vectors $-\operatorname{grad} F$, $\operatorname{grad} F$ is the exterior normal and which is the interior normal?
In addition, if this surface can be represented as $z=f(x,y)$, we know that the vectors $\pm(f_x,f_y,-1)$ are normals. But how can I determine which is the exterior normal and which is the interior one?
Thanks!
-
2
I think that for your question to make sense you need to assume that the surface is compact! – user52188 Jan 17 at 18:38
1 Answer
Let's assume, as Edgar Matias suggested, that our surface is compact, so we have the interior as the bounded region and the exterior as the unbounded one. I don't think that you can answer this question by considering the gradient locally, since it's not too hard to imagine two manifolds with the same gradient at a point, but where in one case the gradient is inward, and in the other case the gradient is outward (imagine a shape that folds over itself).
One possible way of answering this question is to integrate the gradient over the manifold, fixing the outward orientation on the manifold. If the integral is positive, the gradient was the external normal, otherwise it was the internal normal.
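To make this concrete, here is a small numerical sketch (Python with NumPy, added here; it is not part of the original answer) for the unit sphere $F(x,y,z)=x^2+y^2+z^2-1$: approximating $\int_S \operatorname{grad} F \cdot n \, dS$ with the outward unit normal gives a positive number (about $8\pi$), so for this $F$ the gradient is the exterior normal.
````
import numpy as np

# Unit sphere F(x,y,z) = x^2 + y^2 + z^2 - 1, parametrized by angles.
# Integrate grad F . n_out over the surface; a positive result
# means grad F points outward.
def flux_of_gradient(n=400):
    theta = np.linspace(0.0, np.pi, n)                        # polar angle
    phi = np.linspace(0.0, 2 * np.pi, 2 * n, endpoint=False)  # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # outward unit normal of the unit sphere
    nx, ny, nz = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
    # grad F = 2(x, y, z), evaluated on the surface
    gx, gy, gz = 2 * nx, 2 * ny, 2 * nz
    integrand = (gx * nx + gy * ny + gz * nz) * np.sin(T)  # dS = sin(theta) dtheta dphi
    return integrand.sum() * (theta[1] - theta[0]) * (phi[1] - phi[0])

print(flux_of_gradient(), 8 * np.pi)  # both ~ 25.13, and the sign is positive
````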
-
Great ! Thanks! – theMissingIngredient Jan 17 at 19:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534834027290344, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/1960?sort=oldest
|
## Dyck paths on rectangles
The number of Dyck paths in a square is well known to equal the Catalan numbers: http://mathworld.wolfram.com/DyckPath.html But what if, instead of a square, we ask the same question with a rectangle? If one of its sides is a multiple of the other, then again there is a nice formula for the number of paths below the diagonal, but is there a nice formula in general? What is the number of paths from the lower-left corner of a rectangle with side lengths $a$ and $b$ to its upper-right corner staying below the diagonal (except for its endpoint)? I am also interested in asymptotics.
-
2
Can you be more precise about the definition you're using here? Should we take "diagonal" to mean diagonal of the rectangle? – Qiaochu Yuan Oct 22 2009 at 22:18
Yes, the diagonal of the rectangle from its lower-left vertex to its upper-right vertex. – domotorp Oct 22 2009 at 23:47
## 4 Answers
If I understood your question correctly, the numbers you're looking for are called Ballot numbers. The number of paths from $(0,0)$ to $(m,n)$ (where $m>n$) which stay below the diagonal is $\frac{m-n}{m+n}\binom{m+n}{m}$.
Moreover, if $m>rn$, then the number of lattice paths from $(0,0)$ to $(m,n)$ which stay below the line $x=ry$ is $\frac{m-rn}{m+n}\binom{m+n}{m}$. (I haven't worked this out, but Ira Gessel says so in Introduction to Lattice Path Enumeration.)
-
No, this is not what I want. If the sides of the rectangle are 100 and 250, then the ballot numbers would give an answer for a path that stays above the line from (0,0) to (100,200). Btw, this is why I have written that I also know the answer if one side is a multiple of the other. – domotorp Oct 22 2009 at 23:46
Is a sum OK?
I am used to a different rotation of the paths. I think the paths you are looking for can also be described as all paths above the $x$-axis, with steps $(1,1)$ and $(1,-1)$, that start at $(0,0)$ and end on the line $x=y+n$ at some $(x,y)$ between $(n,0)$ and $(n+m,m)$.
(If instead they end at the line $x=n$, we get the Ballot paths.)
Let $B(n,k)$ be the Ballot numbers: $B(n,k)$ = the number of such paths from $(0,0)$ to $(n,k)$. Now, all paths must pass the line $x=n$. From there on it is just a binomial path, so the number of paths is $\sum_{k=0,2,4,\ldots,n} B(n,k)\binom{(n-m-k)/2}{k/2}$.
Here $\binom{n}{k}$ denotes the binomial coefficient $n!/(k!(n-k)!)$.
-
I am afraid that your problem with the 1,1 and 1,-1 steps seems to be different from the one I described. – domotorp Oct 23 2009 at 1:10
No? Take the rectangle, rotate it 45 degrees clockwise so that the diagonal becomes the x-axis, then reflect it in the x-axis. – Robert Parviainen Oct 23 2009 at 9:49
Then the steps would not be 1,1 and 1,-1 but something depending on a and b. – domotorp Oct 23 2009 at 14:48
I heard a talk at Indiana University last March by Timothy Chow. Here's his abstract, which seems to give a negative answer to your question about rectangles whose sides have non-integer ratio:
It is a classical result that if $k$ is a positive integer, then the number of lattice paths from $(0,0)$ to $(a+1,b)$ taking unit north or east steps that avoid touching or crossing the line $x = ky$ is
$$\binom{a+b}{b} - k \binom{a+b}{b-1}.$$
Disappointingly, no such simple formula is known if k is rational but not an integer (although there does exist a determinant formula). We show that if we replace the straight-line boundary with a periodic staircase boundary, and if we choose our starting and ending points carefully, then the natural generalization of the above simple formula holds. By varying the boundary slightly we obtain other cases with simple formulas, but it remains somewhat mysterious exactly when a simple formula can be expected. Time permitting, we will also describe some recent related work by Irving and Rattan that provides an alternative proof of some of our results.
This is joint work with Chapman, Khetan, Moulton, and Waters.
-
Odd. I would expect that the generating function is algebraic because the corresponding language should be context-free, so I'm surprised nobody's worked it out. – Qiaochu Yuan Oct 23 2009 at 2:07
3
Nice, the full paper is here: www-math.mit.edu/~tchow/lattice.pdf – domotorp Oct 23 2009 at 15:12
Since then, Mirko Visontai has told me that the answer is ${a+b\choose a}/(a+b)$ if $\gcd(a,b)=1$. The proof is the following (with $k=a$ and $l=b$):
The number of 0--1 vectors with $k$ 0's and $l$ 1's is ${k+l\choose k}$, so we have to prove that exactly a $1/(k+l)$ fraction of these vectors is in $L(k,l)$ (the set of sequences corresponding to paths that stay below the diagonal). The set of all vectors can be partitioned into equivalence classes. Two vectors $p$ and $q$ are equivalent if there is a cyclic shift that maps one into the other, i.e., if for some $j$, $p_i = q_{i+j}$ for all $i$. We will prove that exactly one element from each equivalence class is in $L(k,l)$. This proves the statement, as each class consists of $k+l$ elements because $\gcd(k,k+l)=1$.
We can view each 0--1 sequence as a walk on $\mathbb R$ where each 0 is a $-l/(k+l)$ step and each 1 is a $+k/(k+l)$ step. Each $(k,l)$ walk starts and ends at zero, and each walk reaches its maximum height exactly once; otherwise $ak + bl = 0$ for some $0 < a+b < k+l$, which would imply $\gcd(k,l) \neq 1$. If we take the cyclic shift that starts "from the top", we stay in the negative region throughout the walk, which corresponds to remaining under the diagonal in the lattice path case. Any other cyclic shift goes above zero, which corresponds to going above the diagonal at some point.
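As a sanity check on this formula, here is a brute-force sketch in Python (added here, not from the original thread; the function name is ad hoc) that enumerates monotone lattice paths in an $a\times b$ rectangle staying strictly below the diagonal except at the endpoints and compares the count with $\binom{a+b}{a}/(a+b)$ for coprime $a,b$:
````
from math import comb, gcd
from itertools import combinations

def count_below_diagonal(a, b):
    # Choose which of the a+b unit steps go east; walk the path and
    # require a*y < b*x (strictly below the diagonal) at every
    # interior vertex. The endpoints lie on the diagonal and are allowed.
    total = 0
    for east in combinations(range(a + b), a):
        east = set(east)
        x = y = 0
        ok = True
        for step in range(a + b - 1):  # stop before the final vertex
            if step in east:
                x += 1
            else:
                y += 1
            if a * y >= b * x:
                ok = False
                break
        total += ok
    return total

for a, b in [(2, 1), (3, 2), (5, 3), (7, 4)]:
    assert gcd(a, b) == 1
    assert count_below_diagonal(a, b) == comb(a + b, a) // (a + b)
print("formula agrees with brute force on these coprime cases")
````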
-
Sorry for using different variables, but it seemed too much work to replace them. Also, I have problems typing TeX at MathOverflow because the shortcut buttons prevent me from producing symbols like \ and } on the Hungarian keyboard when I answer a question. Surprisingly not when I write a comment. I wonder if anyone has had similar problems. – domotorp Jan 19 2010 at 6:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224269986152649, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/3044/partition-a-set-into-subsets-of-size-k/3049
|
# Partition a set into subsets of size $k$
Given a set $\{a_1,a_2,\dots,a_{lk}\}$ and a positive integer $l$, how can I find all the partitions into subsets of size $l$ in Mathematica? For instance, given `{1,2,3,4}` and `l=2`, the output should be:
$$\text{Partition 1: }\{1,2\},\{3,4\}$$ $$\text{Partition 2: }\{1,3\},\{2,4\}$$ $$\text{Partition 3: }\{1,4\},\{2,3\}$$
In Mathematica notation:
````{
{{1,2},{3,4}},
{{1,3},{2,4}},
{{1,4},{2,3}}
}
````
Edit: Another example for `l=3`, and `{1,2,3,4,5,6}`:
````{
{{1, 2, 3}, {4, 5, 6}},
{{1, 2, 4}, {3, 5, 6}},
{{1, 2, 5}, {3, 4, 6}},
{{1, 2, 6}, {3, 4, 5}},
{{1, 3, 4}, {2, 5, 6}},
{{1, 3, 5}, {2, 4, 6}},
{{1, 3, 6}, {2, 4, 5}},
{{1, 4, 5}, {2, 3, 6}},
{{1, 4, 6}, {2, 3, 5}},
{{1, 5, 6}, {2, 3, 4}}
}
````
-
@murray: I think it would be clear if you read the example. The size of the set is $kl$, so it can be simply partitioned into $k$ subsets of size $l$. I am looking for all such partitions. – Mohsen Mar 15 '12 at 23:57
@Mohsen, yes, I know you use $k$ and $l$ but unfortunately in your example, $k$ = $l$ = 2. Which is why it's hardly the best example one might start from. – murray Mar 16 '12 at 3:35
@murray: I just added a new example with $k=2$ and $l=3$. – Mohsen Mar 16 '12 at 17:57
## 5 Answers
Try with
````partitions[list_, l_] := Join @@
Table[
{x, ##} & @@@ partitions[list ~Complement~ x, l],
{x, Subsets[list, {l}, Binomial[Length[list] - 1, l - 1]]}
]
partitions[list_, l_] /; Length[list] === l := {{list}}
````
The length of the list must be a multiple of `l`.
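For comparison, here is a minimal Python sketch of the same recursive idea (my own addition, not from the answer): fix the smallest remaining element, choose its block, and recurse on the complement, which guarantees each partition is produced exactly once.
````
from itertools import combinations

def partitions(lst, l):
    # Partition lst into unordered blocks of size l.
    # The first element always goes into the current block, so no
    # partition is generated twice. len(lst) must be a multiple of l.
    if not lst:
        return [[]]
    first, rest = lst[0], lst[1:]
    result = []
    for others in combinations(rest, l - 1):
        block = [first, *others]
        remaining = [x for x in rest if x not in others]
        result.extend([block] + tail for tail in partitions(remaining, l))
    return result

print(partitions([1, 2, 3, 4], 2))
# [[[1, 2], [3, 4]], [[1, 3], [2, 4]], [[1, 4], [2, 3]]]
````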
-
I believe this is correct. It is based on `BellList` from Robert M. Dickau.
````Module[{n = 4, q = 2, r, BellList},
r = n/q;
BellList[1] = {{{1}}};
BellList[n_Integer?Positive] :=
Join @@ (ReplaceList[#,
{{b___, {S : Repeated[_, {1, q - 1}]}, a___} :> {b, {S, n}, a},
{S : Repeated[_, {1, r - 1}]} :> {S, {n}}}
] & /@ BellList[n - 1]);
BellList[n]
]
````
You'll have to make sure `n` is divisible by `q` or adapt it to behave as you want. Also, this uses natural numbers for the set, but these in turn can be used as indices to extract elements from the working set.
-
+1 for throwing in the formal name of the problem. I had a hard time googling it. – István Zachar Mar 15 '12 at 20:06
Use `Subsets`:
````Subsets[{1, 2, 3, 4}, {2}]
````
Gives:
````{{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}}
````
````a = {1, 2, 3, 4};
DeleteDuplicates[Sort /@ (Sort /@ Partition[#, 2] & /@ Permutations[a, {4}])]
````
which outputs
```` {{{1, 2}, {3, 4}}, {{1, 3}, {2, 4}}, {{1, 4}, {2, 3}}}
````
-
1
Actually, I am looking for partitions, not permutations. So, $\{1,2\},\{3,4\}$ is one partition and $\{1,3\},\{2,4\}$ is another one. I just edited the question to clarify this. – Mohsen Mar 15 '12 at 19:12
How's this version? – Eli Lansey Mar 15 '12 at 19:40
Really lazy search through all partitioned permutations:
````set = {1, 2, 3, 4};
Union[Sort /@ (Sort /@ Partition[#, 2] & /@
Permutations[set, {4}])]
````
````{
{{1, 2}, {3, 4}},
{{1, 3}, {2, 4}},
{{1, 4}, {2, 3}}
}
````
And a more economical one:
````set = {1, 2, 3, 4};
subsets = Subsets[set, {2}];
Table[{i, Cases[subsets, _?(Union[#, i] === set &)]}, {i,
Take[subsets, Length@subsets/2]}]
````
````{
{{1, 2}, {3, 4}},
{{1, 3}, {2, 4}},
{{1, 4}, {2, 3}}
}
````
This generates all the subsets of size 2 (6 of them), and scans through them one by one to find their complement in the same list. Since the output of `Subsets` can be assumed to be regular, the above can be simplified: split the `subsets` list in two and merge the first half with the reversed second half:
````n = Length@subsets;
MapThread[List, {Take[subsets, n/2], Take[Reverse@subsets, n/2]}]
````
-
Thanks István, actually I am looking for a solution for a general $l$, not only $l=2$. So, your last solution doesn't help, but the other ones look great. I tested them for $l=3$ and $k=3$ (i.e., 9 numbers) and they work perfectly, but for 12 numbers they don't. Is there any way to solve the issue, or are there just too many partitions with $l=3$ and $k=4$? – Mohsen Mar 15 '12 at 19:48
Yes, I have realized that, so I am now working on a general solution. At worst, one can do a partial tree traversal by fixing the first digit and changing the next one, fixing it and changing the next one, etc. I'm pretty sure that before I can solve this, others will post some beautiful answers. – István Zachar Mar 15 '12 at 19:59
You might start with the `Combinatorica` add-on function `SetPartitions` and then select those partitions satisfying the condition about the size of their members. But this may be too "extravagant" an approach when the original set size becomes a bit big.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8423462510108948, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/39729/numerical-formulation-of-dirac-equation-plus-electromagnetic-field?answertab=oldest
|
# numerical formulation of Dirac equation plus electromagnetic field
I have the following equations describing the electron field in a (classic) electromagnetic field:
$$c\left(\alpha \cdot \left(P - q(A + A_b)\right) + \beta mc\right) \psi = E \psi$$
where $A_b$ is the background field and $A$ is the one generated by the local Dirac field.
I presume that the equation for the electromagnetic field $A$ generated by the electron would be:
$$\nabla_{\mu}\nabla^{\mu} A^{\nu} = \frac{\bar\psi \gamma^{\nu} \psi}{\epsilon_0}$$
Question: Is there a way to numerically solve these systems of equations to find eigenstates of the system?
Side question: Are these eigenstates physically meaningful? Do I still need to apply the second-quantization procedure in order to know which eigenstates are physically meaningful (i.e., stable) and which are not?
-
## 2 Answers
I guess these are tough questions, as the system is nonlinear. I can only give some references; in some of them, numerical solutions of this system were found: Phys. Rev. A 60, 4291–4300 (1999) (also arXiv:physics/0001038), http://maths-old.anu.edu.au/research.publications/proceedings/039/CMAproc39-booth.pdf and references there. For what it's worth, in my work arXiv:1111.4630 it is shown how to eliminate the spinor field from the system (but complex electromagnetic potentials are introduced which produce the same electromagnetic field, so their imaginary part is defined by one common function, and you have just 5 unknown real functions).
-
The main problem with your proposed equation is that the electromagnetic equation, with the d'Alembertian acting on the vector potential, is not in Hamiltonian form; this means that the separation of solutions into Sturm–Liouville eigenstates of the energy operator is not manifest in the equation. Without that, you cannot find eigenstates of the coupled system.
You might find this dissertation interesting: On the canonical formulation of electrodynamics and wave dynamics
In it, the author analyzes a Hamiltonian formulation of the electrodynamic field that is amenable to numerical solution coupled with the Schrödinger equations. Depending on what you actually want to find, this should suffice (or not).
Regarding your other question, I'm not confident giving an authoritative answer to that, so I'll let others jump on it.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259394407272339, "perplexity_flag": "head"}
|
http://mathhelpforum.com/number-theory/43347-number-theory-print.html
|
# Number Theory
• July 9th 2008, 02:41 PM
JCIR
Number Theory
Let $a,b \in \mathbb{Z}$ with $(a,4)=2$ and $(b,4)=2$. Find $(a+b, 4)$ and prove that your answer is correct.
• July 9th 2008, 04:20 PM
Reckoner
Quote:
Originally Posted by JCIR
Let $a,b \in \mathbb{Z}$ with $(a,4)=2$ and $(b,4)=2$. Find $(a+b, 4)$ and prove that your answer is correct.
$(a + b,\,4) = 4$
$\emph{Proof: }$ First, note that the only possible divisors of 4 are 1, 2, and 4, so we only need show that $4\mid(a + b)$
$(a,\,4) = 2\text{ and }(b,\,4)=2\Rightarrow2\mid a\text{ and }2\mid b\Rightarrow\exists p,\,q\in\mathbb{Z},\;a = 2p,\,b = 2q$
$\Rightarrow a + b = 2(p + q)$
But $2\nmid p$, for if so, $\exists m\in\mathbb{Z},\;a = 2(2m) = 4m\Rightarrow4\mid a$ and so $(a,\,4) = 4$ and not 2 as required. Similarly, $2\nmid q$.
Thus $p$ and $q$ are odd so their sum $p + q$ must be even (i.e., $\exists s,\,t\in\mathbb{Z},\;p = 2s + 1\text{ and }q = 2t + 1$ $\Rightarrow p + q = 2s + 2t + 2 = 2(s + t + 1)$ with $s + t + 1\in\mathbb{Z}$). So $4\mid2(p + q)\Rightarrow4\mid(a + b)$ as required to show that $(a + b,\,4) = 4\quad\square$
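A quick brute-force confirmation of the claim (a Python sketch added here, not part of the original thread):
````
from math import gcd

# Whenever (a,4) = (b,4) = 2, check that (a+b, 4) = 4.
for a in range(1, 201):
    for b in range(1, 201):
        if gcd(a, 4) == 2 and gcd(b, 4) == 2:
            assert gcd(a + b, 4) == 4
print("(a+b, 4) = 4 in every tested case")
````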
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394466280937195, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/15521/making-change-for-a-dollar-and-other-number-partitioning-problems/15540
|
# Making Change for a Dollar (and other number partitioning problems)
I was trying to solve a problem similar to the "how many ways are there to make change for a dollar" problem. I ran across a site that said I could use a generating function similar to the one quoted below:
The answer to our problem (293) is the coefficient of $x^{100}$ in the reciprocal of the following:
$(1-x)(1-x^5)(1-x^{10})(1-x^{25})(1-x^{50})(1-x^{100})$
But I must be missing something, as I can't figure out how they get from that to $293$. Any help on this would be appreciated.
-
## 6 Answers
You should be able to compute it using a Partial Fraction representation (involving complex numbers). For instance see this previous answer: Minimum multi-subset sum to a target.
Note, this partial fraction expansion needs to be calculated only one time. Once you have that, you can compute the way to make change for an arbitrary amount pretty quickly.
In this case, I doubt they really did that for finding the coefficient of $x^{100}$. It is probably quicker to just multiply out, ignoring the terms which would not contribute to the coefficient of $x^{100}$. Or you could try computing the partial fraction representation of only some of the terms and then multiply out.
Note, if you are multiplying out to find the coefficient of $x^{100}$, it would be easier not to go to the reciprocal, which arises from considering an infinite number of terms.
You just need to multiply out
$$\left(\sum_{j=0}^{100} x^j\right) \left(\sum_{j=0}^{20} x^{5j}\right) \left(\sum_{j=0}^{10} x^{10j}\right) \left(\sum_{j=0}^{4} x^{25j}\right) \left(\sum_{j=0}^{2} x^{50j}\right) (1 + x^{100})$$
which would amount to enumerating the different ways to make the change (and in fact is the way we come up with the generating function in the first place).
You could potentially do other things, like computing the $100^{th}$ derivative at $0$, or computing a contour integral of the generating function divided by $x^{100}$, but I doubt they went that route either.
Hope that helps.
-
I appreciate the answer, lots of good information in it. Sadly my level in mathematics is definitely at the lower end (and a little rusty to boot), and from how the other website made it sound I thought it would be easier than you've described. – Peter Dec 27 '10 at 14:46
You "just" have to follow the prescription: find the formal power series (no need to think about convergence) that is defined and check the number that multiplies $x^{100}$. There's a reason I put "just" in quotes. There is no obvious route to 293 that I can see. Mathematica can do it with just one command, but I can't get Alpha to do it.
-
We can ease the calculation by noting that the number of ways of changing 100 equals the number of ways of representing the numbers less than or equal to $100$ as the sum of the numbers $5, 10, 25, 50$ and $100$, since the pennies can make up any remaining difference.
Noting that all these numbers are divisible by $5$, we can conclude that the number of ways of representing $100$ in units of $1, 5, 10, 25, 50$ and $100$ is the sum of the coefficients up to and including the term in $x^{20}$ in the expansion of
$$\frac{1}{(1-x)(1-x^2)(1-x^5)(1-x^{10})(1-x^{20})} .$$
-
Thanks for the information. The problem I am actually working on at the moment is very similar to this one, but not all the values are divisible by 5. But there was no way for you to know that :), and it may prove useful in working on other problems. – Peter Dec 27 '10 at 14:49
I think you calculate $[x^{100}](1-x)^{100}(1-x^5)^{20}(1-x^{10})^{10}(1-x^{25})^4(1-x^{50})^2(1-x^{100})$, but that calculation seems to be brute force.
-
Calculating the coefficient of $x^{100}$ can be done quite easily and quickly in this situation. I will show it for the coins 5, 10, 20, 50, because the idea carries over and that will be faster. Denote:
$\displaystyle P_5(x)=\frac{1}{1-x^5}=1+x^5 P_5(x)$ (multiply by the denominator), the generating function for making change only with 5-cent coins;
$\displaystyle P_{5,10}(x)=\frac{P_5(x)}{1-x^{10}}=P_5(x)+x^{10}P_{5,10}(x)$, for making change with coins 5 and 10; and so on:
$P_{5,10,20}(x)=P_{5,10}(x)+x^{20}P_{5,10,20}(x)$
$P_{5,10,20,50}(x)=P_{5,10,20}(x)+x^{50}P_{5,10,20,50}(x)$
We are looking for the sequence $p_n$, where $P_{5,10,20,50}(x)=\sum_{n}p_n x^n$. Denote $P_5(x)=\sum_{n}q_n x^n, \ P_{5,10}(x)=\sum_{n}r_n x^n, \ P_{5,10,20}(x)=\sum_{n}s_n x^n$; by the relations with generating functions above, it follows that
$$q_n=1, \ r_n=q_n+r_{n-10}, \ s_n=r_n+s_{n-20}, \ p_n=s_n+p_{n-50}$$ so it takes five minutes to calculate $p_n$ for small $n$, which gives the answer. For bigger $n$ it may be better to solve this system of recurrences (I'm not sure about the complexity of computing $p_n$ directly from these recurrences, but it seems unsatisfying) and derive a closed-form formula for $p_n$, but for now I can't do it.
-
For the record, I'll copy a snippet from this answer to a question that was closed as a duplicate of this question, as it explains exactly how to compute the given coefficient explicitly; it is really the same as the method given in the answer by ray, in a more algorithmic formulation. I just give the procedure here; for more explanations see the answer linked to.
Let $c$ denote an array of $101$ integers indexed from $0$ to $100$.
• Initialise your array so that $c[0]=1$ and $c[i]=0$ for all $i>0$.
• For $k=1,5,10,25,50,100$ (in any order) do:
• for $i=0,1,\ldots,100-k$ (in this order) do:
• add $c[i]$ to $c[i+k]$.
• Now $c[100]$ gives your answer.
This computation gives you the coefficient of $x^{100}$ in the power series for $1/((1-x)(1-x^5)(1-x^{10})(1-x^{25})(1-x^{50})(1-x^{100}))$, which equals $293$.
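The procedure transcribes directly into code; here is a short Python version (a sketch added here, not part of the original answer):
````
def ways_to_change(total=100, coins=(1, 5, 10, 25, 50, 100)):
    # c[i] = number of ways to make the amount i with the coins used so far
    c = [0] * (total + 1)
    c[0] = 1
    for k in coins:                     # coins may be processed in any order
        for i in range(total - k + 1):  # i = 0, 1, ..., total - k, in this order
            c[i + k] += c[i]
    return c[total]

print(ways_to_change())  # 293
````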
-
Thanks for this, much easier for a programmer like me to understand :) – Peter Nov 27 '12 at 13:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9570654630661011, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/6428/snells-law-starting-from-qft/6429
|
# Snell's law starting from qft? [duplicate]
This question already has an answer here:
• How are classical optics phenomena explained in QED (Snell's law)? 2 answers
Can one "interpret" Snell's law in terms of QED and the photon picture? How would one justify this interpretation with some degree of mathematical rigour? In the end I would like to have a direct path from QFT to Snell's law as an approximation which is mathematically exact to some degree and gives a deeper physical insight (i.e., from a microscopic = QFT perspective) into Snell's law.
-
Maybe a more interesting question would be: Can one "interpret" Snell's law in terms of QED and the photon picture? I.e. can we describe what a single photon does in processes like refraction? – Lagerbaer Mar 6 '11 at 15:47
Thanks, I have incorporated your suggestion and modified the question. Regarding your second sentence notice the problematic nature of the term "single photon".. – student Mar 6 '11 at 16:06
Aww, that renders my answer quite stupid... I don't think there is much to interpret then, it's not like a single photon has a well-defined trajectory that suddenly changes direction. (also @Lagerbaer) – Tobias Kienzler Mar 6 '11 at 16:09
2
– Qmechanic♦ Oct 7 '11 at 8:50
There are really two questions here: "Can we understand the index of refraction in terms of QED?" and "Does QED imply ray optics in the appropriate limit once the index of refraction is known?" Feynman gives a nice answer to the second in his pop-sci book on QED. – dmckee♦ Mar 10 '12 at 16:45
## marked as duplicate by Qmechanic♦ Mar 31 at 18:53
## 3 Answers
Sure. Start with QED, obtain Maxwell's equations, do the paraxial approximation and finally use Fermat's principle.
-
1
Isn't using Fermat 'cheating' in a way? Using the Fermat/Lagrangian-approach is the textbook way of proving Snell's law. I'm guessing 'student' is rather looking for a more direct path integral-approach from QFT. – Gerben Mar 6 '11 at 15:46
1
– Tobias Kienzler Mar 6 '11 at 15:57
2
Isn't Fermat's principle pretty much the same thing as the path-integral approach? The E&M Lagrangian density is proportional to $E^{2} + B^{2}$, which works out to a constant proportional to the square of the wave amplitude in the case of the wave. In the classical limit, you expect the action to be minimized, and in the case of a well-defined beam to which Snell's Law would apply, you'd expect the path to be a curve in space. Clearly, the only way to get a stationary phase for this action is to look for the shortest path. Ergo, Fermat's principle. – Jerry Schirmer Mar 6 '11 at 16:08
Though I would say that QED is obtained from quantizing the Maxwell Lagrangian, not the other way around. – Jerry Schirmer Mar 6 '11 at 22:12
@Jerry not necessarily, you can also start with the free particle path integral and use the Yang-Mills theory for U(1), thus starting with local symmetry and obtaining the Maxwell Lagrangian as a result. – Tobias Kienzler Mar 7 '11 at 6:50
This appears to be explained in detail in Feynman's "QED the strange theory of light and matter" in Chapter 2, page 39 to 45, of the 2006 edition, in more or less plain English.
-
In regard to the single-photon aspect of the question, I speculate that the explanation is similar to the Mössbauer effect, i.e., the photon is absorbed and re-emitted by the entire mirror/crystal rather than a single atom. If you insist on thinking of a single photon being absorbed and re-emitted by a single atom, you are going to have to invoke the regularity of the mirror or crystal and use constructive and destructive interference, as mentioned in other answers.
You can see the necessity of a coherent many body solution from the fact that a single atom does not refract or reflect, it scatters, and similarly for a rough mirror.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262604713439941, "perplexity_flag": "middle"}
|
http://en.m.wikibooks.org/wiki/String_Theory/Supersymmetry
|
Supersymmetry
This chapter on supersymmetry intends to present it WITHOUT the use of Grassmann variables, preferring to use instead the formalism of Z2 grading.
A Z2-graded vector space is a vector space together with an assignment of an even (bosonic, corresponding to 0 of Z2) and an odd (fermionic, corresponding to 1) subspace such that the vector space is the direct sum of the even and odd subspaces.
An even vector is an element of the even subspace and an odd vector is an element of the odd subspace. A pure vector is either an even or an odd vector. Any vector can be decomposed uniquely as the sum of an even and an odd vector.
The tensor product of two Z2-graded vector spaces is another Z2-graded vector space.
In fact, in this book, we will take the stronger point of view that it makes no physical sense to add even and odd vectors together. From this point of view, we might as well view a Z2-graded vector space as an ordered pair <V0,V1> where V0 is the even space and V1 is the odd space.
Similarly, a Z2-graded algebra is an algebra A with a direct sum decomposition into an even and an odd part such that the product of two pure elements obeys the Z2 relations. Alternatively, we can think of it as <A0,A1>.
A Lie superalgebra is a Z2-graded algebra whose product [·, ·], called the Lie superbracket or supercommutator, satisfies
$[x,y]=-(-1)^{|x| |y|}[y,x]$
and
$(-1)^{|z| |x|}[x,[y,z]]+(-1)^{|x| |y|}[y,[z,x]]+(-1)^{|y| |z|}[z,[x,y]]=0$
where x, y, and z are pure in the Z2-grading. Here, |x| denotes the degree of x (either 0 or 1).
Lie superalgebras are a natural generalization of normal Lie algebras to include a Z2-grading. Indeed, the above conditions on the superbracket are exactly those on the normal Lie bracket with modifications made for the grading. The last condition is sometimes called the super Jacobi identity.
Note that the even subalgebra of a Lie superalgebra forms a (normal) Lie algebra as all the funny signs disappear, and the superbracket becomes a normal Lie bracket.
One way of thinking about a Lie superalgebra—it's not the most symmetric way of looking at it—is to consider its even and odd parts, L0 and L1 separately. Then, L0 is a Lie algebra, L1 is a linear rep of L0, and there exists a symmetric L0-intertwiner $\{.,.\}:L_1\otimes L_1\rightarrow L_0$ such that for all x,y and z in L1,
$\left\{x, y\right\}[z]+\left\{y, z\right\}[x]+\left\{z, x\right\}[y]=0$
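These axioms are easy to check by machine on a concrete example. Below is a hedged Python sketch (my own addition, not from the text) using the matrix superalgebra gl(1|1): 2×2 matrices graded so that diagonal matrices are even and off-diagonal ones are odd, with the supercommutator $[x,y]=xy-(-1)^{|x||y|}yx$. The super Jacobi identity then holds on all pure basis elements.
````
import numpy as np

def sbr(x, y):
    # Supercommutator of pure elements, represented as (matrix, degree).
    (a, p), (b, q) = x, y
    return (a @ b - (-1) ** (p * q) * b @ a, (p + q) % 2)

def E(i, j):
    # Matrix unit E_ij; the off-diagonal units are odd (degree 1).
    m = np.zeros((2, 2))
    m[i, j] = 1.0
    return (m, (i + j) % 2)

basis = [E(0, 0), E(0, 1), E(1, 0), E(1, 1)]

# super Jacobi: (-1)^{|z||x|}[x,[y,z]] + (-1)^{|x||y|}[y,[z,x]] + (-1)^{|y||z|}[z,[x,y]] = 0
for x in basis:
    for y in basis:
        for z in basis:
            px, py, pz = x[1], y[1], z[1]
            total = ((-1) ** (pz * px) * sbr(x, sbr(y, z))[0]
                     + (-1) ** (px * py) * sbr(y, sbr(z, x))[0]
                     + (-1) ** (py * pz) * sbr(z, sbr(x, y))[0])
            assert np.allclose(total, 0)
print("super Jacobi identity verified on gl(1|1)")
````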
A supermanifold is a concept in noncommutative geometry. Recall that in noncommutative geometry, we don't look at point-set spaces but instead at the algebra of functions over them. If M is a (differential) manifold and H is a (smooth) algebra bundle over M with a Grassmann algebra as the fiber, then the space of (smooth) sections of H forms a supercommutative algebra under pointwise multiplication. We say that this algebra defines the supermanifold (which isn't a point-set space).
If M is a real manifold and we define an involution * over the fiber turning it into a * algebra, then the resulting algebra would define a real supermanifold.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930935800075531, "perplexity_flag": "head"}
|
http://quant.stackexchange.com/questions/1409/how-to-calculate-expected-return-based-on-historical-data-for-mean-variance-anal?answertab=active
|
# How to calculate expected return based on historical data for Mean Variance Analysis
I've recently started reading some books on asset allocation and portfolio theory but I don't work in the field and don't have much knowledge yet.
So I've been reading up on mean-variance analysis and my question is regarding the computation of the expected returns for a particular asset. From what I understand historical data is used to predict future returns. In the book that I'm currently reading, the author provides monthly returns for a particular stock and then we're asked to calculate the expected return for future months.
My question is this: if I have historical open/close data for a particular stock, how do I use this information to calculate the returns? Since the return is based on the share price when the stock was purchased and the price when it was sold, I'm not sure exactly what the calculation would look like.
-
Did your last sentence get cut off? – chrisaycock♦ Jul 7 '11 at 1:36
sorry, i was in mid-sentence when i was interrupted. :) – miggety Jul 7 '11 at 5:55
Even if it is not precisely the question, I think the question raises another issue, which is "Do past returns provide a good estimate of future returns?" These portfolio allocation algos are good, but IMO it is a bit easy to assume that we have a good estimate of the returns. If you give me a good estimate of the future return and let me manage your money, it's not too hard. Rather than focusing on that, shouldn't we focus on how to forecast returns? – RockScience Jul 8 '11 at 2:42
@RockScience: yes, I've heard the same question asked many times, and have seen in many texts the pitfalls of relying on past returns to estimate future earnings. Although I'm aware of some of these pitfalls, I'm just getting started in this field, and so having an understanding of some of the techniques, even if out of fashion, gives me something to cut my teeth on. While we're on the topic though, do you know of any good resources (links, books, etc.) that discuss how to forecast returns? Is that the type of problem time-series analysis would be used for? – miggety Jul 11 '11 at 21:52
@miggety: There is no one best technique to forecast returns. It is the most difficult part of your model. Some take discretionary decisions, others trust machine learning like god. There is a lot of different things to explore. And yes definitely time series analysis is a good basis for this problem. Good luck – RockScience Jul 12 '11 at 0:56
## 3 Answers
-
Thanks Bill, i think that'll get me going in the right direction. – miggety Jul 7 '11 at 19:40
http://ci.columbia.edu/ci/premba_test/c0332/s6/s6_3.html contains an example with the percentage returns over the last 10 years (something like $r_{\text{year }N}=\frac{P_{\text{end of year }N} - P_{\text{start of year }N}}{P_{\text{start of year }N}} \cdot 100\%$).
and here is another link http://academicearth.org/lectures/portfolio-diversification. This is an entire course from Yale University (including this subject).
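To tie this back to the open/close data in the question, here is a minimal Python sketch (made-up prices, my own addition): compute per-period simple returns from a closing-price series and use their historical mean as the naive expected return for mean-variance analysis.
````
import numpy as np

closes = np.array([100.0, 102.0, 101.0, 105.0, 107.0])  # hypothetical closing prices

# simple per-period return: r_t = (P_t - P_{t-1}) / P_{t-1}
returns = (closes[1:] - closes[:-1]) / closes[:-1]

expected_return = returns.mean()   # historical mean as the naive forecast
volatility = returns.std(ddof=1)   # sample standard deviation of returns

print(returns)
print(expected_return, volatility)
````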
-
Hi rtybase, it may be helpful to summarize what's in those links. You'll notice that the other link-based answers on here didn't get much respect either. – chrisaycock♦ Jan 26 '12 at 21:11
@chrisaycock - fixed! ;) – rtybase Jan 26 '12 at 21:31
You can have a look at
http://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average
But I doubt you'll have very good results as it is a very naive technique.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354955554008484, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/62916/trivial-map-on-sigma-algebra-mod0-is-trivial
|
## trivial map on a $\sigma$-algebra $\bmod 0$ is trivial
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Hi everyone! I am currently studying the basic theory of measurable actions and need the following result, which I am not able to prove myself. It is stated without a proof, so probably it should not be hard, but I am lost...
Question: Suppose $T$ is an invertible measure-preserving map of a standard probability space $(X,\mu)$. Suppose that $TA=A$ for all measurable subsets $A\subset{}X$, where the equality is up to sets of measure $0$. Prove that the set of those $x$ where $Tx\neq{}x$ has measure $0$.
-
You will need some countability-type hypotheses on the measure space. For example, it is a standard measure space. If not, build a counterexample in product space $[0,1]^{[0,1]}$. – Gerald Edgar Apr 25 2011 at 13:20
Gerald, thanks, I've corrected the question. – David Berman Apr 25 2011 at 13:22
## 1 Answer
If $X$ is a standard probability space then we may assume it to be the disjoint union of an interval with Lebesgue measure and a countable set of atoms. If $p$ is an atom, then by assumption, $T(p) = p$, since $p$ has positive measure. So none of the atoms can be "bad" points. So we may assume that there are no atoms, so that $X=[0,1]$ and the measure is Lebesgue measure. Now consider all subintervals $I$ of $[0,1]$ with rational endpoints and gather all points in $TI$ which are outside $I$. We get a countable union of measure zero sets, hence a measure zero set. Denote it by $Z$. Now let $x \notin Z$. This implies that $Tx$ belongs to arbitrarily small intervals around $x$ and is thus equal to $x$. The desired result follows.
-
Great! thank you, Mark! – David Berman Apr 25 2011 at 16:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381046891212463, "perplexity_flag": "head"}
|
http://ams.org/bookstore-getitem/item=memo/221/1037
|
Zeta Functions for Two-Dimensional Shifts of Finite Type
Jungchao Ban, National Dong Hwa University, Hualien, Taiwan, Wen-Guei Hu and Song-Sun Lin, National Chiao Tung University, Hsinchu, Taiwan, and Yin-Heng Lin, National Central University, ChungLi, Taiwan
Memoirs of the American Mathematical Society
2013; 60 pp; softcover
Volume: 221
ISBN-10: 0-8218-7290-7
ISBN-13: 978-0-8218-7290-1
List Price: US\$60
Individual Members: US\$36
Institutional Members: US\$48
Order Code: MEMO/221/1037
This work is concerned with zeta functions of two-dimensional shifts of finite type. A two-dimensional zeta function $$\zeta^{0}(s)$$, which generalizes the Artin-Mazur zeta function, was given by Lind for $$\mathbb{Z}^{2}$$-action $$\phi$$. In this paper, the $$n$$th-order zeta function $$\zeta_{n}$$ of $$\phi$$ on $$\mathbb{Z}_{n\times \infty}$$, $$n\geq 1$$, is studied first. The trace operator $$\mathbf{T}_{n}$$, which is the transition matrix for $$x$$-periodic patterns with period $$n$$ and height $$2$$, is rotationally symmetric. The rotational symmetry of $$\mathbf{T}_{n}$$ induces the reduced trace operator $$\tau_{n}$$ and $$\zeta_{n}=\left(\det\left(I-s^{n}\tau_{n}\right)\right)^{-1}$$.
The zeta function $$\zeta=\prod_{n=1}^{\infty} \left(\det\left(I-s^{n}\tau_{n}\right)\right)^{-1}$$ in the $$x$$-direction is now a reciprocal of an infinite product of polynomials. The zeta function can be presented in the $$y$$-direction and in the coordinates of any unimodular transformation in $$GL_{2}(\mathbb{Z})$$. Therefore, there exists a family of zeta functions that are meromorphic extensions of the same analytic function $$\zeta^{0}(s)$$. The natural boundary of zeta functions is studied. The Taylor series for these zeta functions at the origin are equal with integer coefficients, yielding a family of identities, which are of interest in number theory. The method applies to thermodynamic zeta functions for the Ising model with finite range interactions.
• Introduction
• Periodic patterns
• Rationality of $$\zeta_{n}$$
• More symbols on larger lattice
• Zeta functions presented in skew coordinates
• Analyticity and meromorphic extensions of zeta functions
• Equations on $$\mathbb{Z}^{2}$$ with numbers in a finite field
• Square lattice Ising model with finite range interaction
• Bibliography
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 22, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8223329186439514, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/94427/list
|
## Return to Question
4 deleted 39 characters in body
Hi
I am meeting a problem concerning semi-definite positive matrices, and I have no clue concerning them, the classical approaches I know have not given any result, maybe people used to manipulating them could help me...
Call $\mathscr{P}$ the convex set of symmetric SDP matrices $S$ of order $N\geq 1$ such that $|S_{ij}|<1$ . Consider the mapping $$\Phi:S=(S_{ij})\mapsto (\frac{2}{\pi}\arcsin(S_{ij})).$$
I think it is a (well) known fact that $\Phi(\mathscr{P})\subset \mathscr{P}$. My question is the following: Is $\Phi(\mathscr{P})$ convex?
At first I thought that the answer was "no", but I checked that for $N=2$ the answer is "yes". I have no clue as how to test higher dimensions. Does anyone have a suggestion on a method?
Another possible formulation: Given two SDP matrices $A$ and $B$, and $a\in [0,1]$, is the matrix $C$ defined by $$C_{ij}=\sin\left(a\arcsin(A_{ij})+(1-a)\arcsin(B_{ij})\right)$$ also SDP?
3 edited tags
2 added 235 characters in body
Hi
I am meeting a problem concerning semi-definite positive matrices, and I have no clue concerning them, the classical approaches I know have not given any result, maybe people used to manipulating them could help me...
Call $\mathscr{P}$ the convex set of symmetric SDP matrices $S$ of order $N\geq 1$ such that $|S_{ij}|<1$ . Consider the mapping $$\Phi:S=(S_{ij})\mapsto (\frac{2}{\pi}\arcsin(S_{ij})).$$
I think it is a (well) known fact that $\Phi(\mathscr{P})\subset \mathscr{P}$. My question is the following: Is $\Phi(\mathscr{P})$ convex?
At first I thought that the answer was "no", but I checked that for $N=2$ the answer is "yes". I have no clue as how to test higher dimensions. Does anyone have a suggestion on a method?
Another possible formulation: Given two SDP matrices $A$ and $B$, and $a\in [0,1]$, is the matrix $C$ defined by $$C_{ij}=\frac{\pi}{2}\sin\left(a\frac{2}{\pi}\arcsin(A_{ij})+(1-a)\frac{2}{\pi}\arcsin(B_{ij})\right)$$ also SDP?
1
# Mapping a subset of semi-definite matrices through arcsine
Hi
I am meeting a problem concerning semi-definite positive matrices, and I have no clue concerning them, the classical approaches I know have not given any result, maybe people used to manipulating them could help me...
Call $\mathscr{P}$ the convex set of symmetric SDP matrices $S$ of order $N\geq 1$ such that $|S_{ij}|<1$ . Consider the mapping $$\Phi:S=(S_{ij})\mapsto (\frac{2}{\pi}\arcsin(S_{ij})).$$
I think it is a (well) known fact that $\Phi(\mathscr{P})\subset \mathscr{P}$. My question is the following: Is $\Phi(\mathscr{P})$ convex?
At first I thought that the answer was "no", but I checked that for $N=2$ the answer is "yes". I have no clue as how to test higher dimensions. Does anyone have a suggestion on a method?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.971051037311554, "perplexity_flag": "head"}
|
http://dsp.stackexchange.com/questions/tagged/untagged
|
# Tagged Questions
The untagged tag has no wiki summary.
1answer
66 views
### How to calculate auto-correlation of a bpsk modulated signal or how to calculate expectation value of complex exponential function
How to calculate the auto-correlation of a BPSK-modulated signal, or how to calculate the expectation value of a complex exponential function, manually, not by using MATLAB or any other software? For example, if ...
1answer
57 views
### How to distinguish between the different frequency domains?
Sometimes the terms 'Fourier domain', 'complex frequency domain', 'Frequency domain' and 's domain' are used interchangeably. Take those answers here for example: ...
1answer
27 views
### How can I graph a simulation of a FM 16 bits codification?
I have to develop a software where the user inputs a 16 bits binary sequence, which is supposed to be the codification of a FM (or even AM) signal, then I should graph it according to Frequence / Time ...
1answer
65 views
### Locking on to a square wave signal with minimum oversampling
I'm designing a device that will have an IR photodiode connected to a low power microcontroller's ADC pin. At times, another device will be transmitting a 48KHz square wave, and I'd like to be able to ...
2answers
63 views
### Linear model of the modified error function
We can write the error function ($E(w)=1/2\sum_{n=1}^{N}\{y(x_n,w)-t_n\}^2$) as a linear model using its partial derivatives. Is it possible to do the same thing about the modified error function? ...
2answers
405 views
### How to find poles of transfer function by looking at the step response?
How to find poles of transfer function by looking at the step response? Given a step response graph like such: How would I find the sketch for its poles on the complex plane? The only thing I can ...
1answer
62 views
### Clarifying some notation in Bishop book
In Bishop book, page 4, section 1.1, there's a notation I don't seem to understand what's meant by it. The whole paragraph, with which the section begins, is: We begin by introducing a simple ...
1answer
104 views
### How do I implement IFFT to filter low frequencies from a group,after Freq detection by FFT?
I use a combination of Visual Basic programming interfaced with FAMOS, which is DSP software like MATLAB but much easier. I am trying to separate signals from a group, which requires filtering each ...
1answer
114 views
### Is possible to get overshoot on bessel filter?
I tried to use the Bessel filter from http://www-users.cs.york.ac.uk/~fisher/mkfilter/trad.html, but in the case of a large dataset I have overshoot. As I know, a Bessel filter shouldn't have overshoot, so is the code ...
2answers
73 views
### Resampling necessary?
I have a file that is composed of n data records with a sample rate s. Each of these records has a duration d. Now due to postprocessing I have to change the duration to d=d/3 thus afterwards I'll ...
1answer
115 views
### basic bandpass filter
I am attempting to implement a basic bandpass filter, but the center frequency of my filter seems to be irrelevant: I can change it all I want but it has no effect on the filter. Where am I going ...
3answers
761 views
### Optimized Ansi C libraries for DSP
I am new to DSP. Where is a good resource for open source DSP algorithms? MATLAB is great at making protos, but once we move to C coding, it takes time and we end up making too many mistakes. I would ...
1answer
208 views
### Why does diagonal loading of a covariance matrix make an adaptive beamformer more robust in the case of a perturbed array?
It has been shown that 'diagonal loading' a covariance matrix derived for an adaptive beamformer can improve robustness of the beamformer when the antenna array is perturbed, albeit at the expense of ...
2answers
329 views
### Magnitude of Power Spectral Density with Different Sampling Frequency
For exponential signals (sine or cosine), if the sampling frequency $f_s$ is equal to the length of signal, $N$, the magnitude of psd for each sine signal is proportional to amplitude in time domain, ...
1answer
436 views
### How to avoid denormalized numbers?
The same floating-point AMD X86-64 digital signal processing system mentioned in my previous question has a problem where it sometimes slows down substantially when signals attain values very near ...
2answers
130 views
### What are less computationally demanding alternatives to the Viterbi Decoder?
What are less computationally demanding alternatives to the Viterbi Decoder? Ideally what I would like is a list of the most commonly used approximate methods, along with brief pros and cons.
4answers
860 views
### How do I extrapolate a 1D signal?
I have a signal of some length, say 1000 samples. I would like to extend this signal to 5000 samples, sampled at the same rate as the original (i.e., I want to predict what the signal would be if I ...
1answer
363 views
### How does adaptive Huffman coding work?
Huffman coding is a widely used method of entropy coding used for data compression. It assumes that we have complete knowledge of a signal's statistics. However, there are versions of Huffman coding ...
1answer
351 views
### What approximation techniques exist for the square super-root function?
I need to implement an approximation to the inverse of $x^x$, i.e. the square super-root (ssrt) function. For example, $\mathrm{ssrt}(2) \approx 1.56$ means that $1.56^{1.56} \approx 2$. I'm not as ...
3answers
524 views
### How to check the FFT results of a sine wave?
I have been given an audio file (sine wave) 1000Hz as an input to my FFT algorithm. I have got 8192 power spectrum samples in a array. What is the best and easiest way to check whether my output is ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9133567214012146, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/12045?sort=votes
|
## What are fixed points of the Fourier Transform
The obvious ones are 0 and $e^{-x^2}$ (with annoying factors), and someone I know suggested hyperbolic secant. What other fixed points (or even eigenfunctions) of the Fourier transform are there?
-
## 3 Answers
The following is discussed in a little more detail on pages 337-339 of Frank Jones's book "Lebesgue Integration on Euclidean Space" (and many other places as well).
Normalize the Fourier transform so that it is a unitary operator $T$ on $L^2(\mathbb{R})$. One can then check that $T^4=1$. The eigenvalues are thus $1$, $i$, $-1$, and $-i$. For $a$ one of these eigenvalues, denote by $M_a$ the corresponding eigenspace. It turns out then that $L^2(\mathbb{R})$ is the direct sum of these $4$ eigenspaces!
In fact, this is easy linear algebra. Consider $f \in L^2(\mathbb{R})$. We want to find $f_a \in M_a$ for each of the eigenvalues such that $f = f_1 + f_{-1} + f_{i} + f_{-i}$. Using the fact that $T^4 = 1$, we obtain the following 4 equations in 4 unknowns:
$f = f_1 + f_{-1} + f_{i} + f_{-i}$
$T(f) = f_1 - f_{-1} +i f_{i} -i f_{-i}$
$T^2(f) = f_1 + f_{-1} - f_{i} - f_{-i}$
$T^3(f) = f_1 - f_{-1} -i f_{i} +i f_{-i}$
Solving these four equations yields the corresponding projection operators. As an example, for $f \in L^2(\mathbb{R})$, we get that $\frac{1}{4}(f + T(f) + T^2(f) + T^3(f))$ is a fixed point for $T$.
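As a quick numerical illustration (not from the original answer): the discrete Fourier transform with unitary normalization also satisfies $T^4=1$, so the same averaging produces fixed points of the DFT.

```python
import numpy as np

n = 256
x = np.linspace(-8, 8, n)
f = np.exp(-np.abs(x)) * np.cos(x)          # an arbitrary test function

F = lambda g: np.fft.fft(g, norm="ortho")   # unitary DFT, satisfies F^4 = I

# project onto the eigenvalue-1 eigenspace: (1/4)(f + Ff + F^2 f + F^3 f)
p = (f + F(f) + F(F(f)) + F(F(F(f)))) / 4

print(np.allclose(F(p), p))                 # True: p is a fixed point of F
```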
-
2
To add a little detail: The four eigenspaces are the closed linear spans of concrete functions, called Hermite functions, that are of the form (Hermite polynomial)e^{-x^2/2}. So you get a lot of fixed points of the Fourier transform, namely everything that is the limit in mean square of linear combinations of those of the Hermite functions that belong to the eigenvalue 1. – engelbrekt Jan 17 2010 at 0:14
@engelbrekt: did our answers cross? I think we must have commented at about the same time – Yemon Choi Jan 17 2010 at 4:21
@Choi: Yes, they crossed. I remember noticing that. – engelbrekt Jan 17 2010 at 7:15
Following on a little from Andy's comment, Hermite polynomials (multiplied by a Gaussian factor) give a basis of eigenvectors for the FT as an operator on $L^2({\mathbb R})$
-
A very important fixed point of the Fourier transform that isn't in $L^2$ is the Dirac comb distribution, informally $$D(x) = \sum_{n\in Z} \delta(x-n),$$ or more properly, defined by its pairing on smooth functions of sufficient decay by $$\langle D, f\rangle = \sum_{n\in Z} f(n).$$ The fact that $D$ is equal to its Fourier transform is really just the Poisson summation formula.
(I wrote an argument explaining why $D$ should be its own Fourier transform in an answer to another question: http://mathoverflow.net/questions/14568/truth-of-the-poisson-summation-formula/14580#14580)
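As an aside, Poisson summation itself is easy to check numerically. A minimal sketch, using the convention $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$, under which $f(x)=e^{-ax^2}$ has $\hat f(\xi)=\sqrt{\pi/a}\,e^{-\pi^2\xi^2/a}$:

```python
import numpy as np

a = 0.7
n = np.arange(-50, 51)
lhs = np.exp(-a * n**2).sum()                                   # sum of f(n)
rhs = np.sqrt(np.pi / a) * np.exp(-np.pi**2 * n**2 / a).sum()   # sum of f-hat(k)
print(lhs, rhs)   # the two sums agree, as Poisson summation predicts
```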
-
1
These are tempered distributions, and Andy's argument carries over verbatim to these. – Robin Chapman Sep 18 2010 at 6:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9094357490539551, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/250136/what-does-it-mean-to-say-a-random-variable-is-non-negative/250139
|
# What does it mean to say a random variable is non-negative?
How would you define a random variable to be non-negative ???
What are some examples of a Negative random variable ???
-
2
A good example of a negative random variable is my stock portfolio. – copper.hat Dec 3 '12 at 18:08
2
Think of a random variable as a measurement of some sorts. Some measurements are always positive (eg, the number shown when you throw a die), some are always negative (eg, actual car speed less the speed shown on your properly functioning speedometer, actually this example is non-positive), some are neither (distance walked today less the distance walked yesterday). – copper.hat Dec 3 '12 at 18:15
## 4 Answers
$X$ is non-negative just means that $P(X<0)=0$. The opposite of "non-negative" is not "negative," just that the random variable might take a negative value, that is $P(X<0)>0$.
A "negative" random variable is one that is always negative - that is: $P(X<0)=1$. Similarly, for "positive," $P(X>0)=1$. Note that a positive random variable is necessarily non-negative. But a non-negative random variable can be zero.
-
so a normally distributed random variable is not non-negative then ??? – user1769197 Dec 3 '12 at 18:10
Definitely, no normal distribution is non-negative. – Thomas Andrews Dec 3 '12 at 18:12
1
@user1769197 To use less negations, that is: normal distribution on the real numbers has to yield negative values. This is because the bell shape tapers off forever in both directions, including the negative direction. – rschwieb Dec 3 '12 at 18:14
A non-negative random variable is one which takes values greater than or equal to zero with probability one, i.e., $X$ is non-negative if $\mathbb{P}(X \geq 0) = 1$.
A negative random variable is one which takes values less than zero with probability one, i.e., $Y$ is negative if $P(Y < 0) = 1$. An example would be a random variable which is equal to $-1$ with probability $1/2$ and equal to $-6$ with probability $1/2$, or if $Y \sim \operatorname{Exponential}(\lambda)$ then $-Y$ is a negative random variable (since $Y$ is a positive random variable).
Note in particular that saying a random variable is non-negative is not the opposite of saying it is negative.
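A small simulation sketch of the exponential example above (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10_000)   # Y ~ Exponential(1) is positive
print((y > 0).all(), (-y < 0).all())          # True True: -Y is negative
```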
-
Suppose your random variable is your net return in dollars on a game in a casino.
If you pay money to play and lose it all (or lose part of it) the variable would be negative.
If you win more than you bet, your return will be positive.
Conceivably, if the game is rigged for you to always lose, all of the possible (nonzero probability) outcomes could result in you losing money. That could be called a "negative random variable".
-
And thus, to answer the OP's other question, if there's zero chance that you walk away with less money than you come with, the random variable is non-negative. – Brett Frankel Dec 3 '12 at 18:15
I am super curious why this solution might warrant a downvote. – rschwieb Dec 3 '12 at 18:19
1
+1 Seems like a good example to me. – copper.hat Dec 3 '12 at 18:22
A random variable $X$ is non-negative precisely if $$\Pr(X\ge0)=1.$$
The number of times you're struck by lightning this afternoon is an example.
The time you have to wait for the bus is another.
Viewing $X$ as a function whose domain is a probability space, it means the range of the function is $[0,\infty)$, or sometimes $[0,\infty]$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215816855430603, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/pre-calculus/53630-parabolic-arch-bridge.html
|
# Thread:
1. ## Parabolic Arch Bridge
A bridge is to be built in the shape of a parabolic arch and is to have a span of 100 feet. The height of the arch, a distance of 40 feet from the center, is to be 10 feet. Find the height of the arch at its center.
2. Originally Posted by magentarita
A bridge is to be built in the shape of a parabolic arch and is to have a span of 100 feet. The height of the arch, a distance of 40 feet from the center, is to be 10 feet. Find the height of the arch at its center.
Draw the parabola with the y-axis as axis of symmetry and the bed of the road will be the x-axis. See diagram.
The points (-50, 0) and (50, 0) lie on the parabola on the x-axis since the span is 100.
The points (-40, 10) and (40, 10) also lie on the parabola.
We use $(x-h)^2=4p(y-k)$ for our equation of a parabola with vertex (h, k) since the axis of symmetry is vertical. We know our vertex is at (0, k). We need to find k.
Substituting point (50, 0) into this equation, we get:
$(50-0)^2=4p(0-k)$
$\boxed{2500=0p-4pk}$
Substituting point (40, 10) into this equation, we get:
$(40-0)^2=4p(10-k)$
$\boxed{1600=40p-4pk}$
Use the two boxed equations to solve for p.
$2500= \ \ 0p-4pk$
$1600=40p-4pk$
Subtract the two equations to get:
$900=-40p$
$p=-\frac{45}{2}$
Now, to find k, we substitute p back into one of our boxed equations.
$2500=0\left(-\frac{45}{2}\right)-4\left(-\frac{45}{2}\right)k$
$2500=90k$
$k=\frac{250}{9} \approx 27.8$ feet.
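A quick check of that arithmetic (writing the parabola as $y = k(1 - x^2/50^2)$, which vanishes at $x=\pm 50$ and has vertex height $k$):

```python
# height 10 at x = 40  =>  k * (1 - 40**2 / 50**2) = 10
k = 10 / (1 - 40**2 / 50**2)
print(k)  # 27.777... = 250/9 feet
```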
3. ## ok....
Originally Posted by masters
[quotes the full solution from post #2]
4. Originally Posted by magentarita
You are toooooo kind! Blush Blush
5. ## ok.....
Originally Posted by masters
[quotes the full solution from post #2]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 36, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.897377610206604, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/76590-compact-sets.html
|
# Thread:
1. ## Compact sets
Find an infinite collection $\{S_n : n \in \mathbb{N}\}$ of compact sets in $\mathbb{R}$ such that the union $\bigcup_{n=1}^\infty S_n$ is compact.
2. Are you sure that you have copied this correctly? As written, it is almost trivial.
$S_n = \left[ -\frac{1}{n},\frac{1}{n} \right],\;\; N = \mathbb{Z}^+$
3. I had the proper equation all written out but it didn't transfer over to the post, so I wrote it out in words. I'm sure that's how it reads in words.
4. Originally Posted by noles2188
I had the proper equation all written out but it didn't transfer over to the post, so I wrote it out in words. I'm sure that's how it reads in words.
Do you know that my example gives you this?
$$\bigcup\limits_{n = 1}^\infty \left[ \frac{-1}{n},\frac{1}{n} \right] = \left[ -1,1 \right]$$
5. In fact, I just read it again and the end should read "not compact" instead of "compact". Sorry for the confusion.
6. Originally Posted by noles2188
In fact, I just read it again and the end should read "not compact" instead of "compact". Sorry for the confusion.
Each singleton set is compact in the standard topology on $\mathbb{R}$.
Take an infinite union of singleton sets: $\bigcup_{k \in \mathbb{N}}\{k\}$ is an infinite union of compact sets, and it equals the set of natural numbers $\mathbb{N}$.
Now take the open intervals $I_x=(x-1/2, x+1/2)$ for $x \in \mathbb{N}$. Then $C=\{I_x:x \in \mathbb{N}\}$ is an open cover for $\mathbb{N}$, but it has no finite subcover.
You can find other examples in other topological spaces, such as a discrete topology on N.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441757798194885, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/172669/for-what-value-of-h-the-set-is-linearly-dependent/172670
|
# For what value of h the set is linearly dependent?
For what value of $h$ set $(\vec v_1 \ \vec v_2 \ \vec v_3)$ is linearly dependent? $$\vec v_1=\left[ \begin{array}{c} 1 \\ -3 \\ 2 \end{array} \right];\ \vec v_2=\left[ \begin{array}{c} -3 \\ 9 \\ -6 \end{array} \right] ;\ \vec v_3=\left[ \begin{array}{c} 5 \\ -7 \\ h \end{array} \right]$$
Attempt: After row reducing the augmented matrix of $A\vec x=\vec 0$ where $A=(\vec v_1 \ \vec v_2 \ \vec v_3)$:
$$\begin{bmatrix} 1 & -3 & 5 & 0 \\ -3 & 9 & -7 & 0 \\ 2 & -6 & h & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & -3 & 5 & 0 \\ 0 & 0 & 8 & 0 \\ 0 & 0 & h-10 & 0 \end{bmatrix}$$
I am not sure whether the set is linearly dependent when $h=10$ or for any $h$. Help please.
-
The set is always linearly dependent since $v_2 = -3v_1.$ – user2468 Jul 19 '12 at 2:27
@J.D. so it is enough for the set of three vectors to have two vectors that are collinear to be a linearly dependent set, right? – Dostre Jul 19 '12 at 3:39
1
Indeed. A quick geometric reminder for yourself: the basis in $\Bbb{R}^3.$ If you pick two vectors collinear in the direction of the $x$-axis & a vector in the $z$ direction, would you be able to describe every vector in $\Bbb{R}^3$? Of course not. – user2468 Jul 19 '12 at 3:45
## 1 Answer
That reduced matrix shows you that the set of vectors is linearly dependent for every value of $h$: the second column never has a pivot, so the homogeneous system has a free variable, and hence infinitely many solutions, whatever $h$ is. There is no value of $h$ that gives it exactly one solution.
Indeed, you can see this directly from the vectors themselves: $v_2=-3v_1$.
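For anyone who wants to verify this computationally, a quick sympy sketch (not part of the original answer): the determinant vanishes identically in $h$, so the columns can never be independent.

```python
import sympy as sp

h = sp.symbols('h')
A = sp.Matrix([[1, -3, 5],
               [-3, 9, -7],
               [2, -6, h]])
print(A.det())  # 0 for every h, so the vectors are always linearly dependent
```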
-
I think you may have confused the data, @Brian: if $\,h=10\,$ then the third row becomes all zero and, thus, the original set of three vectors is linearly dependent, as asked. True, if $\,h\neq 10\,$ then the homogeneous system is inconsistent, but we don't really care about that as nothing was asked about solutions of linear systems, homogeneous or non-homog. – DonAntonio Jul 19 '12 at 2:17
In fact, I think the OP confused himself by writing down an augmented matrix as if he wanted to solve some linear system, whereas a $\,3\times 3\,$ matrix with the vectors' components is enough to find out whether they're l.i. or not. And then yes, as you wrote: for any value of $\,h\,$ the three vectors are l.d. This is also easy to check calculating the easy determinant of that square matrix, which is zero no matter what $\,h\,$ is. – DonAntonio Jul 19 '12 at 2:21
@DonAntonio: I suspect that you’re right about the confusion, but you missed the point of my answer. If the three vectors were linearly independent for some $h$, then for that $h$ the homogeneous system would have only the trivial solution. But there is no value of $h$ for which this is the case, so for every $h$ the vectors must be linearly dependent. – Brian M. Scott Jul 19 '12 at 2:24
I think I got it, @Brian. I just meant the phrase "...so there is no value of h that gives it exactly one solution." seemed to imply we were looking for solutions of some (homogeneous) system of linear eq's and this seemed to me going astray from the question's point, but I see you followed the OP's work he himself showed. +1, anyway. – DonAntonio Jul 19 '12 at 2:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9580058455467224, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/12804/list
|
## Return to Question
# Large cardinal axioms and Grothendieck universes

A cardinal $\lambda$ is weakly inaccessible iff a. it is regular (i.e. a set of cardinality $\lambda$ can't be represented as a union of sets of cardinality $<\lambda$ indexed by a set of cardinality $<\lambda$) and b. for all cardinals $\mu<\lambda$ we have $\mu^+<\lambda$, where $\mu^+$ is the successor of $\mu$. Strongly inaccessible cardinals are defined in the same way, with $\mu^+$ replaced by $2^\mu$. Usually one also adds the condition that $\lambda$ should be uncountable.
As far as I understand, a "large cardinal" is a weakly inaccessible cardinal with some extra properties. In set theory one considers various "large cardinal axioms", which assert the existence of large cardinals of various kinds. Notice that these axioms are quite different from, say, the Continuum Hypothesis. In particular, one can't deduce the consistency of ZFC + there exists at least one (uncountable) weakly inaccessible cardinal from the consistency of ZFC, see e.g. Kanamori, The Higher Infinite, p. 19. I.e., assuming ZFC is consistent, these axioms cannot be shown independent of ZFC.
The "reasonable" large cardinal axioms seem to be ordered according to their consistency strength, as explained e.g. here http://en.wikipedia.org/wiki/Large_cardinal. This is not a theorem, just an observation. A list of axioms according to their consistency strength can be found e.g. on p. 472 of Kanamori's book mentioned above. (Noticeably, it starts with "0=1", which is a very strong axiom indeed.)
Large cardinals appear to occur seldom in "everyday" mathematics. One such instance when they occur is when one tries to construct the foundations of category theory. One of the ways to do that (and the one that seems (to me) to be the most attractive) is to start with the set theory and to add Grothendieck's Universe axiom, which states that every set is an element of a Grothendieck universe.
(As an aside remark, let me mention another application of large cardinal axioms: incredibly, the fastest known solution of the word problem in braid groups originated from research on large cardinal axioms; the proof is independent of the existence of large cardinals, although the first version of the proof did use them. See Dehornoy, From large cardinals to braids via distributive algebra, Journal of knot theory and ramifications, 4, 1, 33-79.)
Translated into the language of cardinals, the Universe axiom says that for any cardinal there is a strictly larger strongly inaccessible cardinal. I have heard several times that this is pretty low on the above consistency strength list, but was never able to understand exactly how low. So I would like to ask: does the existence of a (single) large cardinal of some kind imply (or is equivalent to) the Universe axiom?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528695940971375, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/44359/range-of-the-radon-transform/44470
|
## Range of the Radon Transform
Let us consider the Radon transform in two dimensions:
$$\tag{1}Rf(r,\theta):=\int\limits_{-\infty}^{\infty} f(r\cos\theta-t\sin\theta,r\sin\theta+t\cos\theta) dt,$$
where $r\in\mathbb{R}$ and $0\leq\theta\leq \pi$. There is a well known theorem about the range of the transform.
Theorem. A function $g(r,\theta)$ can be represented as a Radon transform of some function $f(x,y)$ (i.e. $g=R[f]$) if and only if for all integers $n\geq0$ $$\int\limits_{-\infty}^{\infty} r^ng(r,\theta) dr$$ is a homogeneous polynomial of degree $n$ in $\cos\theta$ and $\sin\theta$.
Obviously, if $g(r,\theta)$ belongs to the range of the Radon transform then the inverse Radon transform of the function $g(r,\theta)$ is $f(x,y)$.
Now let us consider a function which DOES NOT belong to the range of the transform.
QUESTION: What we would receive if we apply the inverse Radon transform to a function not from the range of the transform?
For example, consider the function $g(r,\theta):= e^{-r^2}$ if $0\leq\theta\leq \pi/2$ and $g(r,\theta):= e^{-r^2(1-\cos\theta\sin\theta)}$ if $\pi/2\leq\theta\leq\pi$. This function does not belong to the range of the Radon transform. Then, on the one hand, there is no function $f$ such that $g=R[f]$. On the other hand, $g= R[ R^{-1}g ]$.
What's wrong with this paradox?
Thanks!
UPDATE: Let us notice that $R[R^{-1}g]$ is defined correctly, but it is not equal to $g$.
Indeed, if $g=R[R^{-1}g]$, then
$$\int\limits_{-\infty}^{\infty} r^ng(r,\theta) dr = \int\limits_{-\infty}^{\infty} r^n R[R^{-1}g] (r,\theta) dr=$$
$$=\int\int r^n [R^{-1}g] (r\cos\theta−t\sin\theta,r\sin\theta+t\cos\theta)drdt=$$
$$=\int\int (u\cos\theta+v\sin\theta)^n [R^{-1}g] (u,v)dudv,$$
which is a homogeneous polynomial of $\cos\theta$ and $\sin\theta$ (we just have to expand the brackets). On the other hand it is NOT a homogeneous polynomial (by assumption). Therefore $g\neq R[R^{-1}g]$.
-
Without specifying the domain of the Radon transform, it does not make sense to talk about its range. My feeling is that this is the root of your supposed paradox – Yemon Choi Oct 31 2010 at 19:41
Yemon, I don't quite understand your comment. f(x,y) is defined on $\mathbb{R}^2$ and Radon transform, $R[f](r,\theta)$ is defined on $\mathbb{R}\times [0;\pi]$. – Oleg Oct 31 2010 at 20:07
1
Actually, no. If you define the domain to be all functions for which the integral (1) converges, your "well known theorem" is false. The Radon and inverse Radon transforms establish a bijection between Schwartz functions on $R^2$ and Schwartz functions on $S^1 \times R$ whose moments satisfy the homogeneous polynomial condition. See chapter 1 of Helgason's book www-math.mit.edu/~helgason/Radonbook.pdf See also Dirk's answer below. So most likely when you take the inverse transform of your function, you get something that decays only slightly faster than $|x|^{-2}$. – Willie Wong Nov 1 2010 at 19:42
1
No, if $g$ is such that $R^{-1}$ is well defined and $R^{-1}g$ is such that $RR^{-1}g$ is well defined, $RR^{-1}g = g$ by definition. The problem is that $R: \mathcal{S}(\mathbb{R}^2) \to \mathcal{S}(\mathbb{P})$ with the image being functions satisfying the homogeneous polynomial condition, while $R$ may still send a bigger space (say, a space of functions for which the trace on all lines are defined and absolutely integrable) to something else. For comparison, think of the Fourier transform. It is a bijection of Schwarz functions, but it also is a bijection of $L^2$ function with itself. – Willie Wong Nov 1 2010 at 23:36
1
The last integral does not converge. – Willie Wong Nov 2 2010 at 21:44
## 1 Answer
Probably you refer to some theorem in "Mathematics of Computerized Tomography" by Frank Natterer (e.g. Theorem 4.2)? Then you are assuming that the domain is $\mathcal{S}$ and, if I remember correctly, in that book this denotes the Schwartz space of rapidly decaying $C^\infty$-functions. Hence your paradox is resolved by the fact that $R^{-1} g$ is not a Schwartz function.
-
Hello Dirk! Thanks for this explanation. But anyway, it is easy to see, that one can apply a Radon transform to function $R^{-1}g$ and $RR^{-1}g$ is defined correctly. What is your opinion, is $RR^{-1}g$ equal to $g$ (see update of the question)? Thanks! – Oleg Nov 2 2010 at 20:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221506714820862, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/276418/how-to-apply-response-time-graph-on-sensor-value
|
# How to apply response time graph on sensor value?
The following is LM35 Thermal response time in air
The following is temperature reading from LM35 sensor. Horizontal axis is time in sec.
So this is not "real-time" temperature graph. The question is having thermal response graph, how to best adjust real readings from sensor to get "real-time" values?
-
Are you asking how to deal with the fact that the sensor output is a filtered version of the actual temperature? If so, you could try and build a model of the sensor (first order may be sufficient), characterizing the model (ie, extracting teh parameters) and then some form of observer/estimator. Kalman may be your friend here. Some delay will be inevitable. If your application depends on having real-time values, and a bad estimate can have bad impact, then you need a better sensor or professional advice... – copper.hat Jan 12 at 18:28
@copper.hat: filtered or better to say delayed. The sensor will stabilize temperature after approximately after 300 seconds. If you can form your answer into something more practical I would love to read it and try if I can. – Pablo Jan 12 at 18:31
@copper.hat: all common sensors(including this high precision one) have what is called "time constant", time required for equilibrium, so this I should deal with this calculations. – Pablo Jan 12 at 18:34
Pablo, there are too many issues to give a comprehensive answer here. A simple model of the sensor output would have dynamics $\dot{x} = \frac{1}{T_{\text{sensor}}}(u-x)$, where $x$ is the measurement and $u$ is the actual temperature and $T_{\text{sensor}}$ is the (presumably) thermal time constant. You are trying to estimate $u$ given $x$. This is fairly standard stuff in control systems engineering. However, depending on your application characteristics (maybe the inputs change very slowly, etc.), you can often do better (where better means cheaper, simpler, faster...). – copper.hat Jan 12 at 18:56
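To make the model in the last comment concrete, here is a minimal sketch (every number here is an assumption, including the 300 s time constant): simulate the first-order lag and invert it via $u \approx x + T\dot{x}$ to estimate the true temperature from the delayed readings. Differentiation amplifies noise, so in practice this would be combined with filtering (e.g. the Kalman filter suggested above).

```python
import numpy as np

T, dt = 300.0, 1.0                      # assumed time constant (s), sample period (s)
t = np.arange(0.0, 1200.0, dt)
u = 25.0 + 5.0 * (t > 100)              # "true" temperature: a step at t = 100 s

# simulate the sensor x' = (u - x) / T with forward Euler
x = np.empty_like(t)
x[0] = 25.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (u[k] - x[k]) / T

u_est = x + T * np.gradient(x, dt)      # invert the lag: u ~ x + T dx/dt
print(np.abs(u_est[200:] - u[200:]).max())  # small reconstruction error
```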
## 1 Answer
I would think that you would approach this from several angles.
1. As you can see from their three graphs, the conditions they used to generate them are very controlled and only 'somewhat' defined; that is, they used different parts soldered to different printed circuit boards (PCBs) with various dimensions. Have you duplicated these conditions for your readings, including times, the PCB, the voltages, etc.?
2. These curves are 'typical' responses under those very controlled conditions and your mileage may vary because there are many factors in play.
3. If you look at the LM35 spec sheet, you see that these are called 'typical' charts and there can be a pretty wide variability as allowed by the specifications of the part.
4. Additionally, in that spec sheet, you see that they 'guarantee' 'Accuracy versus Temperature' and I would recommend trying to validate that the part is meeting those criteria as the typical graphs may be difficult to recreate since there are so many variables.
I would also suggest asking the part manufacturer for more details of the hardware, circuits, PCBs, et. al. in order to duplicate those results. Maybe even the Application Engineers, but I doubt you would make much headway.
However, validating the 'Guaranteed' performance numbers and playing with those characteristics in order to massage your measurements will likely be the most fruitful approach.
Regards
-
+1 (I read this post yesterday, like a couple others, but held off lest I tip the scale in $\uparrow$ 's ;-) – amWhy May 9 at 0:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518360495567322, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/123101/discrepancy-in-the-application-of-the-identity-sum-n-infty-infty-fn
|
# Discrepancy in the Application of the Identity $\sum_{n=-\infty}^\infty f(n) = -\sum_{j=1}^l \operatorname{Res}(g;a_j)$
The theorem in its entirety is as follows:
Let $a_1,\ldots,a_l\in\mathbb{C}$ be pairwise different non-integral numbers. Let f be an analytic function in $\mathbb{C}-\{a_1,\ldots,a_l\}$ and set $g(z):=\pi \cot(\pi z)f(z)$, such that $|z^2f(z)|$ is bounded outside a suitable compact set. Then:
$$\sum_{n=-\infty}^\infty f(n) = -\sum_{j=1}^l \operatorname{Res}(g;a_j)$$
The book wants me to use this theorem to prove that $\sum_{n=1}^\infty \frac{1}{n^{2}} = \frac{\pi^2}{6}$. Everything points to me setting $f(z)=\frac{1}{z^2}$, but the pole of $\frac{1}{z^2}$ is zero which is an integral number and is thus a point at which f must be analytic, thus the theorem cannot be applied, what am I missing here? Thanks.
Edit: I guess I'm suppose to slightly modify the theorem so it works for an overlapping pole, I probably don't need clarification on this after all.
-
## 1 Answer
Hint: you are on the right track. Try the function $$f(z) = \frac{1}{z^2 +a^2}$$ which coincides with your guess in the limit $a\to 0$. The additional term in the sum (due to the pole of $g(z)$ at $z=0$) can simply be subtracted. The rest of $\sum_n f(n)$ is two times the requested sum. Take the limit $a\to0$ and you are done...
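For the record, a sympy check of this hint (illustrative only): the residue computation gives $\sum_{n\in\mathbb{Z}}\frac{1}{n^2+a^2}=\frac{\pi}{a}\coth(\pi a)$; dropping the $n=0$ term, halving, and letting $a\to 0$ recovers $\pi^2/6$.

```python
import sympy as sp

a = sp.symbols('a', positive=True)
total = sp.pi * sp.coth(sp.pi * a) / a   # sum over all integers n of 1/(n^2 + a^2)
s = (total - 1 / a**2) / 2               # remove the n = 0 term, keep n >= 1
print(sp.limit(s, a, 0))                 # pi**2/6
```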
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517836570739746, "perplexity_flag": "head"}
|
http://unapologetic.wordpress.com/2008/09/04/the-problem-with-pointwise-convergence/?like=1&_wpnonce=d19428bcb3
|
# The Unapologetic Mathematician
## The Problem With Pointwise Convergence
I wrote this up yesterday between my sections of college algebra, but forgot to post it afterwards. Oops.
We’ve got a problem with the topology of pointwise convergence. The subspace of continuous functions isn’t closed. What does that mean? It means that if we take a sequence of continuous functions, their pointwise limit may not be continuous.
Here’s an example in the real numbers. Let $f_n(x)=\frac{x^{2n}}{1+x^{2n}}$, which is a sequence of well-defined continuous functions on the entire real line. But if we take the pointwise limit $f(x)=\lim\limits_{n\rightarrow\infty}f_n(x)$ we find that $f(x)=0$ for $|x|<1$, that $f(x)=1$ for $|x|>1$, and that $f(x)=\frac{1}{2}$ for $x=\pm1$. So the functions in the sequence are continuous at $x=\pm1$, but the limiting function isn’t. It would be one thing if the sequence just failed to converge at some points — closedness doesn’t require all sequences to converge — but the pointwise limit clearly exists, and it fails to be continuous.
What we need is a stronger sense of convergence: one in which fewer sequences converge in the first place, and hopefully one in which the continuous functions turn out to be closed. But it should also obey the same definition as that of the pointwise limit when it does exist. And to find it we’ll need to recast the question of continuity in the limit.
Remember that a function is continuous at a point $x_0$ if it agrees with its limit there. That is, if $\lim\limits_{x\rightarrow x_0}f(x)=f(x_0)$. But the function $f$ should be the pointwise limit of the sequence $f_n$: $f(x)=\lim\limits_{n\rightarrow\infty}f_n(x)$. And each of these functions is continuous: $\lim\limits_{x\rightarrow x_0}f_n(x)=f_n(x_0)$. Putting these together, the condition for continuity in the limit is
$\lim\limits_{x\rightarrow x_0}\lim\limits_{n\rightarrow\infty}f_n(x)=\lim\limits_{n\rightarrow\infty}\lim\limits_{x\rightarrow x_0}f_n(x)$.
So our question is really about when we can exchange limits. For which sequences of functions do the dependence on $x$ and that on $n$ play well enough together to allow these limits to be exchanged? We’ll answer that question tomorrow.
Posted by John Armstrong | Analysis, Functional Analysis
## 3 Comments »
1. When I saw the title of this post, my immediate reaction was, “How is he going to fit all the problems into one post?!”
Comment by hilbertthm90 | September 4, 2008 | Reply
2. Well, sure there’s a lot of problems. But this is the one that motivates the next step!
Comment by | September 4, 2008 | Reply
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 18, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335577487945557, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/233350/is-the-discrete-topology-on-x-the-only-one-containing-all-infinite-subsets-of
|
# Is the Discrete Topology on $X$ the Only One Containing All Infinite Subsets of $X$?
Prove or find counterexamples.
Let $X$ be an infinite set and $T$ be a topology on $X$. If $T$ contains every infinite subset of $X$, then $T$ is the discrete topology.
-
6
Can you find two infinite sets whose intersection is finite? – Gerry Myerson Nov 9 '12 at 5:02
## 2 Answers
Suppose $T$ is a topology containing all the infinite subsets of $X$. I claim every finite subset also belongs to $T$, and so $T$ is the discrete topology.
To see this, let $A$ be any finite subset of $X$. Since $X$ is infinite, $X \setminus A$ is infinite. Partition $X \setminus A$ into two disjoint infinite subsets $Y_1$ and $Y_2$ (this can always be done if the Axiom of Choice is assumed).
Now, $Y_1 \cup A$ and $Y_2 \cup A$ are both infinite sets, so they belong to $T$. Moreover, their intersection is precisely $A$. Since topologies are closed under finite intersection, it must be the case that $A$ belongs to $T$. Since $A$ was an arbitrary finite set, the claim follows.
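Here is a finite toy illustration of the set identity being used (finite only so it can be printed; the argument itself needs $X$ infinite):

```python
X = set(range(20))                         # stand-in for the infinite set
A = {3, 7}                                 # the finite set we want to be open
rest = sorted(X - A)
Y1, Y2 = set(rest[::2]), set(rest[1::2])   # partition the complement
print((Y1 | A) & (Y2 | A) == A)            # True: A is the intersection of the two
```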
-
1
Ever heard of amorphous sets? We can't necessarily partition infinite sets into two disjoint infinite subsets without reliance on some choice principle. – Cameron Buie Nov 9 '12 at 5:36
I think it is safe to assume the question takes the Axiom of Choice for granted. – madprob Nov 9 '12 at 5:44
And we rely on choice principles all the time, and what harm does it do? – Gerry Myerson Nov 9 '12 at 5:44
@madprob: I suspect you're right. – Cameron Buie Nov 9 '12 at 5:45
1
@GerryMyerson: Certainly, no harm is done by using them. On the other hand, no harm is done by noting when they're used, just in case they aren't meant to be for some reason. – Cameron Buie Nov 9 '12 at 5:45
Let's suppose that $X$ is an amorphous set, in the cofinite topology. By the amorphous nature of $X$, every infinite subset of $X$ has a finite complement, so every infinite subset of $X$ is open. Thus, the open subsets of $X$ are precisely the empty set, $X$, and the infinite proper subsets of $X$. However, this is not discrete, as (for example) no singleton subset of $X$ is open.
Now, if we have enough Choice so that there aren't any amorphous sets, then Austin's approach is the way to go for a proof. Otherwise, the above serves as a counterexample.
Remark: I don't intend this to be a "competing" answer with Austin's. I intended merely to elucidate why I brought up the Axiom of choice and why he then felt compelled to make mention of it in his answer. He answered before I did, it's a good answer, and you didn't pre-specify how much (if any) Choice you're using. If you like mine, feel free to upvote, but if you're debating which of our answers to accept, go with his.
-
Thanks for your comments. – M.Sina Nov 9 '12 at 7:42
You're welcome. Hopefully I didn't just confuse things for you. – Cameron Buie Nov 9 '12 at 18:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408707618713379, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/116328?sort=newest
|
## Density of the “multiplicative odd numbers”
I am interested in the set $A$ of all positive integer numbers such that when factored into primes, the sum of the exponents is odd (I think of $A$ as the multiplicative odd numbers).
I want to know if it has positive upper density, more precisely $$\bar d(A):=\limsup_{n\to\infty}\frac{|A\cap[1,n]|}n$$ I think I read somewhere that it has density $1/2$ (and the $\lim$ exists, not just the $\limsup$), but I would be happy with a proof that $\bar d(A)>0$.
-
1
The number of prime factors of $n$, counted with multiplicities, is often denoted $\Omega(n)$. You are asking about $\sum_1^n(-1)^{\Omega(m)}$. – Gerry Myerson Dec 13 at 22:42
1
It will indeed have density 1/2. Just looking at power of 2 factors shows that it is in the range [1/3,2/3]. Then looking at power of 3 factors will get a tighter range about 1/2, and so on. – George Lowther Dec 13 at 22:47
2
To explain my previous comment. Let $S$ be the set of numbers of the form $4^r(2s+1)$. This has density $(1/2)(1+1/4+1/4^2+\cdots)=2/3$. Write $f(n)=\lvert A\cap[1,n]\rvert$. The property that $\Omega(2n)=\Omega(n)+1$ gives $$\lvert A\cap[1,n/2]\cap S\rvert+\lvert A\cap[1,n]\cap S^c\rvert=\lvert S\cap[1,n/2]\rvert.$$ So, $f(n)=\lvert S\cap[1,n/2]\rvert+\lvert A\cap(n/2,n]\cap S\rvert$ giving the inequality $$1/3\le\liminf_{n\to\infty}f(n)/n\le\limsup_{n\to\infty}f(n)/n\le2/3.$$ – George Lowther Dec 13 at 23:47
## 2 Answers
Gerry has the right idea here: you are asking about the limiting behaviour of the sum $$\frac{1}{x} \sum_{n \leq x}{\frac{1 - (-1)^{\Omega(n)}}{2}}.$$ The arithmetic function $\lambda(n) = (-1)^{\Omega(n)}$ is known as Liouville's function. It is well-known (and equivalent to the prime number theorem!) that the summatory function of the Liouville function, $$L(x) = \sum_{n \leq x}{\lambda(n)},$$ satisfies the asymptotic $$L(x) = o(x)$$ as $x$ tends to infinity. (In fact, one can probably improve this slightly in the usual way to get better error terms in the prime number theorem.) So it is indeed true that $$d(A) = \lim_{x \to \infty} \frac{1}{x} \sum_{n \leq x}{\frac{1 - (-1)^{\Omega(n)}}{2}} = \frac{1}{2},$$ and with a little work you could actually say something slightly stronger about the rate at which this converges.
-
Thanks! So this seems to be well known among number theorists. Can you give a reference where I can find a proof that $L(x)=o(x)$? – Joel Moreira Dec 13 at 23:37
I don't know of a direct reference of such a proof, but this theorem is folklore and it's pretty easy to see why it's true: one can show (say, via comparing Euler products) that $\sum_{n=1}^{\infty}\lambda(n)n^{-s}=\zeta(2s)/\zeta(s)$ for $\Re(s)>1$, then use the fact that this extends meromorphically to the entire complex plane and has no poles in the region $\Re(s)>1-c/\log(|\Im(s)|+2)$. So basically the usual way of proving the prime number theorem, just with a different Dirichlet series. – Peter Humphries Dec 13 at 23:59
Also, you may be interested in the contents of this paper: staff.science.uu.nl/~dahme104/… – Peter Humphries Dec 14 at 0:00
1
For example, problem 11(b) in section 6.2 of Montgomery and Vaughan's book "Multiplicative Number Theory I" is to use the method of proof Peter described to prove that $|L(x)| \le Ax \exp({-}c\sqrt{\log x})$ for some positive constants $A$ and $c$. Interestingly, problem 11(c) suggests that it is easy to derive this bound from the corresponding bound for $M(x) = \sum_{n\le x} \mu(n)$, which might be easier to find in the literature. The link between the two functions is given by $\lambda(n) = \sum_{d\colon d^2\mid n} \mu(n/d^2)$. – Greg Martin Dec 14 at 0:18
@Greg: So, you have $L(x)=\sum_d M(x/d^2)$ giving $\limsup_{x\to\infty}\lvert L(x)/x\rvert\le\limsup_{x\to\infty}\lvert M(x)/x\rvert\sum_d d^{-2}$ (=0). Similarly, $M(x)=\sum_d \mu(d)L(x/d^2)$, so you can go in either direction. – George Lowther Dec 14 at 0:41
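For the curious, a brute-force numerical sketch of the density (slow but simple; sympy's factorint gives $\Omega(n)$ as the sum of the exponents):

```python
from sympy import factorint

def Omega(n):
    return sum(factorint(n).values())   # number of prime factors with multiplicity

N = 10_000
count = sum(1 for n in range(2, N + 1) if Omega(n) % 2 == 1)
print(count / N)   # ~0.5, consistent with d(A) = 1/2 (convergence is slow)
```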
Define the zeta density of the set of integers $A$ as
$$d(A) = \lim_{x \to 1+} \frac1{\zeta(x)} \sum_{k \in A} k^{-x}.$$
Then from results given at the OEIS, this is
$$d(A) = \lim_{x \to 1+} {1 \over \zeta(x)} \left[ {\zeta(x)^2 - \zeta(2x) \over 2\zeta(x)} \right].$$
After some simplification this is
$$\lim_{x \to 1} \left( {1 \over 2} - {\zeta(2x) \over 2 \zeta(x)^2} \right)$$
and recalling that $\lim_{x \to 1} \zeta(x)$ is infinity while $\zeta(2) = \pi^2/6$, this is $1/2$.
Now, if a set has a natural density, then it has a zeta density, and the two densities are equal; see for example Chapter 2 of Diaconis' PhD dissertation. So we can conclude that if your set has a natural density, then that natural density is $1/2$.
-
http://math.stackexchange.com/questions/tagged/fake-proofs+induction
# Tagged Questions
### Flawed proof that all positive integers are equal
Suppose that we are trying to prove that for every positive integer n, if x and y are positive integers with max(x, y) = n, then x = y. For the base case, we suppose n = 1. If max(x, y) = 1 and x and ...
### There are no bearded men in the world - What goes wrong in this proof?
Several years ago in a textbook I read this example as a faulty use of proof by induction. I never really realized why it fails. Here it goes: Theorem. There are no bearded men in the world. ...
### All natural numbers are equal.
I saw the following "theorem" and its "proof". I can't explain well why the argument is wrong. Could you give me clear explanation so that kids can understand. Theorem: All natural numbers are ...
### Find the demonstration error for the statement “All positive integers are equal”
All positive integers are equal, that is, for each $n \in \mathbb{N}$ the assertion $P(n): 1 = \cdots = n$ is true. (i) $P(1)$ is true because $1 = 1$ (ii) Suppose that $P(n)$ is true, then $1 = ...
### Find the fallacy in the following treatment
Claim: any two positive integers are equal Proof: Let $A(n)$ be statement: if $a$ and $b$ are any two positive integers such that $\max(a,b)=n$ then $a=b$ Suppose $A(r)$ is true. Let ...
### Explain what’s bogus about the proof.
I couldn't find what is wrong with this strong induction proof, any one knows ? Question: A sequence of numbers is weakly decreasing when each number in the sequence is $\geq$ the numbers after it. ...
### Fake induction proof
Using the induction method: $(\forall P)[[P(0) \land ( \forall k \in \mathbb{N}) (P(k) \Rightarrow P(k+1))] \Rightarrow ( \forall n \in \mathbb{N} ) [ P(n) ]]$ Why is this proof wrong? $P(x)\equiv ...
### Proof of 1=0 by mathematical induction?
I got stuck with a problem that pop up in my mind while learning limits. I am still a high school student. $\lim\limits_{n\to\infty}(\underbrace{\frac{1}{n}+\frac{1}{n}+\cdots+\frac{1}{n}}_{m})=0$ ...
http://mathhelpforum.com/advanced-statistics/207055-question-total-probability-formula.html
# Thread:
1. ## Question on total probability formula
Hi, my question is:
Two different squares are selected at random on an 8 x 8 chessboard. What is the probability that they share a common boundary (i.e. that they have an edge in common, not just a single corner)?
For this question, it says I'm supposed to use the total probability formula. Any help would be greatly appreciated
2. ## Re: Question on total probability formula
Hey sakuraxkisu.
I'm not familiar with that term, but you could point out what the total probability formula is in your book/lecturer/whatever?
3. ## Re: Question on total probability formula
Hello, sakuraxkisu!
I too have never heard of the total probability formula.
Could you explain it?
Two different squares are selected at random on an 8 x 8 chessboard.
What is the probability that they share a common boundary?
Selecting 2 of the 64 squares, there are ${64\choose2} = 2016$ outcomes.

Consider placing a domino in a row: $\boxtimes\!\!\boxtimes\!\!\square\!\square\!\square\!\square\!\square\!\square$

There are 7 possible positions.
On the entire board, there are $8\times 7 = 56$ ways to place a domino horizontally.

Consider placing a domino in a column.
There are 7 possible positions.
On the entire board, there are $8\times7 = 56$ ways to place a domino vertically.

Hence, there are $56+56 = 112$ pairs of adjacent squares.

The probability is: $\frac{112}{2016} = \frac{1}{18}$
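As a sanity check, a brute-force count over all ${64\choose2}$ pairs (my addition, using only the standard library) reproduces the same numbers:

```
# Count unordered pairs of squares on an 8x8 board that share an edge.
from itertools import combinations
from fractions import Fraction

squares = [(r, c) for r in range(8) for c in range(8)]
adjacent = sum(
    1
    for (r1, c1), (r2, c2) in combinations(squares, 2)
    if abs(r1 - r2) + abs(c1 - c2) == 1      # edge-adjacent squares
)
total = len(squares) * (len(squares) - 1) // 2
print(adjacent, total, Fraction(adjacent, total))   # 112 2016 1/18
```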
4. ## Re: Question on total probability formula
In my lecture notes, it says that the formula is:
$P(A) = \sum_{i=1}^{K} P(A \cap B_{i}) = \sum_{i=1}^{K} P(A \mid B_{i}) P(B_{i})$
Assuming that $B_{1} , B_{2}, ..., B_{K}$ form a partition of the sample space and that $A \cap B_{1}, A \cap B_{2},..., A \cap B_{K}$ form a partition of A.
I got the same answer as you Soroban, although my method was slightly different. I think what confused me most was how this formula could be used in the question. Thank you anyway though
http://mathoverflow.net/revisions/89306/list
# Uncountable family of infinite subsets with pairwise finite intersections
I am searching for a constructive proof of the following fact: If $X$ is an infinite set, there exists an uncountable family `$(X_\alpha)_{\alpha \in A}$` of infinite subsets of $X$ such that $X_\alpha \cap X_\beta$ is finite whenever $\alpha \neq \beta$. The way I know how to prove this statement is as follows.
First, it suffices to prove the case when $X$ is countable. Thus we can choose a bijection between $X$ and $\mathbb{Q} \cap [0,1]$. To save notation we can tacitly assume that $X = \mathbb{Q} \cap [0,1]$.
Let the index set be $A = [0,1] \setminus X$, i.e. all the irrationals in $[0,1]$. For each $\alpha \in A$, choose a sequence $(x_{\alpha 1},x_{\alpha 2},\dots)$ of elements of $X$ such that $x_{\alpha n} \to \alpha$ as $n \to \infty$, and let $X_\alpha = \{ x_{\alpha n} \mid n \in \mathbb{N} \}$.
Since $\alpha$ is irrational, the sequence $(x_{\alpha n})$ cannot be eventually constant, so $X_\alpha$ is infinite. And if $\alpha \neq \beta$ then the sequences $(x_{\alpha n})$ and $(x_{\beta n})$ can have only finitely many terms in common since they have different limits, so $X_\alpha \cap X_\beta$ is finite.
Is it possible to do this in a more constructive way? I know very little about set theory and logic, so I apologize if this question is too elementary. Also, I wasn't sure about any relevant tags other than set-theory, so please feel free to add appropriate tags.
Edit: to clarify, I didn't have a clear notion of what I meant by "constructive" here. What I didn't like about the proof I gave above was that it required a choice of sequence of rationals converging to each irrational. The answers so far all address this concern adequately.
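(For concreteness, here is a small sketch of one well-known choice-free variant, added for illustration: identify $X$ with the set of finite binary strings, and for each infinite binary sequence $\alpha$ let $X_\alpha$ be the set of all finite prefixes of $\alpha$. Two distinct sequences share only the prefixes up to their first disagreement, so intersections are finite, and there are uncountably many sequences.)

```
# Prefix-tree construction: X_alpha = {finite prefixes of alpha}.
def prefixes(bits, k):
    """First k prefixes of an infinite 0/1 sequence, given as a function i -> bit."""
    out, s = [], ""
    for i in range(k):
        s += str(bits(i))
        out.append(s)
    return out

alpha = prefixes(lambda i: i % 2, 8)                   # 01010101...
beta = prefixes(lambda i: 0 if i % 3 == 0 else 1, 8)   # 01101101...
print(set(alpha) & set(beta))                          # finite: {'0', '01'}
```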
http://www.physicsforums.com/showpost.php?p=3752992&postcount=12
Thread: Relativity of Simultaneity
Quote by DaleSpam Now, transforming to the primed coordinates using the above formulas (v=0.5) gives $(t'_A,x'_A)=(0,0)$, $(t'_B,x'_B)=(-.5,1)$, and $(t'_C,x'_C)=(1,-.5)$. So we see that $t_A \ne t_B$ meaning that simultaneity is relative, and the time between A and C is still 1 meaning that time does not dilate.
What happened to gamma?
The way I calculate the three transformed events, I get:
A' = (0,0)
B' = (-0.577,1.1547)
C' = (1.1547,-0.577)
So A and C do not have the same time coordinates so they are not simultaneous.
EDIT: I see that wasn't your point. I should have said, the time between A and C is not the same as before, it's longer in the primed frame. But I wouldn't call that time dilation, it's just different coordinates for a pair of events.
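For anyone following along, a quick script (my addition; it assumes, as the quoted numbers suggest, unprimed events A = (t,x) = (0,0), B = (0,1), C = (1,0) in units with c = 1) reproduces these coordinates:

```
# Standard Lorentz boost with velocity v (c = 1): t' = g(t - vx), x' = g(x - vt).
def boost(t, x, v):
    g = 1.0 / (1.0 - v * v) ** 0.5    # gamma ~ 1.1547 for v = 0.5
    return (g * (t - v * x), g * (x - v * t))

for name, (t, x) in {"A": (0, 0), "B": (0, 1), "C": (1, 0)}.items():
    print(name, boost(t, x, 0.5))
# A (0.0, 0.0)
# B (-0.577..., 1.154...)
# C (1.154..., -0.577...)
```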
http://mathoverflow.net/questions/108535/does-the-group-completion-theorem-apply-to-the-james-construction
## Does the group completion theorem apply to the James construction?
In other words, is the natural map $M \to \Omega B M$, for $M=JX$ the James construction on a space, a group completion? (By "group completion" I mean at the level of homology, I am aware of the space level version.) The versions of the group completion theorem that I have found such as Segal/McDuff have conditions on the monoid $M$, involving some kind of commutativity condition, and it is not clear to me that $JX$ satisfies any of these conditions. Could someone provide a reference or statement of a group completion theorem that clearly applies to $JX$, or is there actually a counterexample?
-
If $X$ is connected, then $JX$ is connected, so there should be no problems. So, for a possible counter-example you could look at the case $X$ equal to the disjoint union of three points. – Lennart Meier Oct 1 at 11:19
I am particularly interested in the case when $X$ is not connected, otherwise there are many proofs out there. For discrete spaces, it seems to be true by direct calculation. – Justin Young Oct 1 at 13:47
Perhaps Section 6 of Classifying Spaces of Topological Monoids and Categories, Z. Fiedorowicz, American Journal of Mathematics, Vol. 106, No. 2 (Apr., 1984), pp. 301-350 would be useful to you. Stable URL: jstor.org/stable/2374307 – Benjamin Dickman Oct 2 at 5:00
http://www.thefullwiki.org/Lucas%E2%80%93Lehmer_primality_test
Lucas–Lehmer primality test
This article is about the Lucas–Lehmer test (LLT), that only applies to Mersenne numbers. There is also a Lucas-Lehmer-Riesel test for numbers of the form $N = k \cdot 2^n - 1$, with $2^n > k$, based on the LLT: see Lucas-Lehmer-Riesel test. There is also a Lucas-Lehmer-Reix test for Fermat numbers $F_n = 2^{2^n} + 1$, with seed = 5, based on the LLT: see the "External links".
In mathematics, the Lucas–Lehmer test is a primality test for Mersenne numbers. The test was originally developed by Édouard Lucas in 1876[1][2], and subsequently improved by Lucas in 1878 and Derrick Henry Lehmer in the 1930s.
The test
The Lucas-Lehmer test works as follows. Let $M_p = 2^p - 1$ be the Mersenne number to test with $p$ an odd prime (because $p$ is exponentially smaller than $M_p$, we can use a simple algorithm like trial division for establishing its primality). Define a sequence $\{s_i\}$ for all $i \geq 0$ by
$s_i= \begin{cases} 4 & \mbox{if }i=0; \\ s_{i-1}^2-2 & \mbox{otherwise.} \end{cases}$
The first few terms of this sequence are 4, 14, 194, 37634, ... (sequence A003010 in OEIS). Then $M_p$ is prime iff
$s_{p-2}\equiv0\pmod{M_p}.$
The number $s_{p-2} \bmod M_p$ is called the Lucas–Lehmer residue of $p$. (Some authors equivalently set $s_1 = 4$ and test $s_{p-1} \bmod M_p$). In pseudocode, the test might be written:
```
// Determine if M_p = 2^p − 1 is prime
Lucas-Lehmer(p)
   var s ← 4
   var M ← 2^p − 1
   repeat p − 2 times:
      s ← ((s × s) − 2) mod M
   if s = 0 return PRIME else return COMPOSITE
```
By performing the `mod M` at each iteration, we ensure that all intermediate results are at most p bits (otherwise the number of bits would double each iteration). It is exactly the same strategy employed in modular exponentiation.
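A direct transcription into Python (a runnable sketch, relying on Python's arbitrary-precision integers rather than the optimized reduction discussed below):

```
# Lucas-Lehmer test: M_p = 2**p - 1 is prime iff s_(p-2) == 0 (mod M_p).
def lucas_lehmer(p):
    m = (1 << p) - 1          # M_p
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19, 31] -- 11 and 23 drop out, since M_11 and M_23 are composite
```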
Time complexity
In the algorithm as written above, there are two expensive operations during each iteration: the multiplication `s × s`, and the `mod M` operation. The `mod M` operation can be made particularly efficient on standard binary computers by observing the following simple property:
$k \equiv (k \hbox{ mod } 2^n) + \lfloor k/2^n \rfloor \pmod{2^n - 1}$.
In other words, if we take the least significant n bits of k, and add the remaining bits of k, and then do this repeatedly until at most n bits remain, we can compute the remainder after dividing k by the Mersenne number $2^n - 1$ without using division. For example:
| | | |
|--------------|----|--------------------------|
| $916 \bmod 2^5-1$ | = | $1110010100_2 \bmod 2^5-1$ |
| | = | $11100_2 + 10100_2 \bmod 2^5-1$ |
| | = | $110000_2 \bmod 2^5-1$ |
| | = | $1_2 + 10000_2 \bmod 2^5-1$ |
| | = | $10001_2 \bmod 2^5-1$ |
| | = | $10001_2$ |
| | = | $17.$ |
Moreover, since `s × s` will never exceed $M^2 < 2^{2p}$, this simple technique converges in at most 2 p-bit additions, which can be done in linear time. As a small exceptional case, the above algorithm may produce $2^n - 1$ for a multiple of the modulus, rather than the correct value of zero; this should be accounted for.
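That reduction is easy to express with shifts and masks; the following sketch includes the final correction for multiples of the modulus:

```
# k mod (2**n - 1) without division: fold the high bits onto the low bits.
def mod_mersenne(k, n):
    m = (1 << n) - 1
    while k > m:
        k = (k & m) + (k >> n)
    return 0 if k == m else k     # correct 2**n - 1 itself down to 0

print(mod_mersenne(916, 5))       # 17, matching the worked example above
```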
With the modulus out of the way, the asymptotic complexity of the algorithm depends only on the multiplication algorithm used to square s at each step. The simple "grade-school" algorithm for multiplication requires O(p²) bit-level or word-level operations to square a p-bit number, and since we do this O(p) times, the total time complexity is O(p³). The most efficient known multiplication method, the Schönhage-Strassen algorithm based on the Fast Fourier transform, requires O(p log p log log p) time to square a p-bit number, reducing the complexity to O(p² log p log log p) or Õ(p²).[1]
By comparison, the most efficient randomized primality test for general integers, the Miller-Rabin primality test, takes O(k p² log p log log p) bit operations using FFT multiplication, where k is the number of iterations and is related to the error rate. This is a constant factor difference for constant k, but in practice the cost of doing many iterations and other differences lead to worse performance for Miller-Rabin. The most efficient deterministic primality test for general integers, the AKS primality test, requires Õ(p⁶) bit operations in its best known variant and is dramatically slower in practice.
Examples
Suppose we wish to verify that $M_3 = 7$ is prime using the Lucas-Lehmer test. We start out with s set to 4 and then update it 3−2 = 1 time, taking the results mod 7:
• s ← ((4 × 4) − 2) mod 7 = 0
Because we end with s set to zero, $M_3$ is prime.
On the other hand, $M_{11} = 2047 = 23 \times 89$ is not prime. To show this, we start with s set to 4 and update it 11−2 = 9 times, taking the results mod 2047:
• s ← ((4 × 4) − 2) mod 2047 = 14
• s ← ((14 × 14) − 2) mod 2047 = 194
• s ← ((194 × 194) − 2) mod 2047 = 788
• s ← ((788 × 788) − 2) mod 2047 = 701
• s ← ((701 × 701) − 2) mod 2047 = 119
• s ← ((119 × 119) − 2) mod 2047 = 1877
• s ← ((1877 × 1877) − 2) mod 2047 = 240
• s ← ((240 × 240) − 2) mod 2047 = 282
• s ← ((282 × 282) − 2) mod 2047 = 1736
Because s is not zero, $M_{11} = 2047$ is not prime. Notice that we learn nothing about the factors of 2047, only its Lucas–Lehmer residue, 1736.
Proof of correctness
Lehmer's original proof of the correctness of this test is complex, so we'll depend upon more recent refinements. Recall the definition:
$s_i= \begin{cases} 4 & \mbox{if }i=0; \\ s_{i-1}^2-2 & \mbox{otherwise.} \end{cases}$
Then our theorem is that $M_p$ is prime iff $s_{p-2}\equiv0\pmod{M_p}.$
We begin by noting that ${\langle}s_i{\rangle}$ is a recurrence relation with a closed-form solution. Define $\omega = 2 + \sqrt{3}$ and $\bar{\omega} = 2 - \sqrt{3}$; then we can verify by induction that $s_i = \omega^{2^i} + \bar{\omega}^{2^i}$ for all i:
$s_0 = \omega^{2^0} + \bar{\omega}^{2^0} = (2 + \sqrt{3}) + (2 - \sqrt{3}) = 4.$
$\begin{array}{lcl}s_n & = & s_{n-1}^2 - 2 \\ & = & \left(\omega^{2^{n-1}} + \bar{\omega}^{2^{n-1}}\right)^2 - 2 \\ & = & \omega^{2^n} + \bar{\omega}^{2^n} + 2(\omega\bar{\omega})^{2^{n-1}} - 2 \\ & = & \omega^{2^n} + \bar{\omega}^{2^n}, \end{array}$
where the last step follows from $\omega\bar{\omega} = (2 + \sqrt{3})(2 - \sqrt{3}) = 1$. We will use this in both parts.
Sufficiency
In this direction we wish to show that $s_{p-2}\equiv0\pmod{M_p}$ implies that $M_p$ is prime. We relate a straightforward proof exploiting elementary group theory given by J. W. Bruce[2] as related by Jason Wojciechowski[3].
Suppose $s_{p-2} \equiv 0 \pmod{M_p}$. Then $\omega^{2^{p-2}} + \bar{\omega}^{2^{p-2}} = kM_p$ for some integer k, and:
$\begin{align} \omega^{2^{p-2}} & = kM_p - \bar{\omega}^{2^{p-2}} \\ \left(\omega^{2^{p-2}}\right)^2 & = kM_p\omega^{2^{p-2}} - (\omega\bar{\omega})^{2^{p-2}} \\ \omega^{2^{p-1}} & = kM_p\omega^{2^{p-2}} - 1.\quad\quad\quad\quad\quad(1) \end{align}$
Now suppose $M_p$ is composite with nontrivial prime factor q > 2 (all Mersenne numbers are odd). Define the set $X = \{a + b\sqrt{3} \mid a, b \in \mathbb{Z}_q\}$ with $q^2$ elements, where $\mathbb{Z}_q$ is the integers mod q, a finite field. The multiplication operation in X is defined by:
$(a + b\sqrt{3})(c + d\sqrt{3}) = [(ac + 3bd) \hbox{ mod } q] + [(bc + ad) \hbox{ mod } q]\sqrt{3}.$
Since q > 2, ω and $\bar{\omega}$ are in X. Any product of two numbers in X is in X, but it's not a group under multiplication because not every element x has an inverse y such that xy = 1. If we consider only the elements that have inverses, we get a group X* of size at most $q^2 - 1$ (since 0 has no inverse).
Now, since $M_p \equiv 0 \pmod q$, and $\omega \in X$, we have $kM_p\omega^{2^{p-2}} = 0$ in X, which by equation (1) gives $\omega^{2^{p-1}} = -1$. Squaring both sides gives $\omega^{2^p} = 1$, showing that ω is invertible with inverse $\omega^{2^{p}-1}$ and so lies in X*, and moreover has an order dividing $2^p$. In fact the order must equal $2^p$, since $\omega^{2^{p-1}} \neq 1$ and so the order does not divide $2^{p-1}$. Since the order of an element is at most the order (size) of the group, we conclude that $2^p \leq q^2 - 1 < q^2$. But since q is a nontrivial prime factor of $M_p$, we must have $q^2 \leq M_p = 2^p-1$, yielding the contradiction $2^p < 2^p - 1$. So $M_p$ is prime.
Necessity
In the other direction, we suppose $M_p$ is prime and show $s_{p-2} \equiv0\pmod{M_p}$. We rely on a simplification of a proof by Øistein J. Rødseth.[4] First, notice that 3 is a quadratic non-residue mod $M_p$, since $2^p - 1$ for odd $p > 1$ only takes on the value 7 mod 12, and so the Legendre symbol properties tell us $(3 \mid M_p)$ is −1. Euler's criterion then gives us:
$3^{(M_p-1)/2} \equiv -1 \pmod{M_p}.\,$
On the other hand, 2 is a quadratic residue mod Mp, since $2^p \equiv 1 \pmod{M_p}$ and so $2 \equiv 2^{p+1} = \left(2^{(p+1)/2}\right)^2 \pmod{M_p}$. Euler's criterion again gives:
$2^{(M_p-1)/2} \equiv 1 \pmod{M_p}.\,$
Next, define $\sigma = 2\sqrt{3}$, and define X* similarly as before as the multiplicative group of $\{a + b\sqrt{3} | a, b \in \mathbb{Z}_{M_p}\}$. We will use the following lemmas:
$(x+y)^{M_p} \equiv x^{M_p} + y^{M_p} \pmod{M_p}$
(from Proofs of Fermat's little theorem#Proof_using_the_binomial_theorem)
$a^{M_p} \equiv a \pmod{M_p}$
for every integer a (Fermat's little theorem)
Then, in the group X* we have:
$\begin{align} (6+\sigma)^{M_p} & = 6^{M_p} + (2^{M_p})(\sqrt{3}^{M_p}) \\ & = 6 + 2(3^{(M_p-1)/2})\sqrt{3} \\ & = 6 + 2(-1)\sqrt{3} \\ & = 6 - \sigma. \end{align}$
We chose σ such that $\omega = (6 + \sigma)^2 / 24$. Consequently, we can use this to compute $\omega^{(M_p+1)/2}$ in the group X*:
$\begin{align} \omega^{(M_p+1)/2} & = (6 + \sigma)^{M_p+1}/24^{(M_p+1)/2} \\ & = (6 + \sigma)^{M_p}(6 + \sigma)/(24 \times 24^{(M_p-1)/2}) \\ & = (6 - \sigma)(6 + \sigma)/(-24) \\ & = -1. \end{align}$
where we use the fact that
$24^{(M_p-1)/2} = (2^{(M_p-1)/2})^3(3^{(M_p-1)/2}) = (1)^3(-1) = -1.$
Since $M_p \equiv 3 \pmod 4$, all that remains is to multiply both sides of this equation by $\bar{\omega}^{(M_p+1)/4}$ and use $\omega\bar{\omega}=1$:
$\begin{align} \omega^{(M_p+1)/2}\bar{\omega}^{(M_p+1)/4} & = -\bar{\omega}^{(M_p+1)/4} \\ \omega^{(M_p+1)/4} + \bar{\omega}^{(M_p+1)/4} & = 0 \\ \omega^{(2^p-1+1)/4} + \bar{\omega}^{(2^p-1+1)/4} & = 0 \\ \omega^{2^{p-2}} + \bar{\omega}^{2^{p-2}} & = 0 \\ s_{p-2} & = 0. \end{align}$
Since $s_{p-2}$ is an integer and is zero in X*, it is also zero mod $M_p$.
Applications
The Lucas-Lehmer test is the primality test used by the Great Internet Mersenne Prime Search to locate large primes, and has been successful in locating many of the largest primes known to date.[5] They consider it valuable for finding very large primes because Mersenne numbers are considered somewhat more likely to be prime than randomly chosen odd integers of the same size. Additionally, the test is considered valuable because it can provably test a very large number for primality within affordable time and (in contrast to the equivalently fast Pépin's test for any Fermat number) can be tried on a large search space of numbers with the required form before reaching computational limits.
References
1. ^ Colquitt, W. N.; Welsh, L., Jr. (1991), "A New Mersenne Prime", Mathematics of Computation 56 (194): 867–870, "The use of the FFT speeds up the asymptotic time for the Lucas-Lehmer test for $M_p$ from O(p³) to O(p² log p log log p) bit operations."
2. ^ J. W. Bruce. A Really Trivial Proof of the Lucas-Lehmer Test. The American Mathematical Monthly, Vol.100, No.4, pp.370–371. April 1993.
3. ^ Jason Wojciechowski. Mersenne Primes, An Introduction and Overview. January 2003. http://wonka.hampshire.edu/~jason/math/smithnum/project.ps
4. ^ Øistein J. Rødseth. A note on primality tests for $N = h \cdot 2^n - 1$. Department of Mathematics, University of Bergen. http://www.uib.no/People/nmaoy/papers/luc.pdf
5. ^ GIMPS Home Page. Frequently Asked Questions: General Questions: What are Mersenne primes? How are they useful? http://www.mersenne.org/faq.htm#what
Further reading
• Crandall, Richard; Pomerance, Carl (2001), Prime Numbers: A Computational Perspective (1st ed.), Berlin: Springer, ISBN 0387947779 Section 4.2.1: The Lucas–Lehmer test, pp.167–170.
http://www.reference.com/browse/Orbit
# orbit
[awr-bit] /ˈɔrbɪt/
orbit, in astronomy, path in space described by a body revolving about a second body where the motion of the orbiting bodies is dominated by their mutual gravitational attraction. Within the solar system, planets, dwarf planets, asteroids, and comets orbit the sun and satellites orbit the planets and other bodies.
## Planetary Orbits
From earliest times, astronomers assumed that the orbits in which the planets moved were circular; yet the numerous catalogs of measurements compiled especially during the 16th cent. did not fit this theory. At the beginning of the 17th cent., Johannes Kepler stated three laws of planetary motion that explained the observed data: the orbit of each planet is an ellipse with the sun at one focus; the speed of a planet varies in such a way that an imaginary line drawn from the planet to the sun sweeps out equal areas in equal amounts of time; and the ratio of the squares of the periods of revolution of any two planets is equal to the ratio of the cubes of their average distances from the sun. The orbits of the solar planets, while elliptical, are almost circular; on the other hand, the orbits of many of the extrasolar planets discovered during the 1990s are highly elliptical.
After the laws of planetary motion were established, astronomers developed the means of determining the size, shape, and relative position in space of a planet's orbit. The size and shape of an orbit are specified by its semimajor axis and by its eccentricity. The semimajor axis is a length equal to half the greatest diameter of the orbit. The eccentricity is the distance of the sun from the center of the orbit divided by the length of the orbit's semimajor axis; this value is a measure of how elliptical the orbit is. The position of the orbit in space, relative to the earth, is determined by three factors: (1) the inclination, or tilt, of the plane of the planet's orbit to the plane of the earth's orbit (the ecliptic); (2) the longitude of the planet's ascending node (the point where the planet cuts the ecliptic moving from south to north); and (3) the longitude of the planet's perihelion point (point at which it is nearest the sun; see apsis).
These quantities, which determine the size, shape, and position of a planet's orbit, are known as the orbital elements. If only the sun influenced the planet in its orbit, then by knowing the orbital elements plus its position at some particular time, one could calculate its position at any later time. However, the gravitational attractions of bodies other than the sun cause perturbations in the planet's motions that can make the orbit shift, or precess, in space or can cause the planet to wobble slightly. Once these perturbations have been calculated one can closely determine its position for any future date over long periods of time. Modern methods for computing the orbit of a planet or other body have been refined from methods developed by Newton, Laplace, and Gauss, in which all the needed quantities are acquired from three separate observations of the planet's apparent position.
## Nonplanetary Orbits
The laws of planetary orbits also apply to the orbits of comets, natural satellites, artificial satellites, and space probes. The orbits of comets are very elongated; some are long ellipses, some are nearly parabolic (see parabola), and some may be hyperbolic. When the orbit of a newly discovered comet is calculated, it is first assumed to be a parabola and then corrected to its actual shape when more measured positions are obtained. Natural satellites that are close to their primaries tend to have nearly circular orbits in the same plane as that of the planet's equator, while more distant satellites may have quite eccentric orbits with large inclinations to the planet's equatorial plane. Because of the moon's proximity to the earth and its large relative mass, the earth-moon system is sometimes considered a double planet. It is the center of the earth-moon system, rather than the center of the earth itself, that describes an elliptical orbit around the sun in accordance with Kepler's laws. All of the planets and most of the satellites in the solar system move in the same direction in their orbits, counterclockwise as viewed from the north celestial pole; some satellites, probably captured asteroids, have retrograde motion, i.e., they revolve in a clockwise direction.
In physics, an orbit is the gravitationally curved path of one object around a point or another body, for example the gravitational orbit of a planet around a star.
Historically, the apparent motions of the planets were first understood in terms of epicycles, which are the sums of numerous circular motions. This predicted the path of the planets quite well, until Johannes Kepler was able to show that the motions of the planets were in fact elliptical. Sir Isaac Newton was able to prove that this was equivalent to an inverse square, instantaneously propagating force he called gravitation. Albert Einstein later was able to show that gravity is due to curvature of space-time, and that orbits lie upon geodesics; this is the current understanding.
## History
In the geocentric model of the solar system, mechanisms such as the deferent and epicycle were supposed to explain the motion of the planets in terms of perfect spheres or rings.
The basis for the modern understanding of orbits was first formulated by Johannes Kepler whose results are summarized in his three laws of planetary motion. First, he found that the orbits of the planets in our solar system are elliptical, not circular (or epicyclic), as had previously been believed, and that the sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed of the planet depends on the planet's distance from the sun. And third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the sun. For each planet, the cube of the planet's distance from the sun, measured in astronomical units (AU), is equal to the square of the planet's orbital period, measured in Earth years. Jupiter, for example, is approximately 5.2 AU from the sun and its orbital period is 11.86 Earth years. So 5.2 cubed equals 11.86 squared, as predicted.
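A quick spot-check of the third law with rounded textbook values (my numbers, to two or three decimals) shows how closely $a^3$ tracks $T^2$:

```
# Kepler's third law in AU and years: a**3 should equal T**2.
planets = {"Earth": (1.000, 1.000), "Mars": (1.524, 1.881), "Jupiter": (5.203, 11.862)}
for name, (a, T) in planets.items():
    print(f"{name:8s} a^3 = {a ** 3:7.2f}   T^2 = {T ** 2:7.2f}")
# Jupiter: 140.85 vs 140.71 -- equal to within the rounding of the inputs
```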
Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies responding to an instantaneously propagating force of gravity were conic sections. Newton showed that a pair of bodies follow orbits of dimensions that are in inverse proportion to their masses about their common center of mass. Where one body is much more massive than the other, it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body.
Albert Einstein was able to show that gravity was due to curvature of space-time and was able to remove the assumption of Newton that changes propagate instantaneously. In relativity theory orbits follow geodesic trajectories which approximate very well to the Newtonian predictions. However there are differences and these can be used to determine which theory relativity agrees with. Essentially all experimental evidence agrees with relativity theory to within experimental measuremental accuracy.
## Planetary orbits
Within a planetary system; planets, dwarf planets, asteroids (a.k.a. minor planets), comets, and space debris orbit the central star in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a central star is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. To date, no comet has been observed in our solar system with a distinctly hyperbolic orbit. Bodies which are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about that planet.
Owing to mutual gravitational perturbations, the eccentricities of the orbits of the planets in our solar system vary over time. Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity while the smallest eccentricities are those of the orbits of Venus and Neptune.
As two objects orbit each other, the periapsis is that point at which the two objects are closest to each other and the apoapsis is that point at which they are the farthest from each other. (More specific terms are used for specific bodies. For example, perigee and apogee are the lowest and highest parts of an Earth orbit, respectively.)
In the elliptical orbit, the center of mass of the orbiting-orbited system will sit at one focus of both orbits, with nothing present at the other focus. As a planet approaches periapsis, the planet will increase in speed, or velocity. As a planet approaches apoapsis, the planet will decrease in velocity.
### Understanding orbits
There are a few common ways of understanding orbits.
• As the object moves sideways, it falls toward the central body. However, it moves so quickly that the central body will curve away beneath it.
• A force, such as gravity, pulls the object into a curved path as it attempts to fly off in a straight line.
• As the object moves sideways (tangentially), it falls toward the central body. However, it has enough tangential velocity to miss the orbited object, and will continue falling indefinitely. This understanding is particularly useful for mathematical analysis, because the object's motion can be described as the sum of the three one-dimensional coordinates oscillating around a gravitational center.
As an illustration of an orbit around a planet, the Newton's cannonball model may prove useful (see image below). Imagine a cannon sitting on top of a tall mountain, which fires a cannonball horizontally. The mountain needs to be very tall, so that the cannon will be above the Earth's atmosphere and the effects of air friction on the cannonball can be ignored.
If the cannon fires its ball with a low initial velocity, the trajectory of the ball curves downward and hits the ground (A). As the firing velocity is increased, the cannonball hits the ground farther (B) away from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense — they are describing a portion of an elliptical path around the center of gravity — but the orbits are interrupted by striking the Earth.
If the cannonball is fired with sufficient velocity, the ground curves away from the ball at least as much as the ball falls — so the ball never strikes the ground. It is now in what could be called a non-interrupted, or circumnavigating, orbit. For any specific combination of height above the center of gravity, and mass of the planet, there is one specific firing velocity that produces a circular orbit, as shown in (C).
As the firing velocity is increased beyond this, a range of elliptic orbits are produced; one is shown in (D). If the initial firing is above the surface of the Earth as shown, there will also be elliptical orbits at slower velocities; these will come closest to the Earth at the point half an orbit beyond, and directly opposite, the firing point.
At a specific velocity called escape velocity, again dependent on the firing height and mass of the planet, an infinite orbit such as (E) is produced — a parabolic trajectory. At even faster velocities the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity, and "going off into space".
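For scale, the escape velocity in this picture is $v_{esc} = \sqrt{2GM/r}$; plugging in rough values for Earth (my numbers) gives the familiar ~11.2 km/s:

```
# Escape velocity from Earth's surface, v_esc = sqrt(2*G*M/r).
G = 6.674e-11    # m^3 kg^-1 s^-2
M = 5.972e24     # kg, Earth's mass
r = 6.371e6      # m, Earth's mean radius
print((2 * G * M / r) ** 0.5)    # ~1.12e4 m/s
```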
The velocity relationship of two objects with mass can thus be considered in four practical classes, with subtypes:
1. No orbit
2. Interrupted orbits
• Range of interrupted elliptical paths
3. Circumnavigating orbits
• Range of elliptical paths with closest point opposite firing point
• Circular path
• Range of elliptical paths with closest point at firing point
4. Infinite orbits
• Parabolic paths
• Hyperbolic paths
## Newton's laws of motion
In many situations relativistic effects can be neglected, and Newton's laws give a highly accurate description of the motion. Then the acceleration of each body is equal to the sum of the gravitational forces on it, divided by its mass, and the gravitational force between each pair of bodies is proportional to the product of their masses and decreases inversely with the square of the distance between them. To this Newtonian approximation, for a system of two point masses or spherical bodies, only influenced by their mutual gravitation (the two-body problem), the orbits can be exactly calculated. If the heavier body is much more massive than the smaller, as for a satellite or small moon orbiting a planet or for the Earth orbiting the Sun, it is accurate and convenient to describe the motion in a coordinate system that is centered on the heavier body, and we can say that the lighter body is in orbit around the heavier. (For the case where the masses of two bodies are comparable an exact Newtonian solution is still available, and qualitatively similar to the case of dissimilar masses, by centering the coordinate system on the center of mass of the two.)
Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational potential energy. Since work is required to separate two massive bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another. For point masses the gravitational energy decreases without limit as they approach zero separation, and it is convenient and conventional to take the potential energy as zero when they are an infinite distance apart, and then negative (since it decreases from zero) for smaller finite distances.
With two bodies, an orbit is a conic section. The orbit can be open (so the object never returns) or closed (returning), depending on the total kinetic + potential energy of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position, in the case of a closed orbit, always less. Since the kinetic energy is never negative, if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits have negative total energy, parabolic trajectories have zero total energy, and hyperbolic orbits have positive total energy.
An open orbit has the shape of a hyperbola (when the velocity is greater than the escape velocity), or a parabola (when the velocity is exactly the escape velocity). The bodies approach each other for a while, curve around each other around the time of their closest approach, and then separate again forever. This may be the case with some comets if they come from outside the solar system.
A closed orbit has the shape of an ellipse. In the special case that the orbiting body is always the same distance from the center, it is also the shape of a circle. Otherwise, the point where the orbiting body is closest to Earth is the perigee, called periapsis (less properly, "perifocus" or "pericentron") when the orbit is around a body other than Earth. The point where the satellite is farthest from Earth is called apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line-of-apsides. This is the major axis of the ellipse, the line through its longest part.
Orbiting bodies in closed orbits repeat their path after a constant period of time. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. These can be formulated as follows:
1. The orbit of a planet around the Sun is an ellipse, with the Sun in one of the focal points of the ellipse. Therefore the orbit lies in a plane, called the orbital plane. The point on the orbit closest to the attracting body is the periapsis. The point farthest from the attracting body is called the apoapsis. There are also specific terms for orbits around particular bodies; things orbiting the Sun have a perihelion and aphelion, things orbiting the Earth have a perigee and apogee, and things orbiting the Moon have a perilune and apolune (or, synonymously, periselene and aposelene). An orbit around any star, not just the Sun, has a periastron and an apastron.
2. As the planet moves around its orbit during a fixed amount of time, the line from Sun to planet sweeps a constant area of the orbital plane, regardless of which part of its orbit the planet traces during that period of time. This means that the planet moves faster near its perihelion than near its aphelion, because at the smaller distance it needs to trace a greater arc to cover the same area. This law is usually stated as "equal areas in equal time."
3. For a given orbit, the ratio of the cube of its semi-major axis to the square of its period is constant.
Note that while the bound orbits around a point mass, or a spherical body with an ideal Newtonian gravitational field, are all closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (as caused, for example, by the slight oblateness of the Earth, or by relativistic effects, changing the gravitational field's behavior with distance) will cause the orbit's shape to depart to a greater or lesser extent from the closed ellipses characteristic of Newtonian two body motion. The 2-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the 3-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies.
Instead, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms.
One form takes the pure elliptic motion as a basis, and adds perturbation terms to account for the gravitational influence of multiple bodies. This is convenient for calculating the positions of astronomical bodies. The equations of motion of the moon, planets and other bodies are known with great accuracy, and are used to generate tables for celestial navigation. Still there are secular phenomena that have to be dealt with by post-newtonian methods.
The differential equation form is used for scientific or mission-planning purposes. According to Newton's laws, the sum of all the forces will equal the mass times its acceleration (F = ma). Therefore accelerations can be expressed in terms of positions. The perturbation terms are much easier to describe in this form. Predicting subsequent positions and velocities from initial ones corresponds to solving an initial value problem. Numerical methods calculate the positions and velocities of the objects a tiny time in the future, then repeat this. However, tiny arithmetic errors from the limited accuracy of a computer's math accumulate, limiting the accuracy of this approach.
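A minimal sketch of such a step-and-repeat method (a leapfrog, or kick-drift-kick, integrator for one small body about a central mass, in units where GM = 1; illustrative only):

```
# One leapfrog step for motion under an inverse-square central force.
def step(x, y, vx, vy, dt):
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3                 # a = -r / |r|^3 with GM = 1
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # half kick
    x += dt * vx; y += dt * vy                  # drift
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # half kick
    return x, y, vx, vy

# A circular orbit at r = 1 needs v = 1 and has period 2*pi.
from math import pi
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
for _ in range(10000):
    x, y, vx, vy = step(x, y, vx, vy, 2 * pi / 10000)
print(x, y)    # back near (1, 0) after one full period
```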
Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large objects have been simulated.
## Analysis of orbital motion
(See also Kepler orbit, orbit equation and Kepler's first law.)
Please note that the following is a classical (Newtonian) analysis of orbital mechanics, which assumes the more subtle effects of general relativity (like frame dragging and gravitational time dilation) are negligible. General relativity does, however, need to be considered for some applications such as analysis of extremely massive heavenly bodies, precise prediction of a system's state after a long period of time, and in the case of interplanetary travel, where fuel economy, and thus precision, is paramount.
To analyze the motion of a body moving under the influence of a force which is always directed towards a fixed point, it is convenient to use polar coordinates with the origin coinciding with the center of force. In such coordinates the radial and transverse components of the acceleration are, respectively:
$$a_r = \frac{d^2r}{dt^2} - r\left(\frac{d\theta}{dt}\right)^2$$

and

$$a_{\theta} = \frac{1}{r}\frac{d}{dt}\left(r^2\frac{d\theta}{dt}\right).$$

Since the force is entirely radial, and since acceleration is proportional to force, it follows that the transverse acceleration is zero. As a result,

$$\frac{d}{dt}\left(r^2\frac{d\theta}{dt}\right) = 0.$$

After integrating, we have

$$r^2\frac{d\theta}{dt} = \mathrm{const.}$$

which is actually the theoretical proof of Kepler's 2nd law (a line joining a planet and the sun sweeps out equal areas during equal intervals of time). The constant of integration, h, is the angular momentum per unit mass. It then follows that

$$\frac{d\theta}{dt} = \frac{h}{r^2} = hu^2$$

where we have introduced the auxiliary variable

$$u = \frac{1}{r}.$$

If $f(r)$ is the radial force per unit mass (so that $f(r) = a_r$), then eliminating the time variable from the radial component of the equation of motion yields:

$$\frac{d^2u}{d\theta^2} + u = -\frac{f(1/u)}{h^2u^2}.$$

In the case of gravity, Newton's law of universal gravitation states that the force is proportional to the inverse square of the distance:

$$f(1/u) = a_r = -\frac{GM}{r^2} = -GMu^2$$

where G is the constant of universal gravitation, m is the mass of the orbiting body (planet), and M is the mass of the central body (the Sun). Substituting into the prior equation, we have

$$\frac{d^2u}{d\theta^2} + u = \frac{GM}{h^2}.$$

So for the gravitational force – or, more generally, for any inverse square force law – the right hand side of the equation becomes a constant and the equation is seen to be the harmonic equation (up to a shift of origin of the dependent variable). The solution is:

$$u(\theta) = \frac{GM}{h^2} + A\cos(\theta-\theta_0)$$

where $A$ and $\theta_0$ are arbitrary constants.

The equation of the orbit described by the particle is thus:

$$r = \frac{1}{u} = \frac{h^2/GM}{1 + e\cos(\theta - \theta_0)},$$

where $e$ is:

$$e \equiv \frac{h^2A}{GM}.$$

In general, this can be recognized as the equation of a conic section in polar coordinates ($r$, $\theta$). We can make a further connection with the classic description of a conic section with:

$$\frac{h^2}{GM} = a(1-e^2)$$

If parameter $e$ is smaller than one, $e$ is the eccentricity and $a$ the semi-major axis of an ellipse.
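Evaluating the orbit equation just derived makes the role of $e$ concrete (a small illustration with $h^2/GM = 1$):

```
# r(theta) = p / (1 + e*cos(theta - theta0)) with p = h^2/(GM).
from math import cos, pi

def r(theta, p=1.0, e=0.0, theta0=0.0):
    return p / (1 + e * cos(theta - theta0))

for e in (0.0, 0.5, 0.9):
    print(f"e={e}: r_peri={r(0, e=e):.3f}, r_apo={r(pi, e=e):.3f}")
# e = 0 is a circle; as e -> 1 the apoapsis runs off to infinity (parabolic limit)
```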
## Orbital planes
The analysis so far has been two dimensional; it turns out that an unperturbed orbit is two dimensional in a plane fixed in space, and thus the extension to three dimensions requires simply rotating the two dimensional plane into the required angle relative to the poles of the planetary body involved.
The rotation to do this in three dimensions requires three numbers to uniquely determine; traditionally these are expressed as three angles.
## Orbital period
The orbital period is simply how long an orbiting body takes to complete one orbit.
## Specifying orbits
It turns out that it takes a minimum of six numbers to specify an orbit about a body, and this can be done in several ways. For example, specifying the three numbers giving the location and the three giving the velocity of a body yields a unique orbit that can be calculated forwards (or backwards). However, traditionally the parameters used are slightly different.
The traditionally used set of orbital elements is called the set of Keplerian elements, after Johannes Kepler and his Kepler's laws. The Keplerian elements are six:
• Inclination ($i$)
• Longitude of the ascending node ($\Omega$)
• Argument of periapsis ($\omega$)
• Eccentricity ($e$)
• Semimajor axis ($a$)
• Mean anomaly at epoch ($M_o$)
In principle once the orbital elements are known for a body, its position can be calculated forward and backwards indefinitely in time. However, in practice, orbits are affected or perturbed, by forces other than gravity due to the central body and thus the orbital elements change over time.
## Orbital perturbations
An orbital perturbation is when a force or impulse which is much smaller than the overall force or average impulse of the main gravitating body and which is external to the two orbiting bodies causes an acceleration, which changes the parameters of the orbit over time.
### Radial, prograde and transverse perturbations
It can be shown that a radial impulse given to a body in orbit doesn't change the orbital period (since it doesn't affect the angular momentum), but changes the eccentricity. This means that the orbit still intersects the original orbit in two places.
For a prograde or retrograde impulse (i.e. an impulse applied along the orbital motion), this changes both the eccentricity as well as the orbital period, but any closed orbit will still intersect the perturbation point. Notably, a prograde impulse given at periapsis raises the altitude at apoapsis, and vice versa, and a retrograde impulse does the opposite.
A transverse force out of the orbital plane causes rotation of the orbital plane.
### Orbital decay
If some part of a body's orbit enters an atmosphere, its orbit can decay because of drag. Particularly at each periapsis, the object scrapes the air, losing energy. Each time, the orbit grows less eccentric (more circular) because the object loses kinetic energy precisely when that energy is at its maximum. This is similar to the effect of slowing a pendulum at its lowest point; the highest point of the pendulum's swing becomes lower. With each successive slowing more of the orbit's path is affected by the atmosphere and the effect becomes more pronounced. Eventually, the effect becomes so great that the maximum kinetic energy is not enough to return the orbit above the limits of the atmospheric drag effect. When this happens the body will rapidly spiral down and intersect the central body.
The bounds of an atmosphere vary wildly. During solar maxima, the Earth's atmosphere causes drag up to a hundred kilometres higher than during solar minima.
Some satellites with long conductive tethers can also decay because of electromagnetic drag from the Earth's magnetic field. Basically, the wire cuts the magnetic field, and acts as a generator. The wire moves electrons from the near vacuum on one end to the near-vacuum on the other end. The orbital energy is converted to heat in the wire.
Orbits can be artificially influenced through the use of rocket motors which change the kinetic energy of the body at some point in its path. This is the conversion of chemical or electrical energy to kinetic energy. In this way changes in the orbit shape or orientation can be facilitated.
Another method of artificially influencing an orbit is through the use of solar sails or magnetic sails. These forms of propulsion require no propellant or energy input other than that of the sun, and so can be used indefinitely. See statite for one such proposed use.
Orbital decay can also occur due to tidal forces for objects below the synchronous orbit for the body they're orbiting. The gravity of the orbiting object raises tidal bulges in the primary, and since below the synchronous orbit the orbiting object is moving faster than the body's surface the bulges lag a short angle behind it. The gravity of the bulges is slightly off of the primary-satellite axis and thus has a component along the satellite's motion. The near bulge slows the object more than the far bulge speeds it up, and as a result the orbit decays. Conversely, the gravity of the satellite on the bulges applies torque on the primary and speeds up its rotation. Artificial satellites are too small to have an appreciable tidal effect on the planets they orbit, but several moons in the solar system are undergoing orbital decay by this mechanism. Mars' innermost moon Phobos is a prime example, and is expected to either impact Mars' surface or break up into a ring within 50 million years.
Finally, orbits can decay via the emission of gravitational waves. This mechanism is extremely weak for most stellar objects, only becoming significant in cases where there is a combination of extreme mass and extreme acceleration, such as with black holes or neutron stars that are orbiting each other closely.
### Oblateness
The standard analysis of orbiting bodies assumes that all bodies consist of uniform spheres, or more generally, concentric shells each of uniform density. It can be shown that such bodies are gravitationally equivalent to point sources.
However, in the real world, many bodies rotate, and this introduces oblateness, which distorts the gravity field and gives the gravitational field a quadrupole moment that is significant at distances comparable to the radius of the body.
The general effect of this is to change the orbital parameters over time; predominantly it gives a rotation of the orbital plane around the rotational pole of the central body (and it also perturbs the argument of perigee), at a rate that depends on the angle of the orbital plane to the equator as well as on the altitude at perigee.
### Other gravitating bodies
The effects of other gravitating bodies can be very large. For example, the orbit of the Moon cannot be in any way accurately described without allowing for the action of the Sun's gravity as well as the Earth's.
### Light radiation and stellar wind
For small bodies particularly, light and stellar wind can cause significant perturbations to the attitude and direction of motion of the body, and over time can be quite significant.
## Scaling in gravity
The gravitational constant G is measured to be:
• (6.6742 ± 0.001) × 10⁻¹¹ N·m²/kg²
• (6.6742 ± 0.001) × 10⁻¹¹ m³/(kg·s²)
• (6.6742 ± 0.001) × 10⁻¹¹ (kg/m³)⁻¹·s⁻²
Thus the constant has dimension density⁻¹·time⁻². This corresponds to the following properties.
Scaling of distances (including sizes of bodies, while keeping the densities the same) gives similar orbits without scaling the time: if for example distances are halved, masses are divided by 8, gravitational forces by 16 and gravitational accelerations by 2. Hence orbital periods remain the same. Similarly, when an object is dropped from a tower, the time it takes to fall to the ground remains the same with a scale model of the tower on a scale model of the earth.
When all densities are multiplied by four, orbits are the same, but with orbital velocities doubled.
When all densities are multiplied by four, and all sizes are halved, orbits are similar, with the same orbital velocities.
These properties are illustrated in the formula (known as Kepler's 3rd Law)
$GT^2 \sigma = 3\pi \left(\frac{a}{r}\right)^3,$
for an elliptical orbit with semi-major axis a, of a small body around a spherical body with radius r and average density σ, where T is the orbital period.
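As a quick numerical illustration of this scaling, here is a minimal Python sketch (the rounded, Earth-like numbers are purely illustrative assumptions) that solves the formula above for the period T and confirms that halving all distances at fixed density leaves T unchanged:

```python
import math

G = 6.6742e-11  # gravitational constant, m^3/(kg s^2)

def orbital_period(a, r, sigma):
    """Solve G T^2 sigma = 3 pi (a/r)^3 for T: orbit of semi-major axis a
    around a spherical body of radius r and average density sigma."""
    return math.sqrt(3 * math.pi * (a / r) ** 3 / (G * sigma))

# Rounded Earth-like numbers, for illustration only
r, sigma, a = 6.371e6, 5514.0, 7.0e6   # radius (m), density (kg/m^3), a (m)

T_full = orbital_period(a, r, sigma)
T_half = orbital_period(a / 2, r / 2, sigma)   # all distances halved

print(T_full, T_half)   # equal (~97 minutes): the period only sees a/r and sigma
```

Because T depends only on the ratio a/r and on the density, any uniform rescaling of lengths drops out, which is exactly the scaling property described above.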
http://mathhelpforum.com/calculus/197657-finding-period-polar-graph.html
# Thread:
1. ## Finding Period of polar graph
How come the polar graph of $r=a\cos\theta$ or $r=a\sin\theta$ only has a period of $\pi$?
To find the period I would use $\frac{2\pi}{b}$, so why wouldn't the period of both these polar circles be $2\pi$?
2. ## Re: Finding Period of polar graph
Originally Posted by delgeezee
How come the polar graph of $r=a\cos\theta$ or $r=a\sin\theta$ only has a period of $\pi$?
To find the period I would use $\frac{2\pi}{b}$, so why wouldn't the period of both these polar circles be $2\pi$?
Consider $r = a\cos(\theta)$, which is a circle of radius $a/2$ centered at $(a/2, 0)$. For $\theta$ from $0$ to $\pi/2$, $\cos(\theta)$ is positive, so $r$ is positive. From $\pi/2$ to $\pi$, $\cos(\theta)$ is negative, which makes $r$ negative as well. On a graph, a negative $r$ means that the point is plotted from the origin in the direction opposite to the one the corresponding angle points to.
So at $\theta=0$ we have $r = a$, and at $\theta=\pi$ we have $r = -a$, both of which correspond to the same Cartesian point $(a, 0)$: the circle has already been traced once by the time $\theta$ reaches $\pi$.
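A quick numerical check of this (a minimal Python sketch; the choice $a=2$ is an arbitrary assumption) confirms that $\theta$ and $\theta+\pi$ give the same Cartesian point, so the curve repeats with period $\pi$:

```python
import numpy as np

a = 2.0                             # arbitrary radius parameter
theta = np.linspace(0, np.pi, 5)    # sample angles in [0, pi]

r1 = a * np.cos(theta)
r2 = a * np.cos(theta + np.pi)      # r is negative here

# Convert both to Cartesian coordinates
x1, y1 = r1 * np.cos(theta), r1 * np.sin(theta)
x2, y2 = r2 * np.cos(theta + np.pi), r2 * np.sin(theta + np.pi)

print(np.allclose(x1, x2), np.allclose(y1, y2))  # True True
```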
http://math.stackexchange.com/questions/114155/computing-int-0-infty-frac1x1x2-xn-mathrm-dx/114167
# Computing $\int_{0}^{\infty} \frac{1}{(x+1)(x+2)\cdots(x+n)} \mathrm dx$
I would like to compute:
$$\int_{0}^{\infty} \frac{1}{(x+1)(x+2)\cdots(x+n)} \mathrm dx, \qquad n\geq 2.$$
So my question is: how can I find the partial fraction expansion of
$$\frac{1}{(x+1)(x+2)\cdots(x+n)} \; ?$$
## 3 Answers
HINT: Here's a trick to find partial fraction expansions. Compute
$$\lim_{x\to -k} \frac{(x+k)}{(x+1)(x+2)...(x+n)} \; .$$
This should give you the coefficient of the term $1/(x+k)$ in the expansion.
EDIT: As Américo points out, the partial fraction expansion is
$$\frac{1}{\left( x+1\right) \left( x+2\right) \cdots \left( x+n\right) } =\sum_{k=1}^{n}\frac{(-1)^{k-1}}{(k-1)!\left( n-k\right) !}\cdot\frac{1}{x+k} \; .$$
The indefinite integral of that expansion is
$$\ln\left( \prod_{k=1}^{n}(x+k)^{\frac{(-1)^{k-1}}{(k-1)!\left( n-k\right) !}} \right) \; .$$
When you fill in the upper bound, you can see that the result must be zero as the leading power in $x$ for the product is $0$ because
$$0 = (1-1)^{n-1} = \sum_{k=0}^{n-1} \frac{(-1)^{k} (n-1)!}{(k)!\left( (n-1)-k\right) !} = (n-1)! \sum_{k=1}^{n} \frac{(-1)^{k-1}}{(k-1)!\left( n-k\right) !} \; .$$
Therefore, we are left with the lower bound
$$-\ln\left( \prod_{k=1}^{n}(k)^{\frac{(-1)^{k-1}}{(k-1)!\left( n-k\right) !}} \right) \; .$$
For $n=2,3$ and $4$ you get resp. $\ln 2$, $\ln(2/\sqrt{3})$ and $\ln(2^5/3^3)/6$.
The lower bound can also be written as
$$\frac{1}{(n-1)!}\sum_{k=0}^{n-1} (-1)^{k-1} {n-1 \choose k} \ln(1+k) \; .$$
Somehow I've never seen this trick before - very clever! (This doesn't work so well unless all terms are inverse-linear, but it's going straight into my toolbox...) – Steven Stadnicki Feb 28 '12 at 0:28
Thank you very much for this answer! – Chon Feb 28 '12 at 10:36
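For readers who want to double-check both the expansion and the value of the integral numerically, here is a minimal SymPy sketch (small $n$ only; the script is an editorial illustration, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = 4

f = 1 / sp.prod([x + k for k in range(1, n + 1)])

# Check the partial fraction expansion stated above
expansion = sum((-1)**(k - 1) / (sp.factorial(k - 1) * sp.factorial(n - k) * (x + k))
                for k in range(1, n + 1))
print(sp.simplify(f - expansion))    # 0

# Check the closed form for the lower bound (the k = 0 term vanishes, ln 1 = 0)
integral = sp.integrate(f, (x, 0, sp.oo))
closed = sum((-1)**(k - 1) * sp.binomial(n - 1, k) * sp.log(1 + k)
             for k in range(1, n)) / sp.factorial(n - 1)
print(sp.simplify(sp.expand_log(integral - closed, force=True)))   # 0
```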
If $$\frac{1}{(x+1)(x+2)\dots(x+n)} = \sum_{i=1}^{n} \frac{A_i}{x+i}$$
To compute $A_k$, multiply by $(x+k)$ and set $x = -k$.
In fact, this can be used to show, that for any polynomial $P(x)$ with distinct roots $\alpha_1, \alpha_2, \dots \alpha_n$, that
$$\frac{1}{P(x)} = \sum_{j=1}^{n} \frac{1}{P'(\alpha_j)(x-\alpha_j)}$$
where $P'(x)$ is the derivative of $P(x)$.
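A quick SymPy spot-check of this identity (the cubic with roots $1, 2, 5$ is an arbitrary choice for illustration):

```python
import sympy as sp

x = sp.symbols('x')
P = (x - 1) * (x - 2) * (x - 5)        # any polynomial with distinct roots
dP = sp.diff(sp.expand(P), x)          # P'(x)

# 1/P(x) = sum_j 1 / (P'(alpha_j) (x - alpha_j))
rhs = sum(1 / (dP.subs(x, a) * (x - a)) for a in (1, 2, 5))
print(sp.simplify(1 / P - rhs))        # 0
```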
Based on my computations in SWP for $2\leq n\leq 8$ I conjecture the following expansion
$$\begin{equation*} \frac{1}{\left( x+1\right) \left( x+2\right) \cdots \left( x+n\right) } =\sum_{k=1}^{n}\frac{(-1)^{k-1}}{(k-1)!\left( n-k\right) !}\cdot\frac{1}{x+k}. \end{equation*}$$
Added. How to prove or disprove? Induction doesn't seem easy.
Added 2. It follows from Aryabhata's answer. See comment below.
From my answer, you can see that the coefficient is $\frac{1}{(1-k)(2-k)\dots((k-1)-k)(k+1-k)\dots(n-k)} = \frac{(-1)^k}{(k-1)!(n-k)!}$ – Aryabhata Feb 28 '12 at 1:22
@Aryabhata: Many thanks! The first coefficient is positive. Shouldn't it be $\dfrac{(-1)^{k-1}}{(k-1)!(n-k)!}$, for $k=1,2,\dots n$? – Américo Tavares Feb 28 '12 at 1:35
Yes, it is $(-1)^{k-1}$. – Aryabhata Feb 28 '12 at 1:38
So now we have to find $$\lim_{a\rightarrow\infty} \sum_{k=1}^n \frac{(-1)^{k-1}}{(k-1)!(n-k)!}\ln(1+\frac{a}{k})$$ – Chon Feb 28 '12 at 8:59
@Chon: Actually, the part with the limit to infinity is $0$. It's the lower bound that gives something interesting. I've computed the first few values for $n=2,3$ and $4$ and they are resp. $\ln 2$, $\ln(2/\sqrt{3})$ and $\ln(2^5/3^3)/6$. – Raskolnikov Feb 28 '12 at 9:25
http://math.stackexchange.com/questions/19010/formula-for-the-negative-binomial-inverse-cumulative-function
Formula for the Negative binomial inverse cumulative function
For example, how many times (N) do I need to flip a coin (p=0.5) to have a P=90% probability of having observed 20 heads. I empirically found that I need N=20+28=48. Is it correct?
Is there an explicit formula for the Negative binomial inverse cumulative function?
– Shai Covo Jan 26 '11 at 1:23
I use Maple (statevalf,idcdf,negativebinomial) but I would like to know if there is an explicit formula. – Jean-Pierre Jan 26 '11 at 1:47
1 Answer
We know that $F(k) = P(X \leq k) = 1-I_{p}(k+1,r)$ where $I_{p}(k+1,r)$ is the incomplete beta function. So $0.9 = 1-I_{0.5}(k+1,r)$. From this, I believe you have enough information to get $k$ and $r$ and hence $n$.
Thank you for your answer. I know how to compute the incomplete beta from k and r but I can not see how to compute k and r from the value of the incomplete beta. what did I miss? – Jean-Pierre Jan 26 '11 at 1:41
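There is no simple closed-form inverse; in practice one inverts the CDF numerically. A minimal SciPy sketch for the coin example above (note that scipy.stats.nbinom counts failures before the $r$-th success, so the total number of flips is $r$ plus the returned quantile; the output matching the empirical $N=48$ is what one should expect, not a guarantee of this sketch):

```python
from scipy.stats import nbinom

r, p, P = 20, 0.5, 0.90    # required successes, success probability, coverage

k = nbinom.ppf(P, r, p)    # smallest k with P(failures <= k) >= 0.90
N = r + int(k)             # total flips = successes + failures

print(int(k), N)           # 28 48, consistent with the empirical N = 48
```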
http://stats.stackexchange.com/questions/46597/dependent-bernoulli-trials
# Dependent Bernoulli trials
The probability of a sequence of n independent Bernoulli trials can be easily expressed as $$p(x_1,...,x_n|p_1,...,p_n)=\prod_{i=1}^np_i^{x_i}(1-p_i)^{1-x_i}$$ but what if the trials are not independent?
How would one express the probability to capture the dependence?
What is the dependence? E.g. Summing over the N trials must equal K? There must be an even number of 'true' results, etc. Once you define the kind of dependence it will be possible to write down the actual likelihood more concretely. – Nick Dec 27 '12 at 17:25
## 2 Answers
There are expressions you can write down, but I hope you realize how uninformative they are. Saying that the variables are not known to be independent, without saying anything else, gives no usable information. It's like saying that you have a friend whose name is not known to be Bob, then asking what you can say about your friend's height and age. So, here is a nearly meaningless restatement:
$$p(x_1,...,x_n) = \prod_i p(X_i=x_i|X_1=x_1,...,X_{i-1}=x_{i-1}).$$
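To make this factorization concrete, here is a minimal Python sketch for one specific, hypothetical kind of dependence: a first-order Markov chain in which each trial's success probability depends only on the previous outcome (all probabilities below are made-up example values):

```python
def sequence_probability(x, p_first, p_after_success, p_after_failure):
    """P(x_1,...,x_n) for Bernoulli trials with first-order Markov dependence.

    Each factor is P(X_i = x_i | X_{i-1} = x_{i-1}), as in the chain rule.
    """
    prob = 1.0
    p = p_first
    for xi in x:
        prob *= p if xi == 1 else (1.0 - p)
        # next trial's success probability depends on this outcome
        p = p_after_success if xi == 1 else p_after_failure
    return prob

# Example: a success makes the next success more likely
print(sequence_probability([1, 1, 0, 1], 0.5, 0.8, 0.3))   # 0.024
```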
Did you look at the de Finetti theorem and exchangeable sequences? http://www.stats.ox.ac.uk/~steffen/teaching/grad/definetti.pdf
Can you make this reply more self-contained, e.g. by providing a brief overview of the slides/references you linked to? – chl♦ Dec 27 '12 at 11:24
http://physics.stackexchange.com/questions/tagged/quantum-electrodynamics?page=3&sort=votes&pagesize=15
# Tagged Questions
Quantum-ElectroDynamics (QED) is the quantum field theory believed to describe the electromagnetic interaction (and with some extension the weak nuclear force).
### How many photons can an electron absorb and why?
How many photons can an electron absorb and why? Can all fundamental particles that can absorb photons absorb the same amount of photons and why? If we increase the velocity of a fundamental ...
### numerical formulation of Dirac equation plus electromagnetic field
I have the following equations describing the electron field in a (classic) electromagnetic field: $$c\left(\alpha_i \cdot (P - q(A + A_b)) + \beta mc\right) \psi = E \psi$$ where $A_b$ is ...
### If electromagnetic fields give charge to particles, do photons carry charge?
As I understand these two statements: An electromagnetic field gives particles charge A photon is a quantum of electromagnetic field It must mean that a photon carries charge. But I guess it isn't ...
### How do we visualise antenna reception of individual radiowave photons building up to a resonant AC current on the antenna?
I am a chemical/biological scientist by trade and wish to understand how quantum EM phenomena translates to our more recognizable classical world. In particular I want to get a mechanistic picture of ...
### Photons, where do they come from? [closed]
Photons, where do they come from? What exactly is a photon? I've certainly heard how they get produced, but it doesn't seem to make sense that some sort of particle should need to be produced just ...
### Will an entangled idler electron induce a current in a conductor if the signal electron's spin is measured?
I'm assuming a hypothetical setup as follows: Two labs (Alice and Bob) exist. Each has one electron of an entangled pair. At Alice, the electron travels through free space towards a magnetic field of ...
### what is the relationship between the dynamical casimir effect and virtual particles?
Since virtual particles are disturbances in a field, and not particles in any sense, as explained here, how is it that true photons arise from them when excited with kinetic energy via the dynamical ...
### is space infinitely divisible?
As a child I remember hearing the popular paradox presented by Zeno proposing that Achilles could never catch a tortoise in a race since he would have to traverse the infinite space between himself ...
### Is there a point interaction model of the electron?
Is there a point interaction model of the electron? I imagine something like $\propto(\bar \psi\psi)^2$ (edited). Is such a thing in use? Since I ...
### For someone who only studied electromagnetism, what is the modern way to explain electromagnetic fields?
After reading most of the electromagnetism chapters of Feynman's lectures on physics, I would like to understand in more detail, at least an idea, of what causes the electromagnetic fields. Not sure ...
### Back-of-the-envelope calculation of electron anomalous magnetic moment
I wonder if there is an intuitive way to obtain the $\frac{\alpha}{2\pi}$ correction to electron's $\frac12 (g-2)$ just like how Bethe estimated the Lamb shift? Here is an attempt by Drell & ...
### What is the physical process (if any) behind magnetic attraction?
I understand that the electromagnetic force can be described as the exchange of virtual photons. I also understand that it's possible for virtual photons, unlike their real counterparts, to have mass ...
### How does this paper relate to standard QED?
This paper proposes a microscopic mechanism for generating the values of $c, \epsilon_0, \mu_0$. They state that their vacuum is assumed to contain ephemeral (meaning existing within the limits of ...
### Is it reasonable to interpret the Lamb shift as vacuum induced Stark shifts?
This is a pretty hand-wavy question about interpretation of the Lamb shift. I understand that one can calculate the Lamb shift diagrammatically to get an accurate result, but there exist ...
### Alternative methods to derive the static potential in the NR limit of QED
In QED, one can relate the two-particle scattering amplitude to a static potential in the non-relativistic limit using the Born approximation. E.g. in Peskin and Schroeder pg. 125, the tree-level ...
### The state of Indefinite metric in Quantum Electrodynamics
I faced difficulties to grasp why an indefinite metric is introduced from nowhere in QED; after searching the internet I found that this is a problem in QED, because one needs it to preserve the theory's ...
### Nonlinear refraction index of vacuum above Schwinger limit
This question is more about trying to feel the waters in our current abilities to compute (or roughly estimate) the refraction index of vacuum, specifically when high numbers of electromagnetic quanta ...
### Does electric charge affect space time fabric?
I am confused with this question. Does electric charge affect the space time fabric? If so, why? Also if electric charge does not affect the space time fabric, how can we interpret the origin of the ...
### After quantization of electron vibrations, do we need electrons anyway?
The title question is not meant in a general context, but one which goes to plasmon theory. In that case, how are the statistics (bosons vs. fermions) of plasmons determined? And is there an ...
### Maxwell's equations in microscopic and macroscopic forms, and quantization
The macroscopic Maxwell's equations can be put in terms of differential forms as $$\mathrm{d}\mathrm{F}=0,\quad\delta \mathrm{D}=j\implies \delta j=0,\quad \mathrm{D}=\mathrm{F}+\mathrm{P}.$$ ...
### Are electromagnetic “plane” waves measurable or just a virtual concept?
I find plane waves are incompatible with the light cone. Perhaps plane waves are "virtual" and can never be measured; in that case, shouldn't we call plane waves "virtual plane waves"? (other option ...
### Neutrino and electromagnetic forces
I learned from Wikipedia that neutrinos "are not affected by the electromagnetic forces". How was this identified experimentally?
### What is the value of the fine structure constant at Planck energy?
At low energy, 511 keV, the value of the fine structure constant is 1/137.03599... At Planck energy $\sqrt{\frac{\hbar c^5}{G}}$, or $1.956 \times 10^{9}$ joule, or $1.22 \times 10^{28}$ eV, it has a ...
### What is the 'quantum-developed' or 'relativistic-developed' equation of the electrostatic force?
Quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics that is the first theory where full agreement between quantum mechanics, special relativity and ...
### Two-photon scattering: colours
Is there a particular conservation principle that necessitates that the outcoming photon pair has the same frequencies as the incoming photon pair? I'm thinking in particular of these Feynman-like ...
### A step in the derivation of the magnetic momentum of the electron in Zee's QFT book
In chapter III.6 of his Quantum Field Theory in a Nutshell, A. Zee sets out to derive the magnetic moment of an electron in quantum electrodynamics. He starts by replacing in the Dirac equation the ...
### What's the wavelength of an electron after hitting a potential barrier?
I have this question: An electron with Energy $E = 40 eV$ hits a potential barrier with $E_0 = 30 eV$. What is the wavelength of the electron after hitting the potential barrier? I worked from ...
### Photon emission from excited atoms
The answer given by ordinary quantum mechanics for atomic levels does not explain how an electron in an excited level can radiate a photon and move to a lower level. How does QED justify this ...
### Which is this formula Feynman talks about in the QED book?
I am reading the fantastic QED Feynman book. He talks in chapter 3 about a formula he considers too complicated to be written in the book. I would like to know which formula he talks about, although I ...
### Ontology of the quantum field
I'll use QED as an example, but my question is relevant to any quantum field theory. When we have a particle in QED, where is its charge contained in the field? Is the field itself charged? If so, ...
### How did QED diverge from quantum mechanics mathematically?
We have either Heisenberg or Schrodinger picture of quantum mechanics world. So, how did quantum electrodynamics come from mathematical formulations of quantum mechanics? Also, QED seems to have ...
### EM field quantization
I'm trying to quantize the electromagnetic field by solving the vector potential wave equation, that is: $$\nabla^{2} \mathbf{A} = \dfrac{1}{c^{2}} \dfrac{\partial^{2} \mathbf{A}}{\partial t^{2}},$$ ...
### Lightning and nuclear fusion
I'm going to be brief, I just saw a Discovery Channel show that showed a lot of interesting phenomena around lightning (like elves, how cool is that(!)), and got me wondering. 1) Thinking of ...
### Does a quadrupole transition mean emission of one photon with spin 2?
If it's true and spin-2 photons do exist, could you please point to some literature that discusses spin-2 photons? If not, then how exactly does a selection rule for quadrupole transition make sense ...
### Local $U(1)$ gauge invariance of QED
The Lagrangian density for QED is $$\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\bar{\psi}(i\gamma^{\mu}D_{\mu}-m)\psi$$ with $$F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$$ ...
### Feynman Rules for massive vector boson interactions
I am stuck at the beginning of a problem where I am given an interaction term that modifies the regular QED Lagrangian. It involves the interaction between a photon field and a massive vector boson: ...
### Spinors Under Spatial Reflection
How is eq. (4.4) a solution of eq. (4.3)?
### How does the Lyman transition (from a higher excited state to the ground state) happen? The dipole selection rule is ±1?
How are the Lyman series observed when the dipole selection rule is $\Delta l = \pm 1$ for the hydrogen atom?
### Optical waveguide that can displace a 4D light field
Has anyone invented an optical waveguide that can "pipe" a scene from one place to another unaltered? More precisely, I want to displace (and/or rotate) a 4D light field. An optical waveguide is an ...
### Could gravity hold electron charge together?
Could the gravitational force be what holds the charge of the electron together? It seems to be the only obvious possibility; what other ideas have been proposed besides side-stepping the issue and ...
### Interaction of matter with EM fields
For the interaction between electromagnetic fields and matter, when do we have to include quantization of the EM field, and when can we ignore it? When do we have to include quantization of atomic ...
### thermal energy while calculating Langevin Forces
I have a quick question from thermodynamics. I remember that we take kT/2 as the kinetic energy per degree of freedom in the kinetic theory of gases. But when we do Langevin forces (for example in ...
### Photons interact with themselves
We know that photons are the antiparticles of themselves and if they interact with each other through higher order process do they annihilate and again produce photons? Here is the Phys.SE question ...
### Is there a simple explanation for Schwinger's relation g=2+alpha/pi for the g-factor of the electron?
Schwinger has on his grave (it seems) the relation between the g-factor of the electron and the fine structure constant: $g = 2 \ + \ \alpha / \pi \ + \ ...$ Did Schwinger or somebody else ever ...
### Relationship between classical electromagnetic wave frequency and quantum wave function + de broglie frequency
As it is. As I study through classical mechanics and quantum mechanics, I began to wonder whether there is a relationship between classical electromagnetic wave frequency and quantum wave function ...
### Can we use only the observables of Fermion fields?
There are legion ways to consider fermionic Dirac spinor fields, but is it possible to consider the asymptotic free field only in terms of observables, which in the case of the Dirac spinor field must ...
### Are virtual photons affected by effective gravity in non-linear quantum electrodynamics?
Quantum electrodynamics based upon Euler-Heisenberg or Born-Infeld Lagrangians predict photons to move according to an effective metric, which is dependent on the background electromagnetic field in a ...
### Why is the 2s state lower in energy than the 2p state in atoms?
The s orbital has a higher probability to be closer to the core and feels a larger attraction than the p orbital, which on average is further away and in addition has the repulsive potential $l(l+1)\hbar^2/2mr^2$. ...
### Conservation of electric charge in Feynman diagram
Here is a Feynman diagram showing the mutual annihilation of a bound state electron positron pair into two photons: Is the electric charge conserved at the point A (or B)? What is the "charge" of ...
### What is the spectral energy density of virtual photons around a unit charge at rest?
Given that my previous question, namely "What is the number density of virtual photons around a unit charge?" has no precise answer, here is a more precise wording: What is the virtual photon ...
http://mathhelpforum.com/advanced-algebra/196345-symmetric-matrix-eigenvectors-eigenvalues.html
# Thread:
1. ## Symmetric Matrix - Eigenvectors, Eigenvalues
Hi, just wondering if anyone would be able to assist me with this question.
Assume that $A$ is symmetric. Let $v_1,\dots,v_n$ be an orthonormal basis of eigenvectors with corresponding (real) eigenvalues $\lambda_1,\dots,\lambda_n$.
Show that the 2-norm of any vector $x$ satisfies
$\|x\|^2 = \sum_{j=1}^{n} (v_j^T x)^2$.
Note: I have already shown part (a) of the question.
Any help is appreciated!
2. ## Re: Symmetric Matrix - Eigenvectors, Eigenvalues
This isn't my area, but since $A$ is symmetric, isn't $v_j=v_j^T$? That would get you the $\left( v_j^T \right)^2$ part. I'm not sure if this helps any or not.
3. ## Re: Symmetric Matrix - Eigenvectors, Eigenvalues
Thank you for your attempt, but I figured it out.
Setting $V$ as the matrix with columns $v_j$, you can write this in matrix notation. The $v_j$ are orthonormal, or in matrix notation $V^T V = I$, which also means $V V^T = I$. Since $\|x\|^2 = x^T x$ and the $j$-th entry of $V^T x$ is $v_j^T x$, we get
$\|x\|^2 = x^T x = x^T V V^T x = (V^T x)^T (V^T x) = \|V^T x\|^2$.
But I also have a follow-up question, which is to show that $\|Ax\| \leq |\lambda_1| \, \|x\|$, where the eigenvalues are ordered such that $|\lambda_1| \geq |\lambda_2| \geq \dots \geq |\lambda_n|$.
I'm guessing we need to use the previous part of the question, but after substituting Ax into the LHS, I get stuck showing the inequality.
Would it be possible to give me some idea on this one? Thanks!
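A quick numerical sanity check of both statements (a minimal NumPy sketch; the 5×5 size and seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # random symmetric matrix

lam, V = np.linalg.eigh(A)             # eigh gives orthonormal eigenvectors
x = rng.standard_normal(5)

# ||x||^2 = sum_j (v_j^T x)^2, i.e. |x|^2 = |V^T x|^2
print(np.isclose(x @ x, np.sum((V.T @ x) ** 2)))                          # True

# ||A x|| <= max_j |lambda_j| * ||x||
print(np.linalg.norm(A @ x) <= np.max(np.abs(lam)) * np.linalg.norm(x))  # True
```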
http://physics.stackexchange.com/questions/51293/how-can-two-seas-not-mix/51350
# How can two seas not mix?
How can two seas not mix? I think this is commonly known and the explanation everyone gives is "because they have different densities".
What I get is that they eventually will mix, but this process takes a long time.
From what you see in this picture you can see that they have a clear separation line as if you would mix water and oil.
Basically what I'm skeptical about is the clear separation line between them. Putting highly salted water and normal water in the same bowl will cause almost instant mixing. Can you get the same effect as shown in the picture in a bowl at home?
I'm looking for a more complete answer than just that they have different densities. Thanks.
EDIT: Looking more into the "density" hypothesis, I also found this, which I found interesting :)
Yes, it looks like a very clear separation line, but on what scale? That line is probably wider than your bowl at home. Was there any description along with this picture? – jkej Jan 15 at 15:45
Where is the picture taken? – Qmechanic♦ Jan 15 at 18:23
– Sean Cheshire Jan 15 at 18:26
The white line in your picture looks to me like waves breaking on a reef. Not sure the color contrast really has anything to do with your question about mixing (except people have claimed it does on the internet.) Mixing takes place at all scales in the ocean, and all else being equal, larger bodies of water take longer to mix. – Mark Rovetta Jan 15 at 19:41
FYI the boundary between salty and fresh water is often called a halocline. They can be quite pronounced in sufficiently calm bodies of water. – user1631 Jan 15 at 20:42
## 7 Answers
There are two mechanisms for mixing at a liquid-liquid interface, firstly diffusion and secondly physical agitation.
Diffusion is negligibly slow in liquids; it takes days for solutes to travel a few centimetres, so the mixing is dominated by physical agitation, e.g. wave action, convection currents, wind mixing etc.
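As a rough order-of-magnitude check of that claim, here is a minimal Python sketch using the standard diffusion-time estimate $t \sim L^2/(2D)$ with an assumed ballpark coefficient $D \approx 10^{-9}\,\mathrm{m^2/s}$ for a small solute in water:

```python
# Diffusion timescale t ~ L^2 / (2 D) for a solute in still water
D = 1e-9                          # m^2/s, assumed ballpark value
for L in (0.01, 0.03, 1.0):       # 1 cm, 3 cm, 1 m
    t = L**2 / (2 * D)
    print(f"L = {L:5.2f} m : t ~ {t:.1e} s ~ {t / 86400:.1f} days")
```

A few centimetres indeed takes days, while a metre takes decades, which is why mixing on oceanic scales is dominated by agitation rather than diffusion.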
In this particular case it's hard to judge what effect waves and wind have. The sea looks very calm, so I'd guess that waves and wind have little effect and it's not that surprising that mixing is slow. I bet that line wouldn't be as well defined the morning after a storm.
This sort of division isn't that unusual. I grew up in Khartoum, where the White Nile and the Blue Nile meet, and the division between them remains sharp for miles. Although I don't have any snaps from that era (I was five :-) the following picture found with Google Images shows the division nicely.
so you say, if I have a big enough container with a wall separating left and right, fill it up with two liquids of different colours and densities, one on the left and one on the right side of the container, then remove the wall slowly so as not to cause agitation, the net result is there should be a no-mixing line visible, right? – elcojon Jan 15 at 17:20
If the liquids have different densities, the higher density one will flow along the bottom of your container and the lower density one will flow along the top. How much mixing occurs depends on how turbulent the flow is. If the flow is slow, e.g. low density difference or high viscosity, you won't get much mixing and you'll eventually end up with two layers. – John Rennie Jan 15 at 17:27
For example, when I pour blue toilet cleaner into the toilet I get a pool of blue liquid at the bottom. To get it to mix I have to use the brush. I hope this example doesn't lower the tone too much :-) – John Rennie Jan 15 at 17:28
"I bet that line wouldn't be as well defined the morning after a storm." OK, but why does the line return to its well-defined state after that? I would expect that once the waters become mixed, they stay that way. But that must not be the case, as the line has apparently endured storms for millions of years. What is the mechanism behind this un-mixing? – Kevin Jan 15 at 18:13
So, it's something like a river of fresh water flowing into the sea? Well, OK then: in case of a storm, the bluer water would mix and get completely lost in the darker one, which is the ocean, so the other can be disregarded as far as the colour goes. Then, when the waters are calm again, a new flux of fresh water slowly reproduces the phenomenon. This is my opinion, obviously. – Bzazz Jan 22 at 17:43
Nobody has thus far touched on the probability that freshwater at a river/ocean interface is quite likely to be muddy. What does this mean? It means that the water is likely to contain a stable suspension of silicate micro- or nanoparticles, which are unable to aggregate due to short range electrostatic repulsion. This is what is called a colloid.
The example of the turbidity of fresh versus ocean water was one that came up in a phys chem course I did a few years ago. The clarification of water at a river delta is something that can be seen in satellite imagery worldwide and has less to do with the dilution of muddy water in an ocean and more to do with the destabilising effect of dissolved ions on muddy colloids, which results in a radical reduction of aggregation timescale.
What this means is that in mixing muddy fresh and clear salt water the colloidal mud particles will rapidly aggregate and literally drop out of the water. I would posit that what is being depicted here is actually a phase transition of sorts between 'stable colloid' on the left and 'unstable colloid' on the right, with an attendant sharp distinction in light scattering off suspended particles. The salinity gradient thus may be somewhat smoother than the boundary would suggest as a fairly small change in salinity may be the difference between muddy water that is indefinitely stable, versus muddy water that will clarify in seconds.
+1. Have you got references on how and why the change in salinization destabilizes the colloid? – Emilio Pisanty Jan 16 at 12:20
– Richard Terrett Jan 16 at 12:57
I initially suspected that the picture here shows a sand bar next to deeper water, not two "seas" refusing to mix: the light-coloured water is light because it is shallow and we are seeing the sand below, while the dark region is dark because it is too deep to see the bottom, so the light is absorbed rather than reflected back. The foam we see at the border is from waves that are pushed up when the deep-water waves encounter suddenly shallower water. Having read another response (see comment below; I can't remember the name of the author at the moment and the edit section doesn't allow me to see it), I now think he's right: it's one liquid (say, a large river of fresh water) flowing into another (probably the ocean or something connected to it).
You're right that different-density liquids will eventually mix if they are mutually soluble, but generally, when you have a case of two mutually soluble liquids with different densities, they're top & bottom, rather than side-by-side.
You can get this effect at home with water, sugar, and food coloring. First, mix 2 parts sugar with one part water. Heat until all of the sugar is dissolved, and add some blue food coloring. Put it in a clear container. Allow it to cool to room temperature.
Next, mix some red food coloring with water. Pour it over the back of a spoon slowly and gently so as to minimize mixing.
The glass should show blue on the bottom, red on top, with minimal purple in the middle if you can do it right. It should persist for at least a few hours, possibly a few days. This is similar to what happens in the global conveyor belt, where cold, dense, saltier water is beneath warm, relatively less saline water. It can also happen on a smaller scale, with brinicles, as explained by Alec Baldwin.
After reading John Rennie's answer, I suspect he's right: we're looking at an interface where one liquid is flowing into another. I'm kind of new here; what's the etiquette for changing my answer? – thatnerd Jan 15 at 16:03
Good point. I think one of the important conclusions is that the question is scale-sensitive. It seems like the Atlantic not mixing with the Pacific is a big stretch, since the inflow is so small, the currents so large, and the volume and time scales so big. You can use the "edit" button on your answer btw. – AlanSE Jan 15 at 16:06
The video is not available in the UK (at least). If you can find an alternative source it'd be good. – Emilio Pisanty Jan 15 at 16:13
– thatnerd Jan 15 at 16:27
"You can do this at home.." pics? :) – BlueRaja - Danny Pflughoeft Jan 15 at 16:49
It's worth pointing out that the separation of two similar liquids is a common experiment. The diffusion of the liquids into each other is governed by Fick's Law but can also be understood in terms of Entropy of Mixing.
The key to this puzzle is to really understand it in terms of entropy. Although the Black and Tan shown in the picture will eventually mix over time, if we found some mechanism by which we could build a dynamic cycle similar to a complete thermodynamic cycle we could keep the material separated as long as our energy source held.
One has to keep in mind that it is both the temperature gradient and the material density (as well as properties of dissolved solids) that govern the mixing between the materials. If materials of different densities have substantially different temperatures, they will tend to stay separated longer than if they were at the same temperature.
In the case of two seas, because there is a constant source of energy (the sun, etc), as well as an apparent source of material to cause different densities, those dynamic sources must also be accounted for in our understanding of equilibrium. It is the dynamics of the total system being analyzed that will cause it to favor sets of configurations that might not be stable in a more "static" diffusion problem.
The problem of ocean mixing is probably best generalized in the study of ocean circulation models.
In the ocean, even if the difference in density is small (e.g., of the order $0.1\,kg/m^3$), the process of mixing between two water masses is rather slow (without strong turbulence). The picture probably was taken close to the estuary of a big river. In this case the density difference between fresh river water and salty sea water should be of the order of $20\,kg/m^3$, which is why the boundary is visible so clearly (taking into account calm wind conditions).
I (Grisha) checked the location on Google maps http://goo.gl/xY41z and yes — there are three huge rivers not far from the Flickr geotag — Dangerous River, Ahrnkin River and Italio River. UPDATE. Actually you can clearly see this sharp front on Bing Maps! — http://binged.it/VoGDhh
The front is most likely not strictly vertical; the fresh and warm water runs on top of the cold and salty ocean water, which, in turn, is submerging under the fresh water. There is a fragment of a lecture explaining how such a vertical front can form. Your picture is an example of so-called salt wedge estuaries. The classical example of such a wedge is the Columbia River.
In Internet, you can find a lot of such pictures from satellites, here are two examples:
http://www.ifish.net/board/showthread.php?t=293094
http://www.aslo.org/photopost/showphoto.php/photo/271/title/fraser-river-satellite-image/cat/504
Mechanism of mixing: first of all (as we all know), mixing is not an atomic or nuclear phenomenon. It happens when the molecules of one fluid take their place in interstitial positions between the molecules of the other fluid. The reason I've mentioned "fluid" is that the molecules are free to move in fluids. In solids, this doesn't happen, because the molecules hold themselves so tightly that they won't allow any other molecules (even their own) to come and occupy the position unless affected by pressure, temperature, etc.
What happens in practice? It is essentially impossible for two mutually soluble liquids of different densities to maintain a natural boundary between themselves forever. As John said, some external force may help the molecules of one liquid to diffuse through the other very easily. Gases are no problem; they hardly require a force at all because they scatter everywhere. The difference is that the intermolecular forces among liquid molecules are somewhat stronger than those in gases, which makes the diffusion process much slower in liquids than in gases (as @tpg phrased it, and I agree).
When you mix two liquids in a bowl, the molecules obtain the necessary agitation to diffuse either from your shaking of the bowl or from stirring. Note that this phenomenon depends on the surface area: in a bowl, the area occupied by the liquids is small, so the agitation can push the molecules around simultaneously with ease.
In the case of seas (or oceans), the agitation can't keep pushing the molecules forever; it has to die out at some point. Moreover, such a tremendous force couldn't be expected in oceans. That's why John mentioned "calm". Sadly, there's no one to stir the seas. If we bring sci-fi into the topic and imagine a magnitude-20 (impossible) earthquake under the ocean to do the job, there would be enough force to get you muddy water.
Liquids can mix just as gases without external forcing simply due to the kinetic energy (temperature) of the molecules. They just take a lot longer than gases do because the intermolecular forces slow down the diffusion. So your contention in your second paragraph is not correct. – tpg2114 Jan 15 at 16:16
I'm taking issue with: "As John told, mixing requires some external force to make the molecules of one liquid to diffuse through the other. Gases (no problem), they don't require a force at all because they scatter everywhere." Mixing does not require an external force, it's just more efficient with it. And liquids don't behave any differently than gases physically -- they just diffuse slower than gases. So that entire statement is incorrect. – tpg2114 Jan 15 at 17:03
No worries. Kinetic theory and turbulence are graduate level topics, you've got some years before you've taken them :) – tpg2114 Jan 15 at 17:10
The mixed state is a thermodynamic equilibrium state and the unmixed one is a non-equilibrium state. A non-equilibrium state can only be maintained if there is energy flux into and out of the system. In this case the obvious reason could be that an influx of fresh water, or water with different salinity (energy in matter), is countering the mixing such that non-equilibrium is maintained, even though there is mixing by both diffusion and convection (storms etc.). Other systems relying on other forms of energy to cause unmixed states, i.e., wind patterns and ocean currents, rely on solar energy flux and geothermal energy flux respectively. The exact reason would depend on the actual situation. Of course this may not be the detailed answer you are looking for, but I am just putting the overarching concept out there.
http://motls.blogspot.com/2012/10/different-ways-to-interpret-feynman.html
# The Reference Frame
## Tuesday, October 30, 2012
### Different ways to interpret Feynman diagrams
Feynman diagrams are the funny pictures that Richard Feynman drew on his van:
You see that a Feynman diagram is composed of several lines that meet at vertices (at the nodes of the graph). Some of the lines are straight, some of them are wiggly: this shape of each line distinguishes the particle type. For example, straight lines are often reserved for fermions while wiggly lines are reserved for photons or other gauge bosons.
Some lines (I mean line intervals) are external – one of their two endpoints is free, unattached to anything. These are the external physical particles that must obey the mass on-shell condition $p_\mu p^\mu = m^2$ and that specify the problem we're solving (i.e. what's the probability that some particular collection of particles with some momenta and polarization will scatter and produce another or the same collection particles with other momenta and polarizations). Other lines (I mean line intervals) are internal and they are unconstrained. You must sum over all possible ways to connect the predetermined external lines by allowed vertices and allowed internal lines. If you associate a momentum with these internal lines, also known as "propagators", it doesn't have to obey the mass on-shell condition. We say that the particle is "virtual". An explanation why its $E$ may differ from $\sqrt{p^2+m^2}$ is that the virtual particle only exists temporarily and the energy can't be accurately measured or imposed because of the inequality $\Delta E\cdot\Delta t\geq\hbar/2$.
Because the virtual particles are not external, they define neither the initial state nor the final state. Still, they "temporarily appear" during the process, e.g. scattering, and they influence what's happening. In fact, they're needed for almost every interaction. Also, the Feynman diagrams have vertices at which several lines meet, where they terminate. The vertices describe the real events in the spacetime in which the particles merge, split, or otherwise interact. However, we're doing quantum mechanics so none of these points in spacetime are uniquely or objectively determined. In fact, all the choices contribute to the calculable results – the total probability amplitudes.
A Feynman diagram is a compact picture that may be drawn by most kids in the kindergarten. However, each Feynman diagram – assuming we know the context and conventions – may also be uniquely translated to an integral, a contribution to the complex "probability amplitude" whose total value is used to calculate the probability of any process in quantum field theory. The laws used to translate the kindergarten picture to a particular integral or a related mathematical expression are known as the "Feynman rules".
How do we derive them?
I will discuss three seemingly very different methods:
• Dyson's series, an operator-based method
• Feynman's sum over histories i.e. configurations of fields
• Feynman's sum over histories i.e. trajectories of first-quantized particles
Richard Feynman originally derived his Feynman diagrams by the second method. As his name in the description of a method indicates, Freeman Dyson rederived the Feynman rules for the Feynman diagrams using the first method – and it was an important moment from a marketing viewpoint because this is how Freeman Dyson made Feynman diagrams extremely popular and essentially omnipresent.
The third method was added for the sake of conceptual completeness and it is the least rigorous one. However, it still gives you another way to think about the origin of Feynman diagrams – a way that is perhaps generalized in the "most straightforward way" if you try to construct Feynman diagrams for perturbative string theory.
It's important to mention that Feynman has discovered many things and methods, of course, but we shouldn't confuse them. The Feynman diagrams are the pictures on the van, tools to calculate scattering amplitudes and Green's functions. But he also invented the Feynman path integral ("sum over histories") approach to any quantum mechanical theory. It's not quite the same thing as the Feynman diagrams – it applies to any quantum theory, not just quantum field theory. However, as I have already said, he used the "sum over histories" of a quantum field theory to derive the Feynman diagrams for the first time.
Two other, conceptually differently looking ways to derive the Feynman diagrams were found later. The third method uses the "sum over histories" but applied to a "differently formulated system" than Feynman originally chose; the first method due to Dyson doesn't use the "sum over histories" at all.
Quadratic terms in the action, higher-order terms in the action
But all the three strategies to derive the Feynman rules share certain technical principles which are "independent of the formalism":
• The lines, both the propagators and the external lines, are associated with individual fields or particle species and with the bilinear or quadratic terms they contribute to the action (and the Lagrangian or the Hamiltonian).
• The vertices are associated with cubic, quartic, or other higher-order terms in the action (and the Lagrangian or the Hamiltonian), assuming that it is written in a polynomial form.
Let's assume we have an action and the Lagrangian that depends on the fields $\phi_i$ in a polynomial way:\[
\mathcal{L} = a_0 + \sum_i a_{1,i} \phi_i + \frac{1}{2!} \sum_{i,j} a_{2,ij} \phi_i\phi_j + \frac{1}{3!}\sum_{i,j,k} a_{3,ijk} \phi_i\phi_j\phi_k+\dots
\] which continues to higher orders, if needed, and which also contains various similar terms with the (first or higher-order) spacetime derivatives $\partial_\mu$ of the fields $\phi_i$ contracted in various ways. We don't consider the spacetime derivatives as something that affects the order in $\phi$ so $\partial_\mu \phi\partial^\mu \phi$ is still a second-order term in $\phi$. The number of fields $\phi_i$ – the order in $\phi$ – that appear in the cubic or higher-order term will determine how many lines are attached to the corresponding vertex of the Feynman diagram.
The individual coefficients $a_{n,i}$ etc. are parameters or "coupling constants" of a sort. How do we treat them?
Well, the first term, the universal constant $a_0$, is some sort of the vacuum energy density. As long as we consider dynamics without gravity, it won't affect anything that may be observed. For example, the classical (or Heisenberg) equations of motion for the operators are unaffected because the derivative of a constant such as $a_0$ with respect to any degree of freedom vanishes. We know that even in the Newtonian physics, the overall additive shift to energy is a matter of conventions. The potential energy is $mgh$ where $h$ is the height but you may interpret it as the weight above your table or above the sea level or above any other level and Newton's equations still work.
If we include gravity, the term $a_0$ acts like a cosmological constant and it curves the spacetime. Fine. We will ignore gravity here so we will ignore $a_0$, too.
The next terms are linear, proportional to $a_{1,i}$. They are multiplied by one copy of a quantum field. For the Lorentz invariance to hold, it should better be a scalar field and if it is not, it must be a bosonic field and the vector indices must be contracted with those of some derivatives, e.g. as in $\partial_\mu A^\mu$.
What do we do with the linear terms?
Well, here we can't say that they don't matter. They do depend on the fields and they do matter. But we will still erase them because of a different reason: they matter too much. If the potential energy contains a term proportional to $\phi$ near $\phi=0$, it means that $\phi=0$ isn't a stationary point. The value of $\phi$ will try to "roll down" in one of the directions to minimize the potential energy. It will either do so indefinitely, in which case the Universe is a catastrophically unstable hell, or it will ultimately reach a local minimum of the potential energy. In the latter, peaceful case, you may expand around $\phi=\phi_{\rm min}$, i.e. around the new minimum, and if you do so, the linear terms will be absent.
So if we perform these basic steps, we see that without a loss of generality, we may assume that the Lagrangian only begins with the bilinear or quadratic terms. The following ones are cubic, and so on.
(We could start with a quantum field theory that has nontrivial linear terms, e.g. in the scalar field, anyway. In that case, the instability of the "vacuum" we assumed would manifest itself by a non-vanishing "one-point functions" for the relevant scalar field(s). The Feynman diagrams for these one-point functions ("scattering of a 1-particle state to a 0-particle state or vice versa") are known as "tadpoles" – tadpoles have a loop(s)/head and one external leg – because a journal editor decided that Sidney Coleman's alternative term for these diagrams, the "spermion", was even more problematic than a "tadpole".)
Bilinear terms and propagators
The method of Feynman diagrams typically assumes that we are expanding around a "free field theory". A free field theory is one that isn't interacting. What does it mathematically mean? It means that its Lagrangian is purely bilinear or quadratic. If we want to extract the "relevant" bilinear Lagrangian out of a theory that has many higher-order terms as well, we simply erase the higher-order terms.
Why is a quadratic Lagrangian defining a "free theory"? It's because by taking the variation, it implies equations of motions for the fields that are linear. And linear equations obey the superposition principle: if $\phi_A(\vec x,t)$ and $\phi_B(\vec x,t)$ are solutions to the equations of motion, so is $\phi_A+\phi_B$. If $\phi_A$ describes a wave packet moving in one direction and $\phi_B$ describes a wave packet moving in another direction, they may intersect or overlap but the wave packets may be simply added which means that they pretend that they don't see one another: they just penetrate through their friend. This is the reason why they don't interact. Linear equations describe waves that just freely propagate and don't care about anyone else. Linear equations are derived from quadratic or bilinear actions. That's why quadratic or bilinear actions define "free field theories".
If we appropriately integrate by parts, we may bring the bilinear terms to the form\[
\LL_{\rm free}=\frac{1}{2}\sum_{ij} \phi_i P_{ij} \phi_j
\] where $P_{ij}$ is some operator, for example $(\partial_\mu\partial^\mu+m^2)\delta_{ij}$. The factor $1/2$ is a convention that is natural because if we differentiate the expression above with respect to a $\phi_i$, we produce two identical terms due to the Leibniz rule for the derivative of the product. (That's not the case if the first $\phi_i$ were $\phi^*_i$ which is needed when it's complex: for complex fields, including the Dirac fields etc., the factor of $1/2$ is naturally dropped.)
So the classical equations of motion derived for those fields look like this:\[
\sum_j P_{ij} \phi_j = 0.
\] You should imagine the Klein-Gordon equation as an example of such an equation.
Some operator, e.g. the box operator, acts on the fields and gives you zero. These are linear equations. You may often explicitly write down solutions such as plane waves, $\phi_i = \exp(ip\cdot x)$, and all their linear superpositions are solutions as well. The coefficients of these plane waves are called creation and annihilation operators etc. You may derive what spectrum of free particles may be produced by a free field theory.
This may be done in the operator approach – the free fields are infinite-dimensional harmonic oscillators defined by their raising and lowering operators – as well as by the "sum over histories" approach – the harmonic oscillator may be solved in this way as well. The "sum over histories" approach encourages you to choose the $\ket x$ or $\ket{ \{\phi_i(\vec x,t)\} }$ continuous (or functionally metacontinuous) basis of the Hilbert space. By the functionally metacontinuous basis, I mean a basis that gives you a basis vector for each function or $n$-tuple of functions $\{\phi_i(\vec x,t=t_0) \}$ even though these functions form a set that is not only continuous but actually infinite-dimensional.
But I want to focus on the derivation of the Feynman rules including the vertices. We don't want to spend hours with a free field theory. When we construct the Feynman rules, the free part of the action determines the particles that may be created and annihilated and that define the initial and final Hilbert space as a Fock space; and it determines the propagators.
The propagators will be determined by "simply" inverting the operator $P_{ij}$ I used to define the bilinear action above. This inverted $P^{-1}_{ij}$ plays the role of the propagator for a simple reason: we ultimately need to solve the linear equation of motion with some function on the right hand side. Each function may be written as a combination of continuously infinitely many (i.e. as an integral over) delta-functions so we really need to solve the equation\[
\sum_j P_{ij} \phi_j = \delta^{(4)} (x-x') \cdot k_i
\] for some coefficients $k_i$ – which may be decomposed into Kronecker deltas $\delta_{im}$ for individual values of $m$. The value of $x'$ – the spacetime event where the delta-function is localized – doesn't change anything profound about the equation due to the translational symmetry. A funny thing is that the equation above may be formally solved by multiplying it with the inverse operator:\[
\phi_i = \sum_j P^{-1}_{ij} \delta^{(4)}(x-x')\cdot k_j.
\] That's why the inverse of the operator $P_{ij}$ – which is nonlocal (the opposite to differentiation is integration and we are generalizing this fact) appears in the Feynman rules.
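If you want to see the slogan "the propagator is the inverse of the wave operator" at work, here is a small numerical sketch; it is a 1D Euclidean toy model and the grid, the periodic boundary conditions and all parameters are illustrative choices of mine. A column of the inverted matrix reproduces the continuum Green's function $e^{-m|x-x'|}/(2m)$.

```python
# A numerical sketch (1D Euclidean toy model; all discretization choices are
# illustrative): invert P = -d^2/dx^2 + m^2 on a grid and compare a column
# of P^{-1} with the continuum Green's function exp(-m|x-x'|)/(2m).
import numpy as np

n, L, m = 2000, 40.0, 1.0
h = L / n
x = np.linspace(-L/2, L/2, n, endpoint=False)

P = np.diag(np.full(n, 2.0/h**2 + m**2))      # discretized -d^2/dx^2 + m^2
off = -1.0 / h**2
P += off * (np.eye(n, k=1) + np.eye(n, k=-1))
P[0, -1] = P[-1, 0] = off                     # periodic wrap-around

G = np.linalg.inv(P)                          # the "propagator" P^{-1}
j = n // 2                                    # delta-function source at x' = x[j]
numeric = G[:, j] / h                         # a grid delta-function is ~ 1/h
exact = np.exp(-m * np.abs(x - x[j])) / (2*m)
print(np.max(np.abs(numeric - exact)))        # small, of order h^2
```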
So far I am presenting features of the results "informally"; we are not strictly deriving any Feynman rules and we haven't chosen one of the three methods yet.
Higher-order terms
I will postpone this point but the cubic and higher-order terms in the Lagrangian will produce the vertices of the Feynman diagrams. In the position representation, the locations of the vertices must be integrated over the whole spacetime.
In the momentum representation, the vertices are interactions that appear "everywhere" and we must instead impose the 4-momentum conservation at each vertex. In the latter approach, some momenta will continue to be undetermined even if the external particles' momenta are given. The more independent "loops" the Feynman diagram has, the more independent momenta running through the propagators must be specified. All the allowed values of the loop momenta must be integrated over.
The momentum and position approaches are related by the Fourier transform. Note that the Fourier transform of a product is a "convolution" and this is the sort of mathematical facts that translates the rules from the momentum representation to the position representation and vice versa.
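A quick NumPy check of this fact (random toy data; with NumPy's FFT normalization the identity picks up a factor $1/N$):

```python
# Check: the DFT of a pointwise product equals the circular convolution of
# the DFTs, divided by N in NumPy's normalization convention.
import numpy as np

rng = np.random.default_rng(0)
N = 64
f, g = rng.standard_normal(N), rng.standard_normal(N)

F, G = np.fft.fft(f), np.fft.fft(g)
lhs = np.fft.fft(f * g)
rhs = np.array([sum(F[m] * G[(k - m) % N] for m in range(N))
                for k in range(N)]) / N
print(np.allclose(lhs, rhs))   # True
```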
Starting with the methods: Dyson series
We have already leaked what the final Feynman rules should look like so let us try to derive them. Dyson's method coincides with the tools in quantum mechanics that most courses teach you at the beginning, so it's a beginner-friendly method (although this statement depends on our culture and on those perhaps suboptimal ways how we teach quantum mechanics and quantum field theory). But it's actually not the first method by which the Feynman rules were derived; Feynman originally used the "sum over histories" applied to fields.
Dyson's method uses several useful technicalities, namely the Dirac interaction picture; time ordering; and a modified Taylor expansion for the exponential.
The Dirac interaction picture is a clever compromise between Schrödinger's picture in which the operators are independent of time and the state vector evolves according to Schrödinger's equation that depends on the Hamiltonian; and the Heisenberg picture in which the state vector is independent of time and the operators evolve according to the Heisenberg equations of motion that resemble the classical equations of motion with extra hats (which are omitted on this blog because it's a quantum mechanical blog).
In the Dirac interaction picture, we divide the Hamiltonian into the "easy", bilinear part we have discussed above and this "free part" is used for the Heisenberg-like evolution equations (the operators evolve in a simple linear way as a result); and the "hard", higher-order or interacting part of the Hamiltonian which is used as "the" Hamiltonian in a Schrödinger-like equation. So we have:\[
\eq{
H(t) &= H_0 + V(t), \\
i\hbar \pfrac{\phi_i(\vec x,t)}{t} &= [\phi_i(\vec x,t),H_0]\\
i\hbar \ddfrac{\ket{\psi(t)}}{t} &= V(t)\ket{\psi(t)}.
}
\] The operators evolve according to $H_0$, the free part, but the wave function evolves according to $V(t)$. Note that $V(t)$ – and of course the whole $H(t)$ as well – is a rather general composite operator so it also depends on time: its evolution is also determined by its commutator with $H_0$. On the other hand, $H_0$ itself, while an operator, is $t$-independent because it commutes with itself.
The operator $H_0$ depends on the elementary fields $\phi_i$ in a quadratic way so the commutator in the second, Heisenberg-like equation above is linear in the fields $\phi_i$. Consequently, these equations of motion are "solvable" and the solutions may be written as some combinations of the plane waves – the usual decomposition of operators $\phi_i(\vec x,t)$ into plane waves multiplied by coefficients that are interpreted as creation and annihilation operators.
The proof that this Dirac interaction picture is equivalent to either Heisenberg or Schrödinger picture is analogous to the proof of the equivalence of the latter two pictures themselves; one just considers "something in between them".
Getting the time-ordered exponential
At any rate, we may now ask how the initial state $\ket\psi$ at $t=-\infty$ evolves to the final state at $t=+\infty$ via the Schrödinger-like equation that only contains the interacting (higher-order) $V(t)$ part of the Hamiltonian. We may divide the evolution into infinitely many infinitesimal steps by $\epsilon\equiv \Delta t$. The evolution in each step (the process of waiting for time $\epsilon$) is given by the map\[
\ket\psi \mapsto \zav{ 1+\frac{\epsilon}{i\hbar} V(t) }\ket\psi.
\] For an infinitesimal $\epsilon$, the terms that are higher-order in $\epsilon$ may be neglected. To exploit the formula above, we must simply perform this map infinitely many times on the initial $\ket\psi$. Imagine that one day is very short and its length is $\epsilon$ and use the symbol $U_t$ for the parenthesis $1+\epsilon V(t)/i\hbar$ above. Then the evolution over the first six days of the week will be given by\[
\ket\psi \mapsto U_{\rm Sat} U_{\rm Fri} U_{\rm Thu} U_{\rm Wed} U_{\rm Tue} U_{\rm Mon}\ket\psi.
\] Note that the Monday evolution operator acts first on the ket, so it appears on the right end of the product of evolution operators. The later day we consider, the further on the left side – further from the ket vector – it appears in the product. So the evolution from Monday to Saturday (or Sunday) is given by a product where the later operators are always placed on the left side from the earlier ones. We call such products of operators "time-ordered products".
In fact, we may define a "metaoperator" of time-ordering ${\mathcal T}$ which, if it acts on things like $V(\text{Day1}) V(\text{Day2})$, produces the product of the operators in the right order, with the later ones standing on the left. The ordering is important because operators usually refuse to commute with each other in quantum mechanics.
Now, if you study the product of the $U_{\rm Day}$ operators above, you will realize that the product generalizes our favorite "moderate interest rates still yield the exponential growth at the end" formula for the exponential\[
\exp(X) = \lim_{N\to \infty} \zav{ 1 + \frac XN }^N
\] where $1/N$ may be identified with $\epsilon$. The generalization affects two features of this formula. First, the terms $X/N$ aren't constant, i.e. independent of $t$, but they gradually evolve with $t$ because they depend on $V(t)$. Second, we mustn't forget about the time ordering. Both modifications are easily incorporated. The first one is acknowledged by writing $X$ inside $\exp(X)$ as the integral over time; the second one is taken into account by including the "metaoperator" of time-ordering. (I call it a "metaoperator" so that it suppresses your tendency to think that it's just an operator on the Hilbert space. It's not. It's an abstract symbol that does something with genuine operators on the Hilbert space. What it does is still linear – in the operators.)
With these modifications, we see that the evolution map is simply\[
\ket\psi\mapsto {\mathcal T} \exp\zav{ \int_{-\infty}^{+\infty}\dd t\, \zav{ \frac{V(t)}{i\hbar} } } \ket\psi
\] The time-ordered exponential is an explicit form for the evolution operator (the $S$-matrix) that simply evolves your Universe from minus infinity to plus infinity. In classical physics, you could rarely write such an evolution map explicitly but quantum mechanics is, in a certain sense, simpler. Linearity made it possible to "solve" the most general system by an explicit formula.
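Here is a finite-dimensional sketch of this claim – a 2-level toy system with $\hbar=1$ and a $V(t)$ invented for the occasion, so nothing field-theoretical. The ordered product of the factors $(1+\epsilon V(t)/i)$, with later times on the left, converges to the evolution operator obtained by integrating the Schrödinger-like equation directly.

```python
# A toy check (2-level system, hbar = 1, made-up V(t)) that the ordered
# product of (1 + eps*V(t)/i), later times on the LEFT, reproduces the
# solution of i dU/dt = V(t) U.  Here [V(t1), V(t2)] != 0, so the time
# ordering genuinely matters.
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
V = lambda t: np.cos(t) * sx + t * sz

N, T = 200_000, 4.0
eps = T / N
I2 = np.eye(2, dtype=complex)
U = I2.copy()
for k in range(N):
    U = (I2 + eps * V(k * eps) / 1j) @ U      # later factor on the left

rhs = lambda t, u: (V(t) @ u.reshape(2, 2) / 1j).ravel()
sol = solve_ivp(rhs, (0, T), I2.ravel(), rtol=1e-10, atol=1e-10)
U_exact = sol.y[:, -1].reshape(2, 2)
print(np.max(np.abs(U - U_exact)))            # small; vanishes as eps -> 0
```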
Once we have this "time-ordered exponential", we may deal with it in additional clever ways. The exponential may be Taylor-expanded, assuming that we don't forget about the time-ordering symbol in front of all the monomial terms in the Taylor expansion. The operators $V(t)$ are polynomial in the fields and their spacetime derivatives: we allow each "elementary field" factor to either create or annihilate particles in the initial or final state (these elementary fields will become the inner end points of external lines of Feynman diagrams); or we keep the elementary fields "ready to perform internal services". In the latter case, we will need to know the correlators such as\[
\bra 0 \phi_i(\vec x,t) \phi_j(\vec x', t')\ket 0
\] which is a sort of a "response function" that may be calculated – even by the operator approaches – and which will play the role of the propagators. The remaining coefficients and tensor structures seen in $V(t)$ will be imprinted to the Feynman rules for the vertices, the places where at least 3 lines meet.
I suppose you know these things or you will spend enough time with the derivation so that you understand many subtleties. My goal here isn't to go through one particular method in detail, however. My goal is to show you different ways how to look at the derivation of the Feynman diagrams. They seem conceptually or philosophically very different although the final predictions for the probability amplitudes are exactly equivalent.
Feynman's original method: "sum over histories" of fields
Feynman originally derived the Feynman rules by "summing over histories" of fields. The very point of the "sum over histories" approach to quantum mechanics is that we consider a classical system, the classical limit of the quantum system we want to describe, and consider all of its histories, including (and especially) those that violate the classical equations of motion. For each such a history or configuration in the spacetime, we calculate the action $S$, and we sum i.e. integrate $\exp(iS/\hbar)$ over all these histories, perhaps with the extra condition that the initial and final configurations agree with the specified ones (those that define the problem we want to calculate).
(See Feynman's thesis: arrival of path integrals, Why path integrals agree with the uncertainty principle, and other texts about path integrals.)
We have already mentioned that we're dividing the action, Lagrangian, or Hamiltonian into the "free part" and the "interacting part". We're doing the same thing if we use this Feynman's original method, too. To deal with the external lines, we have to describe the wave functions (or wave functionals) for the multiparticle states; this task generalizes the analogous problem with the quantum harmonic oscillator to the case of the infinite dimension and I won't discuss it in detail.
What's more important are the propagators, i.e. the internal lines, and the vertices. The propagators produce the inverse operator $P_{ij}^{-1}$ from the Lagrangian again. These "Green's functions" have the property I have informally mentioned – they solve the "wave equation" with the Dirac delta-function on the right hand side; and they are equal to the two-point correlation functions evaluated in the vacuum.
But Feynman's path integral has a new way to derive the appearance of this inverse operator as the propagator. It boils down to the Gaussian integral\[
\int \dd^n x\,\exp(-\vec x\cdot M\cdot \vec x) = \frac{\pi^{n/2}}{\sqrt{\det M}}
\] (the minus sign, with a positive definite matrix $M$, is needed for the integral to converge) but what is even more relevant is a modified version of this integral that has an extra linear term in the exponent aside from the bilinear piece:\[
\int \dd^n x\,\exp(-\vec x\cdot M\cdot \vec x+ \vec J\cdot \vec x) = \dots
\] This more complicated integral may be solved by "completing the square" i.e. by the substitution\[
\vec x = \vec x' + \frac{1}{2} M^{-1}\cdot \vec J.
\] With this substitution, after we expand everything, the $\vec x'\cdot \vec J$ "mixed terms" get canceled. As a replacement, we produce an extra term\[
+\frac{1}{4} \vec J\cdot M^{-1} \cdot \vec J
\] in the exponent; the coefficient $+1/4$ arises as $-1/4+1/2$. And because $M$ is the matrix that is generalized by our operator $P_{ij}$ discussed previously, we see how the inverse $P^{-1}_{ij}$ appears sandwiched in between two vectors $\vec J$.
The strategy to evaluate the Feynman's path integral is to imagine that this whole integral is a "perturbation" of a Gaussian integral we know how to calculate. We work with all the $V(\vec x,t)$ interaction terms as if they were general perturbations similar to the $\vec J$ vector above, and in this way, we reproduce all the vertices and all the propagators again.
Note that I have been even more sketchy here because this text mainly serves as a reminder that there exists a "philosophically different attitude" to the Feynman diagrams that one shouldn't overlook or dismiss just because he got used to other techniques and a different philosophy. If you want to calculate things, it's good to learn one method and ignore most of the others so that you're not distracted. But once you start to think about philosophy and generalizations, you shouldn't allow your – often random and idiosyncratic – habits to make you narrow-minded and to encourage you to overlook that there are completely different ways how to think about the same physics. These different ways to think about physics often lead to different kinds of "straightforward generalizations" that might look very unnatural or "difficult to invent" in other approaches.
In science, one must disentangle insights that are established – directly or indirectly supported by the experimental data – from arbitrary philosophical fads that you may be promoting just because you got used to them or for other not-quite-serious reasons. Of course, this broader point is the actual important punch line I am trying to convey by looking at a particular technical problem, namely methods to derive the Feynman rules.
Feynman's other method: "sum over histories" of merging and splitting particles
Once I have unmasked my real agenda, I will be even more sketchy when it comes to the third philosophical paradigm. You may "derive" the Feynman rules, at least qualitatively, from the "first-quantized approach" emulating non-relativistic quantum mechanics.
Again, in this derivation, we are "summing over histories". But they're not "histories of the fields $\phi_i(\vec x,t)$" as in the approach from the previous section – the original method Feynman exploited to derive the Feynman rules. Instead, we may sum over histories of ordinary mechanics, i.e. over histories of trajectories $\vec x(t)$ for different particles in the process.
In this approach, emulating non-relativistic quantum mechanics, the propagators $D(x,y)$ arise as the probability amplitude for a particle to get from the point $x$ of the spacetime to the point $y$. It just happens that the form of the propagators – which have been interpreted as matrix elements of the "inverse wave operator" $P^{-1}_{ij}$; and as two-point functions evaluated in the vacuum – may also be interpreted as the amplitude for a particle getting from one point to another.
Well, this works in some approximations and one needs to deal with antiparticles properly in order to restore the Lorentz invariance and causality (note that the sum over particles' trajectories still deals with trajectories that are superluminal almost everywhere, but the final result still obeys the restrictions and symmetries of relativity!) and it's tough. In the end, the "derivation" ends up being a heuristic one.
But morally speaking, it works. In this interpretation, a Feynman diagram encodes some histories of point-like particles that propagate in the spacetime and that merge or split at the vertices which correspond to spacetime points at which the total number of particles in the Universe may change (this step would be unusual in non-relativistic quantum mechanics, of course). The path integral over all the paths of the internal particles gives us the propagators; the vertices where the particles split or join must be accompanied by the right prefactors, index contractions, and other algebraic structures. But in some sense, it works.
It's this interpretation of the Feynman diagrams that has the most straightforward generalization in string theory. In string theory, we may imagine cylindrical or strip-like world sheets – histories of a single closed string or a single open string propagating in time – and they generalize the world lines. The path integral over all histories like that, between the initial closed/open string state and the final one, gives us a generalized Green's function for a single string.
And in string theory, we simply allow the topology of the world sheet to be nontrivial – to resemble the pants diagram or the genus $h$ surface with additional boundaries or crosscaps – and it's enough (as well as the only consistent way) to introduce interactions. While the interactions of point-like particles are given by vertices, "singular places" of the Feynman diagrams, and this singular character of the vertices is ultimately responsible for all the short-distance problems in quantum field theories, the world sheets for strings have no singular places at all. They're smooth manifolds – each open set is diffeomorphic to a subset of $\RR^2$, especially if you work in the Euclidean signature – but if you look at a manifold globally (and only if you do so), you may determine its topology and say whether some interactions have taken place.
So this third method of interpreting the Feynman diagrams – as the sum over histories of point-like particles in the spacetime that are allowed to split and join at the vertices – which was the "most heuristic one" and the "method that was least connected to exact formulae" encoding the mathematical expressions behind the Feynman diagrams actually becomes the most straightforward, the most rigorous way to derive the analogous amplitudes in string theory.
Take the world from another point of view, interview with RPF, 36 minutes, PBS NOVA 1973. At 0:40, he also mentions that brushing your teeth is a superstition. Given my recent appreciation of the yeasts that are unaffected by the regular toothpastes, I started to think RPF had a point about this issue, too.
If you got stuck with a particular "philosophy" how to derive the Feynman rules, e.g. with Dyson's series, it could be much harder – but not impossible – to derive the mathematical expressions for multiloop string diagrams. There have been many methods due to Richard Feynman mentioned in this text but once again, the most far-reaching philosophical lesson is one that may be attributed to Richard Feynman as well:
Perhaps Feynman's most unique and towering ability was his compulsive need to do things from scratch, work out everything from first principles, understand it inside out, backwards and forwards and from as many different angles as possible.
I took the sentence from a review of a book about Feynman. It's great if you decompose things to the smallest possible blocks, rediscover them from scratch, and try to look at the pieces out of which the theoretical structure is composed from as many angles as you can. New perspectives may give you new insights, new perceptions of a deeper understanding, and new opportunities to find new laws and generalize them in ways that others couldn't think of.
And that's the memo.
P.S.: BBC and Discovery's Science Channel plan to shoot a Feynman-centered historical drama about the Challenger tragedy.
Prayer for Marta ["Let the peace remain with this land. Let anger, envy, jealousy, fear and conflicts subside, let them subside. Now, when your lost control over your own affairs returns to you, the people, it will return to you..."], an iconic politically flavored 1968 song with which the singer, during the 1989 Velvet Revolution, helped restart the freedom that had been lost in 1968.
P.P.S.: Ms Marta Kubišová, a top Czech pop singer in the late 1960s (Youtube videos), refused to co-operate with the pro-occupation establishment after the 1968 Soviet invasion, which is why she became a harassed clerk in a vegetable shop rather than a pillar of the totalitarian entertainment industry like her ex-friend Ms Helena Vondráčková.
She just received Napoleon Bonaparte's Legion of Honor award, a well deserved one. Congratulations!
Posted by Luboš Motl
Other texts on similar topics: philosophy of science, string vacua and phenomenology
#### snail feedback (10)
reader Dilaton said...
Ah, such a nice discussion of Feynman diagrams comes right on cue for me, thanks Lumo :-)
I'll have to study this very carefully (and not only as an entertaining lunch time reading)
I'm very jealous of Feynman's cool truck, I want to have one too :-D !
Cheers
reader Lord Haw-Haw. said...
Another thought-provoking article Lumo, well done!
reader James Gallagher said...
This is by far the best blog post anywhere on the internet this week
reader James Gallagher said...
But people should be warned that the extension to string-like world sheets was never embraced by Feynman, and is really, perhaps, just something that allows the mathematically sophisticated a wild time, no doubt masturbating over the incredible structures that are constructible, but achieving zero advance in understanding nature.
reader Luboš Motl said...
Dear James, this is not how science is done. Whether something is "embraced" by an individual is totally irrelevant. What's relevant is whether it works as a description of Nature, and be sure that string theory does.
reader James Gallagher said...
No he wouldn't! Feynman liked cool ideas, not mathematical sophistication, he would definitely agree with me, especially when I explain my discrete (non constant) time evolution model which gets 3 "spatial" degrees of freedom as a period-three attracting point in a finite dimensional Hilbert space, simply by subtracting the past universe off the evolution equation.
Or, at least, he would say, hmm, that's an interesting way to get 3 big spatial dimensions but I wonder if we could look at other types of dynamical evolution in the Hilbert space.
reader Not Sandy said...
off topic:
what properties does a superstring possess before it's observed?
reader Luboš Motl said...
Hi, I am not quite sure I understand what you want to find out. In quantum mechanics, there are no objective properties or facts before they're observed. One may only probabilistically predict what may be observed.
What may be observed about superstrings? All observables we know from the real world because all of them are ultimately carried by superstrings. This includes energy, momentum, angular momentum, electric charge, color charge, all other charges, parity.
The specific stringy properties are the number of excitations of i-th transverse dimension (or its a-th fermionic counterpart) multiplied by the m-th Fourier mode (wave with m minima along the string) and many other things related to the detailed excitation of a string.
reader Dilaton said...
It so nicely brings together things I've read in my QFT demystified book and in the QFT nutshell, or heard from Lenny Susskind in his lectures or in a course I've taken at the University of Rostock which strangely enough was called "Quantum statistics" but was rather a mixture of statistical mechanics and QFT following Dirac's approach.
After a bad night, reading this elucidating and insightful article made my day much brighter than it would be otherwise ... :-D
Cheers
reader Anonymous said...
Maybe.
## SAT and Arithmetic Geometry
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
This is an agglomeration of several questions, linked by a single observation: SAT is equivalent to determining the existence of roots for a system of polynomial equations over $\mathbb{F}_2$ (note though that the system is represented in a non-trivial manner). The reason it is OK to consider more than one equation is that the conjunction of the conditions $f_i(x_1 ... x_n) = 0$ is equivalent to the single condition $\prod_i (f_i(x_1 ... x_n) + 1) + 1 = 0$.
• This reminds of the solution of Hilbert's 10th problem, namely that it is undecidable whether a system of polynomial equations over $\mathbb{Z}$ has roots. Is there a formal relation? Can we use the undecidability over $\mathbb{Z}$ to provide clues why the problem is hard over $\mathbb{F}_2$ (that is, $P \ne NP$)? What is known about decidability and complexity for other rings? In particular, what is known about complexity over $\mathbb{F}_p$ for p prime > 2?
• The system of polynomial equations defines an algebraic scheme. Is it possible to find algebro-geometric conditions on this scheme, s.t. something can be said about the complexity of SAT restricted to such schemes?
• The solutions of our system of polynomial equations are the fixed points of the Frobenius endomorphism on the corresponding variety over $\bar{\mathbb{F}}_2$. There is a variant of Lefschetz's fixed-point theorem which relates the existence of such points to $l$-adic cohomology. Can this be used to provide some insight on P vs. NP?
-
3
decidability over finite fields is trivial, as there are only finitely many choices! – Dima Pasechnik Dec 6 at 14:23
4
@Dima Pasechnik : This is obvious. I asked about complexity over finite fields and decidability for other (infinite) rings – Squark Dec 6 at 15:00
## 6 Answers
I'm hoping a real expert comes by to give a better answer, but these are things I've thought about a bit. Here are some thoughts:
(1) You should separate notions of computability (is there an algorithm?) and complexity (will the algorithm terminate in my lifetime?) Hilbert's 10th problem is a question about computability. Questions about SAT, solving equations over finite fields, or basically any computation in algebraic geometry over an algebraically closed field, such as computing Betti numbers and Frobenius eigenvalues¹, are all computable; the question is how complex they are.
(2) Given an instance of SAT, impose not only the various binary equations you are thinking of, but also the equations $x_i^2=x_i$ for all $i$. Now all of the solutions over $\mathbb{F}^{alg}_2$ live in $\mathbb{F}_2$. So, expanding the size of your problem linearly, you can make there be no difference between working in the boolean field and in its algebraic closure.
This transformation suggests that sophisticated notions like cohomology and Frobenius action will not be useful for the general question of counting solutions to equations over $\mathbb{F}_2$. If your family of equations over $\mathbb{F}_2$ contains the relations $x_i^2 = x_i$ then the solutions are just finitely many points all defined over the ground field, so all cohomology is trivial other than $H^0$ and the Frob action is trivial. So, for at least some equations over $\mathbb{F}_2$, the geometry doesn't give you any tools other than going back to solving the SAT problem.
(3) I would very much like to see someone define and study the notion of which problems are "ALGEBRAIC GEOMETRY-HARD". Ravi's work on Murphy's law (and the work of those who have followed him) has this flavor, but they don't actually talk about computational complexity.
(4) Your motivation may be in the opposite direction. That is to say, you may be hoping to start with a SAT problem and create equations over $\mathbb{F}_2$ for which the high power tools do help you. I can't prove this is impossible, but I'm skeptical.
That said, there is some work which points against my skepticism. Jesus De Loera and his collaborators (start here and, if you are interested, continue by reading all the papers on algebraic optimization here) have been taking standard hard instances of NP-complete graph theory problems, encoding them as algebraic geometry problems and beating pure graph theoretic algorithms. The results are entirely experimental, but the experimental data makes this method look surprisingly good.
Harm Derksen has tried a similar approach for Graph Isomorphism. He can prove that his methods are at worst only polynomially worse than the classic Weisfeiler-Lehman algorithm and, for the Cai-Furer-Immerman graphs, his methods are exponentially better.
One interesting feature of these papers is that they discard Groebner basis methods in favor of brute force. For example, to test whether $g_1$, $g_2$, ..., $g_n$ generate the unit ideal, rather than using Groebner bases, De Loera et al simply guess an upper bound for the degree of polynomials $f_i$ obeying $\sum f_i g_i =1$ and solve this as a family of linear equations in the coefficients of the $f_i$.
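For concreteness, here is a toy reconstruction of that linear-algebra trick in SymPy (a sketch of my own, not code from the cited papers; the degree bound and the example ideal are made up). The unknown coefficients of the $f_i$ enter $\sum f_i g_i = 1$ linearly, so matching coefficients gives a linear system:

```python
# Toy sketch of the brute-force Nullstellensatz certificate: guess a degree
# bound d, write each f_i with unknown coefficients, and solve
# sum_i f_i*g_i = 1 as a linear system in those coefficients.
import itertools
import sympy as sp

def certificate(gs, variables, d):
    """Try to find f_i of degree <= d with sum f_i*g_i == 1."""
    monos = [sp.prod(c) for r in range(d + 1)
             for c in itertools.combinations_with_replacement(variables, r)]
    unknowns, fs = [], []
    for i, _ in enumerate(gs):
        coeffs = sp.symbols(f'c{i}_0:{len(monos)}')
        unknowns.extend(coeffs)
        fs.append(sum(cf * mn for cf, mn in zip(coeffs, monos)))
    expr = sp.expand(sum(f * g for f, g in zip(fs, gs)) - 1)
    # every coefficient of expr (as a polynomial in `variables`) must vanish:
    eqs = sp.Poly(expr, *variables).coeffs()
    sol = sp.solve(eqs, unknowns, dict=True)
    return [f.subs(sol[0]) for f in fs] if sol else None

x, y = sp.symbols('x y')
# x and x+1 have no common root, so they generate the unit ideal:
print(certificate([x, x + 1], [x, y], 0))   # [-1, 1]: (-1)*x + 1*(x+1) = 1
```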
(5) There is a general principle, whose precise statement I don't know, that the difficulty of computing with a projective variety $X$ is controlled by its Castelnuovo-Mumford regularity. Googling on "Castelnuovo Mumford regularity complexity" turns up lots of references where one could start digging, though I didn't find a statement simple enough to quote.
My understanding (but I am not an expert) is that varieties created by encoding combinatorial problems in algebraic geometry tend to have very high CM regularity.
¹ Frobenius eigenvalues can be described as the eigenvalues of geometric Frob acting on a variety over $\mathbb{F}^{alg}_p$, so you can define them without talking about varieties over non-algebraically closed fields, although you'll certainly miss some of the motivation.
-
1
Vakil's work actually uses Mnev's Universality Theorem, which in a way has a computational flavour. – Dima Pasechnik Dec 6 at 17:32
Thanks a lot for your answer! I want to point out that I realize very well the distinction between decidability and complexity. However, my naive intuition tells me that a problem which is close to an undecidable problem must have high complexity. For example the halting problem is undecidable whereas deciding whether a program halts in k steps is EXP-complete. Deciding whether a program halts on every input is even higher in the undecidability hierarchy (I think) whereas deciding whether a program halts on every input within k steps is NEXP-complete – Squark Dec 6 at 17:58
Supporting the skepticism in (4), I once saw a talk by Anders Bjoerner, where he mentioned a "program" to prove P≠NP. He said the first step is to find a good description of a Boolean function as a variety. According to him, this would be the most difficult thing to come up with (the rest being étale cohomology...) – Camilo Sarmiento Dec 7 at 17:30
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
As for the first question, solvability of a system of polynomial equations is NP-complete over every finite field, and NP-hard for every integral domain. The reduction was already mentioned in David Speyer's answer: add $x_i^2-x_i$ to your system for every variable $x_i$.
The exact complexity of solvability over infinite domains is not so easy to answer. To begin with, it may significantly depend on the representation of the polynomial and its coefficients.
• Solvability over $\mathbb Z$ is undecidable ($\Sigma^0_1$-complete). This is the MRDP theorem. The decidability of solvability over $\mathbb Q$ is an open problem.
• Solvability of polynomials with rational coefficients over $\mathbb R$ or $\mathbb C$ is in PSPACE, and it is not known whether one can do better. Assuming the generalized Riemann hypothesis, solvability of rational polynomials over $\mathbb C$ is in AM, and therefore in the second level of the polynomial hierarchy. Note that AM = NP under some plausible assumptions from circuit complexity.
• Solvability over the algebraic closure of a finite field is also decidable, though I do not know offhand what the complexity bounds are (it should again be something in the vicinity of EXP or PSPACE). The keyword is “effective Nullstellensatz”.
-
It seems you have the same answer as me while I was typing. – Benjamin Steinberg Dec 6 at 16:23
Yes, we were writing up the answers simultaneously. Happens all the time. – Emil Jeřábek Dec 6 at 16:33
Thanks a lot for your answer! – Squark Dec 6 at 18:00
I'm going to address just one aspect of your question, which has been restated in some of the answers and comments, about whether there is any formal connection between undecidability and computational complexity.
This is certainly a very tempting idea, and you could argue that one of the main reasons to conjecture that the polynomial hierarchy does not collapse (this is a standard generalization of the conjecture that P≠NP) is that the infinitary analogue of the polynomial hierarchy, namely the arithmetical hierarchy, is known not to collapse. Of course we know, for example from the Baker–Gill–Solovay theorem, that we can't naively carry over all intuitions from computability theory over to complexity theory, but the analogy remains in the back of the minds of many people working in this subject.
Perhaps closer to your question are two papers by Michael Freedman, Limit, logic, and computation and K-SAT on groups and undecidability. The latter paper in particular notes that 2-SAT is polytime solvable and 3-SAT is NP-complete, and constructs infinitary analogues of these two problems that are respectively decidable and undecidable. Your observation about the analogy between the MRDP theorem and the hardness of solving systems of equations over $\mathbb{F}_2$ could be thought of as being in the same vein. Unfortunately, while intriguing, so far these analogies don't seem to have yielded any significant insights into the P≠NP problem.
-
Which field you work over is not important for encoding SAT, because as pointed out by David Speyer, you can put in the equation $x^2=x$ to make $x$ take on only the values $0,1$. You can encode $x\wedge y$ by $xy$ and $x\vee y$ by $x+y-xy$. So determining if a system has a solution is at least NP-hard. My understanding from surveying the computer science literature some time back is that over $\mathbb C$, it is known that determining if a system has a solution is in PSPACE unconditionally and is in a complexity class called AM, which is just above NP, if one assumes the generalized Riemann Hypothesis. See Pascal Koiran, Hilbert's Nullstellensatz is in the polynomial hierarchy. J. Complexity 12 (1996), no. 4, 273–286. I don't know if the results have been improved since. This was the most recent I had found.
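For concreteness, a small brute-force sketch of this encoding (toy code; note that over $\mathbb{F}_2$ we have $-1=1$, so $x\vee y$ becomes $x+y+xy$):

```python
# Encode a CNF formula as polynomials over F_2 and brute-force solvability.
from itertools import product

def clause_poly(clause, assignment):
    """Clause polynomial over F_2: evaluates to 0 iff the clause is satisfied."""
    val = 0
    for lit in clause:                  # lit: +i means x_i, -i means NOT x_i
        v = assignment[abs(lit) - 1]
        v = (v ^ 1) if lit < 0 else v   # negation is 1 + x over F_2
        val = (val + v + val * v) % 2   # OR: x + y + x*y  (mod 2)
    return (val + 1) % 2                # zero exactly when the clause holds

def solvable(clauses, n):
    """Does the system {clause_poly = 0} have a common root in F_2^n?"""
    return any(all(clause_poly(c, a) == 0 for c in clauses)
               for a in product((0, 1), repeat=n))

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(solvable([[1, 2], [-1, 3], [-2, -3]], 3))   # True, e.g. x = (1, 0, 1)
```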
-
Thank you for your answer! – Squark Dec 6 at 18:03
Silly notational point: If you literally use $x+y-xy$, your formula could get exponentially long. One could solve this by introducing new variables but I think it's more elegant to just write $1-(1-x)(1-y)$. – Will Sawin Dec 7 at 17:40
@Will: If you encode 3SAT instead of SAT, each clause will have a constant-size translation, no matter how complex the translation of $\lor$ is. – Emil Jeřábek Dec 10 at 11:35
David mentioned in his answer results where Groebner bases are not used. However, Groebner bases for polynomial systems aren't the best available tools theoretically, either. E.g. the best theoretical results for complexity of solving 0-dimensional systems follow a route of symbolically deforming the system into one for which the Groebner basis is trivial to find, and then using Stickelberger Lemma to find roots of the deformed system, and finally taking the limit. Details of this are described e.g. in this book.
Groebner bases (in characteristic 0) are also usually beaten by the machinery of semidefinite programming relaxations. Perhaps not coincidentally, a lot of recent work in computational complexity (in particular, approximation algorithms) uses semidefinite programming relaxations, too.
There is also a line of research where the complexity is investigated w.r.t. arithmetic operations in $\mathbb{C}$ (or $\mathbb{R}$) having unit cost.
-
Thank you for your answer! – Squark Dec 6 at 18:03
More unexpected to me is the reduction from SAT to a linear system over $\mathbb{N}_0$ (the nonnegative integers, including $0$).
Monotone one-in-three SAT is NP-complete; it is a conjunction of clauses, each clause consisting of three boolean variables $(x,y,z)$, and to satisfy a clause exactly one of the three must be true.
The reduction to a linear system over $\mathbb{N}_0$: for each clause, add the linear equation $x + y + z = 1$.
Solving the resulting system over $\mathbb{Z}$ is easy, but over $\mathbb{N}_0$ is NP-complete.
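A toy instance (made up here) showing the reduction in action — the equation $x+y+z=1$ over $\mathbb{N}_0$ already forces each of its variables into $\{0,1\}$, so one can verify solutions by searching bit vectors:

```python
# Each monotone 1-in-3 clause (x, y, z) becomes x + y + z = 1; over N_0 this
# forces x, y, z into {0, 1}, so a search over bit vectors suffices here.
from itertools import product

clauses = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]   # variable indices per clause
n = 4
sols = [a for a in product((0, 1), repeat=n)
        if all(a[i] + a[j] + a[k] == 1 for (i, j, k) in clauses)]
print(sols)   # [(0, 0, 1, 0)]: every clause sums to exactly 1
```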
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457539319992065, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/115953/picx-l0-in-terms-of-h-etx-mu-ln
|
## $Pic(X)/l=0$ in terms of $H^*_{et}(X,\mu_{l^n})$?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
I would like to calculate Picard groups of certain schemes over fields; I'm mostly interested in the question whether $Pic(X)$ is infinitely $l$-divisible, i.e. whether $Pic(X)/l=0$, $l$ is a prime distinct from the base field characteristic (the latter could be $0$). I would like to have a characterization of this vanishing in terms of etale cohomology of $l$-torsion sheaves.
Certainly, $Pic(X)\cong H^1_{et}(X,G_m)$; yet, via the Kummer sequence, this only yields an injection of $Pic(X)/l$ into $H^2(X,\mu_l)$, and I don't know how to control the cokernel.
Upd. So, is there a general method that expresses $Pic(X)/l$ in terms of all of $H^i(X,\mu_{l^n})$ (for $i,n>0$)? What can be said here if $X$ is a variety over an algebraically closed field?
-
## 2 Answers
$Pic(X)$ mod $l$ injects into $H^2(X,\mu_l)$ with cokernel the group of elements of order $l$ in the Brauer group of $X$. Depending on your $X$, the Brauer group may be known, or it may be as mysterious as a Tate-Shafarevich group. For example, for a complete smooth surface over a finite field, the Brauer group is conjectured to be finite, but this is not known (it's been proved to be equivalent to the Tate conjecture for the surface).
-
Thank you for this information! See the Update. – Mikhail Bondarko Dec 10 at 8:08
### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
In characteristic zero, Hodge theory is the best way to approach this. Hodge theory would express $H^2(X,\mu_l)=H^2(X,\mathbb Z/l)$ in terms of $H^2(X,\mathbb Z)$. You then use the fact that the classes in $H^2(X,\mathbb Z)$ that come from holomorphic line bundles are exactly the Hodge classes, so you try to find the Hodge classes. This works best when $H^2(X,\mathbb Z)$ is torsion-free.
I think that in general $Pic(X)$ being $l$-divisible is rare; e.g. it never happens for a projective variety. One case where it is $l$-divisible is affine curves, where $H^2(X,\mu_l)$ vanishes because $H^2$ of any locally constant sheaf on an affine curve is zero.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413444995880127, "perplexity_flag": "head"}
|
http://regularize.wordpress.com/tag/morozov/
|
# regularize
Trying to keep track of what I stumble upon
May 4, 2011
## Ivanov regularization
Posted by Dirk under Math, Regularization | Tags: ill-posed problems, ivanov, morozov, regularization, tikhonov |
[6] Comments
Some time ago I picked up the phrase Ivanov regularization. Starting with an operator $A:X\to Y$ between two Banach spaces (say) one encounters the problem of instability of the solution of $Ax=y$ if $A$ has non-closed range. One dominant tool to regularize the solution is called Tikhonov regularization and consists of minimizing the functional $\|Ax - y^\delta\|_Y^p + \alpha \|x\|_X^q$. The meaning behind these terms is as follows: The term $\|Ax -y^\delta \|_Y^p$ is often called discrepancy and it should not be too large, to guarantee that the “solution” somehow explains the data. The term $\|x\|_X^q$ is often called regularization functional and shall not be too large to have some meaningful notion of “solution”. The parameter $\alpha>0$ is called regularization parameter and allows weighting between the discrepancy and regularization.
For the case of Hilbert space one typically chooses $p=q=2$ and gets a functional for which the minimizer is given more or less explicitly as
$x_\alpha = (A^*A + \alpha I)^{-1} A^* y^\delta$.
The existence of this explicit solution seems to be one of the main reasons for the broad usage of Tikhonov regularization in the Hilbert space setting.
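As a small illustration, the closed-form solution in NumPy (the synthetic ill-conditioned operator, the noise level and the values of $\alpha$ are all made-up toy choices); the reconstruction error varies over orders of magnitude with $\alpha$, which is exactly the parameter-choice problem:

```python
# Toy illustration of x_alpha = (A^*A + alpha I)^{-1} A^* y_delta.
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
W, _ = np.linalg.qr(rng.standard_normal((30, 30)))
s = 2.0 ** -np.arange(30)                     # rapidly decaying singular values
A = (U[:, :30] * s) @ W.T                     # synthetic ill-conditioned operator

x_true = rng.standard_normal(30)
y_delta = A @ x_true + 1e-3 * rng.standard_normal(50)

for alpha in (1e-2, 1e-6, 1e-12):
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(30), A.T @ y_delta)
    print(alpha, np.linalg.norm(x_alpha - x_true))
# the error changes drastically with alpha: choosing alpha is the hard part
```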
Another related approach is sometimes called residual method, however, I would prefer the term Morozov regularization. Here one again balances the terms “discrepancy” and “regularization” but in a different way: One solves
$\min \|x\|_X\ \text{s.t.}\ \|Ax-y^\delta\|_Y\leq \delta.$
That is, one tries to find an $x$ with minimal norm which explains the data $y^\delta$ up to an accuracy $\delta$. The idea is that $\delta$ reflects the so-called noise level, i.e. an estimate of the error which is made during the measurement of $y$. One advantage of Morozov regularization over Tikhonov regularization is that the meaning of the parameter $\delta>0$ is much clearer than the meaning of $\alpha>0$. However, there is no closed form solution for Morozov regularization.
Ivanov regularization is yet another method: solve
$\min \|Ax-y^\delta\|_Y\ \text{s.t.}\ \|x\|_X \leq \tau.$
Here one could say, that one wants to have the smallest discrepancy among all $x$ which are not too “rough”.
Ivanov regularization in this form does not have too many appealing properties: The parameter $\tau>0$ does not seem to have a proper motivation and moreover, there is again no closed form solution.
However, recently the focus of variational regularization (as all these method may be called) has shifted from using norms, to the use of more general functionals. One even considers Tikhonov in an abstract form as minimizing
$S(Ax,y^\delta) + \alpha R(x)$
with a “general” similarity measure $S$ and a general regularization term $R$, see e.g. the dissertation of Christiane Pöschl (which can be found here, thanks Christiane) or the works of Jens Flemming. Prominent examples for the similarity measure are of course norms of differences or the Kullback-Leibler divergence or the Itakura-Saito divergence which are both treated in this paper. For the regularization term one uses norms and semi-norms in various spaces, e.g. Sobolev (semi-)norms, Besov (semi-)norms, the total variation seminorm or $\ell^p$ norms.
In all these cases, the advantage of Tikhonov regularization of having a closed form solution is not there anymore. Then, the most natural choice would be, in my opinion, Morozov regularization, because one may use the noise level directly as a parameter. However, from a practical point of view one also should care about the problem of calculating the minimizer of the respective problems. Here, I think that Ivanov regularization is important again: Often the similarity measure $S$ is somehow smooth but the regularization term $R$ is nonsmooth (e.g. for total variation regularization or sparse regularization with $\ell^p$-penalty). Hence, both Tikhonov and Morozov regularization have a nonsmooth objective function. Somehow, Tikhonov regularization is still a bit easier, since the minimization is unconstrained. Morozov regularization has a constraint which is usually quite difficult to handle. E.g. it is usually difficult (might it even be ill-posed?) to project onto the set defined by $S(Ax,y^\delta)\leq \delta$. Ivanov regularization has a smooth objective functional (at least if the similarity measure is smooth) and a constraint which is usually somehow simple (i.e. projections are not too difficult to obtain).
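To illustrate that last point, here is a minimal projected-gradient sketch for Ivanov regularization (toy random data of mine; the $\ell^2$-ball is chosen precisely because projecting onto it is trivial):

```python
# Ivanov regularization by projected gradient: the objective ||Ax - y||^2 is
# smooth and the constraint set {||x|| <= tau} has a one-line projection.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 60))
y = rng.standard_normal(40)
tau = 3.0

def project(x, tau):                     # projection onto the l2 ball
    nx = np.linalg.norm(x)
    return x if nx <= tau else x * (tau / nx)

x = np.zeros(60)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the gradient of (1/2)||.||^2
for _ in range(2000):
    x = project(x - step * A.T @ (A @ x - y), tau)

print(np.linalg.norm(A @ x - y), np.linalg.norm(x))   # ||x|| stays <= tau
```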
Now, I found that all three methods, Ivanov, Tikhonov and Morozov regularization, are treated in the book “Theory of linear ill-posed problems and its applications” by V. K. Ivanov, V. V. Vasin and Vitaliĭ Pavlovich Tanana in sections 3.2, 3.3 and 3.4, respectively. Ivanov regularization goes under the name “method of quasi solutions” (section 3.2) and Morozov regularization is called “method of residual” (section 3.4). Well, I think I should read these sections a bit closer now…
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 28, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457788467407227, "perplexity_flag": "head"}
|
http://cstheory.stackexchange.com/questions/16087/inductive-definition-of-ectl-how-are-recursive-formulas-forbidden
|
# Inductive definition of ECTL*: how are recursive formulas forbidden?
In [1], the extended computation tree logic ECTL* is inductively defined as the propositional formulas over all $E(A(F_1,\dots,F_n))$, where $E$ is the existential path quantifier and $A$ some Büchi automaton whose alphabet is $2^{\{F_1,\dots,F_n\}}$ for some ECTL* formulas $F_1,\dots,F_n$.
Obviously $E(A(F_1))$ with $F_1=E(A(F_1))$ is not well-defined for some Büchi automata. So what exactly does inductively defined mean in this case?
Maybe it means that no $F_i$ may use any of $F_1,\dots,F_n$ in its Büchi automaton, although the automata may occur recursively if the alphabet is correctly exchanged. Or may, e.g., $F_1$ use $F_2,\dots,F_n$, as long as there is no mutual recursion between the $F_i$?
Update: Does the following define ECTL*'s syntax?
$\langle\text{sf}\rangle::=\; \langle\text{atomic prop}\rangle\;\big\vert\; \langle\text{sf}\rangle \vee \langle\text{sf}\rangle\;\big\vert \neg\langle\text{sf}\rangle\;\big\vert\; \text{E }\langle \text{Buechi}\;\mathcal{A}\rangle (\langle\text{sf}\rangle,\dots,\langle\text{sf}\rangle);$
(as opposed to $\langle\text{sf}\rangle::=\; \langle\text{atomic prop}\rangle\;\big\vert\; \langle\text{sf}\rangle \vee \langle\text{sf}\rangle\;\big\vert \neg\langle\text{sf}\rangle\;\big\vert\; \text{E }\langle \text{Buechi}\;\mathcal{A}\rangle (F_1,\dots,F_n);$ where the $F_i$ are ECTL* formulas).
### In detail,
[1] says
Formulas of ECTL* are inductively defined as usual and built from propositional variables using boolean connectives and containing the formula $E(A)$ for each Büchi automaton $A$ over an alphabet $2^{\{F_1,..,F_n\}}$ where the $F_i$ are ECTL* formulas.
[1] CTL* and ECTL* as fragments of the modal $\mu$-calculus by Mads Dam
-
1
Probably the first you say, i.e., that each $F_i$ is defined by some BNF and thus cannot refer to any other $F_j$. – Pål GD Jan 14 at 15:22
Do you mean CTL* or ECTL* syntax? – Vijay D Jan 15 at 19:52
Thanks, corrected it to ECTL*. – DaveBall aka user750378 Jan 15 at 21:46
I would choose the first definition you give. The two are equivalent to me, but if your confusion is notational, I would stick with the first one. – Vijay D Jan 15 at 22:28
## 2 Answers
I think the key problem here is not understanding how inductive definitions of syntax work. Here are three approaches to understanding what a BNF grammar means.
Consider a simple grammar:
$$t ::= \mathtt{true} ~~|~~ \mathtt{false} ~~|~~ 0 ~~|~~ \mathtt{succ}\ t ~~|~~ \mathtt{if}\ t\ \mathtt{then}\ t\ \mathtt{else}\ t$$
Following Pierce's Types and Programming Languages, pages 26-27, the set of terms defined by this grammar can be given in the following three equivalent ways.
Inductively
The set of terms is the smallest set $\mathcal{T}$ such that:
1. $\{\mathtt{true},\mathtt{false},0\}\subseteq\mathcal{T}$.
2. if $t_1\in\mathcal{T}$, then $\mathtt{succ}\ t_1\in\mathcal{T}$.
3. if $t_1,t_2,t_3\in\mathcal{T}$, then $\mathtt{if}\ t_1\ \mathtt{then}\ t_2\ \mathtt{else}\ t_3\in\mathcal{T}$.
By Inference Rules
The set of terms is defined by the following rules:
$$\mathtt{true}\in\mathcal{T} \qquad \mathtt{false}\in\mathcal{T} \qquad 0\in\mathcal{T}$$
$$\dfrac{t_1\in\mathcal{T}}{\mathtt{succ}\ t_1\in\mathcal{T}} \qquad \dfrac{t_1\in\mathcal{T}\quad t_2\in\mathcal{T}\quad t_3\in\mathcal{T}}{\mathtt{if}\ t_1\ \mathtt{then}\ t_2\ \mathtt{else}\ t_3\in\mathcal{T}}$$
Concretely
For each natural number $i$, define set $S_i$ as follows:
1. $S_0=\emptyset$.
2. $S_{i+1}=\{\mathtt{true},\mathtt{false},0\}\cup\{\mathtt{succ}\ t_1\mid t_1\in S_i\} \cup \{\mathtt{if}\ t_1\ \mathtt{then}\ t_2\ \mathtt{else}\ t_3 \mid t_1,t_2,t_3\in S_i\}$
Finally, $S=\bigcup_i S_i$ is the set of all terms.
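For what it's worth, the concrete construction runs directly as a small program (a sketch of mine, with terms represented as plain strings):

```python
# The "concrete" construction: S_{i+1} is built from S_i and the set of all
# terms is the union of the S_i.
def step(s):
    nxt = {"true", "false", "0"}
    nxt |= {f"succ {t}" for t in s}
    nxt |= {f"if {t1} then {t2} else {t3}"
            for t1 in s for t2 in s for t3 in s}
    return nxt

S = set()
for i in range(3):
    S = step(S)
    print(f"|S_{i+1}| = {len(S)}")   # 3, then 33, then 35973
```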
-
Thanks a lot - such a clean answer (+1). And it does say SMALLEST set :) – DaveBall aka user750378 Jan 17 at 18:22
Suppose you state: $t ::= 0~|~ 0 w$ with $w \in \mathcal{L}$, the language currently being defined. Is this still a BNF? Or does only the inductive way apply? If only the inductive way applies, you do need to state that $\mathcal{L}$ must be the smallest such set. That's what the discussion with Vijay D was about... – DaveBall aka user750378 Jan 17 at 18:31
2
You cannot refer to the language currently being defined like that in BNF. Of course, you can refer to the top non-terminal instead of $w$ to achieve the same effect. I do need to say the smallest set in the inductive definition above; that's what makes it inductive. If I allowed any other set, then there could be other junk in $\mathcal{T}$. – Dave Clarke♦ Jan 17 at 19:06
I'm jealous. That's a beautifully clear answer that I could not produce. – Vijay D Jan 17 at 23:39
1
@VijayD: The credit should go to Pierce. I just knew were to find this description. – Dave Clarke♦ Jan 18 at 5:15
> Obviously $E(A(F_1))$ with $F_1=E(A(F_1))$ is not well-defined for some Büchi automata. So what does "inductively defined" exactly mean in this case?

Such a formula cannot arise if you build ECTL* inductively. This means, in standard academic parlance, we would present a syntax definition of the form below.
Let $Prop$ be a set of propositions and $p$ range over $Prop$. If $\varphi_1, \ldots, \varphi_n$ are formulae of ECTL* let $\mathcal{A}[\varphi_1, \ldots, \varphi_n]$ denote an automaton $\mathcal{A}$ over the alphabet $\mathcal{P}(\{\varphi_1, \ldots, \varphi_n\})$ of subsets of the $n$ formulae shown.
$\varphi ::= p ~\mid~ \varphi \land \varphi ~\mid~ \neg \varphi ~\mid~ \mathsf{E}(\mathcal{A}[\varphi_1, \ldots, \varphi_n])$
I believe there is some confusion in how you may be reading the presentation above. In academic terms this qualifies as BNF. I see that the Wikipedia page gives a slightly different presentation. However, the definition above is equivalent to the BNF grammar you give below. It is not a "relaxed BNF" to use your terminology. It is just an inductive definition where some BNF is used where convenient.
BNF: $\langle\text{sf}\rangle::=\; \langle\text{atomic prop}\rangle\;\big\vert\; \langle\text{sf}\rangle \vee \langle\text{sf}\rangle\;\big\vert \neg\langle\text{sf}\rangle\;\big\vert\; \text{E }\langle \text{Buechi}\;\mathcal{A}\rangle (\langle\text{sf}\rangle,\dots,\langle\text{sf}\rangle);$
Academic writing mixes English and BNF when we make inductive definitions because it is convenient. To see the need for convenience, consider your grammar. It does not inform the reader what $\langle\mathrm{atomic~prop}\rangle$ and what $\langle \mathrm{sf}\rangle$ are. You have chosen an intuitive naming convention, so one can guess the first represents atomic propositions. But in general, we want to have short symbols in our papers, so we prefer a one-time cost of verbose English, which is why we write something like "where $\varphi$ is an ECTL* formula". This text does not in any way change the language being defined.
To your specific question: Your formula $F$ with $F = E(A(F))$ is not a formula of ECTL*. This type of formula cannot arise as the result of an inductive definition of the form above. This impossibility has nothing to do with the difference in presentation between BNF as you write it and inductive definitions in papers. To derive your formula, you need some atomic propositions and a sequence of syntactic compositions to build up the more complex formula.
Now about fixed points. There is some confusion here too. BNF grammars are specific ways to present certain inductive definitions. They are not the only way and they do not cover all inductive definitions. Inductive definitions are a more general class and can be presented in many ways. Least fixed points are a strict generalisation of inductive definitions. Below I will only give an example of how both the BNF grammar above and the inductive definition I gave define a least fixed point. I emphasise that both definitions give rise to the same least fixed point.
Let's assume that $Prop = \{q,r,s\}$ is a set of atomic propositions and $p$ ranges over $Prop$. Consider a lattice $L$ which contains all sets of sequences of symbols appearing in the definitions above. The formulae of ECTL* are one specific element of this lattice. Consider a function $G:L \to L$, which is generated by the definitions above.
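Spelled out (this explicit formula is my reconstruction from the formation rules above, so the exact presentation is only one possibility), $G$ sends a set $S$ in the lattice to

$$G(S) \;=\; Prop \;\cup\; \{\varphi_1 \land \varphi_2 \mid \varphi_1,\varphi_2 \in S\} \;\cup\; \{\neg\varphi \mid \varphi \in S\} \;\cup\; \{\mathsf{E}(\mathcal{A}[\varphi_1,\ldots,\varphi_n]) \mid n \geq 1,\ \varphi_1,\ldots,\varphi_n \in S\}.$$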
$G(\emptyset) = Props$, which accounts for $q$, $r$ and $s$ being formulae, or for the instantiation of $\langle \mathrm{atomic~prop}\rangle$ in BNF.
$G(G(\emptyset)) = G(Props)$, which is the set containing the elements $q \land r$, $q \land s$, $r \land s$, $\neg q$, $\neg s$, $\neg r$, $E(\mathcal{A}(\{q\}))$, $E(\mathcal{A}(\{r\}))$, $E(\mathcal{A}(\{s\}))$, $E(\mathcal{A}(\{q,r\}))$, $E(\mathcal{A}(\{q,s\}))$, $E(\mathcal{A}(\{r,s\}))$, $E(\mathcal{A}(\{q,r,s\}))$. This is equivalent to applying the formation rules $\langle \mathrm{sf} \rangle \lor \langle \mathrm{sf} \rangle$, $\neg \langle \mathrm{sf} \rangle$ and the others with $\langle \mathrm{sf} \rangle$ being replaced by $\langle \mathrm{atomic~prop}\rangle$. Alternatively, this is equivalent to one unwinding of the inductive definition where we say, if $\varphi_1$ and $\varphi_2$ are atomic propositions, then $\varphi_1 \land \varphi_2$, $\neg \varphi_1$, $E(\mathcal{A}(\{\varphi_1, \varphi_2\}))$, etc. are formulae.
The set $G(G(G(\emptyset)))$ contains $q \land r\land s$, $r \land \neg s$, $\neg (q \land s)$, $\neg E(\mathcal{A}(\{r\}))$, $E(\mathcal{A}(\{q \land s, \neg r\}))$ and so on.
The set of ECTL* formulae is the least fixed point of the function $G$ above. Note that I have not explicitly defined it, but this is possible to do and might be an insightful exercise to work out the details. Equivalently, you can obtain exactly the same fixed point by the BNF grammar or the inductive definition I give.
The kind of formula you suggest will occur in a fixed point that is not a least fixed point. I emphasise that both the BNF grammar and the inductive definition I give generate the same function $G$, hence they have the same least fixed point and the same set of fixed points. So, if the inductive definition allows for a certain formula, your BNF must allow for that formula, and vice versa.
-
Thanks for the answer (+1). I've updated my question. Is my syntax definition correct? Is it a direct consequence of "inductively defined"? Because what confuses me in Mads Dam's and your text is "let f_i be formulae". Do you have any reference where ECTL*'s syntax is defined rigorously? Thx. – DaveBall aka user750378 Jan 15 at 10:59
Thanks a lot for your thorough update. It explains the BNF I have given in my update, doesn't it? – DaveBall aka user750378 Jan 15 at 22:27
I am having difficulty understanding your comment. My expansion is for the grammar I gave. Maybe I am used to the way BNF is used in the academic literature, while you are reading it slightly differently. How can the grammar I gave admit the formula you wrote? – Vijay D Jan 15 at 22:31
Let $\varphi_1:=E(A(\varphi_1))$. Then $\varphi_1$ is a fixpoint of your BNF, thus you could argue it is in ECTL*. This problem does not arise in my definition where all syntactic elements are created by the BNF, without using $F_i \in$ ECTL*. – DaveBall aka user750378 Jan 15 at 22:37
Can you show me other academic literature where a language $L$ is defined by a BNF that uses $F_i \in L$ (as opposed to non-terminal symbols creating that syntactic element)? – DaveBall aka user750378 Jan 15 at 22:40
http://mathoverflow.net/questions/38759?sort=newest
## Why is the symmetric monoidal structure on invertible modules strict?
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $N$ be an object in a symmetric monoidal category. Then the braid map $N\otimes N\to N\otimes N$ is almost never the identity, and this is the obstruction to making a symmetric monoidal category into a "strict" symmetric monoidal category, in which the functor $\otimes$ is commutative on the nose. For example, when $\otimes$ is just the categorical product in the category of sets (or any other concrete category), this is the map $(x,y)\mapsto(y,x)$ which is almost never the identity.
However, one case in which the braid map is the identity is in the category of invertible modules over a commutative ring $R$, as a full subcategory of all modules. Indeed, the braid map $R\otimes R\to R\otimes R$ is the identity, and any invertible module $I$ is locally isomorphic to $R$, so the braid map $I\otimes I\to I\otimes I$ is locally equal to the identity and hence equal to the identity.
What I'd like to have is a more conceptual explanation for why the braid map is the identity for invertible modules, which does not use the fact that they are locally free (indeed, I'm interested in this because I want to use this to prove invertible modules are locally free in a more general setting). Unfortunately, the proof cannot just be abstract nonsense involving invertibility--for example, if we work with graded modules over a commutative ring instead of ordinary modules and use the usual sign conventions, then the braid map will be -1 rather than 1 on invertible modules concentrated in odd degrees. Does anyone know of a better explanation, or know a reason I shouldn't expect there to be one?
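For reference, the sign in the graded example is the standard Koszul convention: on homogeneous elements the braid map is $$\tau(x\otimes y) = (-1)^{|x||y|}\, y\otimes x,$$ so on an invertible graded module concentrated in odd degree, $\tau$ acts by $-1$, as claimed above.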
EDIT: Inspired by Charles's answer, here's a closely related question. I'm really interested in invertible objects in the derived category, and in the derived category dualizable objects can be represented by finite chain complexes of finite-rank projective modules. Over a local ring, then, all dualizable objects have an Euler characteristic which is an integer (as opposed to an arbitrary element of $R$). Since as Charles noted, the braid map of an invertible object can be identified with its Euler characteristic as a dualizable object, this implies that the braid map of any invertible object in the derived category of a local ring is $\pm 1$ (and so if you're willing to suspend your objects if necessary, you can assume it is 1).
Thus I would be satisfied with a conceptual answer to the following question: why is the Euler characteristic of a dualizable object in the derived category of a local ring always an integer? (It may be more natural to not assume that the ring is local, in which case you should replace "integer" with "integral linear combination of idempotents".)
-
## 1 Answer
This is an interesting question. I.e., I have no idea how to answer it ...
Here's a little bit of context to put this in. So $C$ is a symmetric monoidal category, with unit object $R$. Let $\mathrm{Pic}(C)$ be the collection of isomorphism classes of invertible objects in $C$; it's an abelian group using $\otimes$.
There's a group homomorphism $$\eta : \mathrm{Pic}(C) \to \text{($2$-torsion subgroup of $\mathrm{Aut}(R)$).}$$ This sends an invertible object $I$ to its Euler characteristic.
Here's a different construction of $\eta$ which I find easier to understand. If $I$ is an invertible object, there is a canonical isomorphism $\mathrm{Aut}(I)\approx \mathrm{Aut}(R)$; to construct it, choose an isomorphism $I\otimes J\approx R$, so that an automorphism $f:I\to I$ gets sent to an automorphism $R\approx I\otimes J \xrightarrow{f\otimes 1} I\otimes J\approx R$. Now $\eta (I)$ is just the image of the braid map $(\tau: I\otimes I \to I\otimes I)\in \mathrm{Aut}(I\otimes I)\approx \mathrm{Aut}(R)$. This definition makes it clear that $(\eta (I))^2=1$.
So you're asking why $\eta$ has trivial image when $C$ is a category of modules over a commutative ring. As you point out, in graded modules the image is $\{\pm1\}$. I got nothing here ...
-
What is the Euler characteristic of an invertible object? – Martin Brandenburg Sep 15 2010 at 8:07
Martin: if an object X in C has a dual Y, then the "Euler characteristic" is a map $R\to R$; the definition is basically the same as one way to define "trace of the identity map on a vector space". "Dual" means you have maps $\eta:R\to X\otimes Y$ and $\epsilon:Y\otimes X\to R$ which encode the idea that "$Y\otimes{-}$ and $X\otimes{-}$" are adjoint functors on C; then $\chi(X)$ is $R\to X\otimes Y\approx Y\otimes X\to R$, where the map in the middle is the "braid map". – Charles Rezk Sep 15 2010 at 15:05
Hmmm...the idea of identifying $\eta$ with the Euler characteristic seems promising (though there's a bit of a diagram to chase to verify that they really are the same--the subtlety is that $\eta$ is defined in terms of the unit map and its inverse, while the Euler characteristic is defined in terms of the unit and the counit, which is NOT the same). This lets you really recast the question: why is the Euler characteristic of any dualizable module (locally) a nonnegative integer, rather than being an arbitrary element of $R$? – Eric Wofsey Sep 15 2010 at 19:44
It is a curious diagram chase. I do it this way: if $\chi: R\to R$ is the map which represents the euler characteristic of $X$, then show that $1_X\otimes \chi \otimes 1_X : X\otimes X\to X\otimes X$ is equal to the braid map. – Charles Rezk Sep 15 2010 at 20:18
Hmmm, correction: it is not true that dualizable modules over local rings have Euler characteristics that are nonnegative integers. For example, for $R=k[\epsilon]/\epsilon^2$, the module $R/\epsilon$ is dualizable and has Euler characteristic $\epsilon$. – Eric Wofsey Sep 16 2010 at 1:43
http://www.ams.org/bookstore?fn=20&arg1=genint&ikey=EULER
Return to List
Euler Through Time: A New Look at Old Themes
V. S. Varadarajan, University of California, Los Angeles, CA
2006; 302 pp; hardcover
ISBN-10: 0-8218-3580-7
ISBN-13: 978-0-8218-3580-7
List Price: US\$61
Member Price: US\$48.80
Order Code: EULER
See also:
Supersymmetry for Mathematicians: An Introduction - V S Varadarajan
The Mathematical Legacy of Harish-Chandra: A Celebration of Representation Theory and Harmonic Analysis - Robert S Doran and V S Varadarajan
The Selected Works of V.S. Varadarajan - V S Varadarajan
Algebra in Ancient and Modern Times - V S Varadarajan
Euler is one of the greatest and most prolific mathematicians of all time. He wrote the first accessible books on calculus, created the theory of circular functions, and discovered new areas of research such as elliptic integrals, the calculus of variations, graph theory, divergent series, and so on. It took hundreds of years for his successors to develop in full the theories he began, and some of his themes are still at the center of today's mathematics. It is of great interest therefore to examine his work and its relation to current mathematics. This book attempts to do that.
In number theory the discoveries he made empirically would require for their eventual understanding such sophisticated developments as the reciprocity laws and class field theory. His pioneering work on elliptic integrals is the precursor of the modern theory of abelian functions and abelian integrals. His evaluation of zeta and multizeta values is not only a fantastic and exciting story but very relevant to us, because they are at the confluence of much research in algebraic geometry and number theory today (Chapters 2 and 3 of the book).
Anticipating his successors by more than a century, Euler created a theory of summation of series that do not converge in the traditional manner. Chapter 5 of the book treats the progression of ideas regarding divergent series from Euler to many parts of modern analysis and quantum physics.
The last chapter contains a brief treatment of Euler products. Euler discovered the product formula over the primes for the zeta function as well as for a small number of what are now called Dirichlet $L$-functions. Here the book goes into the development of the theory of such Euler products and the role they play in number theory, thus offering the reader a glimpse of current developments (the Langlands program).
For other wonderful titles written by this author see: Supersymmetry for Mathematicians: An Introduction, The Mathematical Legacy of Harish-Chandra: A Celebration of Representation Theory and Harmonic Analysis, The Selected Works of V.S. Varadarajan, and Algebra in Ancient and Modern Times.
Readership
Undergraduates, graduate students, and research mathematicians interested in the history of mathematics and Euler's influence on modern mathematics.
Reviews
"...something truly special...Varadarajan has provided us with a useful guide to certain portions of Euler's work and with interesting surveys of the mathematics to which that work led over the centuries."
-- MAA Reviews
"...the author has admirablly managed to organize the text in such a manner that an interested non-specialist will find the whole story comprehensible, absorbing, and enjoyable. This book has been written with the greatest insight, expertise, experience, and passion on the part of the author's, and it should be seen as what it really is: a cultural jewel in the mathematical literature as a whole!"
-- Zentralblatt MATH
"By taking some of Euler's most important insights, developing them, and showing their connection to contemporary research, this book offers a profound understanding of Euler's achievements and their role in the development of mathematics as we now know it."
-- Mathematical Review
http://stats.stackexchange.com/questions/14194/what-techniques-are-used-for-empirical-stochastic-simulation-of-a-time-series
# What techniques are used for empirical, stochastic simulation of a time series?
Suppose you have recorded a set of paths in the $y,t$ plane, with $y = f(t)$, $f$ is a stochastic function (i.e. there is a noise term), and $t$ might be time or some other monotonic increasing independent variable.
Suppose further that you know the underlying process is random, meaning the same set of initial conditions (even, "hidden variables") do not produce the same path. Do not assume that changes in $y$ are independent, or even stationary (though, stationary but dependent changes are an important special case).
What techniques are available to simulate an incomplete curve into the future?
What I mean by "incomplete" curve is: your set of historical curves are each over some range of interest, $T=[0,t_{max}]$. You are given a curve that represents data $y_1(t)$ recorded up until some intermediate value of $t' \in T$.
-
Are you willing to assume that the historical data form a set of identically distributed and perhaps even independent curves? – NRH Aug 12 '11 at 20:07
Yes, good point; an important case is with each curve independent from the other, with the same underlying distribution throughout $T$. However, it is important that changes in $y$ on a given curve are generally dependent. – Pete Aug 12 '11 at 23:38
Coudl you use Markov Chains? – Manoel Galdino Aug 16 '11 at 2:08
## 2 Answers
Would dynamic linear models be applicable? (State Space formulation, Kalman filter, etc.) The `dlm` package has some nice tools to create and simulate from models.
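To make the suggestion concrete, here is a minimal sketch in Python rather than R (a local-level model with hand-picked noise variances, purely for illustration): filter the observed part of the curve, then sample continuations from the state posterior.

```python
import numpy as np

rng = np.random.default_rng(0)
q, r = 0.1, 0.5   # assumed state and observation noise variances

def kalman_filter(y, q, r, m0=0.0, p0=10.0):
    """Local-level model x_t = x_{t-1} + w_t, y_t = x_t + v_t.
    Returns the filtered mean/variance of the state at the last time."""
    m, p = m0, p0
    for obs in y:
        p = p + q                 # predict
        k = p / (p + r)           # Kalman gain
        m = m + k * (obs - m)     # update
        p = (1 - k) * p
    return m, p

# Synthetic partially observed curve y_1(t) for t <= t'
y_obs = np.cumsum(rng.normal(0, q**0.5, 50)) + rng.normal(0, r**0.5, 50)
m, p = kalman_filter(y_obs, q, r)

# Sample possible continuations for t > t'
n_paths, horizon = 5, 30
x0 = rng.normal(m, p**0.5, n_paths)                    # hidden state draws
x = x0[:, None] + np.cumsum(rng.normal(0, q**0.5, (n_paths, horizon)), axis=1)
y_future = x + rng.normal(0, r**0.5, x.shape)          # simulated observations
```

The `dlm` package wraps this same fit-filter-simulate pattern; the sketch just makes the mechanics explicit.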
-
Conceptually, the question fits into the framework of functional data analysis, see, for instance Applied Functional Data Analysis by Ramsay and Silverman. The usual assumption here is that we have a data set of independent, and perhaps even identically distributed, smooth curves. Fitting an fda model to your data over $[0, t_{\max}]$ you are able to predict a future $y_1(t)$ based on the fitted model, or, in principle, the continuation of $y_1(t)$ for $t > t'$ based on the conditional distribution from the fitted model of $y_1(t)$ for $t > t'$ given $y_1(t)$ for $0 \leq t \leq t'$. However, this may be easier said than done.
A simple example is when your curves can be modeled via an ordinary differential equation (with random initial values). Then you can predict $y_1(t)$ for $t > t'$ by solving the ode with the observed initial condition $y_1(t')$.
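A tiny sketch of that ODE case (the dynamics $y' = -0.5\,y$ are a placeholder assumption standing in for a fitted model):

```python
from scipy.integrate import solve_ivp

def f(t, y):
    return -0.5 * y        # placeholder dynamics fitted elsewhere

t_prime, t_max = 3.0, 10.0
y_at_t_prime = [1.7]       # last observed value of y_1

sol = solve_ivp(f, (t_prime, t_max), y_at_t_prime, dense_output=True)
# sol.sol(t) now predicts y_1(t) for t' <= t <= t_max
```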
I would recommend that you take a look in the book above, and perhaps also their theoretical version Functional data analysis or their web page for inspiration. I think there are many ways to proceed depending on the relative impact of inhomogeneity in the $t$-variable and information content in the partial observation of $y_1$ on the continuation of the curve.
-
(+1) Good suggestions. Even the "theoretical version" is an easy and enjoyable read. :) – cardinal Oct 1 '11 at 23:35
http://physics.stackexchange.com/questions/tagged/phase-velocity
# Tagged Questions
The phase-velocity tag has no wiki summary.
1answer
426 views
### Deriving group velocity
In introductions to quantum mechanics, phase ($v_p$) and group ($v_g$) velocities are often presented. I know how to derive $v_p$ and get the equation: $$v_p=\frac{\omega}{k}$$ What I don't ...
0answers
33 views
### What is the relation between phase velocity and generator frequency in a rectangular waveguide?
Could you help me to find relations between phase velocity and generator frequency in a rectangular waveguide?
1answer
528 views
### Phase and Group Velocity of Electromagnetic Waves
Moving charges produce oscillating electric and magnetic fields - we have an electromagnetic wave. In terms of moving charges or at the level of charges, what is phase velocity and group velocity of ...
2answers
501 views
### What does the velocity of a wave mean?
I know that the velocity of a wave is given by $v=\lambda f$ but what does this velocity represent in the physical sense. For instance, if I am told a car moves at a velocity of 5 $m/s$ I know that ...
1answer
85 views
### Wavefronts and phase velocity faster than $c$
Let's assume we have parallel wavefronts in a glass of water, and we put an inclined rod on the water surface: for a very small inclination, the $V_y$ velocity is greater or much greater than $V_x$ ...
2answers
588 views
### Group Velocity and Phase Velocity of Matter Wave?
In quantum mechanics, what is the difference between group velocity and phase velocity of matter wave? How can it also be that phase velocity of matter wave always exceeds the speed of light?
7answers
409 views
### Information relationship to Special Relativity
How do we write mathematically that "information" cannot go faster than light? And along a similar line of thought, how do we relate "information" with special relativity. Lastly, what is the ...
1answer
352 views
### Rotating mirror - Foucault's measurement of light speed
Some time ago I came across a secondary web source on measurement of light speed in water made by Foucault around 1850. I append its redrawn scheme below (light is reflected from the rotating mirror ...
1answer
171 views
### Fermat principle: which index of refraction?
I am somewhat puzzled by a common formulation of the Fermat principle (light travel time), because it contains index of refraction related to phase velocity while light travel time through a slab of ...
1answer
182 views
### Why doesn't anomalous dispersion allow faster-than-light propagation?
It seems that the phase velocity of light could be greater than $c$, if $\sqrt{\epsilon \mu} < 1/c$, i.e. for anomalous dispersion. Are there examples of such media? For diamagnetics it seems ...
4answers
676 views
### In superluminal phase velocities, what is it that is traveling faster than light?
I understand that information cannot be transmitted at a velocity greater than speed of light. I think of this in terms of the radio broadcast: the station sends out carrier frequencies $\omega_c$ but ...
3answers
2k views
### How do you find the velocity function of a mechanical wave?
With the form $y(x,t)=A\sin(kx-\omega t+\phi_0)$, there are two variables. How do I find the velocity? I don't know how I can apply a derivative with two variables.
3answers
273 views
### Controllable faster-than-light phase velocity
This is not another question about faster-than-light travel or superluminal communication. I totally appreciate the speed limit capped by physical laws (or theories.) Just curious, since there is no ...
http://physics.stackexchange.com/questions/23853/how-to-derive-the-manley-rowe-relation-for-the-process-of-the-second-harmonics-g
# How to derive the Manley-Rowe relation for the process of the second harmonics generation?
Derive the Manley-Rowe relation for the process of the second harmonics generation.
Manley-Rowe relation: ~ The Manley-Rowe relations are mathematical expressions developed originally for electrical engineers to predict the amount of energy in a wave that has multiple frequencies.
Second-harmonic generation: ~ Second harmonic generation (SHG), also called frequency doubling, is a nonlinear optical process in which photons interacting with a nonlinear material are effectively "combined" to form new photons with twice the energy, and therefore twice the frequency and half the wavelength of the initial photons. It is an example of a nonlinear phenomenon (I, 3). In the SHG process, the intense wave at the frequency $\omega$ propagates in a medium with second-order nonlinearity (VI, 1).
-
– John Rennie Apr 16 '12 at 16:22
## 1 Answer
The Manley-Rowe relation arises from conservation of energy and momentum. For the case of SHG, the presence of the nonlinear optical (NLO) material eliminates the conservation of momentum (any momentum difference between the initial and final photons can be provided by the bulk material). So what's left is conservation of energy.
Let $N_\omega$ and $N_{2\omega}$ be the numbers of photons at the fundamental frequency and at the second harmonic, respectively. Usually NLO people care about how these sorts of things change with the distance that the wave moves through the material. So let the direction of propagation be $x$. Then, by energy conservation:
$$\frac{dN_{\omega}(x)}{dx} + 2\frac{dN_{2\omega}(x)}{dx} = 0.$$ This follows from $E=\hbar\omega$: each $2\omega$ photon carries twice the energy of an $\omega$ photon, so it takes two of the $\omega$ photons to provide the energy in one $2\omega$ photon, and the fundamental photon number falls twice as fast as the second-harmonic photon number rises.
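As a quick numerical sanity check (using the textbook phase-matched, lossless SHG solution, in which the fundamental intensity falls off as $\mathrm{sech}^2$ while the second harmonic grows as $\tanh^2$):

```python
import numpy as np

x = np.linspace(0, 5, 200)      # propagation distance, arbitrary units
I0 = 1.0                        # input intensity at the fundamental

I_w  = I0 / np.cosh(x)**2       # fundamental intensity
I_2w = I0 * np.tanh(x)**2       # second-harmonic intensity

# Photon fluxes N = I/(hbar*omega); set hbar*omega = 1 for the check.
N_w  = I_w                      # photons at omega
N_2w = I_2w / 2                 # photons at 2*omega carry twice the energy

assert np.allclose(N_w + 2 * N_2w, I0)   # the Manley-Rowe invariant
```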
-
http://mathhelpforum.com/advanced-algebra/169512-tensor-linear-map.html
# Thread:
1. ## tensor and linear map
Let $V$ and $W$ be vector spaces and $f:V\rightarrow W$ a linear map. Show that $f$ is a tensor of type (1,1).
thank you very much!
http://math.stackexchange.com/questions/156123/show-that-frac377-12-is-odd-and-composite
Show that $\frac{(3^{77}-1)}{2}$ is odd and composite
The question given to me is:
Show that $\large\frac{(3^{77}-1)}{2}$ is odd and composite.
We can show that $\forall n\in\mathbb{N}$:
$$3^{n}\equiv\left\{ \begin{array}{l l} 1 & \quad \text{if $n\equiv0\pmod{2}$ }\\ 3 & \quad \text{if $n\equiv1\pmod{2}$}\\ \end{array} \right\} \pmod{4}$$
Therefore, we can show that $3^{77}\equiv3\pmod{4}$. Thus, we can determine that $(3^{77}-1)\equiv2\pmod{4}$. Thus, we can show that $\frac{(3^{77}-1)}{2}$ is odd as:
$$\frac{(3^{77}-1)}{2}\equiv\pm1\pmod{4}$$
However, I am unsure how to show that this number is composite. The book I am reading simply states two of the factors, $\frac{(3^{11}-1)}{2}$ and $\frac{(3^{7}-1)}{2}$, but I do not know how the authors discovered these factors.
I'd appreciate any help pointing me in the right direction, thanks.
-
In this case it's true that $(3^{77}-1)/2\equiv 1 \pmod{4}$, but observe that from $x\equiv 2 \pmod{4}$ it does not necessarily follow that $x/2 \equiv 1 \pmod{4}$, only that it is odd. – Zander Jun 9 '12 at 18:58
6 Answers
Hint: $$a^n-1=(a-1)(a^{n-1}+a^{n-2}+...+a+1)\,,\,\,\forall n\in\mathbb{N} \wedge \forall a\in\mathbb{R}$$
Added Of course, we also have in this case, applying the above: $$3^{77}-1=\left(3^7\right)^{11}-1=(3^7-1)\left(\left(3^7\right)^{10}+\left(3^7\right)^9+...+3^7+1\right)\,,\,etc.$$ and something similar can be done with $\,3^{77}=\left(3^{11}\right)^7$
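Both factors are easy to confirm with Python's arbitrary-precision integers (a quick sanity check):

```python
n = (3**77 - 1) // 2
assert n % 2 == 1                          # n is odd
for d in ((3**7 - 1) // 2, (3**11 - 1) // 2):
    assert n % d == 0                      # both hinted factors divide n
print(n % 23)                              # 0, matching the mod-23 answer below
```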
-
Another way to see this is by writing the number in base 3:
$$3^{77}=1\underbrace{00\dots00}_{77}\ _3$$
Here the index $3$ denotes base 3, and $77$ is the number of digits. Subtracting one, we get:
$$3^{77}-1=\underbrace{22\dots22}_{77}\ _3$$
Therefore, dividing this by two,
$$\frac{3^{77}-1}{2}=\underbrace{11\dots11}_{77}\ _3$$
From this we can directly read that the number is odd, since it is the sum of 77 odd numbers, and composite, since $$\underbrace{11\dots11}_{77}\ _3=1111111_3\cdot\underbrace{10000001000000\dots100000010000001}_{71}\ _3$$
(Although, this is basically the same as some of the other answers.)
-
+1 for an interesting, creative method I would never have thought of! – Shaktal Jun 9 '12 at 19:24
+1 Ditto as above: beautiful! – DonAntonio Jun 9 '12 at 21:09
Well following your congruences idea we have that $$3^{77} \equiv 3^{11} \equiv 1 \pmod{23}$$ So $$3^{77} - 1\equiv 0 \pmod{23}$$ Since $2^{-1} \equiv 12 \pmod{23}$, we have that $$\dfrac{3^{77}-1}{2} \equiv 0 \pmod{23}$$ Hence $23 \mid \dfrac{3^{77}-1}{2}$ and is therefore composite.
-
That's an interesting method, can I ask where the number $23$ came from though? – Shaktal Jun 9 '12 at 16:35
Trial and error sadly. – Eugene Jun 9 '12 at 16:36
Or if not trial and error, then Little Fermat telling us that $$3^{22}\equiv1\pmod{23}$$ together with $$3^{77}=3^{11+22\cdot3}\equiv3^{11}\cdot(3^{22})^3\equiv3^{11}\pmod{23}.$$ – Jyrki Lahtonen Jun 9 '12 at 16:43
@JyrkiLahtonen Yes Fermat's Little Theorem was the idea behind the trial and error. That only gets you so far though... – Eugene Jun 10 '12 at 0:40
Given the hints in the question, you could just look at $\frac{(3^{11}-1)}{2} = 88573 = 23 \times 3851$ while $\frac{(3^{7}-1)}{2} = 1093$ is prime. – Henry Jun 10 '12 at 8:21
Note that if $m=kn$ for some $k\in\mathbb{N}$, then $$a^m-1=(a^n)^k-1=(a^n-1)(a^{n(k-1)}+a^{n(k-2)}+\cdots+a^n+1)$$ so that $a^n-1$ divides $a^m-1$.
-
Hint $\,$ The sought factors demonstrating compositeness of your number arise very simply from a compositional factorization $\rm\:g\:\!f = g\circ f\:$ of a polynomial, combined with the Factor Theorem.
$$\begin{eqnarray}\rm z\,\ -\,\ c\ & | &\,\rm\ g\:\!(\,z\,) - g\:\!(\,c\,) \\ \rm f(x)\!-\!f(a) &|&\,\rm\ g\:\!f(x) - g\:\!f(a) \\ \rm x^7\, -\, a^7\, & | &\,\rm (x^{7})^{11} - (a^{7})^{11} \\ 3^7-\,1\ & | &\:\!\ (3^{7})^{11} - 1 \end{eqnarray}$$
Note that if we employ the notation $\rm\:x^f = f(x)\:$ (e.g. as in Galois theory) then it is clearer
$$\begin{eqnarray}\rm z\,\ -\,\ c\,\:\! & | &\rm\ (\,z\,)^g - \:\!(\,c\,)^g \\ \rm x^f\ -\ a^f\, &|&\rm\ (x^f)^g - (a^f)^g \\ \rm x^7\, -\, a^7\, & | &\:\rm (x^{7})^{11}\!\! - (a^{7})^{11} \\ 3^7-\,1\ & | &\:\!\ (3^{7})^{11\!} - 1 \end{eqnarray}$$
In the Galois case the exponential notation highlights further structure, e.g. from my post here
$\quad\quad\begin{align}{} \rm g^{\:\sigma^4-1} \;=\;& \rm g^{\:(1\:+\;\sigma\:+\;\sigma^2\:+\;\:\sigma^3)\:(\sigma-1)} \\\\ \iff\quad\quad\rm \frac{\sigma^4 g}g \;=\;& \rm (g \;\: \sigma\:g \;\:\sigma^2 g \;\:\sigma^3 g)^{\sigma - 1} \;=\; \frac{\phantom{g\;\;\:} \sigma\:g \;\;\: \sigma^2 g \;\;\:\sigma^3 g \;\;\:\sigma^4 g}{g \;\;\:\sigma\:g \;\;\:\sigma^2 g \;\;\:\sigma^3 g\phantom{\;\;\:\sigma^4 g}} \\\\ \end{align}$
-
Note the algebraic factorisation:
$x^{nm} - 1 = (x^m - 1)(x^{m(n-1)} + x^{m(n-2)} + ... + x^m + 1)$
If we let $x=3, m=7, n=11$ then we see that $3^7 - 1$ divides $3^{77} - 1$.
So $\frac{3^{77} - 1}{2} = \frac{3^7 - 1}{2}k$ for some integer k as required.
-
http://physics.stackexchange.com/questions/tagged/heat+homework
# Tagged Questions
0answers
21 views
### Find the work done to increase the temperature of an ideal gas by $30^\circ\mathrm{C}$ if gas is expanding under [closed]
Find the work done to increase the temperature of an ideal gas by $30^\circ\mathrm{C}$ if the gas is expanding under the condition $V \propto \dfrac{t^2}{3}$.
1answer
679 views
### Beginner Thermal Dynamics Question [closed]
A syringe is set up filled with air (mainly N2 and O2) as shown in the diagram below. The surface area of the syringe is 15.3 cm2. The initial pressure inside the syringe is Pi = 114 kPa and the ...
0answers
27 views
### Calculating the change in entropy in a melting process
I have a homework question that I'm completely stumped on and need help solving it. I have a $50\, \mathrm{g}$ ice cube at $-15\, \mathrm{C}$ that is in a container of $200\, \mathrm{g}$ of water at ...
1answer
44 views
### Calculating the coefficient of thermal expansion in liquid
I am trying to write a matlab function that calculates the coefficient of thermal expansion of water from a given temperature. From what I understand the thermal expansion coefficient is calculated as ...
0answers
23 views
### Calculate the amount of water needed to extinguish a vehicle fire? [closed]
Essential data: petrol has a calorific value of 45000 kJ/kg; temperature at which petrol burns ~ 300 degrees Celsius; specific heat capacity of water = 4.2 kJ/kg; specific heat capacity of steam = 2.0 ...
1answer
40 views
### What happens to pipe outlet temperature here?
I have a fluid flowing through a pipe in the ocean and there is heat transfer from the ocean to the fluid in the pipe. I prepared a simulation and the results show that if I increase the mass flow ...
0answers
30 views
### What is the meaning of $h_L - h_H$ for a heat engine?
My problem gives me a Carnot cycle heat engine with water as its working fluid, with $T_H$, $T_L$, and the fact that it starts from saturated liquid to saturated vapor in the heating process. I need ...
1answer
88 views
### Sensible heat question (solving for temperature)
If $55\,034.175\ \mathrm{kJ}$ of heat are transferred to $150\ \mathrm{kg}$ of ice at a temperature of $-12.15\,^\circ\mathrm{C}$, calculate the temperature of the resulting water. Using $Q = mc(t_2-t_1)$ or ...
0answers
32 views
### Height of a piston in a heated cylinder containing H2O
I'm a first time user and I hope I won't be too enigmatic asking the following question: I have a cylinder (radius= $6$ $\text{cm}$) with a frictionless piston on top of it and inside $30$ $g$ of ...
4answers
362 views
### why does a larger thermal conductivity provide a smaller temperature gradient?
I was thinking about Fourier's Law in heat transfer today and for some reason I am just not understanding the relationships it gives us. Fourier's law tells us that if the heat transfer rate is kept ...
2answers
2k views
### What is the characteristic length of a cylinder
I have a cold cylinder that is submerged in hot water and I need to find the convective heat transfer coefficient. I can do the whole process but I am stuck finding the characteristic length. I found ...
1answer
459 views
### Heat Exchanger Calculation
I have a tank of oil at 55 degrees c. I plan to run a copper pipe 8mm in diameter (1mm thickness) into a coil 15m long inside the tank. For all purposes of assumption, the copper pipe is perfectly ...
1answer
96 views
### I need help with this question on Heat Capacity
A calorimeter has a Heat Capacity of $70 J/K$. There is $150g$ water with a temperature of $20^oC$ in this calorimeter. In this, you put a metal cube of $60g$ with a temperature of $100^oC$. The ...
0answers
168 views
### Simple heat transfer question [closed]
You add an unknown volume of milk of $5.2 ^\circ C$ to a cup of coffee ($40 mL$ of water, temperature: $80.3 ^\circ C$). After a while of stirring the temperature reaches $73.2 ^\circ C$. The ...
0answers
53 views
### Experimental Physics [closed]
A heater and thermocouple are used to measure and control the temperature T of a sample at $T_{0}=250^{\circ}C$. A feedback circuit supplies power P to the heater according to the equation $P=P_{0}+G(T_{0}-T)-D\ldots$
1answer
260 views
### ratio between work and heat [closed]
I am really stuck on a problem in my textbook: Water is heated in an open pan where the air pressure is one atmosphere. The water remains a liquid, which expands by a small amount as it is heated. ...
1answer
103 views
### Temperature and latent heat
Building a bronze statue, we make a mold and pour in the liquid bronze; when the bronze hardens we remove the mold. The mold is made of 3 kg of steel and the statue has a mass of 1 kg. The specific ...
1answer
113 views
### Determine the flow and amplitude equation for thermal energy (with Del operator)
It is a question on vector calculus and Maxwell's laws. I put it this way. Let's say we are working in a $3$-dimensional space (e.g. $x\cdot y\cdot z = 4\cdot3\cdot2$, a certain room/class of that size ...
2answers
658 views
### Specific heat capacity
Which of two objects at the same temperature can cause more intense burns when you touch it: the one with the greater specific heat capacity or the one with the smaller specific heat capacity and why? ...
2answers
138 views
### Using CO2 to air condition a room
I'm trying to determine how much dry ice or liquid nitrogen I would need to cool 3300 cubic feet, about 90,000 liters of air, from about 100F (37.78C or 310K) to about 90F (26.67C or 299.81K). I'm ...
1answer
257 views
### How would I calculate the convection coefficient in transient convection?
So I have faced a problem dealing with transient conduction and I need a little help with the problem solving concepts. I need to determine how long it would take to reach the final temperature but I ...
2answers
299 views
### Transient radiation--heating a slab
Hey guys I really need help on this problem. A ceramic slab of dimensions 5 cm x 10 cm x 0.25 cm has to be heated to $177\,^{\circ}{\rm C}$. The ceramic slab travels on a conveyor belt traveling at ...
2answers
140 views
### How would I go about solving this transient convection problem if the mean fluid temperature is constantly changing?
Let's say I have a ceramic slab on a conveyor belt that is initially at $450\,^{\circ}\mathrm{C}$ and there is air being blown over it at a speed of $35 \frac{m}{s}$ with an ambient temperature of ...
1answer
134 views
### 2d or 1d conduction in this scenario?
There is a rectangular fin attached to a heat exchanger with a base temperature of 350K. The fin has uniform properties and experiences a uniform heat generation. It also experiences heat transfer with ...
1answer
162 views
### At what point can we assume the tip of a fin is adiabatic?
Let's say there is a fin that is 1mm thick, extends 8mm from the surface, and is 10 mm wide. The fin is exposed to a moving fluid. Can we assume the adiabatic tip condition and use the characteristic, ...
1answer
459 views
### In what situations do I use the characteristic length of a fin to find the surface area?
So I'm learning about fins in heat transfer and it seems that there are two separate formulas for the surface area of a rectangular fin of length L, width w and thickness t. The fin is attached to a ...
1answer
627 views
### Confused with stress, strain and linear thermal expansion
Four rods A, B, C, D of same length and material but of different radii r, 2r , 3r and 4r respectively are held between two rigid walls. The temperature of all rods is increased by same ...
0answers
166 views
### Is the $mL_c$ value for triangular and rectangular fins the same value?
I am looking at the solutions that my professor put up and I feel that he did something wrong. Here is the question and I will give my stab at the solution so you can see why I think that it is wrong. ...
1answer
270 views
### How do I find average temperature given a temperature distribution?
I was told to find the temperature distribution of a wire with a current going through it. So I found $$T(x)=T_{\infty}-\frac{\dot{q}}{km^{2}}\left[\frac{\cosh(mx)}{\cosh(mL)}-1\right]$$ I need to find the ...
1answer
177 views
### When to use Heat Diffusivity eqn and when to use Fourier's law to find temperature distribution?
Let's say that there is a circular conical section that has diameter $D=.25x$ without any heat generation and I need to find the temperature distribution. Originally I thought I could use the heat ...
0answers
403 views
### When do I know if energy stored in an object is 0 or nonzero? (Heat transfer)
There is uniform internal heat generation at $\dot{q}=5\times10^{7} \frac{W}{m^{3}}$ is occurring in a cylindrical nuclear reactor fuel rod of 50 mm diameter and under steady state conditions the ...
1answer
271 views
### Temperature change effected by electric heater [closed]
A 40-gallon electric water heater has a 10kW heating element. What will the water temperature be after 15 min of heating if the start temp is 50F degrees. There must be an equation. I can't find it ...
1answer
271 views
### Given two boiling temperatures and pressures, how can I find the latent heat?
I am given the fact that at a certain pressure a liquid boils at a corresponding temperature, at a different pressure it boils at a different temperature, and then I am asked to find the latent heat ...
3answers
260 views
### Problem with an electricity / thermodynamics assignment
I've been trying to figure this one out for a while on my own, so I'd like to ask for your help if you could offer some. The task states: A heater made out of a wire with a diameter $R = \ldots$
0answers
95 views
### To determine the age of the Earth by the cooling of the centre of the Earth [closed]
I was given an exercise to make a simple model that suggests that the age of the Earth is 78000 years old. My current starting point is $$u_{tt} = u_{xx}$$ so a wave equation. So in polar ...
4answers
12k views
### Heat transfer calculated from the specific heat formula
Say I have 10 g of silver, whose specific heat is known to be 0.235. I've heated it up from $50.0^\circ$C to $60.0^\circ$C. How much heat has been transferred? The equation is: $$Q = C_p m\Delta t$$ where $C_p$ is ...
1answer
378 views
### Heat & thermodynamics question based on heat loss [closed]
A sphere A is placed on a smooth table. Another sphere B is suspended as shown in the figure. Both the spheres are identical in all respects. Equal quantity of heat is supplied to both spheres. All ...
http://math.stackexchange.com/questions/253110/how-to-find-mathrmranka/254147
# How to find $\mathrm{Rank}(A)$
Let $A\in \mathrm{Mat}_{5\times 4}(\mathbb R)$ be such that the space of all solutions of the linear system $AX^t=[1,2,3,4,5]^t$ is given by $\left\{[1+2s,2+3s,3+4s,4+5s]:s\in \mathbb R\right\}$. I need to find $\mathrm{Rank}(A)$.
I don't know where to start. I need some hints.
-
– clark Dec 7 '12 at 15:34
You have 5 equations in 4 unknowns, but in the solution set values are specified for 3 variables only. Which of your variables is the free variable? You should be able to compute the rank using the number of variables and the number of free variables. – Chris Leary Dec 7 '12 at 15:42
@ChrisLeary: Sorry. Corrected. – Sugata Adhya Dec 7 '12 at 16:16
## 1 Answer
We have $A^{-1}\bigl((1,2,3,4,5)\bigr) = v + \ker A$ for any vector $v \in A^{-1}\bigl((1,2,3,4,5)\bigr)$; since the given solution set is a line, $\dim(\ker A) = 1$. Furthermore, by the rank-nullity theorem,
$$\dim(\ker A) + \dim(\mathrm{Im}\, A) = \dim \mathbb R^4 = 4,$$
so $\dim(\mathrm{Im}\, A) = 3$, i.e. $\mathrm{Rank}(A) = 3$.
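A quick numerical illustration (my own construction of one such $A$; any matrix with kernel spanned by $d=(2,3,4,5)^t$ and $Ap=b$ for $p=(1,2,3,4)^t$, $b=(1,2,3,4,5)^t$ would do):

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.array([2., 3., 4., 5.])   # direction of the solution line
p = np.array([1., 2., 3., 4.])   # particular solution
b = np.array([1., 2., 3., 4., 5.])

rows = rng.normal(size=(5, 4))
rows -= np.outer(rows @ d, d) / (d @ d)   # make every row orthogonal to d
A = (b / (rows @ p))[:, None] * rows      # rescale rows so that A @ p = b

assert np.allclose(A @ d, 0) and np.allclose(A @ p, b)
print(np.linalg.matrix_rank(A))           # 3
```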
-
http://crypto.stackexchange.com/questions/5981/how-do-single-use-passwords-work-for-an-encrypted-message
# How do single use passwords work for an encrypted message
LastPass encrypts its data before sending it to their servers. The key for the data is generated using a combination of the master password and username.
How then can a one-time password be used to decrypt this data? The data on the server has been encrypted with the master password and username. Does this mean that LastPass stores a new version of encrypted data for every (single-use) password?
-
The key for the data should be generated in a way that also involves salt. (I have no other reason to believe that it actually is.) – Ricky Demer Jan 12 at 19:18
## 2 Answers
As hunter notes, the only people who can really say what LastPass actually does are those who work there. However, as long as we only consider what they can and should do...
They don't really need to store a separate copy of your data for each one-time password. Instead, all they need to store for each password is an encrypted copy of the key used to actually encrypt the data.
So, when you log in using a one-time password, the password is (presumably) passed through PBKDF2 to derive a "key-encryption key", which is then used to decrypt the actual key needed to decrypt the data.
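A rough sketch of that pattern (an illustration of the general technique only, not LastPass's actual code; it uses PBKDF2 from the standard library and AES-GCM from the `cryptography` package):

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_kek(password: bytes, salt: bytes) -> bytes:
    """Key-encryption key derived from a (one-time or master) password."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

data_key = AESGCM.generate_key(bit_length=256)  # the key the vault is encrypted with

def wrap_key(password: bytes):
    """Store one wrapped copy of data_key per valid password."""
    salt, nonce = os.urandom(16), os.urandom(12)
    kek = derive_kek(password, salt)
    return salt, nonce, AESGCM(kek).encrypt(nonce, data_key, None)

def unwrap_key(password: bytes, salt: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    return AESGCM(derive_kek(password, salt)).decrypt(nonce, wrapped, None)

salt, nonce, wrapped = wrap_key(b"one-time password #1")
assert unwrap_key(b"one-time password #1", salt, nonce, wrapped) == data_key
```

Only the small wrapped-key blob is duplicated per password; the bulk of the encrypted data is stored once.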
-
– hunter Jan 15 at 16:24
@hunter: I suppose they might derive the actual encryption key directly using PBKDF2 from the user's permanent password, and then save an encrypted copy of that key for each one-time password. Or the statement on the website might just be missing some details. – Ilmari Karonen Jan 16 at 2:48
The only people who can answer your question definitively are the programmers at LastPass, however, I'll try.
I assume you're referring to this. If LastPass really does encrypt your data with your password/username, then logically it could only be decrypted with the same key.
Their 'one-time' password feature is an interesting idea, but I'm dubious about it. They use 256-bit AES to encrypt your data - this is symmetric encryption, which means that a single key is used for both encryption and decryption.
In answer to your question, yes - it would make sense that they would have to create/store a copy of your data, encrypted with the 'one-time' password, for each one-time password that you create.
The only way (that I can think of) to avoid making a separate copy of your data for each 'one-time' password is if there is a server-side 'back-door' into your account, which would put your data at extreme risk (but I doubt this is the case).
-
http://mathoverflow.net/revisions/63736/list
I don't know if I really have enough mathematical background to give a profound answer, but there are some things I would like to mention here since I have also pondered a lot about the questions of when and why finiteness conditions are so important.
Basically the answer to the question "why are finiteness conditions so important?" is very, very simple: Because they make it possible to do mathematics. A mathematical theory which tries to overcome natural finiteness conditions tends to be isolated and narrow. In contrast to that, when you impose good finiteness conditions, the theory becomes rich, very beautiful (which is, of course, subjective). Also "which finiteness conditions" has a very simple answer: Exactly the ones which you need to do the mathematics you want to develop or, at least, imagine. There is no general recipe to produce a good finiteness condition, except that it should fit best to your situation. I've chosen the word "exactly" in order to exclude too restrictive resp. strong finiteness conditions here. On the other hand, we don't always have to look for the most general finiteness conditions, unless for some application, we really need more general ones.
For example there is nothing wrong with Hartshorne's book in the definition of coherent sheaves when we restrict ourselves to noetherian schemes - everything works out nicely. But if we jump, some day, to non-noetherian schemes, then we have to reconsider the notions of "coherent", "of finite type", "of finite presentation", etc. In the affine case, this also motivates the definition of noetherian rings: I also agree that the definition involving increasing chains of ideals might not be the most natural one, but what about the equivalent one which Pete has mentioned: Every submodule of a module of finite type is again of finite type. Actually exactly this property is often needed and maybe it has motivated the definition of noetherian rings. No obscure chains. Besides, from a more modern perspective, it is a relative condition, which talks about objects "over" the ring.
Also quite useful in practice (for example when surjectivity comes from an abstract argument and the injectivity just does not work out): A surjective endomorphism of a noetherian ring is an automorphism. Well I expect that you can list thousands of nice properties here. Remark that this property illustrates that often the finiteness condition is used to conclude something which says nothing at all about finiteness. Another example: If $X$ is a compact topological space, then for every topological space $Y$ the map $X \times Y \to Y$ is a closed map. But of course this fits well since in the proof we want to use a finite intersection of open subsets etc., and the definition of a topology with this restricted intersection property is again based on basic examples out of which this notion was developed. So this fits together very well. Also, the property above characterizes compact topological spaces and probably has motivated the corresponding notion of proper schemes in algebraic geometry.
Some general remarks about finitness: One of the most natural object of our mathematical universe is the set of natural numbers $\mathbb{N}$ (I hope no one already here objects and wants to generalize everything to regular cardinals), and the most basic proof involving natural numbers is induction (by the definiton of $\mathbb{N}$ as the smallest inductive set). In order to use induction in more sophisticated situations, we have to give our mathematical objects a measure in $\mathbb{N}$, for example dimension, length, depth, height, etc.. One of the most beautiful and basic examples for this is Grothendieck's vanishing result in sheaf cohomology for finite dimensional topological spaces. So basically you induct on the complexity of the topological space, which you cannot do for arbitrary topological spaces.
Finally I have to admit that my oppinion on finiteness conditions has changed in the last months. For years, I wanted to generalize every notion, theorem or even theory in order to avoid all the occuring finiteness conditions. See this MO question for a very clear example: What about infinite tensor products of vector spaces? We can write them down and prove some basic stuff, but in the end there is nothing interesting which we can do with them and there are no useful connections or applications. So let's just forget about them! :-) The same goes for schemes which are not quasi-separated (link, link).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439530968666077, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/27011/list
|
The $k$-forms that are easiest to describe are those with $k \in \{0,1,n-1,n\}$. A 0-form on an $n$-manifold is a function. A 1-form on an $n$-manifold, if you imagine it in $n+1$ dimensions, is like an arrangement of shingles on a roof: At each point of the manifold, it defines a directional slope, which as other people have said, is the same as a dual vector on tangent vectors. An $n$-form is a density, i.e., an entity that you can integrate over the manifold. And an $(n-1)$-form is a flux (like, say, describing oil coming out of a well): At each point it has a null tangent direction, and it assigns a non-zero volume to each cross section.
Of course you can think of any $k$-form as a $k$-dimensional flux, and for general values of $k$ you might as well. But when $k$ is 1 or $n-1$, it is somewhat easier to visualize the condition that the form is closed. A 1-form is closed when the shingles locally mesh as the slope of a smooth roof, i.e., the form is locally integrable. An $(n-1)$-form is closed when the flux is locally conservative, which for instance is the case with fluid flow. In fact, theorem: A closed, non-zero $(n-1)$-form is equivalent to a 1-dimensional foliation with a transverse volume structure.
The reason that other values of $k$ are harder is that while you do get an entirely analogous algebraic integrability condition when the form is closed, you might not get the same kind of geometric integrability. A non-zero 1-form has an $(n-1)$-dimensional kernel at each point. (Although the visualization that I suggested is in $n+1$ dimensions, it is also true in $n$ dimensions that these tangent hyperplanes mesh when the 1-form is closed.) A non-zero $(n-1)$-form has a 1-dimensional kernel at each point. But a $k$-form for other values of $k$ doesn't usually have a kernel. (Okay, a maximum rank 2-form in odd dimensions also has a 1-dimensional kernel, and it is equivalent to a 1-foliation with a transverse symplectic structure.)
I have heard the statement that only 1-forms and 2-forms are any good. (Well, that's an overstatement, but they are more important than the others except for maybe $0$ and $n$.) In particular, symplectic forms show up a lot, so it is important to try to imagine them even though by definition they have no kernels. I think of a symplectic form as a calibration for a local complex structure. (Or an almost complex structure, which might be all that exists globally.) I.e., among the different tangent 2-planes of a symplectic $2n$-manifold, the ones that are complex lines have the greatest pairing with the symplectic form, while the ones that are real planes have vanishing pairing, and the pairing minimum is achieved by complex lines with the wrong orientation.
One more remark: The geometric picture of a foliation with a transverse volume structure holds for closed $k$-forms that are also non-zero simple forms (i.e., wedge products of linearly independent 1-forms). I think it's a theorem that any closed $k$-form is locally a sum of closed, simple $k$-forms. If that's correct, then that's also a way to visualize a closed $k$-form, as an algebraic superposition of volumed foliations. $k=1$ and $k=n-1$ are special cases in which every non-zero form is simple.
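To make the flux picture concrete in $\mathbb{R}^3$: the 2-form $B_x\,dy\wedge dz+B_y\,dz\wedge dx+B_z\,dx\wedge dy$ is closed exactly when $\mathrm{div}\,B=0$. Here is a minimal symbolic sketch (sympy assumed; the vector potential is an arbitrary choice of mine) verifying this for $B=\mathrm{curl}\,A$, which is exact and hence closed:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([y**2 * z, sp.sin(x * z), x * y * z])   # arbitrary potential

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

B = curl(A)
div_B = sp.diff(B[0], x) + sp.diff(B[1], y) + sp.diff(B[2], z)
print(sp.simplify(div_B))   # 0: the flux 2-form built from B is closed
```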
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9554314613342285, "perplexity_flag": "head"}
|
http://alanrendall.wordpress.com/2012/01/21/albert-goldbeter-and-glycolytic-oscillations/
|
# Hydrobates
A mathematician thinks aloud
## Albert Goldbeter and glycolytic oscillations
This Christmas, at my own suggestion, I was given the book ‘La Vie Oscillatoire’ by Albert Goldbeter as a present. This book is concerned with oscillatory phenomena in biological systems and how they can be explained and modelled mathematically. After the introduction the second chapter is concerned with glycolytic oscillations. I had a vague acquaintance with this subject but the book has given me a much better picture. The chapter treats both the theoretical and experimental aspects of this subject.
If yeast cells are fed with glucose they convert it into alcohol. Those of us who appreciate alcoholic beverages can be grateful to them for that. In the presence of a supply of glucose with a small constant rate alcohol is produced at a constant rate. When the supply rate is increased something more interesting happens. The output starts to undergo periodic oscillations although the input is constant. It is not that the yeast cells are using some kind of complicated machine to produce these. If the cells are broken down to make yeast extract the effect persists. In fact for yeast extract the oscillations go away again for very high concentrations of glucose, an effect not seen for intact cells. This difference is not important for the basic mechanism of production of oscillations. The breakdown of sugar in living organisms takes place via a process called glycolysis consisting of a sequence of chemical reactions. By replacing the input of glucose by an input of each of the intermediate products it was possible to track down the place where the oscillations are generated. The enzyme responsible is phosphofructokinase (PFK), which converts fructose-6-phosphate into fructose-1,6-bisphosphate while converting ATP to ADP to obtain energy. Now ADP itself increases the activity of PFK, thus giving a positive feedback loop. This is what leads to the oscillations. The process can be modelled by a two-dimensional dynamical system called the Higgins-Selkov oscillator. Let $S$ and $P$ denote the concentrations of substrate and product respectively. The substrate concentration satisfies an equation of the form $\dot S=k_0-k_1SP^2$. The substrate is supplied at a constant rate and used up at a rate which increases with the concentration of the product. (Here we are thinking of ADP as the product and ignoring other possible effects.) The product concentration correspondingly satisfies $\dot P=k_1 SP^2-k_2 P$.
The Higgins-Selkov oscillator gives rise to a limit cycle by means of a Hopf bifurcation. The ODE system is similar to the Brusselator. There are two clear differences. The substance which is being supplied from outside occurs linearly in the nonlinear term in the Higgins-Selkov system and quadratically in the Brusselator. In the Higgins-Selkov system the nonlinear term occurs with a negative sign in the evolution equation for the substance being supplied from outside while in the Brusselator it occurs with a positive sign. In the book of Goldbeter the Higgins-Selkov oscillator seems to play the role of a basic example to illustrate the nature of biological oscillations.
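As a quick numerical illustration of the mechanism just described, here is a minimal sketch of the Higgins-Selkov system (numpy/scipy assumed; the rate constants $k_0=0.9$, $k_1=k_2=1$ are illustrative choices of mine, not values taken from the book). The linearization at the unique fixed point then has a complex pair of eigenvalues with positive real part (the Hopf-bifurcation scenario mentioned above), and integrating past the transient lets one inspect the resulting oscillations in the product concentration:

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, k1, k2 = 0.9, 1.0, 1.0                  # assumed, illustrative values

def rhs(t, u):
    S, P = u
    return [k0 - k1 * S * P**2, k1 * S * P**2 - k2 * P]

# Unique fixed point and its Jacobian:
S0, P0 = k2**2 / (k1 * k0), k0 / k2
J = np.array([[-k1 * P0**2, -2 * k1 * S0 * P0],
              [ k1 * P0**2,  2 * k1 * S0 * P0 - k2]])
print(np.linalg.eigvals(J))                 # ~0.095 +/- 0.90i: unstable spiral

# Integrate well past the transient and look at the swings in the product:
sol = solve_ivp(rhs, (0, 400), [S0, P0 + 0.1], rtol=1e-8, dense_output=True)
P_late = sol.sol(np.linspace(300, 400, 2000))[1]
print(P_late.min(), P_late.max())           # a persistent gap signals oscillation
```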
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9298534989356995, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/28271/persistence-of-fixed-points-under-perturbation-in-dynamical-systems
|
## Persistence of fixed points under perturbation in dynamical systems
Suppose we have a smooth dynamical system on $R^n$ (defined by a system of ODEs). Assuming that the system has a finite set of fixed points, I am interested in knowing (or obtaining references about) the behaviour of its fixed-point structure under perturbations of the ODEs. More specifically, I would like to know under which conditions the total number of fixed points remains the same. By perturbations I mean generic changes in the system of equations... the more general the better.
I am sorry if the question is too basic; my interest comes from the study of the so-called renormalization flow in field theories. In particular, it would be important for me to develop an intuition about the conditions under which approximations performed on a dynamical system alter its fixed-point structure.
-
It seems like the Conley index (en.wikipedia.org/wiki/Conley_index_theory) should have something to say about this, but I can't find any obvious references. – Steve Huntsman Jun 15 2010 at 17:01
## 5 Answers
A good reference for this sort of thing is Guckenheimer and Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, 1983, if you're not already familiar with it (and even if you are, for that matter). Chapter 3 in particular is relevant to your question.
-
A little bit more general than the previous answer: Given a singularity of a vector field such that the derivative is invertible, the singularity persists under perturbations. This is because the zero of the vector field in this case is transversal (so it persists under $C^1$ perturbations).
Otherwise, depending on the local structure in a local neighborhood, it can be "removed" after perturbation (in the case the index is zero) or bifurcate into more than one singularity (in the case the index is nonzero and the singularity is not hyperbolic).
Generic singularities are hyperbolic in the dissipative setting. In the conservative one, one may have elliptic singularities robustly, but generically these will also be persistent.
Since this is all local, it may be studied in $R^n$ as well as on a manifold.
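A one-dimensional toy computation makes the dichotomy concrete (plain numpy; the example fields are mine, purely illustrative): a transversal zero survives a small perturbation, while a zero of index zero can be removed outright.

```python
# f(x) = x has a transversal zero at 0 (nonzero derivative): perturbing to
# f(x) + eps just moves the zero. g(x) = x^2 has a degenerate zero of index
# zero: an arbitrarily small perturbation can remove it, or split it in two.
import numpy as np

eps = 1e-3
print(np.roots([1.0, eps]))         # [-0.001]       : the zero merely moves
print(np.roots([1.0, 0.0, eps]))    # +/- i*sqrt(eps): the real zero is gone
print(np.roots([1.0, 0.0, -eps]))   # +/- sqrt(eps)  : or it bifurcates
```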
-
Aga, right! But if the fixed point is not hyperbolic then the local phase portrait may be destroyed. Though Federico didn't ask for it. – Andrey Gogolev Jun 15 2010 at 17:28
A system of ODEs is a vector field. If there is a finite number of fixed points and periodic orbits and the system is Morse-Smale, then it is structurally stable under $C^1$-small perturbations. In particular, all fixed points survive.
More details are here http://www.scholarpedia.org/article/Morse-Smale_systems
The manifold needs to be compact. So one extends $R^n$ to $S^n$ by adding an extra fixed point.
-
If you are just interested in the number of fixed points of the flow, you can put your question in terms of zeros of the vector field; in this case the right tool is the topological degree, which is stable under small perturbations of the field in the uniform norm, and whose absolute value is (generically) a lower bound on the number of zeros. Of course, if the vector field is variational (it is the gradient of a functional), much stronger invariants are available (all the Morse complex machinery).
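For instance, the degree can be computed numerically as a winding number along a loop; a throwaway sketch (numpy assumed, example fields chosen only for illustration):

```python
# Degree of a planar vector field around the unit circle, computed as the
# winding number of (u, v); it is stable under small perturbations of F.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4001)
x, y = np.cos(t), np.sin(t)

def degree(F):
    u, v = F(x, y)
    angle = np.unwrap(np.arctan2(v, u))
    return round((angle[-1] - angle[0]) / (2.0 * np.pi))

print(degree(lambda x, y: (x, y)))                    # 1: a simple zero inside
print(degree(lambda x, y: (x**2 - y**2, 2 * x * y)))  # 2: the field z -> z^2
print(degree(lambda x, y: (x + 3.0, y)))              # 0: no zero enclosed
```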
-
In order to develop your intuition, you might want to start with a flow having no fixed points -- so a constant nonzero vector field, say in the plane. Perturb it by a nice vector field, e.g. polynomial (or polynomial times a positive function decaying to zero at infinity) until you get zeros for your vector field. Now, what is it you want to show, or preserve? Do you want to estimate how big the perturbation needs to be (in terms of the size of the initial nonzero vector field) in order for zeros to be created?
I think you need to be a bit more specific about where you are headed, or what restrictions there are on your initial vector field and its allowed perturbations to get anywhere here.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9197161793708801, "perplexity_flag": "head"}
|
http://crypto.stackexchange.com/questions/2653/safe-generator-for-elgamal-signature
|
# Safe generator for ElGamal signature
What are the properties a generator $g$ should have to be secure for ElGamal signatures (original scheme)?
I am aware that it is poorly chosen and not secure when $g|p-1$ or $g^{-1}|p-1$, where $p$ is the large prime so that $g$ generates $Z_p^*$ and $g$ has maximum order (that is, $p-1$).
Are there other properties one should consider? Additionally, should $g$ be picked randomly over $[1..p-1]$ or is it safe to choose it to be a small integer to start with?
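Not an answer to the security question, but for concreteness, here is the standard full-order test one would run on a candidate $g$ (a Python sketch; sympy assumed, and the prime below is just a small illustrative choice): $g$ generates $Z_p^*$ iff $g^{(p-1)/q} \neq 1 \pmod p$ for every prime $q \mid p-1$.

```python
from sympy import primefactors

def is_generator(g, p):
    # g has full order p-1 iff g^((p-1)/q) != 1 (mod p) for every prime q | p-1
    return all(pow(g, (p - 1) // q, p) != 1 for q in primefactors(p - 1))

p = 2579                       # small illustrative prime; p - 1 = 2 * 1289
print([g for g in range(2, 12) if is_generator(g, p)])
```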
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9260587096214294, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/26212/what-is-the-best-way-to-peel-fruit/26284
|
## What is the best way to peel fruit?
A mango made me wonder about this. (See also this question, which is in a similar spirit.)
Fix $L >0$ and a smooth body (possibly nonconvex—pears or bananas are fair game!) $B \subset \mathbb{R}^3$ (and assume w/l/o/g below that $L$ is sufficiently large since we can dilate $B$). For $\gamma:[0,L] \rightarrow \mathbb{R}^3$ smooth and parametrized by arclength and $\theta:[0,L] \rightarrow S^1$ smooth, let $k(\gamma, \theta,s)$ denote a copy of the unit interval centered at $\gamma(s)$ and in the plane orthogonal to $\dot \gamma(s)$, and at the angle $\theta(s)$ in that plane (we require $k(\gamma,\theta,0)$ to be tangent to $B$, say, and w/l/o/g that this sets $\theta(0) = 0$; angles in planes away from $s=0$ can be sensibly defined via parallel translation). Let $K(\gamma,\theta):= \{ k(\gamma, \theta,s) \cap B : s \in [0,L] \}$. If $K$ contains the boundary of a body $C_K \subset B$ then say that $(\gamma, \theta)$ is a peeling of $B$.
For $L$ fixed, is there an effective way to determine a peeling that minimizes $\mbox{vol}(B \backslash C_K)$?
Followup: can the best peeling of the unit ball for a given value of $L$ be explicitly constructed?
-
+1 for the charismatic title and way of explaining the idea. – Joel David Hamkins May 28 2010 at 2:35
Unfortunately, as you might expect when peeling fruit, it's a messy problem. – Greg Kuperberg May 28 2010 at 3:09
Reminds me of a paper I spotted a couple of days ago: arxiv.org/abs/1005.4609 – Dan Piponi May 28 2010 at 3:50
The title is sexy but I got lost in line 3 of the mathematical description. Is there way to restate it in plane language without excessive notation? – Victor Protsak May 28 2010 at 6:45
@Victor: here is my interpretation of what the notation means: $k(\gamma,\theta,s)$ is describing the knife, where $\gamma$ is the movement of the center of the knife while $\theta$ is the rotation of it (I didn't understand why it is only allowed to rotate orthogonal to the motion of the center, is that sufficient?). You are allowed to just make a limited movement (L) and want to minimize the pulp you are cutting away. – Michael May 28 2010 at 9:10
## 2 Answers
If the path is allowed to be piecewise smooth (see the comments above), and the fruit is convex, then you can cover the surface with a large number of small patches, and use very short circular trajectories to peel each patch, roughly spinning the blade in place to remove the piece of peel. As the size of patches decreases, this will approach a perfect peel, even if the total length $L$ is chosen to be arbitrarily small. This is like peeling a fruit by bouncing it off a belt sander.
If the fruit is non-convex, we still approach a perfect peeling, as long as we allow $C_K$ to have more than one connected component.
This suggests that the problem is only interesting if you put a bound on the number of jumps.
-
After a bit of thought I have a sketch of a very simple case. Suppose that $B$ is a 3-polytope; let $B^*$ denote its dual and consider the graph $G$ associated to the 1-skeleton $B^*_1$ of $B^*$. Now vertices of $G$ correspond to faces of $B$, and edges of $G$ correspond to adjacent faces of $B$. So if $G$ admits a Hamiltonian path then we can use it to get a (nearly?) optimal peeling for $L$ appropriate.
Google results for Hamiltonian circuits on 3-polytopes are here.
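To see the idea work on the smallest interesting case, here is a brute-force sketch (pure Python, feasible only for tiny graphs): for the cube, the vertices of $G$ are the six faces, two faces are adjacent exactly when they are not opposite, and $G$ is the 1-skeleton of the octahedron.

```python
# Brute-force search for a Hamiltonian path in the dual graph of the cube.
from itertools import permutations

faces = ['top', 'bottom', 'north', 'south', 'east', 'west']
opposite = {'top': 'bottom', 'bottom': 'top', 'north': 'south',
            'south': 'north', 'east': 'west', 'west': 'east'}

def adjacent(a, b):
    return opposite[a] != b        # non-opposite faces share an edge

paths = [p for p in permutations(faces)
         if all(adjacent(a, b) for a, b in zip(p, p[1:]))]
print(len(paths), paths[0])        # many Hamiltonian paths exist
```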
-
Thinking a bit further, it's not clear that the arclength cost incurred by a Hamiltonian path will be small; it might be more efficient to revisit already-peeled faces. So this casts the near-optimality into doubt. – Steve Huntsman May 28 2010 at 20:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285886287689209, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/tagged/functions?page=2&sort=newest&pagesize=30
|
# Tagged Questions
Questions about the use of built-in Mathematica functions, including pure functions.
2answers
165 views
### Expansion in Basis Functions
I am trying to create an expansion of the form f[x,y]->Sum[Cn[x] y^n,{n,1,order}] To replace the function f by the ...
1answer
97 views
### Solving a differential equation for a variable in a function
This may sound like a very trivial question, but I need to solve a differential equation as follows: ...
2answers
152 views
### How can I get the right hand side of a delayed expression?
Imagine there is a given function f, defined with SetDelayed, say f[x_] := Sin[x]^2 + Cos[x]^2. Is it possible to get rhs of ...
0answers
19 views
### Define a Plot3D function with custom options [duplicate]
I am trying to define a Plot3D function with custom options, following (this previous SE question), but I'm running into a brick wall. I've narrowed the problem to the following 'toy' code: ...
2answers
127 views
### How to use ScientificForm in computation
Lets say I want to add 2 values : ...
1answer
190 views
### How can I define a Step-Wise function in Mathematica (Not using Heaviside Step Function)? [closed]
I need to define a function, which has very different behaviour in different regions. There are about 13 different regions. A sample of my function is the following table: I want to define it as a ...
0answers
66 views
### Choosing the appropriate solution for the square root [closed]
My problem is the following: given the function myfunc[x_, a_]:= (a^3 - (a^2)^(3/2))/(x) the limit as x goes to 0 should be well defined and = 0. However, ...
0answers
39 views
### Confusing efficiency and evaluation when returning pure functions? [duplicate]
I have a function that takes some time to evaluate, that's meant to be a polynomial approximation to a function. The polynomial is defined by a list of coefficients, so I have the function ...
4answers
435 views
### Finding Limits in several variables
Is there a way to find a limit of a multivariable function, like $$\lim_{(x,y)\to (0,0)} f(x,y)$$ with Mathematica? When $f$ is continuous, we can use \lim_{(x,y)\to (0,0)} ...
2answers
62 views
### Named patterns as function argument descriptors
I have a function timeToMinutes that takes a 24-hour time as a string argument (in digital clock format, e.g., "15:47") and converts it to the number of minutes ...
2answers
101 views
### Self-defined function to transform row vector into column vector, where am I wrong? [closed]
I defined a function row2col[] to transform row into column, Where am I wrong? ...
1answer
138 views
### Problem using the DifferentialEvolution method of NMinimize [closed]
I have a function of 20 parameters, which 3 of the parameters are my physical parameters, and the others are pull terms to fix the errors. The goal is finding the global minimum of this function, to ...
3answers
205 views
### How make f[{x,y}] evaluate as f[x,y]?
I frequently encounter the situation where I have a function of two real variables defined, e.g.: f[x_, y_] := 9 - x^2 - y^2 But then I need to feed into ...
2answers
176 views
### Is there an $n^{\text{th}}$ root function in Mathematica?
Is there a way to find $\sqrt[n]{x}$ with Mathematica beside of x^(1/n) as this is something different, because this is not always the same (-1)^{\frac{2}{4}}=i ...
4answers
202 views
### Extract function arguments
Is there a way to extract the arguments of a function? Consider the following example: I have this sum ...
1answer
149 views
### Why does the first derivative of a piecewise continuous function turns out with discontinuities?
I have this piecewise continuous function which is also continuously differentiable over time : ...
1answer
91 views
### Does pass-by-value affect the performance of function calls?
I only have a little coding experience in C, and I remember I was told that pass by reference is more efficient than pass by value since the parameters don't need to be copied. Since there is no pass ...
0answers
137 views
### Minimization in mathematica [closed]
I have recently had a strange problem with NMinimize. I have a very huge function with respect to 20 parameters. When I ...
4answers
238 views
### Enforcing correct variable bindings and avoiding renamings for conflicting variables in nested scoping constructs
Using global variables the following turns an "expression" into a Function: expr = 2 x; Function[x, Evaluate[expr]] Of course ...
3answers
166 views
### How do I create a matrix of functions?
I'm trying to create a matrix in which the elements are functions of two variables, but I can't figure out how to do it. Is it possible? The only way I could figure out is to define the matrix in ...
1answer
116 views
### Expression for the real 7/3 power
I need an expression for the real 7/3 power of a real-valued function, i.e., a reformulation of f[x_] := g[x]^(7/3) that works for negative values of ...
1answer
137 views
### A Faster way to combine two Lists of different structures into one of a different structure [duplicate]
I have the following two lists (each containing over 500,000 elements). Here is a sample: ...
1answer
99 views
### Forcing evaluation of ArgMax
How to force evaluation of ArgMax before its output gets used in Solve? Background: I'm trying to solve for the Nash ...
1answer
98 views
### Evaluate Numerator and Denominator Separately
I have this function. ...
0answers
70 views
### Solving for vector elements in a function to which the vectors are not passed explicitly
I want to define a function which takes in two integers (indicating the lengths of 2 vectors), and solves a simple set of expressions at a set of points to find all the values in both the vectors. so ...
0answers
73 views
### How can I obtain the function described by given set of central moments?
I want to investigate how my function P behaves with different probability functions rho as input variables. This means ...
1answer
109 views
### Speeding up code by avoiding repeated evaluation of a function
I want to speed up my code and I have two ideas, but I don't know how I can implement them. Here is a little part of my code, which I want to improve: ...
1answer
102 views
### Function definition and delayed assignment
I need to define the following function MyWavelet[n_]["PrimalLowpass", prec_ : MachinePrecision] := Table[(-1)^(j - 1) h[[2*n - j]], {j, 0, 2*n - 1}] which ...
2answers
118 views
### Function with custom Options and modified Options for built-in Symbols
I couldn't find a more descriptive title, but I guess an example will explain my problem. I set up some customized Grid function including some additional ...
3answers
162 views
### Permanent minors
The function Minors yields the minors of a matrix. Is there a function that yields the permanent minors of a matrix?
3answers
95 views
### Creating a nonperiodic function in mathematica
I want to create a non-periodic square wave with values of 1 and -1(not necessarily alternating). For e.g. I want to convert an arbitrary array like {1,-1,-1,1,-1,1,-1} into a function. I tried ...
3answers
165 views
### Solve with v9 (issues with Subscript, Overscript, Superscript etc)
The way Solve works has changed in v9 ... in essence, to get the answers that one obtained under v8, one now often has to specify a lot more information about all ...
1answer
111 views
### Downvalues vs. Scoping for Functions
Regarding my recent question on using a default value for a function argument when a pattern was not met yielded some interesting answers, but the general consensus was "Yes this can be done, but ...
0answers
22 views
### Define a second name for a function [duplicate]
I'm working with an own package of functions. During development I noticed, that my naming scheme is not that clever; for example some of my functions are starting with capital letters. But I have ...
1answer
155 views
### Print out all functions in Mathematica
Sometimes I only remember parts of a function name and I want find that function name quickly, and if I search in the document center, it will give too much informations related to that and difficult ...
1answer
226 views
### Optimizing the performance of an algorithm
First - a bit of an introduction. If you're only interested in the code, you can skip this section. The following question is drawn from Dennis E. Shasha's Puzzling Adventures, and is listed under ...
6answers
258 views
### Manipulate and Turning Expressions into Functions
I've been trying to use Manipulate to do interactive plotting, but I've been running into a few problems with saved expressions. I have an expression saved as "func" and I want to work with it and ...
1answer
105 views
### How to export/import function values [duplicate]
I have a function with two variables: f[x_,y_]:=f[x,y] = ... I calculated some values (they are fractions like 435345345/3424242424) and would like to store the ...
2answers
161 views
### How to declare a function of variable?
In my program, there are many functions relying on spatial coordinates: x, y, and z, which are also functions of time t, i.e., composite function. I need to differentiate some functions for example f: ...
2answers
106 views
### Method that can be used to collect the variables of a function
Suppose we have a function f[x, y, z] and we want to get all its variables Sequence[x,y,z], what method can we use then? The ...
2answers
151 views
### How do I write a function that can be used in a rule to modify both sides of an equation? [duplicate]
I sometimes need fine grain control over equations in Mathematica in order to help me understand how to solve a problem manually. A greatly simplified example of a session might be something like ...
4answers
170 views
### Function argument to default under certain condition
Inspired by this and this question (and how I handle this in practice), what is the best way to default a function value when a certain condition is met? For example, if a function is defined as: ...
1answer
86 views
### Plot[] breaks behavior of custom data handling function - Recursion
I am working on a utility to analyze a set of data. I want to process the data with a sliding window, in such a way that there is an output associated with each sample of data. To start, I have a ...
1answer
236 views
### What is the difference between prefix/postfix notations and map?
I am new to Mathematica and just experimenting with the different programming constructs. I have been looking at Map and how to evaluate a function for a list of ...
0answers
60 views
### From notebook,how can I change variable value with in .m file dynamically?
I wrote a function named Testing. ...
1answer
94 views
### How can I calculate a limit with a free variable?
For example, when I evaluate Limit[(1 + x^n + (x^2/2)^n)^(1/n), n -> Infinity] Mathematica does not output any result. When I evaluate ...
0answers
91 views
### Strange result from integrate
I am working on the following problem. However, the first task takes too long, although the answer 0 is correct. In the second task, to reduce the calculation time, ...
0answers
118 views
### Help on how to write a function to be used with NMinimize
I need some help to write a proper function to be used as a parameter on NMinimize. Here is the code of the function to be minimized (Please correct, optimize and rewrite the code if you want to - I ...
0answers
84 views
### MiniMaxApproximation [closed]
I'm new to Mathematica and I'm trying to obtain a minimax rational function approximation to a certain expression. In particular, I'm using ...
3answers
221 views
### FullSimplify does not work on this expression with no unknowns
I can't reproduce this simple example from Habrat, 2010 ("Mathematica : a Problem-Centered Approach"). It is supposed to demonstrate the functionality of ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8979909420013428, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/11768/does-a-static-electric-field-and-the-conservation-of-momentum-give-rise-to-a-rel?answertab=votes
|
# Does a static electric field and the conservation of momentum give rise to a relationship between $E$, $t$, and some path $s$?
For a static electric field $E$ the conservation of energy gives rise to $$\oint E\cdot ds =0$$ Is there an analogous mathematical expression the conservation of momentum gives rise to?
-
Maybe I'm being stupid but surely if you could consider an object that has no net force acting on it, you would get 0 if you just took any loop integral around it: ∮pho⋅ds=0 What would one call the 'force field' be called though? – qftme Jun 30 '11 at 22:32
## 2 Answers
We have a thermodynamic understanding of energy violation, in terms of perpetual motion, but momentum violation is somewhat less immediately paradoxical in a changing background, maybe because we have intuition that the field has momentum in this case. But the field momentum is irrelevant for this question. Mechanical momentum should be conserved in a z-independent background ignoring any field momentum considerations.
I'll state the principle of conservation of momentum as follows: if the z direction is made periodic with period L (for mathematical convenience), and all the physical conditions (the time dependent E and B fields) do not depend on z, it is impossible to get a rod parallel to the z-direction going all the way around the box to move along the z direction only by moving it in the x and y directions, keeping it parallel to the z axis.
In terms of differential forms, the E equation you give is part of
$dF = 0$
When F is independent of time, this reduces to $\nabla \times E = 0$, and to the loop integral condition, but I will try to state it more covariantly in spacetime. Consider a spacetime 2-d surface which is some parametrized loop x(s) y(s) z(s) in the t=0 plane, extended in time by translation. The integral of the two-form over the surface is equal to zero, by Stokes' theorem. But this integral is equal to (the infinite length in time of the surface times) $\int E_x dx + E_y dy + E_z dz$, and you conclude that in time-independent fields, the integral must be zero.
If you believe the Lorentz equation of motion, this is a statement of conservation of mechanical energy, as you noticed.
But since it is stated in differential form language, all you have to do is tip the picture over into the z-axis to get a momentum version. If the F tensor does not depend on z, and z is periodic with period L (just in case), then integrating the F form on a two-surface made by taking a parametrized loop t(s),x(s),y(s), extended by translation along the z axis gives zero. This reduces to the following one dimensional integral:
$\int (E_z dt - B_y dx + B_x dy) = 0$
This integral is equal to zero by the Maxwell equation dF=0.
If you believe the Lorentz equation of motion, this integral must also vanish for reasons of conservation of momentum. If you take a uniformly charged z-extended wire around this loop, it would acquire momentum of this magnitude.
But you can't get a charged wire to move backward in time, so you have to use two wires, and at the proper instant charge them up to equal and opposite charge densities, then move them along the x and y directions as required by the loop, then bring them back and let them neutralize each other.
If the integral above is nonzero, the wires will get a nonzero relative z-momentum, which is impossible in a z-invariant system. The above is just the integrated Lorentz force in the z-direction.
So from conservation of momentum, you conclude that the above is zero. This allows you to see that in the static case, $B_y$ and $B_x$ are derived from a vector potential $A_z$ just as E is derived from a potential $\phi$. The differential form formulation gives you this automatically of course.
Just as the conserved mechanical energy quantity in a static background E field is the kinetic energy plus $q\phi$, the conserved mechanical momentum in a z-independent background is the mechanical z-momentum plus $qA_z$.
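Explicitly, one way to see the last statement (working in a gauge where $A$ is independent of $t$ and $z$, which such backgrounds admit): there $B_x=\partial_y A_z$, $B_y=-\partial_x A_z$ and $E_z=0$, so the Lorentz force gives $$\frac{dp_z}{dt}=q\,(v_x B_y - v_y B_x)=-q\left(v_x\,\partial_x A_z+v_y\,\partial_y A_z\right)=-q\,\frac{dA_z}{dt},$$ the last step being the chain rule along the trajectory together with $\partial_t A_z=\partial_z A_z=0$; hence $\frac{d}{dt}\left(p_z+qA_z\right)=0$.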
-
Thanks for your interesting answer. What do you mean by making the z direction periodic? – John McVirgo Aug 23 '11 at 20:08
I just mean consider a situation where the electric and magnetic field are z-independent, and space is imagined to be a box so that positions z and z+L are identified. I do this so that the box one integrates the 2-form over in the z-direction is finite. – Ron Maimon Aug 25 '11 at 3:21
Your answer considered a particular case for $z$ being periodic and the physical conditions independent of $z$, but didn't make it easy for people less gifted like me to see how your expression could be generalised to any static electric field. If you can edit your answer to show this as a conclusion, I'll make yours the accepted answer. – John McVirgo Aug 25 '11 at 20:58
But your original question considered "t independent" conditions. Momentum is to energy as space is to time, so "t independent" turns into z-independent. The answer gives the analog of kinetic + potential energy: it is "mechanical momentum plus vector potential". There is no generalization for static electric fields. The generalization involves magnetic fields. – Ron Maimon Aug 26 '11 at 0:29
Jackson and Griffiths don't say anything about this stuff so can you recommend a good book that goes into this? – Physiks lover Jun 22 '12 at 18:01
show 2 more comments
For this particular case, there doesn't appear to be an obvious expression that comes from the conservation of momentum, analogous to that for the conservation of energy. However, Noether's theorem offers a route in determining one of the conserved quantities as being the canonical momentum, which on rearranging gives the Lorentz force law for a static electric field: $$\frac d {dt} \gamma mv = qE$$
-
I gave the obvious expression right here.. What's going on? – Ron Maimon Aug 25 '11 at 17:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312205910682678, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/112336/average-of-i-i-d-exponential-random-variables
|
# average of i.i.d. exponential random variables
I have a question about the following probability:
$Pr\{\frac{\sum^N_{k=1}u_k}{N}<1\}$
where $u_k\sim \exp(1)$ are i.i.d. exponential random variables with mean one (also, $\frac{\sum^N_{k=1}u_k}{N}$ is gamma distributed).
I have plotted this probability for different $N$. The plot shows that as $N$ increases, this probability approaches $0.5$. Is this a well-known result? Has someone proved it already? If not, how to prove it rigorously?
-
## 1 Answer
You are right. This type of result is, however, not new, and applies to a very large class of random variables of a shape similar to yours. In particular, the fact that the random variables being averaged have exponential distribution has almost nothing to do with the result.
For a proof, it is probably best to use the Central Limit Theorem. Call your random variable (the average of the $N$ exponentials) by the name $Y_N$.
Since you are taking an average of $N$ random variables with mean $1$, the random variable $Y_N$ has mean $1$. Since each of the exponentials has variance $1$, and they are independent, their sum has variance $N$, and therefore $Y_N$, which is $1/N$ times the sum, has variance $\frac{N}{N^2}$, which is $\frac{1}{N}$.
The Central Limit Theorem says that if $Y_N$ is the average of $N$ independent random variables with mean $\mu$ and variance $\sigma^2$, then $$\lim_{N\to\infty}P\left(\sqrt{N}(Y_N-\mu)\le z\right)=\Phi(z/\sigma),$$ where $\Phi$ is the cumulative distribution function of the standard normal. In our case, we have $\mu=1$ and $\sigma=1$. So we can rewrite the above result as $$\lim_{N\to\infty}P\left(Y_N\le 1+\frac{z}{\sqrt{N}}\right)=\Phi(z). \qquad(\ast)$$ Now just put $z=0$. Since $\Phi(0)=1/2$, we get the fact that you observed.
With relatively well-behaved random variables like your mean $1$ exponentials, the approach to normality is very rapid. Thus we can, for largish $N$, remove the limit part, and use $(\ast)$ as an estimate of $P(Y_N-1\le \frac{z}{\sqrt{N}})$.
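For the record, the convergence is easy to see numerically; a small sketch (numpy/scipy assumed) that compares the exact value (the sum of $N$ unit exponentials is Gamma$(N,1)$, so $P(Y_N<1)=P(\mathrm{Gamma}(N,1)<N)$) with a Monte Carlo estimate:

```python
# Exact and simulated values of P(Y_N < 1) for growing N; both approach 1/2.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
for N in (1, 10, 100, 1000):
    exact = gamma.cdf(N, a=N)                     # P(Gamma(N, 1) < N)
    sims = rng.exponential(size=(10000, N)).mean(axis=1)
    print(N, round(exact, 4), round((sims < 1).mean(), 4))
```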
-
thank you very much for such a detailed proof. It's a rigorous proof, and very clear. Thanks for these inputs. – Scholli Feb 23 '12 at 9:24
By the way, I assume that the last probability is $P(Y_N<1)<\frac{z}{\sqrt{N}}$. – Scholli Feb 23 '12 at 9:25
@Scholli: Thanks for pointing out the typo. Fixed. But it may not be the only one. – André Nicolas Feb 23 '12 at 9:44
I think the other calculations are correct ;) – Scholli Feb 23 '12 at 14:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378731846809387, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/66720/are-all-rings-groups?answertab=votes
|
# Are all rings groups? [closed]
Also are all groups rings?
The book I'm reading says that what you have is not necessarily a ring if you have a group. However, that is strange, because rings are used to define groups. A ring is a group that is also a monoid. Yet how can it not be a ring?
Any decent books on ring theory? The book I'm using, Lang's Algebra, is confusing as hell. I don't even know what a group is from that book. What is a field? It mentions it without giving an example of a field. What is a monoid? Is Z a field? It looks like a field.
-
A group is a set G combined with binary operator *, and a ring is a set G combined with two binary operators +,$\times$, where the G,+ is a commutative group. So rings are defined using groups. – Dimitri Surinx Sep 22 '11 at 17:09
What is your definition of a ring? Usually a ring is a set $R$ equipped with two operations $+$ and $\cdot$. $(R, +)$ is an abelian group, and $(R, +, \cdot)$ has other properties such as associativity of the multiplication and distributivity. Also I think every group is a monoid. – William Sep 22 '11 at 17:10
Also: If you take 2 minutes and browse the site a bit, you will immediately see that no one says «what the hell is X?» here. Please be civil. In any case, it is rather clear that you do not know the basics of abstract algebra, so this line of questioning is more or less useless (among other things, Lang was not writing the book with you in his intended audience). You should simply pick some other book, which starts at a more basic level. – Mariano Suárez-Alvarez♦ Sep 22 '11 at 17:16
-1 for the way the question is asked. Moreover, you could easily find clear definitions of group, ring, field, etc., by looking at, e.g., Wikipedia. – Keenan Kidwell Sep 22 '11 at 17:17
I feel like complexity is a troll... – Dimitri Surinx Sep 22 '11 at 17:19
## closed as not a real question by Qiaochu Yuan Sep 22 '11 at 19:00
## 2 Answers
A ring is a set $R$ with two binary operations, usually dentoed by $+$ and $\times$, such that $(R,+)$ is an abelian group, $(R,\times)$ is a semigroup (if you like your rings to have identities, then you require it to be a monoid), and such that the two operations are connected via the distributive laws: $$a\times(b+c) = (a\times b)+(a\times c)\quad\text{and}\quad (x+y)\times z = (x\times z) + (y\times z)\quad\text{for all }a,b,c,x,y,z\in R.$$
So given a ring $(R,+,\times)$, you get a group by "forgetting" about the operation $\times$; and you get a semigroup/monoid by "forgetting" about the operation $+$. In that sense, every ring is also a group (under addition).
It is wrong to say "A ring is a group that is a monoid", because you are not precise enough in refering to the operations, and to the connection between the two operations.
I am also confused by your claim that "rings are used to define groups". How so? A group is defined to be a set $G$ together with an operation $\cdot$ that is associative, has a two-sided identity element, and has two-sided inverses for each element. No notion of ring is harmed in the production of this definition. It is true that many structures that are in fact rings are used to provide examples of groups, but you don't define groups in terms of rings.
Lang's Algebra defines groups on page 7 (Revised 3rd Edition, Springer-Verlag GTM 211) as "a monoid such that for every element $x\in G$ there exists an element $y\in G$ such that $xy=yx=e$." It defines a monoid on page 3 as a set $G$ with a "law of composition" (a binary operation) that is associative and has a unit element.
Lang defines "field" on page 84: a field is a commutative ring (the operation $\times$ is commutative) in which there is an identity for $\times$, denoted $1$, such that $1\neq 0$ ($0$ is the identity for $+$), and in which every nonzero element has a multiplicative inverse: for every $a$, if $a\neq 0$ then there exists $b$ such that $a\times b = b\times a = 1$. $\mathbb{Z}$ is not a field, because $2$ does not have a multiplicative inverse. It is a ring (in fact, an integral domain).
Common examples of fields are: $\mathbb{Q}$, with the usual addition and multiplication; $\mathbb{R}$ with the usual addition and multiplication; $\mathbb{C}$, with the usual addition and multiplication; $\mathbb{Z}/p\mathbb{Z}$, the integers modulo $p$ with $p$ a prime, using modular addition and modular multiplication.
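A throwaway brute-force check of the definition (plain Python; the helper is mine): in $\mathbb{Z}/n\mathbb{Z}$, searching for multiplicative inverses directly shows that $\mathbb{Z}/5\mathbb{Z}$ is a field while $\mathbb{Z}/6\mathbb{Z}$ is not.

```python
# List the invertible residues mod n; Z/nZ is a field iff all of 1..n-1 appear.
def units(n):
    return [a for a in range(1, n)
            if any(a * b % n == 1 for b in range(1, n))]

print(units(5))   # [1, 2, 3, 4] -> every nonzero class invertible: a field
print(units(6))   # [1, 5]       -> 2, 3, 4 have no inverses: not a field
```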
What is Q and R? Are they just numbers? – complexity Sep 22 '11 at 17:34
Q and R are the set of all rational numbers and the set of all real numbers, respectively. C refers to complex numbers. Z refers to integers. – Vivek Viswanathan Sep 22 '11 at 18:13
$\mathbb{N}$ine $\mathbb{Z}$ulu $\mathbb{Q}$ueens $\mathbb{R}$uled $\mathbb{C}$hina: the natural numbers are contained in the integers are contained in the rationals are contained in the reals are contained in complex numbers. – anon Sep 22 '11 at 18:39
@anon: Heh, I wanted to remember the quaternions as well back in the day, so I came up with "$\mathbb N$o $\mathbb Z$ulu $\mathbb Q$ueens $\mathbb R$ate $\mathbb C$ountless $\mathbb H$ours"... – J. M. Sep 22 '11 at 19:22
I hope this doesn't confuse you further, but there is also such a thing as a "group ring". Given any group $G$ and a ring $R$, you define $R[G]$ as the set of "linear combinations of elements of $G$ with coefficients in $R$", i.e. elements of $R[G]$ are sums of terms $r g$ with $r \in R$ and $g \in G$. Multiplication is defined by the distributive laws and $(r g) \cdot (s h) = (r s)(g h)$ for $r, s \in R$ and $g, h \in G$.
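For instance, take $R=\mathbb{Z}$ and $G=\{e,g\}$ the group of order two (so $g^2=e$). A product in $\mathbb{Z}[G]$ is computed by distributing and collecting coefficients:
$$(e+2g)(3e+g)=3e+g+6g+2g^2=(3+2)e+(1+6)g=5e+7g.$$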
http://physics.stackexchange.com/questions/43421/translations-of-field-operators-in-qft
# Translations of field operators in QFT
A question from the book QFT by Srednicki: this concerns the relativistic QFT generalization
$$\tag{2.21} {{e}^{-i\hat{P}x/\hbar}}\psi (0){{e}^{i\hat{P}x/\hbar}}~=~\psi (x)$$
of the formula
$$\tag{2.20} {{e}^{i\hat{H}t/\hbar}}\psi (\vec{x},0){{e}^{-i\hat{H}t/\hbar}}~=~\psi (\vec{x},t)$$
from non-relativistic QM.
How can we prove these two formulas using just the Poincaré algebra? Here $\psi (x)$ is a field operator. The second formula can be proven using the Schrödinger equation, but I think there is a proof that uses only the Poincaré algebra. I just have no idea about this.
## 4 Answers
In the answer of Argopulos, one must assume that the Taylor series converges. This is never done, and maybe physicists don't care.
From a mathematical perspective, (2.20) is in fact the definition of the dynamics, and the commutator relation follows simply by differentiating both sides. No special assumption is needed to get this.
The symmetry group of the spacetime$^1$ on which a QFT is defined is usually required to have a representation on the space of states.
Quantum mechanics is just QFT in one dimension. The spacetime in this case is the time line $\mathbb R$. The fields are $X(t)$ and $P(t)$, and the symmetry group is the group of translations $t\rightarrow t+b$ of $\mathbb R$. The infinitesimal form of the (Hermitian) time translation operator is the Hamiltonian $H=i\partial/\partial t$. One requires that the Hilbert space of states carry a representation of this group, which in the Heisenberg picture is given as $$\exp(iHt)X(0)\exp(-iHt)=X(t),\qquad \exp(iHt)P(0)\exp(-iHt)=P(t).$$ The case of QFT on Minkowski space is similar. We again require the symmetry group (the Poincaré group in this case) to have a representation on the space of states. In particular, for the subgroup of the Poincaré group generated by the four translation operators $P_0,P_1,P_2,P_3$ we require $$\exp(iP_0x^0)\psi(0)\exp(-iP_0x^0)=\psi(x^0,0,0,0),$$ $$\exp(iP_1x^1)\psi(0)\exp(-iP_1x^1)=\psi(0,x^1,0,0),$$ $$\exp(iP_2x^2)\psi(0)\exp(-iP_2x^2)=\psi(0,0,x^2,0),$$ $$\exp(iP_3x^3)\psi(0)\exp(-iP_3x^3)=\psi(0,0,0,x^3).$$ Since the $P_\mu$ commute with each other, these conditions can be written collectively as $$\exp(iPx)\psi(0)\exp(-iPx)=\psi(x),$$ where $P=(P^\mu)$, $P^\mu=\eta^{\mu\nu}P_\nu$ and $\eta=\mathrm{diag}(1,-1,-1,-1)$. Srednicki's sign convention is different, I think.
1. For example, if the spacetime is a manifold with metric $\eta$, then the symmetry group is the group of transformations which preserve $\eta$.
Taylor expand the right-hand side, and you will see that it matches, order by order in $x$, with the left-hand side. You just need to know the definition of $\hat{P}$, i.e. $[\hat{P},\hat{\psi}(x)]=i \hat{\psi}'(x)$, where the prime denotes derivative with respect to the coordinate associated to $\hat{P}$. Of course, you need to remember that the momenta all commute among each other (and in fact derivatives do commute). Back to your original intuition, you need to know in practice only the abelian subalgebra of the Poincaré algebra.
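For example, to first order (with $\hbar=1$, as in the commutator above):
$$e^{-i\hat Px}\,\hat\psi(0)\,e^{i\hat Px}=\hat\psi(0)-ix\,[\hat P,\hat\psi(0)]+O(x^2)=\hat\psi(0)+x\,\hat\psi'(0)+O(x^2),$$
which matches the Taylor expansion of $\hat\psi(x)$; the higher orders work the same way with nested commutators.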
The formula $e^{-i\hat{P}a/\hbar}\hat\psi(x)e^{i\hat{P}a/\hbar}=\hat\psi(x+a)$ is just the finite-transformation version of the infinitesimal formula $[\hat{P},\hat{\psi}(x)]=i\hbar\partial_x\hat{\psi}(x)$ mentioned by DaniH and Argopulos above. $e^{-i\hat{P}a/\hbar}$ is a finite transformation operator acting on the Hilbert space, while $x\rightarrow x+a$ is the finite transformation acting on the field space. The infinitesimal transformations, i.e. the generators of the Lie group, are given by $\hat P$ and $i\hbar\partial_x$ respectively.
The finite transformation can be derived from the infinitesimal one as follows:
$e^{-i\hat{P}a/\hbar}\hat\psi(x)e^{i\hat{P}a/\hbar}=e^{-i(a/\hbar)[\hat{P},\cdot]}\hat\psi(x)=e^{-i(a/\hbar)(i\hbar\partial_x)}\hat\psi(x)=e^{a\partial_x}\hat\psi(x)=\hat\psi(x+a)$.
http://mathematica.stackexchange.com/questions/10185/how-do-you-define-an-affine-transformation-that-is-translated-by-itself?answertab=votes
# How do you define an affine transformation that is translated by itself?
This one should be simple, but I can't figure it out. I would like to define a transformation (function) such that $r$ maps to $m.r + r$
The help file on AffineTransform says that
````AffineTransform[{m,v}]
````
maps $r$ to $m.r + v$
so for example
````tt = AffineTransform[{d*IdentityMatrix[2], {a, b}}]
tt[{x,y}]
{a + d x, b + d y}
````
But how would I define a transformation such that it would return
````tt[{x,y}]
{x + d x, y + d y}
````
for any `{x, y}`?
And that it could be applied to graphics objects, e.g,
````Graphics[GeometricTransformation[Rectangle[], tt[d]]]
````
where I have added a function argument `d` which is a parameter of the transformation function.
## 1 Answer
Summary
There are several things going on here:
1. The `a` and `b` in the expression are fixed parameters and don't substitute for your inputs. Try a pure function or a standard function using `SetDelayed` as shown below.
2. What you are trying to create is not an `AffineTransformation`.
3. It's not immediately obvious how to pass the coordinates back into the transformation function (to this economist, it's not even clear if this works geometrically).
Part of the answer (item 1) lies in the difference between a function and a `TransformationFunction`. What you want is to substitute `a` and `b` for whatever values you pass it. Try this instead:
````ttt[{a_, b_}] :=
AffineTransform[{d*IdentityMatrix[2], {a, b}}][{a, b}]
ttt[{x, y}]
(* {x + d x, y + d y} *)
````
What's going on here? Looking at the output of your original `tt` function, reveals a `TransformationFunction`, which defines a function much like a pure function that can then be applied to some data.
````tt
(*TransformationFunction[{{d, 0, a}, {0, d, b}, {0, 0, 1}}]*)
TransformationFunction[{{d, 0, a}, {0, d, b}, {0, 0, 1}}][{p,q}]
(* {a + d p, b + d q} *)
````
But the `a` and `b` are fixed parameters, not variables that are substituted by the data passed to the function. That's why you need to do it as per the `ttt` function above, using `SetDelayed` (`:=`).
An alternative way to do what you want would be to define your function as a pure function. Notice the ampersand and the `Slot` (`#`) notation.
````ttpure = AffineTransform[{d*IdentityMatrix[2], {#1, #2}}][{#1, #2}] &
````
But watch out, the syntax of the expected input is a bit different now - it expects two arguments, not inside curly braces (`List` expression):
````ttpure[x, y]
(* {x + d x, y + d y} *)
````
If you have a pair of variables inside a list, then you need to use `Apply` (`@@`).
````ttpure @@ {x, y}
(* {x + d x, y + d y} *)
````
Alternatively, you can restore the behaviour of `ttt` in taking a two-element list as the input by having a single `Slot` argument and accessing its `Part`s like this:
````ttp2 = AffineTransform[{d*
IdentityMatrix[2], {#[[1]], #[[2]]}}][{#[[1]], #[[2]]}] &
````
Personally I find the `#[[]]` notation to be quite ugly, but it works as required as long as you actually pass an argument that is a list with at least two elements:
````ttp2[{x, y}]
(* {x + d x, y + d y} *)
ttp2[{x}]
````
Part::partw: Part 2 of {x} does not exist. >>
Part::partw: Part 2 of {x} does not exist. >>
````(* {x + d x, {x}[[2]] + d {x}[[2]]} *)
ttp2[{x, y, z}]
(* {x + d x, y + d y} *)
````
Edit in response to comment
To actually make this work on a graphic, you need everything to be numeric. Notice I've added a third argument (`#3`) here.
````ttpure = AffineTransform[{#3*IdentityMatrix[2], {#1, #2}}][{#1, #2}] &
Framed@Graphics[GeometricTransformation[Rectangle[],
ttpure[0.3, 0.6, 0.2]], PlotRange -> 1]
````
However, this isn't quite what you want, since the actual coordinates aren't passed into the `TransformationFunction`, only the parameters you pass. (This will have to wait for a later update.)
Fantastic! This works exactly. Thanks for your quick reply. – ShaunH Sep 4 '12 at 3:42
I'm not sure if I understand quite correctly. Does this mean ttt is not a TransformationFunction? I.e, it doesn't seem like I can use it to transform a graphics object (e.g. a Rectangle) using GeometricTransformation. I.e. `Graphics[GeometricTransformation[Rectangle[], ttt]]` – ShaunH Sep 4 '12 at 3:52
You are welcome @ShaunH - have a look at my edit. It might be helpful. And by the way, welcome to Mathematica.SE! – Verbeia♦ Sep 4 '12 at 3:52
`ttt` is a function that applies a `TransformationFunction` to the input, rather than just defining the function. Hopefully my addition, especially `ttpure` clarifies the difference. For the case you mention in comments, try `ttpure`. – Verbeia♦ Sep 4 '12 at 3:53
Thanks for the edit @Verbeia, but I don't quite understand why you would have to specify the coordinates `x` and `y`. Is there a way to make `d` the (only) argument, so for example, `Framed@Graphics[GeometricTransformation[Rectangle[], ttpure[0.2]]]` where the 0.2 is the chosen `d`? – ShaunH Sep 4 '12 at 4:14
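Regarding that last comment, here is an untested sketch of one possibility (the function name `tt` and the call below are my own, not from the answer): since $m.r + r = (m + I).r$, and with $m = d\,I$ this is just the scaling $r\mapsto(1+d)r$, one could presumably define `tt[d_] := AffineTransform[{(1 + d) IdentityMatrix[2]}]`, which returns a `TransformationFunction` that `GeometricTransformation` accepts directly, e.g. `Graphics[GeometricTransformation[Rectangle[], tt[0.2]]]`.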
http://conservapedia.com/Product_rule
# Product rule
The Product Rule is a rule in calculus for the derivative of a product of two functions. In Leibniz notation it is as follows:
$\frac{d}{dx} (uv) = u \frac{dv}{dx} + v \frac{du}{dx}$
In words, the product rule says that the derivative of $uv$ is $u$ multiplied by the derivative of $v$, plus $v$ multiplied by the derivative of $u$. The commutative property allows the two terms to be interchanged, so the rule is occasionally introduced with the terms in the reverse order.
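For example, taking $u=x$ and $v=\sin x$:
$$\frac{d}{dx}\left(x\sin x\right)=x\,\frac{d}{dx}\sin x+\sin x\,\frac{d}{dx}x=x\cos x+\sin x.$$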
http://www.scholarpedia.org/article/Normal_form
# Normal Forms
James Murdock (2006), Scholarpedia, 1(10):1902.
A normal form of a mathematical object, broadly speaking, is a simplified form of the object obtained by applying a transformation (often a change of coordinates) that is considered to preserve the essential features of the object. For instance, a matrix can be brought into Jordan normal form by applying a similarity transformation. This article focuses on normal forms for autonomous systems of differential equations (vector fields or flows) near an equilibrium point. Similar ideas can be used for discrete-time dynamical systems (diffeomorphisms) near a fixed point, or for flows near a periodic orbit.
## Basic Definitions
The starting point is a smooth system of differential equations with an equilibrium (rest point) at the origin, expanded as a power series
$$\dot x = Ax + a_1(x) + a_2(x) +\cdots,$$
where $$x\in{\mathbb R}^n$$ or $${\mathbb C}^n\ ,$$ $$A$$ is an $$n\times n$$ real or complex matrix, and $$a_j(x)$$ is a homogeneous polynomial of degree $$j+1$$ (for instance, $$a_1(x)$$ is quadratic). The expansion is taken to some finite order $$k$$ and truncated there, or else is taken to infinity but is treated formally (the convergence or divergence of the series is ignored). The purpose is to obtain an approximation to the (unknown) solution of the original system that will be valid over an extended range in time. The linear term $$Ax$$ is assumed to be already in the desired normal form, usually the Jordan or a real canonical form. A transformation to new variables $$y$$ is applied, having the form
$$x=y+u_1(y)+u_2(y)+\cdots,$$
where $$u_j$$ is homogeneous of degree $$j+1\ .$$ This results in a new system
$$\dot y = Ay + b_1(y) + b_2(y) +\cdots,$$
having the same general form as the original system. The goal is to make a careful choice of the $$u_j\ ,$$ so that the $$b_j$$ are "simpler" in some sense than the $$a_j\ .$$ "Simpler" may mean only that some terms have been eliminated, but in the best cases one hopes to achieve a system that has additional symmetries that were not present in the original system. (If the normal form possesses a symmetry to all orders, then the original system had a hidden approximate symmetry with transcendentally small error.)
Among many historical references in the development of normal form theory, two significant ones are Birkhoff (1996) and Bruno (1989). As the Birkhoff reference shows, the early stages of the theory were confined to Hamiltonian systems, and the normalizing transformations were canonical (now called symplectic). The Bruno reference treats in detail the convergence and divergence of normalizing transformations.
## An Example
A basic example is the nonlinear oscillator with $$n=2$$ and $A=\left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right].$ In this case it is possible (no matter what the original $$a_j$$ may be) to achieve $$b_j=0$$ for $$j$$ odd and to eliminate all but two coefficients from each $$b_j$$ with $$j$$ even. More precisely, writing $$r^2=y_1^2+y_2^2\ ,$$ a normal form in this case is $\dot y = Ay + \sum_{i=1}^{\infty} \alpha_ir^{2i}y + \beta_ir^{2i}Ay \ .$ In polar coordinates this becomes $\dot r = \alpha_1 r^3 + \alpha_2r^5+\cdots$ $\dot\theta = 1 + \beta_1r^2 + \beta_2r^4+\cdots \ .$ The first nonzero $$\alpha_i$$ determines the stability of the origin, and the $$\beta_i$$ control the dependence of frequency on amplitude. Also the normalized system has achieved symmetry (more technically, equivariance) under rotation about the origin. Although the classical (or level-one) approach to normal forms stops with the form obtained above for this example, it is important to note that neither the coefficients $$\alpha_i$$ and $$\beta_i$$ in the equation, nor the transformation terms $$u_j$$ used to achieve the equation, are uniquely determined by the original $$a_j\ .$$ In fact, by a more careful choice of the $$u_j\ ,$$ it is possible to put the nonlinear oscillator into a hypernormal form (also called a unique, higher-level, or simplest normal form) in which all but finitely many of the coefficients $$\alpha_i$$ and $$\beta_i$$ are zero. Hypernormal forms are difficult to calculate, and from here on we speak only of classical normal forms.
## Asymptotic Consequences of Normal Forms
For some systems, the normal form (truncated at a given degree) is simple enough to become solvable. In this case it is of interest to ask whether this solution gives rise to a good approximation (an asymptotic approximation in some specific sense) to a solution of the original equation (say, with the same initial condition). The answer is "sometimes yes". ("Gives rise to" means that the solution of the truncated normal form usually must be fed back through the transformation to normal form.) Some popular books, such as Nayfeh (1993), present the subject entirely from this point of view, without proving any error estimates or noticing that there are cases in which asymptotic validity cannot hold. Several theorems and open questions in this regard are given in chapter 5 of Murdock (2003). The most basic theorem states that an asymptotic error estimate with respect to a small parameter holds if (a) the parameter is introduced correctly, (b) the matrix of the linear term is semisimple (see below) and has all its eigenvalues on the imaginary axis, and (c) the semisimple normal form style (see below) is used. Although the asymptotic use of normal forms is important when it is true, and has many practical applications, the primary importance of normal forms is as a preparatory step towards the study of qualitative dynamics, unfoldings, and bifurcations.
## Geometrical Consequences of the Normal Form
It has already been pointed out that a normal form can decide stability questions and establish hidden symmetries. Computing the normal form up to degree $$k$$ also automatically computes (to degree $$k$$) the stable, unstable, and center manifolds, the center manifold reduction, and the fibration of the center-stable and center-unstable manifolds over the center manifold. The common practice of computing the center manifold reduction first, and then computing the normal form only for this reduced system, seems to save work but loses many of these results. See chapter 5 of Murdock (2003).
On occasion, the truncation of a normal form produces a simple system that is topologically equivalent to the original system in a neighborhood of the equilibrium, called topological normal form. For instance, in the example above, truncating after the first nonvanishing $$\alpha_i$$ will accomplish this, but if all $$\alpha_i$$ are zero, the topological behavior is probably determined by a transcendentally small effect that is not captured by the normal form.
Normal forms are important for determining bifurcations of a system, but this requires the inclusion of unfolding parameters.
## The Homological Equation and Normal Form Styles
In the general case, we define the Lie derivative operator $$L_A$$ associated with the matrix $$A$$ by $$(L_A v)(x)=v'(x)Ax-Av(x)\ ,$$ where $$v$$ is a vector field and $$v'$$ is its matrix of partial derivatives. Then $$L_A$$ maps the vector space $$\mathcal{V}_j$$ of homogeneous vector fields of degree $$j+1$$ into itself. The relation between the $$a_j\ ,$$ $$b_j\ ,$$ and $$u_j$$ is determined recursively by the homological equations $L_A u_j = K_j - b_j \ ,$ where $$K_1=a_1$$ and $$K_j$$ equals $$a_j$$ plus a correction term computed from $$a_1,\dots,a_{j-1}$$ and $$u_1,\dots,u_{j-1}\ .$$ Let $$\mathcal{N}_j$$ be any choice of a complementary subspace to the image of $$L_A$$ in $$\mathcal{V}_j\ ;$$ then it is possible to choose the $$u_j$$ so that each $$b_j\in \mathcal{N}_j\ .$$ (Take $$b_j=P_j K_j\ ,$$ where $$P_j:\mathcal{V}_j\rightarrow\mathcal{N}_j$$ is the projection map, and note that the homological equation can be solved, nonuniquely, for $$u_j\ .$$) The choice of $$\mathcal{N}_j$$ is called a normal form style, and represents the preference of the user as to what is considered "simple". The purpose of this procedure is to ensure that the higher-order correction terms, $$u_j\ ,$$ are bounded, so that the approximation to the solution, $$x(t)\ ,$$ is valid over an extended range in time.
The theory breaks into two cases according to whether $$A$$ is semisimple (diagonalizable) or not. The semisimple case, illustrated by the nonlinear oscillator above, is the easiest, and there is only one useful style (in which $$\mathcal{N}_j$$ is the kernel of $$L_A$$), ultimately due to Poincaré. It is easy to describe the semisimple normal form if $$A$$ is diagonal with diagonal entries $$\lambda_1,\dots,\lambda_n$$ (which usually requires introducing complex variables with reality conditions): The $$r$$th equation (for $$\dot y_r$$) of the normalized system will contain only monomials $$y_1^{m_1}\cdots y_n^{m_n}$$ satisfying $m_1\lambda_1+\cdots+m_n\lambda_n-\lambda_r=0 \ .$ Such monomials are called resonant because for pure imaginary eigenvalues, this equation becomes a resonance among frequencies in the usual sense. An elementary treatment of normal forms in the semisimple case only is by Kahn and Zarmi (1998).
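As a worked illustration of the resonance condition, consider the nonlinear oscillator above: in complex coordinates $z_1=y_1+iy_2$, $z_2=\bar z_1$, the matrix $A$ diagonalizes with eigenvalues $\lambda_1=i$ and $\lambda_2=-i$, so for the first component ($r=1$ in the formula above) the condition reads
$$m_1 i-m_2 i-i=0\quad\Longleftrightarrow\quad m_1=m_2+1,$$
and the resonant monomials are $z_1^{m_2+1}z_2^{m_2}=(z_1 z_2)^{m_2}z_1$. Since $z_1 z_2=r^2$ in the original real coordinates, this recovers exactly the $r^{2i}$ structure of the normal form given earlier.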
## The Nonsemisimple Case
In the nonsemisimple case there are two important styles, the inner product normal form, originally due to Belitskii but popularized by Elphick et al. (1987), and the sl(2) normal form due to Cushman and Sanders. In the inner product style, $$\mathcal{N}_j$$ is the kernel of $$L_{A^*}\ ,$$ $$A^*$$ being the adjoint or conjugate transpose of $$A\ .$$ In the sl(2) style, $$\mathcal{N}_j$$ is the kernel of an operator defined from $$A$$ using the theory of the Lie algebra sl(2). The inner product style is more popular at this time, but the sl(2) style has a much richer mathematical structure with deep connections to sl(2) representation theory and to the classical invariant theory of Cayley, Sylvester and others. Because of this the sl(2) style has computational algorithms that are not available for the inner product style. There is also a simplified normal form style that is derived from the inner product style by changing the projection.
A modern introduction to normal form theory, containing all the styles mentioned here with references and historical remarks, may be found in the monograph by Murdock (2003). Some more recent developments are contained in the last few chapters of Sanders, Verhulst, and Murdock (2007).
## References
• Poincaré, H., New Methods of Celestial Mechanics (Am. Inst. of Physics, 1993).
• Birkhoff, G.D., Dynamical Systems (Am. Math. Society, Providence, 1996).
• Arnold, V.I., Geometrical Methods in the Theory of Ordinary Differential Equations (Springer-Verlag, New York, 1988).
• Bruno, A.D., Local Methods in Nonlinear Differential equations (Springer-Verlag, Berlin, 1989).
• Elphick C., Tirapegui E., Brachet M.E., Coullet P., and Iooss G. A simple global characterization for normal forms of singular vector fields. Physica D, 29:95-127(1987).
• Nayfeh, A.H., Method of Normal Forms. (Wiley, New York, 1993).
• Kahn P.B. and Zarmi Y., Nonlinear Dynamics: Exploration through Normal Forms. (Wiley, New York, 1998).
• Murdock J. Normal Forms and Unfoldings for Local Dynamical Systems. (Springer, New York, 2003).
• Jan Sanders, Ferdinand Verhulst, and James Murdock, Averaging Methods in Nonlinear Dynamical Systems, Springer, New York, 2007, xxiii+431.
Internal references
• Jack Carr (2006) Center manifold. Scholarpedia, 1(12):1826.
• Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.
• Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
• James Murdock (2006) Unfoldings. Scholarpedia, 1(12):1904.
http://math.stackexchange.com/questions/204864/number-of-ways-to-choose-shoes-so-that-there-will-be-no-complete-pair/204869
# Number of ways to choose shoes so that there will be no complete pair
I am just stuck with a problem and need your help. The question is:
A closet has 5 pairs of shoes. The number of ways in which 4 shoes can be chosen from it so that there will be no complete pair is (a) 80, (b) 160, (c) 200, (d) none of these.
I think the answer should be $\binom{10}{1}\cdot\binom{8}{1}\cdot\binom{6}{1}\cdot\binom{4}{1}$. But I seriously doubt this could be the answer. Please help me in solving this.
## 1 Answer
Hint: The four shoes must come from 4 different pairs.
For a specific set of 4 pairs, there are $2^4$ ways of choosing the shoes. There are 5 possible sets of 4 pairs. Hence there are $5\cdot 2^4 = 80$ ways in total.
Actually your idea is right, but you are counting the number of ordered sequences of four shoes. Divide by $4!$ and you get the answer you are looking for.
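Carrying this out: each unordered choice of four shoes is counted $4!$ times among the ordered sequences, so
$$\frac{\binom{10}{1}\binom{8}{1}\binom{6}{1}\binom{4}{1}}{4!}=\frac{10\cdot 8\cdot 6\cdot 4}{24}=\frac{1920}{24}=80=5\cdot 2^4,$$
in agreement with the count above, so the answer is (a).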
Why should I divide by 4!? – Mistu4u Oct 1 '12 at 4:36
http://math.stackexchange.com/questions/91849/how-to-calculate-fx-using-fast-binary-exponentiation
# how to calculate $f^x$ using fast binary exponentiation?
Consider some function $f : \{1,2,\ldots,n\} \rightarrow \{1,2,\ldots,n\}$. I want to calculate $f^x$. It can be easily done in time $O(nx)$ where $n$ is the number of elements in the set.
I've found some formula $f^{2k+1} = f^{2k} f$ and my source says we can use this to do fast binary exponentiation. In fact I know how to calculate $a^x$ where $a$ is some integer using fast binary exponentiation, but I have no idea how to do it for functions/permutations.
Thanks for any hints.
## 3 Answers
A nice way to think about it is to notice that a function from any finite set to itself can be represented as a tuple, with the $i$th element giving the image of $i$ under the function: for example, $(2,3,4,1)$ is a representation of a function from the set $\{1,2,3,4\}$ to itself.
I'll write all my code using MATLAB syntax, as I think it's particularly easy to read, and arrays index from 1, which is sometimes pleasant for mathematicians.
Function composition is composition of tuples, and it can be computed in linear time:
````function h = compose(f,g)
disp('Called compose')
for i = 1:length(f)
h(i) = f(g(i));
end
end
````
I've inserted a line to display a message every time the function composition routine is called. The squaring operator is then easily defined:
````function f2 = square(f)
f2 = compose(f,f);
end
````
And finally our exponentiation routine is:
````function h = exponentiate(f,n)
if n == 1 % The base case
h = f;
elseif mod(n,2) == 0
g = exponentiate(f,n/2);
h = square(g);
else
g = exponentiate(f,(n-1)/2);
h = compose(f,square(g));
end
end
````
We can now define a function and exponentiate it:
````>> f = [2,3,4,5,1];
>> exponentiate(f,2)
Called compose
ans =
3 4 5 1 2
>> exponentiate(f,16)
Called compose
Called compose
Called compose
Called compose
ans =
2 3 4 5 1
>> exponentiate(f,63)
Called compose
Called compose
Called compose
Called compose
Called compose
Called compose
Called compose
Called compose
Called compose
Called compose
ans =
4 5 1 2 3
````
And there you have it - the composition function is called approximately $\log_2(x)$ times when we compose the function with itself $x$ times. It takes $O(n)$ time to do the function composition and $O(\log x)$ calls to the composition routine, for a total time complexity of $O(n\log x)$.
Note that function composition is associative, so $f^{2k+1}=f^{2k}\circ f=(f^k)^2\circ f$, where $(\cdot)^2$ means composing a function with itself. The usual applications of fast binary exponentiation are either real numbers or matrices. Matrices can represent linear functions, and permutations are linear maps, so with naive matrix multiplication you would obtain an $O(n^3 \log x)$ algorithm, which would only be faster than the naive $O(nx)$ method for significantly large values of $x$.
But permutations are even more special. You have the cycle decomposition, which can be computed in linear ($O(n)$) time. Once you have the cycle representation, it is easy to compute iterated application. For general functions, the cycles might start with a chain, but the method would still work.
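To make the cycle idea concrete, here is a minimal sketch in Python (the names and the 0-indexed list representation are my own, not from the original answer); it computes $f^x$ for a permutation $f$ in $O(n)$ total time, independent of $x$:

````python
def iterate_permutation(f, x):
    """Return the tuple representation of f^x, for a permutation f
    given as a 0-indexed list: f[i] is the image of i."""
    n = len(f)
    result = [None] * n
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        # Collect the cycle containing `start`.
        cycle = []
        j = start
        while not seen[j]:
            seen[j] = True
            cycle.append(j)
            j = f[j]
        # Applying f x times moves each element x steps along its cycle,
        # so only x mod (cycle length) matters.
        L = len(cycle)
        shift = x % L
        for i, v in enumerate(cycle):
            result[v] = cycle[(i + shift) % L]
    return result

# Example: the 5-cycle 0 -> 1 -> 2 -> 3 -> 4 -> 0; f^63 shifts by 63 mod 5 = 3.
f = [1, 2, 3, 4, 0]
print(iterate_permutation(f, 63))  # [3, 4, 0, 1, 2]
````

For a general (non-bijective) function one would first walk each "tail" into the cycle it feeds, as suggested above; the per-cycle shifting is unchanged.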
Repeated squaring may be used to compute powers of any associative binary operation, i.e. it works in any semigroup. In particular, since function composition $\rm\:f\circ g\:$ is associative, it may be use to compute compositional powers of functions $\rm\:f^2 =\: f\circ f\:,\:$ etc. However, one should beware that repeated squaring can be much less efficient than repeated multiplication in contexts where the cost of multiplication and squaring depends on the size of the operands; for example, look up work by Richard Fateman on computing powers of sparse polynomials.
Note that the algorithm is easily memorized or reconstructed since it arises simply from writing the exponent in binary radix in Horner polynomial form, i.e. $\rm\ d_0 + x\ (d_1 + x\ (d_2\ +\:\cdots))\:$ for $\rm\:x=2\:$. Below is an example of computing $x^{101}$ by repeated squaring. Note that the repeated-square form arises simply from performing the substitutions $\rm\ 1\to x,\ \ 0\to 1,\ \ (x)\:2\to (x)^2\ $ into $101_{10} = 1100101_2$ expanded into Horner form, viz.
$$101 = (((((1\cdot 2+1)\,2+0)\,2+0)\,2+1)\,2+0)\,2+1 \;\leadsto\; x^{101} = \left(\left(\left(\left(\left(x^2\,x\right)^2\right)^2\right)^2 x\right)^2\right)^2 x.$$
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_1&diff=29764&oldid=29763
# User:Michiexile/MATH198/Lecture 1
## 2 Introduction
Why this course? What will we cover? What do we require?
## 3 Category
A graph is a collection G0 of vertices and a collection G1 of arrows. The structure of the graph is captured in the existence of two functions, which we shall call source and target, both going from G1 to G0. In other words, each arrow has a source and a target.
We denote by [v,w] the collection of arrows with source v and target w.
A category is a graph with some special structure:
• Each [v,w] is a set and equipped with a composition operation $[u,v] \times [v,w] \to [u,w]$. In other words, any two arrows, such that the target of one is the source of the other, can be composed to give a new arrow with target and source from the ones left out.
We write $f:u\to v$ if $f\in[u,v]$.
$u \to v \to w$ => $u \to w$
• The composition of arrows is associative.
• Each vertex v has a dedicated arrow 1v with source and target v, called the identity arrow.
• Each identity arrow is a left- and right-identity for the composition operation.
The composition of $f:u\to v$ with $g:v\to w$ is denoted by $gf:u\to w$. A mnemonic here is that you write things so associativity looks right. Hence, (gf)(x) = g(f(x)). This will make more sense once we get around to generalized elements later on.
### 3.1 Examples
• The empty category with no vertices and no arrows.
• The category 1 with a single vertex and only its identity arrow.
• The category 2 with two objects, their identity arrows and the arrow $a\to b$.
• For vertices take vector spaces. For arrows, take linear maps. This is a category, the identity arrow is just the identity map f(x) = x and composition is just function composition.
• For vertices take finite sets. For arrows, take functions.
• For vertices take logical propositions. For arrows take proofs in propositional logic. The identity arrow is the empty proof: P proves P without an actual proof. And if you can prove P using Q and then R using P, then this composes to a proof of R using Q.
• For vertices, take data types. For arrows take (computable) functions. This forms a category, in which we can discuss an abstraction that mirrors most of Haskell. There are issues making Haskell not quite a category on its own, but we get close enough to draw helpful conclusions and analogies.
• Suppose P is a set equipped with a partial ordering relation ≤. Then we can form a category out of this set with elements for vertices and with a single element in [v,w] if and only if v≤w. Then the transitivity and reflexivity of partial orderings show that this forms a category.
Some language we want settled:
A category is concrete if it is like the vector spaces and the sets among the examples - the collection of all sets-with-specific-additional-structure equipped with all functions-respecting-that-structure. We require already that [v,w] is always a set.
A category is small if the collection of all vertices, too, is a set.
## 4 Morphisms
The arrows of a category are called morphisms. This is derived from homomorphisms.
Some arrows have special properties that make them extra helpful; and we'll name them:
Endomorphism
A morphism with the same object as source and target.
Monomorphism
A morphism that is left-cancellable. Corresponds to injective functions. We say that f is a monomorphism if for any g1,g2, the equation fg1 = fg2 implies g1 = g2. In other words, with a concrete perspective, f doesn't introduce additional relations when applied.
Epimorphism
A morphism that is right-cancellable. Corresponds to surjective functions. We say that f is an epimorphism if for any g1,g2, the equation g1f = g2f implies g1 = g2.
Note, by the way, that cancellability does not imply the existence of an inverse. Epi's and mono's that have inverses realizing their cancellability are called split.
Isomorphism
A morphism is an isomorphism if it has an inverse. Split epi and split mono imply isomorphism.
Automorphism
An automorphism is an endomorphism that is an isomorphism.
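To spell out why split implies the cancellation property: if $r$ is a retraction for $f$, i.e. $rf=1$, then
$$fg_1=fg_2\ \Rightarrow\ g_1=(rf)g_1=r(fg_1)=r(fg_2)=(rf)g_2=g_2,$$
so a split mono is in particular a monomorphism; dually, a split epi is an epimorphism.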
## 5 Objects
In a category, we use a different name for the vertices: objects. This comes from the roots in describing concrete categories - thus while objects may be actual mathematical objects, but they may just as well be completely different.
Some objects, if they exist, give us strong
http://www.physicsforums.com/showthread.php?p=3991111
data fusion using extended kalman filter
I am presently working with data fusion of redundant sensor data, basically trying to put together data from an IMU, a gyro and odometry....
My question is whether the accuracy of the final state be greater than the accuracy of the most accurate sensor used in the ekf?
Thanks in advance for the help!
-Sethu
I'm not sure I understand the logic of your question. Are you asking if the accuracy of the signal from the filtered aggregate data would be greater than the most accurate signal (by your definition) in the aggregate (or "fused") signal data?
Yes, that is exactly my question.... The project I am working on has a few redundant sensors...and i was wondering if it would be worth running an ekf
Are you planning on using multiple sensors or just one? If it's the latter, then I would just run the most accurate one. The fact that you're using the extended filter indicates you have a non-linear situation. These are very sensitive to the initial state. If that state is an outlier on the Gaussian curve (for either the state transition or measurement noise) your model may fail. Given you already have an accurate sensor (a good signal to noise ratio), you won't get any improvement by aggregating it with noisier sensors unless I'm missing something here.
I am running multiple sensors. My sensors and their outputs are as follows:
1. IMU: linear velocity, yaw velocity and pitch velocity
2. Gyro: yaw velocity
3. Odometry: linear velocity and yaw velocity
My IMU and gyro are quite accurate. As of now I am using linear velocity from the odometry, yaw velocity from the gyro and pitch velocity from the IMU. I was wondering if I could put all three together so that I can get a more accurate result, so that even if one of the sensors fails I still have others to compensate. Also, my sensors operate asynchronously. My initial plan was to build two EKFs (one for the IMU data and one for the gyro data) and run them in parallel (depending on which type of sensor data comes in, that corresponding EKF is executed), using the odometry data in the prediction stage, but now I'm not sure if doing that will be the best idea. Any suggestion regarding this? Thanks!
I really can't speak to the technical issues of your situation. This is a math forum, not an engineering forum. From a mathematical perspective, the Kalman filter (usually) requires a specification of the desired final state. It's not clear to me what this is, but if it's your best sensor, you're already there, and your problem would seem to be bringing other sensors to that point if you need multiple sensors. A simple way to look at the filtering process is the so called Pontryagin Minimum Principle for minimizing the integral: $$J(\mu) = \int_{0}^{T} g(x, \mu) dt$$ where $0\leq t \leq T$ and where $x(t)$ is the system state which is connected to the desired state (control input) $\mu(t)$ by: $$\frac{dx}{dt}=f(x,\mu); x(0)=x_0$$
Thanks for the reply... I'll try implementing it and post results later.... Thanks!
You're welcome.
Quote by sethu_chidam: I am running multiple sensors... Any suggestion regarding this?
There are two separate ideas here. Sensor failure can be detected without resorting to a Kalman filter; these sensors are not grossly different. Merely calibrating the sensors at an initial time under two extreme conditions of motion, pitch, or yaw, so that you can scale them to match numerically, allows you to detect failure by simply noting when the values become different by more than an arbitrary threshold that you choose. Determine a sigma from continual measurements that 68% of the data falls within, then just choose a confidence interval to detect the sensor giving unreasonable data.
The remaining problem is to decide *which* sensor is giving the false data after the failure is detected. This can be something as simple as noting which sensor abruptly changed its value, or as failures typically are of the whole sensor at once -- noticing which sensor is the locus of multiple discrepancies.
As to improving the accuracy, a Kalman filter may be of use -- but I haven't enough experience with them to prove the issue. I generally have found other ways that are simpler (computationally) to implement.
But -- as another idea; Being able to skew the sampling time from your sensors in a random manner, or physically injecting noise into the signal converting electronics may also allow you to improve the accuracy of the measurements. Typical conversion processes cause a truncation of data at a certain bit depth (a floor operation); by being able to generate electrical/signal noise equivalent to about half the least significant bit's analog signal's magnitude -- one can detect sub-lsb data by re-sampling and averaging. This is very useful for low frequency signals, and in your case -- that would be when the device is moving "slowly" -- and at a time where drift becomes quite appreciable...
Also, this technique is useful to remove aliasing from sampling eg: Nyquist frequency -- by randomly changing the exact time that the sample is taken by a small fraction of the sampling time.
Best wishes!
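On the original question of whether fusing redundant sensors can beat the single most accurate one: for independent, unbiased measurements with known variances, the inverse-variance-weighted average (the static special case of the Kalman update) has a strictly smaller variance than any single sensor, while correlated or biased errors can erase that gain, consistent with the cautions above. A minimal sketch with hypothetical noise figures (not an EKF, and all numbers below are made up):

````python
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0                        # hypothetical true yaw rate
sigmas = np.array([0.05, 0.10, 0.20])   # hypothetical sensor noise std devs

N = 100_000
# N trials of 3 simultaneous, independent, unbiased measurements.
z = true_value + sigmas * rng.standard_normal((N, 3))

# Inverse-variance weights: w_i proportional to 1/sigma_i^2, normalized.
w = (1 / sigmas**2) / np.sum(1 / sigmas**2)
fused = z @ w

print("best single-sensor std:", z[:, 0].std())  # ~0.05
print("fused std:            ", fused.std())     # ~0.044
````

The fused standard deviation comes out near $1/\sqrt{\sum_i 1/\sigma_i^2}\approx 0.044$, below the best sensor's $0.05$.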
http://cstheory.stackexchange.com/questions/tagged/complexity-classes
Tagged Questions
Computational complexity classes and their relations
- **how to prove this unsolvable problem about halting problem (turing machine)**: Show that the problem of deciding, for a given TM M, whether M halts for all inputs within $n^2$ steps ($n$ is the length of the input) is unsolvable. You can use the fact without proof ...
- **Syntactic Complexity Class ${\bf X}$ such that ${\bf PP} \subseteq {\bf X} \subseteq {\bf PSPACE}$**: It is known that some (non-relativized) syntactic complexity classes between ${\bf P}$ and ${\bf PSPACE}$ have the following property, ${\bf P} \subseteq {\bf CoNP} \subseteq {\bf US}$ ...
- **Is it known that $NEXP = \Sigma_2 \implies NEXP = MA$?**: Is it known whether the implication $\mathsf{NEXP} = \Sigma_2 \implies \mathsf{NEXP} = \mathsf{MA}$ holds? (The question is inspired by the well-known $\mathsf{NEXP} \subseteq \mathsf{P/poly}$ ...
- **On the proof of Meyer's Theorem**: Meyer's theorem is one of the classical results about collapse of the polynomial hierarchy, such as the famous Karp-Lipton theorem, and states that $EXP \subseteq P/poly \Rightarrow EXP = \Sigma_{2}^{p}$ ...
- **Is there a simpler proof of Beigel and Tarui's transformation of ACC0 circuits**: Beigel and Tarui's transformation of $\mathsf{ACC}^0$ circuits to depth 2 circuits with a polylog symmetric function on top is one of the important results in circuit complexity. For example, the ...
- **Consequences of OWFs for Complexity**: It is well-known that the existence of one-way functions is necessary and sufficient for much of cryptography (digital signatures, pseudorandom generators, private-key encryption, etc.). My question ...
- **Conditional Results on Bounded Depth Circuit Hierarchy**: The $\mathsf{AC,ACC,TC}$-hierarchies are basic bounded depth circuit hierarchies. The $AC$-hierarchy is $\bigcup_{i=0}^{\infty} AC^{i}$, where $AC^{i}$ is the $i$-th level of the hierarchy: a family of ...
- **TQBF $\notin$ SPACE($n^{1/3}$) [closed]**: I want to show that TQBF $\notin$ SPACE($n^{1/3}$). Can I use the fact that TQBF is PSPACE-complete to show that there is a language L $\in$ SPACE(n) that reduces to TQBF in polynomial time, and ...
- **What is the complexity of this edge coloring problem?**: Recently, I have encountered the following variant of edge coloring. Given a connected undirected graph, find a coloring of the edges that uses the maximum number of colors while also satisfying ...
- **Are all Integer Linear Programming problems NP-Hard? [migrated]**: As I understand, the assignment problem is in P as the Hungarian algorithm can solve it in polynomial time, $O(n^3)$. I also understand that the assignment problem is an integer linear programming ...
- **What is the fastest known simulation of BPP using Las Vegas algorithms?**: $\mathsf{BPP}$ and $\mathsf{ZPP}$ are two of the basic probabilistic complexity classes. $\mathsf{BPP}$ is the class of languages decided by probabilistic polynomial-time Turing algorithms where the ...
- **How powerful are nondeterministic constant-depth circuits?**: A nondeterministic circuit is a Boolean circuit that has nondeterministic input wires. In other words, a nondeterministic circuit $C$ computing a Boolean function $f\colon\{0,1\}^{n}\rightarrow\dots$ ...
- **Is Parity-P contained in PP?**: This question was asked by Jan Pax on the Foundations of Mathematics mailing list. Certainly $P^{\oplus P} \subseteq P^{\#P} = P^{PP}$ but I suspect from the answers to this question that it's not ...
- **How can a problem be in NP, be NP-hard and not NP-complete?**: For the longest time I have thought that a problem was NP-complete if it is both (1) NP-hard and (2) in NP. However, in the famous paper "The ellipsoid method and its consequences in ...
- **Is adiabatic quantum computing as powerful as qubit computing?**: Much of the quantum computing literature focuses on qubit-based computation. Adiabatic quantum computing is not based on qubits. I am looking for insight into any of the following. Is adiabatic quantum ...
- **Smooth Complexity of the Nonnegative Permanent**: There has been fantastic work done on the Permanent for the last two decades. I have been wondering for a while about the possibility of a Smooth P algorithm for the Permanent of Nonnegative ...
- **About Inverse 3-SAT**: Context: Kavvadias and Sideri have shown that the Inverse 3-SAT problem is coNP-complete: Given $\phi$ a set of models on $n$ variables, is there a 3-CNF formula such that $\phi$ is its exact set of ...
- **On $\mathcal L$, $\mathcal{N\!L}$, $\mathcal L^2$, $\mathcal P$ and $\mathcal{N\!P}$**: We know that $\mathcal{L}\subseteq \mathcal{N\!L}\subseteq\mathcal{P}\subseteq\mathcal{N\!P}$. From Savitch's Theorem, $\mathcal{N\!L}\subseteq\mathcal{L}^2$, and, from the Space Hierarchy Theorem, ...
- **P vs NP: Instructive example of when Brute Force search can be avoided**: To be able to explain the P vs NP problem to non-mathematicians I would like to have a pedagogical example of when Brute Force search can be avoided. The problem should ideally be immediately ...
- **Big O notation for "modulo a polynomial"**: Is there a notation that would be like the Big O notation (let's say Big P), but with the following definition: $f=P(g)$ if there exists a polynomial $p$ such that for $n$ large enough, $f\leq p(g(n))$? ...
- **Nonmetric TSP and Functional Complexity Classes**: Non-metric TSP, that is, TSP where an instance need not satisfy the triangle inequality, is NP-hard by a gap-reduction method. Is this general TSP a complete problem in some functional complexity class? ...
- **Smallest Nonuniform Complexity Classes including uniform-P**: As we know, studying differences between uniform and nonuniform complexity classes is crucial. For example, P/poly is defined as a challenge to derive a separation between P and NP, because ...
- **How do you argue a query is impossible in a query language like SPARQL or SQL?**: I've been investigating the ability of the SPARQL query language to represent certain basic tasks in graph theory and machine learning, and have come to believe that it is not possible to do some. For ...
- **Computational Complexity of Computer Vision Problems**: What is the computational complexity of computer vision problems (reconstruction, detection, etc.)? Are these problems NP-complete? Are they NP-hard? In most cases this will boil down to determining ...
- **Does every Turing-recognizable undecidable language have an NP-complete subset?**: The question could be seen as a stronger version of the fact that every infinite Turing-recognizable language has an ...
- **Structural equivalence of two context-free grammars**: I understand that determining if two context-free grammars are structurally equivalent is decidable (according to the 1968 paper by Paull, M.C. and Unger, S.H., "Structural equivalence of context-free ...
- **Consensus on P = NP in a world where RP = NP**: $RP = NP$ is widely conjectured to be false. But imagine for a moment that it is true. In such a case, how likely would it be that $P = NP$? Put in other words: in a world where $RP = NP$, what might ...
- **Complexity of $\oplus$ 3-REGULAR BIPARTITE PLANAR VERTEX COVER**: The $\oplus$3-REGULAR BIPARTITE PLANAR VERTEX COVER problem consists in computing the parity of the number of vertex covers of a 3-regular bipartite planar graph. Question: which is the ...
- **How hard is it to compute $\Delta_{|V|}$?**: Let $G=(V,E)$ be a graph. Let $\Delta_k$ be the quantity defined in this question. Let $\mathcal{C}$ be the set of vertex covers of $G$. The following holds: $|\mathcal{C}| = 2^{|V|} - \sum_{k=\dots}$ ...
- **Is deterministic pseudorandomness possibly stronger than randomness in parallel?**: Let the class BPNC (the combination of $\mathsf{BPP}$ and $\mathsf{NC}$) be log-depth parallel algorithms with bounded error probability and access to a random source (I'm not sure if this has a ...
- **Constructivity in Natural Proof and Geometric Complexity**: Recently, Ryan Williams proved that Constructivity in Natural Proofs is unavoidable to derive a separation of the complexity classes $\mathsf{NEXP}$ and $\mathsf{TC}^{0}$. Constructivity in Natural ...
- **Fullness of regular expressions with exponentiation**: Meyer & Stockmeyer proved many years ago that the following problem, called "fullness of regular expressions", is NEXPSPACE-complete. Input: regular expression with exponentiation. Output: true if ...
- **Natural relativized worlds**: The oracles that are used in relativized collapses or separations of complexity classes rarely represent *natural* algorithmic problems. They are typically constructed "artificially" with techniques ...
- **Computational complexities in factoring**: [Note: $n$ is a given integer (not the number of its digits)] I'd like to know how $O(\sqrt{n}/\log n)$ would compare against the computational complexity of the best available algorithms (as well as the ...
- **Is NPI contained in P/poly?**: It is conjectured that $\mathsf{NP} \nsubseteq \mathsf{P}/\text{poly}$ since the converse would imply $\mathsf{PH} = \Sigma_2$. Ladner's theorem establishes that if $\mathsf{P} \ne \mathsf{NP}$ then ...
- **Complexity results for Lower-Elementary Recursive Functions?**: Intrigued by Chris Pressey's interesting question on elementary-recursive functions, I was exploring more and unable to find an answer to this question on the web. The elementary recursive functions ...
- **Narrowing the gap between BPP and RP**: We do not know yet whether the 2-sided error of $BPP$ allows more computing power than the one-sided error of $RP$. In view of derandomization results, the conjectured answer is no, since both classes ...
- **Complexity of Hidden Subgroup problems**: Has anyone classified the (non-quantum) complexity of the hidden subgroup problem for finite Abelian groups? Is it known to be in any classical (not quantum) complexity classes?
- **Algebraic (or numeric) invariants of complexity classes**: I hope this question isn't too naive for this site. In mathematics (topology, geometry, algebra) it is common for one to distinguish between two objects by coming up with an algebraic or numerical ...
- **What is $DTIME(n^a)^{DTIME(n^b)}$?**: This might be embarrassing, but it turned out I don't know what $DTIME(n^a)^{DTIME(n^b)}$ is. It is between $DTIME(n^{ab})$ and $DTIME(n^{a(b+1)})$, but where? Update: There are three possible ways to ...
- **Is semantic language complexity class UP Turing equivalent to syntactic language complexity class US?**: ${\sf UP}$ is defined in terms of unambiguous-SAT, which asks if there exists at most one solution or no solution. On the other hand, ${\sf US}$ is defined in terms of unique-SAT, which asks if there ...
- **Consequences of a $p$-optimal proof system for $\operatorname{TAUT}$**
I'm reading a paper which shows the result: $(1)$ There is a $p$-optimal proof system for $\operatorname{TAUT}$. $\Leftrightarrow$ $(2)$ $L_{\leq}$ is a $P$-bounded logic for $P$. Both $(1)$ and ...
0answers
478 views
What are the consequences of $\mathsf{L}^2 \subseteq \mathsf{P}$?
We know that $\mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{P}$ and that $\mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{L}^2 \subseteq$ $\mathsf{polyL}$, where \$\mathsf{L}^2 = ...
1answer
212 views
Counting reduction from #SAT to #HornSAT?
Is it possible to find a counting reduction from #SAT to #HornSAT? I haven't found this question posted here, so decided to check if anyone has any answer to this. Let me explain what do I mean by ...
1answer
320 views
What if a problem is both in $\Pi_2^p$ and $NP$-hard?
If a problem $P$ belongs to both $\Pi_2^p$ and $NP$-hard (thanks to some reduction from a $NP$-complete problem) but not to $NP$, does it imply that $P$ is $\Pi_2^p$-complete? If the answer is no, ...
1answer
106 views
Literature for restrictions that make NPC-Problems to P
The boolean satisfiability problem is in $\mathcal{NPC}$. But if you only get Horn clauses, it is in $\mathcal{P}$. I've already heard similar statements. Do you know a more general statement when ...
1answer
176 views
Is 3SAT problem APX-hard or not?
Could you point me a reference, an answer or it is an open question?
0answers
98 views
The power of randomized logspace with two-way access to the random tape
Let $\mathsf{ZPL}$/$\mathsf{RL}$/$\mathsf{BPL}$ denote the classes of the languages which are accepted (with zero/one-side/two-side error) by a logspace Turing machine with one-way access to the ...
1answer
103 views
Two way deterministic multihead counter automata or logspace TM with counter
Is that known something about languages recognized by two-way deterministic multihead counter automaton or logspace TM with counter (equivalent model)? This class called Aux2DC in my advisor's paper. ...
0answers
155 views
Descriptive complexity characterization of TimeSpace classes
Are there descriptive complexity characterizations for TimeSpace complexity classes like $\mathsf{SC^i}= \mathsf{DTimeSpace}(n^{O(1)},O(\lg^i n))$?
http://mathoverflow.net/questions/tagged/statistics
## Tagged Questions
- **Direct limit of sequences**: Let $\mathcal{C}$ be a Grothendieck category and consider all of what follows in $\mathcal{C}$. Let ${\varepsilon_i: 0\to A_i \to B_i \to C_i\to 0\ ,\ \phi_i^j}$ be a direct sy…
- **Random walk on the hypercube**: Consider the hypercube $Q_4$. I would like to know how to compute the number of steps of a random walk in this graph such that the probability to be at a vertex is a given number …
- **Directed colimits of maps in a combinatorial model category**: I have the following situation. $M$ is a combinatorial model category, or if you like a locally presentable $(\infty,1)$-category. I have a set of maps $S$ and I let $C$ be the cla…
- **"Step-by-Step" toric resolution process?**: WLOG the fan $\Sigma$ of our toric variety $X_{\Sigma}$ is simplicial. (So $X_{\Sigma}$ has at worst orbifold singularities and all cones $\sigma \in \Sigma$ are simplicial.) The …
- **Sequences equidistributed modulo 1**: Let $\alpha$ be any positive irrational and $\beta$ be any positive real. We have the following results. H. Weyl (1909): The fractional part of the sequence $\alpha n$ is equidist…
- **f(x,y) [min/max]**: I need to find the minima and maxima of the function z = x^2 - 12x + y^2 - 2y on the region bounded by the points A(-7;-5), B(5;-5) and C(5;10), but I do not clearly understand the algorithm …
- **Exceptional divisor on a smooth surface**: Let $D=\sum d_iD_i$ be an exceptional divisor on a smooth projective surface $X$, i.e., the intersection matrix $(D_i.D_j)$ is negative definite. I have 2 stupid questions. Fix …
- **Convergence in $L^p([0,T],X)$**: Dear mathoverflowers, I have a question concerning strong convergence in $L^p([0,T],X)$. Let $X_1,X$ be two Banach spaces such that $X_1\subset X$ with compact embedding. Let …
- **Is the site of (smooth) manifolds hypercomplete?**: By the site of manifolds Man, I mean the category of manifolds (maybe submanifolds, to obtain a small category) with continuous maps between them. A Grothendieck topology is given by op…
- **Are residually finite, perfect groups residually alternating?**: Dear all, I am interested in residually finite, perfect groups. Are all of them known to be residually alternating? If not, how could one construct a counterexample? A group $G$ …
- **Non-standard model of the domination principle**: (Base theory $RCA_0$) The domination principle says there exists a function g such that g dominates any X-recursive function for any X in the model, i.e. for any $f\le_T X$, $\exis…
- **Sobolev spaces on hypersurfaces**: I am learning about Sobolev spaces on hypersurfaces. Let $S$ be a $C^k$-hypersurface with boundary for some $k$. In order to define a weak derivative, one needs $k \geq 2$ becaus…
- **Embedded associated prime**: I am reading the book "Joins and Intersections". In the proof of Rees' theorem I have some doubt. Let $\mathbf M$ be a finitely ge…
- **Numbers of a different order?**: Let $d_r$ be a divergent series of positive terms and let $s_r = \sum_{i=1}^{r}d_i$. We are interested in the sequence of numbers $S_{d_r} = s_1, s_2, \ldots$. For example if $d_r$ …
- **What are these compact sets called?**: I'm wondering if a compact set $A\subset\mathbb{C}$ satisfying the properties that • $A$ and its complement have finitely many connected components • every connected component of …
http://physics.stackexchange.com/questions/48217/does-the-mathematics-of-physics-require-impure-set-theory
# Does the mathematics of physics require impure set theory?
Suppose for the sake of this question that all mathematics is ultimately reducible to set theory in such a way that the only mathematical objects there really are, are sets.
Now, there is a common distinction between pure and impure set theory. Pure set theory is built up from the empty set and only involves "pure" sets--- sets that only contain other sets as members. Impure set theory, by contrast, involves urelements--- non-set individuals which can be members of sets.
My question is whether the mathematics needed for physics (assuming it all gets cashed out in terms of set theory) is of the pure or impure variety? Are there primitive relations of physics (where a primitive relation of physics is something unique to physics--- not something like equality/identity) that hold between mathematical entities (like numbers) and physical entities? If so, could you provide examples?
If further clarification is needed, let me know. Also, I hope this question is appropriate for this forum. – Dennis Jan 3 at 5:05
Further clarification needed: I don't understand what the non-equality/identity primitive relations of physics are supposed to be. Physical theories are written down in mathematical terms, where some of these terms get associated with a rule for reading off a value from a measurement tool. The formalism is arbitrary; it must only be readable to the scientist. I can't see how the question comes up, as all the formalisms of mathematics which can be used as a foundation for mathematics will obviously suffice for this task. If formalism A can model formalism B, then formalism B isn't "required". – Nick Kidman Jan 5 at 22:35
Your question is essentially an answer to my question. I was curious as to whether impure set theory was indispensable. You are saying that any theory that could plausibly be a foundation for mathematics would suffice for physics. So, no one theory is strictly required, and no impure set theory in particular is required. – Dennis Jan 5 at 22:42
It's essentially what the Timelike Cat said, with "interpreted" written out as modeled (understood in the rigorous set theoretic sense). The question called to mind all the threads on MathSE and even PhilosophySE where people discuss how second order theories can really be formalized in first order logic. - Anyway, my question still stands, what do you mean by "primitive relations of physics", how are these beyond mathematical formulations of some ideas for theories of physicist. – Nick Kidman Jan 5 at 23:03
@NickKidman I'll admit that I don't have a clear conception of what the "primitive relations of physics" are. I was hoping to get some guidance and plausible examples here. I suppose a rough characterization of what I have in mind are the primitive (undefined) relations of fundamental physics. Obviously this is somewhat cagey, but hopefully it is clear enough to elicit some helpful suggestions? Should this be a separate question ("What are the primitive relations of fundamental physics?")? – Dennis Jan 7 at 4:03
## 2 Answers
Impure set theory can be interpreted in terms of pure set theory, so the question is moot.
You may ask which interpretation is "more natural". But embedding arithmetic, analysis, and calculus in terms of set theory is already a fairly convoluted and unnatural thing.
Impure set theory cannot be interpreted in terms of pure set theory. Well, if by that you mean "there's no real difference" then you're wrong. They have different primitives and impure set theory can only be "interpreted" within pure set theory by adding an additional theoretical primitive, the predicate "is a urelement". But once you do that, you haven't interpreted impure set theory within pure set theory, you've simply turned pure set theory into impure set theory! – Dennis Jan 5 at 22:02
I agree with your second point, and do not favor this reductionist attitude. I'm just trying to approach this from the standpoint of someone who favors reduction for reasons of simplicity of theory, since that is who I am attempting to respond to. – Dennis Jan 5 at 22:04
Hmm, very interesting. I stand corrected! Has there been any discussion of this paper in the literature that you know of? – Dennis Jan 5 at 22:38
I'm not entirely sure this question is answerable in terms of physics.
The reason is that all physics theories are reducible to combinations of data structures (e.g. scalars, vectors, matrices, tensors) and associated behaviors. Data structures and associated behaviors are in turn (and pretty much by definition) entities that can be represented within any formal system that has enough richness and complexity to build a Turing machine within it. Even odd physics issues such as quantum entanglement can be modeled in this fashion, just very inefficiently.
So, the problem is that pure and impure set theories are both rich enough (easily!) to build Turing machines. Either one could be used to represent any formal physics theory. Since pure sets are a bit simpler, that would (I guess) be the winner between the two options.
However, your real question may be more along the lines of asking if there are any unique, atomic, indivisible things or properties in physics that match better with urelements than with pure sets. Asked that way, I'd say there is a pretty good chance that there are. Physics hasn't quite found them yet, but structure does get simpler as you get smaller. For example, nothing in physics seems to get below $1/3$ of an electron charge or less than $1/2$ of a unit of spin, so something firmly "atomic" or "indivisible" may be going on with those units.
However, I would also point out that such smallest units in physics tend to come in mutually annihilating pairs, not as simple additions. The idea of mutually annihilating pairs is not typically seen in the all-positive-additions way in which sets are usually constructed, or at least it is not something I've seen in set theory.
So, if you want to broaden the question a bit and say "what kinds of mathematical systems best seem to capture the natural structure of physics as we observe it?", I would say "something that is based on the creation and annihilation, at many different levels, of mutually annihilating pairs of properties or entities." It would be akin perhaps to a more discrete version of the framework found in quantum field theory.
I am not aware of any specific formal system of that type in mathematics, at least not as part of the search for more fundamental mathematical building blocks like sets.
Tomorrow, sorry! (1:14 AM here) – Terry Bollinger Jan 3 at 6:14
I'm currently reading a paper where the author assumes that physics requires set theory, and in particular impure set theory. He then proceeds to argue for a set-theoretic construal (as opposed to mereological) of physical geometry and defends it on the grounds that it is simpler (in just the sense you describe in the second paragraph) than its mereological counterpart. I was skeptical of his claim that impure set theory was indispensable to physics and it seems that you think my suspicions are right. – Dennis Jan 3 at 6:17
Oh no worries! Thanks for any help at all that you can provide. I am a philosopher/logician by trade and am woefully under-informed with respect to physics. So I appreciate any assistance in better understanding this topic. – Dennis Jan 3 at 6:18
Dennis, sorry, busy days. Alas, I don't have any specific references for that, at least not that I can recall. The Turing argument often used to be brought up in the context of whether human sentience could be modeled accurately using computers, and I think such arguments just sort of settled deeply into my bones. I would completely concur, however, that an argument that only impure sets can model physics is invalid, for the same reasons I gave: If you can make a Turing machine with it, you can model physics with it -- and pure sets certainly have sufficient power to create Turing machines. – Terry Bollinger Jan 4 at 5:23
http://mathhelpforum.com/discrete-math/191352-fol-3-a.html
# Thread:
1. ## Fol 3
Show that the formula $x=y \rightarrow Pzfx \rightarrow Pzfy$ (where $f$ is a one-place function symbol and $P$ is a two-place predicate symbol) is valid.
--------------------------------------------------------------------------
(*) If $\gamma;\alpha \models \phi$, then $\gamma \models (\alpha \rightarrow \phi)$.
We show $\models x=y \rightarrow Pzfx \rightarrow Pzfy$.
By (*), it suffices to show that $\{x=y, Pzfx\} \models Pzfy$. Therefore, we need to show every $A$ that satisfies $x=y$ and $Pzfx$ with every function $s:V \rightarrow |A|$ satisfies $Pzfy$ with $s$. That is, if $\models_Ax=y[s]$ and $\models_A Pzfx[s]$, then $\models_A Pzfy[s]$.
I am stuck here. Any help will be appreciated.
2. ## Re: Fol 3
As you know, given A, the function $s:V\to|A|$ can be extended to a function $s_A$ from the set of all terms to |A|. In particular, A associates some function $\mathbf{f}:|A|\to|A|$ to a unary functional symbol $f$ and some function $\mathbf{P}:|A|\times|A|\to\{T,F\}$ to a binary predicate symbol $P$. Then $s_A(x)=s(x)$, $s_A(fx)=\mathbf{f}(s_A(x))$ and $\models_A Pz(fx)$ iff $\mathbf{P}(s_A(z),s_A(fx))=T$.
By definition of $\models$, if $\models_Ax=y[s]$, then $s(x) = s(y)$. Therefore,
$\models_A Pz(fx)$ iff
$\mathbf{P}(s_A(z),s_A(fx))=T$ iff
$\mathbf{P}(s_A(z),\mathbf{f}(s(x)))=T$ iff
$\mathbf{P}(s_A(z),\mathbf{f}(s(y)))=T$ iff
$\mathbf{P}(s_A(z),s_A(fy))=T$ iff
$\models_A Pz(fy)$
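For readers who want to see this argument concretely, here is a small Python sanity check on one finite structure. The universe, the interpretations of $f$ and $P$, and the enumeration of assignments $s$ are all arbitrary illustrative choices; this checks the semantic claim in one model and is of course not a substitute for the proof above.

```python
# Finite-model sanity check: in an arbitrary structure A, every
# assignment s satisfying x=y and Pzfx also satisfies Pzfy, because
# s(x) = s(y) forces f(s(x)) = f(s(y)).
from itertools import product

universe = [0, 1, 2]
f = {0: 1, 1: 2, 2: 0}                        # interpretation of the function symbol f
P = {(a, b): (a + b) % 2 == 0                 # interpretation of the predicate symbol P
     for a, b in product(universe, repeat=2)}

for sx, sy, sz in product(universe, repeat=3):   # all assignments s to x, y, z
    if sx == sy and P[(sz, f[sx])]:              # A satisfies x=y and Pzfx with s
        assert P[(sz, f[sy])]                    # ... hence A satisfies Pzfy with s

print("x=y -> Pzfx -> Pzfy holds under every assignment in this structure")
```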
http://mathhelpforum.com/number-theory/108440-equation.html
# Thread:
1. ## Equation
Which points satisfy the equation below?
$XYZ+XZ+XY-YZ-11X-3Y-Z+9=0$
2. Originally Posted by streethot
Which points satisfy the equation below?
$XYZ+XZ+XY-YZ-11X-3Y-Z+9=0$
x = 0
y = 0
z = 9
3. You have given a particular solution... now you have to prove that this is the only solution.
4. Originally Posted by streethot
You have given a particular solution... now you have to prove that this is the only solution.
Lol! I'm not here to do your homework for you.
You merely asked for a point that satisfied the equation, and I gave you one.
If you want help with proving (or disproving) that the solution is unique, then that's a different question.
5. Ah, how the road to mathematical truth is paved in blood...
streethot:
There are more solutions than you should care to count. A simple calculator program can tell you that...
1,-1,-1
1,-1,0
1,-1,1
-1,2,2
0,2,1
1,-1,-2
1,-1,2
-1,3,1
0,1,3
0,3,0
etcetera ad nauseam
As you know, Diophantine equations are notoriously difficult to solve. There may be an infinite number of solutions readily found by number-crunching.
What exactly is it that you want to know about this equation? A proof of whether or not there is an infinitude of points? A formula generating a set of solutions?
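Here is a minimal Python sketch of the kind of calculator program mentioned above; the search box $[-3,3]^3$ is an arbitrary choice, made only to keep the output short.

```python
# Brute-force search for integer solutions of
#   XYZ + XZ + XY - YZ - 11X - 3Y - Z + 9 = 0
# in a small box (the bound 3 is an arbitrary illustrative choice).
from itertools import product

def lhs(x, y, z):
    return x*y*z + x*z + x*y - y*z - 11*x - 3*y - z + 9

solutions = [(x, y, z)
             for x, y, z in product(range(-3, 4), repeat=3)
             if lhs(x, y, z) == 0]
print(len(solutions), "solutions in the box, e.g.", solutions[:5])
```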
6. Thank you, Media_Man. I was thinking there might be a way to solve this problem by the same method used to solve this one:
$xyz+xy+yz+zx+x+y+z=243$
A classmate proposed this problem to me; I will investigate. Anyway, thanks for helping me.
7. Here's a small part:
If you rearrange the equation you'll get the following:
$Z=\frac{11X+3Y-XY-9}{(X-1)(Y+1)}$
When X=1, and Y=-1 we get $Z=\frac{0}{0}$, in other words $0 \cdot Z = 0$
So, any Z will work for X=1 and Y=-1
Some other brute force stuff:
When you set one variable to equal zero, there's only a finite number of solutions that will work (tried for X = 0 and Z = 0, Y = 0 will probably behave the same way)
I don't think there's one general formula for the solution though, because of the (1,-1,Z) case ...
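A short sketch of this rearrangement in code: for each pair $(X,Y)$ with $(X-1)(Y+1)\neq 0$ the value of $Z$ is forced, and one only has to check that the division is exact; the degenerate case $X=1$, $Y=-1$ yields the free family $(1,-1,Z)$. The search range here is an arbitrary choice.

```python
# Enumerate solutions via Z = (11X + 3Y - XY - 9) / ((X-1)(Y+1)).
# When the denominator vanishes with X != 1 or Y != -1, the numerator
# is nonzero, so there is no solution; X = 1, Y = -1 gives 0/0, i.e.
# the free family (1, -1, Z) for every integer Z.
for x in range(-5, 6):
    for y in range(-5, 6):
        den = (x - 1) * (y + 1)
        num = 11*x + 3*y - x*y - 9
        if den == 0:
            if num == 0:
                print("(1, -1, z) for every integer z")
            continue
        if num % den == 0:
            print((x, y, num // den))
```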
8. Now I know what the real question is.
It asks for solutions with $x,y,z\in\mathbb{N}^*$.
9. Originally Posted by streethot
Now I know what the real question is.
It asks for solutions with $x,y,z\in\mathbb{N}^*$.
Indeed, it is a diophantine equation.
10. After much work, I got it! I will write it up and post it later, because the solution is a little long.
http://terrytao.wordpress.com/category/teaching/254b-higher-order-fourier-analysis/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
# Category Archive
You are currently browsing the category archive for the ‘254B – Higher order Fourier analysis’ category.
## Higher order Fourier analysis
30 March, 2011 in 254B - Higher order Fourier analysis, book, math.CA, math.CO | Tags: Fourier analysis, Gowers uniformity norms | by Terence Tao | 20 comments
I’ve just finished writing the first draft of my third book coming out of the 2010 blog posts, namely “Higher order Fourier analysis“, which was based primarily on my graduate course in the topic, though it also contains material from some additional posts related to linear and higher order Fourier analysis on the blog. It is available online here. As usual, comments and corrections are welcome. There is also a stub page for the book, which at present does not contain much more than the above link.
## 254B, Lecture Notes 7: The transference principle, and linear equations in primes
5 June, 2010 in 254B - Higher order Fourier analysis, math.CO, math.NT | Tags: arithmetic progressions, Green-Tao theorem, prime numbers, pseudorandomness, Selberg sieve, transference principle | by Terence Tao | 4 comments
In this, the final lecture notes of this course, we discuss one of the motivating applications of the theory developed thus far, namely to count solutions to linear equations in primes ${{\mathcal P} = \{2,3,5,7,\ldots\}}$ (or in dense subsets ${A}$ of primes ${{\mathcal P}}$). Unfortunately, the most famous linear equations in primes, namely the twin prime equation ${p_2 - p_1 = 2}$ and the even Goldbach equation ${p_1+p_2=N}$, remain out of reach of this technology (because the relevant affine linear forms involved are commensurate, and thus have infinite complexity with respect to the Gowers norms), but most other systems of equations, in particular that of arithmetic progressions ${p_i = n+ir}$ for ${i=0,\ldots,k-1}$ (or equivalently, ${p_i + p_{i+2} = 2p_{i+1}}$ for ${i=0,\ldots,k-2}$), as well as the odd Goldbach equation ${p_1+p_2+p_3=N}$, are tractable.
To illustrate the main ideas, we will focus on the following result of Green:
Theorem 1 (Roth’s theorem in the primes) Let ${A \subset {\mathcal P}}$ be a subset of primes whose upper density ${\limsup_{N \rightarrow \infty} |A \cap [N]|/|{\mathcal P} \cap [N]|}$ is positive. Then ${A}$ contains infinitely many arithmetic progressions of length three.
This should be compared with Roth’s theorem in the integers (Notes 2), which is the same statement but with the primes ${{\mathcal P}}$ replaced by the integers ${{\bf Z}}$ (or natural numbers ${{\bf N}}$). Indeed, Roth’s theorem for the primes is proven by transferring Roth’s theorem for the integers to the prime setting; the latter theorem is used as a “black box”. The key difficulty here in performing this transference is that the primes have zero density inside the integers; indeed, from the prime number theorem we have ${|{\mathcal P} \cap [N]| = (1+o(1)) \frac{N}{\log N} = o(N)}$.
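As a purely empirical illustration of Theorem 1 (playing no role in any proof), one can count three-term progressions among the primes up to some cutoff. Here is a minimal Python sketch; the cutoff ${N = 10^4}$ is an arbitrary choice.

```python
# Count three-term progressions p, p+r, p+2r (r > 0) among the primes
# up to N; the cutoff N = 10**4 is an arbitrary illustrative choice.
N = 10**4
is_prime = bytearray([1]) * (N + 1)
is_prime[0] = is_prime[1] = 0
for p in range(2, int(N**0.5) + 1):
    if is_prime[p]:
        is_prime[p*p::p] = bytearray(len(range(p*p, N + 1, p)))
primes = [n for n in range(2, N + 1) if is_prime[n]]
prime_set = set(primes)

count = sum(1 for i, p in enumerate(primes) for q in primes[i+1:]
            if 2*q - p <= N and 2*q - p in prime_set)
print(len(primes), "primes up to", N, "containing", count, "three-term APs")
```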
There are a number of generalisations of this transference technique. In a paper of Green and myself, we extended the above theorem to progressions of longer length (thus transferring Szemerédi’s theorem to the primes). In a series of papers (culminating in a paper to appear shortly) of Green, myself, and also Ziegler, related methods are also used to obtain an asymptotic for the number of solutions in the primes to any system of linear equations of bounded complexity. This latter result uses the full power of higher order Fourier analysis, in particular relying heavily on the inverse conjecture for the Gowers norms; in contrast, Roth’s theorem and Szemerédi’s theorem in the primes are “softer” results that do not need this conjecture.
To transfer results from the integers to the primes, there are three basic steps:
• A general transference principle, that transfers certain types of additive combinatorial results from dense subsets of the integers to dense subsets of a suitably “pseudorandom set” of integers (or more precisely, to the integers weighted by a suitably “pseudorandom measure”);
• An application of sieve theory to show that the primes (or more precisely, an affine modification of the primes) lie inside a suitably pseudorandom set of integers (or more precisely, have significant mass with respect to a suitably pseudorandom measure).
• If one is seeking asymptotics for patterns in the primes, and not simply lower bounds, one also needs to control correlations between the primes (or proxies for the primes, such as the Möbius function) with various objects that arise from higher order Fourier analysis, such as nilsequences.
The former step can be accomplished in a number of ways. For progressions of length three (and more generally, for controlling linear patterns of complexity at most one), transference can be accomplished by Fourier-analytic methods. For more complicated patterns, one can use techniques inspired by ergodic theory; more recently, simplified and more efficient methods based on duality (the Hahn-Banach theorem) have also been used. No number theory is used in this step. (In the case of transference to genuinely random sets, rather than pseudorandom sets, similar ideas appeared earlier in the graph theory setting; see this paper of Kohayakawa, Luczak, and Rodl.)
The second step is accomplished by fairly standard sieve theory methods (e.g. the Selberg sieve, or the slight variants of this sieve used by Goldston and Yildirim). Remarkably, very little of the formidable apparatus of modern analytic number theory is needed for this step; for instance, the only fact about the Riemann zeta function that is truly needed is that it has a simple pole at ${s=1}$, and no knowledge of L-functions is needed.
The third step does draw more significantly on analytic number theory techniques and results (most notably, the method of Vinogradov to compute oscillatory sums over the primes, and also the Siegel-Walfisz theorem that gives a good error term on the prime number theorem in arithmetic progressions). As these techniques are somewhat orthogonal to the main topic of this course, we shall only touch briefly on this aspect of the transference strategy.
Read the rest of this entry »
## 254B, Notes 6: The inverse conjecture for the Gowers norm II. The integer case
29 May, 2010 in 254B - Higher order Fourier analysis, math.DS, math.GR | Tags: equidistribution, Gowers uniformity norms, inverse conjecture for the Gowers norm, nilsequences, polynomial maps | by Terence Tao | 11 comments
In Notes 5, we saw that the Gowers uniformity norms on vector spaces ${{\bf F}^n}$ in high characteristic were controlled by classical polynomial phases ${e(\phi)}$.
Now we study the analogous situation on cyclic groups ${{\bf Z}/N{\bf Z}}$. Here, there is an unexpected surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms ${U^{s+1}({\bf Z}/N{\bf Z})}$ once ${s}$ exceeds ${1}$. To resolve this problem, one must enlarge the space of polynomials to a larger class. It turns out that there are at least three closely related options for this class: the local polynomials, the bracket polynomials, and the nilsequences. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here.
Traditionally, nilsequences have been defined in terms of linear orbits ${n \mapsto g^n x}$ on nilmanifolds ${G/\Gamma}$; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of polynomial orbits ${n \mapsto g(n) \Gamma}$, and this is the perspective we will take here.
A polynomial phase ${n \mapsto e(\phi(n))}$ on a finite abelian group ${H}$ is formed by starting with a polynomial ${\phi: H \rightarrow {\bf R}/{\bf Z}}$ to the unit circle, and then composing it with the exponential function ${e: {\bf R}/{\bf Z} \rightarrow {\bf C}}$. To create a nilsequence ${n \mapsto F(g(n) \Gamma)}$, we generalise this construction by starting with a polynomial ${g \Gamma: H \rightarrow G/\Gamma}$ into a nilmanifold ${G/\Gamma}$, and then composing this with a Lipschitz function ${F: G/\Gamma \rightarrow {\bf C}}$. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they almost include bracket polynomial phases such as ${n \mapsto e( \lfloor \alpha n \rfloor \beta n )}$. (The “almost” here is because the relevant functions ${F: G/\Gamma \rightarrow {\bf C}}$ involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.)
In these notes we set out the basic theory for these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1) and show that they are indeed obstructions to the Gowers norm being small. This leads to the inverse conjecture for the Gowers norms that shows that the Gowers norms on cyclic groups are indeed controlled by these sequences.
Read the rest of this entry »
## 254B, Notes 5: The inverse conjecture for the Gowers norm I. The finite field case
20 May, 2010 in 254B - Higher order Fourier analysis, math.CO | Tags: additive combinatorics, arithmetic regularity lemma, Gowers uniformity norms, polynomials, Szemeredi's theorem | by Terence Tao | 2 comments
In Notes 3, we saw that the number of additive patterns in a given set was (in principle, at least) controlled by the Gowers uniformity norms of functions associated to that set.
Such norms can be defined on any finite additive group (and also on some other types of domains, though we will not discuss this point here). In particular, they can be defined on the finite-dimensional vector spaces ${V}$ over a finite field ${{\bf F}}$.
In this case, the Gowers norms ${U^{d+1}(V)}$ are closely tied to the space ${\hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ of polynomials of degree at most ${d}$. Indeed, as noted in Exercise 20 of Notes 4, a function ${f: V \rightarrow {\bf C}}$ of ${L^\infty(V)}$ norm ${1}$ has ${U^{d+1}(V)}$ norm equal to ${1}$ if and only if ${f = e(\phi)}$ for some ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$; thus polynomials solve the “${100\%}$ inverse problem” for the trivial inequality ${\|f\|_{U^{d+1}(V)} \leq \|f\|_{L^\infty(V)}}$. They are also a crucial component of the solution to the “${99\%}$ inverse problem” and “${1\%}$ inverse problem”. For the former, we will soon show:
Proposition 1 (${99\%}$ inverse theorem for ${U^{d+1}(V)}$) Let ${f: V \rightarrow {\bf C}}$ be such that ${\|f\|_{L^\infty(V)} \leq 1}$ and ${\|f\|_{U^{d+1}(V)} \geq 1-\epsilon}$ for some ${\epsilon > 0}$. Then there exists ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ such that ${\| f - e(\phi)\|_{L^1(V)} = O_{d, {\bf F}}( \epsilon^c )}$, where ${c = c_d > 0}$ is a constant depending only on ${d}$.
Thus, for the Gowers norm to be almost completely saturated, one must be very close to a polynomial. The converse assertion is easily established:
Exercise 1 (Converse to ${99\%}$ inverse theorem for ${U^{d+1}(V)}$) If ${\|f\|_{L^\infty(V)} \leq 1}$ and ${\|f-e(\phi)\|_{L^1(V)} \leq \epsilon}$ for some ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$, then ${\|f\|_{U^{d+1}(V)} \geq 1 - O_{d,{\bf F}}( \epsilon^c )}$, where ${c = c_d > 0}$ is a constant depending only on ${d}$.
In the ${1\%}$ world, one no longer expects to be close to a polynomial. Instead, one expects to correlate with a polynomial. Indeed, one has
Lemma 2 (Converse to the ${1\%}$ inverse theorem for ${U^{d+1}(V)}$) If ${f: V \rightarrow {\bf C}}$ and ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ are such that ${|\langle f, e(\phi) \rangle_{L^2(V)}| \geq \epsilon}$, where ${\langle f, g \rangle_{L^2(V)} := {\bf E}_{x \in G} f(x) \overline{g(x)}}$, then ${\|f\|_{U^{d+1}(V)} \geq \epsilon}$.
Proof: From the definition of the ${U^1}$ norm (equation (18) from Notes 3), the monotonicity of the Gowers norms (Exercise 19 of Notes 3), and the polynomial phase modulation invariance of the Gowers norms (Exercise 21 of Notes 3), one has
$\displaystyle |\langle f, e(\phi) \rangle| = \| f e(-\phi) \|_{U^1(V)}$
$\displaystyle \leq \|f e(-\phi) \|_{U^{d+1}(V)}$
$\displaystyle = \|f\|_{U^{d+1}(V)}$
and the claim follows. $\Box$
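The saturation fact quoted above, and the easy direction just proven, can be checked numerically in tiny examples. The following Python sketch computes the Gowers norm directly from the definition; the choices ${p=3}$, ${n=2}$, and the particular quadratic phase are arbitrary illustrative choices.

```python
# Compute ||f||_{U^k(V)} from the definition for V = F_3^2, and compare
# the quadratic phase e(phi), phi(x1,x2) = (x1^2 + x1*x2)/3 (norm = 1),
# with an arbitrary random phase function (norm < 1).
import cmath, random
from itertools import product

p, n = 3, 2
V = list(product(range(p), repeat=n))      # the vector space F_p^n

def e(t):                                  # e(t) = exp(2*pi*i*t)
    return cmath.exp(2j * cmath.pi * t)

def add(u, v):                             # addition in F_p^n
    return tuple((a + b) % p for a, b in zip(u, v))

def gowers_norm(f, k):                     # ||f||_{U^k(V)} straight from the definition
    total = 0.0
    for pts in product(V, repeat=k + 1):   # x, h_1, ..., h_k
        x, hs = pts[0], pts[1:]
        term = 1 + 0j
        for omega in product((0, 1), repeat=k):
            y = x
            for w, h in zip(omega, hs):
                if w:
                    y = add(y, h)
            term *= f[y].conjugate() if sum(omega) % 2 else f[y]
        total += term.real
    return max(total / len(V) ** (k + 1), 0.0) ** (1.0 / 2**k)

quad = {x: e((x[0]**2 + x[0] * x[1]) / p) for x in V}   # classical quadratic phase
random.seed(0)
noise = {x: e(random.random()) for x in V}              # generic phase function

print("U^3 of the quadratic phase:", round(gowers_norm(quad, 3), 4))  # prints 1.0
print("U^3 of a random phase:", round(gowers_norm(noise, 3), 4))      # strictly below 1
```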
In the high characteristic case ${\hbox{char}({\bf F}) > d}$ at least, this can be reversed:
Theorem 3 (${1\%}$ inverse theorem for ${U^{d+1}(V)}$) Suppose that ${\hbox{char}({\bf F}) > d \geq 0}$. If ${f: V \rightarrow {\bf C}}$ is such that ${\|f\|_{L^\infty(V)} \leq 1}$ and ${\|f\|_{U^{d+1}(V)} \geq \epsilon}$, then there exists ${\phi \in \hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ such that ${|\langle f, e(\phi) \rangle_{L^2(V)}| \gg_{\epsilon,d,{\bf F}} 1}$.
This result is sometimes referred to as the inverse conjecture for the Gowers norm (in high, but bounded, characteristic). For small ${d}$, the claim is easy:
Exercise 2 Verify the cases ${d=0,1}$ of this theorem. (Hint: to verify the ${d=1}$ case, use the Fourier-analytic identities ${\|f\|_{U^2(V)} = (\sum_{\xi \in \hat V} |\hat f(\xi)|^4)^{1/4}}$ and ${\|f\|_{L^2(V)} = (\sum_{\xi \in \hat V} |\hat f(\xi)|^2)^{1/2}}$, where ${\hat V}$ is the space of all homomorphisms ${\xi: x \mapsto \xi \cdot x}$ from ${V}$ to ${{\bf R}/{\bf Z}}$, and ${\hat f(\xi) := \mathop{\bf E}_{x \in V} f(x) e(-\xi \cdot x)}$ are the Fourier coefficients of ${f}$.)
This conjecture is more difficult to establish for larger values of ${d}$. The ${d=2}$ case of the theorem was established by Ben Green and myself in the high characteristic case ${\hbox{char}({\bf F}) > 2}$; the low characteristic case ${\hbox{char}({\bf F}) = d = 2}$ was independently and simultaneously established by Samorodnitsky. The cases ${d>2}$ in the high characteristic case were established in two stages: firstly, a modification of the Furstenberg correspondence principle, due to Ziegler and myself, was used to convert the problem to an ergodic theory counterpart, and then a modification of the methods of Host-Kra and Ziegler was used to solve that counterpart, as done in this paper of Bergelson, Ziegler, and myself.
The situation with the low characteristic case in general is still unclear. In the high characteristic case, we saw from Notes 4 that one could replace the space of non-classical polynomials ${\hbox{Poly}_{\leq d}(V \rightarrow {\bf R}/{\bf Z})}$ in the above conjecture with the essentially equivalent space of classical polynomials ${\hbox{Poly}_{\leq d}(V \rightarrow {\bf F})}$. However, as we shall see below, this turns out not to be the case in certain low characteristic cases (a fact first observed by Lovett, Meshulam, and Samorodnitsky, and independently by Ben Green and myself), for instance if ${\hbox{char}({\bf F}) = 2}$ and ${d \geq 3}$; this is ultimately due to the existence in those cases of non-classical polynomials which exhibit no significant correlation with classical polynomials of equal or lesser degree. This distinction between classical and non-classical polynomials appears to be a rather non-trivial obstruction to understanding the low characteristic setting; it may be necessary to obtain a more complete theory of non-classical polynomials in order to fully settle this issue.
The inverse conjecture has a number of consequences. For instance, it can be used to establish the analogue of Szemerédi’s theorem in this setting:
Theorem 4 (Szemerédi’s theorem for finite fields) Let ${{\bf F} = {\bf F}_p}$ be a finite field, let ${\delta > 0}$, and let ${A \subset {\bf F}^n}$ be such that ${|A| \geq \delta |{\bf F}^n|}$. If ${n}$ is sufficiently large depending on ${p,\delta}$, then ${A}$ contains an (affine) line ${\{ x, x+r, \ldots, x+(p-1)r\}}$ for some ${x,r \in {\bf F}^n}$ with ${ r\neq 0}$.
Exercise 3 Use Theorem 4 to establish the following generalisation: with the notation as above, if ${k \geq 1}$ and ${n}$ is sufficiently large depending on ${p,\delta}$, then ${A}$ contains an affine ${k}$-dimensional subspace.
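Theorem 4 only guarantees lines once ${n}$ is large, but even tiny cases illustrate the phenomenon. Here is a brute-force Python sketch; the choices ${p=3}$, ${n=3}$, the random seed, and the density ${\approx 1/2}$ are all arbitrary.

```python
# Brute-force search for affine lines {x, x + r, ..., x + (p-1)r},
# r != 0, inside a pseudo-random subset A of F_3^3 of density ~1/2.
# Each geometric line is found several times (per base point/direction).
import random
from itertools import product

random.seed(1)
p, n = 3, 3
V = list(product(range(p), repeat=n))
A = {x for x in V if random.random() < 0.5}

def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

def scale(c, u):
    return tuple((c * a) % p for a in u)

lines = [(x, r) for x in A for r in V if r != (0,) * n
         and all(add(x, scale(c, r)) in A for c in range(1, p))]
print("|A| =", len(A), "of", len(V), "; line witnesses found:", len(lines))
```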
We will prove this theorem in two different ways, one using a density increment method, and the other using an energy increment method. We discuss some other applications below the fold.
Read the rest of this entry »
## 254B, Lecture Notes 4: Equidistribution of polynomials over finite fields
8 May, 2010 in 254B - Higher order Fourier analysis, math.CO, math.RA | Tags: equidistribution, finite fields, polynomials, rank | by Terence Tao | 8 comments
In the previous lectures, we have focused mostly on the equidistribution or linear patterns on a subset of the integers ${{\bf Z}}$, and in particular on intervals ${[N]}$. The integers are of course a very important domain to study in additive combinatorics; but there are also other fundamental model examples of domains to study. One of these is that of a vector space ${V}$ over a finite field ${{\bf F} = {\bf F}_p}$ of prime order. Such domains are of interest in computer science (particularly when ${p=2}$) and also in number theory; but they also serve as an important simplified “dyadic model” for the integers. See this survey article of Green for further discussion of this point.
The additive combinatorics of the integers ${{\bf Z}}$, and of vector spaces ${V}$ over finite fields, are analogous, but not quite identical. For instance, the analogue of an arithmetic progression in ${{\bf Z}}$ is a subspace of ${V}$. In many cases, the finite field theory is a little bit simpler than the integer theory; for instance, subspaces are closed under addition, whereas arithmetic progressions are only “almost” closed under addition in various senses. (For instance, ${[N]}$ is closed under addition approximately half of the time.) However, there are some ways in which the integers are better behaved. For instance, because the integers can be generated by a single generator, a homomorphism from ${{\bf Z}}$ to some other group ${G}$ can be described by a single group element ${g}$: ${n \mapsto g^n}$. However, to specify a homomorphism from a vector space ${V}$ to ${G}$ one would need to specify one group element for each dimension of ${V}$. Thus we see that there is a tradeoff when passing from ${{\bf Z}}$ (or ${[N]}$) to a vector space model; one gains a bounded torsion property, at the expense of conceding the bounded generation property. (Of course, if one wants to deal with arbitrarily large domains, one has to concede one or the other; the only additive groups that have both bounded torsion and boundedly many generators, are bounded.)
The starting point for this course (Notes 1) was the study of equidistribution of polynomials ${P: {\bf Z} \rightarrow {\bf R}/{\bf Z}}$ from the integers to the unit circle. We now turn to the parallel theory of equidistribution of polynomials ${P: V \rightarrow {\bf R}/{\bf Z}}$ from vector spaces over finite fields to the unit circle. Actually, for simplicity we will mostly focus on the classical case, when the polynomials in fact take values in the ${p^{th}}$ roots of unity (where ${p}$ is the characteristic of the field ${{\bf F} = {\bf F}_p}$). As it turns out, the non-classical case is also of importance (particularly in low characteristic), but the theory is more difficult; see these notes for some further discussion.
Read the rest of this entry »
## 254B, Notes 3: Linear patterns
23 April, 2010 in 254B - Higher order Fourier analysis, math.CO | Tags: generalised von Neumann theorem, Gowers uniformity norms, linear patterns | by Terence Tao | 14 comments
In the previous lecture notes, we used (linear) Fourier analysis to control the number of three-term arithmetic progressions ${a, a+r, a+2r}$ in a given set ${A}$. The power of the Fourier transform for this problem ultimately stemmed from the identity
$\displaystyle \mathop{\bf E}_{n,r \in {\bf Z}/N'{\bf Z}} 1_A(n) 1_A(n+r) 1_A(n+2r)$
$\displaystyle = \sum_{\alpha \in \frac{1}{N'}{\bf Z} / {\bf Z}} \hat 1_A(\alpha) \hat 1_A(-2\alpha) \hat 1_A(\alpha) \ \ \ \ \ (1)$
for any cyclic group ${{\bf Z}/N'{\bf Z}}$ and any subset ${A}$ of that group (analogues of this identity also exist for other finite abelian groups, and to a lesser extent to non-abelian groups also, although that is not the focus of my current discussion). As it turns out, linear Fourier analysis is not able to discern higher order patterns, such as arithmetic progressions of length four; we give some demonstrations of this below the fold, taking advantage of the polynomial recurrence theory from Notes 1.
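The identity (1) is exact and easy to verify numerically. The following Python sketch does so on ${{\bf Z}/15{\bf Z}}$ with an arbitrary test set ${A}$ (both choices are for illustration only).

```python
# Numerical check of identity (1): the average of 1_A(n)1_A(n+r)1_A(n+2r)
# equals the Fourier-side sum over xi of fhat(xi) * fhat(-2 xi) * fhat(xi).
import cmath

N = 15                                    # a small odd modulus (arbitrary)
A = {0, 1, 3, 7, 8, 12}                   # an arbitrary test set in Z/NZ
f = [1.0 if m in A else 0.0 for m in range(N)]

def fhat(xi):                             # \hat 1_A at frequency xi/N
    return sum(f[m] * cmath.exp(-2j * cmath.pi * m * xi / N)
               for m in range(N)) / N

lhs = sum(f[m] * f[(m + r) % N] * f[(m + 2 * r) % N]
          for m in range(N) for r in range(N)) / N**2
rhs = sum(fhat(xi) * fhat((-2 * xi) % N) * fhat(xi) for xi in range(N))
print(round(lhs, 12), round(rhs.real, 12), "difference:", abs(lhs - rhs))
```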
The main objective of this course is to introduce the (still nascent) theory of higher order Fourier analysis, which is capable of studying higher order patterns. The full theory is still rather complicated (at least, at our present level of understanding). However, one aspect of the theory is relatively simple, namely that we can largely reduce the study of arbitrary additive patterns to the study of a single type of additive pattern, namely the parallelopipeds
$\displaystyle ( x + \omega_1 h_1 + \ldots + \omega_d h_d )_{\omega_1,\ldots,\omega_d \in \{0,1\}}. \ \ \ \ \ (2)$
Thus for instance, for ${d=1}$ one has the line segments
$\displaystyle x, x+h_1 \ \ \ \ \ (3)$
for ${d=2}$ one has the parallelograms
$\displaystyle x, x+h_1, x+h_2, x+h_1+h_2, \ \ \ \ \ (4)$
for ${d=3}$ one has the parallelopipeds
$\displaystyle x, x+h_1, x+h_2, x+h_3, x+h_1+h_2, x+h_1+h_3, x+h_2+h_3, x+h_1+h_2+h_3. \ \ \ \ \ (5)$
These patterns are particularly pleasant to handle, thanks to the large number of symmetries available on the discrete cube ${\{0,1\}^d}$. For instance, whereas establishing the presence of arbitrarily long arithmetic progressions in dense sets is quite difficult (Szemerédi’s theorem), establishing arbitrarily high-dimensional parallelopipeds is much easier:
Exercise 1 Let ${A \subset [N]}$ be such that ${|A| > \delta N}$ for some ${0 < \delta \leq 1}$. If ${N}$ is sufficiently large depending on ${\delta}$, show that there exists an integer ${1 \leq h \ll 1/\delta}$ such that ${|A \cap (A+h)| \gg \delta^2 N}$. (Hint: obtain upper and lower bounds on the set ${\{ (x,y) \in A \times A: x < y \leq x + 10/\delta \}}$.)
Exercise 2 (Hilbert cube lemma) Let ${A \subset [N]}$ be such that ${|A| > \delta N}$ for some ${0 < \delta \leq 1}$, and let ${d \geq 1}$ be an integer. Show that if ${N}$ is sufficiently large depending on ${\delta,d}$, then ${A}$ contains a parallelopiped of the form (2), with ${1 \leq h_1,\ldots,h_d \ll_\delta 1}$ positive integers. (Hint: use the previous exercise and induction.) Conclude that if ${A \subset {\bf Z}}$ has positive upper density, then it contains infinitely many such parallelopipeds for each ${d}$.
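For a concrete feel for Exercise 2, here is a brute-force Python sketch locating a parallelogram (the ${d=2}$ case of (2)) inside a pseudo-random set of density about ${1/2}$; the values of ${N}$, the seed, and the bound on ${h_1,h_2}$ are arbitrary illustrative choices.

```python
# Find a parallelogram x, x+h1, x+h2, x+h1+h2 inside a pseudo-random
# set A of density ~1/2; the bound 20 on h1, h2 mirrors the O(1/delta)
# size of the shifts in the hint to Exercise 1.
import random

random.seed(3)
N = 200
A = {m for m in range(1, N + 1) if random.random() < 0.5}

found = next(((x, h1, h2) for x in sorted(A)
              for h1 in range(1, 20) for h2 in range(1, 20)
              if {x + h1, x + h2, x + h1 + h2} <= A), None)
print("parallelogram (x, h1, h2):", found)
```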
Exercise 3 Show that if ${q \geq 1}$ is an integer, and ${d}$ is sufficiently large depending on ${q}$, then for any parallelopiped (2) in the integers ${{\bf Z}}$, there exists ${\omega_1,\ldots,\omega_d \in \{0,1\}}$, not all zero, such that ${x + h_1 \omega_1 + \ldots + h_d \omega_d = x \hbox{ mod } q}$. (Hint: pigeonhole the ${h_i}$ in the residue classes modulo ${q}$.) Use this to conclude that if ${A}$ is the set of all integers ${n}$ such that ${|n-km!| \geq m}$ for all integers ${k, m \geq 1}$, then ${A}$ is a set of positive upper density (and also positive lower density) which does not contain any infinite parallelopipeds (thus one cannot take ${d=\infty}$ in the Hilbert cube lemma).
The standard way to control the parallelogram patterns (and thus, all other (finite complexity) linear patterns) are the Gowers uniformity norms
$\displaystyle \| f\|_{U^d(G)} := \left({\bf E}_{x,h_1,\ldots,h_d \in G} \prod_{(\omega_1,\ldots,\omega_d) \in \{0,1\}^d} {\mathcal C}^{\omega_1+\ldots+\omega_d} f(x+\omega_1 h_1 + \ldots + \omega_d h_d)\right)^{1/2^d} \ \ \ \ \ (6)$
with ${f: G \rightarrow {\bf C}}$ a function on a finite abelian group ${G}$, and ${{\mathcal C}: z \mapsto \overline{z}}$ is the complex conjugation operator; analogues of this norm also exist for group-like objects such as the progression ${[N]}$, and also for measure-preserving systems (where they are known as the Gowers-Host-Kra uniformity seminorms, see this paper of Host-Kra for more discussion). In this set of notes we will focus on the basic properties of these norms; the deepest fact about them, known as the inverse conjecture for these norms, will be discussed in later notes.
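For ${d=2}$, definition (6) can be checked against the Fourier-side formula ${\|f\|_{U^2} = (\sum_\xi |\hat f(\xi)|^4)^{1/4}}$ quoted in Exercise 2 of Notes 5 above. Here is a short Python sketch on ${{\bf Z}/12{\bf Z}}$ with an arbitrary test function.

```python
# Verify ||f||_{U^2(Z/NZ)} from definition (6) against the Fourier
# formula (sum over xi of |fhat(xi)|^4)^(1/4); N and f are arbitrary.
import cmath

N = 12
f = [cmath.exp(2j * cmath.pi * (m * m) / N) * (1 + 0.3 * (m % 3))
     for m in range(N)]                   # an arbitrary test function on Z/NZ

avg = sum(f[x] * f[(x + h1) % N].conjugate() * f[(x + h2) % N].conjugate()
          * f[(x + h1 + h2) % N]
          for x in range(N) for h1 in range(N) for h2 in range(N)) / N**3
u2_from_definition = max(avg.real, 0.0) ** 0.25

fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * x * xi / N) for x in range(N)) / N
        for xi in range(N)]
u2_from_fourier = sum(abs(c) ** 4 for c in fhat) ** 0.25
print(u2_from_definition, u2_from_fourier)   # agree up to rounding error
```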
Read the rest of this entry »
## 254B, Notes 2: Roth’s theorem
8 April, 2010 in 254B - Higher order Fourier analysis, math.CA, math.CO | Tags: density increment argument, energy increment argument, Fourier analysis, Roth's theorem | by Terence Tao | 9 comments
We now give a basic application of Fourier analysis to the problem of counting additive patterns in sets, namely the following famous theorem of Roth:
Theorem 1 (Roth’s theorem) Let ${A}$ be a subset of the integers ${{\bf Z}}$ whose upper density
$\displaystyle \overline{\delta}(A) := \limsup_{N \rightarrow \infty} \frac{|A \cap [-N,N]|}{2N+1}$
is positive. Then ${A}$ contains infinitely many arithmetic progressions ${a, a+r, a+2r}$ of length three, with ${a \in {\bf Z}}$ and ${r>0}$.
This is the first non-trivial case of Szemerédi’s theorem, which is the same assertion but with length three arithmetic progressions replaced by progressions of length ${k}$ for any ${k}$.
As it turns out, one can prove Roth’s theorem by an application of linear Fourier analysis – by comparing the set ${A}$ (or more precisely, the indicator function ${1_A}$ of that set, or of pieces of that set) against linear characters ${n \mapsto e(\alpha n)}$ for various frequencies ${\alpha \in {\bf R}/{\bf Z}}$. There are two extreme cases to consider (which are model examples of a more general dichotomy between structure and randomness). One is when ${A}$ is aligned up almost completely with one of these linear characters, for instance by being a Bohr set of the form
$\displaystyle \{ n \in {\bf Z}: \| \alpha n - \theta \|_{{\bf R}/{\bf Z}} < \epsilon \}$
or more generally of the form
$\displaystyle \{ n \in {\bf Z}: \alpha n \in U \}$
for some multi-dimensional frequency ${\alpha \in {\bf T}^d}$ and some open set ${U}$. In this case, arithmetic progressions can be located using the equidistribution theory of the previous set of notes. At the other extreme, one has Fourier-uniform or Fourier-pseudorandom sets, whose correlation with any linear character is negligible. In this case, arithmetic progressions can be produced in abundance via a Fourier-analytic calculation.
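A toy numerical illustration of this dichotomy (all parameters arbitrary): both a Bohr-type set and a random set of the same density contain an abundance of three-term progressions, though for the two different reasons just described.

```python
# Count 3-APs n, n+r, n+2r (mod N) in a Bohr-type set {n : ||alpha n|| < eps}
# and in a random set of the same size; N, alpha, eps, seed are arbitrary.
import math, random

random.seed(2)
N, alpha, eps = 601, math.sqrt(2), 0.15
bohr = {n for n in range(N) if min((alpha * n) % 1, 1 - (alpha * n) % 1) < eps}
rand = set(random.sample(range(N), len(bohr)))

def count_3aps(A):
    return sum(1 for n in range(N) for r in range(1, N)
               if n in A and (n + r) % N in A and (n + 2 * r) % N in A)

for name, A in (("Bohr-type set", bohr), ("random set", rand)):
    print(name, "size", len(A), "3-APs:", count_3aps(A))
```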
To handle the general case, one must somehow synthesise together the argument that deals with the structured case with the argument that deals with the random case. There are several known ways to do this, but they can be basically classified into two general methods, namely the density increment argument (or ${L^\infty}$ increment argument) and the energy increment argument (or ${L^2}$ increment argument).
The idea behind the density increment argument is to introduce a dichotomy: either the object ${A}$ being studied is pseudorandom (in which case one is done), or else one can use the theory of the structured objects to locate a sub-object of significantly higher “density” than the original object. As the density cannot exceed one, one should thus be done after a finite number of iterations of this dichotomy. This argument was introduced by Roth in his original proof of the above theorem.
The idea behind the energy increment argument is instead to decompose the original object ${A}$ into two pieces (and, sometimes, a small additional error term): a structured component that captures all the structured objects that have significant correlation with ${A}$, and a pseudorandom component which has no significant correlation with any structured object. This decomposition usually proceeds by trying to maximise the “energy” (or ${L^2}$ norm) of the structured component, or dually by trying to minimise the energy of the residual between the original object and the structured object. This argument appears for instance in the proof of the Szemerédi regularity lemma (which, not coincidentally, can also be used to prove Roth’s theorem), and is also implicit in the ergodic theory approach to such problems (through the machinery of conditional expectation relative to a factor, which is a type of orthogonal projection, the existence of which is usually established via an energy increment argument). However, one can also deploy the energy increment argument in the Fourier analytic setting, to give an alternate Fourier-analytic proof of Roth’s theorem that differs in some ways from the density increment proof.
In these notes we give both two Fourier-analytic proofs of Roth’s theorem, one proceeding via the density increment argument, and the other by the energy increment argument. As it turns out, both of these arguments extend to establish Szemerédi’s theorem, and more generally in counting other types of patterns, but this is non-trivial (requiring some sort of inverse conjecture for the Gowers uniformity norms in both cases); we will discuss this further in later notes.
Read the rest of this entry »
## 254B, Notes 1: Equidistribution of polynomial sequences in tori
28 March, 2010 in 254B - Higher order Fourier analysis, math.CA, math.DS | Tags: equidistribution, Ratner's theorem, torii, ultralimit analysis, van der Corput inequality, Weyl's equidistribution theorem | by Terence Tao | 19 comments
(Linear) Fourier analysis can be viewed as a tool to study an arbitrary function ${f}$ on (say) the integers ${{\bf Z}}$, by looking at how such a function correlates with linear phases such as ${n \mapsto e(\xi n)}$, where ${e(x) := e^{2\pi i x}}$ is the fundamental character, and ${\xi \in {\bf R}}$ is a frequency. These correlations control a number of expressions relating to ${f}$, such as the expected behaviour of ${f}$ on arithmetic progressions ${n, n+r, n+2r}$ of length three.
In this course we will be studying higher-order correlations, such as the correlation of ${f}$ with quadratic phases such as ${n \mapsto e(\xi n^2)}$, as these will control the expected behaviour of ${f}$ on more complex patterns, such as arithmetic progressions ${n, n+r, n+2r, n+3r}$ of length four. In order to do this, we must first understand the behaviour of exponential sums such as
$\displaystyle \sum_{n=1}^N e( \alpha n^2 ).$
Such sums are closely related to the distribution of expressions such as ${\alpha n^2 \hbox{ mod } 1}$ in the unit circle ${{\bf T} := {\bf R}/{\bf Z}}$, as ${n}$ varies from ${1}$ to ${N}$. More generally, one is interested in the distribution of polynomials ${P: {\bf Z}^d \rightarrow {\bf T}}$ of one or more variables taking values in a torus ${{\bf T}}$; for instance, one might be interested in the distribution of the quadruplet ${(\alpha n^2, \alpha (n+r)^2, \alpha(n+2r)^2, \alpha(n+3r)^2)}$ as ${n,r}$ both vary from ${1}$ to ${N}$. Roughly speaking, once we understand these types of distributions, then the general machinery of quadratic Fourier analysis will then allow us to understand the distribution of the quadruplet ${(f(n), f(n+r), f(n+2r), f(n+3r))}$ for more general classes of functions ${f}$; this can lead for instance to an understanding of the distribution of arithmetic progressions of length ${4}$ in the primes, if ${f}$ is somehow related to the primes.
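As a quick illustration of the cancellation at stake, here is a small R sketch (my own, with arbitrary choices of the frequency; not part of the original notes) comparing the normalised Weyl sum for an irrational and a rational ${\alpha}$:

```
e <- function(x) exp(2i * pi * x)   # the fundamental character e(x) = e^{2 pi i x}
N <- 10000
# irrational alpha: alpha n^2 mod 1 equidistributes, so the normalised sum is small
Mod(sum(e(sqrt(2) * (1:N)^2))) / N
# rational alpha = 1/3: n^2 mod 3 cycles through 1,1,0, so no cancellation in the limit
Mod(sum(e((1/3) * (1:N)^2))) / N    # stays near 1/sqrt(3)
```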
More generally, to find arithmetic progressions such as ${n,n+r,n+2r,n+3r}$ in a set ${A}$, it would suffice to understand the equidistribution of the quadruplet ${(1_A(n), 1_A(n+r), 1_A(n+2r), 1_A(n+3r))}$ in ${\{0,1\}^4}$ as ${n}$ and ${r}$ vary. This is the starting point for the fundamental connection between combinatorics (and more specifically, the task of finding patterns inside sets) and dynamics (and more specifically, the theory of equidistribution and recurrence in measure-preserving dynamical systems, which is a subfield of ergodic theory). This connection was explored in one of my previous classes; it will also be important in this course (particularly as a source of motivation), but the primary focus will be on finitary, and Fourier-based, methods.
The theory of equidistribution of polynomial orbits was developed in the linear case by Dirichlet and Kronecker, and in the polynomial case by Weyl. There are two regimes of interest; the (qualitative) asymptotic regime in which the scale parameter ${N}$ is sent to infinity, and the (quantitative) single-scale regime in which ${N}$ is kept fixed (but large). Traditionally, it is the asymptotic regime which is studied, which connects the subject to other asymptotic fields of mathematics, such as dynamical systems and ergodic theory. However, for many applications (such as the study of the primes), it is the single-scale regime which is of greater importance. The two regimes are not directly equivalent, but are closely related: the single-scale theory can usually be used to derive analogous results in the asymptotic regime, and conversely the arguments in the asymptotic regime can serve as a simplified model to show the way to proceed in the single-scale regime. The analogy between the two can be made tighter by introducing the (qualitative) ultralimit regime, which is formally equivalent to the single-scale regime (except for the fact that explicitly quantitative bounds are abandoned in the ultralimit), but resembles the asymptotic regime quite closely.
We will view the equidistribution theory of polynomial orbits as a special case of Ratner’s theorem, which we will study in more generality later in this course.
For the finitary portion of the course, we will be using asymptotic notation: ${X \ll Y}$, ${Y \gg X}$, or ${X = O(Y)}$ denotes the bound ${|X| \leq CY}$ for some absolute constant ${C}$, and if we need ${C}$ to depend on additional parameters then we will indicate this by subscripts, e.g. ${X \ll_d Y}$ means that ${|X| \leq C_d Y}$ for some ${C_d}$ depending only on ${d}$. In the ultralimit theory we will use an analogue of asymptotic notation, which we will review later in these notes.
Read the rest of this entry »
## Course announcement: 254B, Higher order Fourier analysis
9 March, 2010 in 254B - Higher order Fourier analysis, admin | by Terence Tao | 8 comments
Starting on Monday, March 29, I will begin my graduate class for the spring quarter, entitled “Higher order Fourier analysis”. While classical Fourier analysis is concerned with correlations with linear phases such as $x \mapsto e(\alpha x)$ (where $e(x) := e^{2\pi i x}$), quadratic and higher order Fourier analysis is concerned with quadratic and higher order phases such as $x \mapsto e(\alpha x^2)$, $x \mapsto e(\alpha x^3)$, etc.
In recent years, it has become clear that certain problems in additive combinatorics are naturally associated with a certain order of Fourier analysis. For instance, problems involving arithmetic progressions of length three are connected with classical Fourier analysis; problems involving progressions of length four are connected with quadratic Fourier analysis; problems involving progressions of length five are connected with cubic Fourier analysis; and so forth. The reasons for this will be discussed later in the course, but we will just give one indication of the connection here: linear phases $x \mapsto e(\alpha x)$ and arithmetic progressions $n, n+r, n+2r$ of length three are connected by the identity
$e(\alpha n) e(\alpha(n+r))^{-2} e(\alpha(n+2r)) = 1,$
while quadratic phases $x \mapsto e(\alpha x^2)$ and arithmetic progressions $n, n+r, n+2r, n+3r$ of length four are connected by the identity
$e(\alpha n^2) e(\alpha(n+r)^2)^{-3} e(\alpha(n+2r)^2)^3 e(\alpha(n+3r)^2)^{-1} = 1,$
and so forth.
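The second identity is just the statement that the third finite difference of a quadratic vanishes; here is a one-line numerical check in R (an illustrative sketch with arbitrary values of $\alpha$, $n$, $r$, not part of the announcement):

```
e <- function(x) exp(2i * pi * x)
alpha <- sqrt(2); n <- 7; r <- 3
# evaluates to 1 (up to floating-point error) for any alpha, n, r
e(alpha*n^2) * e(alpha*(n+r)^2)^(-3) * e(alpha*(n+2*r)^2)^3 * e(alpha*(n+3*r)^2)^(-1)
```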
It turns out that in order to get a complete theory of higher order Fourier analysis, the simple polynomial phases of the type given above do not suffice. One must also consider more exotic objects such as locally polynomial phases, bracket polynomial phases (such as $n \mapsto e( \lfloor \alpha n \rfloor \beta n )$), and/or nilsequences (sequences arising from an orbit in a nilmanifold $G/\Gamma$). These (closely related) families of objects will be introduced later in the course.
Classical Fourier analysis revolves around the Fourier transform and the inversion formula. Unfortunately, we have not yet been able to locate similar identities in the higher order setting, but one can establish weaker results, such as higher order structure theorems and arithmetic regularity lemmas, which are sufficient for many purposes, such as proving Szemeredi’s theorem on arithmetic progressions, or my theorem with Ben Green that the primes contain arbitrarily long arithmetic progressions. These results are powered by the inverse conjecture for the Gowers norms, which is now extremely close to being fully resolved.
Our focus here will primarily be on the finitary approach to the subject, but there is also an important infinitary aspect to the theory, originally coming from ergodic theory but more recently from nonstandard analysis (or more precisely, ultralimit analysis) as well; we will touch upon these perspectives in the course, though they will not be the primary focus. If time permits, we will also present the number-theoretic applications of this machinery to counting arithmetic progressions and other linear patterns in the primes.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 285, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226292967796326, "perplexity_flag": "head"}
|
http://nrich.maths.org/601/solution
|
# Eight Dominoes
##### Stage: 3 Challenge Level:
Congratulations to Katherine, from Maidstone Girls' Grammar School, who reasoned that there are the 24 unique domino solutions given below, where each arrangement has the 5-3 domino horizontally. Katherine gave the answer 384. She explained that there are 3 columns of dominoes which can be arranged in six different ways. Each of these arrangements can be varied by swapping the rows, giving 6 x 4 = 24 arrangements. Katherine then explained that from each of these 24 arrangements you can find 8 others: 4 which are mirror images and 4 which are rotations. This gives 192 (24 x 8) patterns altogether. Then if you take the 2-2 you can also turn it round 180 degrees to form twice as many solutions. This gives 384 (192 x 2) domino square solutions.
James of Hethersett High School explained that "I have found that the domino 5-3 is the key domino, as wherever it goes it has to be followed by two blanks." He explained that, from one solution, he found different patterns by swapping the rows or columns. Daniel and Michael of Necton Middle School, Norfolk, found one of the solutions. Camilla of Maidstone Girls' Grammar School discovered that in her solutions certain blocks stayed next to one another and concluded that having found one solution all the others are different ways of re-arranging it.
5-3 B B
1-2 3 2
B-1 1 6
2-2 4 B
5-3 B B
1-2 2 3
B-1 6 1
2-2 B 4
B 5-3 B
3 1-2 2
1 B-1 6
4 2-2 B
B 5-3 B
2 1-2 3
6 B-1 1
B 2-2 4
B B 5-3
3 2 1-2
1 6 B-1
4 B 2-2
B B 5-3
2 3 1-2
6 1 B-1
B 4 2-2
5-3 B B
1-2 3 2
2-2 4 B
B-1 1 6
5-3 B B
1-2 2 3
2-2 B 4
B-1 6 1
B 5-3 B
3 1-2 2
4 2-2 B
1 B-1 6
B 5-3 B
2 1-2 3
B 2-2 4
6 B-1 1
B B 5-3
3 2 1-2
4 B 2-2
1 6 B-1
B B 5-3
2 3 1-2
B 4 2-2
6 1 B-1
1-2 3 2
5-3 B B
B-1 1 6
2-2 4 B
1-2 2 3
5-3 B B
B-1 6 1
2-2 B 4
3 1-2 2
B 5-3 B
1 B-1 6
4 2-2 B
2 1-2 3
B 5-3 B
6 B-1 1
B 2-2 4
3 2 1-2
B B 5-3
1 6 B-1
4 B 2-2
2 3 1-2
B B 5-3
6 1 B-1
B 4 2-2
1-2 3 2
5-3 B B
2-2 4 B
B-1 1 6
1-2 2 3
5-3 B B
2-2 B 4
B-1 6 1
3 1-2 2
B 5-3 B
4 2-2 B
1 B-1 6
2 1-2 3
B 5-3 B
B 2-2 4
6 B-1 1
3 2 1-2
B B 5-3
4 B 2-2
1 6 B-1
2 3 1-2
B B 5-3
B 4 2-2
6 1 B-1
Reflections in the diagonals give patterns with the 5-3 domino placed vertically and reflections in the vertical mirror line give patterns with this domino placed as 3-5. Reflections in the horizontal mirror line turn the vertically placed dominoes upside down.
Carrie, who goes to Saint Thomas More School, has found another solution, based on this pattern of dominoes:
The numbers are:
1 2 3 2
B 2 5 1
1 4 B 3
6 B B 2
No-one else has found a different solution that is not derived from the first basic pattern. You may like to investigate patterns such as the one below, or investigate Carrie's pattern further.
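Assuming the target condition is that every row and every column of the square has the same total (both the basic pattern and Carrie's square satisfy this with a total of 8), such a check is quick to automate; here is an illustrative R sketch of Carrie's square, with blanks entered as 0:

```
sq <- matrix(c(1, 2, 3, 2,
               0, 2, 5, 1,
               1, 4, 0, 3,
               6, 0, 0, 2), nrow = 4, byrow = TRUE)
rowSums(sq)   # 8 8 8 8
colSums(sq)   # 8 8 8 8
```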
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8782286047935486, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/136901/if-m-and-n-are-positive-integers-then-f-m-f-n-f-m-n?answertab=oldest
|
# If $m$ and $n$ are positive integers, then $(F_m,F_n)=F_{(m,n)}$.
Edit: The $F$'s are Fibonacci numbers.
I need an idea on how to show the following:
If $m$ and $n$ are positive integers, then $(F_m,F_n)=F_{(m,n)}$.
I believe that using the fact that $F_{m+n}=F_mF_{n+1}+F_nF_{m-1}$ could come in handy. Moreover, Euclid's algorithm may as well be needed. But I am not certain, as there may be better methods to achieve this.
Thanks in advance.
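Before diving into a proof, a quick numerical sanity check of both the identity and the claim may help; this is an illustrative R sketch (the recursive `gcd` helper is mine, and indexing is 1-based):

```
fib <- numeric(30); fib[1] <- 1; fib[2] <- 1
for (k in 3:30) fib[k] <- fib[k-1] + fib[k-2]
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)

m <- 9; n <- 6
fib[m+n] == fib[m] * fib[n+1] + fib[n] * fib[m-1]   # TRUE: the addition identity
gcd(fib[m], fib[n]) == fib[gcd(m, n)]               # TRUE: the claim to be proved
```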
-
What is $f_m$, $f_n$? – Daan Michiels Apr 25 '12 at 18:16
Hint. $F_{kn}$ is divisible by $F_n$ – Arturo Magidin Apr 25 '12 at 18:22
This is most probably a duplicate though I can't find the link right now. – lhf Apr 25 '12 at 18:23
Josué: The proof is induction on $n+m$, so this is inductive hypothesis you can assume. – sdcvvc Apr 25 '12 at 18:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9350156784057617, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/95822/using-the-catalan-numbers?answertab=votes
|
# Using the Catalan numbers
Here's a question we got for homework:
A soccer match between team A and team B ends with a 9-9 tie. It is known that at some point of the game team A had the lead and that later on it was team B who had it. How many series of 18 goals can represent the course of the game?
Hint: use the double-reflection technique.
So, this hint doesn't really help me as I don't understand what a double-reflection is. Other than that: I thought about counting all possible series, which is the Catalan number C9, and then subtracting all series where B scored the first goal, but it's a little vague in my mind.
Any hints that would get me started would be great. Thanks!
-
– Did Jan 2 '12 at 14:47
I understand the reflection principle, but what does double reflection mean? It's probably a question for my TA but still... Also - the way I understand the question, A must score the first goal – yotamoo Jan 2 '12 at 14:51
I don't think there's an implication that team A must score the first goal. They had the lead at some point. – joriki Jan 2 '12 at 14:52
Ok, and if so - how do I start? – yotamoo Jan 2 '12 at 15:06
## 4 Answers
Hint: it's probably easier to count the ways that the condition can fail: that is, the number of series where B is winning/tied up to a certain turning point, and then A is winning/tied for the rest of the tournament.
Here's a canonical Catalan-type picture demonstrating this:
The red-and-yellow marked point is the turning point here. Now, how can you use the reflection technique on this to get a Catalan graph where the black line is always above the diagonal? Can you use this to finish the problem?
-
If I reflect from the marked point and on, then I count all series where B is winning/tied (or A never lead). Is that all the ways for the condition to fail? What about a series where B never leads? Should I double the number I find to include both options? – yotamoo Jan 2 '12 at 16:32
I wasn't able to solve the problem based only on Lopsy's hint, so here's a bit more.
First, it's all well and good to apply nifty reflection tricks, but they're a lot easier to find if you already know the result you're aiming for; so let's first mechanically derive the result using generating functions and then think about how to get it more elegantly.
The sequences that don't fulfill the requirement consist of a segment (possibly empty) in which $B$ is in the lead, followed by a segment (possibly empty) in which $A$ is in the lead. Such segments where the lead doesn't change are counted by the Catalan numbers, so these invalid sequences are counted by a convolution of the Catalan numbers with themselves (with the sum running over the point where the lead changes). In terms of generating functions, that means that the generating function $G$ of the invalid sequences is the square of the generating function $C$ of the Catalan numbers. With
$$C(x)=\frac{1-\sqrt{1-4x}}{2x}\;,$$
that yields
$$G(x)=C(x)^2=\frac1x\left(\frac{1-\sqrt{1-4x}}x-1\right)=\frac{C(x)-1}x\;.$$
Thus, $G$ is just $C$ with the constant term removed and shifted down by one, that is, $G_n=C_{n+1}$.
Knowing the result, it's a bit easier to see how to apply reflection. The problem in pursuing Lopsy's hint is that it's not obvious how to get a bijection – it's easy to reflect the part below the diagonal upward, but it's not clear what bijection that establishes. Knowing that we want to end up with the Catalan numbers one higher, we can use the extra slot to make the reflected sequence unique: By inserting an up-step before the reflected segment and a down-step after it, we get a bijection from the invalid sequences to the diagonal-avoiding sequences with two more steps, since the turning point is now uniquely marked as the last intersection with the diagonal in the new sequence.
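To make the count concrete, here is a small brute-force check in R (my own sketch, not part of the original answer): it enumerates all $\binom{2n}{n}$ goal orders for small $n$, tests the condition directly, and compares with $\binom{2n}{n}-C_{n+1}$.

```
count_valid <- function(n) {
  idx <- combn(2 * n, n)                 # columns give the positions of A's goals
  total <- 0
  for (col in seq_len(ncol(idx))) {
    s <- rep(-1, 2 * n); s[idx[, col]] <- 1
    cs <- cumsum(s)                      # A's lead after each goal
    firstA <- which(cs > 0)[1]           # first moment A is ahead
    if (!is.na(firstA) && any(cs[firstA:(2 * n)] < 0)) total <- total + 1
  }
  total
}
catalan <- function(k) choose(2 * k, k) / (k + 1)
sapply(2:6, count_valid)                                    # brute force
sapply(2:6, function(n) choose(2 * n, n) - catalan(n + 1))  # formula: same values
```

For $n=9$ the formula gives $\binom{18}{9}-C_{10}=48620-16796=31824$.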
-
The convolution of Catalan numbers which yields $G=C^2$ in your answer corresponds to @Lopsy's decomposition of each (in)admissible path into a path from (0,0) to some (x+1,x) which stays above the diagonal except at its endpoint, and a path from (x,x) to (9,10) which stays below the diagonal except at its endpoint. – Did Jan 2 '12 at 21:00
@Didier: a) I don't understand why you're including an additional segment in the paths -- wouldn't it be more straightforward to say that the decomposition is into a path from (0,0) to (x,x) and one from (x,x) to (9,9)? b) I agree that this decomposition corresponds to the convolution; but I don't see how this gives a hint how to use reflection to set up a bijection. – joriki Jan 2 '12 at 21:27
Re (a), adding these elementary segments is not needed, but I mentioned the point (x+1,x) (and the corresponding one (9,10)) to make clear the reason why the decompoosition is bijective: one does not cut the global path at the first hitting of the diagonal but at its first crossing of the diagonal. Re (b), I guess the idea is to reflect the second portion so as to have two similar paths in succession. Here again I agree with you that this is not needed. – Did Jan 2 '12 at 21:39
@Didier: Sorry, I might be being a bit dense, but I don't understand. I understand that the path is cut at the first crossing, not at the first hit, but once it's reflected, how does one decide which of the hits to turn into a crossing to get back to the original and make the mapping bijective? Or are you somehow using these additional segments in a similar way as I did in my answer to prevent any further hits in the reflected part? (I took Lopsy's hint to mean that one should simply reflect at the diagonal.) – joriki Jan 2 '12 at 22:05
Of course there is no bijection with the set of paths of length 18 entirely above the diagonal! (Otherwise one would have G=C.) Each such path p corresponds to as many admissible paths as the number n(p) of times it hits the diagonal. Surely true combinatorists know by heart that the generating function C^2 enumerates the collection of the paths p weighted by n(p)... – Did Jan 2 '12 at 22:40
This follows closely Lopsy's suggestion and Joriki's answer. I copy here my answer to a problem from sci.math.
Question: Suppose there are $n$ '$-1$' and $n$ '$+1$'. What is the recurrence relation for the permutations where all the subtotals beginning from the left is non-negative?
Answer: Let us call an arrangement of $n$ '$+1$'s and $n$ '$-1$'s a walk of type $n$. Let us also call a walk that has no negative partial sum a unilateral walk.
Let $w(n)$ be the number of unilateral walks of type $n$. Let us classify these walks by the type of their smallest initial subwalk. Those whose smallest initial subwalk is of type $k$ look like this: $$+1<\text{a unilateral walk of type }k{-}1>-1<\text{a unilateral walk of type }n{-}k>$$ By considering all possible types of initial subwalk, we get the following recursive relation: $$w(n) = w(0)w(n-1) + w(1)w(n-2) + w(2)w(n-3) + \dots + w(n-1)w(0)\tag{1}$$ with the initial condition that $w(0) = 1$.
Now that we have the recursive relation, let's try to find a closed form. The best way is to look at the generating function: $$f(x) = w(0) + w(1)x + w(2)x^2 + w(3)x^3 + \dots\tag{2}$$ The recursive relation $(1)$ gives $f(x) = 1 + xf(x)^2$. Solving this with the quadratic formula gives $f(x) = \frac{1 - \sqrt{1-4x}}{2x}$. We can use the binomial theorem to get the power series for $\sqrt{1-4x}$, subtract that from $1$, and divide by $2x$. This gives $$f(x) = 1 + x + 2x^2 + 5x^3 + 14x^4 + \dots + \frac{1}{n+1}\binom{2n}{n} x^n + \dots\tag{3}$$ And equating the coefficients of $(2)$ and $(3)$ we get $w(n) = \frac{1}{n+1}\binom{2n}{n}$.
Since there are $\binom{2n}{n}$ walks of type $n$, subtracting the unilateral walks in both directions (those with no negative partial sum and, by symmetry, those with no positive partial sum), there are $$\frac{n-1}{n+1}\binom{2n}{n}\tag{4}$$ walks of type $n$ whose partial sums are both positive and negative.
-
I don't understand what the above posters are talking about but I think this problem is straightforward. Basically A is in the lead for $r$ goals and then B takes the lead for the other $18-r$ goals.
So the answer is just:
$\sum_{r=1}^{17} C_r C_{18-r}$ where $C_n$ is the Catalan number $\frac{1}{n+1}{2n \choose n}$
-
This only works if you assume that after $B$ has taken the lead, $A$ never takes the lead anymore... – TMM Mar 25 '12 at 17:59
Yup you're right, I interpreted the question wrongly I guess haha – dragoncharmer Mar 26 '12 at 23:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533986449241638, "perplexity_flag": "head"}
|
http://nrich.maths.org/719/solution
|
# Cyclic Quad Jigsaw
##### Stage: 4 Challenge Level:
We had two very clear solutions to this problem - well done to James, from Poole Grammar School, and Dylan, who did not give his school. James's solution is shown here.
All five of the small quadrilaterals in the above shape are cyclic, and by using the fact that opposite angles in a cyclic quadrilateral add up to $180 ^{\circ}$ we can prove that the large quadrilateral made up of the five smaller ones is also cyclic.
Opposite angles in each cyclic quadrilateral add up to $180^{\circ}$, so we can write expressions for angles $a$, $c$, and $e$.
$$a = 180^{\circ}- b$$ $$c = 180^{\circ}- d$$ $$e = 180^{\circ}- f$$
Angles $a$, $c$, and $e$ are all round the same point therefore: $$360^{\circ} = a + c + e$$ Substituting in the expressions for angles c and e in we get: $$360^{\circ} = a + (180^{\circ} - d) + (180^{\circ} - f)$$
Simplifying this we get: $$360^{\circ} = a + 360^{\circ} - d - f$$ $$a = d + f$$
Now we follow exactly the same working with the other side of the quadrilateral which gives us the equation $$b = h + j$$
We also know that $a + b = 180^{\circ}$ because they are opposite angles in a cyclic quadrilateral. Substituting in the expressions for $a$ and $b$ gives us $$d + f + h + j = 180^{\circ}$$
So $$(d + h) = 180^{\circ} - (f + j)$$
Looking back at the original diagram we can see that $(d + h)$ and $(f +j)$ are opposite angles in the quadrilateral and because $(d + h) = 180^{\circ} - (f + j)$ the quadrilateral must be cyclic.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422343373298645, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/35478/which-angle-causes-an-object-to-land-quickest?answertab=active
|
# Which angle causes an object to land quickest?
Say a canon is where the circle is, and it shoots two different canonballs at different angles, but at the same speed, which angle would make the cannonball hit the ground first?
Intuitively I'd think they'd hit at the same time, however, I remember a formula describing the second coordinate $y$: $y = - 1/2 \cdot g \cdot t^2 + v_{0y} \cdot t$
Where $t$ is the time, and $v_{0y}$ is the starting vertical velocity. Thus a high $v_{0y}$ would lead to a greater total time $t$, so the flatter shot would hit the ground first. This seems to be missing something though.
-
perpendicular components of the object's velocity are independent, so the one which has the smallest initial velocity perpendicular to the ground will hit the floor first. – user27182 Sep 2 '12 at 20:31
@Hayeder: And another way to put it is height. The one that rises the least will hit the ground first. – Mike Dunlavey Sep 3 '12 at 1:49
It will hit the ground quickest if you point it to -90°. I.e. straight down. – Mark Adler Sep 3 '12 at 4:11
## 4 Answers
For a fixed muzzle velocity the time the cannonball stays in the air depends on the vertical component of the velocity, so trajectory A would stay in the air longest. The trajectory with the longest duration is firing directly upwards.
If the angle to the ground is $\theta$, then the vertical velocity is $v_0\sin\theta$, and the time in the air (neglecting air resistance) is $2v_0\sin\theta/g$, where $g$ is the acceleration due to gravity.
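As a small illustrative sketch in R of this formula (the example numbers are mine, not from the answer):

```
flight_time <- function(v0, theta_deg, g = 9.81) {
  2 * v0 * sin(theta_deg * pi / 180) / g   # time until the ball returns to y = 0
}
flight_time(20, c(15, 45, 75))   # steeper launch angles stay in the air longer
```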
-
$v_{0y}$ would be greater for cannon A in your picture, so A would go higher and stay in the air longer.
-
Set the equation $y = -0.5gt^2 + V_{y_0} t$ equal to zero. You get two solutions: one at time $t = 0$, and the second when the object hits the ground again at $t = 2V_{y_0}/g$. Of course, $V_{y_0} = V_0 \sin(\theta)$. Thus you have: $t(\text{travel}) = 2 V_0 \sin(\theta)/g$. The closer the angle is to 90 degrees, the longer the flight time.
-
You are ignoring air resistance here, right? I am not going to give you the solution, which you can find in any physics text book, but rather tell you how to derive it yourself.
Try expressing initial speed $v_0$ as a function of your angle $\alpha$ (hint: sines and cosines), then solve for $t$ in your formula. From there the solution should be fairly obvious, but you can always verify by finding the angle $\alpha$ which corresponds to smallest $t$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9339247345924377, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/85447/difference-between-spaces-of-integrable-functions-w-r-t-lebesgue-measure-and-bore
|
## Difference between spaces of integrable functions w.r.t Lebesgue measure and Borel measure [closed]
Is there a difference between $L^p(\mathbb R,\mathfrak B,\beta)$ and $L^p(\mathbb R,\mathfrak L,\lambda)$ ? Here I denoted by $\lambda$ the Lebesgue measure, defined on the Lebesgue $\sigma$-algebra $\mathfrak L$, and by $\beta$ its restriction to the Borel $\sigma$-algebra $\mathfrak B$. Does the answer depend on whether I consider equivalence classes of functions or not?
-
Sounds like a homework assignment... See FAQ – Anthony Quas Jan 11 2012 at 20:28
## 1 Answer
I don't exactly know what the Lebesgue sigma-algebra is, but I presume you mean the extension of - for example - the Borel algebra that gives a complete measure. I know this as Baire algebra, and it has a higher cardinality than the Borel algebra.
The $L^p$ spaces, however, both consist of equivalence classes of functions, and in fact the spaces are isomorphic via a natural embedding from the Borel one to the other. The difference is that the equivalence classes are bigger. You get more measurable functions in the $\mathcal{L}^p(\lambda)$ space since the sigma-algebra is bigger, but the extra functions are factored out when descending to $L^p(\lambda)$.
-
I thought the Baire algebra was the sigma-algebra generated by the zero sets. – Ricky Demer Jan 11 2012 at 20:39
It's not helpful to post solutions to homework assignments on the forum: it just encourages other inappropriate postings – Anthony Quas Jan 11 2012 at 22:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042654633522034, "perplexity_flag": "middle"}
|
http://www.onemathematicalcat.org/algebra_book/online_problems/divisibility.htm
|
DIVISIBILITY
• The concepts for this exercise are summarized below.
For a complete discussion, read the text.
The numbers $0$, $2$, $4$, $6$, … are called even numbers.
Even numbers always end in one of these digits: $0$, $2$, $4$, $6$, or $\,8\,$.
Even numbers can always be divided into two equal (even) piles.
Note: The numbers $1$, $3$, $5$, $7$, … are called odd numbers.
The numbers $0$, $2$, $4$, $6$, … are also said to be divisible by 2.
Divisible by 2 means that $\,2\,$ goes into the number evenly.
The phrases even and divisible by 2 are interchangeable.
Divisible by 3 means that $\,3\,$ goes into the number evenly.
Divisible by 4 means that $\,4\,$ goes into the number evenly; and so on.
A divisibility test is a shortcut to decide if a number is divisible by a given number.
DIVISIBILITY BY 2
If a number ends in $0$, $2$, $4$, $6$, or $8$, then the number is divisible by $\,2\,$.
Also, if a number is divisible by $\,2\,$, then it ends in $0$, $2$, $4$, $6$, or $\,8\,$.
For example, $\,87{,}356\,$ is divisible by $\,2\,$, since it ends in the digit $\,6\,$.
However, $\,87{,}357\,$ is not divisible by $\,2\,$, since it ends in the digit $\,7\,$.
There's a neat trick for deciding if a number is divisible by $\,3\,$.
The technique is illustrated with the following example:
EXAMPLE:
Question: Is $\,57{,}394\,$ divisible by $\,3\,$?
Solution: Add up the digits in the number: $5 + 7 + 3 + 9 + 4 = 28\;$.
The sum is $\,28\,$; add up the digits again: $2 + 8 = 10\,$.
Since $\,10\,$ is not divisible by $\,3\,$, the original number $\,57{,}394\,$ is also not divisible by $\,3\,$.
DIVISIBILITY BY 3
To decide if a number is divisible by $\,3\,$, add up the digits in the number.
Continue this process of adding the digits until you get a manageable number.
(If you want, keep going until you get a single-digit number.)
If this final number is divisible by $\,3\,$, then the number you started with is also divisible by $\,3\,$.
If this final number is not divisible by $\,3\,$, then the number you started with is not divisible by $\,3\,$.
Read the text for a proof of the "divisibility by 3" test.
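The digit-sum test is easy to mechanise. Here is a short illustrative R sketch (the helper names are mine, not from the text):

```
digit_sum <- function(n) sum(as.integer(strsplit(as.character(n), "")[[1]]))
divisible_by_3 <- function(n) {
  while (n > 9) n <- digit_sum(n)   # keep summing digits down to a single digit
  n %in% c(0, 3, 6, 9)
}
divisible_by_3(57394)   # FALSE, matching the worked example above
divisible_by_3(57393)   # TRUE: 5+7+3+9+3 = 27, and 2+7 = 9
```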
DIVISIBILITY BY 5
If a number ends in $\,0\,$ or $\,5\,$, then the number is divisible by $\,5\,$.
Also, if a number is divisible by $\,5\,$, then it ends in $\,0\,$ or $\,5\,$.
Read the text for a clever "finger trick" for divisibility by 9.
DIVISIBILITY BY 10
If a number ends in $\,0\,$, then the number is divisible by $\,10\,$.
Also, if a number is divisible by $\,10\,$, then it ends in $\,0\,$.
More compactly, we can say that a number is divisible by 10 if and only if it ends with a 0.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9079061150550842, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/probability+particle-physics
|
# Tagged Questions
### Computing an average escape distance for a particle
Somewhere in a two dimensional convex bulk of particles (pic related) on a random position a reaction takes place and a particle is sent out in a random direction with a constant velocity $v$. What ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8530374765396118, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/388/advantages-of-high-energy-heavy-ion-collisions-over-proton-proton-collisions
|
# Advantages of high-energy heavy-ion collisions over proton-proton collisions?
Some high-energy experiments (RHIC, LHC) use ion-ion collisions instead of proton-proton collisions. Although the total center-of-mass energy is indeed higher than p-p collisions, it might happen that the total energy per nucleon is actually lower. What are the advantages of using ion-ion collisions (e.g. gold-gold or lead-lead) instead of proton-proton collisions, considering the same accelerator?
-
## 1 Answer
As a clarification, the energy per nucleon is always lower: for example, currently in the LHC the proton top energy is 3.5 TeV. The Pb energy is 3.5 TeV times Z, so the energy per nucleon is 3.5*Z/A, and A is greater than Z for every nucleus (except the proton, where A = Z = 1).
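For concreteness, a one-line R calculation for lead (taking the standard values Z = 82, A = 208 for Pb-208; these numbers are my addition, not from the post):

```
Z <- 82; A <- 208
3.5 * Z / A   # about 1.38 TeV per nucleon for a Pb beam when protons run at 3.5 TeV
```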
But the goal of ion-ion collision is not to increase the total energy or the energy per nucleon: it is to obtain a different type of collision.
It should be noted that in a proton-proton collision, the energy involved in the real collision process is variable: each quark and gluon carries a fraction of the energy of the proton, and a hard collision involves a collision between a quark/gluon of one proton and a quark/gluon of the other.
In the case of ion-ion collision you have the same process: the energy is shared by the protons/neutrons and they can have different energies.
The goal of such collisions is also to obtain a volume (bigger than in a p-p collision) with a very high energy density. In such a volume, a "state of matter" called quark-gluon plasma is believed to be possibly created. The study of this QGP is one of the main goals of the ALICE experiment at the LHC.
-
+1 for mentioning the extended volume of colliding matter – David Zaslavsky♦ Nov 8 '10 at 23:53
"the energy per nucleon is always lower" Well, we can imagine a collision dominated by two $x > 1$ partons. Such events would, of course, be vanishingly rare. – dmckee♦ May 19 '11 at 16:43
@dmckee: I think you interpreted that sentence to narrowly: He is trying to make a point about energy vs. energy per nucleon. Anyway, how would you produce $x>1$ for pp if you cannot get extra energy from the medium? – honk May 19 '11 at 16:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.914520263671875, "perplexity_flag": "middle"}
|
http://quant.stackexchange.com/questions/3646/how-does-cornish-fisher-var-aka-modified-var-scale-with-time
|
How does Cornish-Fisher VaR (aka modified VaR) scale with time?
I am thinking about the time-scaling of Cornish-Fisher VaR (see e.g. page 130 here for the formula).
It involves the skewness and the excess-kurtosis of returns. The formula is clear and well studied (and criticized) in various papers for a single time period (e.g. daily returns and a VaR with one-day holding period).
Does anybody know a reference on how to scale it with time? I would be looking for something like the square-root-of-time rule (e.g. daily returns and a VaR with d days holding period). But scaling skewness, kurtosis and volatility separately and plugging them back does not feel good. Any ideas?
-
Have you tried to do the calculation yourself? In the case of IID returns it shouldn't be too complicated. Write log returns of aggredated returns as the sum of individual log returns. Then calculate the skewness and Kurtosis of the sum. – Zarbouzou Jun 21 '12 at 12:12
I did the calculations. I think skewness scales with 1/sqrt(n) and kurtosis with 1/n. So both vanish as the law of large numbers already tells us. But if I just plug in those scalings and apply sqrt(n) for the volatility then a strange time-independent term pops up. This gives me the feeling that just scaling the inputs (skewness,kurtosis and volatility) does not yield a proper solution. – Richard Jun 21 '12 at 13:24
– Bob Jansen Jun 21 '12 at 18:28
Thanks for the Link ... I guess you don't have a link that I can read or at least preview for free? Something like a link to a preprint. – Richard Jun 22 '12 at 9:08
3 Answers
If $z_\alpha$ is the so-called standard normal $z$-score of the significance level $\alpha$ such that $$\frac 1 {\sqrt{2\pi}}\int_{-\infty}^{z_\alpha} e^{-\xi^2/2}d\xi=\alpha$$ and we assume normality (ignoring skewness and kurtosis), then we can estimate the $\alpha$ quantile of a distribution with cdf $\Phi$ as $$\Phi^{-1}(\alpha)=\mu + \sigma z_\alpha.$$ The Cornish-Fisher expansion is an attempt to estimate this more accurately directly in terms of the first few cumulants as $$\Phi^{-1}(\alpha)=y_\alpha,$$ where (before we have applied any scaling) $$y_\alpha= \kappa_1 + \sqrt{\kappa_2}\,z_\alpha + \frac {(z_\alpha^2-1) \kappa_3}6 + \frac {(z_\alpha^3-3z_\alpha) \kappa_4}{24} - \frac {(2z_\alpha^3 - 5z_\alpha) \kappa_3^2}{36}.$$ (Note that $\mu=\kappa_1$ and $\sigma^2=\kappa_2$.) Expressed directly in terms of cumulants, let us try to scale this directly with time as the convolution of infinitely divisible, independent identically distributed random variables. Cumulants of all orders scale linearly with time in this case, since they are simply additive under convolution. $$y_\alpha[t] = \kappa_1t + \sqrt{\kappa_2 t}\,z_\alpha + \frac {(z_\alpha^2-1) \kappa_3 t}6 + \frac {(z_\alpha^3-3z_\alpha) \kappa_4 t}{24} - \frac {(2z_\alpha^3 - 5z_\alpha) \kappa_3^2 t^2}{36}.$$ but we want $y_\alpha[t] = \mu t + (\sigma\sqrt t )x_\alpha[t]$ where $x_\alpha[t]$ is the quantile function of a random variable with zero mean and unit variance. First the term $\kappa_1t$ drops off as our $\mu t$ since all the other cumulants are shift-invariant. Second we need to divide each remaining cumulant $\kappa_kt$ by $(\sigma\sqrt t)^k=(\kappa_2t)^{k/2}$ (because the $k$th cumulant is homogeneous of order $k$.) So: $$y_\alpha[t] = \mu t + \sigma\sqrt t \left[ z_\alpha + \frac {(z_\alpha^2-1) \kappa_3 t}{6(\kappa_2t)^{3/2}} + \frac {(z_\alpha^3-3z_\alpha) \kappa_4 t}{24(\kappa_2t)^2} - \frac {(2z_\alpha^3 - 5z_\alpha) \kappa_3^2 t^2}{36(\kappa_2t)^3}\right].$$ $$y_\alpha[t] = \mu t + \sigma\sqrt t \left[ z_\alpha + \frac {(z_\alpha^2-1) \kappa_3}{6\sigma^3t^{1/2}} + \frac {(z_\alpha^3-3z_\alpha) \kappa_4}{24\sigma^4 t} - \frac {(2z_\alpha^3 - 5z_\alpha) \kappa_3^2}{36\sigma^6t}\right].$$ but generally we write $\gamma_1=\kappa_3/\sigma^3$ and $\gamma_2=\kappa_4/\sigma^4$ for the skewness and the kurtosis respectively, so that $$y_\alpha[t] = \mu t + \sigma\sqrt t \left[ z_\alpha + \frac {(z_\alpha^2-1) \gamma_1}{6\sqrt t} + \frac {(z_\alpha^3-3z_\alpha) \gamma_2}{24t} - \frac {(2z_\alpha^3 - 5z_\alpha) \gamma_1^2}{36t}\right].$$
The Value-at-Risk is then $$\mathrm{VaR} = K_0 \left( 1 - {\exp (y_{\alpha}[t]-rt})\right),$$ where $K_0$ is the initial capital, $\alpha$ is some level of significance, say 1 to 5% or so, and $r$ is some instantaneous risk-free rate, appropriate discount rate, or required rate of return, however one chooses to define it. (This expression ought to become negative for a long enough time, because in the long run one will almost surely make money if $\mu>0$.)
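Wrapped up as a small R helper (an illustrative sketch of the final formula above; the parameter names are mine, with `skew` and `kurt` standing for the per-period $\gamma_1$ and $\gamma_2$):

```
cf_quantile <- function(alpha, mu, sigma, skew, kurt, t = 1) {
  z <- qnorm(alpha)
  mu * t + sigma * sqrt(t) * (z +
    (z^2 - 1) * skew / (6 * sqrt(t)) +
    (z^3 - 3 * z) * kurt / (24 * t) -
    (2 * z^3 - 5 * z) * skew^2 / (36 * t))
}
# e.g. daily parameters scaled to a 10-day horizon at the 1% level
cf_quantile(0.01, mu = 0.0005, sigma = 0.01, skew = -0.5, kurt = 3, t = 10)
```

Setting `t = 1` recovers the usual one-period Cornish-Fisher quantile.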
-
Thank you for the rigorous answer. Thus the final result is that plugging in the separate scaling rules for volatility, skewness and kurtosis is correct. And I have to live with the term $$\sigma \frac{(z^2-1)\gamma_1 }{6}$$ which is not affected as time is aggregated. Thank you! – Richard Jun 28 '12 at 12:23
Skewness decays with time, but the rate of that skewness decay will vary based on the instruments and how they are traded, so a simple estimator such as the square root of time rule is not appropriate.
I typically recommend that to scale VaR or ES it makes more sense to lower your confidence level (raise the alpha parameter) to one that makes sense for your holding period.
So, for example, assume that we are working with daily returns, as in your question. Now assume that I have a one-month holding period, and I want a 'monthly VaR'.
I would argue that a rational confidence level for this is 95%, or 1 in 20, corresponding approximately to the loss that will be exceeded about 1 day a month.
For monthly returns, as in a hedge fund portfolio, a confidence of 92% may be most appropriate, to specify the VaR that will be exceeded, on average, once a year.
I think that this is a much more rational approach than asking 'what loss level will be exceeded once in 10000 years?', as many papers and standards bodies recommend. These numbers aren't very useful, as many other authors have pointed out.
Also, extension to Cornish Fisher Expected Shortfall (also called CVaR or Expected Tail Loss) with the same approach as above, helps scale these numbers in a rational way, asking what the mean loss is when the loss exceeds the VaR.
More information on this is available in our published work including this paper from the Journal of Risk which also covers additive/coherent portfolio decomposition of Cornish Fisher VaR and Expected Shortfall.
-
– Richard Jun 21 '12 at 13:35
Something like the alpha-weighting of VaR described for VaR derived from extreme value theory described in Tsay could be derived for the Cornish Fisher Distribution. As another alternative, square root of time sigma scaling will get most of the structure, unless the series is highly skewed or kurtotic. I don't think there is a precise or complete answer in the literature for this, and I haven't done the work to develop a precise answer. – Brian G. Peterson Jun 22 '12 at 14:45
The time scaling of higher moments for ordinary (discrete) returns as per the Wingender paper is illustrated in Excel and VBA in the following spreadsheet demonstration files:
Terminal-Wealth-Time-Horizon-Calcs-Normal-and-Modified-VBA and;
Liqudity-VaR-With-Correct-Time-Scaling-of-Higher-Moments
Available here
For more on the weaknesses of the Cornish Fisher expansion see the presentation on Why-Distributions-Matter-16-Jan-2012
-
Thanks for the link. Searching the web for papers on the topic I have already visited your webpage. – Richard Sep 7 '12 at 8:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276265501976013, "perplexity_flag": "middle"}
|
http://www.r-bloggers.com/working-with-bipartiteaffiliation-network-data-in-r/
|
# Working with Bipartite/Affiliation Network Data in R
September 30, 2012
By Solomon
This is something I put together a long time ago and recently updated for use with larger data sets.
## Preliminaries
Much of the material here is covered in the more comprehensive “Social Network Analysis Labs in R and SoNIA,” on which I collaborated with Dan McFarland, Sean Westwood and Mike Nowak.
For a great online introduction to social network analysis see the online book Introduction to Social Network Methods by Robert Hanneman and Mark Riddle.
## Bipartite/Affiliation Network Data
A network can consist of different ‘classes’ of nodes. For example, a two-mode network might consist of people (the first mode) and groups in which they are members (the second mode). Another very common example of two-mode network data consists of users on a particular website who communicate in the same forum thread.
Here’s a short example of this kind of data. Run this in R for yourself – just copy an paste into the command line or into a script and it will generate a dataframe that we can use for illustrative purposes:
```df <- data.frame( person =
c('Sam','Sam','Sam','Greg','Tom','Tom','Tom','Mary','Mary'), group =
c('a','b','c','a','b','c','d','b','d'), stringsAsFactors = F)
```
```df
person group
1 Sam a
2 Sam b
3 Sam c
4 Greg a
5 Tom b
6 Tom c
7 Tom d
8 Mary b
9 Mary d
```
## Fast, efficient two-mode to one-mode conversion in R
Suppose we wish to analyze or visualize how the people are connected directly – that is, what if we want the network of people where a tie between two people is present if they are both members of the same group? We need to perform a two-mode to one-mode conversion.
To convert a two-mode incidence matrix to a one-mode adjacency matrix, one can simply multiply an incidence matrix by its transpose, which sums the common 1's between rows. Recall that matrix multiplication entails multiplying the k-th entry of a row in the first matrix by the k-th entry of a column in the second matrix, then summing, such that the ij-th row-column entry in the resulting matrix represents the dot-product of the i-th row of the first matrix and the j-th column of the second. In mathematical notation:
$\displaystyle AB= \begin{bmatrix} a & b\\ c & d \end{bmatrix} \begin{bmatrix} e & f\\ g & h \end{bmatrix} = \begin{bmatrix} ae+bg & af+bh\\ ce+dg & cf+dh \end{bmatrix}$
Notice further that multiplying a matrix by its transpose yields the following:
$\displaystyle AA'= \begin{bmatrix} a & b\\ c & d \end{bmatrix} \begin{bmatrix} a & c\\ b & d \end{bmatrix} = \begin{bmatrix} aa+bb & ac+bd\\ ca+db & cc+dd \end{bmatrix}$
Because our incidence matrix consists of 0's and 1's, the off-diagonal entries represent the total number of common columns, which is exactly what we wanted. We'll use the `%*%` operator to tell R to do exactly this. Let's take a look at a small example using toy data of people and groups to which they belong. We'll coerce the data to an incidence matrix, then multiply the incidence matrix by its transpose to get the number of common groups between people.
This is easy to do using the matrix algebra functions included in R. But first, you need to restructure your (edgelist) network data as an incidence matrix. An incidence will record a 1 for row-column combinations where a tie is present and 0 otherwise. One easy way to do this in R is to use the table function and then coerce the table object to a matrix object:
```m <- table( df )
M <- as.matrix( m )
```
If you are using the network or sna packages, a network object can be coerced via `as.matrix(your-network)`; with the igraph package use `get.adjacency(your-network)`.
This is great, but what about if we are working with a really large data set? Network data is almost always sparse—there are far more pairwise combinations of potential connections than actual observed connections. Hence, we’d actually prefer to keep the underlying data structured in edgelist format, but we’d also like access to R’s matrix algebra functionality.
We can get the best of both worlds using the Matrix library to construct a sparse triplet representation of a matrix. But we’d also like to avoid building the entire incidence matrix and just feed Matrix our edgelist directly, a point that came up in a recent conversation I had with Sean Taylor. We feed `Matrix` our ‘person’ column to index ‘i’ (rows in the new incidence matrix), our ‘group’ column to index j (columns in the new incidence matrix), and we repeat ’1′ for the length of the edgelist to denote an incidence.
```library('Matrix')
A <- spMatrix(nrow=length(unique(df$person)),
ncol=length(unique(df$group)),
i = as.numeric(factor(df$person)),
j = as.numeric(factor(df$group)),
x = rep(1, length(as.numeric(df$person))) )
row.names(A) <- levels(factor(df$person))
colnames(A) <- levels(factor(df$group))
A
```
We will either convert to the ‘mode’ represented by the columns or by the rows.
To get the one-mode representation of ties between rows (people in our example), multiply the matrix by its transpose. Note that you must use the matrix-multiplication operator `%*%` rather than a simple asterisk. The R code is:
```Arow <- A %*% t(A)
```
Arow will now represent the one-mode matrix formed by the row entities—people will have ties to each other if they are in the same group, in our example. Here’s what it looks like:
```Arow
4 x 4 sparse Matrix of class "dgCMatrix"
Greg Mary Sam Tom
Greg 1 . 1 .
Mary . 2 1 2
Sam 1 1 3 2
Tom . 2 2 3
```
To get the one-mode matrix formed by the column entities (i.e. ties between groups based on the number of people they have in common), enter the following command:
```Acol <- t(A) %*% A
```
And the resulting co-membership matrix is as follows:
```Acol
group
group a b c d
a 2 1 1 0
b 1 3 2 2
c 1 2 2 1
d 0 2 1 2
```
Although we’ve used a very small network for our example, this code is highly extensible to the analysis of larger networks with R.
## Analysis of Two Mode Data and Mobility
Let’s work with some actual affiliation data, collected by Dan McFarland on student extracurricular affiliations. It’s a longitudinal data set, with 3 waves – 1996, 1997, 1998. It consists of students (anonymized) and the student organizations in which they are members (e.g. National Honor Society, wrestling team, cheerleading squad, etc.).
What we’ll do is to read in the data, make some mode conversions, visualize the networks in various ways, compute some centrality measures, and then compute transition probabilities (the probability that a member of one group will stay a member of the same group or become a member of a new group
```# Load the "igraph" library
library("igraph")
# (1) Read in the data files, NA data objects coded as "na"
magact96 = read.delim("http://dl.dropbox.com/u/25710348/snaimages/mag_act96.txt",
na.strings = "na")
magact97 = read.delim("http://dl.dropbox.com/u/25710348/snaimages/mag_act97.txt",
na.strings = "na")
magact98 = read.delim("http://dl.dropbox.com/u/25710348/snaimages/mag_act98.txt",
na.strings = "na")
```
Missing data is coded as “na” in this data, which is why we gave R the command na.strings = “na”.
These files consist of four columns of individual-level attributes (ID, gender, grade, race), then a bunch of group membership dummy variables (coded “1″ for membership, “0″ for no membership). We need to set aside the first four columns (which do not change from year to year).
```magattrib = magact96[,1:4]
g96 <- as.matrix(magact96[,-(1:4)]); row.names(g96) = magact96$ID.
g97 <- as.matrix(magact97[,-(1:4)]); row.names(g97) = magact97$ID.
g98 <- as.matrix(magact98[,-(1:4)]); row.names(g98) = magact98$ID.
```
By using the `[,-(1:4)]` index, we drop those columns so that we have just the incidence matrix for each year, and then tell R to set the row names of the matrix to the student’s ID. Note that we need to keep the “.” after ID in this dataset (because it’s in the name of the variable).
Now we load these two-mode matrices into igraph:
```i96 <- graph.incidence(g96, mode=c("all") )
i97 <- graph.incidence(g97, mode=c("all") )
i98 <- graph.incidence(g98, mode=c("all") )
```
### Plotting two-mode networks
Now, let’s plot these graphs. The igraph package has excellent plotting functionality that allows you to assign visual attributes to igraph objects before you plot. The alternative is to pass 20 or so arguments to the `plot.igraph()` function, which gets really messy.
Let’s assign some attributes to our graph. First we set vertex attributes, making sure to make them slightly transparent by altering the gamma, using the `rgb(r,g,b,gamma)` function to set the color. This makes it much easier to look at a really crowded graph, which might look like a giant hairball otherwise.
You can read up on the RGB color model here.
Each node (or “vertex”) object is accessible by calling `V(g)`, and you can call (or create) a node attribute by using the `$` operator so that you call `V(g)$attribute`. Here’s how to set the color attribute for a set of nodes in a graph object:
```V(i96)$color[1:1295] <- rgb(1,0,0,.5)
V(i96)$color[1296:1386] <- rgb(0,1,0,.5)
```
Notice that we index the `V(g)$color` object by a seemingly arbitrary value, 1295. This marks the end of the student nodes, and 1296 is the first group node. You can view which nodes are which by typing V(i96). R prints out a list of all the nodes in the graph, and those with a number are obviously different from those that consist of a group name.
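As an aside, when igraph builds a bipartite graph with `graph.incidence()` it should also record each vertex’s mode in the logical vertex attribute `type` (FALSE for row/student nodes, TRUE for column/group nodes), which lets you avoid hand-counting index boundaries. A sketch of the same coloring done that way:
```V(i96)$type) # how many student vs. group nodes
V(i96)$color <- ifelse(V(i96)$type, rgb(0,1,0,.5), rgb(1,0,0,.5))
```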
Now we'll set some other graph attributes:
```V(i96)$label <- V(i96)$name
V(i96)$label.color <- rgb(0,0,.2,.5)
V(i96)$label.cex <- .4
V(i96)$size <- 6
V(i96)$frame.color <- NA
```
You can also set edge attributes. Here we’ll make the edges nearly transparent and slightly yellow because there will be so many edges in this graph:
```E(i96)$color <- rgb(.5,.5,0,.2)
```
Now, we’ll open a pdf “device” on which to plot. This is just a connection to a pdf file. Note that the code below will take a minute or two to execute (or longer if you have a pre-Intel-dual-core processor).
```pdf("i96.pdf")
plot(i96, layout=layout.fruchterman.reingold)
dev.off()
```
Note that we’ve used the Fruchterman-Reingold force-directed layout algorithm here. Generally speaking, when you have a ton of edges the Kamada-Kawai layout algorithm works well, but it can get really slow for networks with a lot of nodes. Also, for larger networks, layout.fruchterman.reingold.grid is faster, but it can fail to produce a plot with any meaningful pattern if you have too many isolates, as is the case here. Experiment for yourself.
Here's what we get:
It’s oddly reminiscent of a crescent and star, but impossible to read. Now, if you open the pdf output, you’ll notice that you can zoom in on any part of the graph ad infinitum without losing any resolution. How is that possible in such a small file? It’s possible because the pdf device output consists of data based on vectors: lines, polygons, circles, ellipses, etc., each specified by a mathematical formula that your pdf program renders when you view it. Regular bitmap or jpeg picture output, on the other hand, consists of a pixel-coordinate mapping of the image in question, which is why you lose resolution when you zoom in on a digital photograph or a plot produced with most other programs.
Let’s remove all of the isolates (the crescent), change a few aesthetic features, and replot. First, we’ll remove isolates by deleting all nodes with a degree of 0, meaning that they have zero edges. Then, we’ll suppress labels for students and make their nodes smaller and more transparent. Then we’ll make the edges more narrow and more transparent. Then, we’ll replot using various layout algorithms:
```i96 <- delete.vertices(i96, V(i96)[ degree(i96)==0 ])
V(i96)$label[1:857] <- NA
V(i96)$color[1:857] <- rgb(1,0,0,.1)
V(i96)$size[1:857] <- 2
E(i96)$width <- .3
E(i96)$color <- rgb(.5,.5,0,.1)
pdf("i96.2.pdf")
plot(i96, layout=layout.kamada.kawai)
dev.off()
pdf("i96.3.pdf")
plot(i96, layout=layout.fruchterman.reingold.grid)
dev.off()
pdf("i96.4.pdf")
plot(i96, layout=layout.fruchterman.reingold)
dev.off()
```
I personally prefer the Fruchterman-Reingold layout in this case. The nice thing about this layout is that it really emphasizes centrality–the nodes that are most central are nearly always placed in the middle of the plot. Here's what it looks like:
Very pretty, but you can’t see which groups are which at this resolution. Zoom in on the pdf output, and you can see things pretty clearly.
### Two mode to one mode data transformation
We’ve emphasized groups in this visualization so much, that we might want to just create a network consisting of group co-membership. First we need to create a new network object. We’ll do that the same way for this network as for our example at the top of this page:
```g96e <- t(g96) %*% g96
g97e <- t(g97) %*% g97
g98e <- t(g98) %*% g98
i96e <- graph.adjacency(g96e, mode = "undirected")
```
Now we need to transform the graph so that multiple edges become an attribute ( `E(g)$weight` ) of each unique edge:
```E(i96e)$weight <- count.multiple(i96e)
i96e <- simplify(i96e)
```
Now we’ll set the other plotting parameters as we did above:
```# Set vertex attributes
V(i96e)$label <- V(i96e)$name
V(i96e)$label.color <- rgb(0,0,.2,.8)
V(i96e)$label.cex <- .6
V(i96e)$size <- 6
V(i96e)$frame.color <- NA
V(i96e)$color <- rgb(0,0,1,.5)
# Set edge gamma according to edge weight
egam <- (log(E(i96e)$weight)+.3)/max(log(E(i96e)$weight)+.3)
E(i96e)$color <- rgb(.5,.5,0,egam)
```
We set edge gamma as a function of how many edges exist between two nodes, or in this case, how many students each group has in common. For illustrative purposes, let’s compare how the Kamada-Kawai and Fruchterman-Reingold algorithms render this graph:
```pdf("i96e.pdf")
plot(i96e, main = "layout.kamada.kawai", layout=layout.kamada.kawai)
plot(i96e, main = "layout.fruchterman.reingold", layout=layout.fruchterman.reingold)
dev.off()
```
I like the Kamada-Kawai layout for this graph, because the center of the graph is too busy otherwise. And here’s what the resulting plot looks like:
You can check out the difference between each layout yourself. Here’s what the pdf output looks like. Page 1 shows the Kamada-Kawai layout and page 2 shows the Fruchterman Reingold layout.
### Group overlap networks and plots
Now we might also be interested in the percent overlap between groups. Note that this will be a directed graph, because the percent overlap will not be symmetric across groups–for example, it may be that 3/4 of Spanish NHS members are in NHS, but only 1/8 of NHS members are in the Spanish NHS. We’ll create this graph for all years in our data (though we could do it for one year only).
First we’ll need to create a percent overlap graph. We start by dividing each row by the diagonal (this is really easy in R):
```ol96 <- g96e/diag(g96e)
ol97 <- g97e/diag(g97e)
ol98 <- g98e/diag(g98e)
```
Next, sum the matrices and set any NA cells (caused by dividing by zero in the step above) to zero:
```magall <- ol96 + ol97 + ol98
magall[is.na(magall)] <- 0
```
Note that magall now consists of a percent overlap matrix, but because we’ve summed over 3 years, the maximum is now 3 instead of 1.
Let’s compute average club size, by taking the mean across each value in each diagonal:
```magdiag <- apply(cbind(diag(g96e), diag(g97e), diag(g98e)), 1, mean )
```
Finally, we’ll generate centrality measures for magall. When we create the igraph object from our matrix, we need to set weighted=T because otherwise igraph dichotomizes edges at 1. This can distort our centrality measures because now edges represent more than binary connections–they represent the percent of membership overlap.
```magallg <- graph.adjacency(magall, weighted=T)
# Degree
V(magallg)$degree <- degree(magallg)
# Betweenness centrality
V(magallg)$btwcnt <- betweenness(magallg)
```
Before we plot this, we should probably filter some of the edges, otherwise our graph will probably be too busy to make sense of visually. Take a look at the distribution of connection strength by plotting the density of the magall matrix:
```plot(density(magall))
```
Nearly all of the edge weights are below 1–or in other words, the percent overlap for most clubs is less than 1/3. Let’s filter at 1, so that an edge will consist of group overlap of more than 1/3 of the group’s members in question.
```magallgt1 <- magall
magallgt1[magallgt1<1] <- 0
magallggt1 <- graph.adjacency(magallgt1, weighted=T)
# Removes loops:
magallggt1 <- simplify(magallggt1, remove.multiple=FALSE, remove.loops=TRUE)
```
Before we do anything else, we’ll create a custom layout based on Fruchterman-Reingold, wherein we adjust the coordinates by hand using the tkplot gui tool to make sure all of the labels are visible. This is very useful if you want to create a really sharp-looking network visualization for publication.
```magallggt1$layout <- layout.fruchterman.reingold(magallggt1)
V(magallggt1)$label <- V(magallggt1)$name
tkplot(magallggt1)
```
Let the plot load, then maximize the window, and select View -> Fit to Screen so that you get maximum resolution for this large graph. Now hand-place the nodes, making sure no labels overlap:
Pay special attention to whether the labels overlap (or might overlap if the font was bigger) along the vertical. Save the layout coordinates to the graph object:
```magallggt1$layout <- tkplot.getcoords(1)
```
We use “1″ here because this was the first tkplot object we called. If you called tkplot a few times, use the last plot object. You can tell which object is visible because at the top of the tkplot interface, you’ll see something like “Graph plot 1″ or in the case of my screenshot above “Graph plot 7″ (it was the seventh time I called tkplot).
```# Set vertex attributes
V(magallggt1)$label <- V(magallggt1)$name
V(magallggt1)$label.color <- rgb(0,0,.2,.6)
V(magallggt1)$size <- 6
V(magallggt1)$frame.color <- NA
V(magallggt1)$color <- rgb(0,0,1,.5)
# Set edge attributes
E(magallggt1)$arrow.size <- .3
# Set edge gamma according to edge weight
egam <- (E(magallggt1)$weight+.1)/max(E(magallggt1)$weight+.1)
E(magallggt1)$color <- rgb(.5,.5,0,egam)
```
One thing that we can do with this graph is to set label size as a function of degree, which adds a “tag-cloud”-like element to the visualization:
```V(magallggt1)$label.cex <- V(magallggt1)$degree/(max(V(magallggt1)$degree)/2)+ .3
#note, unfortunately one must play with the formula above to get the
#ratio just right
```
Let’s plot the results:
```pdf("magallggt1customlayout.pdf")
plot(magallggt1)
dev.off()
```
Note that we used the custom layout, which, because we made it part of the igraph object magallggt1, we did not need to specify in the plot command.
Here’s the pdf output, and here’s what it looks like:
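Finally, here is one way to get at the transition probabilities promised at the start of this section. This is a rough sketch, not part of the original walkthrough: restrict to students present in two consecutive waves, cross-tabulate their 1996 and 1997 memberships, and row-normalize so that entry (i, j) estimates the probability that a member of group i in 1996 is a member of group j in 1997.
```# Students appearing in both the 1996 and 1997 waves
common <- intersect(rownames(g96), rownames(g97))
# Counts of students moving from each 1996 group to each 1997 group
flow <- t(g96[common, ]) %*% g97[common, ]
# Row-normalize to estimate transition probabilities
trans <- flow / rowSums(flow)
trans[is.na(trans)] <- 0 # groups with no members observed in both waves
```
As with the overlap matrices above, dividing a matrix by a vector in R recycles the vector down the rows, which is exactly the row-normalization we want here.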
http://mathoverflow.net/questions/39772/is-symplectic-reduction-interesting-from-a-physical-point-of-view/39903
## Is symplectic reduction interesting from a physical point of view?
Do you think that symplectic reduction (Marsden Weinstein reduction) is interesting from a physical point of view? If so, why? Does it give you some new physical insights?
There are some possible answers I often heard of, but I don't really understand it. Perhaps you could comment these points. Explain and illustrate why they are good reasons or if not, explain why they are nonsense:
1) Symplectic reduction is interesting because "it simplifies" the system under consideration because you exploit symmetries to eliminate some redundant degrees of freedom. I do not really understand what's the point here because in general reduction leads to a more complicated geometry. (Or even to singular spaces if you consider more general reduction settings).
2) It's interesting because it is a toy model for gauge theories.
3) It's interesting because if you want to "quantize" a system, from a conceptual point of view, one should start from the reduced system, from the "real" phase space. I don't see why one should do this for nonrelativistic quantum systems. Even for gauge theories I don't get the point, because the usual procedure is to quantize the unreduced system (via gauge fixing), isn't it?
If there are points which make symplectic reduction interesting from a physical point of view, are there physical reasons why one should study reduction by stages?
Added: after reading José Figueroa-O'Farrill's answer, I had some thoughts I should add:
I am still by far not an expert in gauge theories. But I think that in gauge theories one typically has redundant variables, which have no, or at least no direct, physical interpretation. So I would agree that the "physical" dynamics take place on the quotient, in the case that the gauge theory itself has a physical meaning (in particular experimental evidence). Concerning quantization, however, if I am not mistaken, the only known quantum gauge theory whose corresponding classical gauge theory has experimental support is quantum electrodynamics. For the other physically relevant quantum gauge theories, I think, the classical counterparts play just the role of auxiliary theories in some sense. In this case I would agree that on the quantum side only the reduced space has physical meaning, but for the corresponding classical theory this seems to be a rather pointless question. So the question remains why it is physically interesting to study the reduced phase space on the classical side in the case of gauge theories. Moreover, as José Figueroa-O'Farrill points out, the classical reduced space is in most cases too complicated to quantize directly; one would use some kind of extrinsic quantization such as BRST instead. I don't know exactly what the situation is for gravitation. I think one can formulate general relativity as a classical gauge theory. But does it make sense in this case to study the reduced classical phase space for quantization purposes? I guess not.
-
This is a nice question! – Jan Weidner Sep 23 2010 at 16:47
The post raises some interesting issues, but it's too argumentative for my taste, which is reaffirmed by the "opinion" tag. – Victor Protsak Sep 23 2010 at 17:05
It is interesting to work out the example of the two-body problem (without using the standard "tricks") just by noting the SO(3)-invariance. (Note that we are dealing with singular symplectic reduction in this case!) – Orbicular Sep 23 2010 at 19:35
2
A side note: I just learned that Jerrold Marsden passed away this week (aged 68). – Hans Lundmark Sep 24 2010 at 17:01
@student: the standard model of particle physics is a quantum gauge theory with lots of experimental support and it's not just quantum electrodynamics. (Sorry I just noticed this addition to the question now!) @Hans: Sad news about Jerry Marsden. – José Figueroa-O'Farrill Oct 30 2010 at 22:59
## 5 Answers
Symplectic reduction arises naturally in constrained hamiltonian systems, e.g., gauge theories. So it is not just a question of it being "interesting" as much as a fact of life.
The way to deal with coisotropic constraints -- those whose zero locus is a coisotropic submanifold -- is via symplectic reduction. The real (read, physically meaningful) dynamics are taking place in the symplectic quotient, which is the standard quotient of the zero locus of the constraints by the (integrable) distribution defined by the hamiltonian vector fields associated to the constraints.
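In symbols (a standard formulation of what this paragraph describes, not spelled out in the answer): when the constraints assemble into a moment map $\mu: M \to \mathfrak{g}^*$ for the action of a group $G$, the Marsden-Weinstein quotient at the regular value $0$ is
$$M_{red} = \mu^{-1}(0)/G,$$
and $M_{red}$ inherits a symplectic form from the one on $M$.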
Now, as you point out, the symplectic quotient is usually much more complicated geometrically than the original symplectic manifold and this makes working there cumbersome. For instance, quantising the symplectic quotient is usually difficult. Luckily, one can go the other way: instead of performing the symplectic quotient and then quantising, one can first quantise the constrained system and then do a quantum version of the symplectic quotient. One such procedure, which works in many gauge theories, is BRST quantisation. This is a homological approach to the quantisation of constrained hamiltonian systems. It has the virtue that it preserves the symmetries of the original system which "gauge fixing" typically destroys.
-
Do you think it would be bring new insights to have quantization methods which work generally well for complicated geometries so that you are able to quantize the reduced space directly in physical relevant cases? – student Sep 23 2010 at 19:49
Absolutely: it would be fantastic to be able to quantise complicated systems! Alas, I think it is fair to say we are far from that. One often hears that we only know how to quantise the harmonic oscillator and while this is an exaggeration, it is not too far from the truth. – José Figueroa-O'Farrill Sep 23 2010 at 21:47
Inviscid fluid mechanics is one example of a physical system where symplectic reduction actually tells you a great deal, which would be very hard to obtain by other means.
The unreduced configuration space for a (inviscid, incompressible) fluid in a 3-dimensional container $M$ is the group $SDiff(M)$ of volume-preserving diffeomorphisms. This group acts on itself from the right and this action leaves the fluid kinetic energy invariant. Now, there are various kinds of reduction, and all of them give some insight into fluid dynamics:
• Working directly with the unreduced configuration space allows you to do fluid dynamics in Lagrangian variables, that is, you track every particle through its motion and record where it goes as time progresses;
• By factoring out the diffeomorphism group action, you pass from $T^\ast Q$ to the dual of the Lie algebra of $SDiff(M)$, and this will let you do fluid dynamics in the Eulerian picture. Here you are fixed in space, and instead of tracking individual fluid particles you record what happens at a fixed position in space. This is an instance of Poisson reduction, and it will show you that the canonical Poisson structure on the dual of the Lie algebra of $SDiff(M)$ is the right one if you want to show that Euler's equations are Hamiltonian. Not only does this process explicitly give you the Poisson structure, it also tells you that (e.g.) it satisfies Jacobi's identity (since it is obtained through reduction from the canonical Poisson structure on $T^\ast Q$), a fact which would otherwise be hard to obtain directly.
• Now fix an element $\mu$ in the dual of the Lie algebra of $SDiff(M)$ and do symplectic reduction at $\mu$. V.I. Arnold has shown that the elements of the dual of the Lie algebra of $SDiff(M)$ can be interpreted as vorticity distributions of the fluid, so performing symplectic reduction is tantamount to fixing the vorticity distribution of the fluid, and only considering fluids with that amount of vorticity. One important example shows up when you take $\mu$ to have support along a closed curve (knot or link). In that case, the symplectic reduced space is nothing but the space of all knots/links that are diffeomorphic to the original curve, equipped with the natural symplectic form! This is quite a simplification, and for instance tells you that the dynamics of vortex rings (a fundamental part of fluid dynamics) is Hamiltonian and has an incredibly nice geometric interpretation. Moreover, by varying $\mu$ you get various other fluid-dynamical systems, such as the Kelvin, Lamb, Kirchoff-Lin, etc. equations, all in one fell swoop!
It's also important to emphasize that symplectic reduction doesn't just give you "old" results in a new formulation, but also clears up much of the confusion that you would get when doing straightforward calculations. For instance, people have been known to come up with various sequences of "conserved quantities" for the Euler equations, attempting to establish complete integrability this way. However, having a clear picture of the geometry and how the coadjoint orbits sit inside the dual Lie algebra will quickly show you that these quantities are merely Casimirs and could hence never be used for integrability.
-
Of course it is interesting, and the idea of factoring out symmetries goes back to Newton.
As for your point (1), yes it leads to more complicated geometry, and often some form of singular reduction is required (e.g. have a look at the blue book of Bates & Cushman for the case of integrable Hamiltonian systems with finitely many degrees of freedom), but from a dynamical point of view it makes much more sense.
For instance suppose you would like to study numerically a classical mechanical system (integrable or non-integrable all the same): working in reduced coordinates allows to easily distinguish between different Periodic Orbits (i.e. to count them only once), while in non-reduced dynamics the Periodic Orbits come in continuous families.
As for your comment, the topology of the reduced space is certainly a "new physical insight". Have a look at this recent paper by Yanguas, Palacian, Meyer & Dumas where they study periodic orbits of a non-integrable system as you ask, and where they discuss the issue of reduction and provide further references.
(Edited to correct an error; more links added in reply to comment.)
-
Do you have some references for discussions of non-integrable physically relevant classical mechanical systems, where symplectic reduction brings some new insights? – student Sep 23 2010 at 19:59
> Symplectic reduction is interesting because "it simplifies" the system under consideration because you exploit symmetries to eliminate some redundant degrees of freedom. I do not really understand what's the point here because in general reduction leads to a more complicated geometry. (Or even to singular spaces if you consider more general reduction settings.)
Actually, the idea is to find a simpler dynamic "upstairs" in order to understand the complicated dynamic that occurs on the quotient space.
Let me give you an example with the so-called Calogero-Moser system.
Consider the cotangent space $T^*(\mathbb{C}^n_{reg})$ of the space $\mathbb{C}^n_{reg}$ consisting of $n$ pairwise distinct points in the complex plane, and try to study the dynamics associated to the Hamiltonian
$$
H(p_1,\dots,p_n,q_1,\dots,q_n):=\sum_i p_i^2-\sum_{i\neq j}\frac{1}{(q_i-q_j)^2}
$$
Are there enough conserved quantities? What are the integral curves? etc...
Actually you can see that everything is invariant under the symmetric group $S_n$ so I will actually try to study the same dynamic on $T^*(\mathbb{C}^n_{reg})/S_n$.
### An a priori unrelated system
Consider the space of pairs $(X,Y)$ of $n\times n$ matrices. We actually have $M_n\times M_n=T^*(M_n)$ via the bilinear form $tr(XY)$. The Poisson bracket on coordinates is given by
$$
\{x_{ij},x_{kl}\}=0=\{y_{ij},y_{kl}\},\qquad\{y_{ij},x_{kl}\}=\delta_{il}\delta_{jk}
$$
where $x_{ij}$ and $y_{kl}$ are the obvious coordinates on $M_n\times M_n$.
Consider the map to $\mathfrak{sl}_n$ defined by $\mu(X,Y)=[X,Y]$. It is a momentum map (we again identify $\mathfrak{sl}_n$ with its dual as above), and we take the reduction w.r.t. the (co)adjoint orbit $\mathcal{O}$ of $diag(-1,\dots,-1,n-1)$. The reduced space $M_{red}$ is then the space of pairs $(X,Y)$ of matrices such that $[X,Y]-Id$ has rank $1$, modulo simultaneous conjugation.
Now observe that the functions $H_i=tr(Y^i)$, $i=1,\dots,n$, form an integrable system on $M_{red}$. Namely, they are independent, Poisson-commuting and conjugation-invariant functions on $M_n\times M_n$ - therefore they descend to $M_{red}$, which has precisely dimension $2n$.
The dynamic of $H_2$ is very easy. It is linear! Integral curves of $H_2$ are of the form $$(X(t),Y(t))=(X_0+2Y_0t,Y_0).$$
### How the hell is this related to what we had before?
The point is that we have an injective Poisson map from $T^*(\mathbb{C}^n_{reg})/S_n$ to $M_{red}$ that sends $(p_1,\dots,p_n,q_1,\dots,q_n)$ to the pair $(X,Y)$ with $X=diag(q_1,\dots,q_n)$, $Y_{ii}=p_i$ and $Y_{ij}=\frac{1}{q_i-q_j}$. Moreover, the image of this map coincides with the dense open subset consisting of conjugation classes of pairs $(X,Y)$ such that $X$ is diagonalizable with pairwise distinct eigenvalues.
Now if you write $H_2$ in $(p,q)$ coordinates you find exactly the $H$ we started with.
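To spell out that last step (a quick check using the matrices above): with $X=diag(q_1,\dots,q_n)$, $Y_{ii}=p_i$ and $Y_{ij}=\frac{1}{q_i-q_j}$, one computes
$$
H_2=tr(Y^2)=\sum_i p_i^2+\sum_{i\neq j}\frac{1}{(q_i-q_j)(q_j-q_i)}=\sum_i p_i^2-\sum_{i\neq j}\frac{1}{(q_i-q_j)^2},
$$
which is exactly the Hamiltonian $H$ we started with.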
So, in the end we found a way to write a complicated system as the reduction of a very simple one. This helps us to understand the complicated one well (e.g. in this example it helped to prove integrability).
This is not the only motivation for symplectic reduction. Studying systems with constraints might be another one. But I found this example very enlightening about the potential usefulness of symplectic reduction.
reference: I think that the people who did this are Kazhdan, Kostant and Sternberg. You can find a very nice presentation of it (as well as many other interesting things) in the book "Lectures on Calogero-Moser systems" by Pavel Etingof.
-
Here is a fancy example: Supersymmetry. Rigid N=1 supersymmetric theories in 4 dimensions have a natural Kähler structure on the field space. The D-term is precisely a moment map. The moduli space of the theory is the symplectic quotient from this moment map.
-
Interesting. Do you have a reference with some details? – jvkersch Oct 21 2010 at 23:21
2
J. Bagger and E. Witten, “The gauge invariant supersymmetric nonlinear sigma model,” Phys. Lett. B118 (1982) 103–106. J. Bagger and J. Wess, “Gauging the supersymmetric sigma model with a Goldstone field,” Phys. Lett. B199 (1987) 243–246. – Moduli Oct 22 2010 at 2:13
Related to this, in a gauged supergravity theory, we do not have a symplectic quotient anymore; instead we have a GIT quotient. – Moduli Oct 22 2010 at 2:15
Thanks! I definitely want to understand this some more. – jvkersch Oct 22 2010 at 16:37
@Moduli I have been myself looking into these things following this recent work, arxiv.org/abs/1005.3546. It is not clear to me as to why this "conformal" manifold has to come from a symplectic quotient rather than a normal quotient. A similar argument for the case of vacua of supersymmetric gauge theories was given in this paper arxiv.org/abs/hep-th/9506098 where they again argued the need for a symplectic quotient. Though this argument is a little more understandable than the former. – Anirbit Jan 4 2011 at 12:39
http://physics.aps.org/articles/large_image/f1/10.1103/Physics.5.99
(Left) Y. E. Kraus et al. [2]; (Right) APS/Carin Cain
Figure 1: Adiabatic pumping in (left) the waveguide device studied by Kraus et al. and (right) in the classical geometry Laughlin used to describe the integer quantum Hall effect. In the waveguide, light (red) entering at one end of the device is pumped to the other end. In Laughlin’s picture, a magnetic field ($H_0$) pumps charges in a current along the ribbon from one edge of the ribbon to the other, generating a voltage perpendicular to the current.
http://mathoverflow.net/revisions/5376/list
My favourite of these is that there is precisely one differentiable structure on $\mathbb{R}^n$ up to diffeomorphism for all $n$, except when $n=4$, when there are uncountably many.
http://mathhelpforum.com/calculus/120222-taylor-polynomail-approximation.html
1. ## Taylor polynomial approximation
What is/how can I find the taylor polynomial for a semi circle, such as sqrt(25-x^2) ?
when i calculate the derivatives at 0, i just get 0's so i don't know how I can do it.
Thanks
2. Originally Posted by stones44
What is/how can I find the taylor polynomial for a semi circle, such as sqrt(25-x^2) ?
when i calculate the derivatives at 0, i just get 0's so i don't know how I can do it.
Thanks
$f(x)=\sqrt{25-x^2} \implies f(0)=5$
$f'(x)=\frac{-x}{\sqrt{25-x^2}} \implies f'(0)=0$
I agree that the first derivative is zero, but what about the 2nd?
You will find that all of the odd derivatives are equal to zero at zero (since $f$ is an even function), but not the even ones.
E.g.
$f''(x)=\frac{-x^2}{(\sqrt{25-x^2})^3}-\frac{1}{\sqrt{25-x^2}} \implies f''(0)=-\frac{1}{5}$
3. $\sqrt{25-x^2}=0$
$25-x^2=0$
$x^2=25$
$x=5$
$x=-5$
4. wait...i have that the second derivative = -x^2 * (25-x^2)^-1.5
5. $f(x)=(25-x^2)^{1/2}$
$f'(x)=-x(25-x^2)^{-1/2}$
Now by the product rule
$f''(x)=-1(25-x^2)^{-1/2}-x^2(25-x^2)^{-3/2}$
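Putting these together (a sketch of where this is heading, using the binomial series $\sqrt{1-u}=1-\frac{u}{2}-\frac{u^2}{8}-\cdots$ with $u=x^2/25$):
$\sqrt{25-x^2}=5\sqrt{1-\frac{x^2}{25}}=5-\frac{x^2}{10}-\frac{x^4}{1000}-\cdots$
Note the first two terms match $f(0)=5$ and $f''(0)=-\frac{1}{5}$ computed above, since the quadratic Taylor coefficient is $\frac{f''(0)}{2!}=-\frac{1}{10}$.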
http://nrich.maths.org/2079
### Rots and Refs
Follow hints using a little coordinate geometry, plane geometry and trig to see how matrices are used to work on transformations of the plane.
### Reflect Again
Follow hints to investigate the matrix which gives a reflection of the plane in the line $y=x\tan\theta$. Show that the combination of two reflections in intersecting lines is a rotation.
### Transformations for 10
Explore the properties of matrix transformations with these 10 stimulating questions.
# The Matrix
##### Stage: 5 Challenge Level:
One of the ways to work with transformations is to use a matrix. If you have not met matrices before don't be put off, they are very easy. In this question you will use some simple matrices for rotations and reflections and see how they work.
First you need to know how to multiply a matrix like
$\left( \begin{array}{cc} a & b \\ c & d \end{array} \right)$ by the vector $\left( \begin{array}{c} x \\ y \end{array} \right)$
to give the image of the point $(x,y)$. This multiplication is defined as follows: $$\left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right) = \left( \begin{array}{c} ax+by \\ cx+dy \end{array} \right)$$ Find the images of the points $(1,0)$ and $(0,1)$ under the transformation given by the matrix $$\left( \begin{array}{cc} a & b \\ c & d \end{array} \right)$$
Describe the effect on the plane of the four transformations where $b=c=0$ and $a$ and $d$ take all possible combinations of the values $\pm 1$. Now describe the effect on the plane of the four transformations where $a=d=0$ and $b$ and $c$ take all possible combinations of the values $\pm 1$
Explain why transformations have the same effect on the whole plane as on the unit square with vertices $(0,0),\ (0,1),\ (1,1),\ (1,0)$.
http://www.physicsforums.com/showthread.php?p=238766
## inquisitive minds
if a number is a cube and a square the only forms will be 9k or 9k+1. any suggestions as to how to validate this?? would the 9 cases work? from 9k to 9k+8? what does everyone else think?
yet another...
10 divides z if and only if (10,z) does not = 1.
10 divides z if and only if (10,z) does not = 1.
Obviously if 10 divides z, then (10, z) is not 1 (it must be at least 10). But the converse is not true, consider z = 2...
that really doesn't help much, i kind of thought of that. this is one of those proofs that need to show one way then the other. that is all i am coming up with.
this is one of those proofs that need to show one way then the other.
But, uh, didn't I just give a counterexample to the other implication? And thus, it's false?
so if i have the left hand side saying that if you choose n = 1, then that says that (10,n) cannot = 1. with = to 2, it says the same thing, so it would work for all n except for 10, and that would givbe you 1. the right hand side would say that (10,n) not = 1. could i assume that it DOES = 1 and show a contradiction? would that be valid for this type of proof?
does anyone have any suggestions on the first question? would showing the 9 cases be the easiest way to prove this? i am thinking so, just square them and cube at the same time, or should i square them first THEN cube?
For a number $k$ to be a square and a cube, it needs to be the 6th power of another number, $k = n^6$. This is evident from the prime factorization of $k$. So we need to show that
$$n^{6} \equiv 0 \text{ or } 1 \pmod 9$$
Since only the residue of $n$ matters, we need consider only the nine cases $n=0,1,2,\dots,8$:
$$0^{6} = 0 \equiv 0 \pmod 9$$
$$1^{6} = 1 \equiv 1 \pmod 9$$
$$2^{6} = 64 \equiv 1 \pmod 9$$
$$3^{6} \equiv 0 \pmod 9 \text{ and } 6^{6} \equiv 0 \pmod 9$$
$$4^{6} = 2^{12} \equiv 1 \pmod 9$$
$$5^{6} \equiv (-4)^6 = 4^6 \equiv 1 \pmod 9$$
$$7^{6} \equiv (-2)^6 = 2^6 \equiv 1 \pmod 9$$
and
$$8^{6} \equiv (-1)^6 = 1 \pmod 9$$
which completes the proof.
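A quick brute-force check of these residues (a hypothetical snippet, not from the thread):
```python
# Every sixth power is congruent to 0 or 1 modulo 9.
print(all(pow(n, 6, 9) in (0, 1) for n in range(9)))  # True
```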
http://www.lmfdb.org/ModularForm/
Modular Forms
The term modular form is used to describe several types of functions which have a certain type of transformation property and growth condition. The theory of modular forms, although in complex analysis, is intricately connected to areas of number theory, algebraic geometry, combinatorics, algebraic topology, and mathematical physics.
Below you can browse classes of modular forms currently in the LMFDB.
Holomorphic Cusp Forms
Hilbert Modular Forms
Maass Forms on $\mathrm{GL}(2,\mathbb{Q})$
Siegel Modular Forms
http://jeremykun.com/category/statistics/
# Principal Component Analysis
Posted on June 28, 2012
Problem: Reduce the dimension of a data set, translating each data point into a representation that captures the “most important” features.
Solution: in Python
```import numpy
def principalComponents(matrix):
# Columns of matrix correspond to data points, rows to dimensions.
deviationMatrix = (matrix.T - numpy.mean(matrix, axis=1)).T
covarianceMatrix = numpy.cov(deviationMatrix)
eigenvalues, principalComponents = numpy.linalg.eig(covarianceMatrix)
# sort the principal components in decreasing order of corresponding eigenvalue
indexList = numpy.argsort(-eigenvalues)
eigenvalues = eigenvalues[indexList]
principalComponents = principalComponents[:, indexList]
return eigenvalues, principalComponents
```
Discussion: The problem of reducing the dimension of a dataset in a meaningful way shows up all over modern data analysis. Sophisticated techniques are used to select the most important dimensions, and even more sophisticated techniques are used to reason about what it means for a dimension to be “important.”
One way to solve this problem is to compute the principal components of a dataset. For the method of principal components, “important” is interpreted as the direction of largest variability. Note that these “directions” are vectors which may incorporate a portion of many or all of the “standard” dimensions in question. For instance, the picture below obviously has two different intrinsic dimensions from the standard axes.
The regular reader of this blog may recognize this idea from our post on eigenfaces. Indeed, eigenfaces are simply the principal components of a dataset of face images. We will briefly discuss how the algorithm works here, but leave the why to the post on eigenfaces. The crucial interpretation to make is that finding principal components amounts to a linear transformation of the data (that is, only such operations as rotation, translation, scaling, shear, etc. are allowed) which overlays the black arrows above on the standard axes. In the parlance of linear algebra, we’re re-plotting the data with respect to a convenient orthonormal basis of eigenvectors.
Here we first represent the dataset as a matrix whose columns are the data points, and whose rows represent the different dimensions. For example, if it were financial data then the columns might be the instances of time at which the data was collected, and the rows might represent the prices of the commodities recorded at those times. From here we compute two things: the average data point, and each data point’s deviation from that average. This is done in the `deviationMatrix` line above, where the arithmetic operations are entrywise (a convenient feature of Python’s numpy library).
Next, we compute the covariance matrix for the data points. That is, interpreting each dimension as a random variable and the data points as observations of that random variable, we want to compute how the different dimensions are correlated. One way to estimate this from a sample is to compute the dot products of the deviation vectors and divide by the number of data points. For more details, see this Wikipedia entry.
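For instance, a hand-rolled replacement for the `numpy.cov` line inside the function might look like the following sketch (note that `numpy.cov` actually divides by $n-1$, the unbiased convention, rather than by $n$):
```python
n = deviationMatrix.shape[1]  # number of data points (columns)
covarianceMatrix = numpy.dot(deviationMatrix, deviationMatrix.T) / (n - 1)
```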
Now (again, for reasons which we detail in our post on eigenfaces), the eigenvectors of this covariance matrix point in the directions of maximal variance, and the magnitude of each eigenvalue corresponds to the magnitude of the variance in that direction. Even more, regarding the dimensions as random variables, the correlations between the axes of this new representation are zero! This is part of why this method is so powerful; it represents the data in terms of unrelated features. One downside to this is that the principal component features may have no tractable interpretation in terms of real-life phenomena.
Finally, one common thing to do is only use the first few principal components, where by ‘first’ we mean those whose corresponding eigenvalues are the largest. Then one projects the original data points onto the chosen principal components, thus controlling precisely the dimension the data is reduced to. One important question is: how does one decide how many principal components to use?
Because the principal components with larger eigenvalues correspond to features with more variability, one can compute the total variation accounted for with a given set of principal components. Here, the ‘total variation’ is the sum of the variance of each of the random variables (that is, the trace of the covariance matrix, i.e. the sum of its eigenvalues). Since the eigenvalues correspond to the variation in the chosen principal components, we can naturally compute the accounted variation as a proportion. Specifically, if $\lambda_1, \dots \lambda_k$ are the eigenvalues of the chosen principal components, and $\textup{tr}(A)$ is the trace of the covariance matrix, then the total variation covered by the chosen principal components is simply $(\lambda_1 + \dots + \lambda_k)/\textup{tr}(A)$.
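As a concrete sketch of that bookkeeping (using the `principalComponents` function above; `dataMatrix` is a stand-in for your own data):
```python
eigenvalues, components = principalComponents(dataMatrix)
explained = eigenvalues.real / eigenvalues.real.sum()  # proportion of variation per component
cumulative = numpy.cumsum(explained)  # running total (components are already sorted)
k = int(numpy.searchsorted(cumulative, 0.90)) + 1  # number of components covering 90%
```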
In many cases of high-dimensional data, one can encapsulate more than 90% of the total variation using a small fraction of the principal components. In our post on eigenfaces we used a relatively homogeneous dataset of images; our recognition algorithm performed quite well using only about 20 out of 36,000 principal components. Note that there were also some linear algebra tricks to compute only those principal components which had nonzero eigenvalues. In any case, it is clear that if the data is nice enough, principal component analysis is a very powerful tool.
# Streaming Median
Posted on June 14, 2012
Problem: Compute a reasonable approximation to a “streaming median” of a potentially infinite sequence of integers.
Solution: (in Python)
```def streamingMedian(seq):
seq = iter(seq)
m = 0
for nextElt in seq:
if m > nextElt:
m -= 1
elif m < nextElt:
m += 1
yield m
```
Discussion: Before we discuss the details of the Python implementation above, we should note a few things.
First, because the input sequence is potentially infinite, we can’t store any amount of information that is increasing in the length of the sequence. Even though storing something like $O(\log(n))$ integers would be reasonable for the real world (note that the log of a petabyte is about 60 bytes), we should not let that stop us from shooting for the ideal $O(1)$ space bound, and exploring what sorts of solutions arise under that constraint. For the record, I don’t know of any algorithms to compute the true streaming median which require $O(\log(n))$ space, and I would be very interested to see one.
Second, we should note the motivation for this problem. If the process generating the stream of numbers doesn’t change over time, then one can find a reasonably good approximation to the median of the entire sequence using only a sufficiently large, but finite prefix of the sequence. But if the process does change, then all bets are off. We need an algorithm which can compensate for potentially wild changes in the statistical properties of the sequence. It’s unsurprising that such a naive algorithm would do the trick, because it can’t make any assumptions.
In words, the algorithm works as follows: start with some initial guess for the median $m$. For each element $x$ in the sequence, add one to $m$ if $m$ is less than $x$; subtract one if $m$ is greater than $x$, and do nothing otherwise.
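For a quick hand-worked trace: starting from $m = 0$ and feeding in the stream 5, 3, 8, the algorithm yields 1, 2, 3 — the estimate creeps toward the data one unit per element rather than jumping.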
In the Python above, we make use of generators to represent infinite sequences of data. A generator is an object with some iterable state, which yields some value at each step. The simplest possible non-trivial generator generates the natural numbers $\mathbb{N}$:
```def naturalNumbers():
n = 1
while True:
yield n
n += 1```
What Python does with this function is translate it into a generator object “g” which works as follows: when something calls next(g), the function computes as usual until it reaches a yield statement; then it returns the yielded value, saves all of its internal state, and then returns control to the caller. The next time next(g) is called, this process repeats. A generator can be infinite or finite, and it terminates the iteration process when the function “falls off the end,” returning None as a function which has no return statement would.
We should note as well that Python knows how to handle generators with the usual “for … in …” language form. This makes it extremely handy, because programmers don’t have to care whether they’re using a list or an iterator; the syntax to work with them is identical.
Now the “streamingMedian” function we began with accepts as input a generator (or any iterable object, which it converts to a generator with the “iter” function). It then computes the streaming median of that generator as part of another generator, so that one can call it, e.g., with the code:
```for medianSoFar in streamingMedian(naturalNumbers()):
print medianSoFar```
# Holidays and Homicide
Posted on November 25, 2011
## A Study In Data
Just before midnight on Thanksgiving, there was a murder by gunshot about four blocks from my home. Luckily I was in bed by then, but all of the commotion over the incident got me thinking: is murder disproportionately more common on Thanksgiving? What about Christmas, Valentine’s Day, or Saint Patrick’s Day?
Of course, with the right data set these are the kinds of questions one can answer! After an arduous Google search for agreeable data sets, I came across this project called the History of Violence Database (perhaps the most ominous database name ever!). Unfortunately most of the work in progress there puts the emphasis on history, cataloging homicide incidents only earlier than the 1920′s.
But one page I came across contained a complete list of the dates of each known homicide in San Francisco from 1849-2003. What’s more, it is available in a simple, comma-delimited format. (To all data compilers everywhere: this is how data should be transferred! Don’t only provide it in Excel, or SPSS, or whatever proprietary software you might use. Text files are universally accessible.)
With a little sed preprocessing and some Mathematica magic, I whipped together this chart of homicide counts by day of the year (click on the image to get a larger view):
Homicides in San Francisco, 1849 - 2003, organized by day of the year.
Here the red grid lines mark the highest ten homicide counts, the green grid lines mark the lowest ten, and the blue lines mark specific holidays. Some of the blue and red lines cover the same date, and so in that case the red line is the one displayed. Finally, the horizontal yellow line represents the median homicide count of 19, so one can compare individual dates against the median.
Now it would be a terrible thing to “conclude” something about general human behavior from this particular data set. But I’m going to do it anyway because it’s fun, and it lets me make fascinating and controversial observations. Plus, I need interesting tidbits of information to drop at parties with math and statistics students.
Here are my observations:
• There is a correlation between some holidays and homicide.
• New Year’s Day is by far the most violent day of the year, followed by Christmas Day. On the other hand, Christmas Eve is only slightly above average.
• The safest day of the year is January 5th. Having no other special recognition, it should be deemed National Peace Day, or at least National Too Pooped from New Year’s to Commit Murder Day.
• New Year’s Day (likely, Morning) is not dangerous because of excessive alcohol consumption alone. If it were, Saint Patrick’s Day would surely be similar. Although to be fair, one should compare this with the same statistic for Dublin.
• None of the following holidays are significantly more dangerous than the average day: Groundhog Day, Valentine’s Day, St. Patrick’s Day, April Fool’s Day, Cinco de Mayo, my birthday (August 28th), Halloween, Veteran’s Day, and New Year’s Eve (before midnight).
• On the other hand, the days following July 4th are quite vicious, even though July 3rd is very quiet.
• The days near November 24th are particularly violent because Thanksgiving (and Black Friday?) often fall on those days. Family gatherings are clearly high-risk events.
• Last-minute Christmas shopping (Dec. 13th, 15th) obviously brings out the rage in everyone.
• March and October are the docile months, while January, June, July, and December are the worst. Murder happens more often in the usual vacation months than at other times of the year.
Of course, some of the above points are meant to be facetious, but they bring up some interesting questions. Why does homicide show up more often during certain months of the year? It’s well known that most murder victims are personal acquaintances of the assailant (specifically, friends and family members); does this mean that family time rejuvenates or fuels strife? Are people more stressed during these times of year? Are June and July more violent simply because heat aggravates people, or at least gets them to interact with others more?
Unfortunately these aren’t the kinds of questions that lend themselves to mathematical proof, and as such I can’t give a definitive answer. But feel free to voice your own opinion in the comments!
For your viewing pleasure, here are the homicide counts on each day for the top and bottom 67 days, respectively, in ascending order:
```highest:
{{"04/05", 24}, {"04/19", 24}, {"05/15", 24}, {"05/24", 24},
{"05/25", 24}, {"06/28", 24}, {"08/05", 24}, {"08/17", 24},
{"09/01", 24}, {"09/03", 24}, {"09/19", 24}, {"09/20", 24},
{"10/26", 24}, {"11/02", 24}, {"11/23", 24}, {"02/01", 25},
{"02/08", 25}, {"02/11", 25}, {"02/15", 25}, {"04/21", 25},
{"05/07", 25}, {"06/04", 25}, {"06/07", 25}, {"07/24", 25},
{"08/01", 25}, {"08/25", 25}, {"09/07", 25}, {"10/21", 25},
{"10/23", 25}, {"11/03", 25}, {"11/10", 25}, {"11/27", 25},
{"01/06", 26}, {"01/18", 26}, {"03/01", 26}, {"03/24", 26},
{"05/30", 26}, {"07/04", 26}, {"07/29", 26}, {"08/09", 26},
{"09/21", 26}, {"11/09", 26}, {"11/25", 26}, {"12/06", 26},
{"05/02", 27}, {"09/06", 27}, {"09/24", 27}, {"10/11", 27},
{"11/08", 27}, {"12/12", 27}, {"12/20", 27}, {"07/18", 28},
{"09/04", 28}, {"12/02", 28}, {"05/18", 29}, {"07/01", 29},
{"07/07", 29}, {"10/27", 29}, {"11/24", 29}, {"01/28", 30},
{"01/24", 31}, {"07/05", 31}, {"08/02", 31}, {"12/15", 31},
{"12/13", 32}, {"12/25", 36}, {"01/01", 43}}
lowest:
{{"01/05", 6}, {"02/29", 6}, {"06/20", 7}, {"07/03", 7},
{"04/15", 8}, {"02/03", 9}, {"02/10", 9}, {"09/16", 9},
{"03/04", 10}, {"05/16", 10}, {"09/29", 10}, {"10/12", 10},
{"12/16", 10}, {"02/04", 11}, {"05/17", 11}, {"05/23", 11},
{"06/06", 11}, {"06/12", 11}, {"07/11", 11}, {"09/22", 11},
{"11/12", 11}, {"11/16", 11}, {"12/19", 11}, {"03/13", 12},
{"03/20", 12}, {"05/21", 12}, {"05/29", 12}, {"07/14", 12},
{"08/13", 12}, {"09/05", 12}, {"10/28", 12}, {"11/22", 12},
{"01/19", 13}, {"02/06", 13}, {"02/09", 13}, {"04/22", 13},
{"04/28", 13}, {"05/04", 13}, {"05/11", 13}, {"08/27", 13},
{"09/15", 13}, {"10/03", 13}, {"10/13", 13}, {"12/03", 13},
{"01/10", 14}, {"01/26", 14}, {"01/31", 14}, {"02/13", 14},
{"02/18", 14}, {"02/23", 14}, {"03/14", 14}, {"03/15", 14},
{"03/26", 14}, {"03/31", 14}, {"04/11", 14}, {"05/12", 14},
{"05/28", 14}, {"06/19", 14}, {"06/24", 14}, {"07/20", 14},
{"07/31", 14}, {"08/10", 14}, {"10/22", 14}, {"12/22", 14},
{"01/07", 15}, {"01/14", 15}, {"01/16", 15}}```
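The tallies above were produced in Mathematica, like the rest of this analysis. For readers who prefer Python, here is a minimal sketch of how such a per-day tally might be computed; the file name and column layout are assumptions for illustration, not the actual format of the data set.

```python
from collections import Counter
import csv

# Tally homicides by calendar day ("MM/DD") from a list of incident dates.
# Assumed layout: a CSV with one row per homicide and a "date" column
# formatted as "YYYY-MM-DD"; the real data set may differ.
counts = Counter()
with open("homicides.csv", newline="") as f:
    for row in csv.DictReader(f):
        _year, month, day = row["date"].split("-")
        counts[month + "/" + day] += 1

# Rank days from quietest to deadliest.
ranked = sorted(counts.items(), key=lambda item: item[1])
print("quietest days:", ranked[:5])
print("deadliest days:", ranked[-5:])
```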
Unfortunately, many holidays are defined year by year. Thanksgiving, Easter, Mother’s Day, Father’s Day, MLK Jr. Day, Memorial Day, and the Chinese New Year can’t be analyzed with this chart because they all fall on different days each year. It would not be very difficult to reorganize this data set according to the rules of those movable holidays; we leave it as an exercise to the reader, with a possible starting point sketched below. Remember that the data and Mathematica notebook used in this post are available from this blog’s Google Code page.
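As one possible starting point for that exercise, here is an illustrative Python sketch (not from the original notebook) that generates the dates of a movable holiday, using the rule that Thanksgiving is the fourth Thursday of November:

```python
import datetime

def thanksgiving(year):
    # Thanksgiving falls on the fourth Thursday of November.
    first = datetime.date(year, 11, 1)
    # Days from Nov. 1 to the first Thursday (weekday(): Mon=0, ..., Thu=3).
    offset = (3 - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 21)

# Calendar days to look up in the per-day homicide tally:
print([thanksgiving(y).strftime("%m/%d") for y in range(1965, 1975)])
```

Feeding these dates into the per-day tally above would let one compare movable holidays against the fixed-date chart.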
And of course, it would be quite amazing to find a data set which provides the dates of (even the last fifty years of) homicide history in the entire US, and not just one city. If any readers know of such a data set, please let me know.
Until next time!
http://mathhelpforum.com/calculus/206079-computing-slope-secant-line-between-points.html
# Thread:
1. ## computing the slope of the secant line between points
Hello everyone. I am really getting lost trying to figure out how to do this one correctly. Here is the problem:
Compute the slope of the secant line between the points at (a) x = 1 and x = 2, (b) x = 2 and x = 3, (c) x = 1.5 and x = 2, (d) x = 2 and x = 2.5, (e) x = 1.9 and x = 2, (f) x = 2 and x = 2.1, and (g) use parts (a) through (f) and other calculations as needed to estimate the slope of the tangent line at x = 2, where
$f(x) = e^x$
To give some context here, I have been learning (in Calculus: Early Transcendental Functions) about differentiation, and specifically tangent lines and velocity.
Any help with this problem would be greatly appreciated.
Thanks in advance!
2. ## Re: computing the slope of the secant line between points
The point of the problem is that as the second point approaches x=2 (from both directions), the secant line approaches the tangent line.
What is a secant line? You just take two points on the graph and draw a line through them. So in (a), for example, you calculate f(1) and f(2) which are your y-values, and then just calculate the slope as:
$\frac{y_2-y_1}{x_2-x_1}$
- Hollywood
3. ## Re: computing the slope of the secant line between points
Thanks Hollywood, I understand it a lot better now. However, I do not quite understand how I am supposed to calculate f(1) and f(2), for example in (a), because the problem has $f(x) = e^x$. So what am I supposed to do with the "e", or have I missed something somewhere?
From what I gathered, to find f(1) when $f(x) = e^x$, I replace the x with 1, which gives us $f(1) = e^1$, so what now?
Thanks in advance!!
4. ## Re: computing the slope of the secant line between points
The symbol e just represents a number $e=2.71828...$. You probably have the $e^x$ function on your calculator.
So $e^1=e$ is just a number, and so is $e^2$, $e^{1.5}$, etc.
So for part (a), for example, the slope is:
$\frac{y_2-y_1}{x_2-x_1}=\frac{e^2-e^1}{2-1}=e^2-e\approx{4.67}$
- Hollywood
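For anyone who wants to check all six slopes at once, here is a quick Python sketch, added for illustration, that computes each secant slope and compares it with the tangent slope $f'(2) = e^2 \approx 7.389$:

```python
import math

f = math.exp  # f(x) = e^x

pairs = [(1, 2), (2, 3), (1.5, 2), (2, 2.5), (1.9, 2), (2, 2.1)]
for x1, x2 in pairs:
    slope = (f(x2) - f(x1)) / (x2 - x1)
    print(f"secant through x = {x1} and x = {x2}: slope = {slope:.4f}")

# The secant slopes close in on the tangent slope f'(2) = e^2 from both sides.
print(f"tangent slope at x = 2: e^2 = {math.exp(2):.4f}")
```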
5. ## Re: computing the slope of the secant line between points
Got ya, thanks Hollywood!!!
http://mathoverflow.net/questions/120525?sort=oldest
## Embedded resolution of singularities
I'd like to check with my colleagues whether I have correctly understood "embedded resolution of singularities".
Let $X$ be a nonsingular projective variety over $\mathbf C$ and let $D$ be a "nice" divisor on $X$, say $D$ has strictly normal crossings. (Maybe we could just take $D$ to be a closed subscheme?)
Then, has the following statement been proven? And what is a "good" reference?
There exists a projective birational surjective morphism $\psi:Y\to X$ with $Y$ a nonsingular projective variety over $\mathbf C$ and the inverse image of $D$ in $Y$ a nonsingular projective variety over $\mathbf C$ of codimension one in $Y$?
I'm worried about whether I have correctly understood this statement, or maybe one needs some "normality" conditions on $D$ to assure this "embedded" resolution of singularities.
Also, how does one obtain this embedded resolution of singularities? Can we write down a terminating process which ends with an embedded resolution of singularities?
I have a hard time "believing" the above statement, but I don't know why. If anybody can explain to me that this is not so surprising as a result I would be very thankful.
This is Hironaka's theorem. Notice that you can require the strict transform of $D$ to be smooth; in general the inverse image of $D$ (i.e. the total transform) will be a normal crossing divisor. See en.wikipedia.org/wiki/Resolution_of_singularities, in particular the section Definitions and the bibliography. Also look at this nice expository paper by Hauser ams.org/journals/bull/2003-40-03/… – Francesco Polizzi Feb 1 at 15:49
At some point I found the following reference particularly useful: Wlodarczyk, Jaroslaw (2005), "Simple Hironaka resolution in characteristic zero", J. Amer. Math. Soc. 18 (4): 779–822, doi:10.1090/S0894-0347-05-00493-5 – Chris Brav Feb 1 at 16:25
## 2 Answers
What embedded resolution does achieve is this: if $Z\subset X$ is a Zariski closed set in a variety, then there is a smooth variety $Y\to X$, obtained from a series of blow ups along smooth centers, such that the preimage of $Z$ is a divisor with normal crossings. However, there is no reason to expect that the inverse image of $Z$ (or your $D$) in a further blow up will be smooth. (Francesco said this already in a comment, but it bears repeating.) This is already clear in the case where $D$ is a union of two lines in the plane. If you take the preimage in a further blow up, you will get a tree of $\mathbb{P}^1$'s. So your instinct is correct here.
As Donu and Francesco already explained, the preimage of $D$ is usually not smooth simply because if it contains any subvarieties (subschemes) that are blown up, then the preimage will not be irreducible, but if $X$ is normal, then the preimage will be connected and hence necessarily singular along the intersection of different components.
On the other hand, what can be achieved is that the strict transform of $D$ is non-singular. The strict transform (or birational transform) of $D$ is the closure in $Y$ of the preimage of the dense open subset of $D$ that lies over the locus of $X$ where the resolution is an isomorphism.
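To make the difference concrete, here is a quick chart computation, added for illustration. Blow up the origin of the plane and consider $D = \{xy = 0\}$, the union of the two coordinate axes. In the chart where the blow-up is given by $(u,v) \mapsto (x,y) = (u, uv)$, the total transform of $D$ is

$$\psi^{-1}(D) = \{\, u \cdot uv = u^2 v = 0 \,\},$$

that is, the exceptional divisor $\{u = 0\}$ (appearing with multiplicity two) together with the strict transform $\{v = 0\}$ of the axis $\{y = 0\}$. The total transform is a normal crossing divisor but is neither irreducible nor smooth, whereas each strict transform is a smooth line.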
http://mathoverflow.net/questions/29636/operad-terminology-operads-with-and-without-o0/34643
## Operad terminology - Operads with and without O(0).
In the Markl, Shnider and Stasheff text, a topological operad is an indexed collection of spaces $O(n)$ for $n \in \{1,2,3,\cdots\}$ satisfying some axioms. In May's text, the index set is allowed to include zero.
1) Is there a standard terminology for operads with and without $O(0)$?
2) Is there standard terminology for topological operads where $O(0)$ is a point, vs. $O(0)$ not being a point?
Although it's less important I'd be curious if people have examples where these distinctions are interesting.
Since any operad acts on its $O(0)$ part perhaps the $O(0)$ part should be called something like its "base"? But then "baseless operad" would sound kind of pejorative.
In my experience, there are so many variations on terminology with operads that there is almost never a standard term for anything. Regarding your question specifically, Lazarev (e.g., in arXiv:0704.2561) tends to refer to O(0) as the 'vacuum part' of the operad, though this is more in the context of modular operads. His terminology comes from analogy with QFT where one would talk about vacuum diagrams (no legs) contributing to the vacuum expectation value. – Jeffrey Giansiracusa Jun 26 2010 at 20:53
The "n" in O(n) could be referred to as "operadic degree". I certainly don't approve of "level". – André Henriques Jun 26 2010 at 22:00
According to wikipedia, a synonym of "arity" is "adicity", so you could call it the "operadicity". – Harry Gindi Jun 26 2010 at 22:07
FWIW, I'm in the habit of referring to n as the arity. (Certainly it is standard to refer to the "arity" of operations in logic and in universal algebra.) So $O(0)$ would be the 0-ary (or nullary) component. Also in logic, 0-ary function symbols are what are usually called "constants", so an operad where $O(0)$ is empty could logically be called an "operad without constants" or a "constant-free operad". The only trouble with that is that many people won't understand what the heck you are talking about until you explain. :-) – Todd Trimble Jun 27 2010 at 2:19
More recently, Peter May has called operads with $O(0)$ a point reduced. I like that terminology better. – Mike Shulman Jun 27 2010 at 3:12
## 2 Answers
I can second Jeffrey's comment: "reduced" is used to say that O(1) is just the monoidal unit (this allows us to use the Boardman-Vogt resolution in homotopy theory). It's my opinion that this terminology will probably stick.
I would also say that a $\mu$ in O(n) had arity n.
That the O(0) part of an operad is referred to as the 'constants' of the operad makes a lot of sense: every algebra for O must contain O(0), and the compositions of those elements must behave in a certain way.
Calling O(0) the point also makes sense, because in the category of algebras O(0) will be the initial object.
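To justify that last claim with a one-line computation (added for illustration): the initial $O$-algebra is the free algebra on the empty set, and in $\mathbf{Set}$ the free-algebra formula collapses to the constants because $\emptyset^n = \emptyset$ for $n \geq 1$:

$$F(\emptyset) = \coprod_{n \geq 0} O(n) \times_{\Sigma_n} \emptyset^{\,n} \cong O(0),$$

with the algebra structure on $O(0)$ coming from the operadic compositions $O(n) \times O(0)^n \to O(0)$.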
Here my comment has become too long, just as I've got to the point of my comment:
The comments to the question tend to prefer terminology that relates to the behaviour of the operad (eg "reduction", because a unit lowers the arity). My personal preference (and I think the literature follows it), is that terminology should have more of a relation to the category of algebras than to the operad itself.
So my vote is that you call O(0) the initial of O. And you call an operad without O(0) initial-less or uninitiated.
No, reduced is taken: means the 0th object is a point, not the first. That terminology is the one that sticks. – Peter May Dec 16 2011 at 17:39
It's worth pointing out that in Berger and Moerdijk's "Axiomatic Homotopy Theory for Operads" an operad $P$ is reduced if $P(0)$ is the unit of the underlying category. This matches Peter May's comment more than the answer. – David White Jun 7 at 17:04
It is interesting to note that the general theory of operads with constants and that of operads without constants (here I refer to $O(0)$ as $\it{constants}$, showing my personal preference for terminology) exhibit the following notable difference. Just for simplicity, let's consider operads enriched in sets and allow all (coloured) operads instead of just the monochromatic ones. Thus, let $\mathbf{Ope}$ be the category of all small coloured operads (symmetric or not; it does not matter for this example) in $\mathbf{Set}$. Let $\mathbf{cfOpe}$ be the full subcategory of $\mathbf{Ope}$ consisting of the constant-free operads (that is, those operads in which no $0$-ary arrows exist).
Now consider the obvious functors $j:\mathbf{Cat}\to\mathbf{Ope}$ and $l:\mathbf{Cat}\to\mathbf{cfOpe}$. It is rather simple to show that each of these functors has a right adjoint, so we get $j':\mathbf{Ope}\to\mathbf{Cat}$ and $l': \mathbf{cfOpe}\to\mathbf{Cat}$. However, $l'$ again has a right adjoint, while $j'$ does not.
Not much changes if one considers operads enriched in some symmetric monoidal category $V$.