http://www.nag.com/numeric/cl/nagdoc_cl23/html/C09/c09ccc.html
# NAG Library Function Document: nag_mldwt (c09ccc)
## 1 Purpose
nag_mldwt (c09ccc) computes the one-dimensional multi-level discrete wavelet transform (DWT). The initialization function nag_wfilt (c09aac) must be called first to set up the DWT options.
## 2 Specification
#include <nag.h>
#include <nagc09.h>
void nag_mldwt (Integer n, const double x[], Integer lenc, double c[], Integer nwl, Integer dwtlev[], Integer icomm[], NagError *fail)
## 3 Description
nag_mldwt (c09ccc) computes the multi-level DWT of one-dimensional data. For a given wavelet and end extension method, nag_mldwt (c09ccc) will compute a multi-level transform of a data array, ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$, using a specified number, ${n}_{l}$, of levels. The number of levels specified, ${n}_{l}$, must be no more than the value ${l}_{\mathrm{max}}$ returned in nwl by the initialization function nag_wfilt (c09aac) for the given problem. The transform is returned as a set of coefficients for the different levels (packed into a single array) and a representation of the multi-level structure.
The notation used here assigns level $0$ to the input dataset, $x$, with level $1$ being the first set of coefficients computed, with the detail coefficients, ${d}_{1}$, being stored while the approximation coefficients, ${a}_{1}$, are used as the input to a repeat of the wavelet transform. This process is continued until, at level ${n}_{l}$, both the detail coefficients, ${d}_{{n}_{l}}$, and the approximation coefficients, ${a}_{{n}_{l}}$ are retained. The output array, $C$, stores these sets of coefficients in reverse order, starting with ${a}_{{n}_{l}}$ followed by ${d}_{{n}_{l}},{d}_{{n}_{l}-1},\dots ,{d}_{1}$.
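The level structure and packing order just described can be sketched with the Haar wavelet. The following is a minimal illustration only (it assumes periodic extension and an even-length input at every level, and does not reproduce the library's general filtering; the function name is hypothetical):

```python
import math

def haar_mldwt(x, nwl):
    """Multi-level Haar DWT sketch: store the detail coefficients at each
    level, recurse on the approximation coefficients, then pack the result
    in reverse order as [a_nl, d_nl, d_{nl-1}, ..., d_1]."""
    s = 1.0 / math.sqrt(2.0)
    a, details = list(x), []
    for _ in range(nwl):
        detail = [(a[2 * i] - a[2 * i + 1]) * s for i in range(len(a) // 2)]
        approx = [(a[2 * i] + a[2 * i + 1]) * s for i in range(len(a) // 2)]
        details.append(detail)
        a = approx  # level-k approximations are the input to level k+1
    # pack: final approximations first, then details from coarsest to finest
    return a + [v for d in reversed(details) for v in d]
```

For a constant input such as `[1, 1, 1, 1]` with two levels, all detail coefficients vanish and only the single final approximation coefficient is nonzero, matching the structure described above.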
## 4 References
None.
## 5 Arguments
1: n – Integer Input
On entry: the number of elements, $n$, in the data array $x$.
Constraint: this must be the same as the value n passed to the initialization function nag_wfilt (c09aac).
2: x[n] – const double Input
On entry: x contains the one-dimensional input dataset ${x}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,n$.
3: lenc – Integer Input
On entry: the dimension of the array c. c must be large enough to contain the number, ${n}_{c}$, of wavelet coefficients. The maximum value of ${n}_{c}$ is returned in nwc by the call to the initialization function nag_wfilt (c09aac) and corresponds to the DWT being continued for the maximum number of levels possible for the given data set. When the number of levels, ${n}_{l}$, is chosen to be less than the maximum, then ${n}_{c}$ is correspondingly smaller and lenc can be reduced by noting that the number of coefficients at each level is given by $\lceil \bar{n}/2 \rceil$ for ${\mathbf{mode}}=\mathrm{Nag_Periodic}$ in nag_wfilt (c09aac) and $\lfloor \left(\bar{n}+{n}_{f}-1\right)/2 \rfloor$ for ${\mathbf{mode}}=\mathrm{Nag_HalfPointSymmetric}$, $\mathrm{Nag_WholePointSymmetric}$ or $\mathrm{Nag_ZeroPadded}$, where $\bar{n}$ is the number of input data at that level and ${n}_{f}$ is the filter length provided by the call to nag_wfilt (c09aac). At the final level the storage is doubled to contain the set of approximation coefficients.
Constraint: ${\mathbf{lenc}}\ge {n}_{c}$, where ${n}_{c}$ is the number of approximation and detail coefficients that correspond to a transform with nwl levels.
4: c[lenc] – double Output
On exit: Let $q\left(\mathit{i}\right)$ denote the number of coefficients (of each type) produced by the wavelet transform at level $\mathit{i}$, for $\mathit{i}={n}_{l},{n}_{l}-1,\dots ,1$. These values are returned in dwtlev. Setting ${k}_{1}=q\left({n}_{l}\right)$ and ${k}_{\mathit{j}+1}={k}_{\mathit{j}}+q\left({n}_{l}-\mathit{j}+1\right)$, for $\mathit{j}=1,2,\dots ,{n}_{l}$, the coefficients are stored as follows:
${\mathbf{c}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{k}_{1}$
Contains the level ${n}_{l}$ approximation coefficients, ${a}_{{n}_{l}}$.
${\mathbf{c}}\left[\mathit{i}-1\right]$, for $\mathit{i}={k}_{1}+1,\dots ,{k}_{2}$
Contains the level ${n}_{l}$ detail coefficients ${d}_{{n}_{l}}$.
${\mathbf{c}}\left[\mathit{i}-1\right]$, for $\mathit{i}={k}_{j}+1,\dots ,{k}_{j+1}$
Contains the level ${n}_{l}-\mathit{j}+1$ detail coefficients, for $\mathit{j}=2,3,\dots ,{n}_{l}$.
5: nwl – Integer Input
On entry: the number of levels, ${n}_{l}$, in the multi-level resolution to be performed.
Constraint: $1\le {\mathbf{nwl}}\le {l}_{\mathrm{max}}$, where ${l}_{\mathrm{max}}$ is the value returned in nwl (the maximum number of levels) by the call to the initialization function nag_wfilt (c09aac).
6: dwtlev[${\mathbf{nwl}}+1$] – Integer Output
On exit: the number of transform coefficients at each level. ${\mathbf{dwtlev}}\left[0\right]$ and ${\mathbf{dwtlev}}\left[1\right]$ contain the number, $q\left({n}_{l}\right)$, of approximation and detail coefficients respectively, for the final level of resolution (these are equal); ${\mathbf{dwtlev}}\left[\mathit{i}-1\right]$ contains the number of detail coefficients, $q\left({n}_{l}-\mathit{i}+2\right)$, for the (${n}_{l}-\mathit{i}+2$)th level, for $\mathit{i}=3,4,\dots ,{n}_{l}+1$.
7: icomm[$100$] – Integer Communication Array
On entry: contains details of the discrete wavelet transform and the problem dimension as set up in the call to the initialization function nag_wfilt (c09aac).
On exit: contains additional information on the computed transform.
8: fail – NagError * Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
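The storage rules for lenc, c and dwtlev above can be made concrete with a small sketch. The helpers below are not part of the NAG library (the names `required_lenc` and `unpack_coeffs` are illustrative); they apply the per-level coefficient counts from the description of lenc and the packing order from the description of c:

```python
import math

def required_lenc(n, nwl, nf=None, periodic=True):
    """Hypothetical helper: coefficient count n_c for an nwl-level DWT.

    Per level the count is ceil(nbar/2) for periodic end extension, or
    floor((nbar + nf - 1)/2) otherwise, where nbar is the number of inputs
    at that level; the final level also stores the approximation
    coefficients, doubling its storage.
    """
    nbar, total = n, 0
    for _ in range(nwl):
        q = math.ceil(nbar / 2) if periodic else (nbar + nf - 1) // 2
        total += q   # detail coefficients kept at this level
        nbar = q     # approximations feed the next level
    return total + q # plus the final approximation coefficients

def unpack_coeffs(c, dwtlev):
    """Split a packed array c = [a_nl, d_nl, d_{nl-1}, ..., d_1] using
    per-level counts in the style of dwtlev: dwtlev[0] is the
    approximation count, dwtlev[1:] the detail counts, coarsest first."""
    k = dwtlev[0]
    approx = c[:k]
    details = []
    for q in dwtlev[1:]:
        details.append(c[k:k + q])
        k += q
    return approx, details
```

For example, with periodic extension, `required_lenc(64, 3)` gives 32 + 16 + 8 detail coefficients plus 8 approximation coefficients, i.e. 64 in total.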
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_ARRAY_DIM_LEN
On entry, lenc is set too small: ${\mathbf{lenc}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{lenc}}\ge 〈\mathit{\text{value}}〉$.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INITIALIZATION
Either the initialization function has not been called first or array icomm has been corrupted.
Either the initialization function was called with ${\mathbf{wtrans}}=\mathrm{Nag_SingleLevel}$ or array icomm has been corrupted.
On entry, n is inconsistent with the value passed to the initialization function: ${\mathbf{n}}=〈\mathit{\text{value}}〉$, n should be $〈\mathit{\text{value}}〉$.
On entry, nwl is larger than the maximum number of levels returned by the initialization function: ${\mathbf{nwl}}=〈\mathit{\text{value}}〉$, maximum $\text{}=〈\mathit{\text{value}}〉$.
NE_INT
On entry, ${\mathbf{nwl}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nwl}}\ge 1$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7 Accuracy
The accuracy of the wavelet transform depends only on the floating point operations used in the convolution and downsampling and should thus be close to machine precision.
## 8 Further Comments
The wavelet coefficients at each level can be extracted from the output array c using the information contained in dwtlev on exit (see the descriptions of c and dwtlev in Section 5). For example, given an input data set, $x$, denoising can be carried out by applying a thresholding operation to the detail coefficients at every level. The elements ${\mathbf{C}}\left({k}_{1}+1:{k}_{{n}_{l}+1}\right)$, as described in Section 5, contain the detail coefficients, ${\stackrel{^}{d}}_{\mathit{i}\mathit{j}}$, for $\mathit{i}={n}_{l},{n}_{l}-1,\dots ,1$ and $\mathit{j}=1,2,\dots ,q\left(i\right)$, where ${\stackrel{^}{d}}_{ij}={d}_{ij}+\sigma {\epsilon }_{ij}$ and $\sigma {\epsilon }_{ij}$ is the transformed noise term. If some threshold parameter $\alpha $ is chosen, a simple hard thresholding rule can be applied as
$${\stackrel{-}{d}}_{ij} = \begin{cases} 0, & \text{if } \left|{\stackrel{^}{d}}_{ij}\right| \le \alpha \\ {\stackrel{^}{d}}_{ij}, & \text{if } \left|{\stackrel{^}{d}}_{ij}\right| > \alpha ,\end{cases}$$
taking ${\stackrel{-}{d}}_{ij}$ to be an approximation to the required detail coefficient without noise, ${d}_{ij}$. The resulting coefficients can then be used as input to nag_imldwt (c09cdc) in order to reconstruct the denoised signal.
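Applied to the packed coefficient array, the hard-thresholding rule leaves the leading approximation coefficients alone and zeroes small detail coefficients. A minimal sketch (the function name is illustrative, not a NAG routine):

```python
def hard_threshold(coeffs, alpha, n_approx):
    """Zero out detail coefficients with magnitude <= alpha, leaving the
    first n_approx entries (the approximation coefficients) untouched."""
    head = list(coeffs[:n_approx])
    tail = [0.0 if abs(v) <= alpha else v for v in coeffs[n_approx:]]
    return head + tail
```

The result can then be passed on for reconstruction as described above.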
See the references given in the introduction to this chapter for a more complete account of wavelet denoising and other applications.
## 9 Example
This example performs a multi-level resolution of a dataset using the Daubechies wavelet (see ${\mathbf{wavnam}}=\mathrm{Nag_Daubechies4}$ in nag_wfilt (c09aac)) with zero end extension; the number of levels of resolution, the number of coefficients in each level and the coefficients themselves are reused. The original dataset is then reconstructed using nag_imldwt (c09cdc).
### 9.1 Program Text
Program Text (c09ccce.c)
### 9.2 Program Data
Program Data (c09ccce.d)
### 9.3 Program Results
Program Results (c09ccce.r)
http://mathoverflow.net/questions/11188?sort=oldest
## cardinality of product modulo direct sum
Let $(X_i)_{i \in I}$ be an infinite family of sets with $|X_i| \geq 2$. We define an equivalence relation on $X = \prod_{i \in I} X_i$ by $x \sim y \Leftrightarrow \{i : x_i \neq y_i\}$ is finite. What is the cardinality of $X/\sim$? We may endow the $X_i$ with group structures and write this set as $\prod_{i \in I} X_i / \oplus_{i \in I} X_i$.

It is easy to see that $\min_i |X_i| \leq |X/\sim|$ and $|X| \leq |X/\sim| \cdot \max_i |X_i|$. In particular, if $|X_i|$ is constant, $|X/\sim| = |X|$.

If all the $X_i$ are finite, it can be shown that $|X/\sim|=2^{|I|}$. The same equation holds if every $X_i$ satisfies $\aleph_0 \leq |X_i| \leq |I|$ (I'll add the proofs if needed).

I'm struggling with the general case.
Your second and third paragraphs are contradictory. (I believe the latter a lot more.) – Reid Barton Jan 8 2010 at 21:40
You are asking for the cardinality of a reduced product with respect to the filter of cofinite sets. You'll find relevant information in Chen Chung Chang & H. Jerome Keisler, Model Theory, for example. It appears to be quite complicated in general! – Mariano Suárez-Alvarez Jan 8 2010 at 22:14
## 1 Answer
The cardinality of the reduced product is always the same as that of the product, modulo omitting finitely many unusually large $X_i$'s. (Even when $I$ is finite, in which case all $X_i$'s should be omitted.)
Arrange the sets $X_i$ in nondecreasing order of size in a wellordered sequence `$(X_\alpha)_{\alpha<\tau}$`. We may assume that $\tau$ is a limit ordinal. Otherwise, the last coordinate $\tau-1$ (like any single coordinate) contributes nothing to the reduced product. Omit $X_{\tau-1}$ and repeat as long as necessary.
I will show that then $|X| = |X/{\sim}|$ where $X = \prod_{\alpha<\tau} X_\alpha$.
Let $\kappa = \sup_{\alpha<\tau} |X_\alpha|$. Since each ${\sim}$-equivalence class has size $|\tau|\cdot\kappa$ (see note), we have
$|X| \leq |X/{\sim}|\cdot|\tau|\cdot\kappa = \max(|X/{\sim}|,|\tau|,\kappa).$
Since $|X| \geq 2^{|\tau|} > |\tau|$, we conclude that either $|X/{\sim}| = |X|$ or $|X/{\sim}| \leq |X| \leq \kappa$.
Since $\tau$ is a limit ordinal, the diagonal embedding $d:\kappa \to \prod_{\alpha<\tau} |X_\alpha|$, where
`$d_\alpha(\xi) = \begin{cases} \xi & \text{if } \xi < |X_\alpha| \\ 0 & \text{otherwise,}\end{cases}$`
shows that $\kappa \leq |X/{\sim}|$. So, in the case $|X/{\sim}| \leq |X| \leq \kappa$, we in fact have $|X/{\sim}| = |X| = \kappa$.
Note: The elements ${\sim}$-equivalent to a given $x \in X$ are obtained by selecting finitely many new values from the sets `$X_\alpha-\{x_\alpha\}$` to replace the corresponding value of the sequence $x$. There are at least `$\sum_{\alpha<\tau} |X_\alpha-\{x_\alpha\}|$` and no more than $\left(\sum_{\alpha<\tau} |X_\alpha|\right)^{<\omega}$ ways of doing this. Since $\tau$ is infinite and the $X_\alpha$'s all have two or more elements, these two bounds are equal to $\sum_{\alpha<\tau} |X_\alpha| = |\tau|\cdot\kappa$.
Why does every equivalence class have size $\sum_{\alpha < \tau} |X_\alpha|$? – Martin Brandenburg Jan 9 2010 at 11:09
I've clarified that fact in the end note. – François G. Dorais♦ Jan 9 2010 at 14:17
http://www.physicsforums.com/showthread.php?p=4103885
## Analyzing a coordinate transformation
In McCauley's book Classical Mechanics: Transformations, Flows, Integrable and Chaotic Dynamics we are analyzing a coordinate transformation in order to arrive at symmetry laws. A coordinate transformation is given by $q_i(\alpha) = F_i(q_1,...,q_f, \alpha)$. Then, to first order, McCauley states in equation 2.50 that an infinitesimal shift in the coordinates can be given by
[tex]
\delta q_i = \left[ \frac{\partial q_i(\alpha)}{\partial \alpha} \right]_{\alpha=0} \delta \alpha
[/tex]
The variation in action is then given as
$$\delta A = \int_{t_1}^{t_2} \frac{dL_{\alpha}}{d \alpha} \delta \alpha \, dt = \int_{t_1}^{t_2} \left(\frac{\partial L_{\alpha}}{\partial q_i(\alpha)} \frac{\partial q_i(\alpha)}{\partial \alpha} + \frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \frac{\partial \dot{q}_i(\alpha)}{\partial \alpha} \right) \delta \alpha \, dt.$$
I understand up to here. The book then states that we can deduce the following from the above:
$$\delta A = \left[ \left[ \frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \delta q_i \right]^{t_2}_{t_1} \right]_{\alpha=0} = \left[ p_i(\alpha) \left[\frac{\partial q_i(\alpha)}{\partial \alpha} \right]_{\alpha = 0} \delta \alpha \right]_{t_1}^{t_2} .$$
I don't understand this line. For instance, shouldn't it be $\delta \dot{q}_i$ in the second expression? Or shouldn't the q be undotted? But then where does the second term from the above equation go? Again, any help would be appreciated.
Thanks.
According to the form of your transformation, i.e., a point transformation that is not explicitly time dependent (and $\alpha$ is not time dependent here either, i.e., you are considering a global symmetry), you have $$\frac{\partial \dot{q}_i}{\partial \alpha}=\frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial q_i}{\partial \alpha}.$$ Further, due to the equations of motion (the Euler-Lagrange equations of the variational problem of Hamilton's principle) you have $$\frac{\partial L}{\partial q_i} = \frac{\mathrm{d}}{\mathrm{d} t} \frac{\partial L}{\partial \dot{q}_i}.$$ Thus, along the trajectory of the particle the integrand is a total time derivative, and you have $$\delta A=\delta \alpha \left [ \left (p_i \frac{\partial q_i}{\partial \alpha} \right )_{\alpha=0} \right]_{t=t_1}^{t=t_2},$$ as written in the book.

Further, by assumption your infinitesimal transformation is a symmetry of the action, i.e., the action doesn't change, and our derivation shows that along the trajectory of the particle the quantity $$Q=\left (p_i \frac{\partial q_i}{\partial \alpha}\right )_{\alpha=0}$$ is constant in time, i.e., it is the conserved quantity of the symmetry under consideration. This is one of Noether's theorems.

You can also show the opposite: if you have a conserved quantity, there must be a symmetry of the action, generated by the conserved quantity. In other words, each conserved quantity implies a one-parameter symmetry group of the action. This is another of Noether's theorems. The latter theorem becomes much more elegant in the Hamiltonian formulation of Hamilton's principle, where you have everything in terms of the Lie algebra of the phase-space functions with the Poisson bracket as the Lie product.
Then the infinitesimal symmetry transformations form a subalgebra, and the finite symmetry transformations are given by the flow of the Lie derivatives of the phase-space variables with respect to the generator of the symmetry, which then turns out to be the corresponding conserved quantity.
I think I understand now. The fact I overlooked is that everything we are working with must satisfy the Euler-Lagrange equation, since we are dealing with real, physical particles. So essentially the integral representing the variation in action can be rewritten as $$\int_{t_1}^{t_2} \left( \frac{d}{dt} \left[ \frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \right] \frac{\partial q_i(\alpha)}{\partial \alpha} + \frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \frac{\partial \dot{q}_i(\alpha)}{\partial \alpha} \right) \delta \alpha dt = \int_{t_1}^{t_2} \frac{d}{dt} \left[ \frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \frac{\partial q_i}{\partial \alpha} \right] \delta \alpha dt = \left[ \frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \frac{\partial q_i}{\partial \alpha} \delta \alpha \right]_{t_1}^{t_2}.$$ Then, based on the equation given above for an infinitesimal shift in the coordinates, this can all be rewritten as $$\left[\left(\frac{\partial L_{\alpha}}{\partial \dot{q}_i(\alpha)} \delta q_i \right)_{\alpha=0} \right]_{t_1}^{t_2},$$ and by definition of the canonical momentum you get, as you stated, $$\delta A=\delta \alpha \left [ \left (p_i \frac{\partial q_i}{\partial \alpha} \right )_{\alpha=0} \right]_{t=t_1}^{t=t_2}.$$

Then if the action variation is 0, Q will be conserved across the transformation, correct? I don't fully understand how this leads to a conservation law in general, though. For instance, let's say we are restricted to motion on a circle of radius r. Then we can say that $\alpha = \theta$, so a change in alpha would be some small rotation of the coordinate system. As we rotate the coordinate system, this means that the canonical (angular) momentum would be conserved. But how does this translate into a conservation law when the particle is moving along the circle itself? Or have I misinterpreted the math?
Of course, you have to make sure that your transformation really leaves the action invariant. Thus, in your example, the rotation is only a symmetry if the Lagrangian doesn't depend on $\theta$. Then the canonical momentum $p_{\theta}$ (which is an angular-momentum component along the axis of the rotation parametrized by $\theta$) is conserved. That's why generalized coordinates on which the Lagrangian doesn't depend are called cyclic.
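This can be checked numerically: for motion in a central potential the Lagrangian is independent of $\theta$, so $p_\theta = m(x\dot y - y\dot x)$ should stay constant along a trajectory. A small sketch (not from the thread; it assumes a unit-mass particle in a $V(r) = -1/r$ potential and a leapfrog integrator):

```python
def angular_momentum_drift(steps=20000, dt=1e-4):
    """Integrate planar motion under a central inverse-square force with
    leapfrog (kick-drift-kick) and return the spread of L = x*vy - y*vx.
    The force is radial, so each kick changes L only at round-off level."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.2

    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3  # force from the assumed V(r) = -1/r

    lo = hi = x * vy - y * vx
    for _ in range(steps):
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
        x += dt * vx; y += dt * vy                # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # half kick
        L = x * vy - y * vx
        lo, hi = min(lo, L), max(hi, L)
    return hi - lo
```

The observed spread of $L$ stays at floating-point round-off level over the whole trajectory, consistent with rotational symmetry implying conservation of the angular momentum.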
So assuming the Lagrangian is independent of theta should make the angular momentum a conserved quantity when the coordinate system is rotated. But can the rotation of the coordinate system be reinterpreted as the particle rotating rather than the coordinate system rotating? Is this interpretation what makes the angular momentum a constant when it is the particle moving rather than the coordinate system rotating?
http://mathoverflow.net/revisions/26110/list
The answer to your question is "there must be, it's just a question of doing the bookkeeping carefully". It's well-known that a subgroup of $\mathrm{PGL}(2,p)$ with order prime to $p$ is either cyclic, dihedral, tetrahedral ($A_4$), octahedral ($S_4$) or icosahedral ($A_5$). The icosahedral case only happens when $p\equiv\pm1$ (mod $5$). We now have to pull these back to $\mathrm{GL}(2,p)$, so have to count how many subgroups of $\mathrm{GL}(2,p)$ lie above a given subgroup of $\mathrm{PGL}(2,p)$ etc.
For subgroups of order divisible by $p$, the result in $\mathrm{PGL}(2,p)$ is that such a subgroup lies inside a normalizer of a Sylow $p$-subgroup or is $\mathrm{PSL}(2,p)$ or $\mathrm{PGL}(2,p)$. So in $\mathrm{GL}(2,p)$ either the group lies inside a normalizer of a Sylow $p$-subgroup or lies between $\mathrm{SL}(2,p)$ and $\mathrm{GL}(2,p)$.
http://mathhelpforum.com/pre-calculus/208343-need-help-some-precalc-trigonometry.html
1. ## Need help with some Precalc Trigonometry.
I have been trying to figure out how to do this problem almost all day; hopefully you guys can help. Here is the problem: "find two angles between 0° and 360° which satisfy the following:
ex: $\sin(\theta) = -\frac{1}{2}$"
Please include work so I can do it by myself.
2. ## Re: Need help with some Precalc Trigonometry.
First, ask yourself how many solutions will there be, and in which quadrants will they be. Think of where the line $y=-\frac{1}{2}$ will intersect the unit circle. Do you understand why this works?
3. ## Re: Need help with some Precalc Trigonometry.
No, sadly enough, I don't. But I know how to find out what quadrant it will be in, and that's about it. Is there any way I could put a picture of the problem on here so you can see it, or is that against the rules?
4. ## Re: Need help with some Precalc Trigonometry.
You may attach images here, without breaking any rules as long of course as the image does not contain anything inappropriate.
Imagine a ray which originates at the origin and terminates at some point on the unit circle. Now, imagine that this terminal point is initially at $(1,0)$. Then, let the ray move in a counter-clockwise direction to a particular point of interest on the unit circle. Let the angle through with the ray has turned be $\theta$. Then we have that the coordinates of the terminal point are $(\cos(\theta),\sin(\theta))$. Does this sound familiar?
5. ## Re: Need help with some Precalc Trigonometry.
My teacher hasn't taught us how to use the unit circle, but gave us an assignment (worth a quiz grade) with some questions on the problem I asked you. I'll put up a picture of the problems, including one I tried to answer and erased, but I put (210 and 330) as the answer. I kind of understand what you are talking about with the terminal point you showed me, but I don't know how to use it.
6. ## Re: Need help with some Precalc Trigonometry.
I'm going to sleep, so if I don't respond in a while, that's why. Thank you for helping me.
7. ## Re: Need help with some Precalc Trigonometry.
Well, there are certainly other ways of looking at it, and whatever way you used worked as you have the correct angles in degrees.
I suspect you observed that $\sin(30^{\circ})=\frac{1}{2}$, then used:
$(180+30)^{\circ}$ and $(360-30)^{\circ}$
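The two answers can be verified numerically (a quick check, not from the thread): compute the reference angle from the magnitude of the sine, then place it in quadrants III and IV, where sine is negative.

```python
import math

# sin(theta) = -1/2: the reference angle comes from sin(ref) = 1/2
ref = math.degrees(math.asin(0.5))   # 30-degree reference angle
solutions = (180 + ref, 360 - ref)   # quadrant III and quadrant IV

for deg in solutions:
    # each candidate angle must actually satisfy sin(theta) = -1/2
    assert abs(math.sin(math.radians(deg)) - (-0.5)) < 1e-9
```

This reproduces the 210° and 330° found in the thread.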
8. ## Re: Need help with some Precalc Trigonometry.
Well, actually my sister helped me use the unit circle, and I was able to just find 1/2 on there and get 210 and 330. So all I really need to do is find the reference angle, then add it to 180° or subtract it from 360°?
http://mathoverflow.net/questions/55546/which-areas-of-arithmetic-algebraic-geometry-can-be-learned-as-black-boxes-and
## Which areas of arithmetic algebraic geometry can be learned as “black boxes” and are there any references where they are treated as such?
In Matthew Emerton's comment on Terry Tao's blog, he speaks about learning etale cohomology or the theory of Neron models as "black boxes". By this he means that you can learn what the theory is about and how to use it, without going into the detailed proofs of why they can be used.
Which theories (e.g. etale cohomology) can be learned as black boxes?
And where would one go (e.g. find lecture notes) to learn something like that?
Notes on something like this would ideally give you an idea of what is going on, give examples, and most importantly illustrate how they would be used to solve problems. I am mainly interested in arithmetic algebraic geometry and algebraic number theory, so I would especially like to know about "black boxes" in this direction, though "black boxes" in other areas might also be worth knowing about.
Isn't that answered in the last paragraph of the question? – José Figueroa-O'Farrill Feb 15 2011 at 20:11
When I was a student, Hironaka's theorem on the existence of resolutions of singularities was considered a black box by pretty much everyone. I suspect it is still not too far from the truth today, in spite of the many simplifications. – Donu Arapura Feb 15 2011 at 20:54
The formalism of Grothendieck's 6 operations (aka Voevodsky's cross functors) and nearby/vanishing cycles constitutes a huge black box for Betti, étale, de Rham, p-adic and motivic (co)homology. Ayoub's thesis is a very complete SGA-like (if not EGA-like) reference. Cisinski and Déglise's paper on triangulated categories of motives has a more concise formulation. This includes the étale cohomology black box that Emerton was talking about: for example, the proper base change theorem is the exchange isomorphism $f'_!g'^* = g^*f_!$, while being Zariski local follows from $g^!f_* = f'_*g^!$ – YBL Feb 16 2011 at 1:42
One can "learn" anything to some extent as a black box. Learn definitions, statements of theorems, some applications, then try some new applications. But it is not as much fun as knowing how it works. I used Riemann Roch for years as a black box, but after reading Riemann, I understood it - what had been mysterious became clear. But life is short. Black box learning is easier surrounded by experts, absorbing useful knowledge by osmosis. 5 minutes with Hironaka helped more than staring at the paper for much longer. But to make real progress one must study too. – roy smith Feb 16 2011 at 3:45
I also think this site makes a huge contribution to black box learning. I often read the answers to other peoples questions here for the wonderful succinct expert answers so generously provided by many. – roy smith Feb 16 2011 at 3:47
## 2 Answers
I find Hodge theory pretty scary stuff, with its compact inclusions of Sobolev spaces, pseudodifferential operators and parametrices for elliptic differential operators. However, it is very easy to use the results of Hodge theory as emanating from a black box. I remember how exhilarated I was by the argument that a Hopf surface, homeomorphic to $S^1 \times S^3$, could not be Kähler, and much less projective, just because its first Betti number is $b_1=1$, whereas by Hodge theory a compact Kähler manifold $X$ has Betti numbers $b_q(X)$ which are even whenever $q$ is odd.
it might be simpler to notice that $b_2=0$, so it cannot be symplectic :) – Pavol S. Feb 16 2011 at 16:32
I have found Groebner bases to be incredibly useful for testing concrete ideas about varieties, and although I did spend time learning how they work, all I've ever really needed to know is how to interpret the results of a Groebner basis calculation, and how to choose monomial orders that will produce useful answers. I don't actually know how the most efficient algorithms and their implementations (Faugère F4 and F5) arrive at an answer; the textbook algorithms are painfully slow for complicated calculations.
Almost nobody knows how "the most efficient algorithms and their implementations (Faugère's F4 and F5) arrive at an answer" since there are only two really nontoy implementations, and both are closed source (Magma/Steele and Maple/Faugère). I personally think this is a shame. – William Stein Feb 16 2011 at 3:47
I couldn't agree more. – known google Feb 16 2011 at 4:11
The main principles behind F4 and F5 have been published (it took a while back then, but now they have been in print for 10+ years). The actual implementations are undoubtedly way more intricate than the basic algorithms suggest, but that's something you could say about any serious implementation of any non-trivial algorithm. That being said, looking at the hundreds of thousands of lines of code might not be the best way to get a feel for the algorithm either. – Thierry Zell Feb 16 2011 at 5:03
I also think that your answer blends two issues that should be kept very separate: understanding how an algorithm works and understanding its output. Many users only care about being sure that the output is indeed a Groebner basis, and couldn't care less about how the basis was computed. But any user has to learn at some point about various monomial orders, the FGLM algorithm and so on. – Thierry Zell Feb 16 2011 at 5:08
http://math.stackexchange.com/questions/282546/query-regarding-the-continuity-of-f-and-f-1/282551
# Query regarding the continuity of $f$ and $f^{-1}$.
Background: (From the definition of Homeomorphism in Topology, by Munkres )
Let $X$ and $Y$ represent topological spaces and $$f \colon X \longrightarrow Y$$ be a bijective function. Then if $f$ and $f^{-1}$ are continuous functions, $f$ is a homeomorphism.
Question:
(1) Would it suffice to say that if $f$ is a bijective, continuous function then $f$ is a homeomorphism?
(1.b) If not, when does it ever happen that $f$ is a continuous, bijective function and $f^{-1}$ is not also a continuous function?
## 2 Answers
The answer to question $1$ is no. Let $X$ be a set and $\tau$, $\tau'$ two topologies on $X$ with $\tau'$ strictly coarser than $\tau$. Then $\operatorname{id} : (X, \tau) \to (X, \tau')$ is a continuous bijection, but its inverse is not continuous.
I think @JonasMeyer is correct, but I see what you mean. Thank you. – providence Jan 20 at 7:26
This is a nice class of examples. It arises for example when considering the norm and weak topologies on a Banach space. – Jonas Meyer Jan 20 at 7:31
1. It's not sufficient. There are counterexamples.
2. If the map fails to be open, for example: $\theta\mapsto e^{i\theta}$ for $\theta\in[0,2\pi)$. The domain is not compact but the range is, so it is not a homeomorphism. But it is easy to check that this is a continuous bijection.
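A quick numerical sanity check of example 2 (a sketch; the function name is mine): two angles at opposite ends of $[0, 2\pi)$ map to nearly the same point of the circle, so the inverse map tears apart any small neighborhood of $1$ and cannot be continuous there.

```python
import cmath
import math

def f(theta):
    """The continuous bijection theta -> e^(i*theta) from [0, 2*pi) onto the circle."""
    return cmath.exp(1j * theta)

eps = 1e-6
a, b = eps, 2 * math.pi - eps    # far apart in the domain...
za, zb = f(a), f(b)              # ...yet their images nearly coincide near 1

print(abs(za - zb))   # tiny (~2e-6): the images are close on the circle
print(abs(a - b))     # large (~6.28): the preimages are nearly 2*pi apart
```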
http://www.haskell.org/haskellwiki/index.php?title=Function&oldid=7112
# Function
### From HaskellWiki
Revision as of 00:16, 19 October 2006 by BrettGiles (Talk | contribs)
Mathematically speaking, a function relates all values in a set A to values in a set B. The function $square\ x = x^2$, given that x is an integer, will map all elements of the set of integers into another set -- in this case the set of square integers. In Haskell functions can be specified as below in the examples, with an optional type specification that gives the compiler (and other programmers) a hint as to the use of the function.
## 1 Examples
```square :: Int -> Int
square x = x * x```
In other words, a function has input and output, and it describes how to produce the output from its input. Functions can be applied, which just means that you give an input value as argument to the function and can then expect to receive the corresponding output value.
Haskell functions are first class entities, which means that they
• can be given names
• can be the value of some expression
• can be members of a list
• can be elements of a tuple
• can be passed as parameters to a function
• can be returned from a function as a result
(quoted from Davie's Introduction to Functional Programming Systems using Haskell.)
### 1.1 map example
As an example of the power of first-class functions, consider the function map:
```map :: (a -> b) -> [a] -> [b]
map f xs = [f x | x <- xs]```
(Note this is a Higher order function.)
This function takes two arguments: a function f which maps as to bs, and a list xs of as. It returns a list of bs which are the results of applying f to every member of xs. So
map square [1,1,2,3,5,8]
would yield the list
[1,1,4,9,25,64]
. When you realize that the list of bs that map returns can itself be a list of functions, things start to get interesting. Suppose you have some data structure (e.g. Set) that has the function
insert :: Int -> Set -> Set
, which takes an integer and a set, and returns the set created by inserting the given integer into the given set. And suppose you have mySet and myList, a set and a list of values to be added to the set, respectively. One could write a function to recurse over the list of integers, each time inserting a single member of myList, but with first-class functions this is not necessary. Look at the expression
map insert myList
-- what is the type of the list which it produces? Since insert takes an Int and a Set, but only Ints were given, the resulting list will be of functions that take a set and return a set. Conceptually, the code
map insert [1,2,3]
will return the list
[insert 1, insert 2, insert 3]
.
### 1.2 Composition / folding example
Haskell supports a Function composition operator:
```(.) :: (b -> c) -> (a -> b) -> (a -> c)
(f . g) x = f (g x)```
So, for example,
((insert 1) . (insert 2) . (insert 3)) mySet
is the same as
insert 1 (insert 2 (insert 3 mySet))
. We're almost there -- what we need now is a function that can automatically put the composition operator between every element of
map insert myList
. Such code is included in Haskell, and it's known as folding. Several variants of the fold function are defined, but the basic concept is the same: given a function and a list, "collapse" the list by applying the function "between" its elements. This is easiest to see with simple binary operators, but it is not limited to them. For example,
foldr1 (+) [1,1,2,3,5]
eventually creates the expression
1+1+2+3+5
, and thus returns 12. In the set example,
foldr1 (.) (map insert myList)
gives us what we want, the successive insertion of every element of myList. What is the type of this expression? It is
Set -> Set
, meaning it will take a set and return a set -- in this case, the set it returns will have every element of myList inserted. To complete the example,
`newSet = (foldr1 (.) (map insert myList)) mySet`
will define newSet as mySet with the elements of myList inserted.
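For comparison, the same first-class-function idiom carries over to any language with closures; here is a rough Python analogue (using functools.reduce in place of foldr1, and built-in sets in place of the hypothetical Set type):

```python
from functools import reduce

def insert(x):
    """Curried insert: takes a value, returns a set -> set function."""
    return lambda s: s | {x}

def compose(f, g):
    return lambda v: f(g(v))

my_set = {10, 20}
my_list = [1, 2, 3]

# analogue of: foldr1 (.) (map insert myList)
pipeline = reduce(compose, [insert(x) for x in my_list])
new_set = pipeline(my_set)

print(sorted(new_set))                               # [1, 2, 3, 10, 20]
print(reduce(lambda a, b: a + b, [1, 1, 2, 3, 5]))   # 12, mirroring foldr1 (+)
```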
## 2 See also
• The Functions section in the Gentle Introduction.
http://physics.stackexchange.com/questions/9511/why-all-rocks-are-not-orbiting-bigger-rocks-in-space?answertab=oldest
# Why all rocks are not orbiting bigger rocks in space?
Why only big rocks (planets) have satellites and not small ones? Why cosmic dust doesn't orbit rocks that are many times heavier than the dust grains? If dust is still too heavy then what about molecules, atoms, or any particle for that matter? The mass difference should be millions of millions times, isn't it enough for orbiting? Moon is 1% of Earth mass, yet we don't see 1kg rocks orbiting 100kg ones.
## 3 Answers
The mass difference has practically nothing to do with whether two things can orbit each other. You can have two objects of the same mass in orbit (e.g. binary star systems), or you can have two objects of wildly different masses in orbit (the Earth and a piece of space junk), or anything in between. There are a lot of factors involved, for example:
1. A random dust grain probably won't be going at the right velocity to be captured into orbit around a bigger rock (like an asteroid).
2. When dust grains do orbit asteroids, there will probably be a lot of them, so they're going to collide with each other and lose energy, so they fall out of orbit rather quickly.
3. Planets tend to be pretty isolated, so things can orbit them without getting disturbed by other planets. In contrast, smaller rocks like asteroids are often found in large groups, so a dust grain couldn't orbit just one - it would also be affected significantly by the gravity of other asteroids in the vicinity.
OK, David, I disagree that you have captured the main point. The OP was asking about small bodies orbiting 100-kilogram bodies. You know that this is impossible and has nothing to do with the 3 details you wrote. The gravitational acceleration coming from 100-kilogram objects is so tiny that the orbital speed would have to be essentially zero for a reasonable radius comparable to 1 AU or similar solar-system-like distances, or smaller. 100 kilograms is $10^{22}$ times lighter than the Earth, so the required orbital velocities for the same $r$ are $10^{11}$ times lower. – Luboš Motl May 6 '11 at 7:03
The gravitational influence of a 100 kg body 1 AU from the Sun, its Hill Sphere, is about 40 meters; not trivial. However, as Luboš pointed out, a dust particle would need a near-zero relative velocity to orbit it. – Michael Luciuk May 6 '11 at 11:41
Not only is the sphere of influense very small, and the orbital velocities very small, but the perturbation needed to knock it out of orbit is also very small. Small bodies are affected by solar radiation pressure (and the radiation pressure of reradiated heat), and these tend to push them around. That means the stability of tiny objects orbitting other tiny objects is poor. – Omega Centauri May 6 '11 at 16:17
@Lubos: no, it's not impossible for something like a 1 kg mass and a 100 kg mass to be in orbit. As you said, it would just require an unreasonably low velocity, which is exactly what point #1 is about. – David Zaslavsky♦ May 6 '11 at 19:52
An intriguing picture of small solar systems comes to mind. Nevertheless, Luboš is correct that for any recognizable system the velocities of capture would be so low as to make the question moot.
As a rough example, equating the centripetal gravitational force with the centrifugal force, a particle orbiting a 100 kg mass 100 meters away from it would need a velocity on the order of ten microns per second, something that would hardly be observable: the 628-meter orbit would take about 1.7*10^4 hours to complete.
In any case, in real life in our space around the sun it is a many-body problem, as David suggests above, so statistical and chaotic behaviours will enter the problem and destroy any small-scale regularity. Only in deep outer space, outside the sun's field, might such an orbit be undisturbed and observed by a patient observer :).
The reason for such small velocities is the weakness of the gravitational constant G. The velocity of a stable orbit is proportional to the square root of G. The mass of the orbiting particle does not matter, as it cancels when equating the centripetal and centrifugal forces.
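The arithmetic is easy to reproduce (a sketch using $v = \sqrt{GM/r}$ for a circular orbit; the exact speed comes out a touch below the round ten-microns figure, with the period the same order of magnitude):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 100.0       # central mass, kg
r = 100.0       # orbital radius, m

v = math.sqrt(G * M / r)                    # circular-orbit speed, m/s
period_hours = 2 * math.pi * r / v / 3600   # time for one ~628 m orbit

print(v)              # ~8.2e-6 m/s: microns per second
print(period_hours)   # ~2.1e4 hours
```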
I want to expand a bit on David Zaslavsky's point #1: "A random dust grain probably won't be going at the right velocity to be captured into orbit around a bigger rock (like an asteroid)." Without the assistance of a third body, it's actually impossible for an object to be gravitationally "captured" in an orbit by another: if it wasn't already in orbit before, there's no way for it to start orbiting.
The reason is just conservation of energy. If the object wasn't already in an orbit, then it was moving too fast, given its distance, to be in an orbit. (To put it the other way around, it was too far away, given its speed.) Even if the little guy was moving in just the right direction to come close to the big guy, it won't be captured: it will move in on a hyperbolic path, approach the other object once, and then fly away again, simply because it has too much energy to be captured. The only way to produce an orbit is for a third body to interact with the system and siphon off some energy at the right time.
That can happen, but unless the density of things flying around is high, it's rare. And if the density is high, then subsequent collisions, which will disrupt the orbit, will also be common.
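The energy bookkeeping can be made concrete (a sketch; the asteroid mass and approach distance are made-up illustrative numbers): the sign of the specific orbital energy $E = v^2/2 - GM/r$ decides between a bound orbit ($E < 0$) and a one-pass hyperbolic flyby ($E \ge 0$).

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def specific_energy(v, r, M):
    """E = v^2/2 - GM/r; E < 0 means bound, E >= 0 means a one-pass flyby."""
    return 0.5 * v**2 - G * M / r

M_asteroid = 1e12   # very roughly a kilometer-scale asteroid, kg (illustrative)
r = 5000.0          # approach distance, m

v_escape = math.sqrt(2 * G * M_asteroid / r)   # ~0.16 m/s for these numbers

# a grain arriving faster than v_escape has E >= 0: it cannot be captured
print(specific_energy(1.5 * v_escape, r, M_asteroid) > 0)   # True
print(specific_energy(0.5 * v_escape, r, M_asteroid) < 0)   # True
```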
+1 good to point this out... I glossed over it in the interest of simplicity. – David Zaslavsky♦ May 6 '11 at 19:53
http://mathoverflow.net/questions/25439/is-every-g-invariant-function-on-a-lie-algebra-a-trace/25441
## Is every G-invariant function on a Lie algebra a trace?
I am in the (slow) process of editing my notes on Lie Groups and Quantum Groups (V Serganova, Math 261B, UC Berkeley, Spring 2010. Mostly I can fill in gaps to arguments, but I have found myself completely stuck in one step of one proof. One possibility that would get me unstuck is a positive answer to the following (which may be obviously false or trivial, but I'm not thinking well):
Question: Let $\mathfrak g$ be a finite-dimensional Lie algebra over $\mathbb K$, and if necessary you may assume that $\mathbb K = \mathbb C$ and that $\mathfrak g$ is semisimple. Then $\mathfrak g$ acts on itself by the adjoint action, and on polynomial functions $f : \mathfrak g \to \mathbb K$ via derivations. A polynomial `$f: \mathfrak g \to \mathbb K\,$` is $\mathfrak g$-invariant if $\mathfrak g \cdot f = 0$. For example, let $\pi: \mathfrak g \to \mathfrak{gl}(V)$ be any finite-dimensional representation. Then `$x \mapsto \operatorname{tr}_V \bigl(\pi(x)^n\bigr)$` is $\mathfrak g$-invariant for any $n\in \mathbb N$. Is every $\mathfrak g$-invariant function of this form? Or at least a sum of products of functions of this form?
When $\mathfrak g$ is one of the classical groups $\mathfrak{sl},\mathfrak{so},\mathfrak{sp}$, or the exceptional group `$G_2$` the answer is yes, because we did those examples in the aforementioned class notes. But I have no good grasp for the $E$ series, and I don't know if the statement holds for non-semisimples.
What I'm actually trying to prove is a weaker statement, but I figured I'd ask the stronger question, because to me the answer is not obviously "no". The weaker statement:
Claim: Let $\mathfrak g$ be a finite-dimensional semisimple Lie algebra over $\mathbb C$. Then every $\mathfrak g$-invariant function is constant on nilpotent elements of $\mathfrak g$. (Recall that $x\in \mathfrak g$ is nilpotent if `$\operatorname{ad}(x) = [x,] \in \mathfrak{gl}(\mathfrak g)$` is a nilpotent matrix — some power of it vanishes.)
It's clear that the spectrum of any nilpotent matrix is `$\{0\}$`, and for a semisimple Lie algebra, any nilpotent element acts nilpotently in all representations. For the classical groups, in the notes we exhibited generators for the rings of $\mathfrak g$-invariant functions as traces of representations, and so we can just check the above claim. But we did not do the $E$ series or $F_4$.
## 4 Answers
The answer to the general question is "no":
If $\mathfrak{g}$ is solvable, by Lie's theorem its commutant $\mathfrak{g}^{\prime}=[\mathfrak{g},\mathfrak{g}]$ is represented by strictly upper triangular matrices in a suitable basis in any finite-dimensional module. Hence all "trace generated" polynomials are zero on $\mathfrak{g}^{\prime}$; in other words, they factor through the abelianization $\mathfrak{g}/\mathfrak{g}^{\prime}$ and are generated by linear invariant polynomials. Unless the adjoint action of $G$ with Lie algebra $\mathfrak{g}$ on $\mathfrak{g}^{\prime}$ has a Zariski dense orbit, there are invariant polynomials that cannot be obtained in this way.
The answer to the claim is "yes", this is Kostant's theorem from his celebrated paper:
If $G$ is a complex semisimple group then its nullcone $\mathcal{N}\subset\mathfrak{g}$ is the Zariski closure of a single adjoint orbit consisting of regular nilpotent elements.
Kostant actually proved that the nullcone is the scheme-theoretic complete intersection defined by $rk\;G$ homogeneous positive degree algebra generators of $\mathbb{C}[\mathfrak{g}]^G$ — this is the connection with the Chevalley theorem mentioned by others. But for the present purpose, it is enough to show that regular nilpotents are Zariski open and dense in $\mathcal{N}\cap\mathfrak{n},$ and a good way of doing it was indicated by David Speyer.
Great! The section I'm editing is working towards a proof of Kostant's theorem. – Theo Johnson-Freyd May 21 2010 at 2:38
Kostant's theorem is easily the single most important theorem about semisimple Lie groups/Lie algebras that I didn't properly learn when I first learned the subject. It wasn't in Serre's book or "Seminaire Sophus Lie". I am glad to see that the situation has changed drastically (e.g. Chris and Ginzburg feature a proof). – Victor Protsak May 21 2010 at 2:52
The nullcone in a semisimple Lie algebra definitely gets too little attention in the structure and classification theory. The problem is partly that its importance doesn't emerge clearly until you get further into representation theory and related algebraic geometry. (And Kostant's early papers were dauntingly long though full of ideas, so they didn't translate easily into textbook treatments.) For a follow-up to standard Lie theory I'd certainly advocate the study of nilpotent orbits and the geometry of the nullcone, though the subject still needs its own comprehensive book. – Jim Humphreys May 21 2010 at 22:25
The answer to your question is yes for semisimple Lie algebras. This is essentially the content of the Chevalley restriction theorem. See the proof at the beginning of chapter 2 of Gaitsgory's notes.
Chevalley's restriction theorem is an important step in describing the center of the enveloping algebra of a semisimple complex Lie algebra (which eventually gets identified with the polynomial algebra of Weyl group invariants) It is explained in quite a few books and lecture notes including Gaitgory's notes and section 23 of my 1972 text, with references there to other sources. The ideas have since been extended to other settings as well, but the theorem itself still requires work. It's well worth consulting multiple sources. – Jim Humphreys May 21 2010 at 22:16
If the representation is fixed as the fundamental representation, then in the case of $\mathfrak{so}(2n)$, you need Pfaffians as well as traces.
Here is a sketch of an alternate proof of the claim; making this rigorous may be harder than the approach you take.
Let $G$ be the lie group corresponding to the lie algebra $\mathfrak{g}$. So $G$ acts on $\mathfrak{g}$. $G$-invariant functions are, as the name suggests, invariant under this action.
If $x$ is nilpotent then we can use the $G$-action to move $x$ into the nilradical `$\mathfrak{n}_+$`. Let `$\psi: \mathbb{C}^* \to T$` be a one-parameter subgroup that pairs positively with the positive roots. So, for $x$ in `$\mathfrak{n}_+$`, we have `$\lim_{t \to 0} \psi(t) x=0$`.
So $0$ is in the closure of $Gx$ and, if $f$ is $G$ invariant, we must have $f(x)=f(0)$. In particular, if $f$ is $G$-invariant and has positive degree, then $f(x)=0$.
The one-parameter group construction is a special case of the Hilbert-Mumford criterion: Zariski closure of Gx contains 0 ${\ }\iff$ there is a one-parameter subgroup $T\subset G$ such that Zariski closure of Tx contains 0. – Victor Protsak May 21 2010 at 1:43
This is essentially the approach I was planning to take. Thanks! – Theo Johnson-Freyd May 21 2010 at 2:38
http://mathforum.org/mathimages/index.php?title=Harmonic_Warping&oldid=8131
# Harmonic Warping
### From Math Images
Revision as of 11:28, 10 July 2009 by Ryang1 (Talk | contribs)
Harmonic Warping of Blue Wash
Fields: Calculus and Fractals
Image Created By: Paul Cockshott
Website: Fractal Art
This image is a tiling based on harmonic warping operations. These operations take a source image and compress it to show the infinite tiling of the source image within a finite space.
# Basic Description
This image is an infinite tiling. If you look closely at the edges of the image, you can see that the tiles become smaller and smaller and seem to fade into the edges. The border of the image is infinite so that the tiling continues unendingly and the tiles become eternally smaller.
The source image for this tiling is another image that is mathematically interesting and is also featured on this website. See Blue Wash for more information about how the source image was created.
# A More Mathematical Explanation
Note: understanding of this explanation requires single-variable calculus.
Harmonic Warping Equation
To create this image, a harmonic warping operation was used to map the infinite tiling of the source image onto a finite plane. This operation essentially took the entire infinite Euclidean plane (Euclidean refers to the traditional geometric space that most people are initially exposed to, as opposed to non-Euclidean geometries such as hyperbolic and elliptic geometry) and squashed it into a square. This type of operation can be called a distance-compressing warp.
The equations used to perform the harmonic warp are shown in a graph to the right and are as follows, where (x, y) is a coordinate on the Euclidean plane tiling and (d(x), d(y)) is the corresponding coordinate on the non-Euclidean square tiling:
$d(x) = 1 - \frac{1}{1+x}$
$d(y) = 1 - \frac{1}{1+y}$
You can observe for both of these equations that as x and y go to infinity, d(x) and d(y) both approach a limit of 1.
The graph to the right shows clearly that d(x) approaches 1 as x goes to infinity. Mathematically:
$\lim_{x \rightarrow \infty}d(x) = \lim_{x \rightarrow \infty}\left(1 - \frac{1}{1+x}\right) = 1 - \lim_{x \rightarrow \infty}\frac{1}{1+x} = 1 - 0 = 1$
Since d(x) and d(y) approach 1 as x and y go to infinity, the square plane that the infinite tiling is mapped to must be a unit square (that is, its dimensions are 1 unit by 1 unit). Since the unit square fits an infinite tiling within its finite border, the square is not a traditional Euclidean plane. As the tiling approaches the border of the square, distance within the square increases non-linearly. In fact, the border of the square is infinite because the tiling goes on indefinitely.
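The warp equations above are simple to play with (a minimal sketch of d straight from the formulas given):

```python
def d(t):
    """Harmonic warp d(t) = 1 - 1/(1 + t): maps [0, infinity) onto [0, 1)."""
    return 1.0 - 1.0 / (1.0 + t)

# every point of the infinite plane lands strictly inside the unit square,
# and d creeps toward (but never reaches) the border as t grows
for t in (0, 1, 10, 1000):
    print(t, d(t))   # 0 -> 0.0, 1 -> 0.5, 10 -> ~0.909, 1000 -> ~0.999
```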
Here is another example of this type of tiling contained in a square using the Union Flag:
Union Flag (source image); Union Flag tiled infinitely into a unit square
## Polar Harmonic Warping
A polar tiling
A tiling similar to the one mentioned above can be performed in polar coordinates. Polar tilings are infinite Euclidean tilings condensed into a finite circular space with an infinite border. The harmonic warping operations used to map the infinite Euclidean plane onto the circle are performed based on the radius of the circle being 1 unit.
## Four Infinite Poles
(Figure captions: crosses continue to intersect at 90 degrees; "X"s collapse to the four poles; Union Flag four infinite pole tiling.)
Another tiling that is possible involves designating the four cardinal poles of the circular border as infinite areas. The tiling then becomes very similar to a tiling done in the Poincaré Disk Model representing hyperbolic geometry. If the center of the circle of this type of tiling corresponds to the Euclidean origin where the x and y axes meet, then we can see that the tiling is consistent with the source image. In the Poincaré Disk Model, straight lines are represented as circles and perpendicular lines still intersect at an angle of 90 degrees. A breakdown of the four infinite pole tiling of the Union Flag above shows that in each tile, the cross of the Union Flag still intersects with the axes and the "X" of the Union Flag collapses towards each of the four poles.
## Comparing the Different Types of Tilings
| | | |
|----------------------------|---------------------|---------------------|
| | Saint Andrew's Flag | Saint George's Flag |
| Original Flag | | |
| Rectangular Tiling | | |
| Polar Tiling | | |
| Four Infinite Poles Tiling | | |
# Teaching Materials
There are currently no teaching materials for this page.
# About the Creator of this Image
Paul Cockshott is a computer scientist and a reader at the University of Glasgow. The various math images featured on this page emerged from his research dealing with digital image processing.
# References
Paul Cockshott
# Future Directions for this Page
I suggest adding a section on why this operation is called Harmonic warping and expanding the Polar Tiling section.
http://mathoverflow.net/questions/13638/which-popular-games-are-the-most-mathematical/13645
## Which popular games are the most mathematical?
I consider a game to be mathematical if there is interesting mathematics (to a mathematician) involved in
• the game's structure,
• optimal strategies,
• practical strategies,
• analysis of the game results/performance.
Which popular games are particularly mathematical by this definition?
Motivation: I got into backgammon a bit over 10 years ago after overhearing Rob Kirby say to another mathematician at MSRI that he thought backgammon was a game worth studying. Since then, I have written over 100 articles on the mathematics of backgammon as a columnist for a backgammon magazine. My target audience is backgammon players, not mathematicians, so much of the material I cover is not mathematically interesting to a mathematician. However, I have been able to include topics such as martingale decomposition, deconvolution, divergent series, first passage times, stable distributions, stochastic differential equations, the reflection principle in combinatorics, asymptotic behavior of recurrences, $\chi^2$ statistical analysis, variance reduction in Monte Carlo simulations, etc. I have also made a few videos for a poker instruction site, and I am collaborating on a book on practical applications of mathematics to poker aimed at poker players. I would like to know which other games can be used similarly as a way to popularize mathematics, and which games I am likely to appreciate more as a mathematician than the general population will.
Other examples:
• go
• bridge
• Set.
Non-example: I do not believe chess is mathematical, despite the popular conception that chess and mathematics are related. Game theory says almost nothing about chess. The rules seem mathematically arbitrary. Most of the analysis in chess is mathematically meaningless, since positions are won, lost, or drawn (some minor complications can occur with the 50 move rule), and yet chess players distinguish strong moves from even stronger moves, and usually can't determine the true value of a position.
To me, the most mathematical aspect of chess is that the linear evaluation of piece strength is highly correlated with which side can win in the endgame. Second, there is a logarithmic rating system in which all chess players say they are underrated by 150 points. (Not all games have good rating systems.) However, these are not enough for me to consider chess to be mathematical. I can't imagine writing many columns on the mathematics of chess aimed at chess players.
Non-example: I would exclude Nim. Nim has a highly mathematical structure and optimal strategy, but I do not consider it a popular game since I don't know people who actually play Nim for fun.
To clarify, I want the games as played to be mathematical. It does not count if there are mathematical puzzles you can describe on the same board. Does being a mathematician help you to learn the game faster, to play the game better, or to analyze the game more accurately? (As opposed to a smart philosopher or engineer...) If mathematics helps significantly in a game people actually play, particularly if interesting mathematics is involved in a surprising way, then it qualifies to be in this collection.
If my criteria seem horribly arbitrary as some have commented, so be it, but this seems in line with questions like Real world applications of math, by arXiv subject area? or Cocktail party math. I'm asking for applications of real mathematics to games people actually play. If someone is unconvinced that mathematics says anything they care about, and you find out he plays go, then you can describe something he might appreciate if you understand combinatorial game theory and how this arises naturally in go endgames.
Yeah, well the rules of mathematics are chessly arbitrary. – Harry Gindi Feb 1 2010 at 11:19
I think it's a mischaracterization to say chess is nonmathematical; it's just that chess, like so many things one encounters in the real world, is neither elegant nor simple from the point of view of mathematics. That game theory can't tell us much about chess tells us more about the limitations of game theory than about the mathematical nature of chess. That said, your suggested examples are definitely better. – Mark Meckes Feb 1 2010 at 14:29
This is very far from 'give a list of all games.' One hope is to find other popular games whose play involves mathematics. Another is to learn more real mathematics about games I already know. A third idea is to see what resonates with other mathematicians. I'm sorry if you don't find these interesting, or if you find my criteria arbitrary--I don't see a huge difference between this and questions like, "What are neat applications of mathematics/this field?" – Douglas Zare Feb 1 2010 at 17:56
You disagree with my statement that Nim is not actually played for fun? I can show you go clubs, bridge clubs, backgammon clubs, even a "world championship of rock-paper-scissors," etc. I've never seen a Nim club or heard someone describe himself or herself as a Nim player. There are many theoretical games people don't actually play, and I don't think it's arbitrary to exclude those. I'll clarify my reasons for excluding chess later. – Douglas Zare Feb 1 2010 at 23:17
In "l'année dernière à Marienbad" ("last year in Marienbad"), a movie by Alain Resnais, you can see people playing Nim for fun. Now this just moves the problem, because I don't know anyone watching that movie for fun. (I just mean that peculiar movie. Resnais made a lot of very good movies. But that one is a serious contender for the prize of the most boring movie ever). – Joël Oct 7 2011 at 19:31
## 50 Answers
Set is a card game that is very mathematical.
Set is played with a deck of 81 cards. Each card corresponds to a point in affine 4-space over $\mathbb Z/3$, with 3 possible colors, shadings, shapes, and counts. The players must identify Sets, sets of 3 cards corresponding to collinear points. Equivalently, Sets are triples of cards whose coordinate vectors add up to the 0-vector.
A natural question which arises during play is whether there are any Sets among the cards which have been dealt out. There can be 9 cards in a codimension 1 subspace which do not contain a Set, corresponding to a nondegenerate conic in affine 3-space such as $z=x^2+y^2$. There can be at most 20 cards not containing a Set, corresponding to a nondegenerate conic in the projective 3-space containing 10 points.
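The collinearity test is easy to sketch in code. This is not from the original answer, just a minimal Python illustration: encode each card as a 4-tuple over $\mathbb Z/3$; three cards form a Set exactly when their coordinates sum to zero mod 3.

```python
from itertools import combinations, product

def is_set(a, b, c):
    """Cards are 4-tuples over Z/3; three cards are collinear iff they sum to 0 mod 3."""
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def find_sets(cards):
    """All Sets among the dealt cards."""
    return [t for t in combinations(cards, 3) if is_set(*t)]

deck = list(product(range(3), repeat=4))  # all 81 cards
# Any two distinct cards lie on exactly one line, so the full deck holds
# (81 * 80 / 2) / 3 = 1080 Sets in total.
print(len(find_sets(deck)))  # 1080
```

Running `find_sets` on a typical 12-card deal answers the question above directly: it returns the (possibly empty) list of Sets on the table.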
Here's an online version for people wanting to see what it's like: setgame.ath.cx – Zev Chonoles Feb 1 2010 at 13:19
Does this satisfy the "popular" criterion? I haven't exactly seen many Set clubs around. – Douglas S. Stones Feb 2 2010 at 5:58
It is at least popular enough that Set decks can be found in big-box bookstores like Borders or Barnes & Noble. – Matt Noonan Feb 2 2010 at 6:31
Set is very popular with certain types of student clubs (e.g. math clubs), although I don't think it's really the kind of game that warrants its own clubs. – Qiaochu Yuan Jun 7 2011 at 16:14
Set is usually a hit at parties (coming from an undergraduate at a school that ranks in the top party schools in the US.) – Benjamin Braun Jan 29 at 13:49
Hex is a popular game with some interesting mathematical properties. John Nash gave an easy proof that the first player can force a win, his famous strategy-stealing argument. His proof gives no indication as to what the optimal strategy actually looks like.
There is also a nice AMM paper by David Gale in which he shows that the fact that Hex cannot end in a draw is equivalent to the Brouwer fixed point theorem (for higher dimensions, one needs a higher dimensional version of Hex).
One variant is called Y. Both players attempt to create a group connecting all sides of a triangular board. As with Hex, there are no ties possible. A commercial version adds 3 points of positive curvature, with 5 neighbors instead of 6.
Dots and boxes is a pencil-and-paper game with a reasonably deep mathematical theory. The game is often played by schoolchildren.
One thing interesting about Dots and Boxes is that it can often be well approximated by Nimstring which itsef can be mapped onto Nim. So the theory of Nim is important even if Nim itself isn't that interesting. – Dan Piponi Feb 1 2010 at 17:38
FWIW, at a very high level of play on a small board, nim theory is surprisingly irrelevant. Dots and boxes is a very mathematical game, but nim is only one component of it, and there are plenty of expert-level players on a 5x5 board who know nothing at all about nim but who will still wipe the floor with anyone who has read Winning Ways and assumes that they are now an expert. See for example littlegolem.net/jsp/forum/… , a discussion which is visibly (a) mathematics and (b) relevant to dots and boxes, but (c) has nothing to do with Nim. – Kevin Buzzard Feb 2 2010 at 11:34
It's mentioned in the Wikipedia article, but it may as well be mentioned again: Elwyn Berlekamp wrote a book on the mathematical theory of dots and boxes. – Todd Trimble Oct 7 2011 at 12:27
This was a favorite pastime on my mobile: pushing blocks, also known as Sokoban.
Some years ago it came as a little surprise to me that it is NP-complete. Here is one paper showing that:
Demaine, E. D. and Hoffmann, M., Pushing Blocks is NP-Complete for Noncrossing Solution Paths, 2001.
http://www.inf.ethz.ch/~hoffmann/pub/dh-pbnns-01.pdf
Actually, Sokoban has been proved to be PSPACE-complete by J. Culberson: webdocs.cs.ualberta.ca/~joe/Preprints/Sokoban/…. The paper by Demaine and Hoffman seems to consider several different variants of the game; moreover, they only show NP-hardness. Some more references: en.wikipedia.org/wiki/Sokoban en.wikipedia.org/wiki/… David Eppstein's webpage: ics.uci.edu/~eppstein/cgt/hard.html – Jan Kyncl Jan 14 at 5:37
The game of Go is mathematical in several ways. Its rules involve connected sets of pieces rather than pieces. Many combinatorial games including infinitesimals can be represented as positions in go endgames, as was described in Mathematical Go: Chilling Gets the Last Point
Though I love Go, I can't agree that it has a mathematical feel except in the endgame. Set and Sudoku both seem to use the same sort of "proof" process that we're already used to; with the exception of Go endgames and perhaps complex ko fights, I rarely have the familiar "proving / deriving" feeling during a game. – Matt Noonan Feb 1 2010 at 20:52
I think many of the ideas which you can prove in the go endgame (before infinitesimals) are present but too complicated to prove in much earlier stages. In addition, the connectivity fights in go seem like they should give you that feeling of proving elementary statements. You sometimes want to prove that you can connect one group of stones to another provided that there is no enemy stone added to a particular area. The smaller the area, the less you need to worry about that connection. – Douglas Zare Feb 2 2010 at 1:47
go is so complex! clearly, it is mathematical, but my favorite part is its organic nature that openings and middle game take or feel like. – Sean Tilson Apr 7 2010 at 3:01
Go is interesting from the computer science point of view because it is very far from solved. Checkers is solved, and chess has been "solved" from a practical point of view - computers beat the best humans. Will this happen with Go? And when computers start beating pros, will moving to a larger board regain the advantage for humans? – Pait Jun 7 2011 at 17:04
By the way, I've been thinking for a long time about a "continuous version" of go, where you can play anywhere on the field (not just at intersections). There would be some rules to modify, but overall it seems more natural to me. Unfortunately I don't have the skills to program it; would someone be interested? – Raphael L Aug 27 2011 at 14:35
The ability to embed mathematical problems into chess (like combinatorial game theory into go) should not be underestimated. Papers by Richard Stanley and Noam Elkies demonstrate problems where the objective is to determine the number of ways to perform a given task. They include problems where the answer is
• A Catalan number, say the 7th or even the 17th. (Problems A and B from Stanley. Problem A from Elkies.)
• Fibonacci numbers, arbitrarily large. (Problem 4 from Elkies.)
• The coefficients of the Maclaurin series for tangent, say the 7th or 9th. (Problem D from Stanley. Problems B and 1 from Elkies.)
• Directly computable from the Selberg integral $\int_0^1 \cdots \int_0^1 \prod_{1\le i\lt j\le 4} (x_i - x_j)^2 dx_1\cdots dx_4$. (Problem E from Stanley.)
Of course, the answers are this for some mathematical reason, not accidentally. Many of the problems are also elegant from a chess perspective.
Let me add that I am impressed by their ability to embed interesting problems in a game I still view as not mathematical. – Douglas Zare Feb 1 2010 at 10:37
Poker is a family of card games.
Many model games from game theory approximate poker situations, and some of the earliest work on game theory featured model games for betting and bluffing in poker (despite the popular misconception that bluffing is not mathematical) studied by Borel and von Neumann.
Nes Ankeny wrote a book Poker strategy: Winning with game theory in 1981 which gives an interesting mathematical approach to poker. Ankeny was a number theorist who was also a world-class poker player.
Tournament poker often rewards lower places than first. This means the value of chips is nonlinear, and several models have been used to determine the appropriate risk aversion by finding good functions from the distributions of chips to probabilities of finishing in each place. One is diffusion, which led to an application of the Riemann map of an equilateral triangle, although the difficulty of computing this and higher dimensional diffusion led to the widespread adoption of the independent chip model instead: Shuffle all chips, and rank players by their highest chips. Equivalently, remove the chips from play one by one.
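The independent chip model described above is easy to compute exactly for small tables. A minimal sketch (my own illustration, not from the answer): the next-highest place goes to each player with probability proportional to their stack, and one recurses on the remaining players for the remaining places.

```python
def icm(stacks, payouts):
    """Independent chip model: expected prize for each player. The next place
    goes to player i with probability stacks[i] / (chips still in play);
    then recurse on the remaining players for the remaining places."""
    n = len(stacks)
    def ev(alive, place):
        total = sum(stacks[i] for i in alive)
        res = [0.0] * n
        for i in alive:
            pr = stacks[i] / total
            res[i] += pr * payouts[place]
            if len(alive) > 1:
                sub = ev(alive - {i}, place + 1)
                for j in alive - {i}:
                    res[j] += pr * sub[j]
        return res
    return ev(frozenset(range(n)), 0)

# Two players with stacks 2:1 playing winner-take-all: equities 2/3 and 1/3.
print(icm([2, 1], [1.0, 0.0]))  # [0.666..., 0.333...]
```

The recursion is exponential in the number of players, which is one reason ICM is usually only computed exactly for short-handed tables.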
Bill Chen and Jerrod Ankenman wrote The Mathematics of Poker aimed more at mathematicians than poker players. They studied model games in which players are dealt numbers from [0,1] instead of cards. They also computed the Nash equilibrium strategies for some situations in NL Texas Hold'em, the most popular variant at the moment. They also addressed a few topics outside of game theory, such as the risk of ruin probability with an unknown but normally distributed true win rate, and with a distribution skewed enough that the Brownian approximation fails, as for tournament play.
When the first few players fold, and we know they are more likely to have folded 8-4 than ace-ace, what can we say about the distributions of hands for the remaining players? Jerrod Ankenman remarked, "the problem of finding the hand distributions of the blinds given that the first n players have folded a specified set of distributions [sets of hands] is NP-hard."
[I merged two answers about poker.]
See also Alspach's articles: math.sfu.ca/~alspach/pokerdigest.html – Douglas S. Stones Feb 2 2010 at 8:04
Believe it or not, Battleship is an interesting mathematical game. Well, at least if you play it in a high enough dimension: finding small explicit sets that hit all large enough combinatorial rectangles (ships) has been studied quite a lot and there are still a couple of open problems. See for instance, here.
Many children's games are surprisingly mathematically interesting. These games typically have very simple rules and very little scope for player strategy. This means that that the entire evolution of the game can be described by simple rules, which in turn means that the game can be treated mathematically. Even games like War and Candyland, which have no player strategy whatsoever, lead to interesting math. – David Harris Jan 14 2011 at 20:29
What about Pool? It contains quite a lot of geometry.
Especially hyperbolic pool! geometrygames.org/HyperbolicGames – Qiaochu Yuan Oct 7 2011 at 19:04
There's also the famous Rubik's Cube, which is popular and heavily maths-related.
But is that considered a game? – Todd Trimble Oct 7 2011 at 12:35
Lights Out is a game which has effectively been reduced to a problem in linear algebra, particularly a routine exercise in Gaussian elimination. A good link can be found here. What's particularly interesting is the fact that operations in the game commute, which allows for the linear algebra approach.
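A sketch of that linear-algebra solution (my own illustration, not from the answer): for the classic 5×5 board the state space is $(\mathbb Z/2)^{25}$, pressing button $j$ adds a fixed vector, and solving the puzzle amounts to solving $Ax = b$ over GF(2) by Gaussian elimination.

```python
def lightsout_solve(state, n=5):
    """Find button presses turning all lights off, by Gaussian elimination
    over GF(2); each equation is stored as a Python-int bitmask."""
    N = n * n
    # mask[i]: lights toggled by pressing button i (itself + orthogonal neighbours)
    mask = []
    for r in range(n):
        for c in range(n):
            m = 0
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= r + dr < n and 0 <= c + dc < n:
                    m |= 1 << ((r + dr) * n + (c + dc))
            mask.append(m)
    # Equation for light i: sum over presses j toggling i equals state[i] (mod 2).
    # The neighbour relation is symmetric, so row i of the system is mask[i].
    eqs = [mask[i] | (state[i] << N) for i in range(N)]
    pivots = []
    for col in range(N):
        row = len(pivots)
        piv = next((k for k in range(row, N) if eqs[k] >> col & 1), None)
        if piv is None:
            continue
        eqs[row], eqs[piv] = eqs[piv], eqs[row]
        for k in range(N):
            if k != row and eqs[k] >> col & 1:
                eqs[k] ^= eqs[row]
        pivots.append(col)
    if any(eqs[k] >> N & 1 for k in range(len(pivots), N)):
        return None  # inconsistent: this light pattern cannot be switched off
    presses = [0] * N
    for k, col in enumerate(pivots):
        presses[col] = eqs[k] >> N & 1  # free variables are set to 0
    return presses

print(lightsout_solve([1] * 25) is not None)  # True: "all on" is solvable
```

On the 5×5 board the press matrix has rank 23, so a quarter of all light patterns are unsolvable; the elimination above detects those cases and returns `None`.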
I wonder if there are any non-commutative turn based games which can also be solved mathematically? Certainly, chess is out of the question!
I gave a talk at MathFest in 2007 on the $n$-dimensional generalization of this game. Just assume you're playing the game on a lattice and pressing a light changes all lights touching it. We also generalized the solution to include things like lights out on a torus or a sphere or a Mobius band... The point is, all the solutions are similar and the generalization follows fairly easily by reconstructing the "button vectors" which are related to the change of state when pressing a particular button. – B. Bischof Feb 2 2010 at 17:43
2
I'm not sure about non-commutative turn based games, but certainly the puzzles presented in a number of popular video games can be easily described with non-abelian groups. One that comes to mind was a lever puzzle in the first God of War game. The state of a device was manipulated with two levers and a particular state would open the door. It turns out that the state space of this device was a group generated by the actions of the two levers. So proceeding in the game was equivalent to finding a representation of a given group element in terms of the generators. – Aubrey da Cunha Oct 7 2011 at 18:28
The Clay Institute, in its material on the Millennium Prize Problems, lists the "P vs NP Problem", and simple Minesweeper is given as an example for which finding a strategy is tied to the solution of that problem, as was proved by Richard Kaye, referenced below. Here is the beginning of the Minesweeper article:
The connection between the game and the prize problem was explained by Richard Kaye of the University of Birmingham, England ('Minesweeper is NP-complete', Mathematical Intelligencer volume 22 number 4, 2000, pages 9-15). And before anyone gets too excited, you won't win the prize by winning the game. To win the prize, you will have to find a really slick method to answer questions about Minesweeper when it's played on gigantic grids and all the evidence suggests that there isn't a slick method. In fact, if you can prove that there isn't one, you can win the prize that way too.
Blokus is a fairly new game that's gaining popularity (though there are older games with a similar set-up). There are several versions, and the four-player version has some non-cooperative elements to the gameplay.
Each player takes turns to place polyominoes of one through five squares (the monomino, domino, triominoes, tetrominoes, and pentominoes) so that they touch a previously played piece of their own colour, but only at the corners. The overall aim of the game is to try and cover as much area with your own pieces as possible. The countertactics to stop a player doing this involve placing your pieces in a way that will block them from making good moves.
I think this game would fit your criteria. It is relatively unstudied from a mathematical point of view as far as I know. I imagine some familiarity with some of the mathematical work on tessellations of polyominoes would have to give a player at least a marginal advantage in planning a long-term strategy. It probably fits the criteria in other ways too.
Since you mentioned bridge in the question, but nobody has said anything about it, I'll take a stab. Interestingly, bridge has several more-or-less orthogonal mathematical aspects to it.
1. The play of the hand necessarily involves calculating or estimating probabilities. These are not so difficult as to be mathematically interesting, but I do think they can be slightly more challenging that counting your outs in poker. In bridge there are often multiple possible ways of combining chances to make your contract, some highly dependent for their success upon the order in which the chances are taken.
2. Coming up with efficient communication schemes is central to both bidding and defense. I don't really know enough of the theory behind designing bidding systems to comment. But designing an efficient "relay" system probably involves a smidgen of math.
3. Finally there's more esoteric stuff. For instance, since bridge is not a game of complete information, one doesn't usually expect combinatorial game theory structures to arise. However it can happen that the bidding and play reveal enough information so that everyone knows what cards everyone else has, in which case there is of course complete information. Sometimes this actually brings added complexity though! One manifestation of this is higher order throw-ins, which can be analyzed via nimbers, etc.
I understand that Meckstroth and Rodwell, probably the best pair in the US if not the world, once used the Fibonacci sequence in their bidding system. Never got to ask how… – Chad Groft Apr 8 2010 at 1:18
Hmm... I've never learned much about this, but some googling (see orig.gibware.com/moscito/moscito.pdf) suggests that common to all relay systems is the principle that a bid X permits the transfer of roughly the golden ratio times as much information as the bid X+1 permits. So that would sort of suggest designing your relay system around the fibonacci sequence, except successive EARLY fibonacci numbers are not that THAT close to the golden ratio, and probably it is these early values which would arise most often in actual play. So that might cost your system efficiency. – Sam Lichtenstein Apr 8 2010 at 15:34
Rock-Paper-Scissors remains a popular children's game. It's a simple 0-sum game with a mixed Nash equilibrium.
In practice, even if that is your goal, it's hard to generate a uniformly random choice from {rock,paper,scissors} which is independent from what you and your opponent have chosen before. While the unexploitable strategy is simple in theory, exploiting people is complicated, and can involve statistics and hidden-Markov models.
There is a gambling site which lets you play rock-paper-scissors against an opponent, charging a rake so that the Nash equilibrium strategy will lose on average.
Cryptographic issues arise if you want to be confident that a distant opponent's choice was not made with knowledge of yours.
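The exploitation idea can be sketched with nothing more than a frequency counter. This is my own toy illustration, not from the answer: against an opponent biased toward rock, simply playing the counter to their most common move so far wins steadily.

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def exploiter(history):
    """Play whatever beats the opponent's most frequent move so far."""
    if not history:
        return random.choice(list(BEATS))
    return BEATS[Counter(history).most_common(1)[0][0]]

random.seed(0)
history, score = [], 0
for _ in range(10000):
    mine = exploiter(history)
    theirs = random.choices(["rock", "paper", "scissors"], weights=[6, 2, 2])[0]
    score += (BEATS[theirs] == mine) - (BEATS[mine] == theirs)
    history.append(theirs)
print(score)  # clearly positive: the counter exploits the 60% rock bias
```

Of course, such a predictor is itself exploitable by anyone who models it, which is exactly the arms race the hidden-Markov approaches mentioned above try to win.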
For what it's worth (i.e. nothing), Rock-Paper-Scissors is a commutative, non-associative magma. – Ketil Tveiten Oct 7 2011 at 8:42
This game, also known as Roshambo, is surprisingly subtle if you try to design an algorithm that will win a round-robin tournament that includes some intentionally weak strategies. Exploiting weak strategies without opening yourself to exploitation is a tricky business. See for example webdocs.cs.ualberta.ca/~darse/rsbpc.html – Timothy Chow Oct 7 2011 at 17:20
My vote is for the game "Clue".
It's a simple game that young children can learn and enjoy. When they first start playing, they use simple elimination. As they progress they can continue to more advanced strategies. They learn to observe what the other players are asking of each other, who's passing and on what guesses.
Clue may not be a game that adults will play on their own, but when it comes to including the little ones, it's fantastic.
For some applications of combinatorial game theory to actual chess endgames, see the article by Elkies at http://arxiv.org/abs/math/9905198. For an article I wrote with Elkies on the mathematical aspects of the knight in chess (but with little significance to the actual game of chess), see http://math.mit.edu/~rstan/papers/knight.pdf.
Backgammon is a game of skill and chance.
The doubling cube emphasizes absolute evaluations as opposed to relative evaluations, although it makes some equities fail to exist, as the relevant series diverge.
Several areas of backgammon, from determining the appropriate doubling strategy to analyzing the race, are well-approximated by random walks with absorbing barriers.
The analysis of backgammon positions and strategies frequently involves Monte Carlo analysis, variance reduction techniques, and statistics.
Backgammon has been a success for artificial intelligence since neural networks have been able to learn to play at or above the level of the best human players from self-play.
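The random-walk approximation mentioned above is the classical gambler's ruin setup, which is easy to check numerically. A small sketch of mine (not from the answer) compares the closed-form absorption probability with a Monte Carlo estimate:

```python
import random

def hit_prob(i, n, p):
    """Chance a walk started at i is absorbed at n rather than 0,
    with up-step probability p (the classical gambler's ruin formula)."""
    if p == 0.5:
        return i / n
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**n)

random.seed(1)
i, n, p, trials = 3, 10, 0.55, 20000
wins = 0
for _ in range(trials):
    x = i
    while 0 < x < n:
        x += 1 if random.random() < p else -1
    wins += x == n
print(wins / trials, hit_prob(i, n, p))  # the two estimates agree closely
```

In a real race the step distribution comes from dice rolls rather than ±1 steps, but the absorbing-barrier picture is the same.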
The game of Mafia, also marketed as Werewolf, depends in practice mostly on how skillful the players are at lying, but there are some fascinating mathematical questions that arise when tries to devise optimal strategies for expert players. Let me describe one such expert strategy to give the flavor. In what follows, I will assume basic knowledge of the (simple) rules, which you can find at the above Wikipedia link.
Suppose there is a detective, who secretly learns someone's identity each night. How can the detective communicate his knowledge without exposing himself to the Mafia? Each day, each townsperson claims to be the detective, and announces the piece of information he learned the previous night. The real detective tells the truth, but the Mafia will usually not be able to distinguish the real detective from all the impersonators. Of course, the townspeople will not know either—until the detective is killed. Then the townspeople, being expert players with excellent memories, will remember everything the detective said before being killed, and will therefore get a windfall of truthful information that they can then exploit to their advantage.
Many questions arise naturally. What is the probability that the townspeople win if they use this strategy? The Mafia have some extra information (they know who they are) and hence if some townsperson makes a false statement while impersonating the detective, the Mafia will detect this and know that that townsperson is not the detective. So perhaps the detective should lie occasionally to counter this strategy? How should the townspeople lie? Should they attempt to give mutually consistent stories or not? As far as I know, these strategic issues have remained largely unexplored.
See also this MO question that announces a mathematical paper on the Mafia game.
Tic-tac-toe and Gomoku (five-in-a-row) are common games that have fairly mathematical rules. Players alternately choose points from some subset of a lattice and try to form a line segment of a certain length.
The Hales–Jewett theorem is a result from Ramsey theory that essentially says that however long the lines must be, a draw is not possible in a sufficiently large dimension.
Gomoku has been solved, constructively. (The first player wins.)
The game of Connect Four adds the additional element of "gravity". It has also been solved. (The first player wins on the standard board size, but not on some boards of slightly different size.)
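That these small games are solved can be checked by brute force. A minimal negamax sketch of mine (not from the answer) confirms the classical fact that tic-tac-toe is a draw under perfect play:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def value(board):
    """Negamax value for the player to move: 1 win, 0 draw, -1 loss."""
    if any(board[a] != "." and board[a] == board[b] == board[c]
           for a, b, c in LINES):
        return -1                     # the previous player just completed a line
    if "." not in board:
        return 0                      # full board, no line: draw
    mover = "X" if board.count("X") == board.count("O") else "O"
    return max(-value(board[:i] + mover + board[i + 1:])
               for i, s in enumerate(board) if s == ".")

print(value("." * 9))  # 0: tic-tac-toe is a draw under perfect play
```

With memoization the whole game tree has only a few thousand distinct positions; Gomoku and Connect Four are solved by the same idea plus vastly more engineering.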
FWIW, it is widely believed by expert players that connect 4 on an 8x8 board is a win for the 2nd player. Some expert players on the internet have played hundreds if not thousands of games over the last year or two and occasionally lose as player 1, but never as player 2. – Kevin Buzzard Feb 2 2010 at 11:35
My goto-source for this kind information is the link homepages.cwi.nl/~tromp/c4/c4.html which has all boards with width+height <= 15. However, en.wikibooks.org/wiki/… claims that 8x8 is known to be a black (second player) win. – aorq Feb 2 2010 at 18:00
Tic-tac-toe can in fact be mapped: xkcd.com/832 – Emilio Pisanty Jun 28 at 11:43
Call a two player deterministic game finite if the game tree has finite depth. Now we can play...
Hypergame: Player one names a finite game $\Gamma$, for which player two will play the first move. Play then proceeds as normal, with the winner of $\Gamma$ winning hypergame.
Question: for the first move, can player one choose $\Gamma = \text{Hypergame}$?
This game is slightly more interesting if player two gets to choose who goes first in Gamma. As it stands, player one can choose to play the game "first player loses", so player two loses. – aorq Feb 1 2010 at 23:56
Answer: No, because Hypergame is not a finite game. There are game trees of arbitrarily large depth. – Chad Groft Apr 8 2010 at 1:16
This is actually the hypergame paradox, a game theoretic version (sort of) of Mirimanoff's paradox on well founded sets. Both answers lead to contradiction. – godelian Oct 8 2011 at 3:04
I didn't see mention of Conway's The Game of Life. Is it a game? Well, Conway calls it a zero-player game! Other people call it a cellular automaton. You decide.
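The zero-player "game" fits in a few lines of code. A sketch of mine (not from the answer) of one generation under the standard B3/S23 rules, storing only the live cells:

```python
from collections import Counter

def step(live):
    """One generation of Life (B3/S23): a cell is alive next turn iff it has
    exactly 3 live neighbours, or 2 and it is currently alive."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # True: the blinker has period 2
```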
Sprouts is a game contrived by J. H. Conway and M. S. Paterson in the 1960s.
It is an impartial game for two players played on a plane with some spots. Each move consists of both
1. joining two spots (could be the same spot) with a simple curve which does not go through existing spots or curves, such that the degree of each spot after the move does not exceed 3; and
2. placing a new spot on that curve.
Whoever makes the last move is the winner under the normal play convention, or the loser under misère play.
This game is of a topological nature, but there are only finitely many inequivalent options at each move, and the game always terminates after finitely many moves (in fact, the number of moves is bounded linearly in the number of initial spots), making it a combinatorial game.
It enjoys some popularity, as reflected by the existence of a world association (the WGOSA, World Game of Sprouts Association).
There are rich graph-theoretic results concerning this game, for example see this page in NRICH and this section in Winning Ways. Experienced players make use of these results to set up goals.
Here is a website dedicated to the determination of the theoretical winners. A pattern with period 6 emerged under both play conventions. The researchers have published several papers and even considered Sprouts on general surfaces ("compact" is not essential, I think), and proved that the theoretical winner of the Sprouts game with a fixed number of spots on different compact surfaces is ultimately periodic in genus, with period 1/period 2 in the case of orientable/non-orientable surfaces.
Oh, I had completely forgotten about this game. It was a great distraction from classes once upon a time. – Ed Dean Oct 8 2011 at 2:30
StarCraft, a very popular RTS, which is taught at Berkeley. http://kotaku.com/5141355/competitive-starcraft-gets-uc-berkeley-class
where "Calculus and Differential Equations are highly recommended for full understanding of the course.".
More puzzles than games, but many of the number puzzles published by Nikoli are quite mathematical in nature. They tend to involve an interplay between local and global conditions that have to be satisfied simultaneously, and one can glimpse geometric and graph-theoretic properties lurking.
The multi-player game Carcassonne has many of the same aspects, especially the issue of farms separated by roads, which sort of brings in the Jordan curve theorem and a lot of interesting parity issues.
Baseball fits the criteria of math underlying the game's structure, its optimal and practical strategies, and the analysis of results and performance. It certainly fits the criterion of popularity.
In bridge, missing QJxx in a suit, if the Q or J drops on the first round, it is better to finesse if possible on the second round if nothing else is known about the distribution. This is obvious to a mathematician, but the simple conditional probability is so difficult for the average person that bridge teachers have incorporated the principle into the qualitative "Rule of Restricted Choice", which says that if an opponent plays a card that can be from equals (such as the "quack" from QJ), it increases the probability that the other opponent has the second equal card.
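The conditional probability behind restricted choice can be checked with a quick Monte Carlo. The model below is my own simplification (not from the answer), ignoring the small cards and vacant spaces: the Q and J are each equally likely to sit with either opponent, and an opponent holding both honours drops one uniformly at random.

```python
import random

random.seed(2)
finesse_wins = trials = 0
while trials < 100000:
    # Toy model (an assumption): Q and J independently with East or West.
    q_east = random.random() < 0.5
    j_east = random.random() < 0.5
    if not q_east:
        continue                     # only look at deals where East has the Q
    if j_east and random.random() < 0.5:
        continue                     # East held QJ but dropped the J instead
    trials += 1
    finesse_wins += not j_east       # the second-round finesse needs J with West
print(finesse_wins / trials)  # ≈ 2/3, the restricted-choice answer
```

The same conditioning argument is the Monty Hall problem in a different costume, which is perhaps why the principle confuses so many players.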
In mathematics we often prove uniqueness before existence. The one thing I find appealing about Sudoku is that knowing a solution is unique can help in finding the solution.
Bulls and cows and its modern variant Mastermind, for which Don Knuth demonstrated that the codebreaker can win in at most five moves. Playing this game with pencil and paper (in a way where both players are codemakers and codebreakers) can be very fun.
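The game mechanics are compact enough to sketch. Below is my own illustration (not from the answer): the feedback function and the simple "guess randomly among codes consistent with all replies so far" strategy, which as noted in the comments averages roughly 4.7 guesses on the standard 6-colour, 4-peg game.

```python
import itertools
import random
from collections import Counter

COLORS, PEGS = 6, 4

def feedback(guess, code):
    """Mastermind feedback: (bulls, cows) = exact matches, colour-only matches."""
    bulls = sum(g == c for g, c in zip(guess, code))
    common = sum((Counter(guess) & Counter(code)).values())
    return bulls, common - bulls

def solve(code, rng):
    """Guess uniformly among codes still consistent with every reply so far."""
    candidates = list(itertools.product(range(COLORS), repeat=PEGS))
    guesses = 0
    while True:
        guess = rng.choice(candidates)
        guesses += 1
        reply = feedback(guess, code)
        if reply == (PEGS, 0):
            return guesses
        candidates = [c for c in candidates if feedback(guess, c) == reply]

print(solve((1, 2, 3, 4), random.Random(3)))
```

Knuth's five-guess result needs a cleverer (minimax) guess selection than the random choice here, but the consistency filtering is the same.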
Thanks, I had forgotten Mastermind, even though I worked on it as an undergraduate. There is a strategy known which takes the minimal expected number of guesses under the assumption that the code is chosen uniformly. That doesn't solve the $2$-player game but it is great progress. The strategy of guessing randomly among the codes consistent with the past information is only off by a fraction of a guess on average, IIRC about $4.7$ instead of $4.4$. I believe I read these results in the Journal of Recreational Mathematics. – Douglas Zare Jun 7 2011 at 17:41
The game of Cootie, where players roll dice to collect parts of an insect (cootie), is a variant of the coupon collector's problem.
Instead of collecting a single instance of each coupon, players must collect multiple copies (6 legs, 2 eyes, 1 head, etc.) to win. It turns out you can compute the expected number of rolls to win at Cootie (even with a weighted die) with a finite sum.
In particular, if you have a set $L$ of objects to collect and for each object `$\ell\in L$` you need $q_\ell$ copies and the probability of getting the object is $p_\ell$, then the expected number of rolls to get all of the needed objects is
`$\displaystyle\sum_{\ell\in L} \frac{{p_\ell}^{q_\ell}}{(q_\ell -1)!}\int_0^\infty x^{q_\ell}\exp(-x) \prod_{k\in L-\{\ell\}}(\exp(p_k x)-\exp_{<q_k}(p_k x))dx.$`
If you're interested, check out my paper for the full computation.
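The formula can be sanity-checked by simulation. Here is a hedged Monte Carlo sketch (my own code, not from the paper): it estimates the expected number of rolls needed to collect needs[i] copies of face i when face i appears with probability probs[i]. With three equally likely faces and one copy of each required, the classical coupon-collector value is 3(1 + 1/2 + 1/3) = 5.5.

```python
import random

def expected_rolls_mc(probs, needs, trials=4000, seed=1):
    """Monte Carlo estimate of the expected rolls to finish the collection."""
    rng = random.Random(seed)
    faces = range(len(probs))
    total = 0
    for _ in range(trials):
        have = [0] * len(probs)
        while any(h < q for h, q in zip(have, needs)):
            f = rng.choices(faces, weights=probs)[0]  # one weighted roll
            have[f] += 1
            total += 1
    return total / trials
```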
Well, most card games have mathematical implications, of course.
I'm disappointed at your considering chess non-mathematical. Wonder what Noam Elkies would think. :-)
When I was a teenager I played a board game named here "Risiko!" a lot (I believe that the English name is "Risk"). My impression then was that there were some mathematical aspects that could be considered while planning a strategy.
(added later)
Also Hex should be added to the list of mathematically interesting games.
We don't play risk anymore at my house because it always ends in either a verbal argument or a physical confrontation. =\. – Harry Gindi Feb 1 2010 at 11:37
http://mathoverflow.net/questions/21562/what-are-some-mathematical-concepts-that-were-pretty-much-created-from-scratch/26089
## What are some mathematical concepts that were (pretty much) created from scratch and do not owe a debt to previous work?
Almost any mathematical concept has antecedents; it builds on, or is related to, previously known concepts. But are there concepts that owe little or nothing to previous work?
The only example I know is Cantor's theory of sets. Nothing like his concrete manipulations of actual infinite objects had been done before.
I think most answers to this question will reveal more about our ignorance of previous developments than about revolutionary discoveries. – S. Carnahan♦ Apr 16 2010 at 22:10
@Scott: Do you hate all of my questions? – teil Apr 21 2010 at 5:21
This seems to be a question about the history of mathematics (a subtle beast, if one is interested in going beyond "Whig history" narratives) - but it is not a question about opinion or argument, in my view: and MO seems a good place for people who know some of these details well to contribute. (See Franz Lemmermeyer's comment below regarding Koenigsberg and Euler.) It's certainly a damn sight better, in my opinion, than the Why Has No One Categorified Rice Pudding? question. – Yemon Choi Apr 25 2010 at 0:41
@Yemon: Thanks for resurrecting my questions. – teil Apr 25 2010 at 6:09
To the original poster: I don't intend to make a habit of it ;) but in a couple of cases I felt that the questions were fine, or at least had attracted worthwhile answers. Anyway, I only have the one vote to re-open; what others do is their choice! – Yemon Choi Apr 25 2010 at 9:26
## 17 Answers
Shannon's work on Information theory. Maybe the math wasn't new but the ideas (such as positing a quantitative metric of information and identifying its relevance to the design of communication systems) definitely were.
I don't think it's fair to characterize Shannon's information theory as completely new. In fact, the idea of a quantitative metric of information was quite well known under the name of entropy in physics (statistical mechanics was developed in the 19th and the first half of the 20th century). Shannon introduced that concept into math and formalized it; but he didn't pretend to invent the idea, and even used the same name! – Ilya Nikokoshev Apr 25 2010 at 6:01
There is another example in Set Theory, which is Paul Cohen's forcing. Of course, forcing had some ties with earlier work, but the bulk of it was completely new.
Hermann Weyl wrote in a 1939 article on invariant theory: "The Theory of Invariants came into existence about the middle of the nineteenth century somewhat like Minerva: a grown-up virgin, mailed in the shining armor of algebra, she sprang forth from Cayley's Jovian head. Her Athens over which she ruled and which she served as a tutelary and beneficent goddess was projective geometry."
People just don't write like that any more. – Michael Lugo Apr 17 2010 at 10:49
they don't, but they should – Sean Tilson Apr 17 2010 at 23:27
But then referees and editors often encourage removal of such passages. – Richard Kent Apr 18 2010 at 18:19
I wonder if Euler deducing the infinitude of primes from the divergence of the harmonic series or Riemann's work on the Riemann zeta function would be suitable examples?
I agree. More specifically, credit to Euler for the zeta series and its product formula, and to Riemann for viewing it as a function of a complex variable. – John Stillwell Apr 18 2010 at 23:01
The solution of the cubic equation by Scipione del Ferro and Tartaglia in the early 16th century. This was not only a great advance in algebra, but it also forced mathematicians to confront complex numbers.
The idea that the cubic equation can be "solved" surely owes a debt to the notion that the quadratic equation can be solved... – Qiaochu Yuan Apr 18 2010 at 17:38
True, but since it took thousands of years to get beyond the solution of the quadratic, I think that something extra was involved. – John Stillwell Apr 18 2010 at 22:56
Feynman mentions this example in one of his books (I think What Do You Care What Other People Think?!) as an important realization to people living at the time, that they could do something that the ancient Greeks could not. – Todd Trimble Jun 12 2011 at 12:23
This happened hundreds of times in physics throughout the twentieth century, because physicists were specifically trained to do mathematics from scratch. The main reason is that it was too time consuming in pre-internet times to learn the specialized jargon of each subfield, so it was easier just to rederive the stuff.
The most significant early success of this sort of willful ignorance is probably the development of special relativity from essentially nothing. The Minkowski geometry of relativity is remarkable, because if you interpret the words "point" and "line" as usual, and the word "circle" as a unit hyperbola with 45-degree angle asymptotes (the unit circle of relativity), it satisfies all the explicit axioms of Euclid's geometry, as set out in the Elements, including the axiom of parallels, but is not Euclidean. The essential difference is that circles are not closed curves, so that certain implicit betweenness properties fail. There are distinct points which are at a zero "distance" from one another, the hypotenuse of a right triangle is always shorter than one of the sides, etc. This is amazing to me, because of the number of people who had considered models of geometry before Einstein (including all the heavy focus on non-Euclidean geometry for the previous century). All the bigwigs missed Minkowski geometry.
Aside from Einstein's work, there are the following mathematical developments from physics, all of which came out of nowhere mathematically:
• Quantum mechanics, in particular, the theory of the canonical commutation relation [x,p]=i and its relationship with wave operators and random walks.
• Dirac's distribution theory (delta-functions): this completed the notion of eigenvalue of a linear operator to include eigenvalues and eigenfunctions for the x operator in quantum mechanics.
• Majorana spinors--- these were due to the discovery of the Dirac equation. The representation theory of SO(p,q) is now entirely dependent on Dirac matrices and the Majorana and Weyl conditions.
• Wigner's random matrix theory. This was completely ab-initio, and is now very active mathematics.
• Anderson localization: this is also a mathematical surprise--- the eigenfunctions of randomized potentials are localized in space. The full resulting theory has still not been made part of mathematics, but Anderson's paper is an ab-initio (although not rigorous) argument.
• Metropolis algorithm--- this essentially inaugurated Monte Carlo methods, and I do not know any previous work it builds on.
• Feynman's path integral--- this was developed within mathematics as the Wiener integral at about the same time, but the physics work is completely ab-initio. Needless to say, the results are not going into mathematics easily (in my opinion, this is mostly due to the reluctance of mathematicians to make every subset of R measurable).
• Candlin's fermionic path integral (Berezin integrals)--- Candlin in 1956 develops the whole theory of path integrals for fermionic fields from scratch in a Nuovo Cimento article with next to no citations (in either direction). The theory was ignored for a decade for no apparent reason.
• Mandelstam's double dispersion relations (and dispersion relations in general).
• Kraichnan's inverse cascade--- generally the statistical theory of nonlinear classical equations is developed from scratch by Kraichnan and others. The biggest shocker is the inverse cascade--- in two dimensions, eddies go up from small scales to big scales.
• Zimmermann's forest formula--- this is now part of mathematics, due to Kreimer and Connes, but Zimmermann did it from scratch in physics.
• The theory of second order phase transitions and modern renormalization by Widom/Wilson.
• Wilson's theory of operator product expansions, (which is not a part of mathematics yet)
• Supersymmetry is developed from scratch by several groups with no previous motivation in mathematics (not much in physics). The original germ of an idea is in Golfand and Likhtman, but the person who does most of the early theory work is Pierre Ramond. Wess and Zumino's work also comes out of nowhere.
• Virasoro algebra/Kac-Moody algebra--- the Virasoro algebra is the theory of infinitesimal conformal maps under composition, so it should have been classical mathematics, but as far as I know, it wasn't. The theory started (as far as I know) with the study of string theory in the early 1970s.
• Mirror symmetry--- this owes to previous work in T-duality in string theory, not in mathematics.
• Witten's global anomalies--- these are not yet part of rigorous mathematics, but they are ab-initio, and were a complete surprise.
I got tired, but there are hundreds, maybe thousands of examples, because all the results in the physics literature were generally ab-initio. It is a standard practice for some mathematicians to scan the physics literature for original ideas and incorporate them into mathematics.
The "owe little or nothing to previous work" part of the question seems to disqualify most answers from physics, since such concepts often build on earlier work on physical problems. For example, Einstein's formulation of special relativity owes a big debt to Maxwell's work on electromagnetism (Minkowski's discovery of spacetime geometry was inspired by Einstein's paper), and Dirac's distributions are derived from Heaviside's. The history of the Virasoro algebra dates to 1909 and is covered in brief in the Wikipedia page. – S. Carnahan♦ Aug 1 2011 at 7:43
Fair enough--- but the usual way mathematics is done is by quoting and using previously proven theorems, and the mathematical work of the physicists generally does not quote previous theorems, but instead constructs the objects in question from scratch. So I think it is in the spirit of the question. The Dirac and Virasoro examples might be inappropriate, I don't know the history of the things very well. – Ron Maimon Aug 2 2011 at 17:42
The existence of irrational numbers.
Can you qualify that? That $\sqrt{2}$ is irrational has been known for a long time, and that it 'exists' is 'clear' from simple geometrical constructions. I am not saying you are wrong, but I really think your answer needs expanded! – Jacques Carette Apr 16 2010 at 14:49
It may have been known for a long time, but somebody had to discover it! Perhaps Hippasus of Metapontum, about 2500 years ago. It must have been as unexpected as Cantor's infinities. – TonyK Apr 16 2010 at 16:26
This is an intriguing question. I have some suggestions but I am not sure about them.
1) Frege's work on logic. (Logic had stagnated for many centuries before.)
2) Conway's surreal numbers.
3) Game theory (e.g. zero sum games).
I strongly disagree with the assessment of Frege; plenty of others helped pave the way, including for example Boole. I am a little skeptical of surreal numbers as well. – Todd Trimble Jun 12 2011 at 12:27
@Todd: have you read Boole's actual work on logic, and compared it to what Frege wrote? [I have recently read a number of papers by both.] They are really quite different. Frege's work is infused with a lot of philosophy and deep 'foundational' thinking about all of mathematics. Boole's work is fantastic, but in a different direction. – Jacques Carette Jun 13 2011 at 2:33
What I had in mind when I wrote that was that Boole and others paved the way for the realization that logic could be mathematicized. My understanding is that Boole's work shows how propositional logic can be represented in symbolic, algebraic form. Subsequently, others like E. Schroeder and C.S. Peirce had pushed the algebraization of relational calculus quite far (including of course relational composition, closely tied to quantification). Frege in fact knew of this work but was somewhat dismissive. Anyway, pursuit of the analogies between algebra and FOL was quite vigorous before Frege. – Todd Trimble Jul 31 2011 at 0:04
By the way, there is some interesting commentary on the matter here (see especially the quotation of Hilary Putnam): en.wikipedia.org/wiki/… – Todd Trimble Jul 31 2011 at 0:54
Although his work was certainly related to earlier fields, I believe that Ramanujan (pretty much) built up a lot of his work from scratch.
I am not sure what exactly you mean by this. In the early years, Ramanujan discovered lots of interesting and important formulas, and later proved them and some theorems, but originally these were not "concepts". Later on in his life he did introduce some concepts (notably en.wikipedia.org/wiki/Mock_theta_function ) but they were clearly related to some earlier work. – Igor Pak Apr 16 2010 at 17:58
Fair point Igor. I was simply emphasizing Ramanujan's isolationist nature. Since he was almost completely unaware of earlier work, one could reasonably say that he could not owe a debt to it. But you are right that he did not introduce completely novel concepts. – Tony Huynh Apr 16 2010 at 19:17
The idea that Ramanujan came up with math from nowhere is an urban legend: It is a fun idea, so it is passed on without being checked. Some urban legends are true, and some are not. I'd like to see references. – Douglas Zare Apr 16 2010 at 22:52
I suppose that the mathematics professor from Good Will Hunting doesn't count as a legitimate reference? – Tony Huynh Apr 17 2010 at 1:08
It's difficult to be certain with Ramanujan - most of his methods are completely unknown. – teil Apr 18 2010 at 13:08
Graph theory is an example that comes to mind, via the problem of the seven bridges of Königsberg : http://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg .
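In modern graph language, Euler's argument reduces to a degree count: a connected multigraph admits a walk using every edge exactly once if and only if at most two vertices have odd degree. A short sketch (the adjacency encoding and land-mass labels are my own):

```python
def has_euler_trail(adj):
    """adj maps vertex -> {neighbor: number_of_parallel_edges}.
    Assumes the multigraph is connected."""
    odd = sum(1 for v in adj if sum(adj[v].values()) % 2 == 1)
    return odd in (0, 2)

# The four Koenigsberg land masses joined by seven bridges:
# degrees are 3, 3, 5, 3, so four odd vertices and hence no trail.
koenigsberg = {
    "north": {"island": 2, "east": 1},
    "south": {"island": 2, "east": 1},
    "island": {"north": 2, "south": 2, "east": 1},
    "east": {"north": 1, "south": 1, "island": 1},
}
```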
There were no graphs in Euler's solution; the translation of Euler's idea into graph theory came later. See "The truth about Koenigsberg" in "Leonhard Euler. Life, work and legacy" (Bradley and Sandifer, eds.). – Franz Lemmermeyer Apr 16 2010 at 13:47
Écalle's work on resummation and resurgent functions. While there is a bit of work that pre-dates him, the vast bulk of his theory is really novel and built 'from scratch'. This is especially clear to anyone who has ever tried to read the Orsay preprints of his original manuscripts on resurgent functions! [The only notation more spectacular than his was Frege's]
Some of Écalle's writings are available at math.u-psud.fr/~biblio/numerisation – Chandan Singh Dalawat Apr 19 2010 at 9:31
Stallings's bipolar structures created to prove that groups of cohomological dimension 1 are free.
(Stallings might not have agreed with my nomination, but his statement at the end of the paper that his techniques are a result of "meditating on the proof of the Sphere Theorem" somehow makes his work even more remarkable to me.)
It seems like Dirichlet's Theorem on Primes in Arithmetic Progressions came out of nowhere, or at least his methods of proof. While the complex analysis may not have been new, his application of it, through the Dirichlet characters and the series he made from them, to number theory was pretty novel.
The Analytic Geometry of Rene Descartes.
Pcf theory/cardinal arithmetic. Well, it's not exactly built from scratch, but there are plenty of nice results which do not use any sophisticated metamathematical machinery (such as forcing, inner models, etc).
Edit: I've deleted part of my answer due to a little misunderstanding.
Point-set topology owes a great deal to whatever was known about metric spaces at the time. I don't think you can reasonably claim that the concept doesn't owe a debt to previous work. – Qiaochu Yuan Apr 18 2010 at 17:37
Of course, you're right. It seems that I misread the original question. I've now deleted the bad part. – Haim Apr 18 2010 at 17:46
A big chunk of Shelah's work, in general, seems to have come out of nowhere! – David FernandezBreton Apr 5 2012 at 6:29
Random graphs, started by Paul Erdős and Alfréd Rényi.
Laplace transforms, as initiated by Euler.
http://mathhelpforum.com/calculus/165286-find-volume-triple-integral-print.html
# Find the Volume, Triple Integral
• December 4th 2010, 03:17 PM
jegues
1 Attachment(s)
Find the Volume, Triple Integral
Find the volume in the first octant bounded by the surfaces,
$4x+4y+z = 16, z=0, y=\frac{x}{2},y=2x$
See figure attached for my attempt.
Not sure where I went wrong on this one. I think I got a good sketch of the actual volume; it looks like a very thin slice of cheese of some sort.
Is the mistake in the way I set up my integral? Or in evaluating the integral itself? Or both?
Thanks again!
• December 4th 2010, 05:20 PM
Ackbeet
You have a very good sketch of the volume. Your problem is in setting up the integral. Think about the region in the xy plane that is one bound on your solid. It's bounded by y = x/2, y = 2x, and by the intersection of the plane 4x + 4y + z = 16 with the xy plane. What does this region look like?
• December 5th 2010, 07:42 AM
jegues
1 Attachment(s)
Quote:
Originally Posted by Ackbeet
You have a very good sketch of the volume. Your problem is in setting up the integral. Think about the region in the xy plane that is one bound on your solid. It's bounded by y = x/2, y = 2x, and by the intersection of the plane 4x + 4y + z = 16 with the xy plane. What does this region look like?
I can picture the region in the xy plane in my mind and I'm thinking I'm gonna switch my order of integration. Finding the area in the xy plane would require me to use 2 separate double integrals, and that's not what I want.
If I project my solid back into the yz plane I'd have a simple triangle, formed by z=0, y=0 and 4y + z = 16. This would only require 1 double integral to compute the area. Then I take this area and multiply it by my first integration to get the volume.
Edit: Hmmm... Integrating in the x direction first gives me a simple region in the yz plane; however, the first integration is ugly.
I'll try it your way and Integrate in the z direction first and I'll look at my region and see how things go. I'll post my results.
Edit: Okay I've got my 2nd attempt posted. The answer is listed as 128/9, so it seems I just missed a negative sign somewhere. Can anyone spot it? Actually, I found it. It was on the 2nd last line in the term farthest to the right. It should be (-2/9 * 8)
• December 6th 2010, 04:50 AM
Ackbeet
So you've solved it, then?
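For anyone who wants to double-check the value 128/9 numerically, here is a crude midpoint Riemann sum (my own sketch, not part of the thread's working). It integrates z = 16 - 4x - 4y, clipped at zero, over the wedge x/2 <= y <= 2x in the first quadrant.

```python
def wedge_volume(n=400):
    """Midpoint sum for V = integral of max(0, 16-4x-4y) over x/2 <= y <= 2x."""
    xmax = 8.0 / 3.0        # beyond this the plane lies below z = 0 on the wedge
    dx = xmax / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        ylo, yhi = x / 2.0, 2.0 * x
        dy = (yhi - ylo) / n
        for j in range(n):
            y = ylo + (j + 0.5) * dy
            z = 16.0 - 4.0 * x - 4.0 * y
            if z > 0.0:     # clipping enforces the z = 0 bottom face
                total += z * dy * dx
    return total
```

This converges to 128/9, roughly 14.222, in agreement with the listed answer.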
http://mathoverflow.net/questions/76345/upper-and-lower-bounds-of-the-total-surface-area-of-convex-polytopes-that-partiti
## Upper and Lower Bounds of the Total Surface Area of Convex Polytopes that Partition a Hypercube
Let $C$ be a hypercube in $\mathbb{R}^D$ with edge length of $L$. Let $\mathcal{P}_1,\ldots,\mathcal{P}_K$ be $K$ convex polytopes that partition $C$. Let $S_k$ be the surface area of the polytope $\mathcal{P}_k$, where $k\in \{1,\ldots, K \}$. Define $S=\sum_{k=1}^K S_k$, what is the upper bound and lower bound of $S$? In particular, I'm interested in the bounds which can be expressed as function of $K,D$ and $L$.
1. In case it is not clear, the surface area of a convex polytope $\mathcal{P}$ is the sum of $(D-1)$-dimensional Lebesgue measure of the facets of $\mathcal{P}$.
2. There may be some nice results when $K\rightarrow\infty$ if one reformulates this problem as a tessellation induced by some random process. However, I'm interested in small $K$, say $1< K<50$.
3. I guess this problem has been solved, but I'm struggling to find good literature.
The lower bound is the surface area of the cube. (This is the trivial part.) Denote by $\bar S_K$ the sharp upper bound for $K$ pieces. Let $A$ be the area of the cube and $B$ be the maximal area of the intersection of the cube with a hyperplane. Then $\bar S_1=A$, $\bar S_2=A+2{\cdot}B$ and I guess that $$\bar S_K=A+2{\cdot}(K-1){\cdot}B$$ for all $K\ge 3$ – Anton Petrunin Sep 25 2011 at 17:14
@Anton: Surely you are right. One can draw many parallel sections of the cube close to the one providing the maximal section area. This area should be known. – Ilya Bogdanov Sep 26 2011 at 5:21
Thanks Anton Petrunin and Ilya Bogdanov for the nice answer and explanation. To complete the answer, the maximal area of the intersection of a $D$-cube with a $(D-1)$-dimensional hyperplane is $\sqrt{2}L^{D-1}$, where $L$ is the edge length of the hypercube. jstor.org/stable/2046239 – han Sep 26 2011 at 15:18
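Putting the comments together as a formula (hedged: the upper bound is Anton Petrunin's guessed form above, and the value of B uses the maximal-section result cited in the last comment):

```python
import math

def surface_bounds(K, D, L):
    """(lower, upper) bounds on the total surface area S for K convex
    pieces partitioning a D-cube of edge L, per the comment thread."""
    A = 2 * D * L ** (D - 1)         # surface area of the cube (lower bound)
    B = math.sqrt(2) * L ** (D - 1)  # maximal hyperplane section of the cube
    return A, A + 2 * (K - 1) * B
```

For K = 1 the two bounds coincide at the cube's own surface area, as they should.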
http://mathhelpforum.com/advanced-algebra/137450-diagonalizable-matrix.html
1. Diagonalizable matrix
Are the following matrices diagonalizable:
I think neither are diagonalizable because they don't have at least two linearly independent eigenvectors. Am I correct?
2. Originally Posted by temaire
Are the following matrices diagonalizable:
I think neither are diagonalizable because they don't have at least two linearly independent eigenvectors. Am I correct?
To diagonalize a matrix, I would first find the eigenvectors. The three eigenvectors $v_1, v_2, v_3$ will form the three columns ($c_1, c_2, c_3$) of the matrix $P$.
If $P$ is invertible, then the matrix given by $P^{-1} A P = D$ is the diagonal matrix.
3. Originally Posted by harish21
To diagonalize a matrix, I would first find the eigenvectors. The three eigenvectors $v_1, v_2, v_3$ will form the three columns ($c_1, c_2, c_3$) of the matrix $P$.
If $P$ is invertible, then the matrix given by $P^{-1} A P = D$ is the diagonal matrix.
I have found the eigenvectors for both matrices. However, I only found one eigenvector for each matrix, which means that the two matrices do not have at least two linearly independent eigenvectors, which means that they are not diagonalizable. Is this correct?
4. Originally Posted by temaire
I have found the eigenvectors for both matrices. However, I only found one eigenvector for each matrix, which means that the two matrices do not have at least two linearly independent eigenvectors, which means that they are not diagonalizable. Is this correct?
An $n \times n$ matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
So your matrix is not diagonalizable!
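Since the matrices in the original post were images and are lost here, a generic illustration may help (my own sketch; the tolerance test is a numerical convenience). For a 2x2 real matrix, diagonalizability fails exactly when the eigenvalue is repeated but A - lambda*I is not the zero matrix:

```python
def is_diagonalizable_2x2(a, b, c, d, tol=1e-9):
    """Decide diagonalizability (over C) of [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if abs(disc) > tol:
        return True                 # two distinct eigenvalues
    lam = tr / 2.0                  # repeated eigenvalue
    # diagonalizable iff the eigenspace is 2-dimensional,
    # i.e. A - lam*I is the zero matrix
    return all(abs(v) < tol for v in (a - lam, b, c, d - lam))
```

For example, the shear [[1, 1], [0, 1]] has only the single independent eigenvector (1, 0), so it is not diagonalizable, while any matrix with distinct eigenvalues is.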
http://dwaynecrooks.wordpress.com/tag/project-euler-2/
# Incongruent Thoughts
## Project Euler – Problem 145
The problem: Problem 145
### Analysis
Surprisingly enough, the brute-force solution works. It takes a couple of minutes but it gets the job done. Suppose we have a function called is_reversible; then the following algorithm can be used to count all the reversible numbers below one billion.
```count = 0
for i = 1 upto 1000000000 do
if (is_reversible(i)) then count++
endfor
output count
```
### Implementation
Uses the OCaml programming language.
```(* a left fold over the digits of a positive integer *)
let rec nfoldl f v = function
0 -> v
| n -> nfoldl f (f v (n mod 10)) (n / 10)
let reverse = nfoldl (fun n d -> (n * 10) + d) 0
let odd n = n mod 2 = 1
let allodd = nfoldl (fun b d -> b && odd d) true
let is_reversible n =
n mod 10 <> 0 && allodd (n + reverse n)
let naive_count n =
let cnt = ref 0 in
for i = 1 to n-1 do
if is_reversible i then incr cnt;
done;
!cnt
```
The above code is elegant and it solves the problem, but I'm not satisfied with its speed, so I decided to improve the algorithm a bit.
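Before optimizing, a quick correctness cross-check of the same brute force in yet another language (my own Python translation; the problem statement's count of 120 reversible numbers below one thousand makes a convenient unit test):

```python
def reverse_int(n):
    """Reverse the decimal digits of a positive integer."""
    m = 0
    while n:
        m = m * 10 + n % 10
        n //= 10
    return m

def is_reversible(n):
    if n % 10 == 0:          # the reverse would have a leading zero
        return False
    s = n + reverse_int(n)
    while s:                 # every digit of the sum must be odd
        if s % 2 == 0:       # parity of s equals parity of its last digit
            return False
        s //= 10
    return True

def count_below(limit):
    return sum(is_reversible(i) for i in range(1, limit))
```

count_below(1000) gives 120, matching the figure quoted in the problem statement.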
### More Analysis
Notice that since the sum of a number, $n$, and its reverse, $\mathrm{reverse}(n)$, must yield a number with all digits odd, $n$ cannot begin and end with digits of the same parity (i.e. both even or both odd). Hence, a likely candidate must lie in one of the sets $A = \{n : n \,\text{begins with}\, 2, 4, 6, 8 \,\text{and}\, n \,\text{ends with}\, 1, 3, 5, 7, 9\}$ or $B = \{n : n \,\text{begins with}\, 1, 3, 5, 7, 9 \,\text{and}\, n \,\text{ends with}\, 2, 4, 6, 8\}$. Now, counting nine-digit numbers, $|A| = |B| = (4*5)*10^7 = 200000000$, i.e. 200 million. Furthermore, if a number in $A$ is reversible then its reverse is in $B$ and is obviously reversible as well. This means we only need to generate the elements of one of the sets and count those that are reversible. Our final answer must then be multiplied by $2$ to account for the other set.
### A Better Implementation
Uses the Haskell programming language. I switched languages when I realised that the list code I needed for this to work was not implemented using tail recursion in the OCaml standard library. I did rewrite those pieces using tail recursion, but in the end I still decided to switch languages just for the fun of it. Actually, the Haskell version below isn't much better. It's still slow. But very elegant :D. At least I can use it for testing correctness on small cases since the solution is so simple and easy to read.
```nfoldl _ v 0 = v
nfoldl f v n = nfoldl f (f v (n `mod` 10)) (n `div` 10)
nreverse = nfoldl (\n d -> (n * 10) + d) 0
allodd = nfoldl (\b d -> b && odd d) True
isReversible n = n `mod` 10 /= 0 && allodd (n + nreverse n)
countDigits = (+1) . floor . logBase 10 . fromIntegral
genCandidates n =
let gen m i len = if i > len then
if m >= n then [] else [m]
else
if i == 1 then
concatMap (\d -> gen (m*10+d) (i+1) len) [1,3,5,7,9]
else if i == len then
concatMap (\d -> gen (m*10+d) (i+1) len) [2,4,6,8]
else
concatMap (\d -> gen (m*10+d) (i+1) len) [0..9]
in concatMap (gen 0 1) [1..countDigits n]
betterCount = (*2) . length . (filter isReversible) . genCandidates
```
Finally, I bit my tongue and decided to code up a solution in everyone's favorite systems programming language, C.
```c
#include <stdio.h>
#include <math.h>

#define FALSE 0
#define TRUE 1
#define even(n) ((n)%2==0)

/* reverse the decimal digits of n */
int reverse(int n) {
    int m = 0;
    while (n > 0) { m = m*10 + (n%10); n /= 10; }
    return m;
}

/* TRUE iff every decimal digit of n is odd */
int allodd(int n) {
    while (n > 0) {
        if (even(n%10)) return FALSE;
        n /= 10;
    }
    return TRUE;
}

#define is_reversible(n) ((n)%10>0 && allodd((n) + reverse(n)))
#define count_digits(n) ((int)(floor(log10(n) + 1)))

int __count;

/* enumerate len-digit candidates with an odd leading digit and an even
   trailing digit; each reversible candidate counts for itself and its
   reversal, hence += 2 */
void __better_count(int m, int i, int len, int n) {
    int j;
    if (i > len) {
        if (m < n && is_reversible(m)) __count += 2;
    } else {
        if (i == 1) {
            for (j = 1; j <= 9; j+=2) __better_count(m*10+j, i+1, len, n);
        } else if (i == len) {
            for (j = 2; j <= 8; j+=2) __better_count(m*10+j, i+1, len, n);
        } else {
            for (j = 0; j <= 9; j++) __better_count(m*10+j, i+1, len, n);
        }
    }
}

int better_count(int n) {
    int len;
    __count = 0;
    for (len = 2; len <= count_digits(n); len++) __better_count(0, 1, len, n);
    return __count;
}

int main(void) {
    printf("%d\n", better_count(1000000000));
    return 0;
}
```
In the end, the problem was easy and I had a lot of fun playing around with the various languages. I learned quite a lot, so it certainly wasn’t a waste of time.
Tagged c, haskell, ocaml, p145, programming, project euler
## Project Euler – Trinidad and Tobago Standings
Below is an image of the current standings in projecteuler.net as of 21/11/2010 for Trinidad and Tobago.
As you can see I’m 2nd in the standings. Damn you wallygold. Lol. I need to solve 10 more problems to reach Level 3. Hopefully, I will have time to do it before the year ends.
A note about the languages I use. Currently I’m using racket (a scheme based language) to solve the problems. However, in the past I’ve solved some of the problems using C, Java, Python, Haskell, Scala, LISP and of course the trusty old pencil and paper. I guess I tend to utilize whatever language I am currently into.
Tagged project euler
## Project Euler – Problem 71
The problem: Problem 71
### Analysis
Let $d$ be some integer larger than $1$, $F(d) = \{ \frac{a}{b} : 1 \leq a < b \leq d\}$ and $F_0(d) = \{0\} \cup F(d)$. Let $prev_d$ be a function from $F(d)$ to $F_0(d)$ defined as follows:
$prev_d(t) = s$,
where $s$ is the largest fraction in $F_0(d)$ that is less than $t$. As the example in the problem illustrates, $prev_8(\frac{3}{7}) = \frac{2}{5}$. We need to find $prev_{1000000}(\frac{3}{7}).$
Let $t \in F(d)$ and $n$ be an integer between $1$ and $d$ inclusive. We now define $p_d(t,n)$ as follows:
$p_d(t,n) = s$,
where $s$ is the largest fraction with a denominator of $n$ in $F_0(d)$ that is less than $t$.
Clearly then,
$prev_d(t) = max_{2 \leq n \leq d}\{ p_d(t,n) \}$.
Thus it suffices to show how to compute $p_d(t,n)$. The implementation which will be given below uses a modified binary search to compute $p_d(t,n)$. The running time is $O(lg\,n)$. Therefore, it takes
$lg\,2 + lg\,3 + \cdots + lg\,d = lg\,d! = O(d \cdot lg\,d)$
time to compute $prev_d(t)$ for any $t$.
### Implementation of $p_d(t,n)$
Uses the racket programming language.
```;; finds the largest fraction with a denominator of n
;; that is less than t
(define (p t n) ;; assumes n <= d
(let ([a (numerator t)]
[b (denominator t)])
(/ (let loop ([l 0] [h (- n 1)])
(if (> l h)
0
(let ([m (quotient (+ l h) 2)])
(if (< (* m b) (* a n))
(max m (loop (+ m 1) h))
(loop l (- m 1))))))
n)))
```
An example showing how to use p to compute $prev_8(\frac{3}{7})$.
```> (p (/ 3 7) 2)
0
> (p (/ 3 7) 3)
1/3
> (p (/ 3 7) 4)
1/4
> (p (/ 3 7) 5)
2/5
> (p (/ 3 7) 6)
1/3
> (p (/ 3 7) 7)
2/7
> (p (/ 3 7) 8)
3/8
```
Since $\frac{2}{5}$ is the maximum of the values that were computed, we can conclude that $prev_8(\frac{3}{7}) = \frac{2}{5}$.
Finally, we use the same idea to get $prev_{1000000}(\frac{3}{7})$. The full implementation takes about 7 seconds to compute the answer.
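As a cross-check of this approach, here is a sketch in Python (not part of the original solution; `prev_frac` is a name I made up). For each denominator $q \leq d$, the largest numerator with $p/q < a/b$ is simply $\lfloor (aq-1)/b \rfloor$, so a linear scan over denominators reproduces $prev_d(t)$ without the binary search:

```python
from fractions import Fraction

def prev_frac(t, d):
    # largest fraction p/q with q <= d that is strictly less than t
    # (assumes 0 < t < 1, so p < q automatically)
    a, b = t.numerator, t.denominator
    best_p, best_q = 0, 1
    for q in range(2, d + 1):
        p = (a * q - 1) // b           # largest p with p/q < a/b
        if p * best_q > best_p * q:    # p/q > best_p/best_q, cross-multiplied
            best_p, best_q = p, q
    return Fraction(best_p, best_q)

print(prev_frac(Fraction(3, 7), 8))        # 2/5, matching the worked example
print(prev_frac(Fraction(3, 7), 1000000))  # the Problem 71 answer
```

On the small example this returns $\frac{2}{5}$, agreeing with the table above.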
Tagged p71, programming, project euler, racket
http://math.stackexchange.com/questions/153849/how-to-eliminate-qx-y-in-system-of-two-pde?answertab=oldest
# How to eliminate Q(x,y) in system of two PDE
The analytical problem looks simple, but I am not sure whether it is possible for this kind of system. Here is my idea: apply $\frac{\partial }{\partial x}$ to the first equation and then substitute $\frac{\partial Q}{\partial x}$ from the second equation into the first. The problem is that $Q$ appears in two places and the coefficients in front of it ($A$ and $B$) are not the same. $A$, $B$, $C$, $D$, $E$ and $F$ are constants.
I need one PDE involving only $P(x,y)$.
$$-A\frac{\partial ^2Q(x,y)}{\partial x^2}+B\frac{\partial ^3P(x,y)}{\partial x^3}+C Q(x,y)-C\frac{\partial P(x,y)}{\partial x}+D\frac{\partial ^2Q(x,y)}{\partial y^2}=0$$
$$-B\frac{\partial ^3Q(x,y)}{\partial x^3}+E\frac{\partial ^4P(x,y)}{\partial x^4}-C \frac{\partial Q(x,y)}{\partial x}+C\frac{\partial ^2P(x,y)}{\partial x^2}-F\frac{\partial ^2P(x,y)}{\partial y^2}=0$$
Thank you in advance
## 2 Answers
$$-A\frac{\partial^{2}Q}{\partial x^{2}}+B\frac{\partial^{3}P}{\partial x^{3}}+CQ-C\frac{\partial P}{\partial x}+D\frac{\partial^{2}Q}{\partial y^{2}}=0\tag{{1}}$$ $$-B\frac{\partial^{3}Q}{\partial x^{3}}+E\frac{\partial^{4}P}{\partial x^{4}}-C\frac{\partial Q}{\partial x}+C\frac{\partial^{2}P}{\partial x^{2}}-F\frac{\partial^{2}P}{\partial y^{2}}=0\tag{{2}}$$
Differentiate $(1)$ with respect to $x$ :$$-A\frac{\partial^{3}Q}{\partial x^{3}}+B\frac{\partial^{4}P}{\partial x^{4}}+C\frac{\partial Q}{\partial x}-C\frac{\partial^{2}P}{\partial x^{2}}+D\frac{\partial^{3}Q}{\partial x\partial y^{2}}=0\tag{{3}}$$
Add $(2)$ and $(3)$:$$-\left(A+B\right)\frac{\partial^{3}Q}{\partial x^{3}}+\left(E+B\right)\frac{\partial^{4}P}{\partial x^{4}}+D\frac{\partial^{3}Q}{\partial x\partial y^{2}}-F\frac{\partial^{2}P}{\partial y^{2}}=0\tag{4}$$ Differentiate $(2)$ twice with respect to $y$ :$$-B\frac{\partial^{5}Q}{\partial x^{3}\partial y^{2}}+E\frac{\partial^{6}P}{\partial x^{4}\partial y^{2}}-C\frac{\partial^{3}Q}{\partial x\partial y^{2}}+C\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}-F\frac{\partial^{4}P}{\partial y^{4}}=0\tag{{5}}$$
Differentiate $(4)$ twice with respect to $x$ $$-\left(A+B\right)\frac{\partial^{5}Q}{\partial x^{5}}+\left(E+B\right)\frac{\partial^{6}P}{\partial x^{6}}+D\frac{\partial^{5}Q}{\partial x^{3}\partial y^{2}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}=0\tag{6}$$
Differentiate $(2)$ twice with respect to $x$ :$$-B\frac{\partial^{5}Q}{\partial x^{5}}+E\frac{\partial^{6}P}{\partial x^{6}}-C\frac{\partial^{3}Q}{\partial x^{3}}+C\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}=0\tag{{7}}$$
From $(7)$:
$$\frac{\partial^{5}Q}{\partial x^{5}}=\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{6}}-C\frac{\partial^{3}Q}{\partial x^{3}}+C\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}\right)\tag{{8}}$$
Substitute into $(6)$: $$-\left(A+B\right)\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{6}}-C\frac{\partial^{3}Q}{\partial x^{3}}+C\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}\right)+\left(E+B\right)\frac{\partial^{6}P}{\partial x^{6}}+D\frac{\partial^{5}Q}{\partial x^{3}\partial y^{2}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}=0 \tag{9}$$
From $(5)$ $$\frac{\partial^{5}Q}{\partial x^{3}\partial y^{2}}=\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{4}\partial y^{2}}-C\frac{\partial^{3}Q}{\partial x\partial y^{2}}+C\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}-F\frac{\partial^{4}P}{\partial y^{4}}\right)\tag{10}$$
Substitute into $(9)$: $$-\left(A+B\right)\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{6}}-C\frac{\partial^{3}Q}{\partial x^{3}}+C\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}\right)+\left(E+B\right)\frac{\partial^{6}P}{\partial x^{6}}+D\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{4}\partial y^{2}}-C\frac{\partial^{3}Q}{\partial x\partial y^{2}}+C\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}-F\frac{\partial^{4}P}{\partial y^{4}}\right)-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}=0\tag{11}$$
From $(4)$: $$\frac{\partial^{3}Q}{\partial x\partial y^{2}}=-\frac{1}{D}\left(-\left(A+B\right)\frac{\partial^{3}Q}{\partial x^{3}}+\left(E+B\right)\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{2}P}{\partial y^{2}}\right)\tag{12}$$
Substitute into $(11)$: $$-\left(A+B\right)\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{6}}-C\frac{\partial^{3}Q}{\partial x^{3}}+C\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}\right)+\left(E+B\right)\frac{\partial^{6}P}{\partial x^{6}}+D\frac{1}{B}\left(E\frac{\partial^{6}P}{\partial x^{4}\partial y^{2}}+C\frac{1}{D}\left(-\left(A+B\right)\frac{\partial^{3}Q}{\partial x^{3}}+\left(E+B\right)\frac{\partial^{4}P}{\partial x^{4}}-F\frac{\partial^{2}P}{\partial y^{2}}\right)+C\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}-F\frac{\partial^{4}P}{\partial y^{4}}\right)-F\frac{\partial^{4}P}{\partial x^{2}\partial y^{2}}=0\tag{13}$$
Differentiate $(1)$ with respect to $x$ three times:$$-A\frac{\partial^{5}Q}{\partial x^{5}}+B\frac{\partial^{6}P}{\partial x^{6}}+C\frac{\partial^{3}Q}{\partial x^{3}}-C\frac{\partial^{4}P}{\partial x^{4}}+D\frac{\partial^{5}Q}{\partial x^{3}\partial y^{2}}=0\tag{14}$$
Equation $(13)$ gives an expression for $$\frac{\partial^{3}Q}{\partial x^{3}}$$ . Substituting this into $(8)$ gives an equation for $$\frac{\partial^{5}Q}{\partial x^{5}}$$ . Substituting both results into $(14)$ gives an equation in terms of $P$ and its derivatives only.
yes, but I need to eliminate Q and all derivatives of function Q, how to do that? – George Jun 4 '12 at 21:20
you just need to solve this linear system (i used the uniform notation $i=0$ corresponds to $Q$) – Valentin Jun 4 '12 at 21:22
@ Valentin Thanks, right answer! – George Jun 4 '12 at 21:25
@ Valentin in second equation should be derivatives x and y? – George Jun 4 '12 at 21:37
yes, you are right i made a slip, so you actually might need to differentiate each a few times more by $x$ and $y$ until the number of equations matches the number of variables – Valentin Jun 4 '12 at 21:46
Maple can handle this with casesplit in the PDEtools package.
```
des := {-A*diff(Q(x,y),x,x) + B*diff(P(x,y),x,x,x) + C*Q(x,y)
          - C*diff(P(x,y),x) + D*diff(Q(x,y),y,y) = 0,
        -B*diff(Q(x,y),x,x,x) + E*diff(P(x,y),x,x,x,x) - C*diff(Q(x,y),x)
          + C*diff(P(x,y),x,x) - F*diff(P(x,y),y,y) = 0};
PDEtools:-casesplit(des, [Q, P]);
```
and the last equation returned is (after some cleaning up)
$$\left( -{B}^{2}+EA \right) {\frac {\partial ^{6}}{\partial {x}^{6}}}P \left( x,y \right) + \left( - D C-AF \right) {\frac { \partial ^{4}}{\partial {y}^{2}\partial {x}^{2}}}P \left( x,y \right) + \left( AC-CE \right) {\frac {\partial ^{4}}{\partial {x}^{4}}}P \left( x,y \right) + D F{\frac {\partial ^{4}}{ \partial {y}^{4}}}P \left( x,y \right) - D E { \frac {\partial ^{6}}{\partial {y}^{2}\partial {x}^{4}}}P \left( x,y \right) +CF{\frac {\partial ^{2}}{\partial {y}^{2}}}P \left( x,y \right) = 0$$
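One way to sanity-check this eliminant independently (my addition, not part of either answer): substitute the plane-wave ansatz $P = e^{i(kx+ly)}$, $Q = c\,e^{i(kx+ly)}$ into equations $(1)$ and $(2)$ of the previous answer. Equation $(1)$ gives $c\,(Ak^2+C-Dl^2) = i(Bk^3+Ck)$ and equation $(2)$ gives $ikc\,(Bk^2-C) = Ck^2-Ek^4-Fl^2$; eliminating $c$ yields a polynomial in $k$ and $l$ that should coincide with the symbol of Maple's equation under $\partial_x \to ik$, $\partial_y \to il$. The following Python sketch checks this agreement at random parameter values (function names are mine):

```python
import random

def maple_symbol(A, B, C, D, E, F, k, l):
    # symbol of Maple's eliminated PDE under d/dx -> i*k, d/dy -> i*l
    return ((B*B - E*A) * k**6 + (A*C - C*E) * k**4 + D*E * k**4 * l**2
            - (D*C + A*F) * k**2 * l**2 + D*F * l**4 - C*F * l**2)

def eliminant(A, B, C, D, E, F, k, l):
    # eliminate c between the plane-wave forms of equations (1) and (2)
    return ((C*k**2 - E*k**4 - F*l**2) * (A*k**2 + C - D*l**2)
            + k**2 * (B*B*k**4 - C*C))

random.seed(1)
for _ in range(1000):
    args = [random.uniform(-2, 2) for _ in range(8)]
    assert abs(maple_symbol(*args) - eliminant(*args)) < 1e-9
print("eliminant matches Maple's result at 1000 random points")
```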
@ Robert Israel Is it possible to do this in $Mathematica$? – George Jun 5 '12 at 8:15
@George: I'm not a Mathematica user, so I don't know. – Robert Israel Jun 5 '12 at 18:14
http://math.stackexchange.com/questions/294079/why-isnt-the-naive-primes-algorithm-in-p
# Why isn't the naive PRIMES algorithm in P?
The naive algorithm tries dividing $n$ by $2 \dots n-1$ to see if it divides without a remainder. Each division can be done in $O(n)$-time and there are $O(n)$ divisions to be made. What's wrong with this algorithm, or rather why is it not in $P$ complexity?
## 1 Answer
Primality testing algorithms have their complexity measured as the number of digits grows to infinity, not the size of the integer itself. Given an integer $N$ with $n$ digits, so that $N\approx 10^n$, the algorithm you gave above grows exponentially in $n$, the number of digits.
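To make this concrete, here is a small sketch (Python; my addition) that counts the divisions the naive algorithm performs. For an $n$-digit prime $N$ the loop runs about $10^n$ times, so the running time is exponential in the input size $n$:

```python
def naive_is_prime(N):
    # trial division by every integer in 2..N-1, counting the divisions
    divisions = 0
    for d in range(2, N):
        divisions += 1
        if N % d == 0:
            return False, divisions
    return True, divisions

# For a prime, the loop runs all the way to N-1: N-2 divisions,
# roughly 10**digits -- exponential in the number of digits.
for N in (97, 997, 9973):            # primes with 2, 3 and 4 digits
    prime, work = naive_is_prime(N)
    print(N, prime, work)            # 95, 995 and 9971 divisions
```

Each extra digit multiplies the work by roughly 10, which is why the complexity is measured against the digit count.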
http://mathforum.org/mathimages/index.php?title=Polar_Equations&oldid=25173
# Polar Equations
### From Math Images
Revision as of 17:51, 14 July 2011 by Chanj (Talk | contribs)
A polar rose (Rhodonea Curve)
This polar rose is created with the polar equation: $r = \cos(\pi\theta)$.
Fields: Algebra and Calculus
Created By: chanj
# Basic Description
Polar equations are used to create interesting curves, and in most cases they are periodic like sine waves. Other types of curves can also be created using polar equations besides roses, such as Archimedean spirals and limaçons. See the Polar Coordinates page for some background information.
# A More Mathematical Explanation
Note: understanding of this explanation requires: *calculus, trigonometry
## Rose
The general form of the polar equation used to create a rose is $r = a \sin(n \theta)$ or $r = a \cos(n \theta)$. Note that since $\sin(\theta) = \cos(\theta-\frac{\pi}{2})$, choosing between sine and cosine only affects where the curve starts and ends. $a$ is the maximum value $r$ can take, i.e. the maximum radius of the rose. $n$ affects the number of petals on the graph:
• If $n$ is an odd integer, then there would be $n$ petals, and the curve repeats itself every $\pi$.
• If $n$ is an even integer, then there would be $2n$ petals, and the curve repeats itself every $2 \pi$.
• If $n$ is a rational fraction ($p/q$ where $p$ and $q$ are integers), then the curve repeats after $\theta = \pi q k$, where $k = 1$ if $pq$ is odd, and $k = 2$ if $pq$ is even.
For example, for $r = \cos(\frac{1}{2}\theta)$ the angle coefficient is $\frac{1}{2}$; $1 \times 2 = 2$ is even, so the curve repeats itself every $\pi \times 2 \times 2 \approx 12.566$. For $r = \cos(\frac{1}{3}\theta)$ the angle coefficient is $\frac{1}{3}$; $1 \times 3 = 3$ is odd, so the curve repeats itself every $\pi \times 3 \times 1 \approx 9.425$.
• If $n$ is irrational, then there are an infinite number of petals.
For example, $r = \cos(e \theta)$ (where $e \approx 2.71828$ is irrational) never closes: plots of $\theta$ from $0$ to $10$, $50$, and $100$ show more and more petals filling the disk.
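The repetition rule for rational $n = p/q$ can be sketched directly (a Python sketch; `rose_period` is a name I made up):

```python
import math

def rose_period(p, q):
    # r = cos((p/q) * theta) repeats after theta = pi * q * k,
    # where k = 1 if p*q is odd and k = 2 if p*q is even
    k = 1 if (p * q) % 2 == 1 else 2
    return math.pi * q * k

print(round(rose_period(1, 2), 3))  # 12.566, as in the first example
print(round(rose_period(1, 3), 3))  # 9.425, as in the second example
```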
Below is an applet to graph polar roses, which is used to graph the examples above:
Source code: Rose graphing applet
## Other Polar Curves
Archimedean Spirals
• Archimedes' spiral, $r = a\theta$: the spiral can be used to square a circle and trisect an angle.
• Fermat's spiral, $r = \pm a\sqrt\theta$: this spiral's pattern can be seen in disc phyllotaxis.
• Hyperbolic spiral, $r = \frac{a}{\theta}$: it begins at an infinite distance from the pole, and winds faster as it approaches the pole.
• Lituus, $r^2 \theta = a^2$: it is asymptotic to the $x$ axis as the distance from the pole increases.
Limaçon[1]
The word "limaçon" derives from the Latin word "limax," meaning snail. The general equation for a limaçon is $r = b + a\cos(\theta)$.
• If $b = a/2$, then it is a trisectrix (see figure 2).
• If $b = a$, then it becomes a cardioid (see figure 3).
• If $2a > b > a$, then it is dimpled (see figure 4).
• If $b \geq 2a$, then the curve is convex (see figure 5).
• Figure 1: $r = \cos(\theta)$
• Figure 2: $r = 0.5 + \cos(\theta)$ (trisectrix)
• Figure 3: $r = 1 + \cos(\theta)$ (cardioid)
• Figure 4: $r = 1.5 + \cos(\theta)$ (dimpled)
• Figure 5: $r = 2 + \cos(\theta)$ (convex)
## Finding Derivatives
A derivative gives the slope of a curve at any point.
Consider the polar curve $r = f(\theta)$. If we turn it into parametric equations, we would get:
• $x = r \cos(\theta) = f(\theta) \cos(\theta)$
• $y = r \sin(\theta) = f(\theta) \sin(\theta)$
Using the method of finding the derivative of parametric equations and the product rule, we would get:
$\frac{dy}{dx} = \frac{\frac{dy}{d\theta}}{\frac{dx}{d\theta}} = \frac{\frac{dr}{d\theta} \sin(\theta) + r \cos(\theta)}{\frac{dr}{d\theta} \cos(\theta) - r \sin(\theta)}$
Note: It is not necessary to turn the polar equation to parametric equations to find derivatives. You can simply use the formula above.
Find the derivative of $r = 1 + \sin(\theta)$ at $\theta = \frac{\pi}{3}$.
$\frac{dr}{d\theta} = \cos(\theta)$
$\frac{dy}{dx} = \frac{\frac{dr}{d\theta} \sin(\theta) + r \cos(\theta)}{\frac{dr}{d\theta} \cos(\theta) - r \sin(\theta)}$
$= \frac{\cos(\theta) \sin(\theta) + (1 + \sin(\theta) ) \cos(\theta)}{\cos(\theta)\cos(\theta) - (1 + \sin(\theta) ) \sin(\theta)}$
$= \frac{\cos(\theta)\sin(\theta) + \cos(\theta) + \cos(\theta)\sin(\theta)}{\cos^2(\theta) - \sin(\theta) - \sin^2(\theta)}$
Note: Using the double-angle formula, we get $\cos^2(\theta) - \sin^2(\theta) = 1 - 2\sin^2(\theta)$
$= \frac{\cos(\theta)(1+2\sin(\theta))}{1-2\sin^2(\theta)-\sin(\theta)}$
At $\theta = \frac{\pi}{3}$ we have $\sin(\theta) = \frac{\sqrt{3}}{2}$ and $\cos(\theta) = \frac{1}{2}$, so the numerator is $\frac{1}{2}(1+\sqrt{3})$, the denominator is $-\frac{1}{2}(1+\sqrt{3})$, and the slope is $-1$.
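As a numerical sanity check (my addition; a sketch using central differences on the parametric form), the slope of $r = 1 + \sin(\theta)$ at $\theta = \frac{\pi}{3}$ comes out to $-1$:

```python
import math

def polar_slope(r, theta, h=1e-6):
    # dy/dx for the polar curve r(theta) via central differences
    # on x = r cos(theta), y = r sin(theta)
    x = lambda t: r(t) * math.cos(t)
    y = lambda t: r(t) * math.sin(t)
    return (y(theta + h) - y(theta - h)) / (x(theta + h) - x(theta - h))

slope = polar_slope(lambda t: 1 + math.sin(t), math.pi / 3)
print(slope)  # approximately -1
```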
## Finding Areas and Arc Lengths
Area of a sector of a circle.
To find the area of a sector of a circle, where $r$ is the radius, you would use $A = \frac{1}{2} r^2 \theta$.
Therefore, for $r = f(\theta)$, the formula for the area of a polar region is:
$A = \int_a^b\! \frac{1}{2} r^2 d\theta$
For example, the area of one loop of the four-petaled rose $r = \cos(2\theta)$ is
$A = \int_{-\frac{\pi}{4}}^\frac{\pi}{4}\! \frac{1}{2} \cos^2(2\theta) d\theta = \frac{\pi}{8}$
The formula to find the arc length for $r = f(\theta)$ and assuming $r$ is continuous is:
$L = \int_a^b\! \sqrt{r^2 + {\bigg(\frac{dr}{d\theta}\bigg)} ^2}$ $d\theta$
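Both formulas are easy to sanity-check numerically (my addition; a midpoint-rule sketch). For the cardioid $r = 1 + \cos(\theta)$, the exact area is $\frac{3\pi}{2}$ and the exact arc length is $8$:

```python
import math

def polar_area_and_length(r, dr, a, b, steps=100000):
    # midpoint-rule approximations of
    #   A = integral of (1/2) r^2 dtheta
    #   L = integral of sqrt(r^2 + (dr/dtheta)^2) dtheta
    h = (b - a) / steps
    area = length = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        area += 0.5 * r(t) ** 2 * h
        length += math.hypot(r(t), dr(t)) * h
    return area, length

A, L = polar_area_and_length(lambda t: 1 + math.cos(t),
                             lambda t: -math.sin(t),
                             0.0, 2 * math.pi)
print(A)  # approximately 4.712, i.e. 3*pi/2
print(L)  # approximately 8.0
```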
# Why It's Interesting
Polar coordinates are often used in navigation, for example by aircraft. They are also used to plot gravitational fields and point sources. Furthermore, polar patterns describe the directionality of microphones, that is, the directions from which a microphone picks up sound. A well-known pattern is the cardioid.
## Possible Future work
• More details can be written about the different curves, maybe they can get their own pages.
• Applets can be made to draw these different curves, like the one on the page for roses.
# Teaching Materials
There are currently no teaching materials for this page.
# Related Links
### Additional Resources
Polar Coordinates
Cardioid
Source code: Rose graphing applet
# References
Wolfram MathWorld: Rose, Limacon, Archimedean Spiral
Wikipedia: Polar Coordinate System, Archimedean Spiral, Fermat's Spiral
1. ↑ Weisstein, Eric W. (2011). http://mathworld.wolfram.com/Limacon.html. Wolfram:MathWorld.
2. ↑ 2.0 2.1 Stewert, James. (2009). Calculus Early Transcendentals. Ohio:Cengage Learning.
http://mathematica.stackexchange.com/questions/15627/is-learning-to-use-mathematica-useful-for-pure-theoretical-research-in-mathemati/15632
# Is learning to use Mathematica useful for pure theoretical research in Mathematics and Computer Science?
I am looking for opinions from Mathematica users about Mathematica itself. After reading the faq, I thought that in some sense I "wish to solve a problem using Mathematica" ... although I understand that this question is probably too vague. I hope not to be distracting here.
I see that some researchers (in math and computer science) at my university use Mathematica to carry out intricate calculations. I wonder if mathematical software today is so good that it can do better than humans at symbolic manipulation. This would mean that it would be really worthwhile to acquire some background in using such software, especially in those fields of research where one often has to deal with huge formulas.
Also, two subquestions:
1. How does one cite the use of Mathematica in a research paper? One could in principle say "beginning from formula X, using Mathematica we get Y". Would that be accepted professionally?
2. Are these systems foolproof (particularly Mathematica)? I mean, is it known that there are no special instances of symbolic-manipulation problems on which the software could make an error?
P.S. I didn't find a proper tag for my question.
If the question is "is Mathematica better at manipulating formulas than humans" I'd say that in most cases it's so much better it's funny (eg it can work with polynomials that would take a human hours to write down, or can do integrals that would take hours to look up in Gradshteyn and Ryzhik and others that aren't there). So can Maple for that matter. But of course they have bugs so some answer might be wrong. And finally, viewing it as an algebra system only is too restrictive; think of it as a platform to do general computation (numerics, visualization etc). – acl Dec 3 '12 at 17:36
I wrote this answer mathematica.stackexchange.com/questions/733/… to warn about reckless using `Mathematica` when it is not necessary. On the other hand it may be an indispensable tool. – Artes Dec 3 '12 at 17:45
symbolic computation is useful, but I think numerical computation is much, much more important in the real world. 99.9% of real world problems can't be solved analytically. That is why we still have Fortran going strong even after 50 years :) – Nasser Dec 3 '12 at 18:26
@NasserM.Abbasi I don't quite agree with that. All fundamental intuition we have about our world is essentially analytical. Pure numerics would drive us to chaos in no time. – Leonid Shifrin Dec 3 '12 at 18:37
If I find myself using numerical, statistical, or even empirical solutions to real-world problems, I have a nagging feeling in the back of my mind that I just haven't thought hard enough about the problem. I have that nagging feeling a lot ;-) – Jagra Dec 3 '12 at 21:29
## 3 Answers
“I wonder if mathematical software today is so good that it can do better than humans at symbolic manipulation. This would mean that it would be really worthwhile to acquire some background in using such software, especially in those fields of research where one often has to deal with huge formulas.”
I would say faster and more accurate symbolic manipulation is one of the main points of Mathematica so yes I’d agree with the above implication and that it would be worthwhile acquiring some background.
Symbolic manipulation appears in many contexts however, and your header question about its usefulness in “pure theoretical research in Mathematics and Computer Science” is almost a completely separate (and loaded) question in itself with its own philosophical and practical overtones well beyond the two subquestions posed. These subquestions do tangentially relate to these overtones though:
1. Citations: "beginning from formula X, using Mathematica we get Y" I think as a general rule this would be OK provided details are given (perhaps in appendices/code attachments) about how this was done and in particular how such results can be reproduced. Usually it is the conceptual insights that are of interest more so than the mechanical manipulation. The other way to cite computational work is to create a related Demonstration and cite this using its own citation syntax (which appears below each Demonstration).
2. Correctness: For sufficiently large manipulations I think correctness actually becomes much more of an issue if Mathematica (or similar) is not used. The chance of error doing these manipulations by hand is much higher than from using Mathematica's inbuilt transformations. I'd even go further and say the chance of an error using these inbuilt transformations is actually much lower than the chance of an error appearing in published theoretical proofs (usually done entirely by hand). The caveat to this is of course good programming practice that builds upon these transformations, and constant vigilance (some of which Leonid mentioned), but some rules of thumb I've found useful are:
• For lower dimensions, implement solutions using at least 3 different ("conceptually orthogonal") methods. (The flexibility of Mathematica's language means this is usually not too difficult and provides a good margin of error.)
• For higher-order dimensions, check for lower-order consistency. (It is usually impractical to refine 3 different implementations, so it is usually sufficient to settle on one for refinement and efficiency improvements in tackling higher-order dimensions.)
• Use sanity checks often (graphs, implications of output in relation to obvious truths)
• Use validation suites routinely (either unit tests in Wolfram Workbench, or custom-made ones in the frontend, my preferred method)
• Cross-reference with other published algorithms/output
• Keep an open mind that errors are possible given these aren’t proofs but try to categorize possible error sources - e.g. One crude, overarching one for error sources:
I’d say 3) and 4) are pretty unlikely error sources; 2) is where most errors occur and hence its focus in the above measures (which can also help in confirming that 1) is a publishable result).
To the specific question in the header (take theoretical computer science, TCS): one could ask whether it is even a result in "Theoretical Computer Science" if Mathematica is heavily involved. This comes down to definitions and philosophy. There is a school of thought that, for example, experimentation using high-performance computation in TCS is unlikely to yield too many insights. (I'm talking about traditional experimentation, not actual TCS proofs in Mathematica, which again is an even more loaded question.)
Take one luminary - Scott Aaronson’s view about using high-performance computation in TCS research given in a presentation in which he states (slide 4):
The hope: Examining the minimal circuits would inspire new conjectures about asymptotic behavior, which we could then try to prove
Conventional wisdom: We wouldn't learn anything this way - There are $\sim 2^{2^{n}}$ circuits on $n$ variables, astronomical even for tiny $n$
- Small-$n$ behavior can be notoriously misleading about asymptotics
My view: The conventional wisdom is probably right. That’s why I’m talking in this session.
This stackexchange entry indicates some successes in experimental complexity theory, although it appears pretty limited and in limited domains.
I can offer a kind of counter-example in a Demonstration in which for a type of CNF circuit a conjecture about asymptotic (threshold) behaviour is surprisingly clear from considering only the first few dimensions $n=2,3,4$ (it had been checked statistically for larger values of $n$ and in related theoretical work)
(Note how “The hope:” part above implicitly reveals what is considered TCS or of value in TCS).
Then there are also the cultural issues worth considering.
How many (latex) papers in “TCS journals” even mention runnable code? (I’d suggest <1%)
Timelessness. It’s a pretty good bet that latex-generated PDF’s in TCS will be viewable in 10-20-50 years mainly because this format already houses so much scientific knowledge. Conversely it’s almost guaranteed that a sufficiently large Mathematica package (e.g. of the sort that might involve some systematic experimentation in TCS) will not be runnable in even 5 years. This of course is not a Mathematica issue per-se (it perhaps handles backward compatibility better than most) but one common to all experimentation since backward compatibility is an order of magnitude greater problem for runnable code compared with static documents. One of the potentially important things about the Demonstration site IMO is that maybe this backward compatibility will end up being managed for you.
This timelessness and cultural issues become relevant to the extent that your code becomes more and more complex and more and more part of your results - which in many ways will be inevitable in any systematic experimentation (if you take the philosophical position about the potential of symbolic manipulation in TCS) that might increasingly be needed to discover something new.
So my take it on using Mathematica systematically in TCS would be:
• For “mainly theoretical results” any motivation/checking/illustrations using Mathematica could be beneficial.
• Closer and deeper integration between mathematica experimentation and TCS is still a relatively unexplored and potentially fruitful area IMO (- as someone not working in the field) but … most experts in the area would probably disagree and at any rate …
• Technically the infrastructure for a larger-scale Latex/Mathematica - theoretical/experimental framework is not sufficiently developed (IMO) to currently go too far down this path.
• Demonstrations might be a step in the right direction and perhaps provide a good litmus test for the level of complexity in terms of symbolic manipulation used in TCS research. If it can be put into a Demonstration your work may have a better chance of gaining some sort of timelessness. Currently Demonstrations are a fair way behind what can be done in a notebook (e.g. integration of packages, input fields, external data sources etc) but IMO this situation may improve over time and particularly if WRI shares this view about the importance of imbuing this timelessness in computational research (perhaps adding package support, Google Play/App Store functionality etc).
Regarding your general question: from my own experience, a large part of my research findings while still at academia would have been much harder or outright impossible to get without Mathematica, and that applies to both numerical and analytical work. I also know this for many other people. There are a number of areas where it allows you to get through some pretty sophisticated stuff without being an expert in some particular field, such as special functions, for instance.
1. No, in most cases saying "from X to Y you get using Mathematica" (or any other system) is not professional, at least in the fields I was working (theoretical and mathematical physics). You can say that you used Mathematica to obtain some results, but there should be a proof of them not relying on it. For some computations, one can attach Mathematica code and claim the results, but that happens mostly for some complementary or auxiliary results, not for main results you report in the paper.
2. No, these systems are not foolproof and probably never will be. They are tools, albeit very sophisticated, and as with any tool, however powerful, it all depends on the person who is using it. You should not rely on any single tool alone. What I personally was doing was to perform many checks, numerical and analytical, to make sure that the results I was getting were correct. Even if most of them were obtained with Mathematica, one can always devise cross-checks, check limiting cases, do computations in qualitatively different ways, perform some graphical checks, etc. What matters is that you understand the problem well; then you'll know what to do.
So, my take on it is that Mathematica is enormously helpful for both getting the understanding of the problem being solved and getting the actual results, but is not a replacement for a human, and not meant to be.
-
At some point I was dealing with formulas which had about 70000 terms, each of which was a polynomial multiplied by several Bessel functions. I was able to use those formulas productively in Mathematica, and there is no question that I would not have been able to use or even obtain them without it. They were not, of course, the final result, but they were instrumental in getting to it. – Leonid Shifrin Dec 3 '12 at 18:22
Don't forget about graphical checks! – murray Dec 5 '12 at 18:47
@murray Well, I sort of implicitly included them into the checks I was talking about. But may be it is worth mentioning explicitly. Edited. – Leonid Shifrin Dec 5 '12 at 18:49
I agree with remarks of Leonid and acl - Mathematica is generally much better at performing large computations than are humans. I would add the caveat, though, that Mathematica can certainly miss special simplifications involving symmetry. As a result, there are plenty of situations that can be evaluated by hand but not by Mathematica. One area that is rife with this sort of thing is numerical differential equations. If you have an equation on a disk with symmetric initial conditions, then you might need to reduce the dimension using the symmetry by hand.
On the positive side, I published a paper in Complex Systems where the main result depended on the exact eigenvalues of a large, irregular matrix. This is exactly the kind of thing where Mathematica will excel compared to humans. Furthermore, as is often the case, once we know the eigenvalue and corresponding eigenvector - proving that fact is a simple computation. The solution is self-verifying.
-
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F02/f02intro.html
# NAG Library Chapter Introduction: f02 – Eigenvalues and Eigenvectors
## 1 Scope of the Chapter
This chapter provides functions for various types of matrix eigenvalue problem:
• standard eigenvalue problems (finding eigenvalues and eigenvectors of a square matrix $A$);
• singular value problems (finding singular values and singular vectors of a rectangular matrix $A$);
• generalized eigenvalue problems (finding eigenvalues and eigenvectors of a matrix pencil $A-\lambda B$).
Functions are provided for both real and complex data.
The majority of functions for these problems can be found in Chapter f08 which contains software derived from LAPACK (see Anderson et al. (1999)). However, you should read the introduction to this chapter before turning to Chapter f08, especially if you are a new user. Chapter f12 contains functions for large sparse eigenvalue problems, although one such function is also available in this chapter.
Chapters f02 and f08 contain Black Box (or Driver) functions that enable many problems to be solved by a call to a single function, and the decision trees in Section 4 direct you to the most appropriate functions in Chapters f02 and f08.
## 2 Background to the Problems
Here we describe the different types of problem which can be tackled by the functions in this chapter, and give a brief outline of the methods used to solve them. If you have one specific type of problem to solve, you need only read the relevant sub-section and then turn to Section 3. Consult a standard textbook for a more thorough discussion, for example Golub and Van Loan (1996) or Parlett (1998).
In each sub-section, we first describe the problem in terms of real matrices. The changes needed to adapt the discussion to complex matrices are usually simple and obvious: a matrix transpose such as ${Q}^{\mathrm{T}}$ must be replaced by its conjugate transpose ${Q}^{\mathrm{H}}$; symmetric matrices must be replaced by Hermitian matrices, and orthogonal matrices by unitary matrices. Any additional changes are noted at the end of the sub-section.
### 2.1 Standard Eigenvalue Problems
Let $A$ be a square matrix of order $n$. The standard eigenvalue problem is to find eigenvalues, $\lambda $, and corresponding eigenvectors, $x\ne 0$, such that
$Ax=λx.$ (1)
(The phrase ‘eigenvalue problem’ is sometimes abbreviated to eigenproblem.)
#### 2.1.1 Standard symmetric eigenvalue problems
If $A$ is real symmetric, the eigenvalue problem has many desirable features, and it is advisable to take advantage of symmetry whenever possible.
The eigenvalues $\lambda $ are all real, and the eigenvectors can be chosen to be mutually orthogonal. That is, we can write
$Az_i=\lambda_i z_i \quad \text{for } i=1,2,\dots,n$
or equivalently:
$AZ=ZΛ$ (2)
where $\Lambda $ is a real diagonal matrix whose diagonal elements ${\lambda }_{i}$ are the eigenvalues, and $Z$ is a real orthogonal matrix whose columns ${z}_{i}$ are the eigenvectors. This implies that ${z}_{i}^{\mathrm{T}}{z}_{j}=0$ if $i\ne j$, and ${‖{z}_{i}‖}_{2}=1$.
Equation (2) can be rewritten
$A=Z\Lambda Z^{\mathrm{T}}.$ (3)
This is known as the eigen-decomposition or spectral factorization of $A$.
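The spectral factorization (3) is easily illustrated numerically. The following sketch uses NumPy purely for illustration (it is not NAG code); `numpy.linalg.eigh` plays the role of a symmetric eigensolver:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                  # a small real symmetric matrix

# eigh exploits symmetry: all eigenvalues are real and the
# eigenvectors (columns of Z) are mutually orthonormal
lam, Z = np.linalg.eigh(A)

assert np.allclose(Z.T @ Z, np.eye(4))          # Z is orthogonal
assert np.allclose(Z @ np.diag(lam) @ Z.T, A)   # A = Z Lambda Z^T
```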
Eigenvalues of a real symmetric matrix are well-conditioned, that is, they are not unduly sensitive to perturbations in the original matrix $A$. The sensitivity of an eigenvector depends on how small the gap is between its eigenvalue and any other eigenvalue: the smaller the gap, the more sensitive the eigenvector. More details on the accuracy of computed eigenvalues and eigenvectors are given in the function documents, and in the f08 Chapter Introduction.
For dense or band matrices, the computation of eigenvalues and eigenvectors proceeds in the following stages:
1. $A$ is reduced to a symmetric tridiagonal matrix $T$ by an orthogonal similarity transformation: $A=QT{Q}^{\mathrm{T}}$, where $Q$ is orthogonal. (A tridiagonal matrix is zero except for the main diagonal and the first subdiagonal and superdiagonal on either side.) $T$ has the same eigenvalues as $A$ and is easier to handle.
2. Eigenvalues and eigenvectors of $T$ are computed as required. If all eigenvalues (and optionally eigenvectors) are required, they are computed by the $QR$ algorithm, which effectively factorizes $T$ as $T=S\Lambda {S}^{\mathrm{T}}$, where $S$ is orthogonal, or by the divide-and-conquer method. If only selected eigenvalues are required, they are computed by bisection, and if selected eigenvectors are required, they are computed by inverse iteration. If $s$ is an eigenvector of $T$, then $Qs$ is an eigenvector of $A$.
All the above remarks also apply – with the obvious changes – to the case when $A$ is a complex Hermitian matrix. The eigenvectors are complex, but the eigenvalues are all real, and so is the tridiagonal matrix $T$.
#### 2.1.2 Standard nonsymmetric eigenvalue problems
A real nonsymmetric matrix $A$ may have complex eigenvalues, occurring as complex conjugate pairs. If $x$ is an eigenvector corresponding to a complex eigenvalue $\lambda $, then the complex conjugate vector $\stackrel{-}{x}$ is the eigenvector corresponding to the complex conjugate eigenvalue $\stackrel{-}{\lambda }$. Note that the vector $x$ defined in equation (1) is sometimes called a right eigenvector; a left eigenvector $y$ is defined by
$y^{\mathrm{H}}A=\lambda y^{\mathrm{H}} \quad\text{or}\quad A^{\mathrm{T}}y=\bar{\lambda}y.$
Functions in this chapter only compute right eigenvectors (the usual requirement), but functions in Chapter f08 can compute left or right eigenvectors or both.
The eigenvalue problem can be solved via the Schur factorization of $A$, defined as
$A=ZTZ^{\mathrm{T}},$
where $Z$ is an orthogonal matrix and $T$ is a real upper quasi-triangular matrix, with the same eigenvalues as $A$. $T$ is called the Schur form of $A$. If all the eigenvalues of $A$ are real, then $T$ is upper triangular, and its diagonal elements are the eigenvalues of $A$. If $A$ has complex conjugate pairs of eigenvalues, then $T$ has $2$ by $2$ diagonal blocks, whose eigenvalues are the complex conjugate pairs of eigenvalues of $A$. (The structure of $T$ is simpler if the matrices are complex – see below.)
For example, the following matrix is in quasi-triangular form
$\begin{pmatrix} 1 & * & * & * \\ 0 & 2 & -1 & * \\ 0 & 1 & 2 & * \\ 0 & 0 & 0 & 3 \end{pmatrix}$
and has eigenvalues $1$, $2±i$, and $3$. (The elements indicated by ‘$*$’ may take any values.)
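This can be checked with a short NumPy sketch (not NAG code), in which the elements marked ‘$*$’ have been given arbitrary values; the eigenvalues are still those of the diagonal blocks:

```python
import numpy as np

# the quasi-triangular matrix above, with arbitrary values for the '*' entries
T = np.array([[1.0, 0.5, -2.0,  3.0],
              [0.0, 2.0, -1.0,  0.7],
              [0.0, 1.0,  2.0, -1.1],
              [0.0, 0.0,  0.0,  3.0]])

# the eigenvalues are those of the 1x1 blocks (1 and 3) and of the
# 2x2 block [[2, -1], [1, 2]], namely the conjugate pair 2 +/- i
lam = np.sort_complex(np.linalg.eigvals(T))
assert np.allclose(lam, [1, 2 - 1j, 2 + 1j, 3])
```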
The columns of $Z$ are called the Schur vectors. For each $k\left(1\le k\le n\right)$, the first $k$ columns of $Z$ form an orthonormal basis for the invariant subspace corresponding to the first $k$ eigenvalues on the diagonal of $T$. (An invariant subspace (for $A$) is a subspace $S$ such that for any vector $v$ in $S$, $Av$ is also in $S$.) Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of $k$ eigenvalues occupy the $k$ leading positions on the diagonal of $T$, and functions for this purpose are provided in Chapter f08.
Note that if $A$ is symmetric, the Schur vectors are the same as the eigenvectors, but if $A$ is nonsymmetric, they are distinct, and the Schur vectors, being orthonormal, are often more satisfactory to work with in numerical computation.
Eigenvalues and eigenvectors of a nonsymmetric matrix may be ill-conditioned, that is, sensitive to perturbations in $A$. Chapter f08 contains functions which compute or estimate the condition numbers of eigenvalues and eigenvectors, and the f08 Chapter Introduction gives more details about the error analysis of nonsymmetric eigenproblems. The accuracy with which eigenvalues and eigenvectors can be obtained is often improved by balancing a matrix. This is discussed further in Section 3.4.
Computation of eigenvalues, eigenvectors or the Schur factorization proceeds in the following stages:
1. $A$ is reduced to an upper Hessenberg matrix $H$ by an orthogonal similarity transformation: $A=QH{Q}^{\mathrm{T}}$, where $Q$ is orthogonal. (An upper Hessenberg matrix is zero below the first subdiagonal.) $H$ has the same eigenvalues as $A$, and is easier to handle.
2. The upper Hessenberg matrix $H$ is reduced to Schur form $T$ by the $QR$ algorithm, giving the Schur factorization $H=ST{S}^{\mathrm{T}}$. The eigenvalues of $A$ are obtained from the diagonal blocks of $T$. The matrix $Z$ of Schur vectors (if required) is computed as $Z=QS$.
3. After the eigenvalues have been found, eigenvectors may be computed, if required, in two different ways. Eigenvectors of $H$ can be computed by inverse iteration, and then pre-multiplied by $Q$ to give eigenvectors of $A$; this approach is usually preferred if only a few eigenvectors are required. Alternatively, eigenvectors of $T$ can be computed by back-substitution, and pre-multiplied by $Z$ to give eigenvectors of $A$.
All the above remarks also apply – with the obvious changes – to the case when $A$ is a complex matrix. The eigenvalues are in general complex, so there is no need for special treatment of complex conjugate pairs, and the Schur form $T$ is simply a complex upper triangular matrix.
### 2.2 The Singular Value Decomposition
The singular value decomposition (SVD) of a real $m$ by $n$ matrix $A$ is given by
$A=U\Sigma V^{\mathrm{T}},$
where $U$ and $V$ are orthogonal and $\Sigma $ is an $m$ by $n$ diagonal matrix with real diagonal elements, ${\sigma }_{i}$, such that
$\sigma_1\ge\sigma_2\ge\cdots\ge\sigma_{\min(m,n)}\ge 0.$
The ${\sigma }_{i}$ are the singular values of $A$ and the first $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$ columns of $U$ and $V$ are, respectively, the left and right singular vectors of $A$. The singular values and singular vectors satisfy
$Av_i=\sigma_i u_i \quad\text{and}\quad A^{\mathrm{T}}u_i=\sigma_i v_i$
where ${u}_{i}$ and ${v}_{i}$ are the $i$th columns of $U$ and $V$ respectively.
The singular value decomposition of $A$ is closely related to the eigen-decompositions of the symmetric matrices ${A}^{\mathrm{T}}A$ or $A{A}^{\mathrm{T}}$, because:
$A^{\mathrm{T}}Av_i=\sigma_i^2 v_i \quad\text{and}\quad AA^{\mathrm{T}}u_i=\sigma_i^2 u_i.$
However, these relationships are not recommended as a means of computing singular values or vectors unless $A$ is sparse and functions from Chapter f12 are to be used.
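These relations can be verified numerically. The NumPy sketch below (illustration only, not NAG code) forms $A^{\mathrm{T}}A$ solely to demonstrate the connection; as noted above, this is not a recommended way of computing singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

# A v_i = sigma_i u_i   and   A^T u_i = sigma_i v_i
assert np.allclose(A @ V, U * s)
assert np.allclose(A.T @ U, V * s)

# the squared singular values are the eigenvalues of A^T A
# (formed here only for illustration -- see the caution above)
assert np.allclose(np.sort(s**2), np.linalg.eigvalsh(A.T @ A))
```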
If ${U}_{k}$, ${V}_{k}$ denote the leading $k$ columns of $U$ and $V$ respectively, and if ${\Sigma }_{k}$ denotes the leading principal submatrix of $\Sigma $, then
$A_k \equiv U_k \Sigma_k V_k^{\mathrm{T}}$
is the best rank-$k$ approximation to $A$ in both the $2$-norm and the Frobenius norm.
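A short NumPy sketch (for illustration only) of the truncated factorization: the $2$-norm error of the best rank-$k$ approximation equals ${\sigma }_{k+1}$, and no other rank-$k$ matrix does better:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

# the 2-norm error equals the (k+1)th singular value
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])

# any other rank-k matrix is at least as far from A in the 2-norm
B = rng.standard_normal((6, k)) @ rng.standard_normal((k, 4))
assert np.linalg.norm(A - B, 2) >= s[k] - 1e-12
```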
Singular values are well-conditioned; that is, they are not unduly sensitive to perturbations in $A$. The sensitivity of a singular vector depends on how small the gap is between its singular value and any other singular value: the smaller the gap, the more sensitive the singular vector. More details on the accuracy of computed singular values and vectors are given in the function documents and in the f08 Chapter Introduction.
The singular value decomposition is useful for the numerical determination of the rank of a matrix, and for solving linear least squares problems, especially when they are rank-deficient (or nearly so). See Chapter f04.
Computation of singular values and vectors proceeds in the following stages:
1. $A$ is reduced to an upper bidiagonal matrix $B$ by an orthogonal transformation $A={U}_{1}B{V}_{1}^{\mathrm{T}}$, where ${U}_{1}$ and ${V}_{1}$ are orthogonal. (An upper bidiagonal matrix is zero except for the main diagonal and the first superdiagonal.) $B$ has the same singular values as $A$, and is easier to handle.
2. The SVD of the bidiagonal matrix $B$ is computed as $B={U}_{2}\Sigma {V}_{2}^{\mathrm{T}}$, where ${U}_{2}$ and ${V}_{2}$ are orthogonal and $\Sigma $ is diagonal as described above. Then in the SVD of $A$, $U={U}_{1}{U}_{2}$ and $V={V}_{1}{V}_{2}$.
All the above remarks also apply – with the obvious changes – to the case when $A$ is a complex matrix. The singular vectors are complex, but the singular values are real and non-negative, and the bidiagonal matrix $B$ is also real.
### 2.3 Generalized Eigenvalue Problems
Let $A$ and $B$ be square matrices of order $n$. The generalized eigenvalue problem is to find eigenvalues, $\lambda $, and corresponding eigenvectors, $x\ne 0$, such that
$Ax=λBx.$ (4)
For given $A$ and $B$, the set of all matrices of the form $A-\lambda B$ is called a pencil, and $\lambda $ and $x$ are said to be an eigenvalue and eigenvector of the pencil $A-\lambda B$.
When $B$ is nonsingular, equation (4) is mathematically equivalent to $\left({B}^{-1}A\right)x=\lambda x$, and when $A$ is nonsingular, it is equivalent to $\left({A}^{-1}B\right)x=\left(1/\lambda \right)x$. Thus, in theory, if one of the matrices $A$ or $B$ is known to be nonsingular, the problem could be reduced to a standard eigenvalue problem.
However, for this reduction to be satisfactory from the point of view of numerical stability, it is necessary not only that $B$ (or $A$) should be nonsingular, but that it should be well-conditioned with respect to inversion. The nearer $B$ is to singularity, the more unsatisfactory ${B}^{-1}A$ will be as a vehicle for determining the required eigenvalues. Well-determined eigenvalues of the original problem (4) may be poorly determined even by the correctly rounded version of ${B}^{-1}A$.
We consider first a special class of problems in which $B$ is known to be nonsingular, and then return to the general case in the following sub-section.
#### 2.3.1 Generalized symmetric-definite eigenvalue problems
If $A$ and $B$ are symmetric and $B$ is positive definite, then the generalized eigenvalue problem has desirable properties similar to those of the standard symmetric eigenvalue problem. The eigenvalues are all real, and the eigenvectors, while not orthogonal in the usual sense, satisfy the relations ${z}_{i}^{\mathrm{T}}B{z}_{j}=0$ for $i\ne j$ and can be normalized so that ${z}_{i}^{\mathrm{T}}B{z}_{i}=1$.
Note that it is not enough for $A$ and $B$ to be symmetric; $B$ must also be positive definite, which implies nonsingularity. Eigenproblems with these properties are referred to as symmetric-definite problems.
If $\Lambda $ is the diagonal matrix whose diagonal elements are the eigenvalues, and $Z$ is the matrix whose columns are the eigenvectors, then
$Z^{\mathrm{T}}AZ=\Lambda \quad\text{and}\quad Z^{\mathrm{T}}BZ=I.$
To compute eigenvalues and eigenvectors, the problem can be reduced to a standard symmetric eigenvalue problem, using the Cholesky factorization of $B$ as $L{L}^{\mathrm{T}}$ or ${U}^{\mathrm{T}}U$ (see Chapter f07). Note, however, that this reduction does implicitly involve the inversion of $B$, and hence this approach should not be used if $B$ is ill-conditioned with respect to inversion.
For example, with $B=L{L}^{\mathrm{T}}$, we have
$Az=\lambda Bz \iff \left(L^{-1}AL^{-\mathrm{T}}\right)\left(L^{\mathrm{T}}z\right)=\lambda\left(L^{\mathrm{T}}z\right).$
Hence the eigenvalues of $Az=\lambda Bz$ are those of $Cy=\lambda y$, where $C$ is the symmetric matrix $C={L}^{-1}A{L}^{-\mathrm{T}}$ and $y={L}^{\mathrm{T}}z$. The standard symmetric eigenproblem $Cy=\lambda y$ may be solved by the methods described in Section 2.1.1. The eigenvectors $z$ of the original problem may be recovered by computing $z={L}^{-\mathrm{T}}y$.
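The reduction can be sketched in NumPy (illustration only; not NAG code). Note that the triangular solves below avoid forming ${L}^{-1}$ explicitly:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                    # symmetric A
B = M @ M.T + 4 * np.eye(4)          # symmetric positive definite B

L = np.linalg.cholesky(B)            # B = L L^T

# C = L^{-1} A L^{-T}, via two triangular solves (no explicit inverse)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T
lam, Y = np.linalg.eigh(C)           # standard symmetric problem C y = lam y

# recover the eigenvectors of the original pencil: z = L^{-T} y
Z = np.linalg.solve(L.T, Y)

assert np.allclose(A @ Z, (B @ Z) * lam)       # A z_i = lam_i B z_i
assert np.allclose(Z.T @ B @ Z, np.eye(4))     # B-orthonormality
```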
Most of the functions which solve this class of problems can also solve the closely related problems
$ABx=\lambda x \quad\text{or}\quad BAx=\lambda x$
where again $A$ and $B$ are symmetric and $B$ is positive definite. See the function documents for details.
All the above remarks also apply – with the obvious changes – to the case when $A$ and $B$ are complex Hermitian matrices. Such problems are called Hermitian-definite. The eigenvectors are complex, but the eigenvalues are all real.
#### 2.3.2 Generalized nonsymmetric eigenvalue problems
Any generalized eigenproblem which is not symmetric-definite with well-conditioned $B$ must be handled as if it were a general nonsymmetric problem.
If $B$ is singular, the problem has infinite eigenvalues. These are not a problem; they are equivalent to zero eigenvalues of the problem $Bx=\mu Ax$. Computationally they appear as very large values.
If $A$ and $B$ are both singular and have a common null space, then $A-\lambda B$ is singular for all $\lambda $; in other words, any value $\lambda $ can be regarded as an eigenvalue. Pencils with this property are called singular.
As with standard nonsymmetric problems, a real problem may have complex eigenvalues, occurring as complex conjugate pairs.
The generalized eigenvalue problem can be solved via the generalized Schur factorization of $A$ and $B$:
$A=QUZ^{\mathrm{T}}, \quad B=QVZ^{\mathrm{T}}$
where $Q$ and $Z$ are orthogonal, $V$ is upper triangular, and $U$ is upper quasi-triangular (defined just as in Section 2.1.2).
If all the eigenvalues are real, then $U$ is upper triangular; the eigenvalues are given by ${\lambda }_{i}={u}_{ii}/{v}_{ii}$. If there are complex conjugate pairs of eigenvalues, then $U$ has $2$ by $2$ diagonal blocks.
Eigenvalues and eigenvectors of a generalized nonsymmetric problem may be ill-conditioned; that is, sensitive to perturbations in $A$ or $B$.
Particular care must be taken if, for some $i$, ${u}_{ii}={v}_{ii}=0$, or in practical terms if ${u}_{ii}$ and ${v}_{ii}$ are both small; this means that the pencil is singular, or approximately so. Not only is the particular value ${\lambda }_{i}$ undetermined, but also no reliance can be placed on any of the computed eigenvalues. See also the function documents.
Computation of eigenvalues and eigenvectors proceeds in the following stages.
1. The pencil $A-\lambda B$ is reduced by an orthogonal transformation to a pencil $H-\lambda K$ in which $H$ is upper Hessenberg and $K$ is upper triangular: $A={Q}_{1}H{Z}_{1}^{\mathrm{T}}$ and $B={Q}_{1}K{Z}_{1}^{\mathrm{T}}$. The pencil $H-\lambda K$ has the same eigenvalues as $A-\lambda B$, and is easier to handle.
2. The upper Hessenberg matrix $H$ is reduced to upper quasi-triangular form, while $K$ is maintained in upper triangular form, using the $QZ$ algorithm. This gives the generalized Schur factorization: $H={Q}_{2}U{Z}_{2}^{\mathrm{T}}$ and $K={Q}_{2}V{Z}_{2}^{\mathrm{T}}$.
3. Eigenvectors of the pencil $U-\lambda V$ are computed (if required) by back-substitution, and pre-multiplied by ${Z}_{1}{Z}_{2}$ to give eigenvectors of $A$.
All the above remarks also apply – with the obvious changes – to the case when $A$ and $B$ are complex matrices. The eigenvalues are in general complex, so there is no need for special treatment of complex conjugate pairs, and the matrix $U$ in the generalized Schur factorization is simply a complex upper triangular matrix.
## 3 Recommendations on Choice and Use of Available Functions
### 3.1 Black Box Functions and General Purpose Functions
Functions in the NAG C Library for solving eigenvalue problems fall into two categories.
1. Black Box Functions: these are designed to solve a standard type of problem in a single call – for example, to compute all the eigenvalues and eigenvectors of a real symmetric matrix. You are recommended to use a black box function if there is one to meet your needs; refer to the decision tree in Section 4.1 or the index in Section 5.
2. General Purpose Functions: these perform the computational subtasks which make up the separate stages of the overall task, as described in Section 2 – for example, reducing a real symmetric matrix to tridiagonal form. General purpose functions are to be found, for historical reasons, some in this chapter, a few in Chapter f01, but most in Chapter f08. If there is no black box function that meets your needs, you will need to use one or more general purpose functions.
Here are some of the more likely reasons why you may need to do this:
• Your problem is already in one of the reduced forms – for example, your symmetric matrix is already tridiagonal.
• You wish to economize on storage for symmetric matrices (see Section 3.3).
• You wish to find selected eigenvalues or eigenvectors of a generalized symmetric-definite eigenproblem (see also Section 3.2).
The decision trees in Section 4.2 list the combinations of general purpose functions which are needed to solve many common types of problem.
Sometimes a combination of a black box function and one or more general purpose functions will be the most convenient way to solve your problem: the black box function can be used to compute most of the results, and a general purpose function can be used to perform a subsidiary computation, such as computing condition numbers of eigenvalues and eigenvectors.
### 3.2 Computing Selected Eigenvalues and Eigenvectors
The decision trees and the function documents make a distinction between functions which compute all eigenvalues or eigenvectors, and functions which compute selected eigenvalues or eigenvectors; the two classes of function use different algorithms.
It is difficult to give clear guidance on which of these two classes of function to use in a particular case, especially with regard to computing eigenvectors. If you only wish to compute a very few eigenvectors, then a function for selected eigenvectors will be more economical, but if you want to compute a substantial subset (an old rule of thumb suggested more than 25%), then it may be more economical to compute all of them. Conversely, if you wish to compute all the eigenvectors of a sufficiently large symmetric tridiagonal matrix, the function for selected eigenvectors may be faster.
The choice depends on the properties of the matrix and on the computing environment; if it is critical, you should perform your own timing tests.
For dense nonsymmetric eigenproblems, there are no algorithms provided for computing selected eigenvalues; it is always necessary to compute all the eigenvalues, but you can then select specific eigenvectors for computation by inverse iteration.
### 3.3 Storage Schemes for Symmetric Matrices
Functions which handle symmetric matrices are usually designed to use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If either the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data. However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length $n\left(n+1\right)/2$; in other words, the storage is almost halved. This storage format is referred to as packed storage.
Functions designed for packed storage are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency.
A band matrix is one whose nonzero elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required.
Functions which take advantage of packed storage or bandedness are provided for both standard symmetric eigenproblems and generalized symmetric-definite eigenproblems.
### 3.4 Balancing for Nonsymmetric Eigenproblems
There are two preprocessing steps which one may perform on a nonsymmetric matrix $A$ in order to make its eigenproblem easier. Together they are referred to as balancing.
1. Permutation: this involves reordering the rows and columns to make $A$ more nearly upper triangular (and thus closer to Schur form): ${A}^{\prime }=PA{P}^{\mathrm{T}}$, where $P$ is a permutation matrix. If $A$ has a significant number of zero elements, this preliminary permutation can reduce the amount of work required, and also improve the accuracy of the computed eigenvalues. In the extreme case, if $A$ is permutable to upper triangular form, then no floating point operations are needed to reduce it to Schur form.
2. Scaling: a diagonal matrix $D$ is used to make the rows and columns of ${A}^{\prime }$ more nearly equal in norm: ${A}^{\prime \prime }=D{A}^{\prime }{D}^{-1}$. Scaling can make the matrix norm smaller with respect to the eigenvalues, and so possibly reduce the inaccuracy contributed by roundoff (see Chapter II/11 of Wilkinson and Reinsch (1971)).
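The scaling step can be illustrated with a small NumPy sketch (the diagonal matrix $D$ below is chosen by hand purely for illustration; the library functions select $D$ automatically). The similarity transformation leaves the eigenvalues unchanged while equalizing the row and column norms:

```python
import numpy as np

A = np.array([[1.0,  1e4],
              [1e-4, 2.0]])          # badly scaled rows and columns

D = np.diag([1e-2, 1e2])             # hand-picked scaling, for illustration
A2 = D @ A @ np.linalg.inv(D)        # A'' = D A' D^{-1}; off-diagonals become 1

# a similarity transformation: the eigenvalues are unchanged
assert np.allclose(np.sort(np.linalg.eigvals(A2)),
                   np.sort(np.linalg.eigvals(A)))
```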
Functions are provided in Chapter f08 for performing either or both of these preprocessing steps, and also for transforming computed eigenvectors or Schur vectors back to those of the original matrix.
Black box functions in this chapter which compute the Schur factorization perform only the permutation step, since diagonal scaling is not in general an orthogonal transformation. The black box functions which compute eigenvectors perform both forms of balancing.
### 3.5 Non-uniqueness of Eigenvectors and Singular Vectors
Eigenvectors, as defined by equations (1) or (4), are not uniquely defined. If $x$ is an eigenvector, then so is $kx$ where $k$ is any nonzero scalar. Eigenvectors computed by different algorithms, or on different computers, may appear to disagree completely, though in fact they differ only by a scalar factor (which may be complex). These differences should not be significant in any application in which the eigenvectors will be used, but they can arouse uncertainty about the correctness of computed results.
Even if eigenvectors $x$ are normalized so that ${‖x‖}_{2}=1$, this is not sufficient to fix them uniquely, since they can still be multiplied by a scalar factor $k$ such that $\left|k\right|=1$. To counteract this inconvenience, most of the functions in this chapter, and in Chapter f08, normalize eigenvectors (and Schur vectors) so that ${‖x‖}_{2}=1$ and the component of $x$ with largest absolute value is real and positive. (There is still a possible indeterminacy if there are two components of equal largest absolute value – or in practice if they are very close – but this is rare.)
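The normalization described above can be sketched as follows (a NumPy illustration; the `normalize` helper is hypothetical, not a NAG function):

```python
import numpy as np

def normalize(x):
    """Scale x so that ||x||_2 = 1 and the component of largest
    absolute value is real and positive, as described above."""
    x = np.asarray(x, dtype=complex) / np.linalg.norm(x)
    big = x[np.argmax(np.abs(x))]
    return x * (big.conjugate() / abs(big))    # unit-modulus phase factor

# two computed eigenvectors differing only by a scalar of modulus 1
x = np.array([1 + 1j, 2 - 1j, 0.5j])
y = x * np.exp(0.7j)

# after normalization they agree
assert np.allclose(normalize(x), normalize(y))
assert np.isclose(np.linalg.norm(normalize(x)), 1.0)
```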
In symmetric problems the computed eigenvalues are sorted into ascending order, but in nonsymmetric problems the order in which the computed eigenvalues are returned is dependent on the detailed working of the algorithm and may be sensitive to rounding errors. The Schur form and Schur vectors depend on the ordering of the eigenvalues and this is another possible cause of non-uniqueness when they are computed. However, it must be stressed again that variations in the results from this cause should not be significant. (Functions in Chapter f08 can be used to transform the Schur form and Schur vectors so that the eigenvalues appear in any given order if this is important.)
In singular value problems, the left and right singular vectors $u$ and $v$ which correspond to a singular value $\sigma $ cannot be normalized independently: if $u$ is multiplied by a factor $k$ such that $\left|k\right|=1$, then $v$ must also be multiplied by $k$.
Non-uniqueness also occurs among eigenvectors which correspond to a multiple eigenvalue, or among singular vectors which correspond to a multiple singular value. In practice, this is more likely to be apparent as the extreme sensitivity of eigenvectors which correspond to a cluster of close eigenvalues (or of singular vectors which correspond to a cluster of close singular values).
## 4 Decision Trees
### 4.1 Black Box Functions
The decision tree for this section is divided into three sub-trees.
Note: for the Chapter f08 functions there is generally a choice of simple and comprehensive function. The comprehensive functions return additional information such as condition and/or error estimates.
### Tree 1: Eigenvalues and Eigenvectors of Real Matrices
- Is the eigenproblem $Ax=\lambda Bx$?
  - yes: Are $A$ and $B$ symmetric with $B$ positive definite and well-conditioned with respect to inversion?
    - yes: Are eigenvalues only required?
      - yes: f02adc
      - no: f02aec
    - no: f02bjc
  - no: The eigenproblem is $Ax=\lambda x$. Is $A$ symmetric?
    - yes: Are eigenvalues only required?
      - yes: f02aac
      - no: f02abc
    - no: Are eigenvalues only required?
      - yes: f02afc
      - no: Is the Schur factorization required?
        - yes: see Chapter f08
        - no: Are all eigenvectors required?
          - yes: f02agc
          - no: f02ecc
### Tree 2: Eigenvalues and Eigenvectors of Complex Matrices
• Is the eigenproblem $Ax=\lambda Bx$?
  • yes: see Chapter f08
  • no: The eigenproblem is $Ax=\lambda x$. Is $A$ Hermitian?
    • yes: Are eigenvalues only required? (yes: f02awc; no: f02axc)
    • no: Are eigenvalues only required?
      • yes: see Chapter f08
      • no: Is the Schur factorization required?
        • yes: see Chapter f08
        • no: Are all eigenvectors required? (yes: see Chapter f08; no: f02gcc)
### Tree 3: Singular Values and Singular Vectors
• Is $A$ a complex matrix? (yes: f02xec; no: f02wec)
### 4.2 General Purpose Functions (Eigenvalues and Eigenvectors)
Functions for large sparse eigenvalue problems are to be found in Chapter f12; see the f12 Chapter Introduction.
The decision tree for this section, addressing dense problems, is divided into eight sub-trees:
• Tree 1 Real Symmetric Eigenvalue Problems in the f08 Chapter Introduction
• Tree 2 Real Generalized Symmetric-definite Eigenvalue Problems in the f08 Chapter Introduction
• Tree 3 Real Nonsymmetric Eigenvalue Problems in the f08 Chapter Introduction
• Tree 4 Real Generalized Nonsymmetric Eigenvalue Problems in the f08 Chapter Introduction
• Tree 5 Complex Hermitian Eigenvalue Problems in the f08 Chapter Introduction
• Tree 6 Complex Generalized Hermitian-definite Eigenvalue Problems in the f08 Chapter Introduction
• Tree 7 Complex non-Hermitian Eigenvalue Problems in the f08 Chapter Introduction
• Tree 8 Complex Generalized non-Hermitian Eigenvalue Problems in the f08 Chapter Introduction
As it is very unlikely that one of the functions in this section will be called on its own, the other functions required to solve a given problem are listed in the order in which they should be called.
### 4.3 General Purpose Functions (Singular Value Decomposition)
See Section 4.2 in the f08 Chapter Introduction. For real sparse matrices where only selected singular values are required (possibly with their singular vectors), functions from Chapter f12 may be applied to the symmetric matrix ${A}^{\mathrm{T}}A$; see Section 9 in nag_real_symm_sparse_eigensystem_iter (f12fbc).
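The $A^{\mathrm{T}}A$ route can be illustrated concretely. The following NumPy sketch (an illustration only, not a NAG Library call) checks that the eigenvalues of $A^{\mathrm{T}}A$ are the squared singular values of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# eigenvalues of the symmetric matrix A^T A, in ascending order
evals = np.linalg.eigvalsh(A.T @ A)

# their square roots, reordered descending, are the singular values of A
sing_from_eig = np.sqrt(np.maximum(evals, 0))[::-1]
sing_direct = np.linalg.svd(A, compute_uv=False)

print(np.allclose(sing_from_eig, sing_direct))
```

In practice, forming $A^{\mathrm{T}}A$ explicitly squares the condition number, which is why it is reserved for the sparse case where only a few singular values are wanted.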
## 5 Functionality Index
Black Box functions,
• complex eigenproblem,
  • selected eigenvalues and eigenvectors: nag_complex_eigensystem_sel (f02gcc)
• complex Hermitian eigenproblem,
  • all eigenvalues: nag_hermitian_eigenvalues (f02awc)
  • all eigenvalues and eigenvectors: nag_hermitian_eigensystem (f02axc)
• complex singular value problem: nag_complex_svd (f02xec)
• real eigenproblem,
  • all eigenvalues: nag_real_eigenvalues (f02afc)
  • all eigenvalues and eigenvectors: nag_real_eigensystem (f02agc)
  • selected eigenvalues and eigenvectors: nag_real_eigensystem_sel (f02ecc)
• real generalized eigenproblem,
  • all eigenvalues and optionally eigenvectors: nag_real_general_eigensystem (f02bjc)
• real generalized symmetric-definite eigenproblem,
  • all eigenvalues: nag_real_symm_general_eigenvalues (f02adc)
  • all eigenvalues and eigenvectors: nag_real_symm_general_eigensystem (f02aec)
• real singular value problem: nag_real_svd (f02wec)
• real symmetric eigenproblem,
  • all eigenvalues: nag_real_symm_eigenvalues (f02aac)
  • all eigenvalues and eigenvectors: nag_real_symm_eigensystem (f02abc)
General Purpose functions (see also Chapter f12),
• real m by n matrix, leading terms SVD: nag_real_partial_svd (f02wgc)
None.
## 7 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Parlett B N (1998) The Symmetric Eigenvalue Problem SIAM, Philadelphia
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer–Verlag
http://www.reference.com/browse/Incidence+graph
# Incidence structure
In combinatorial mathematics, an incidence structure is a triple
$$C=(P,L,I),$$
where $P$ is a set of "points", $L$ is a set of "lines" and $I \subseteq P \times L$ is the incidence relation. The elements of $I$ are called flags. If
$$(p,\ell) \in I,$$
we say that point $p$ "lies on" line $\ell$.
## Comparison with other structures
A figure may look like a graph, but in a graph an edge has just two ends (beyond a vertex a new edge starts), while a line in an incidence structure can be incident to more points.
An incidence structure has no concept of a point being in between two other points, the order of points on a line is undefined.
## Dual structure
If we interchange the role of "points" and "lines" in
$$C=(P,L,I),$$
the dual structure
$$C^*=(L,P,I^*)$$
is obtained, where $I^*$ is the inverse relation of $I$. Clearly
$$C^{**}=C.$$
A structure $C$ that is isomorphic to its dual $C^*$ is called self-dual.
## Correspondence with hypergraphs
Each hypergraph or set system can be regarded as an incidence structure in which the universal set plays the role of "points", the corresponding family of sets plays the role of "lines" and the incidence relation is set membership "∈". Conversely, every incidence structure can be viewed as a hypergraph.
### Example: Fano plane
In particular, let
P = {1,2,3,4,5,6,7},
L = {{1,2,4},{2,3,5},{3,4,6},{4,5,7},{5,6,1},{6,7,2},{7,1,3}}
The corresponding incidence structure is called the Fano plane.
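The defining properties of this structure are easy to verify mechanically (a Python sketch of my own, not part of the article):

```python
from itertools import combinations

points = {1, 2, 3, 4, 5, 6, 7}
lines = [{1, 2, 4}, {2, 3, 5}, {3, 4, 6}, {4, 5, 7},
         {5, 6, 1}, {6, 7, 2}, {7, 1, 3}]

# every line carries exactly 3 points, and every point lies on exactly 3 lines
assert all(len(l) == 3 for l in lines)
assert all(sum(p in l for l in lines) == 3 for p in points)

# any two distinct points lie on exactly one common line
assert all(sum({p, q} <= l for l in lines) == 1
           for p, q in combinations(points, 2))

# the flags of the structure: the 7 x 3 = 21 incident (point, line) pairs
flags = [(p, i) for i, l in enumerate(lines) for p in l]
print(len(flags))
```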
## Geometric representation
Incidence structures can be modelled by points and curves in the Euclidean plane with usual geometric incidence. Some incidence structures admit representation by points and lines. The Fano plane is not one of them since it needs at least one curve.
## Levi graph of an incidence structure
Each incidence structure $C$ corresponds to a bipartite graph called Levi graph or incidence graph with a given black and white vertex coloring where black vertices correspond to points and white vertices correspond to lines of $C$ and the edges correspond to flags.
### Example: Heawood graph
For instance, the Levi graph of the Fano plane is the Heawood graph. Since the Heawood graph is connected and vertex-transitive, there exists an automorphism (such as a reflection in the standard drawing) interchanging black and white vertices. This, in turn, implies that the Fano plane is self-dual.
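The Levi-graph construction is easy to carry out concretely (a hedged Python sketch; the vertex labels are my own choice). For the Fano plane it yields a 3-regular bipartite graph on 14 vertices with 21 edges, which are the parameters of the Heawood graph:

```python
from collections import Counter

points = [1, 2, 3, 4, 5, 6, 7]
lines = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7),
         (5, 6, 1), (6, 7, 2), (7, 1, 3)]

# Levi graph: black vertices = points, white vertices = lines,
# one edge per flag (incident point-line pair)
edges = [(p, ('line', i)) for i, l in enumerate(lines) for p in l]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# 14 vertices, 21 edges, 3-regular
print(len(degree), len(edges), set(degree.values()))
```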
http://ams.org/bookstore?fn=20&arg1=gsmseries&ikey=GSM-123
Lectures on Linear Partial Differential Equations
Gregory Eskin, University of California, Los Angeles, CA
Graduate Studies in Mathematics
2011; 410 pp; hardcover
Volume: 123
ISBN-10: 0-8218-5284-1
ISBN-13: 978-0-8218-5284-2
List Price: US\$74
Member Price: US\$59.20
Order Code: GSM/123
See also:
Partial Differential Equations: Second Edition - Lawrence C Evans
This book is a reader-friendly, relatively short introduction to the modern theory of linear partial differential equations. An effort has been made to present complete proofs in an accessible and self-contained form.
The first three chapters are on elementary distribution theory and Sobolev spaces with many examples and applications to equations with constant coefficients. The following chapters study the Cauchy problem for parabolic and hyperbolic equations, boundary value problems for elliptic equations, heat trace asymptotics, and scattering theory. The book also covers microlocal analysis, including the theory of pseudodifferential and Fourier integral operators, and the propagation of singularities for operators of real principal type. Among the more advanced topics are the global theory of Fourier integral operators and the geometric optics construction in the large, the Atiyah-Singer index theorem in $$\mathbb R^n$$, and the oblique derivative problem.
Readership
Graduate students and research mathematicians interested in partial differential equations.
http://mathhelpforum.com/calculus/170854-quick-question-about-divergent-convergent.html
# Thread:
1. ## Quick question about divergent and convergent
Here is a problem:
$\displaystyle \int_0^4{\frac{\log{x}}{\sqrt{x}}\,dx}$
If you integrate that, you would get:
$\displaystyle 2\sqrt{x}\log{x} - 4\sqrt{x}$
Now using the Fundamental Theorem of Calculus, $F(b) - F(a)$, you would get an answer of $4\ln(4) - 8$.
But how is that answer even possible? If you used the formula $F(b) - F(a)$, it would look something like this:
$\displaystyle 2\sqrt{4}\log{4} - 4\sqrt{4} - \left(2\sqrt{0}\log{0} - 4\sqrt{0}\right)$
Wouldn't the $\log(0)$ give an indeterminate answer? Doesn't that automatically qualify the problem to be divergent?
Same thing with this problem:
Integrate that and you would get:
Use the fundamental theorem of calculus and you would get this:
$\displaystyle 2\sqrt{4}\log{4} - 4\sqrt{4} - \left[2\sqrt{0}\log{0} - 4\sqrt{0}\right]$
This problem turns out to be divergent. But I'm not so sure if it is because of the ln(0) or what.
Please clarify how the answers are how they are for both of my questions. Thank you so much in advance.
2. These are examples of improper integrals. If the function is not defined at some point on your region of integration, you need to evaluate a limit.
In your first example, you need to rewrite this as
$\displaystyle \lim_{\epsilon \to 0}\int_{\epsilon}^{4}{\frac{\log{x}}{\sqrt{x}}\,dx } = \lim_{\epsilon \to 0}\left[2\sqrt{x}\log{x}-4\sqrt{x}\right]_{\epsilon}^{4}$
$\displaystyle = 2\sqrt{4}\log{4} - 4\sqrt{4} - \lim_{\epsilon \to 0}\left(2\sqrt{\epsilon}\log{\epsilon} - 4\sqrt{\epsilon}\right)$
Can you go from there?
In the second example, you need to break it up into two definite integrals.
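As a numerical aside (my own sketch, not part of the original thread), the vanishing of the boundary term in the first example is easy to see with the antiderivative above:

```python
import math

def F(x):
    # antiderivative of log(x)/sqrt(x)
    return 2 * math.sqrt(x) * math.log(x) - 4 * math.sqrt(x)

# the lower boundary term 2*sqrt(eps)*log(eps) - 4*sqrt(eps) tends to 0
for eps in (1e-2, 1e-6, 1e-12):
    print(eps, F(eps))

# so the improper integral converges to F(4) = 4 ln 4 - 8
print(F(4), 4 * math.log(4) - 8)
```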
3. Ah thank you so much for your help for the first one. I can finally solve it now.
For the second one, you said to break it up into two definite integrals.
Would this be it?
I chose those limits because I saw my book used it on an example problem. I'm not sure if the limits would be the same for every problem or it depends. Please clarify for me and thank you thus far.
4. No. The function is not defined where the denominator is zero, i.e. where $\displaystyle x = -1, 2$.
Since $\displaystyle x = 2$ is in your region of integration, that will be the point where you break the integral into two integrals.
So it should be $\displaystyle \int_0^2{\frac{1}{x^2 - x - 2}\,dx} + \int_2^3{\frac{1}{x^2 - x- 2}\,dx}$.
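To see numerically why both pieces diverge (a sketch of mine, using the partial-fraction antiderivative $\tfrac{1}{3}\ln\left|\tfrac{x-2}{x+1}\right|$, which is not spelled out in the thread):

```python
import math

def F(x):
    # antiderivative of 1/(x^2 - x - 2) = 1/((x - 2)(x + 1)) via partial fractions
    return (math.log(abs(x - 2)) - math.log(abs(x + 1))) / 3.0

# F(2 - eps) - F(0) blows up to -infinity as eps -> 0,
# so the integral over [0, 2) (and hence over [0, 3]) diverges
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, F(2 - eps) - F(0))
```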
5. Originally Posted by Prove It
No. The function is not defined where the denominator is zero, i.e. where $\displaystyle x = -1, 2$.
Since $\displaystyle x = 2$ is in your region of integration, that will be the point where you break the integral into two integrals.
So it should be $\displaystyle \int_0^2{\frac{1}{x^2 - x - 2}\,dx} + \int_2^3{\frac{1}{x^2 - x- 2}\,dx}$.
So that is how you figure out. Thank you so much for your help!
http://mathoverflow.net/questions/78505/can-you-perturb-a-submanifold-to-intersect-transversally-with-any-other-smoot
## Can you "perturb" a submanifold to intersect transversally with any other smooth submanifold of projective space?
Let $\mathcal{D} \approx \mathbb{P}^{\delta_d}$ be the space of homogeneous degree $d$ polynomials in three variables, where $\delta_d = \frac{d(d+3)}{2}$. Let $$X \subset \mathcal{D} \times \mathbb{P}^2$$
be a smooth embedded complex submanifold, not necessarily closed. Given a point $p\in \mathbb{P}^2$, we get a hyperplane $$\tilde{H}_p \subset \mathcal{D} \times \mathbb{P}^2.$$ Note that a point $p$ first of all gives a hyperplane $H_p$ in $\mathcal{D}$ (which is the space of degree $d$ polynomials passing through the point $p$). This gives us a hyperplane $$\tilde{H}_p := H_p\times \mathbb{P}^2 \subset \mathcal{D} \times \mathbb{P}^2.$$ Let us further define $H_p^* \subset H_p$ to be the space of degree $d$ curves such that $p$ is a smooth point of the curve. Similarly define $$\tilde{H}_p^* := H_p^*\times \mathbb{P}^2 \subset \mathcal{D} \times \mathbb{P}^2.$$
Is it true that for almost all choices of $p \in \mathbb{P}^2$, $\tilde{H}^*_p$ is transverse to $X$?
-
## 2 Answers
My first answer was wrong. The answer is no.
First observe that the embedding $X\to \mathcal D\times\mathbb P^2$ is a bit distracting. The map $X\to \mathbb P^2$ plays no role in the question of transversality to a given $\tilde H_q$. So I think about the problem like this:
You have smooth maps
$\mathbb P^2\leftarrow H\to \mathcal D\leftarrow X$,
where $H\subset \mathcal D\times \mathbb P^2$ is as in my wrong answer (i.e. the space of all pairs $(f,q)$ where $q$ is on the curve defined by $f$). The projection $H\to \mathbb P^2$ is a submersion, so the fiber $H_q$ over any $q$ is a manifold, and the question is whether for a dense set of $q$ the map $H_q\to \mathcal D$ is transverse to the map $X\to \mathcal D$.
(We say that two maps $A\to B\leftarrow C$ are transverse if whenever points $a$ and $c$ both go to the point $b$ then the tangent space $T_bB$ is spanned by the images of $T_aA$ and $T_cC$.)
If the map $H\to \mathcal D$ were a submersion, then the answer would be yes (for any smooth $X$ and any map $X\to \mathcal D$), by the following argument:
The submersion $H\to \mathcal D$ is transverse to any $X\to \mathcal D$. Thus the fiber product $Y=H\times_{\mathcal D}X$ is a manifold, and a little bit of playing with tangent spaces yields that those $q$ for which $H_q\to \mathcal D$ is transverse to $X\to \mathcal D$ are precisely the regular values of the projection $Y\to \mathbb P^2$. In particular the transversality holds for a dense set of choices of $q$.
But $H\to \mathcal D$ is not a submersion; this fails at precisely those points $(f,q)$ such that $q$ is a non-smooth point of the curve defined by $f$.
And there are counterexamples with $d=2$. Let's work in an affine plane in $\mathbb P^2$ with coordinates $(x,y)$. For each $q=(x_0,y_0)$ the quadratic equation $(x-x_0)(y-y_0)=0$ defines a curve through $q$. Let $X\subset \mathcal D$ be this two-dimensional family. For any $(x_0,y_0)$ the manifold $H_q$ intersects $X$ non-transversely.
-
Thank you for the answer. But I have a question about the counter example. Here your X also depends on the point q. So as you change q, both the X and the H move. In my question, X does not depend on q. As you change q only H should change. X should remain fixed. So do you still think this is a counter example to my question? – Ritwik Oct 21 2011 at 19:08
I did not express myself very well. I meant that $X$ is parametrized by pairs $q=(x_0,y_0)$, and that for any given point $q$ the manifold $H_q$ fails to be transverse to $X$ (where it meets $X$, namely at the point in $X$ corresponding to $q$). Does that make sense? – Tom Goodwillie Oct 22 2011 at 0:04
I have edited the question slightly. According to your argument, the answer to my question should now be yes, since $\pi_{D}$ is a submersion. Am I correct? – Ritwik Oct 25 2011 at 2:48
Yes, I believe so. – Tom Goodwillie Oct 25 2011 at 11:10
One further thing. Requiring $\pi_{\mathcal{D}}$ to be a submersion is a sufficient criterion, but not a necessary one. I simply need $\pi_{\mathcal{D}}$ to be transverse to $X$ (which is guaranteed if it is a submersion). – Ritwik Oct 25 2011 at 13:22
Recall that for a smooth map $f:Y\to Z$ we say that $z\in Z$ is a critical value for $f$ if there is a point $y\in Y$ such that $f(y)=z$ and such that the induced map of tangent spaces $T_yY\to T_zZ$ is not surjective. A point in $Z$ is called a regular value of $f$ if it is not a critical value of $f$ (in particular if it is not in the image of $f$ at all). The usual tool in differential topology for showing that something can be perturbed to make something transverse to something is Sard's Theorem, which says that the set of critical values of a $C^\infty$ map $f$ always has measure zero and in particular that the set of regular values is dense. Of course, in your case you have complex analytic rather than just $C^\infty$.
Let $H\subset\mathcal D\times \mathbb P^2$ be the union over $p\in \mathbb P^2$ of $H_p\times p$. This is a submanifold, and furthermore the projection $H\to \mathbb P^2$ is a submersion. (That is, every point of $\mathbb P^2$ is a regular value for this map.) It follows that the fiber product $X\times_{\mathbb P^2}H$ is a manifold. Consider the projection $$X\times_{\mathbb P^2}H\to \mathbb P^2.$$ I claim that the regular values of this map are precisely those $q$ such that $X$ is transverse to $\tilde H_q$. That gives what you want.
EDIT: This answer is wrong. I will have to think about it some more.
-
Thank you. This is a very neat proof! I assume that in this case I can also conclude that the regular values form an OPEN dense subset of $P^2$? – Ritwik Oct 19 2011 at 2:58
No. Its complement is the image of a closed set (the set of critical points), but that does not guarantee that it itself is closed unless $X$ is compact. – Tom Goodwillie Oct 19 2011 at 10:22
I am sorry, but your last statement.... that the regular values of the projection map are those q such that \tilde{H}_q is transverse to X........is it obvious? – Ritwik Oct 20 2011 at 18:19
http://math.stackexchange.com/questions/139699/what-are-some-examples-of-a-mathematical-result-being-counterintuitive/139795
# What are some examples of a mathematical result being counterintuitive?
As I procrastinate studying for my Maths Exams, I want to know what are some cool examples of where math counters intuition.
My first and favorite experience of this is Gabriel's Horn that you see in intro Calc course, where the figure has finite volume but infinite surface area (I later learned of Koch's snowflake which is a 1d analog). I just remember doing out the integrals for it and thinking that it was unreal. I later heard the remark that you can fill it with paint, but you can't paint it, which blew my mind.
Also, philosophically/psychologically speaking, why does this happen? It seems that our intuition often guides us and is often correct for "finite" things, but when things become "infinite" our intuition flat-out fails.
-
Why does it happen? Because our intuition is developed by dealing with finite things: it is quite unsurprising that we are surprised by phenomena specific to infinite objects! This is exactly the same as the fact that our bodies are trained to move and act under the effect of gravity, so when we are in space we become clumsy and need to retrain. Intuition is not fixed: if you study phenomena associated to infinite objects, you develop an intuition for that, and presumably people working with large cardinals, (cont.) – Mariano Suárez-Alvarez♦ May 2 '12 at 1:13
(cont) or strange objects like graphs with chromatic number $\aleph_8$ or Banach-Tarski partitions of a sphere, after a while find them just as intuitive as you and me find the formula for the area of a triangle. Intuition is, in most situations, just a name we put on familiarity. – Mariano Suárez-Alvarez♦ May 2 '12 at 1:15
– Qiaochu Yuan May 2 '12 at 1:20
I think remarks like "you can fill it with paint, but you can't paint it" are actually not helpful. In trying to appeal to our everyday intuition, they get in the way of mathematical understanding. Of course, you can't paint Gabriel's Horn (its surface area is infinite) but you can't fill it with paint either (because paint molecules have a finite size, and Gabriel's Horn gets infinitely thin). Or, more prosaically, you can't fill Gabriel's Horn with paint because it's a mathematical idealisation that doesn't exist in the physical world. – Chris Taylor May 2 '12 at 7:35
"In mathematics you don't understand things. You just get used to them." ---John von Neumann. – Nate Eldredge May 2 '12 at 19:33
## 39 Answers
Here's a counterintuitive example from The Cauchy Schwarz Master Class, about what happens to cubes and spheres in high dimensions:
Consider an $n$-dimensional cube with side length 4, $B=[-2,2]^n$, with radius-1 spheres placed inside it at every corner of the smaller cube $[-1,1]^n$, i.e. the set of spheres centered at the coordinates $(\pm 1,\pm 1, \dots, \pm 1)$, each just barely touching its neighbors and the wall of the enclosing box. Place another sphere $S$ at the center of the box at 0, large enough so that it just barely touches all of the other spheres in each corner. (The original answer includes a diagram for dimensions $n=2$ and $n=3$.)
Does the box always contain the central sphere? (That is, is $S \subset B$?)
Surprisingly, no! The radius of the central sphere $S$ actually diverges as the dimension increases, as a simple calculation shows: the corner-sphere centers lie at distance $\sqrt{n}$ from the origin, so $S$ has radius $\sqrt{n}-1$, which grows without bound.
The crossover point is dimension $n=9$, where the central sphere (radius $\sqrt{9}-1=2$) just barely touches the faces of the box, as well as each of the $512$(!) spheres in the corners. In fact, in high dimensions nearly all of the central sphere's volume is outside the box.
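That radius formula is trivial to tabulate (a sketch added here, not part of the original answer):

```python
import math

def central_radius(n):
    # corner-sphere centers (+/-1, ..., +/-1) lie at distance sqrt(n) from
    # the origin, and each corner sphere has radius 1
    return math.sqrt(n) - 1

for n in (2, 3, 9, 10, 100):
    print(n, central_radius(n))

# at n = 9 the radius is exactly 2, the distance from the center to a face
# of [-2, 2]^n; beyond that the central sphere pokes outside the box
```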
-
But the volume of the box diverges just as well. As you increase dimensions shouldn't you expect everything to just keep growing? – Steven-Owen Nov 3 '12 at 17:02
It's somewhat counterintuitive that simple symmetric random walks in 1 dimension and in 2 dimensions return to the origin with probability 1.
Once one has absorbed that fact, it may be somewhat counterintuitive that the same thing is not true in higher dimensions.
(see Proving that 1- and 2-d simple symmetric random walks return to the origin with probability 1, Examples of results failing in higher dimensions, and Pólya's Random Walk Constant)
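A small Monte Carlo sketch (my own, with an arbitrary cutoff of 2000 steps, so it only estimates lower bounds on the return probabilities) makes the dimension dependence visible:

```python
import random

def return_rate(dim, steps, trials, seed=0):
    # fraction of simple symmetric walks on Z^dim that revisit the origin
    # within the given number of steps
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)
            pos[axis] += rng.choice((-1, 1))
            if all(c == 0 for c in pos):
                count += 1
                break
    return count / trials

p1 = return_rate(1, 2000, 500)
p3 = return_rate(3, 2000, 500)
print(p1, p3)  # the 1-d walk returns far more often than the 3-d walk
```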
-
As some other people said, "intuition is highly subjective". Different people think about problems in different ways.
That said, there are many, many counter-intuitive results in mathematics. This is why people demand rigorous proof! ;-)
• Almost any result involving probability. Humans suck at probability! (E.g., the birthday paradox: The probability that anyone in the room shares the same birthday as you is very small, unless you have a lot of people. But the probability that anybody in the room shares a birthday is very high. Way higher than you'd imagine...)
• Almost any result involving infinite sets. Infinity doesn't behave how you'd expect at all! ("Infinity" actually comes in different sizes. $\mathbb{Q}$ is the same size as $\mathbb{N}$, despite being a superset of it. Subtracting an infinite set from an infinite set can yield a result of positive finite size. Etc.)
• Several results about things which are impossible to compute. (E.g., the halting problem looks like it should be really, really easy, but it's actually impossible. Rice's theorem also sounds completely ludicrous. The busy beaver function is non-computable, regardless of how easy it looks. And so forth.)
• Fractal geometry contains a few results which break people's minds. (E.g., a polygon with infinite perimeter and zero area. A Julia set where every point simultaneously touches three basins of attraction. A connected curve that is nowhere differentiable...)
I could probably think of more, given enough time...
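The birthday numbers mentioned above are quick to compute exactly (a sketch of mine; it assumes 365 equally likely birthdays and ignores leap years):

```python
def p_any_shared(k):
    # probability that some pair among k people shares a birthday
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (365 - i) / 365
    return 1 - p_distinct

def p_shares_yours(k):
    # probability that at least one of k other people has YOUR birthday
    return 1 - (364 / 365) ** k

print(p_any_shared(23))    # just over 1/2 with only 23 people
print(p_shares_yours(22))  # still small for the 22 other people in that room
```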
-
The Monty Hall Problem is another finite example which most people find highly counter-intuitive. I believe even Erdos refused to believe its solution was correct for a while.
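The counterintuitive 2/3 advantage of switching is easy to confirm by simulation (a minimal sketch, not part of the original answer):

```python
import random

def monty(trials, switch, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # the host opens a goat door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # move to the remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty(10000, switch=True))   # close to 2/3
print(monty(10000, switch=False))  # close to 1/3
```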
-
Here is a reference to the story about Erdős, but I agree with this guy's interpretation: "I doubt Erdős was really confused. The Monty Hall problem is complicated because usually the person explaining it tries to make it complicated by leaving out necessary information." – Dan Brumleve May 2 '12 at 7:30
I heard the story from a mathematician who was actually there when Erdös learned about the problem (Ken Binmore). Erdös was confused about the problem. – Michael Greinecker May 2 '12 at 9:07
I've heard that most of the confusion was caused by faulty or conflicting statements of the problem. – rschwieb May 2 '12 at 18:17
For me, everything becomes crystal clear if the number of doors is changed from 3 to 100. Then saying that switching doesn't make a difference is akin to saying you have good chances at guessing a secret number between 1 and a 100 on your first try. – Alex May 3 '12 at 0:23
I was going to mention the Monty Hall problem as well. Other examples in probability are the waiting time paradox and Benford's law for lead digits. Fir contingency tables in statistics there is Simpson's paradox. Probability has a wealth of counterintuitive examples – Michael Chernick May 10 '12 at 20:03
The topological manifold $\mathbb{R}^n$ has a unique smooth structure up to diffeomorphism... as long as $n \neq 4$.
However, $\mathbb{R}^4$ admits uncountably many exotic smooth structures.
-
The only dimension for which $\mathbb{R}^n$ admits exotic smooth structures is $n = 4$... I just can't get over it. – Jesse Madnick May 4 '12 at 7:47
@JessMadnich: Why is this? When or how does "4" enter the proof? – Nick Kidman May 6 '12 at 21:10
Interesting coincidence: the only dimension for which $\mathbb{R}^n$ admits a (non-commutative) skew field structure, compatible with the multiplication of $\mathbb{R}$, is also $n=4$. – N. S. May 8 '12 at 16:38
There are no coincidences in mathematics - only reasons too abstract for us to have spotted yet :) – Chris Taylor May 11 '12 at 10:35
It is possible to define a curve which fills every point of a two-dimensional square (or, more generally, an $n$-dimensional hypercube). Such curves are called space-filling curves, or sometimes Peano curves.
More precisely, there is a continuous surjection from the interval $I$ onto the square $I\times I$.
This is related to the (also counter-intuitive?) result of Cantor, that the cardinality of the number of points in the unit interval is the same as the that of the unit square, or indeed any finite-dimensional manifold.
-
– robjohn♦ May 2 '12 at 19:24
Whether something is intuitive or counterintuitive is a very subjective matter. Lots of results are counterintuitive if you don't have the correct intuition. But here's one elementary result of my own that you may find counterintuitive.
Suppose $N$ players are to conduct a knockout tournament. Their starting positions, on the leaves of a rooted binary tree, are chosen randomly, all such assignments being equally likely. When two players are at the children of an unoccupied node, they play a game and the winner (ties are not allowed) advances to that node. The winner of the tournament is the player who reaches the root. We assume that in any game between two given players $i$ and $j$, the probability that $i$ wins is a given number $p_{ij}$, independent of past history. These probabilities are assumed to satisfy strong stochastic transitivity, which means that if $p_{ij} \ge 1/2$ then $p_{ik} \ge p_{jk}$ for all $k$, i.e. if $i$ wins against $j$ at least half the time, then $i$ does at least as well as $j$ against any other player. Thus the probabilities $p_{ij}$ generate a consistent ordering of the players by ability.
Now it seems intuitive that under these conditions, better players have a better chance of winning the tournament. Indeed, it was conjectured that this was the case. However, it is not true, as I proved: "Stronger Players Need Not Win More Knockout Tournaments", Journal of the American Statistical Association 76 (1981) 950-951: http://www.tandfonline.com/doi/abs/10.1080/01621459.1981.10477747
-
3
Is there an in depth explanation of that available that's not behind a paywall? – Dan Neely May 2 '12 at 12:51
4
I haven't seen either paper, but the abstract of the Chen and Hwang paper Stronger players win more balanced knockout tournaments says that your counterintuitive result applies only for unbalanced tournaments. Is your counterintuitive result essentially that the strongest player might have to play more games than a weaker player? If so, the result seems much less surprising than it did at first. – MJD May 2 '12 at 15:34
1
It's more than that. In the particular example I found, one player gets a "bye" into the final round. The most probable way for one of the two weakest players (4 and 5) to win the tournament is to not only get that "bye" but to play one of the 13 identical players labelled 2 (whom both have probability $\epsilon$ of beating) rather than player 1 (whom they have no chance of beating). Player 2 is the only one who has a chance against player 1. – Robert Israel May 2 '12 at 18:49
2
If the worst player (5) gets the bye, player 4 is more likely to beat player 3 in the first round than player 5 would have; in the second round player 4 or 5 would then lose for sure, but player 3 would have had a chance to advance against 2. So 5 getting the bye increases player 1's chance of facing player 2 rather than 3 in the third round, and this is what gives 5 a better chance of winning the tournament than 4. – Robert Israel May 2 '12 at 19:01
show 5 more comments
Suppose we are tossing a fair coin. Then the expected waiting time for heads-heads is 6 throws, but the expected waiting time for tails-heads is 4 throws. This is very counterintuitive to me because the events heads-heads and tails-heads have the same probability, namely $\tfrac{1}{4}$. The general result is the following:
Suppose we are throwing a coin that has probability $p$ for heads and probability $q=1-p$ for tails. Let $V_{\text{HH}}$ be first time we encounter two heads in a row and $V_{\text{TH}}$ be the first time we encounter heads and tails in a row, i.e. $$V_{\text{HH}}(\omega)=\min\{n\geq 2\mid \omega\in H_{n-1}\cap H_n\},\\ V_{\text{TH}}(\omega)=\min\{n\geq 2\mid \omega\in H_{n-1}^c\cap H_n\},$$ where $H_n$ is the event that we see heads in the $n$'th throw. Then $$E[V_{\text{HH}}]=\frac{1+p}{p^2},\\ E[V_{\text{TH}}]=\frac{1}{pq}.$$ Putting $p=q=\tfrac{1}{2}$ we see that if our coin is a fair coin then $E[V_{\text{HH}}]=6$ and $E[V_{\text{TH}}]=4$.
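The two expected waiting times above are easy to check by simulation. A minimal Monte Carlo sketch in Python (the trial count and seed are illustrative choices of mine, not from the original answer):

```python
import random

def wait_for(pattern, p=0.5, rng=random.Random(0)):
    """Number of tosses until `pattern` (e.g. "HH" or "TH") first appears.
    The default rng is a single shared, deterministically seeded generator."""
    history, n = "", 0
    while not history.endswith(pattern):
        history += "H" if rng.random() < p else "T"
        n += 1
    return n

trials = 200_000
e_hh = sum(wait_for("HH") for _ in range(trials)) / trials
e_th = sum(wait_for("TH") for _ in range(trials)) / trials
print(e_hh, e_th)  # empirically close to 6 and 4
```

With $p=q=\tfrac{1}{2}$ the empirical averages match the formulas $E[V_{\text{HH}}]=(1+p)/p^2=6$ and $E[V_{\text{TH}}]=1/(pq)=4$.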
-
show 4 more comments
Really interesting question, I have some examples that many people find counterintuitive.
The set $\mathbb Q$ of rational numbers has the same cardinality as the set of natural numbers $\mathbb N$, although $\mathbb N$ is strictly contained in $\mathbb Q$. Similarly, many people find it counterintuitive that the even numbers are equal in cardinality to the naturals (i.e. the sets $\{2n \mid n \in \mathbb N\}$ and $\mathbb N$ have the same cardinality).
The set $\mathbb R$ has cardinality strictly greater than the set $\mathbb N$ (and so also of the set $\mathbb Q$) (so there's not just one type of infinity).
Another good example of a counterintuitive fact is the Banach-Tarski paradox stating that a ball can be decomposed in a finite number of pieces which can be glued together to build up two balls identical to the first one (I say that this is a paradox because the axiom of choice is clearly true :D).
If other examples come to my mind I'll add them later.
-
1
+1 for the Banach-Tarski paradox, it's the first that came to mind when read the question. I think that it is counter-intuitive because intuition would tell that any 3d object has volume. But no well-defined volume can be assigned to these pieces. – ypercube May 2 '12 at 21:31
4
"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" (According to Wikipedia, this is a quote from someone called Jerry Bona). – user1729 May 10 '12 at 13:33
Choose a natural number, for example $n=8$. Then pick a base, for example $b=2$, and finally select another natural number called the bump factor, for example $B=1000$. Then construct a sequence of natural numbers as follows: The first term of the sequence is simply $n$ written in expanded base $b$. $$m_{0}=2^{2+1}=8$$ The second term is obtained from the first by bumping the base $b$ by a factor of $B$ and then subtracting $1$ from the result. $$m_{1}=2000^{2000+1}-1=\sum_{k=0}^{2000}1999\cdot2000^{k}>10^{10^3}$$ The third term is obtained from the second by bumping the new base ($2000$) by a factor of $B$ and then subtracting $1$ from the result. Denoting $d=2\cdot 10^{6}$ we have $$m_{2}=1999d^{d}+1999d^{1999}+\cdots+1999d+1998>10^{10^7}$$ Continuing in this fashion we denote $e=2\cdot10^{9}$ and the next term is $$m_{3}=1999e^{e}+1999e^{1999}+\cdots+1999e+1997>10^{10^{10}}.$$ The next term $m_{4}$ has over 24 trillion decimal digits.
Intuition tells us that the sequence $(m_{r})$ goes to infinity, and very fast. However, this is not the case. Surprisingly, the sequence will reach $0$ in finitely many steps. That is, there is an $r\in \mathbb{N}$ for which $m_{r}=0$.
The sequence we constructed is an example of a Goodstein sequence, and the fact that it terminates is a very particular case of Goodstein's Theorem. This theorem is counterintuitive for two reasons. First because of what the theorem concludes. Roughly speaking, it states that any sequence of natural numbers of the type constructed above (i.e. a Goodstein sequence) will always terminate. Second, because of what is required to prove it. Goodstein's theorem is a fairly elementary statement about natural numbers (i.e. formulated within the Peano Axioms of Arithmetic) and yet its proof cannot be carried out using only these axioms. It requires infinite ordinals.
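The base-bumping step is easy to experiment with. Below is a sketch in Python; the helper name `rebase` and the tiny doubling-base example at the end are my own illustrative choices, not from the original answer:

```python
def rebase(n, b, b2):
    """Rewrite n from hereditary base-b notation, replacing every b by b2.
    Exponents are themselves rewritten recursively."""
    if n == 0:
        return 0
    total, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            total += d * b2 ** rebase(e, b, b2)
        e += 1
    return total

# First step of the example above: 8 = 2^(2+1), base bumped from 2 to 2000.
m1 = rebase(8, 2, 2000) - 1
assert m1 == 2000**2001 - 1

# A tiny Goodstein-style sequence (start 3, base doubled each step) does hit 0.
m, b, steps = 3, 2, 0
while m > 0:
    m = rebase(m, b, 2 * b) - 1
    b *= 2
    steps += 1
print(steps)  # reaches 0 after 9 steps
```

Starting from 8 with bump factor 1000 as in the text, later terms quickly become too large to compute in full, which is exactly the point.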
-
show 1 more comment
Just to throw in something different, it's pretty wild that Khinchin's constant is universal for almost every real number (except for rationals and a few other miscreants). By definition if $x$ has continued fraction $x=a_0+\frac{1}{a_1+\frac{1}{a_2+\ldots}}$, then for almost all $x$,
$\lim_{n\rightarrow\infty} (a_1a_2\cdots a_n)^{1/n}\approx 2.685$
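One can watch this geometric mean settle down numerically. A sketch in Python, using a 50-digit decimal value of $\pi$ as a stand-in for a "typical" real; the cutoff of 20 terms is an arbitrary illustrative choice (with 50 digits of precision, the first 20 partial quotients are reliable):

```python
from fractions import Fraction
import math

PI_50 = Fraction("3.14159265358979323846264338327950288419716939937510")

def cf_terms(x, k):
    """First k continued-fraction terms a0, a1, ... of a rational x."""
    terms = []
    for _ in range(k):
        a = x.numerator // x.denominator  # floor of x
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

terms = cf_terms(PI_50, 20)
print(terms[:6])  # [3, 7, 15, 1, 292, 1]

tail = terms[1:]  # a1, a2, ... (a0 is excluded, as in Khinchin's theorem)
gm = math.exp(sum(math.log(a) for a in tail) / len(tail))
print(round(gm, 3))  # in the neighborhood of Khinchin's constant 2.685
```

Twenty terms is far too few for convergence, but the geometric mean already hovers near the right value.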
-
3
This is cool but I'm not sure what intuition it violates. – Dan Brumleve May 2 '12 at 6:36
5
You do see this for decimal expansions. By the strong law of large numbers, if $x$ has decimal expansion $a_0.a_1 a_2 a_3 \ldots$, then for almost all $x$, $\lim_{n \to \infty} (1/n) (a_1 + \cdots + a_n) = 4.5$. – Michael Lugo May 3 '12 at 0:09
3
Khinchin's result has nothing to do with base 10. The continued fraction expansion of a number does not depend on what base you are using to write your numbers. – Johan May 4 '12 at 9:25
1
@Sam: think of it like this: for "random" $x$, the terms of the continued fraction are also "random". Heuristically, the relevant fact is something to the effect that the early terms in the sequence don't substantially affect the distribution of the later terms. – Hurkyl May 4 '12 at 14:36
show 3 more comments
I also think the Kakeya needle problem is worth mentioning (see http://mathworld.wolfram.com/KakeyaNeedleProblem.html). To me it is counterintuitive that there is no smallest set in which a needle of unit length can be freely rotated. Unless it has to be convex, of course.
-
5
This is great; I hadn't heard of that problem before. – joriki May 6 '12 at 7:27
show 3 more comments
The existence of countably infinite connected Hausdorff spaces is (to me) counterintuitive. (Just one example; I could think of others . . . . .)
Later edit: A Hausdorff space is a topological space in which, for every pair of points $x$ and $y$, there are open neighborhoods of $x$ and $y$ that do not intersect each other, i.e. $x$ and $y$ can, in a certain sense, be separated from each other.
A connected space is a topological space that cannot be broken into separate components having no proximity to each other. Imagine two disks remote from each other. No sequence of points in one disk can approach a point in the other as a limit. That's a space that is not connected.
Countable means either finite or countably infinite, as opposed to uncountably infinite, and that means one can list all the points in a sequence: $x_1,x_2,x_3,\ldots$. The sequence may be infinite, but each term in the sequence has only finitely many terms before it.
So figure out what a countable connected Hausdorff space is based on all that.
-
2
It would be interesting if you expanded. However, it sounds quite advanced theory. – Peter Tamaroff May 2 '12 at 2:30
5
– t.b. May 2 '12 at 6:50
3
It doesn't required any background beyond a semester of point-set topology. – Michael Hardy May 2 '12 at 17:07
show 3 more comments
I didn't think of this until today, but it's an important thing that I, and many other people, find completely mindboggling.
Let's consider properties, like "is red" or "has kidneys" or "has a heart". Now there's a certain sense in which two properties might be the same even though they don't look the same, which is that they might be true of exactly the same entities. For example, it might turn out that everything that has kidneys also has a heart and vice versa, so that even though the two properties have different meanings (kidneys are not the same as hearts), they amount to the same thing in practice.
Mathematics is of course full of such properties; consider for example the property ${\mathcal O}_1$ of being expressible in the form $2n+1$ for some integer $n$, and the property ${\mathcal O}_2$ of being expressible in the form $S_{n+1} - S_n$ for some pair of consecutive squares. Many theorems are of this type, that two seemingly different properties are actually the same.
So let's try to abstract away the senses of properties, leaving only the classes of things that possess them. We'll say that there are these entities called sets which are abstractions of properties. Things belong to a set exactly if they possess the property of which the set is the extension:
1. For every property $P(x)$, there is a corresponding set $\{x : P(x)\}$ of exactly those entities $x$ for which $P(x)$ is true.
2. An entity $y$ is a member of a set $\{x : P(x)\}$ if, and only if, $P(y)$ is true.
That seems utterly straightforward and utterly unexceptionable, and yet, it is utterly wrong.
There are completely mundane properties for which there is no corresponding set of all the entities with the property.
What? Who ordered that?
-
2
@jake: I am talking about Russell's paradox. Take $P(x) = x\not\in x$, $P(x) = (x\in x\implies 2+2=5)$, or $P(x) = \lnot\exists y: y\in x\wedge x\in y$. None of these properties has an extension. – MJD May 6 '12 at 19:23
show 1 more comment
Another elementary example: Connelly spheres, also known as flexible polyhedra. These are non-convex polyhedra, homeomorphic to a sphere, with triangular faces; the polyhedra can be deformed continuously, while the faces remain rigid. It took about 211 years to find a counterexample to Euler's conjecture that embedded polyhedra are rigid. See e.g. http://www.reocities.com/jshum_1999/polyhedra/introduction.htm
-
2
But its volume doesn't change! (Connelly's Bellows Theorem) – JeffE May 8 '12 at 18:48
show 1 more comment
There are a number of results of the form "Proposition P fails in dimension $d$" where P holds in lower dimensions, many of which can seem counterintuitive until you understand higher dimensional phenomena.
Here's an elementary one, which many people on this site won't find counterintuitive but some might. Consider the question "What is the maximum number of vertices a polyhedron in $\mathbb{R}^d$ can have such that every pair of vertices is joined by an edge of the polyhedron?" For $d=2$, the answer is obviously 3, with a triangle. It's not difficult to see that a tetrahedron is optimal for $d=3$. Intuition suggests, based on this, that the $d$-simplex is optimal in general.
But for $d=4$, in fact, there is no maximum number. There are polyhedra in $\mathbb{R}^4$ with arbitrarily many vertices and an external edge joining each pair of vertices. If you take any finite collection of points on the moment curve $\{(t,t^2,t^3,t^4)\, | \, t>0\}$, the segment joining any two of the points is a face of the convex hull of the collection. Once you have an intuition for higher dimensional geometry, this is obvious, but it can seem counterintuitive.
A more advanced example, that I still find counterintuitive at times, is this: In $\mathbb{R}^d$ for $d=2,3$, given any polyhedron, one can move each of the vertices a small amount to obtain a combinatorially equivalent polyhedron with rational vertices. But in $d=4$ and higher there are polyhedra which can not be realized with rational coordinates.
EDIT: I was asked to provide a reference. This is a well-known result in some circles, particularly in computational geometry, so it's covered in a number of locations. Marcel Berger's Geometry Revealed covers both of the above so-called counterintuitive statements, as well as the surprisingly nonobvious case $d=3$, in chapter 8, roughly page 550, and is a pretty easy read. If you don't have access to Springer, the paper Realization spaces of polytopes by Richter-Gebert is the most comprehensive treatment I know of, and probably any book citing this paper is quoting the result.
-
3
I never found this one to be as counterintuitive. Comparing hypervolumes of n-spheres is geometrically more meaningfully thought of as comparing the ratio of their hypervolumes to those of unit hypercubes (via dimensional analysis). But for me, the more natural thing was to compare the ratio of their hypervolumes to that of their circumscribing cubes, which then gives a monotonically decreasing sequence... – Logan Maingi May 2 '12 at 7:02
1
The question then becomes whether the sequence decreases faster than $2^{-n}$, and you can probably convince yourself that the sequence should decrease super-geometrically based on geometric intuition. If you look at it that way, then there's nothing mysterious about the volume formula. Unfortunately, this is how I first considered the problem, and so I never had the opportunity to be surprised by this result. – Logan Maingi May 2 '12 at 7:06
1
That is a good point. When I first discovered this I thought it was really weird (maybe because I'm not a very visual thinker). Here is another one: a 2-dimensional random walk returns to the origin almost surely, but in 3 or more dimensions it may not! – Dan Brumleve May 2 '12 at 7:24
show 6 more comments
Löb's or Curry's paradox:
````If this sentence is true, then Germany borders China.
````
Logic says this means `Germany borders China` (or anything you want to put after the `then`).
-
1
This got a lot more interesting after I thought about it for a minute! It's different from "this sentence is false". – Nick Alger May 4 '12 at 8:08
2
@NickAlger Not really, instead of just "paradox" it is "If true fact, then paradox." as in: If Germany does not border China, then this sentence is false. – Phira May 6 '12 at 9:59
1
What does it mean for the sentence to be true? Sentences of the form if p then q, are true (or provable) in my naive sense if I can get you from p to q using "logic," however, "if this sentence is true, then Q" is confusing. – Steven-Owen May 6 '12 at 19:06
show 7 more comments
The famous example of a counterintuitive fact in statistics is the James-Stein phenomenon. Suppose $X_1,\ldots,X_m$ are independent normally distributed random variables with expected values $\mu_1,\ldots,\mu_m$. One wishes to estimate $\mu_1,\ldots,\mu_m$ based on observation of $X_1,\ldots,X_m$. If instead of using $(X_1,\ldots,X_m)$ as the estimator of $(\mu_1,\ldots,\mu_m)$, one uses the James-Stein estimator $$\left(1-\frac{(m-2)\sigma^2}{X_1^2+\cdots+X_m^2}\right)(X_1,\ldots,X_m)$$ (where $\sigma^2$ is the common variance) then the mean square error is smaller, regardless of the value of $(\mu_1,\ldots,\mu_m)$.
And the James-Stein estimator is demonstrably not even an admissible estimator, in the decision-theoretic sense. Thus the obvious estimator is inferior to one that is inferior to some admissible estimators.
One is "shrinking toward the origin", and it should be apparent that it doesn't matter which point you take to be the origin. In practice one should take the point toward which one shrinks to be the best prior guess about the value of $(\mu_1,\ldots,\mu_m)$.
The reason for the non-admissibility is that sometimes $(m-2)\sigma^2/(X_1^2+\cdots+X_m^2)$ is more than $1$, so that the sign gets reversed. That's too extreme by any standards. A piecewise-defined estimator that shrinks toward the origin but no further than the origin is superior in the mean-squared-error sense.
In the '80s and '90s, Morris L. Eaton showed that the fact that this works if $m\ge 3$ but not if $m\le2$ (apparent from the "$m-2$" in the numerator) is really the same fact as the fact that random walks are recurrent in dimension $\le2$ and transient in dimension $\ge 3$, which I think was discovered about a hundred years ago.
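The dominance is easy to observe empirically. A minimal simulation sketch in Python (the particular mean vector, trial count, and seed below are arbitrary choices of mine, not from the answer):

```python
import random

def naive(x, sigma):
    """The obvious estimator: just use the observations."""
    return x

def james_stein(x, sigma):
    """The James-Stein estimator from the answer above (m >= 3 components)."""
    s = sum(xi * xi for xi in x)
    shrink = 1 - (len(x) - 2) * sigma**2 / s
    return [shrink * xi for xi in x]

def mse(estimator, mu, sigma=1.0, trials=20_000, seed=1):
    """Monte Carlo estimate of the total mean squared error."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.gauss(m, sigma) for m in mu]
        est = estimator(x, sigma)
        total += sum((e - m) ** 2 for e, m in zip(est, mu))
    return total / trials

mu = [1.0, 0.5, -0.3, 2.0, 0.0]
print(mse(naive, mu), mse(james_stein, mu))  # James-Stein comes out smaller
```

The naive estimator's MSE is close to $m\sigma^2 = 5$ here, while James-Stein does strictly better, whatever mean vector you plug in.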
-
show 1 more comment
Here are a few counter-intuitive results that have surprised me at one point or another:
1. Impossible Constructions using Straightedge and Compass. Not all regular $n$-gons are constructible with straightedge and compass.
2. Gödel's Incompleteness Theorems. Certain non-trivial arithmetics cannot be both complete and consistent.
3. Exotic spheres. In certain dimensions there are spheres which are homeomorphic but not diffeomorphic to the standard sphere.
4. Kuratowski's Closure-Complement Theorem. The largest number of distinct sets obtainable by repeatedly applying closure and complement to a given starting subset of a topological space is 14.
5. Dehn's Answer to Hilbert's Third Problem. The cube and regular tetrahedron are not scissor-congruent.
-
The fact that one can easily prove the existence of uncountably infinite (as opposed to countably infinite) sets is counterintuitive to me. Not the fact that uncountably infinite sets exist, but the fact that the proof is so simple. I was astonished when I first learned of it. I was in ninth grade. I think it was in a book by Vilenkin that I read the proof.
Similarly the fact that one can easily prove that the square root of $2$ is irrational. I hadn't expected that to be so simple. And the mere existence of irrational numbers seems counterintuitive: why shouldn't fractions be enough to fill up all the space between integers?
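The simple proof in question is Cantor's diagonal argument, and it is short enough to execute. A sketch in Python; the particular enumeration `row` is just an example family of 0/1 sequences I chose (any enumeration would do):

```python
def row(i):
    """The i-th sequence in some fixed enumeration of 0/1 sequences.
    Here: the binary digits of i, low bit first (an arbitrary example)."""
    return lambda j: (i >> j) & 1

def diagonal(rows):
    """Cantor's diagonal: a sequence that differs from rows(i) at index i,
    so it appears nowhere in the enumeration."""
    return lambda j: 1 - rows(j)(j)

d = diagonal(row)
print(all(d(i) != row(i)(i) for i in range(1000)))  # True
```

Whatever enumeration of 0/1 sequences you feed in, the diagonal sequence is missing from it, so the set of all such sequences is uncountable.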
-
I think Smale's paradox (sphere eversion) is pretty counterintuitive.
Also check out Wikipedia's list of mathematical paradoxes. ("'Paradox' here has the sense of 'unintuitive result', rather than 'apparent contradiction'.")
-
Although well-known, I feel compelled to note the remarkable equation
$$e^{i\pi} + 1 = 0.$$
That five of mathematics' most well-known quantities are related in such a pleasantly simple way is astonishing and, to the uninitiated, is certainly not intuitive. Of course, once one knows about infinite series, their basic properties and how to define the trigonometric and exponential functions with them, deriving this equation is routine. But, without this knowledge, the above equation seems almost mystical. In fact, this equation is what first piqued my own interest in mathematics.
-
Intuition is a really subjective and personal matter. To complicate such a list further, many proofs require some use of the axiom of choice. On the other hand, not assuming the axiom of choice can be equally reasonable, and here is a short list of how things might break down completely:
1. The real numbers can be a countable union of countable sets.
2. There might be no free ultrafilters, at all (on any set).
3. The rational numbers might have at least two non-isomorphic algebraic closures.
4. The natural numbers with the discrete topology might not be a Lindelöf space.
Some results in ZFC which are completely unintuitive the first time you hear them:
1. While being perfectly definable, the set $\mathcal P(\mathbb N)$ can differ greatly between models of ZFC; or an even worse formulation:
2. There are models $M\subseteq N\subseteq V$ such that $N$ has more reals than $M$ and $V$ has more reals than $N$, but the amount of real numbers of $M$ and $V$ is the same.
3. There is a polynomial with integer coefficients which has a rational root if and only if ZFC is inconsistent.
4. Every model of ZFC is one class forcing from being $L[a]$ where $a$ is a real number; and every model is one set forcing away from being $HOD[A]$ for some set $A$.
5. The union of countably many disjoint open intervals might have uncountably many boundary points (e.g. the complement of the Cantor set in $[0,1]$).
Both lists are infinitely long, and I can probably ramble on about the first list for several days. The point, as I said at first, is that what we take for granted as intuitive can change greatly between two people of different mathematical education, mathematical culture, and usual axiomatic system (which is essential for "results").
One strange effect on mathematicians is a direct corollary of the first result in the second list:
People are used to thinking that there is only one universe, only one fixed way to handle sets. While for the working mathematician this is often a reasonable approach, set theorists deal with models of set theory, much like group theorists deal with models of group theory.
Somehow everyone is flabbergasted when they are told (for the first time, if not more) that there are many models of ZFC with different numbers of sets of natural numbers in each model; but no one falls off their chair when told that some fields have more irrational numbers than others...
-
I think the following has (surprisingly) not been pointed out already:
http://en.wikipedia.org/wiki/List_of_paradoxes#Mathematics
As a general rule paradoxes (counterintuitive truths) are very important in mathematics and there are many books dedicated to them. 1 and 2 are famous examples. The Monty Hall problem and Banach-Tarski paradox even have books dedicated to them, and each is the subject of ongoing research.
Paradoxes arise when simplification does not work, when usual assumptions do not hold. Of course this will depend on the person thinking about the phenomenon, on her experience. A topologist is well aware of counterexamples in her field so she would not find them paradoxical anymore.
Also, I am not sure the Blue-Eyed Islanders paradox has been mentioned here. It has received much internet attention recently, foremost thanks to Terence Tao; cf. also xkcd.
-
show 1 more comment
The fact that for any infinite set $A$ there is a bijection between $A$ and $A \times A$ is very counterintuitive for me...
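For the special case $A = \mathbb N$ the bijection is completely explicit. A sketch in Python of the classical Cantor pairing function and its inverse (the function names are mine):

```python
def pair(i, j):
    """Cantor pairing: a bijection from N x N onto N.
    Enumerates pairs diagonal by diagonal."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of pair: recover (i, j) from n."""
    t = int(((8 * n + 1) ** 0.5 - 1) / 2)  # diagonal index, up to float error
    while t * (t + 1) // 2 > n:            # correct any float rounding
        t -= 1
    while (t + 1) * (t + 2) // 2 <= n:
        t += 1
    j = n - t * (t + 1) // 2
    return (t - j, j)

# pair is injective on an initial segment, and unpair inverts it:
codes = {pair(i, j) for i in range(50) for j in range(50)}
print(len(codes))  # 2500
```

For a general infinite set $A$ the analogous bijection needs the axiom of choice, which is part of what makes the general statement feel stranger than this concrete one.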
-
Another elementary one. There is a configuration of 30 convex bodies in 3-dimensional space with disjoint interiors that "cannot be taken apart with two hands". That is, it's impossible to split up the set of bodies into two nonempty subsets and, by a rigid motion, move one of the subsets away to infinity without disturbing a member of the second subset. See http://www.cs.ubc.ca/nest/imager/contributions/snoeyink/sculpt/theorem.html and http://www.springerlink.com/content/v32326564w8l4n75/
-
I think a puzzle at calculus level is the following: Given a real number $x$ and a conditionally convergent series, the series can be re-arranged so that its sum is $x$.
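This is Riemann's rearrangement theorem, and the greedy rearrangement is easy to carry out numerically. A sketch in Python using the alternating harmonic series $\sum (-1)^{k+1}/k$ (which sums to $\ln 2$ in its usual order); the targets and term count below are arbitrary choices of mine:

```python
import math

def rearranged_sum(target, n_terms=200_000):
    """Greedily rearrange 1 - 1/2 + 1/3 - 1/4 + ... so partial sums approach
    target: add unused positive terms while at or below it, negative while above."""
    s, p, q = 0.0, 1, 2  # p: next odd denominator (+), q: next even denominator (-)
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / p
            p += 2
        else:
            s -= 1.0 / q
            q += 2
    return s

print(rearranged_sum(math.pi))  # close to pi
print(rearranged_sum(-1.0))     # close to -1
```

Because each crossing of the target overshoots by at most the last term used, and the terms tend to 0, the rearranged partial sums converge to whatever target you choose.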
-
13
I don't really find this one counterintuitive. – Michael Hardy May 2 '12 at 17:08
5
Probably the guys who scratch their head at Zeno's paradox are also the ones who are stumped by this one... – J. M. May 2 '12 at 17:27
1
@MichaelHardy we now just proved that intuition differs between different people. I thinnk that one who knows and hence "feels" some knowledge area better, will see things as basic and intuitive, while people new to it, will find same ideas as confusing and counterintutive. Intuition adjusts itself to our knowledge. – Sandman4 May 2 '12 at 17:34
3
I don't see why it is offensive, I must say. Both scenarios rest on there being such a thing as the geometric series, for starters. – J. M. May 2 '12 at 17:38
2
It's offensive because I felt offended and it's counterintuitive because It's against my intuition. The common between the two (feelings and intuition) is that both are personal and not a scientific concepts. – Sandman4 May 2 '12 at 17:42
show 2 more comments
Perhaps the Banach–Tarski paradox: Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of non-overlapping pieces (i.e. subsets), which can then be put back together in a different way to yield two identical copies of the original ball. The reassembly process involves only moving the pieces around and rotating them, without changing their shape.
http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
-
1
Already quoted. – ineff May 2 '12 at 20:08
1
– Martin Sleziak May 6 '12 at 7:30
The concentration of measure phenomenon on the sphere:
If $A\subset\mathcal{S}^{n-1}$ is a measurable set on the sphere with $\lambda(A)=1/2$ and $A_\epsilon$ is an epsilon neighborhood of $A$ on $\mathcal{S}^{n-1}$, then
$$\lambda(A_\epsilon)\geq 1-\frac{2}{e^{n\epsilon^2/2}}$$
So say you take $A$ to be a cap on the sphere and fix a small $\epsilon$. As the dimension of the sphere increases, eventually the $\epsilon$ enlargement of $A$ will have almost the entire area of the sphere! Playing with the upper and lower cap and the corresponding enlargements, one sees that area is concentrated around the equator.
Imagine you have a lawnmower and you cut the grass moving along the equator. What percentage of the sphere do you mow? Well, in 3 dimensions, not that much. But as you cut the grass on higher and higher dimensional spheres, moving centered along the equator, the surface area covered becomes almost 100% of the entire area of the sphere!
This result felt pretty counter-intuitive to me the first time I saw it.
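The lawnmower picture can be checked by Monte Carlo. A Python sketch, sampling uniform points on the sphere via normalized Gaussians; the dimensions, band width, and sample count below are arbitrary choices of mine:

```python
import math
import random

def equator_band_fraction(dim, eps, samples=20_000, seed=0):
    """Fraction of uniform points on S^(dim-1) whose last coordinate is within
    eps of 0, i.e. lying in a band of half-width eps around the equator."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        if abs(v[-1]) < eps * norm:
            hits += 1
    return hits / samples

for d in (3, 30, 300):
    print(d, equator_band_fraction(d, 0.3))
# the band's share of the surface area climbs toward 1 as the dimension grows
```

For $d=3$ the band of half-width $0.3$ covers exactly $30\%$ of the sphere (Archimedes' hat-box theorem), but by $d=300$ it covers essentially everything.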
-
show 1 more comment
## protected by J. M. May 6 '12 at 4:09
This question is protected to prevent "thanks!", "me too!", or spam answers by new users. To answer it, you must have earned at least 10 reputation on this site.
http://mathhelpforum.com/pre-calculus/169881-inequalities.html
# Thread:
1. ## inequalities
Please check my work. Thank you so much.
Attached Files
• Inequality.doc (25.5 KB, 14 views)
2. Quadratic Inequalities are always easiest to solve if you complete the square.
$\displaystyle x^2 - 2x - 3 \leq 0$
$\displaystyle x^2 - 2x + (-1)^2 - (-1)^2 - 3 \leq 0$
$\displaystyle (x - 1)^2 - 4 \leq 0$
$\displaystyle (x - 1)^2 \leq 4$
$\displaystyle \sqrt{(x-1)^2} \leq \sqrt{4}$
$\displaystyle |x-1| \leq 2$
$\displaystyle -2 \leq x - 1 \leq 2$
$\displaystyle -1 \leq x \leq 3$.
3. Another good way to solve quadratic inequalities (and other complicated inequalities) is to solve the associated equation.
For example, to solve $\displaystyle x^2 - 2x - 3 \leq 0$, first solve $\displaystyle x^2 - 2x - 3 = 0$: by factoring, $(x - 3)(x + 1) = 0$, so $x = 3$ and $x = -1$. The point is this: a polynomial is a "continuous" function, which means that in order to go from positive to negative it must pass through 0, and that can happen only at $x = -1$ and $x = 3$, which divide the real line into three intervals.
To decide which interval is ">0" and which "< 0" you can use either of two methods:
1) Choose a number in each interval and check the sign. $x = -2$ is less than $-1$ and $(-2)^2 - 2(-2) - 3 = 4 + 4 - 3 = 5 > 0$, so the polynomial is greater than 0 for all $x < -1$. $x = 0$ is between $-1$ and $3$ and $(0)^2 - 2(0) - 3 = -3 < 0$, so the polynomial is less than 0 for all $x$ between $-1$ and $3$. Finally, $x = 4$ is larger than $3$ and $(4)^2 - 2(4) - 3 = 16 - 8 - 3 = 5 > 0$, so the polynomial is greater than 0 for all $x$ greater than $3$. Since the inequality says "$\le 0$", the solution set is $-1\le x\le 3$.
2) Notice that $x + 1$ is negative for $x < -1$ and positive for $x > -1$, and that $x - 3$ is negative for $x < 3$ and positive for $x > 3$. If $x < -1$, it is also less than 3, so both factors are negative and their product is positive. If $-1 < x < 3$, one factor, $x + 1$, is positive but $x - 3$ is still negative. The product of one positive and one negative factor is negative. Finally, if $x > 3$, both factors are positive and their product is positive. That gives the same solution set as before and (fortunately!) the same solution as Prove It.
4. Originally Posted by jam2011
Please check my work. Thank you so much.
$x^2-2x-3\ \le\ 0$
$(x-3)(x+1)\le\ 0$
Case 1:
If $x-3 \ge\ 0$, then $x+1=(x-3)+4$ must be $\ge\ 4>0$, since it is 4 greater than $(x-3)$
No luck there...
Case 2:
If $x-3\ \le\ 0$, then $x\ \le\ 3$ and $x+1\ \ge\ 0$ only for $x\ \ge\ -1$
Hence $-1\ \le\ x\ \le\ 3$
so your answer is correct and the diagrams help illustrate the intersections.
Yet another way to proceed is....
$x^2-2x\ \le\ 3\Rightarrow\ x(x-2)\ \le\ 3$
The factors of 3 that differ by 2 are 1 and 3 or $-1$ and $-3$
$x(x-2)=3$ for $x=3$ and for $x=-1$
If $x<-1$, then $x(x-2)>3$
If $x>3$, then $x(x-2)>3$
The solution is $-1\ \le\ x\ \le\ 3$
5. $x^2 - 2x - 3 \le 0$
$(x + 1)(x - 3) \le 0$
think about the graph of the quadratic ... a parabola that opens upward with two zeros, $x = -1$ and $x = 3$
the section of the graph below the x-axis (the less than or equal to section) lies between and includes the two zeros.
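All of the approaches above can be double-checked numerically. A small Python sketch (the variable and function names are mine):

```python
import math

# Roots of x^2 - 2x - 3 = 0 via the quadratic formula.
a, b, c = 1, -2, -3
disc = b * b - 4 * a * c           # 16
r1 = (-b - math.sqrt(disc)) / (2 * a)
r2 = (-b + math.sqrt(disc)) / (2 * a)
print(r1, r2)  # -1.0 3.0

def satisfies(x):
    """Does x satisfy the original inequality x^2 - 2x - 3 <= 0?"""
    return x * x - 2 * x - 3 <= 0

# The inequality holds on [-1, 3], endpoints included, and fails just outside.
print(satisfies(-1), satisfies(1), satisfies(3))  # True True True
print(satisfies(-1.01), satisfies(3.01))          # False False
```

This matches the solution set $-1 \le x \le 3$ found by every method in the thread.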
Nice job!!! Thank you so much... I learned a lot. There are many ways to find and solve this problem. I owe you a lot...
http://nrich.maths.org/6710/note
nrich enriching mathematics
### Picture Story
Can you see how this picture illustrates the formula for the sum of the first six cube numbers?
### Summing Geometric Progressions
Watch the video to see how to sum the sequence. Can you adapt the method to sum other sequences?
### Double Trouble
Simple additions can lead to intriguing results...
# Slick Summing
### Why do this problem?
This problem provides an introduction to summing simple arithmetic series. By seeing a particular case, students can perceive the structure and see where the general method for summing such series comes from.
The Stage 5 problem Speedy Summations poses more challenging questions using $\sum$ notation.
### Possible approach
You may wish to show the video, in which Charlie works out $1+2+3+4+5+6+7+8+9+10$ in silence, or you may wish to recreate the video for yourself on the board.
"With your partner, discuss what you saw. Can you recreate the video for yourselves? Can you explain what happened?"
Once students have had a chance to make sense of the video:
"I wonder if you could adapt Charlie's method to work out some similar questions? Try $1 + 2 + 3 + \dots + 19 + 20$ first." While students are working, write up a few more questions on the board, such as
$1 + 2 + 3 + \dots + 99 + 100$ and
$40 + 41 + 42 + \dots + 99 + 100$.
Circulate to check for misconceptions - one common one is that the sum from 1 to 20 will be twice the sum from 1 to 10, and for the last example above, some students will assume the sequence has 60 terms.
Bring the class together to discuss what they have done.
"I wonder whether Charlie's method could be adapted for sequences that don't go up in ones?"
$1 + 3 + 5 + \dots + 17 + 19$
$2 + 4 + 6 + \dots + 18 + 20$
$42 + 44 + 46 + \dots + 98 + 100$
"In a while I'm going to give you another question like these and you'll need to have developed a strategy to work it out efficiently."
While students are working, listen out for useful comments that they make about how to work out such sums generally. Then bring the class together to share answers and methods for the questions they have worked on.
Make up a few questions like those above, and invite students out to the board to work them out 'on the spot', explaining what they do as they go along. This might be an opportunity to invite more than one student to the board at the same time to work on the same question.
Next, invite students to create a formula from their general thinking:
"Imagine summing the sequence $1 + 2 + \dots + (n-1) + n$. Can you use what you did with the numerical examples to create an expression for this sum?"
Give students time to think and discuss in pairs and then build up the formula together.
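The pairing idea generalizes to any arithmetic sequence: the sum is (first + last) × (number of terms) / 2. A minimal sketch checking this against the examples in the text (the function name is ours, not from the problem):

```python
def sum_by_pairing(first, last, step=1):
    """Sum an arithmetic sequence by Charlie's pairing idea:
    (first + last) * number_of_terms // 2."""
    n_terms = (last - first) // step + 1
    return (first + last) * n_terms // 2

# The examples from the text:
print(sum_by_pairing(1, 10))     # 55
print(sum_by_pairing(1, 100))    # 5050
print(sum_by_pairing(40, 100))   # 4270  (61 terms, not 60!)
print(sum_by_pairing(1, 19, 2))  # 100
```

Note the third example: the sequence $40, 41, \dots, 100$ has $61$ terms, matching the misconception flagged above.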
### Key questions
What can you say about the sum of the first and last, and the second and penultimate terms of these sequences?
How do you know these sums of pairs will always be the same?
### Possible extension
Speedy Summations contains some challenging follow-up questions.
### Possible support
Picturing Triangle Numbers offers a pictorial representation of the sum of the first $n$ integers.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://math.stackexchange.com/questions/236768/is-this-mean-curvature?answertab=votes
# Is this mean curvature?
Let $N_t:=\partial B(p, t)\subset M^{n+1}$ be the distance sphere in a Riemannian manifold, and let $\{x_1, \cdots, x_n\}$ be a coordinate system on the distance sphere $\partial B(p, t)$, so that $\{x_1, \cdots, x_n, t\}$ is a coordinate system on $M$. Denote by $g_{ij}=g(\partial/\partial x_i, \partial/\partial x_j)$ the first fundamental form of $N_t$, and let $G=\sqrt{\det g_{ij}}$.
My question is: what is the geometric meaning of $$tr(G^{-1}\frac{d}{dt}G)$$ I guess it is $2H$, where $H$ is the mean curvature of $N_t$ w.r.t. the outer normal $\partial/\partial t$. But I am not good at tensor calculations and I don't know how to transfer it into an orthonormal basis. (If $G$ is diagonalized, then it seems that inside the trace it is the second fundamental form; is that correct?)
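The identity underlying this guess is Jacobi's formula, $\frac{d}{dt}\ln\det g = \operatorname{tr}\left(g^{-1}\frac{dg}{dt}\right)$, which can be spot-checked numerically on an arbitrary smooth curve of positive-definite matrices (this is only a sanity check, not a proof, and the test matrices are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary smooth curve of symmetric positive-definite matrices g(t).
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

def g(t):
    M = A + t * B
    return M @ M.T + 3 * np.eye(3)  # M M^T is PSD; adding 3I keeps it PD

h, t0 = 1e-6, 0.4
dg = (g(t0 + h) - g(t0 - h)) / (2 * h)      # numerical d/dt g (exact here: g is quadratic in t)
lhs = np.trace(np.linalg.solve(g(t0), dg))  # tr(g^{-1} dg/dt)
rhs = (np.log(np.linalg.det(g(t0 + h)))
       - np.log(np.linalg.det(g(t0 - h)))) / (2 * h)  # d/dt ln det g
print(abs(lhs - rhs))  # tiny: Jacobi's formula holds
```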
http://unapologetic.wordpress.com/2007/06/28/monoidal-categories/?like=1&source=post_flair&_wpnonce=e94a57fbb5
# The Unapologetic Mathematician
## Monoidal categories
We know that monoids are one of the most basic algebraic structures on which many others are built. Naturally, they’re one of the first concepts we want to categorify. That is, we want to consider a category with some extra structure making its objects behave like a monoid.
So let’s charge ahead and try to write down what this means. We need some gadget that takes two objects and spits out another. The natural thing to use here is a functor $\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}:\mathcal{C}\times\mathcal{C}\rightarrow\mathcal{C}$. We’re using the same symbol we did for tensor products — and for a good reason — but we don’t need it to be that operation.
Now we need this functor to satisfy a couple rules to make it like a monoid multiplication. It should be associative, so $(A\otimes B)\otimes C=A\otimes(B\otimes C)$ for all objects $A$, $B$, and $C$ in $\mathcal{C}$. There should be an “identity” object $\mathbf{1}$ so that $A\otimes\mathbf{1}=A=\mathbf{1}\otimes A$ for all objects $A\in\mathcal{C}$.
We know that the natural numbers $\mathbb{N}$ form a monoid under multiplication with $1$ as the identity, and we know that the category $\mathbf{FinSet}$ of finite sets categorifies the natural numbers with Cartesian products standing in for multiplication. So let’s look at it to verify that everything works out. We use $\times$ as our monoidal structure and see that $(A\times B)\times C=A\times(B\times C)$… but it doesn’t really. On the left we have the set $\{((a,b),c)|a\in A,b\in B,c\in C\}$, and on the right we have the set $\{(a,(b,c))|a\in A,b\in B,c\in C\}$, and these are not the same set. What happened?
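The mismatch, and the explicit re-bracketing map that repairs it, can be seen concretely. A small sketch using Python tuples for Cartesian pairs:

```python
a, b, c = "a", "b", "c"
left = ((a, b), c)    # an element of (A x B) x C
right = (a, (b, c))   # an element of A x (B x C)

print(left == right)  # False: the two sets are genuinely different

# ...but there is an explicit, invertible re-bracketing map (the associator):
def alpha(p):
    (x, y), z = p
    return (x, (y, z))

def alpha_inv(p):
    x, (y, z) = p
    return ((x, y), z)

print(alpha(left) == right)            # True
print(alpha_inv(alpha(left)) == left)  # True
```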
The problem is that the results are not the same, but are only isomorphic. The monoid conditions are equations
• $(a\cdot b)\cdot c=a\cdot(b\cdot c)$
• $a\cdot1=a=1\cdot a$
So when we categorify the concept we need to replace these by natural isomorphisms
• $\alpha_{A,B,C}:(A\otimes B)\otimes C\rightarrow A\otimes(B\otimes C)$
• $\lambda_A:\mathbf{1}\otimes A\rightarrow A$
• $\rho_A:A\otimes\mathbf{1}\rightarrow A$
These say that while the results of the two functors on either side of the arrow might not be the same, they are isomorphic. Even better, the isomorphism should commute with arrows in $\mathcal{C}$, as described by the naturality squares. For instance, if we have an arrow $f:A_1\rightarrow A_2$ in $\mathcal{C}$ then we can apply it before or after $\lambda$: $\lambda_{A_2}\circ(1_\mathbf{1}\otimes f)=f\circ\lambda_{A_1}$ as arrows from $\mathbf{1}\otimes A_1$ to $A_2$.
As a side note, the isomorphism $\alpha$ is often called the “associator”, but I don’t know of a similarly catchy name for the other two isomorphisms. When we’ve “weakened” the definition of a monoidal category like this we sometimes call the result a “weak monoidal category”. Alternatively — and this is the convention I prefer — we call these the monoidal categories, and the above definition with equalities instead of just isomorphisms gives “strict monoidal categories”.
Unfortunately, we’re not quite done with revising our definition yet. We’ll be taking our tensor products and identity objects and stringing them together to make new functors, and similarly we’ll be using these natural isomorphisms to relate these functors, but we need to make sure that the relationship doesn’t depend on how we build it from the basic natural isomorphisms. An example should help make this clearer.
This is the pentagon diagram. The vertices of the pentagon are the five different ways of parenthesizing a product of four different objects. The edges are single steps, each using one associator. Around the left, we apply the associator to the first three factors and leave $D$ alone (use the identity arrow $1_D$), then we apply the associator to $A$, $B\otimes C$, and $D$, and finally we apply the associator to the last three factors and leave $A$ alone. Around the right, we apply the associator twice, first to $A\otimes B$, $C$, and $D$, and then to $A$, $B$, and $C\otimes D$. So we have two different natural isomorphisms from $((\underline{\hphantom{X}}\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}})\otimes\underline{\hphantom{X}}$ to $\underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes(\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}))$. And we have to insist that they’re the same.
Here’s another example:
This triangle diagram is read the same as the pentagon above: we have two different natural transformations from $(\underline{\hphantom{X}}\otimes\mathbf{1})\otimes\underline{\hphantom{X}}$ to $\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}$, and we insist that they be the same.
What’s happened is we’ve replaced equations at the level of sets with (natural) isomorphisms at the level of the category, but these isomorphisms are now subject to new equations. We’ve seen two examples of these new equations, and it turns out that all the others follow from these two. I’ll defer the justification of this “coherence theorem” until later.
For now, let’s go back to $\mathbf{FinSet}$. We can use the universal property of the product to give an arrow $\alpha_{A,B,C}:(A\times B)\times C\rightarrow A\times(B\times C)$, and we can verify that these form the components of a natural isomorphism. Similarly, we can use the singleton $*$ as an identity object and determine isomorphisms $\lambda_A:*\times A\rightarrow A$ and $\rho_A:A\times *\rightarrow A$. They do indeed satisfy the pentagon and triangle identities above, making $\mathbf{FinSet}$ into a monoidal category.
In fact, you could establish the associator and other isomorphisms for $\mathbf{FinSet}$ by looking at the elements of the sets and defining particular functions, but if we do it all by the universal properties of products and terminal objects we get a great generalization: any category with finite products (in particular, pairwise products and a terminal object) can use them as a monoidal structure. Dually, any category with finite coproducts can use them as a monoidal structure.
For any ring $R$, the category $R\mathbf{-mod-}R$ of all $R-R$ bimodules has a monoidal structure given by $\otimes_R$, and because of this monoidal categories are often called “tensor categories” and the monoidal structure a tensor product.
Posted by John Armstrong | Category theory
## 22 Comments »
1. [...] Lane’s Coherence Theorem Okay, as I promised yesterday, I’m going to prove the Coherence Theorem for monoidal categories today. That is, any two [...]
Pingback by | June 29, 2007 | Reply
2. [...] Functors and Natural Transformations Now that we’ve got monoidal categories as the category-level analogue of monoids we need the category-level analogue of monoid [...]
Pingback by | June 30, 2007 | Reply
3. [...] and Symmetries So we’ve got monoidal categories that categorify the notion of monoids. In building up, we had to weaken things, refusing to talk [...]
Pingback by | July 2, 2007 | Reply
4. [...] with Duals Now we’ve got monoidal categories to categorify the notion of a monoid, we should consider what the proper analogue of an inverse is. [...]
Pingback by | July 7, 2007 | Reply
5. [...] last example along these lines, let’s throw all these structures in together. We start with a monoidal category, and we want it to have both a braiding and duals. Naturally, they’ll have to play well [...]
Pingback by | July 13, 2007 | Reply
6. [...] monoidal category has an “identity” object , so to make it a bit more interesting let’s throw in a [...]
Pingback by | July 23, 2007 | Reply
7. [...] Now we want to take our 2-categories of spans and add some 2-categorical analogue of a monoidal structure on [...]
Pingback by | October 11, 2007 | Reply
8. [...] The Category of Matrices II As we consider the category of matrices over the field , we find a monoidal structure. [...]
Pingback by | June 3, 2008 | Reply
9. Surely on the pentagon diagram, the bottom associator shouldn’t have that first alpha; it doesn’t seem to make sense as it stands.
Comment by PhiJ | July 14, 2008 | Reply
10. Heh.. Looks like you caught a typo nobody else did. I’ll get on that.
There. Is that better? If nothing else it gave me the excuse to get the commutative diagrams package installed on my new computer.
Comment by | July 14, 2008 | Reply
11. [...] Representations of Bialgebras What’s so great about bialgebras? Their categories of representations are monoidal! [...]
Pingback by | November 11, 2008 | Reply
12. [...] But the 2-category language gives us a bit more flexibility. Instead of demanding that the morphism satisfy the associative law on the nose, we can add a “coassociator” 2-morphism to our model 2-category. Similarly, we dispense with the left and right counit laws and add left and right counit 2-morphisms. Then we insist that these 2-morphisms satisfy pentagon and triangle identities dual to those we defined when we talked about monoidal categories. [...]
Pingback by | November 18, 2008 | Reply
13. What about right and left unitors for rho and lambda (as used by Baez and Stay). Somehow this makes me think of Skeletor, Megatron, and other figures from when my kids were small, but that should make it more fun …
Comment by Avery Andrews | November 19, 2008 | Reply
14. You’re going to hate me for this, Avery, but I dislike “unitor” partly because it reminds me of the Transformers from when I was small.
Comment by | November 19, 2008 | Reply
15. Interesting shared association. My wife thought it was hilarious
Comment by Avery Andrews | November 19, 2008 | Reply
16. Hmm, how about left & right 'unitals' for $\lambda$ and $\rho$?
Comment by Avery Andrews | December 15, 2008 | Reply
17. [...] category is also monoidal closed. There are various natural monoidal structures we could use, so we’ll start with the [...]
Pingback by | May 15, 2009 | Reply
18. What is the relation between universal property of tensor products (a la modules) and the tensor product in a monoidal category? Does this monoidal product satisfy some universal property?
Comment by Giusto | November 19, 2009 | Reply
19. The monoidal product in a monoidal category doesn’t need to satisfy any universal property. It just has to come with an associator and left and right unit isomorphisms.
Comment by | November 19, 2009 | Reply
• Suppose the category is $k$-linear and has internal Hom. Actually I’m referring to a paper which explains this connexion but I can’t recall which. It’s not axiomatic for monoidal categories but is a result if certain things are assumed for $\mathcal{C}$.
Comment by Giusto | November 19, 2009 | Reply
20. Giusto, for more on universal properties and monoidal categories, you might want to look at the interconnections between multicategories and monoidal categories as described in Tom Leinster’s book, Higher Operads, Higher Categories. Specifically, see pages 82-87 (pages 112-117 of the pdf file here, where an equivalence between monoidal categories and representable multicategories is given. This gives the sought-after interpretation which generalizes the sense in which tensor products of bimodules are universal with respect to multilinear maps.
Comment by | November 20, 2009 | Reply
21. [...] in the above exercise behaves like an associator, like we talked about in the context of monoidal categories. And, just like in that case, we will find left and right identity [...]
Pingback by | November 29, 2011 | Reply
http://mathoverflow.net/questions/33805/why-is-the-x-x1-3-atlas-on-r-diffeomorphic-with-the-x-x-atlas-on-r/33818
## Why is the x->x^1/3 atlas on R diffeomorphic with the x->x atlas on R?
The fact that the atlas using $\phi: x \mapsto x^{1/3}$ on $\mathbb{R}$ is diffeomorphic to the trivial atlas using $\psi: x \mapsto x$ on $\mathbb{R}$ highlights my ignorance of diffeomorphisms and atlases. Apologies in advance for clustering several questions, but I'm not sure how to disentangle them.
First of all, which is the more relevant/correct statement here: that the atlases are diffeomorphic, that the manifolds are diffeomorphic ($M \mapsto N$), that the manifold-atlas pairs are diffeomorphic ($(M,\psi) \mapsto (N,\phi)$), or that the manifolds and their differential structures are diffeomorphic ($(M,\mathcal{A}) \mapsto (N,\mathcal{B})$)?
Is the relevant diffeomorphism patently obvious here given that the domain of $x \mapsto x^{1/3}$ is the same as that of $x \mapsto x$?
If we had two homeomorphic manifolds with atlases that were not diffeomorphic, would it be obvious?
I suspect some of this might be terminology problems. "Atlases" are not objects that have a diffeomorphism relation. Smooth manifolds do. Topological manifolds can be enhanced to smooth manifolds by giving them a smooth atlas. All of these details are in the first few chapters of any textbook on abstract smooth manifolds (as opposed to submanifolds of euclidean space). – Ryan Budney Jul 29 2010 at 17:34
Can you recommend a textbook? – Outis Jul 29 2010 at 18:17
Conlon's "Differentiable Manifolds" would likely be suitable. – Ryan Budney Jul 29 2010 at 18:21
## 2 Answers
The fact that the atlas using $x \mapsto x^{1/3}$ on $\mathbb{R}$ is diffeomorphic to the trivial atlas using $x \mapsto x$ on $\mathbb{R}$ highlights my ignorance of diffeomorphisms and atlases.
Other differential topologists should weigh in on this to confirm or deny, but as a differential topologist I have never come across the notion of a diffeomorphism between two atlases, or even two smooth structures. Moreover, what you have here is not two atlases but two charts. These may seem like picky points, but if you find yourself getting confused then one good technique to learn is to sharpen your definitions. By that, I mean be a bit more careful about distinguishing between things that although often used synonymously are actually distinct.
So you have two charts, $x \mapsto x^{1/3}$ and $x \mapsto x$. As both of these have image $\mathbb{R}$, they each define an atlas: $\{x \mapsto x^{1/3}\}$ and $\{x \mapsto x\}$. Each of these atlases then defines a smooth structure on $\mathbb{R}$. Each of these smooth structures defines a smooth manifold with underlying topological space $\mathbb{R}$. Although each of these constructions follows in a unique way from the previous step, technically each is a different thing.
Back to the confusion about diffeomorphisms. We talk of two atlases being equivalent if they generate the same smooth structure, or if they define the same smooth manifold. In concrete terms, we can test this by looking to see if the identity map is smooth in both directions (using the atlases to test smoothness).
But I can have inequivalent atlases that nonetheless define diffeomorphic manifolds. This is because the condition of being equivalent is stronger than that of defining diffeomorphic manifolds. Equivalence rests on the smoothness of the identity map (in both directions), the manifolds being diffeomorphic rests on the smoothness of some map (and its inverse). So although the two atlases given are inequivalent, they define diffeomorphic manifolds because I'm free to take the map $x \mapsto x^{1/3}$ and its inverse to define the diffeomorphism.
It's a good exercise to help with sorting out the definitions to check that you really understand why these two manifold structures on $\mathbb{R}$ are diffeomorphic. That is, write down the map and write out its compositions with the transition maps and see that it works.
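For the exercise suggested here, a sketch of the computation (using $\psi(x)=x$ and $\phi(x)=x^{1/3}$ for the two charts, as above, and $F(x)=x^3$ for the candidate diffeomorphism):

```latex
% M = (\mathbb{R}, \{\psi\}) with \psi(x) = x;  N = (\mathbb{R}, \{\phi\}) with \phi(x) = x^{1/3}.
% Candidate diffeomorphism F : M \to N, F(x) = x^3. Read in charts:
\begin{align*}
  \phi \circ F \circ \psi^{-1}(x) &= (x^3)^{1/3} = x, \\
  \psi \circ F^{-1} \circ \phi^{-1}(u) &= (u^3)^{1/3} = u,
\end{align*}
% both smooth, so F is a diffeomorphism. By contrast, the identity map M \to N
% read in charts is
%   \phi \circ \mathrm{id} \circ \psi^{-1} : x \mapsto x^{1/3},
% which is not differentiable at 0 -- so the atlases are inequivalent even
% though the manifolds they define are diffeomorphic.
```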
Your first paragraph makes pretty much the same point I made to the OP in mathoverflow.net/questions/33676/… I think, however, your third and fourth paragraphs are much better written than the equivalent ones in my post. – Willie Wong Jul 29 2010 at 19:31
In regards to your first question, the last three statements are essentially the same since to talk about diffeomorphisms between two manifolds you need to have a smooth structure on those manifolds, which is a choice of an (equivalence class of) atlas. One would usually simply say that the manifold $R$ with its regular smooth structure is diffeomorphic to the manifold $R$ with the smooth structure determined by the atlas $x \mapsto x^{1/3}$.
Here, the different smooth structure on $R$ is provided by an atlas that is a homeomorphism $F: R \to R$. This will always produce a manifold diffeomorphic to $R$ with the regular smooth structure since a diffeomorphism between them will be $F^{-1}: R \to R'$, where $R'$ is $R$ with the atlas $(R, F)$.
I do not know a lot about exotic structures, but I do not believe that there are examples where it is obvious that two smooth manifolds, that have the same underlying topological manifold, are not diffeomorphic. For example, I believe Milnor proved that some of his exotic 7-spheres were not diffeomorphic using Morse theory.
Milnor used Morse theory to show his 7-manifolds were in fact homemorphic to $S^7$. He used other techniques (including the Hirzebruch signature theorem) to show that some of them were NOT diffeomorphic. – Jason DeVito Jul 29 2010 at 16:45
«Obvious», of course, is a relative notion! :) – Mariano Suárez-Alvarez Jul 29 2010 at 16:45
Thanks. Does an atlas uniquely determine the smooth structure of a manifold? – Outis Jul 29 2010 at 17:56
See the sub-section on atlases and compatible atlases, here: en.wikipedia.org/wiki/Differentiable_manifold – Ryan Budney Jul 29 2010 at 17:59
Ryan, thanks, I see the answer is yes. – Outis Jul 29 2010 at 18:16
http://nrich.maths.org/708/solution
### There's a Limit
Explore the continued fraction: 2+3/(2+3/(2+3/2+...)) What do you notice when successive terms are taken? What happens to the terms if the fraction goes on indefinitely?
### Not Continued Fractions
Which rational numbers cannot be written in the form x + 1/(y + 1/z) where x, y and z are integers?
### Archimedes and Numerical Roots
The problem is how did Archimedes calculate the lengths of the sides of the polygons which needed him to be able to calculate square roots?
# Fair Shares?
##### Stage: 4 Challenge Level:
We received a variety of solutions to each part of this problem. Patrick from Woodbridge School explained how he was surprised to find each child received the same amount, and went on to use a spreadsheet to investigate:
At first it seemed that the 5th child would obviously get most, but then I realised that in fact $\frac{1}{6}$ of the money is quite a lot. I then thought that they would all be very closely grouped with child 5 having slightly more. It was surprising to see that every child received £5.
I then adapted the spreadsheet for a varying number of children. I decided that since the denominator of the fraction was decreased by one, I would decrease the number of children by one, so I performed trial and error testing on sums of money equalling 1 mod 5 (I was assuming a simple case of the first child receiving a whole amount of money.) I got a good result with £16 shared between 4 children.
This seems to show a relationship: $\frac{1}{6}$ gives 5 children £5, $\frac{1}{5}$ gives 4 children £4. I decided to test this on the spreadsheet, and the pattern seems to hold that a fraction of $\frac{1}{n}$ gives $n-1$ children £$n-1$ each.
8 children means that $n = 9$ for my calculations, so $\frac{1}{9}$ must be used to share out $8 \times 8 =$£$64$. I checked this and it worked.
Let there be $n$ children. We must show that $\frac{1}{n+1}$ is the fraction and the total sum of money is $n^2$.
First, to prove that child 1 gets £$n$: The fractional part of his sum is $\frac{n^2-1}{n+1}$, which reduces (because we have a difference of two squares as the numerator) to $n-1$.
Child 1 gets the fractional part plus £1, so his sum is $n-1+1 = n$.
In the case of child $a$, we know that $(a-1) \times n$ has been given out already, so the money left is $n^2-n(a-1)$. Therefore his sum is $\frac{n^2-n(a-1)-a}{n+1} + a$.
This simplifies to $\frac{n^2-na+n-a+an+a}{n+1}$, which reduces to $\frac{n^2+n}{n+1}$ and thence to $n$. Thus, every child gets £$n$ out of a prize fund of £$n^2$, if the fraction used is $\frac{1}{n+1}$.
Joshua, from St John's Junior School, explained how you can always find an amount to share in this way with $n$ children:
This only works if you've got $n$ children, £$n^2$ and each time divide by $n+1$.
Child 1 receives: $$1 + \frac{n^2 -1}{n+1} = 1+ \frac{(n-1)(n+1)}{n+1} = 1 + (n-1) = n$$
Child 2 receives: $$2 + \frac{n^2 -n -2}{n+1} = \frac{2(n+1) + (n^2 - n -2)}{n+1} = \frac{n^2 +n}{n+1} = n$$
In general, child $a$ receives: $$a + \frac{n^2 - n(a-1) -a}{n+1} = \frac{n^2 - na + na + a - a + n}{n+1} = \frac{n^2+n}{n+1} = n$$
This means every child gets £$n$.
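The sharing scheme is easy to simulate exactly. A short sketch using exact rational arithmetic (the function name is ours, not from the solutions above): each child $a$ takes £$a$ plus $\frac{1}{n+1}$ of what then remains, starting from a pot of £$n^2$.

```python
from fractions import Fraction

def shares(n):
    """Each of n children in turn takes a pounds (a = 1..n) plus
    1/(n+1) of the money remaining after that, from a pot of n**2."""
    pot = Fraction(n * n)
    out = []
    for a in range(1, n + 1):
        extra = (pot - a) / (n + 1)  # Fraction / int stays exact
        out.append(a + extra)
        pot -= a + extra
    return out

print([int(s) for s in shares(5)])  # [5, 5, 5, 5, 5] -- £25 shared equally
print([int(s) for s in shares(8)])  # [8, 8, 8, 8, 8, 8, 8, 8] -- £64 shared equally
```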
Harrison, from Caringbah High School, Australia, and Francois, from Abingdon School, both used simultaneous equations to find out how much Mrs Hobson had to share out. You can read Francois' solution here. Preveina from Crest Girls' Academy sent in this solution. Well done to all of you.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://idontgetoutmuch.wordpress.com/2008/05/12/isomorphic-types/
# Idontgetoutmuch’s Weblog
Mathematical Notes
## Isomorphic Types
Nothing to do with differential geometry but there was a question on the haskell-cafe mailing list asking for “a proof that initial algebras & final coalgebras of a CPO coincide”. I presume that means a CPO-category.
A category is a CPO-category iff
• There is a terminator, 1.
• Each hom-set, ${\rm Hom}(A,B)$, is a CPO with a bottom.
• For any three objects, arrow composition ${\rm Hom}(A,B) \times {\rm Hom}(B,C) \longrightarrow {\rm Hom}(A,C)$ is strict continuous.
• Idempotents split, that is, if $f\circ f = f$ then $f= u\circ d$ and $d\circ u = 1$.
The following lemma answers the poster’s question.
Lemma
If $T$ is a covariant endofunctor on a CPO-category, then $<F,f>$ is an initial $T$-algebra with respect to the sub-category of strict maps if and only if $<F,f^{-1}>$ is a final $T$-co-algebra.
Proof
See Freyd.
We can do a bit more. We borrow some notation from Meijer, Fokkinga and Paterson. If, for a given functor, the initial algebra exists and $\phi : FA \longrightarrow A$ is an algebra then we denote the unique arrow from the initial algebra to this algebra as $(\![\phi]\!)$. Dually, if for a given functor, the final co-algebra exists and $\phi : A \longrightarrow FA$ is a co-algebra then we denote the unique arrow to the final co-algebra as ${[\!(\phi)\!]}$.
Theorem
In a CPO-category, if $\phi : TA \longrightarrow A$ is an isomorphism then $(\![\phi]\!) \circ {[\!({\phi^{-1}})\!]} = 1$ and ${[\!({\phi^{-1}})\!]} \circ (\![\phi]\!) = 1$.
Proof
Let $a : TA \longrightarrow A$ be the initial $T$-algebra and let $b : B \longrightarrow TB$ be the final $T$-co-algebra. Then by the above lemma, we have that $A=B$ and $b=a^{-1}$. We also have
Since there is only one arrow from the initial algebra to itself, namely $1_A$, we must have ${[\!({\phi^{-1}})\!]} \circ (\![\phi]\!) = 1$.
For the other way, there is always a morphism from the algebra $\phi : TC \longrightarrow C$ to the initial algebra. Therefore, $\phi : TC \longrightarrow C$ must be isomorphic to the initial algebra.
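A concrete, deliberately informal instance of the theorem, sketched in Python for the functor $TX = 1 + X$: the algebra $\phi(\bot)=0$, $\phi(n)=n+1$ is an isomorphism $1+\mathbb{N}\longrightarrow \mathbb{N}$, and folding after unfolding is the identity on finite numerals. This illustrates the statement on finite data; it is not a proof in a CPO-category.

```python
# Functor T X = 1 + X; its (initial-algebra) fixed point is the Peano numerals,
# represented as nested tuples: None for zero, ("S", m) for successor.

def ana(psi, seed):
    """Unfold [(psi)] : A -> Fix T, where psi : A -> 1 + A (None or a value)."""
    step = psi(seed)
    return None if step is None else ("S", ana(psi, step))

def cata(phi, numeral):
    """Fold (|phi|) : Fix T -> A, where phi : 1 + A -> A."""
    return phi(None) if numeral is None else phi(cata(phi, numeral[1]))

# phi : 1 + N -> N is an isomorphism: phi(None) = 0, phi(n) = n + 1,
phi = lambda x: 0 if x is None else x + 1
# with inverse psi = phi^{-1} : N -> 1 + N.
psi = lambda n: None if n == 0 else n - 1

for n in range(10):
    assert cata(phi, ana(psi, n)) == n  # (|phi|) . [(phi^{-1})] = 1 on finite input
print("ok")
```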
http://mathoverflow.net/questions/118574/conley-index-for-isolated-invariant-sets-with-no-exit-points/118591
## Conley index for isolated invariant sets with no exit points
Conley described in $\textit{Isolated Invariant Sets and the Morse Index (1976)}$ the bases of what would be known as Conley Index Theory.
For the sake of simplicity let's think of vector fields defined on manifolds (the more general setting of locally compact spaces works just as well). I'm going to state a few basic notions needed for the question so that even someone who is not familiar with the topic may give an answer.
A set (in the phase space) is called invariant if it is the union of solution curves. It is isolated if it is the maximal invariant set in some neighborhood of itself. A compact such neighborhood is called an isolating neighborhood for the invariant set.
An isolating neighborhood is an isolating block if the integral curves through each boundary point of the neighborhood goes immediately out of it in one or the other time direction.
And finally:
The $\textit{(Conley)}$ index is the homotopy type of the pointed space obtained from a block on collapsing the set of exit points (points in the boundary where integral curves go out the neighborhood) to one point.
Well, I've read that when we have an isolating block, say $N$, that has no exit points, we would have to collapse the empty set $\emptyset$ to one point (?), so there's a convention that the resulting space is $$(N \bigcup \lbrace \star \rbrace, \star)$$ that is, the disjoint union of the space $N$ and an external point, all of it pointed at that external point.
So the questions are
• Why this convention makes sense (apart from that $\textit{it just works}$)? Is that the natural way of defining the operation "collapse to a point'' when all you have to collapse is $\textit{nothing}$ (i.e. $\emptyset$)?
• Is there a bigger frame where we don't need this convention because a more general rule contains it as a particular case?
## 3 Answers
I may be wrong but a way of seeing it is the following: by the universal property of quotients you would like any continuous map $f:N\to Y$ to a topological space $Y$ to define a continuous map of pointed topological spaces between $N$ collapsed with nothing and $(Y,y_0)$, no matter what $y_0$ is. Since you "collapse with nothing" you do not want to lose any continuous functions, do you?
And I would say that this definition does exactly this, since it just requires that you declare $f(\star)=y_0$.
Thank you, that was pretty much what I was looking for. – Pablo Jan 12 at 10:37
The Conley index is a generalization of the Morse index and it must give basically "the same" answer when the Morse index is defined: when the invariant set is an isolated non-degenerate critical point and the isolating neighborhood is a small neighborhood where the Morse lemma gives you a normal form for the function. If you work out the dictionary between Morse index and Conley index, you will see why in the case of an isolated minimum the convention comes up.
Well, I knew that. I don't know if my question has an answer, but what I was looking for is the reason for the operation of ''collapsing the empty set to one point'' or, stating it differently, whether that is the way it is extended in other cases. – Pablo Jan 10 at 23:52
I see. Are you just looking for the definition and uses of Alexandrov's one point compactification? – alvarezpaiva Jan 11 at 13:47
The fundamental result of Conley index theory states that a homotopy type of a pointed space $N_1 / N_2$, with $[N_2]$ as a basepoint, is independent of the choice of an index pair $(N_1 , N_2)$. I feel like you are mainly asking why $N / \emptyset = N \coprod \lbrace * \rbrace$.
This might not be entirely correct but one way to see the quotient $N_1 / N_2$ is as $N_1$ with a cone on $N_2$ attached. So when $N_2$ is empty you don't glue anything and have an extra basepoint.
Alternatively, there is an extension of the Conley index by Mrozek, Reineck, and Srzednicki in their paper "The Conley index over a base." Let $X$ be your space with a flow and let $Z$ be a space with a map $\omega : X \rightarrow Z$. This kind of makes $X$ a parametrized space (or ex-space over $Z$). The Conley index over $Z$ is given by $N_1 \cup_{\omega_{|_{N_2}}} Z$. This recovers your situation when $Z$ is a point.
http://quant.stackexchange.com/tags/return/hot?filter=year
# Tag Info
## Hot answers tagged return
### Square root of time
Scaling volatility as you do often leads to inaccurate results, over-estimating volatility, especially when you scale daily volatility to longer periods. Please see the following for more: http://economics.sas.upenn.edu/~fdiebold/papers/paper18/dsi.pdf The above paper also explains why scaling the way you did does not properly account for ...
### Regression giving the return on a stock
The basic CAPM - which is what your regression estimates - says $$R_S = R_f + \beta_S (R_{Market}-R_f)$$ where $$\beta_S = \frac{Cov(R_M,R_S)}{Var(R_M)}$$ i.e. the return of a certain stock depends only on the correlation with the market portfolio. For your pricing equation to work, you will need to have an idea about the expected market (excess) ...
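The regression estimate of $\beta_S$ is just a sample covariance over a variance; a minimal sketch on simulated excess returns (all numbers here are made up for illustration):

```python
import random

# Estimate beta_S = Cov(R_M, R_S) / Var(R_M) from simulated
# excess returns with a known "true" beta.
random.seed(0)
true_beta = 1.5
r_m = [random.gauss(0.0, 0.01) for _ in range(50_000)]          # market excess returns
r_s = [true_beta * r + random.gauss(0.0, 0.005) for r in r_m]   # stock excess returns

mean_m = sum(r_m) / len(r_m)
mean_s = sum(r_s) / len(r_s)
cov = sum((a - mean_m) * (b - mean_s) for a, b in zip(r_m, r_s)) / len(r_m)
var = sum((a - mean_m) ** 2 for a in r_m) / len(r_m)
beta = cov / var
assert abs(beta - true_beta) < 0.05   # recovers the true beta
```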
### Calculating Geometric mean
I'm currently also using daily returns which I want to annualize. This is my approach: For every month, I calculate the simple return using the formula: (end-of-month closing price / beginning-of-month closing price) - 1. I use the Excel formula sumproduct(geomean(A1:A12+1)-1) to find the monthly compounded return. Finally, I annualize the result of step 2 ...
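The three steps can be sketched as follows (the monthly closing prices are made up; the check at the end confirms that compounding the geometric mean recovers the total price change):

```python
# Step 1: simple monthly returns from consecutive month-end prices.
prices = [100, 102, 99, 103, 105, 104, 108, 110, 109, 112, 115, 117, 120]
monthly = [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]

# Step 2: geometric mean of the monthly growth factors
# (what GEOMEAN(A1:A12+1)-1 computes in Excel).
growth = 1.0
for r in monthly:
    growth *= 1 + r
geo_monthly = growth ** (1 / len(monthly)) - 1

# Step 3: annualize by compounding over 12 months.
annual = (1 + geo_monthly) ** 12 - 1

# Consistency check: compounding the geometric mean over the whole
# period recovers the total price change exactly.
assert abs((1 + geo_monthly) ** len(monthly) - prices[-1] / prices[0]) < 1e-12
```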
### Square root of time
for the square-root rule: it holds for log-returns, if you assume the same variance and no autocorrelation. Because then: $$Var[r_1 + \cdots + r_d] = Var[r_1] + \cdots + Var[r_d] = d Var[r_1]$$ and thus $$\sqrt{Var[r_1 + \cdots + r_d] } = \sqrt{d} \sqrt{Var[r_1]}.$$ This is mathematically true for any distribution that fulfills the assumptions. For the ...
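A quick simulation confirms the rule under exactly these assumptions (i.i.d. daily log-returns; the parameter values are arbitrary):

```python
import random

# Var of a d-day sum of i.i.d. log-returns is d times the daily
# variance, so volatility scales with sqrt(d).
random.seed(1)
d = 5
daily = [random.gauss(0.0, 0.01) for _ in range(200_000)]
sums = [sum(daily[i:i + d]) for i in range(0, len(daily), d)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

ratio = (var(sums) / var(daily)) ** 0.5
assert abs(ratio - d ** 0.5) < 0.1   # close to sqrt(5)
```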
### Logarithmic returns for realized variance?
It depends on your investment strategy. The most common approach is to use the close prices $p_t$ and $p_{t+1}$. The volatility you measure using this method implies the "assumption" that you are able to trade at close every day. If you choose to compute the daily returns from open to close, then you assume that you are selling your position every night ...
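The two conventions can be sketched as follows (the open/close prices are made up; this only shows where each log-return series comes from):

```python
import math

# Close-to-close assumes you can trade at every close;
# open-to-close assumes the position is flat overnight.
opens  = [100.0, 101.5, 100.8, 102.2]
closes = [101.0, 100.5, 102.0, 103.1]

close_to_close = [math.log(closes[t] / closes[t - 1]) for t in range(1, len(closes))]
open_to_close  = [math.log(closes[t] / opens[t])      for t in range(len(opens))]

# Realized variance under each convention (mean of squared log-returns).
rv_cc = sum(r * r for r in close_to_close) / len(close_to_close)
rv_oc = sum(r * r for r in open_to_close)  / len(open_to_close)
```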
### Is Arithmetic Return Bias Basis of Low Vol Anomaly?
No, the "low-beta" anomaly is not the result of the difference between arithmetic and geometric mean returns. Statistical tests verifying the existence of the anomaly rely on models employing the arithmetic mean returns, $$\mu_a = \mu_g + \frac{\sigma^2}{2}$$, hence the penalty excess volatility incurs when compounding returns over time does not explain the ...
http://mathoverflow.net/questions/121557?sort=newest
## Like Diophantine equation
Dear all, I have posted this question on m.s.e. Unfortunately, no one responded with an answer. I hope this site and its members will answer my questions.
The equation $x^n - ny^x - nxy = 0$ has solution set $(n, x, y) = (1, 1, \frac12), (2, 1, \frac14), (3, 1, \frac16), \ldots$
I would like to know/learn the following (Kindly discuss)
1) What does the graph of this equation look like, and what kind of graph do we get?
2) The equation above has infinitely many solutions with $x = 1$. Can we have solutions with $x > 1$ and the other unknowns $n, y$ positive?
3) If solutions exist, how do we find them for $x < 1$ and for $x > 1$?
Thanks in advance.
A Diophantine equation is typically one where one is interested in integeral solutions only. Now you mention solutions containing rational numbers. It is now not clear what exactly you are looking for. Please clarify. – quid Feb 12 at 12:34
It is not a Diophantine equation. However, I felt it is like a Diophantine one. Can we have solutions in integers or in other than integers? How do we solve such equations? May I know what the graph of this function looks like? – jihadi Feb 12 at 16:36
If you want to know how the graph looks (of what function even? I guess the one defined by the expression on the left of the equation), why not just plot it (using [freely] available software) for selected values of n, as a function of x and y? And what do you mean by 'other than integers' precisely? Say fix x, n as some integers. Then you have a polynomial in y, which of course has solutions (in the complex numbers at least). Yet I somehow doubt this is what you want to know. So what is it you want to know? – quid Feb 12 at 20:17
I am unable to draw the graph of the function. Moreover, I am looking for solutions in integers. If there are any, how do we find such integer solutions? Is there a particular method to obtain integer solutions? Otherwise, solutions in $R^3$. – jihadi Feb 13 at 4:08
## 1 Answer
The equation $x^n - ny^x - nxy = 0$ has no solutions in positive integers.
First notice that $ny$ must divide $x^n$, and $x$ in turn must divide $ny^x$. Therefore, the set of prime divisors of $x$ and $ny$ is the same.
Let $p$ be any prime dividing $x$ (or $ny$) and $u=\nu_p(x)$, $v=\nu_p(y)$, $w=\nu_p(n)$ be the corresponding $p$-adic valuations. We remark that $u>0$.
Since $x^n - ny^x - nxy = 0$, the two smallest values among $\nu_p(x^n)=nu$, $\nu_p(ny^x)=xv+w$, $\nu_p(nxy)=u+v+w$ must be equal. It is easy to see that $u+v+w < xv+w$ unless $v=0$. So there are two cases to consider:
1) $v=0$. In this case we have $xv+w = w < u+v+w$ and $xv+w = w < p^w \leq nu$, that is $\nu_p(ny^x)$ is a sole smallest valuation among the three, which is impossible.
2) $v>0$ and $u+v+w = nu$, that is, $v+w=(n-1)u$ and thus $\nu_p(ny) = \nu_p(x^{n-1})$. Since $p$ is an arbitrary prime dividing $x$ and $ny$, we conclude that $ny = x^{n-1}$. The equation takes the form: $$x^n - x^{n-1}y^{x-1} - x^n = 0$$ which reduces to $$x^{n-1}y^{x-1} = 0,$$ a contradiction.
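The valuation argument can also be sanity-checked by brute force over a small box (a sketch; the bounds are arbitrary):

```python
# No positive-integer solutions of x^n - n*y^x - n*x*y = 0 show up
# in a small search box, consistent with the proof above.
solutions = [
    (n, x, y)
    for n in range(1, 12)
    for x in range(1, 12)
    for y in range(1, 12)
    if x**n - n * y**x - n * x * y == 0
]
assert solutions == []
```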
http://math.stackexchange.com/questions/209914/rectangular-box-of-largest-surface-area-inscribed-in-a-sphere-of-radius-1?answertab=votes
# Rectangular box of largest surface area inscribed in a sphere of radius 1?
I am having trouble trying to follow a textbook example of such a problem. Using the Lagrange multiplier method, we could set up a set of equations as below:
$f(x,y,z)=8(xy+yz+zx)$
$g(x,y,z)=x^2+y^2+z^2-1=0, x>0, y>0, z>0$
Then $\lambda$ could be found by solving
$8(y+z,z+x,x+y)=2\lambda(x,y,z)$
Suddenly, the textbook jumps right to the conclusion that
$8(x+y+z)=\lambda(x+y+z)$
$\lambda=8$
How is it derived exactly? I've tried googling and searching for answers but I had no luck finding anything. If someone could break it down step by step, it'd be greatly appreciated. Thanks.
## 1 Answer
Just before the "jump" you have three equations $$\begin{align} 8(y+z)&=2\lambda x\\ 8(z+x)&=2\lambda y\\ 8(x+y)&=2\lambda z \end{align}$$ Add the three left hand sides and the three right hand sides and do a bit of dividing. What do you get?
http://mathoverflow.net/questions/65612?sort=votes
## Euler characteristics and characteristic classes for real manifolds?
Consider an oriented manifold $X$. To calculate its Euler characteristic, one might integrate the Euler class. Now if $X$ were a complex manifold, and given as a section of some complex bundle $E$ over $Y$, we would have
$\chi(X) = \int\limits_X \mbox{ } c_* (TY) / c_* (E)$
where $c_*(\cdot)$ is the total Chern class.
Can anything of the sort be said if $X$ is a real manifold?
Presumably, if one wants only the Euler characteristic modulo 2, one can use the Stiefel-Whitney classes instead of the Chern classes. On the other hand, it seems to me that the topology of $TY$ and $E$ as bundles over $Y$ cannot suffice to carry the information of the Euler characteristic of the zero locus of a section of $E$. So I guess what I'm really asking is:
What should I know about a section $\sigma:Y\to E$ in order to know the Euler number of its intersection with the zero section, assuming this is transverse?
Perhaps you meant "... a section $\sigma:Y\to E$ ..."? – Somnath Basu May 21 2011 at 2:42
Your question isn't clear to me. What does "real manifold" mean? And "Can anything of the sort be said..." of course, but what sort of thing are you interested in? – Ryan Budney May 21 2011 at 4:04
It's noteworthy that for a rank $r$ oriented bundle $E$ on an $n$-dimensional compact oriented manifold $Y$ the Euler characteristic of the zero locus $X$ of a transverse section is given by integrating the Euler class of $TY$ over $X=Y$ in the extreme case $r=0$, and by integrating the Euler class of $E$ over the zero-dimensional oriented manifold $X$ in the other extreme case $r=n$. When $0<r<n$ I can't imagine what an answer could look like. If $n=4$ and $r=2$ then the Euler characteristic of $X$ can be any even integer, for any choice of $Y$ and $E$. – Tom Goodwillie May 21 2011 at 4:05
@Somnath, yes, thanks. @Ryan, I just meant to emphasize it wasn't a complex manifold. – Vivek Shende May 21 2011 at 6:17
Vivek -- a small remark: the Euler characteristic of the mod 2 cohomology coincides with the Euler characteristic of the cohomology with any field coefficients and with the Euler characteristic of the integral cohomology (defined as the alternating sum of the ranks). – algori May 21 2011 at 13:39
## 1 Answer
As far as I understand the question, it is asking: given a(n oriented) vector bundle $E$ on a(n oriented) manifold, what information on the Euler characteristic of the zero locus of a transversal section of $E$ can we deduce from the characteristic classes of $E$?
I'm afraid the answer to that is "in general, not much, apart from the parity". For instance, every orientable surface $S$ is the zero locus of a function (i.e., a section of the trivial 1-bundle) on the 3-sphere: embed the surface in the sphere and take the "oriented distance" function. In a similar way one can realize $S$ as the zero locus of two functions on $S^4$: realize $S^3$ as the equator in $S^4$, take one of the functions to be the oriented distance function extended to $S^4$ and the other a height function whose zero locus is $S^3$.
The reason one can compute the Chern classes of the zero locus in the holomorphic case is that the total Chern class is invertible, so the total Chern class of tangent bundle of the zero locus of a transversal section comes from the ambient manifold. The Euler class however is not invertible.
http://math.stackexchange.com/questions/101636/jacobis-criterion-for-projective-schemes
# Jacobi's criterion for projective schemes?
When can we apply the Jacobi's criterion for the projective variety $V(f_{1}, \ldots, f_{r}) \subset \mathbb{P}^{n}$ in order to find the singularities of the scheme $\mathrm{Proj} \left( k[x_{1}, \ldots, x_{n+1}] / (f_{1}, \ldots, f_{r}) \right)$?
In Hartshorne's book Algebraic Geometry, Proposition II.2.6, we have a fully faithful functor from the category of varieties over $k$ to the category of schemes over $k$, but it seems to provide information only for the closed points of the scheme.
Thank you.
Check out the general Jacobi's criterion in EGA or in Liu's book. – Martin Brandenburg Jan 23 '12 at 15:33
## 1 Answer
The Jacobian Criterion can be applied to any kind of point on a projective (or affine) variety, closed or not. In the non-closed case one has to adapt the requirement for the rank of the Jacobian appropriately. The field $k$ should be perfect.
http://mathoverflow.net/questions/57475/how-many-different-possible-simply-graphs-are-there-with-vertex-set-v-of-n-elemen/57478
## How many different possible simple graphs are there with vertex set V of n elements
If V is a set with n elements, how many different simple, undirected graphs are there with vertex set V?
## 2 Answers
If you consider isomorphic graphs different, then obviously the answer is $2^{n\choose 2}$. Most graphs have no nontrivial automorphisms, so up to isomorphism the number of different graphs is asymptotically $2^{n\choose 2}/n!$. This goes back to a famous method of Pólya (1937), see this paper for more information. You can find Pólya's original paper here.
See http://oeis.org/A000088. This is the sequence which gives the number of isomorphism classes of simple graphs on n vertices, also called the number of graphs on n unlabeled nodes. You will also find a lot of relevant references here.
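Both answers are easy to check by brute force for small $n$ (a sketch; the canonical-form computation below is the naive one, minimizing over all $n!$ relabelings):

```python
from itertools import combinations, permutations

# Count labeled simple graphs (2^C(n,2)) and isomorphism classes
# (OEIS A000088: 1, 1, 2, 4, 11, ...) by exhaustive enumeration.
def graph_counts(n):
    edges = list(combinations(range(n), 2))
    labeled = 0
    classes = set()
    for mask in range(2 ** len(edges)):
        g = frozenset(e for i, e in enumerate(edges) if mask >> i & 1)
        labeled += 1
        # canonical form: lexicographically least relabeling of the edge set
        canon = min(
            tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in g))
            for p in permutations(range(n))
        )
        classes.add(canon)
    return labeled, len(classes)

assert [graph_counts(n) for n in range(1, 5)] == [(1, 1), (2, 2), (8, 4), (64, 11)]
```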
http://mathoverflow.net/questions/29522?sort=newest
## Solving a Diophantine equation related to Algebraic Geometry, Steiner systems and $q$-binomials?
The short version of my question is:
1)For which positive integers $k, n$ is there a solution to the equation $$k(6k+1)=1+q+q^2+\cdots+q^n$$ with $q$ a prime power?
2) For which positive integers $k, n$ is there a solution to the equation $$(3k+1)(2k+1)=1+q+q^2+\cdots+q^n$$ with $q$ a prime power?
Now for some motivation. In this question I ask for an algebra-geometric construction of certain Steiner systems (there's background on what Steiner systems are and on the details and motivation for the construction in that question). In particular, if the construction can be carried out for a $(p, q, r)$ Steiner system, the blocks of the Steiner system would be given by the $p-1$-plane sections of some variety $X$ in affine or projective space over a finite field $\mathbb{F}_q$. If $X$ is in affine $n+1$-space and $p=2$, the number of blocks would be given by $1+q+\cdots+q^n$. Now the number of blocks in a $(2, 3, 6k+1)$ system is $k(6k+1)$ and the number of blocks in a $(2, 3, 6k+3)$ system is $(3k+1)(2k+1)$ (a $(2, 3, n)$ Steiner system is realizable iff $n=1, 3$ mod $6$, $n > 3$). So the question comes from setting these two quantities equal.
In other words, these equations must be solvable for all $k$ if the algebra-geometric construction of Steiner systems is to go through for all $(2, 3, n)$ systems (the Steiner triple systems) in affine space.
The most general form of this question (which covers both the affine and the projective versions) is:
For integers $n, 1 < p < q < r$, when is there a prime power $q$ such that $$\frac{r!(q-p)!}{q!(r-p)!}=\left[n \atop p-1\right]_q$$ or $$\frac{r!(q-p)!}{q!(r-p)!}=\left[n \atop p\right]_q?$$
Of course, I only expect answers to concentrate on the numbered questions (1) and (2) at the top of the page.
EDIT: Note that e.g. $k=4$ has no solutions for the first equation.
I can only comment that you have weird tags but not the right one, diophantine equations. – Wadim Zudilin Jun 26 2010 at 3:55
Fixed; tag added. – Daniel Litt Jun 26 2010 at 6:26
## 1 Answer
Let $t = 1+q+q^2+\dots+q^n$ then each of the equations (1) and (2) implies that $24t+1$ is a square (namely, $24t+1=(12k+1)^2$ and $24t+1=(12k+5)^2$, respectively). For $n=2$ that leads to a Pellian equation (with possibly infinitely many solutions), for $n=3,4$ to an elliptic curve (with finitely many solutions, if any), and for $n>4$ to a hyper-elliptic curve (with no solutions for most $n$).
Cases $n=3,4$ are easy to solve.
For $n=3$, integer solutions are $q=-1, 0, 2, 3, 13, 25, 32, 104, 177$ out of which only $2,3,13,25,32$ are powers of primes. For $n=4$, integral solutions are $q=-1,0,1,25,132$ out of which only $25$ is a power of prime. These numerical values are computed in SAGE and MAGMA.
Also, for a fixed value of $k$, it is possible to verify solubility of the given equations by iterating all possible $q$ dividing the l.h.s. minus 1. In particular, equation (1) has solutions only for the following $k$ below $10^6$: 1, 2, 3, 15, 52, 75, 1302, 32552, 813802. Similarly, equation (2) has solutions only for the following $k$ below $10^6$: 1, 10, 260, 6510, 162760.
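The search in the last paragraph can be sketched as follows for equation (1) (the function names are mine, not from the answer):

```python
# Any solution q of k(6k+1) = 1 + q + ... + q^n must divide the
# left-hand side minus 1, so we try exactly those divisors and
# keep the prime powers.
def is_prime_power(q):
    if q < 2:
        return False
    p = next(d for d in range(2, q + 1) if q % d == 0)  # least prime factor
    while q % p == 0:
        q //= p
    return q == 1

def solutions_eq1(k):
    lhs = k * (6 * k + 1)
    sols = []
    for q in range(2, lhs):
        if (lhs - 1) % q != 0 or not is_prime_power(q):
            continue
        total, n = 1 + q, 1
        while total < lhs:
            total, n = total * q + 1, n + 1   # 1 + q + ... + q^n
        if total == lhs:
            sols.append((q, n))
    return sols

assert solutions_eq1(1) == [(2, 2)]    # 1 + 2 + 4 = 7
assert solutions_eq1(15) == [(4, 5)]   # 1 + 4 + ... + 4^5 = 1365
assert solutions_eq1(4) == []          # matches the k = 4 remark in the question
```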
Thanks! I have to refresh my number theory. I was following the strategy in your last paragraph, but didn't have mathematical software on hand so stopped at $k=4$; this seems to suggest that any algebra-geometric construction of general Steiner systems is going to be pretty complicated if it exists. – Daniel Litt Jun 27 2010 at 7:03
Max, this is a nice argument! – Wadim Zudilin Jun 29 2010 at 0:49
http://mathoverflow.net/questions/74841/an-example-of-a-beautiful-proof-that-would-be-accessible-at-the-high-school-level/74842
## An example of a beautiful proof that would be accessible at the high school level?
The background of my question comes from an observation that what we teach in schools does not always reflect what we practice. Beauty is part of what drives mathematicians, but we rarely talk about beauty in the teaching of school mathematics.
I'm trying to collect examples of good, accessible proofs that could be used in middle school or high school. Here are two that I have come across thus far:
(1) Pick's Theorem: The area, A, of a lattice polygon, with boundary points B and interior points I is A = I + B/2 - 1.
I'm actually not so interested in verifying the theorem (sometimes given as a middle school task) but in actually proving it. There are a few nice proofs floating around, like one given in "Proofs from the Book" which uses a clever application of Euler's formula. A very different, but also clever proof, which Bjorn Poonen was kind enough to show to me, uses a double counting of angle measures, around each vertex and also around the boundary. Both of these proofs involve math that doesn't go much beyond the high school level, and they feel like real mathematics.
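For what it's worth, the theorem itself is easy to check computationally on any example (a sketch; the polygon is an arbitrary choice and the point-in-polygon helper is the naive crossing-number test):

```python
from math import gcd

# Checking Pick's theorem, A = I + B/2 - 1, on one lattice polygon.
verts = [(0, 0), (4, 0), (4, 3), (1, 5)]
edges = list(zip(verts, verts[1:] + verts[:1]))

# Shoelace formula for the area.
area = abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges)) / 2

# Lattice points on the boundary: gcd(|dx|, |dy|) per edge.
B = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges)

def strictly_inside(px, py):
    cnt, on_edge = 0, False
    for (x1, y1), (x2, y2) in edges:
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if (cross == 0 and min(x1, x2) <= px <= max(x1, x2)
                and min(y1, y2) <= py <= max(y1, y2)):
            on_edge = True
        if (y1 > py) != (y2 > py):            # edge straddles the ray's height
            if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                cnt += 1
    return not on_edge and cnt % 2 == 1

I = sum(strictly_inside(x, y) for x in range(5) for y in range(6))
assert area == I + B / 2 - 1                  # 14.5 == 11 + 9/2 - 1
```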
(2) Menelaus Theorem: If a line meets the sides BC, CA, and AB of a triangle in the points D, E, and F then (AE/EC) (CD/DB) (BF/FA) = 1. (converse also true) See: http://www.cut-the-knot.org/Generalization/Menelaus.shtml, also for the related Ceva's Theorem.
Again, I'm not interested in the proof for verification purposes, but for a beautiful, enlightening proof. I came across such a proof by Grunbaum and Shepard in Mathematics Magazine. They use what they call the Area Principle, which compares the areas of triangles that share the same base (I would like to insert a figure here, but I do not know how. -- given triangles ABC and DBC and the point P that lies at the intersection of AD and BC, AP/PD = Area (ABC)/Area(DBC).) This principle is great-- with it, you can knock out Menelaus, Ceva's, and a similar theorem involving pentagons. And it is not hard-- I think that an average high school student could follow it; and a clever student might be able to discover this principle themselves.
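The unsigned form of the identity is easy to verify on a concrete example with exact rational arithmetic (a sketch; the triangle and the transversal line are arbitrary choices of mine):

```python
from fractions import Fraction as F

# Menelaus check: (AE/EC)(CD/DB)(BF/FA) = 1 for the transversal
# y = x/4 + 1 cutting the side lines of triangle ABC.
A, B, C = (F(0), F(0)), (F(6), F(0)), (F(2), F(5))

def intersect(P, Q, m, c):
    """Intersection of line PQ with y = m*x + c."""
    (x1, y1), (x2, y2) = P, Q
    # parametrize P + t*(Q - P) and solve y = m*x + c for t
    t = (m * x1 + c - y1) / ((y2 - y1) - m * (x2 - x1))
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def dist_ratio(P, X, Q):
    """Unsigned ratio PX/XQ measured along the line PQ."""
    if P[0] != Q[0]:
        return abs((X[0] - P[0]) / (Q[0] - X[0]))
    return abs((X[1] - P[1]) / (Q[1] - X[1]))

m, c = F(1, 4), F(1)
D = intersect(B, C, m, c)    # on line BC
E = intersect(C, A, m, c)    # on line CA
F_ = intersect(A, B, m, c)   # on line AB (outside the segment, as Menelaus allows)
product = dist_ratio(A, E, C) * dist_ratio(C, D, B) * dist_ratio(B, F_, A)
assert product == 1          # exact, thanks to Fraction arithmetic
```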
Anyway, I'd be grateful for any more examples like these. I'd also be interested in people's judgements about what makes these proofs beautiful (if indeed they are-- is there a difference between a beautiful proof and a clever one?) but I don't know if that kind of discussion is appropriate for this forum.
Edit: I just want to be clear that in my question I'm really asking about proofs you'd consider to be beautiful, not just ones that are neat or accessible at the high school level. (not that the distinction is always so easy to make...)
-
2
Surely you know the books ''Math! Encounters with Undergraduates'', ''Math Talks for Undergraduates'' and ''The beauty of doing Mathematics'' by Serge Lang. They were written with your same intention. – Giuseppe Sep 8 2011 at 11:15
1
@quid: Thanks. @Giuseppe: I really like Lang's book. But one thing that strikes me, both with his book and a few others I've read on the beauty of mathematics-- they seem to be fairly general. More like the experience you would get of beauty by visiting a museum, rather than the beauty you would experience in learning to paint yourself. Thanks for the other references, I will look at them. – Manya Sep 8 2011 at 11:28
39
I remember Lang giving a talk at Caltech supposedly aimed at undergraduates, but far more advanced. Lang called on a postdoc in the front row, apparently thinking he was a student, and the postdoc couldn't answer. At one point, he wrote out a complicated formula, and challenged Prof. Ramakrishnan, "Do you teach your students this?" "No." "So you see, Caltech is not better than anywhere [sic] else." After a bit more, he went back to the formula and corrected a sign. Ramakrishnan called out, "That, we teach." – Douglas Zare Sep 14 2011 at 0:38
2
Even the notion of proof may not be accessible at high school level – Fernando Muro Sep 15 2011 at 20:33
8
As far as I remember, the thing I liked the most in high school maths (age of 14) was the so called Ruffini's rule: $(x-a)$ divides a polynomial $P(x)$ if and only if $P(a)=0$. It looked to me so incredibly easy and so full of consequences. I hope they still learn it with a proof. – Pietro Majer Sep 18 2011 at 20:12
## 44 Answers
Extending on Ralph's answer, there is a similar very neat proof for the formula for $Q_n:=1^2+2^2+\dots+n^2$. Write down numbers in an equilateral triangle as follows:
```` 1
2 2
3 3 3
4 4 4 4
````
Now, clearly the sum of the numbers in the triangle is $Q_n$. On the other hand, if you superimpose three such triangles rotated by $120^\circ$ each, then the sum of the numbers in each position equals $2n+1$. Therefore, you can double-count $3Q_n=\frac{n(n+1)}{2}(2n+1)$. $\square$
(I first heard this proof from János Pataki).
How to prove formally that all positions sum to $2n+1$? Easy induction ("moving down-left or down-right from the topmost number does not alter the sum, since one of the three summands increases and one decreases"). This is a discrete analogue of the Euclidean geometry theorem "given a point $P$ in an equilateral triangle $ABC$, the sum of its three distances from the sides is constant" (proof: sum the areas of $APB,BPC,CPA$), which you can mention as well.
How to generalize to sum of cubes? Same trick on a tetrahedron. EDIT: there's some way to generalize it to higher dimensions, but unfortunately it's more complicated than this. See the comments below.
If you wish to tell them something about "what is the fourth dimension (for a mathematician)", this is an excellent start.
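For anyone who wants to check the double count by machine, here is a small Python sketch; the index formula for the $120^\circ$ rotation is my own choice of coordinates for the triangular array:

```python
def rotations_check(n):
    # cell (i, j) with 0 <= j <= i < n holds the value i + 1
    val = lambda i, j: i + 1
    rot = lambda i, j: (n - 1 - j, i - j)  # rotate the triangular array by 120 degrees

    # every position of the three superimposed triangles sums to 2n + 1
    for i in range(n):
        for j in range(i + 1):
            i2, j2 = rot(i, j)
            i3, j3 = rot(i2, j2)
            assert val(i, j) + val(i2, j2) + val(i3, j3) == 2 * n + 1

    # double count: 3 * Q_n = (number of cells) * (2n + 1)
    Q = sum(k * k for k in range(1, n + 1))
    assert 3 * Q == (n * (n + 1) // 2) * (2 * n + 1)

for n in range(1, 50):
    rotations_check(n)
```

The rotation sends the corner $(0,0)$ to $(n-1,0)$ to $(n-1,n-1)$ and back, and one checks directly that the three values at any cell are $(i+1) + (n-j) + (n-i+j) = 2n+1$.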
-
2
@Federico: Careful. The sum of cubes does not correspond to a tetrahedron, but rather a pyramid. (Which does not have this nice symmetry) However the slightly modified sum $$Q_N'=\sum_{n=1}^N n\frac{n(n+1)}{2}=\frac{1}{2}\sum_{n=1}^N n^3+\frac{1}{2}\sum_{n=1}^N n^2$$ will correspond to a tetrahedron. By using this symmetry we find $$Q_N' = \frac{1}{4} (3N+1)\left(\frac{N(N+1)(N+2)}{6}\right)$$ since each entry will be $3N+1$ and the tetrahedral numbers are $\binom{N+2}{3}=\frac{N(N+1)(N+2)}{6}$. From here we can deduce that $$\sum_{n=1}^N n^3=\frac{(N^2+N)^2}{4}.$$ – Eric Naslund Sep 16 2011 at 23:31
3
In short, you won't have a nice generalization of the solution for $n=2$ to higher dimensions. Such a generalization is not expected either since Faulhaber's Formula is not so simple. – Eric Naslund Sep 16 2011 at 23:39
1
@Eric Oh, you're right. :( What a pity, it would've been an even neater generalization. – Federico Poloni Sep 17 2011 at 9:57
1
Cool. Both the example and the follow up comments. – Manya Sep 19 2011 at 8:30
The theorem of "friends and strangers": the Ramsey number $R(3,3)=6$. Not only can the proof be understood by high-school students, a proof can be discovered by students at that level via something akin to the Socratic method. First students can establish the bound $R(3,3) > 5$ by 2-coloring the edges of $K_5$:
Then they can reason through that a 2-coloring of the edges of $K_6$ must contain a monochromatic triangle, and so $R(3,3)=6$: in every group of six, three must be friends or three must be strangers.
After this exercise, an inductive proof of the 2-color version of Ramsey's theorem is in reach.
An added bonus here is that one quickly reaches the frontiers of mathematics: $R(5,5)$ is unknown! It can be a revelation to students that there is a frontier of mathematics. And then one can tell the Erdős story about $R(6,6)$, as recounted here. :-)
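Since $K_5$ has only $2^{10}$ edge 2-colorings and $K_6$ only $2^{15}$, the whole theorem can even be verified by brute force; a short Python sketch:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    # color: dict mapping each edge (a, b) with a < b to 0 or 1
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def all_colorings(n):
    edges = list(combinations(range(n), 2))
    for bits in product([0, 1], repeat=len(edges)):
        yield dict(zip(edges, bits))

# Some 2-coloring of K5 avoids a monochromatic triangle ...
assert any(not has_mono_triangle(5, c) for c in all_colorings(5))
# ... but every 2-coloring of K6 contains one, so R(3,3) = 6.
assert all(has_mono_triangle(6, c) for c in all_colorings(6))
print("R(3,3) = 6 verified by exhaustive search")
```

Of course the point of the classroom argument is that no such search is needed, but students enjoy seeing the pigeonhole reasoning confirmed.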
-
3
R(3,3)=6 sticks in my memory as the one time I managed to explain mathematics to a non-mathematical friend in the pub – Yemon Choi Sep 8 2011 at 21:57
5
This might be apocryphal, but I've read a story about a sociologist who was surprised to discover such patterns of friendship or non-friendship among his subjects, and pondering the deep psychological origins. If true, a wonderful argument for the need for math literacy. – Thierry Zell Sep 9 2011 at 1:11
23
From Jacob Fox's lecture notes: "In the 1950’s, a Hungarian sociologist S. Szalai studied friendship relationships between children. He observed that in any group of around 20 children, he was able to find four children who were mutual friends, or four children such that no two of them were friends. Before drawing any sociological conclusions, Szalai consulted three eminent mathematicians in Hungary at that time: Erdős, Turán and Sós. A brief discussion revealed that indeed this is a mathematical phenomenon rather than a sociological one." – Joseph O'Rourke Sep 9 2011 at 12:07
4
Re the Szalai story: $R(4,4) = 18$. – Joseph O'Rourke Sep 9 2011 at 13:36
2
I finally tracked down where I must have read about Szalai's story for the first time: N. Alon and M. Krivelevich's article on extremal combinatorics in the Princeton's companion. Also available at: cs.tau.ac.il/~krivelev/papers.html – Thierry Zell Sep 29 2011 at 17:30
Euler's Bridges of Konigsberg problem. You can give it to students for five minutes to play with, watch them get annoyed, and then offer them the classical simple and beautiful impossibility proof. I think a lot of high school students, and even bright middle school students, would be totally convinced.
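The impossibility criterion behind the proof (a walk using every edge once forces all but at most two vertices to have even degree) is easy to test in code; a sketch, with the Königsberg multigraph written out by hand:

```python
from collections import Counter

def euler_walk_possible(edges):
    # An Euler walk exists iff the (connected) multigraph has 0 or 2 odd-degree vertices.
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = sum(1 for d in deg.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Konigsberg joining land masses A, B, C, D:
konigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
              ("A", "D"), ("B", "D"), ("C", "D")]
print(euler_walk_possible(konigsberg))  # -> False: all four land masses have odd degree
```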
-
The proof, by counting inversions, that you can't interchange the 14 and 15 in the 15 puzzle, just by sliding, is accessible to high-school students, introduces important ideas, and might be found beautiful.
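One common formulation of the parity invariant, as a Python sketch for the standard 4x4 board (the rule combines the inversion count of the tiles with the row of the blank):

```python
def solvable_4x4(board):
    # board: 16 entries in reading order; 0 marks the blank square.
    tiles = [t for t in board if t != 0]
    inv = sum(tiles[i] > tiles[j]
              for i in range(len(tiles)) for j in range(i + 1, len(tiles)))
    blank_row_from_bottom = 4 - board.index(0) // 4
    # Width-4 invariant: sliding moves preserve the parity of
    # inversions + (row of the blank, counted from the bottom).
    return (inv + blank_row_from_bottom) % 2 == 1

solved  = list(range(1, 16)) + [0]
swapped = list(range(1, 14)) + [15, 14, 0]   # 14 and 15 interchanged
print(solvable_4x4(solved), solvable_4x4(swapped))  # -> True False
```

The swapped position has exactly one inversion more than the solved one, so it sits in the other parity class and can never be reached by sliding.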
-
Going by the parameters of the question, I don't see why the proof would necessarily need to be of a sophisticated theorem. I think Euclid's proof of the infinitude of primes is beautiful and definitely accessible to a high school audience. Having given the proof, one might reflect on some of its features that generalize to many other contexts, like proof by contradiction or the ability to use a clever construction to avoid infinite enumeration.
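Euclid's construction is effective: from any finite list of primes it produces a prime not in the list. A quick Python sketch (trial division is fine here because the numbers stay small):

```python
def next_new_prime(primes):
    # Euclid's step: any prime factor of (product + 1) is not in the list.
    m = 1
    for p in primes:
        m *= p
    m += 1
    d = 2
    while m % d:          # smallest divisor > 1 of m is automatically prime
        d += 1
    return d

ps = [2]
for _ in range(5):
    ps.append(next_new_prime(ps))
print(ps)  # -> [2, 3, 7, 43, 13, 53]
```

Note that the new number $m$ need not be prime itself ($2\cdot3\cdot7\cdot43 + 1 = 1807 = 13\cdot139$), which is exactly the confusion the contradiction-style retelling tends to create.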
-
19
No! Do not use this proof as an occasion to talk about proof by contradiction. Euclid's proof of this proposition was not by contradiction. The conventional practice of rearranging it into a proof by contradiction not only adds an extra complication that serves no purpose, but also leads to confusions such as the belief that if you multiply the first n primes and add 1, the result is always prime. See the paper by me and Catherine Woodgold on this: "Prime Simplicity", Mathematical Intelligencer, autumn 2009, pages 44--52. – Michael Hardy Sep 15 2011 at 20:03
1
It IS by contradiction, just not the kind of contradiction we have in mind. – Mohamed Alaa El Behairy Sep 28 2011 at 1:35
8
To complete Michael Hardy's comment, Euclid's original proof proves the following statement: given any finite list of primes, we can extend the list by finding a prime not in the list. (Proof: multiply the primes in the list and add one; any prime factor of this new number is a prime not in the original list.) So it's a constructive way of taking a list of primes and producing another prime; we don't have to assume that the original list was "all the primes" (as in the contradiction proof) or that they were the first n primes. – shreevatsa Dec 23 2011 at 4:42
The trefoil knot is non-trivial.
Proof: It has a tricoloring: And the existence of a tricoloring is preserved by Reidemeister moves. QED
-
1
+1 because it is so obviously technically incomprehensible to a high school student (at my high school) yet plausibly beautiful, and beautifully plausible. – roy smith Sep 16 2011 at 5:05
12
@Roy: I disagree with you. I don't think that this is incomprehensible to high school students. The notion of knot is pretty intuitive, and it's very easy to explain what a tricoloring is. Also, the Reidemeister moves are pretty easy things to explain (ok -- I'm not talking about the actual proof that they generate everything). The most difficult aspect of the argument is probably to convince a high school student that there is need for proof. Namely, isn't the trefoil obviously non-trivial!? For that, it might be good to show them some monster unknots first... – André Henriques Sep 16 2011 at 5:17
5
When I was shown the Reidemeister moves in school, several of my classmates and I made the objection, in essence, that it wasn't clear that they generated everything. Worse, since we didn't have topology to work with, we didn't really have a "real" definition to compare it with, so it felt to us that the real issues were being swept under the rug. – Daniel McLaury Jun 15 at 18:49
Two of my favorites: (1) The parameterization of all primitive Pythagorean triples; (2) The formula for the $n$th Fibonacci number in terms of the golden ratio $\phi = \frac{1+\sqrt{5}}{2}$, with the corollary that $\displaystyle \lim_{n \longrightarrow \infty} F_n/F_{n-1} = \phi$.
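The closed form is easy to check numerically; a sketch using the fact that the $\psi^n$ term in Binet's formula is smaller than $1/2$ in absolute value, so rounding $\phi^n/\sqrt{5}$ recovers $F_n$ exactly:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 40):
    # Binet: F_n = (phi^n - psi^n)/sqrt(5) with psi = (1-sqrt(5))/2,
    # and |psi^n/sqrt(5)| < 1/2, so rounding suffices.
    assert fib(n) == round(phi ** n / sqrt(5))

assert abs(fib(30) / fib(29) - phi) < 1e-10   # the ratio converges to phi
print("Binet's formula verified for n = 1..39")
```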
-
Seeing the struggle of many students with standard trigonometry, I especially like the rational parametrization of $x^2+y^2=1$ (which is equivalent to listing all Pythagorean triples) by starting from $\sin^2\phi+\cos^2\phi=1$ and then using $$\sin\phi=\frac{2t}{1+t^2}, \quad \cos\phi=\frac{1-t^2}{1+t^2}, \qquad t=\tan\frac{\phi}2.$$ Note that the formulas are usually used in the context of integration of rational expressions in sine and cosine.
At the same time, a more general "geometric" argument (applicable to general quadratics), due to Bachet (1620), is still at school level. Namely, fix a single rational point on the curve, $(x_0, y_0)$ say, and consider the intersection points of the curve and straight lines $y-y_0=t(x-x_0)$ with rational slope $t$ passing through the point.
A beauty here is because of variety of different geometric and analytic methods for solving a classical arithmetic problem.
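Plugging a rational $t = p/q$ into the parametrization and clearing denominators gives the familiar triples $(q^2-p^2,\ 2pq,\ q^2+p^2)$; a short sketch that generates the small primitive triples this way:

```python
from math import gcd

def triple(p, q):
    # t = p/q (with 0 < p < q, gcd(p, q) = 1) in the parametrization
    a, b, c = q * q - p * p, 2 * p * q, q * q + p * p
    g = gcd(gcd(a, b), c)          # when p, q are both odd the triple has a factor 2
    return a // g, b // g, c // g

triples = sorted({tuple(sorted(triple(p, q)))
                  for q in range(2, 8) for p in range(1, q) if gcd(p, q) == 1})
print(triples)   # contains (3, 4, 5), (5, 12, 13), (8, 15, 17), ...
```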
-
2
Since coordinate geometry was invented after 1620 I wonder how Bachet could have done that. – Franz Lemmermeyer Sep 8 2011 at 13:23
6
Franz, I will be definitely happier if you are a little bit more constructive in your critisism: you seem to be the right person to explain why the method is usually attributed to Bachet! – Wadim Zudilin Sep 10 2011 at 6:12
Cantor's diagonal argument.
The warm-up could be an equally beautiful proof, namely that the rationals are countable.
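The enumeration of the rationals can even be made into an explicit iteration: the Calkin-Wilf/Newman recurrence $q \mapsto 1/(2\lfloor q\rfloor - q + 1)$ visits every positive rational exactly once. A sketch:

```python
from fractions import Fraction

def rationals():
    # Calkin-Wilf/Newman: starting from 1, this recurrence hits every
    # positive rational exactly once.
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

gen = rationals()
first = [next(gen) for _ in range(10)]
print(", ".join(map(str, first)))  # -> 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, 4/3, 3/5
```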
-
6
I am a bit torn on this one. At least I believe before one would have to also prove, say, that the cardinality of the rationals is equal to that of the naturals (and/or related things). Because if not the 'insight' that there are more reals than naturals might seem too 'obvious' to make a proof enlightening. – quid Sep 8 2011 at 10:51
2
But the enumeration proving that the rationals are countable is also almost as beautiful as the diagonal argument! Perhaps they could be combined. – J.C. Ottem Sep 8 2011 at 11:11
1
Yes, I agree; combined it could be very interesting. I should have made my agreement clearer in my first comment; sorry about that. – quid Sep 8 2011 at 11:19
1
The counting-rationals argument loses some appeal when you have to deal properly with skipping the non-reduced fractions. The visual proof of course establishes that $\mathbb{N} \times \mathbb{N} \cong \mathbb{N}$ as sets, and the countability of $\mathbb{Q}$ follows by applying an unrelated lemma on cardinality of surjective images. Glossing over a lemma like that seems exactly like what a beautiful proof (especially an exemplary proof) should avoid. – Ryan Reich Sep 9 2011 at 21:09
2
There is a simple bijection between the positive rationals and positive integers shown by the fusc function. Let $f(n)$ be the number of ways of representing $n$ as a sum of powers of $2$ with at most $2$ copies of each power. $f(4)=3$ since $4=4,2+2,2+1+1$. The sequence $\{f(n)\}_{n=0} = \{1,1,2,1,3,2,3,1,4,...\}$. Over natural numbers $n$, $g(n) = f(n-1)/f(n)$ hits each positive rational precisely once. See also the Calkin-Wilf tree, how Euclid's algorithm reduces relatively prime ordered pairs. The sequences of numerators and denominators in order are offset copies of $\{f(n)\}$. – Douglas Zare Sep 10 2011 at 21:33
The proof that $\sqrt{2}$ is irrational is a nice example of proof by contradiction.
-
1
@Manya My conception of beauty has changed since I first saw that proof. At that time, I did indeed consider it quite beautiful. Now, however, it's so embedded in me that it's hard to appreciate its beauty. – Quinn Culver Sep 8 2011 at 13:58
The Gale-Shapley stable marriage theorem, http://en.wikipedia.org/wiki/Stable_marriage_problem .
The algorithm and its proof are very much accessible to school students. Despite its innocuous look, the algorithm is not easy at all to invent.
On a similar note, Hall's theorem: http://en.wikipedia.org/wiki/Hall%27s_marriage_theorem#Graph_theory . This looks like a recreational puzzle but actually is closer to university mathematics than anything done in high school.
Here is another combinatorial exercise which, properly presented, does not even look like mathematics: http://www.artofproblemsolving.com/Forum/viewtopic.php?p=279550#p279550 . The thing I don't like about it is that the standard "gotcha" proof (explained in the usual, informal way) requires a bit too much concentration to understand - some students might fail at it and take it as an example that mathematical proofs are something one either believes or not, rather than something one can check. Of course, one can formalize the proof, but this requires quite an amount of time in a high school class.
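To make the proposal (deferred acceptance) algorithm concrete, here is a minimal Python sketch with made-up names and preference lists; the content of the theorem is that the loop always terminates in a matching with no unstable pair:

```python
def gale_shapley(men_pref, women_pref):
    # men_pref / women_pref: dict name -> preference list, most preferred first
    free = list(men_pref)
    next_proposal = {m: 0 for m in men_pref}
    engaged = {}                      # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_pref.items()}
    while free:
        m = free.pop()
        w = men_pref[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up; her old partner is free again
            engaged[w] = m
        else:
            free.append(m)            # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))   # a stable matching: a-x, b-y
```

Each man proposes down his list, each woman holds the best proposal seen so far, and every trade-up only improves her partner; that monotonicity is the heart of both the termination and the stability argument.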
-
1
I’ll point out that the “combinatorial exercise” is actually useful. Assume that we are given a sequence $s_1,\dots,s_n$ of symbols of arities $a_1,\dots,a_n\ge0$, respectively. It is easy to see that if we can arrange them into a well-formed term, then $\sum_ia_i=n-1$. The exercise says that, conversely, if this identity holds, then there exists a (unique) cyclic permutation of the string $s_1\dots s_n$ which is a valid term in Polish (prefix) notation. Among other things, this allows you to count the well-formed terms consisting of $n$ symbols. – Emil Jeřábek Sep 8 2011 at 12:29
The Halting Problem. The first time I saw this was my senior year in high school and it completely blew me away. All you need is a notion of what an algorithm is and very basic logic (enough to recognize that assuming $A$ and deriving $\neg A$ is a contradiction)
In a similar vein, Russell's Paradox. The problem here is that you need some basic set theory, so this is more for advanced high school students.
The first beautiful proof I saw in high school (it was beautiful at the time, but now seems too trivial) was the fact that for a geometric series $a, ax, ax^2, \dots$ the sum of the first $n$ terms is $a \cdot \frac{1-x^n}{1-x}$. I thought this was cool because of all the cancellation that seems to come out of nowhere. The treatment here is nice.
-
One of the keys to making a proof accessible to high school students (or just non-mathematicians) is to make the answer relevant. This gives a dual responsibility, to ensure that the theorem is motivated and that the proof is accessible. The proof of the infinity of the primes has been mentioned already and is a fantastic example. You can lead students in to it using the (almost trivial) proof that there is no largest integer.
Another example is the classification of the regular polyhedra. With good students and models you can even lead them to the proof there there are at most 6 regular polytopes in 4d (actually showing they all exist is a little harder).
Keeping with polyhedra, the Euler characteristic is also powerful. Start with balloons and get the students to draw lines freely so you get a tiling. Then get them to count faces, vertices and edges. David Eppstein collected 19 proofs to choose from, several of which would be perfect for non-mathematicians: http://www.ics.uci.edu/~eppstein/junkyard/euler/
As a final example (and to show that it does not have to be deep mathematics to motivate) you can consider the question of blocking a square on a chess board and filling the remainder with trominoes. It starts with a puzzle you can get people to play with, and leads to a lovely induction proof: http://www.cut-the-knot.org/Curriculum/Games/TriggTromino.shtml
Actually polyominoes are a fantastic source of many other fun, non-trivial but accessible proofs.
-
I recommend Kelly's proof of the Sylvester-Gallai theorem (the original proof of Gallai was about 30 pages long, this one takes a few lines). The theorem and the proof can be read here.
-
1
@Emil: There are at least 3 points on $l$, at least two on one side of the perpendicular. Call the closer of these two $B$ and the farther of these two $C$. Now let $m$ denote the line $PC$, then the pair $(B,m)$ has a smaller distance than $(P,l)$. Contradiction. – GH Sep 8 2011 at 12:08
1
Thanks, now I got it. I misread it first. – Emil Jeřábek Sep 8 2011 at 12:36
1) Many elementary binomial identities or identities with Fibonacci numbers have beautiful proofs. Let me only mention the matrix representation of Fibonacci numbers whose determinant gives Cassini's identity.
2) Another elementary problem is the following: is it possible to cover a checkerboard from which one white and one black square have been removed with dominoes? To show that this is possible, run through the board in a cyclical way, and observe that on this path an even number of squares lies between the removed white square and the removed black square. Since I don't know how to make figures, I indicate such a path for a 4x4 board: ((1,1),(1,2),(1,3),(1,4),(2,4),(2,3),(2,2),(3,2),(3,3),(3,4),(4,4),(4,3),(4,2),(4,1),(3,1),(2,1)).
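The matrix identity behind Cassini is $\begin{pmatrix}1&1\\1&0\end{pmatrix}^n = \begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}$, so taking determinants gives $F_{n+1}F_{n-1} - F_n^2 = (-1)^n$ at once; a quick sketch:

```python
def mat_mul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 1], [1, 0]]
P = [[1, 0], [0, 1]]          # identity; after n steps P holds M^n
for n in range(1, 15):
    P = mat_mul(P, M)
    Fn1, Fn, Fn_1 = P[0][0], P[0][1], P[1][1]   # F_{n+1}, F_n, F_{n-1}
    assert Fn1 * Fn_1 - Fn * Fn == (-1) ** n    # Cassini's identity
print("Cassini verified for n = 1..14")
```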
-
3
I would also add the (checkerboard) proof that if you remove opposite corners from the 8x8 board, the resulting figure can't be tiled with dominoes; if presented without the board coloration it's not immediately obvious, and its proof provides a wonderful a-ha moment that should be easily accessible for high school students (if not earlier). These (along with the various graph-theory problems) also have the advantage of showing students that mathematics is about more than just numbers. – Steven Stadnicki Sep 8 2011 at 21:31
The formula $1 + 2 + ... + n = n(n+1)/2$ can be proved at middle school level: assume first that $n$ is even. Then there are $n/2$ pairs $(1,n), (2,n-1), \dots, (n/2, n/2+1)$ whose sum is always $n+1$. Thus the overall sum is $(n/2)(n+1)$. The case when $n$ is odd can be treated in the same manner.
-
13
Actually this proof would be more approriate for primary school (age 10 or so). Smarter kids (like Gauss) figure it out themselves. – GH Sep 8 2011 at 12:04
2
I prefer the "visual proof" because you don't have to reason, you just "see" it: put 1 little square in the first row, two in the second,... n in the last row, and you've obtained a figure of which you can easily compute the area as: (AreaOfBigSquare-AreaOfSquaresOnTheDiagonal)/2+ AreaOfSquaresOnTheDiagonal=$(n^2-n)/2+n=n(n+1)/2$. Of course to pass from $(n^2-n)/2+n$ to $n(n+1)/2$ the kid must have learned some "algebra". – Qfwfq Sep 8 2011 at 13:48
1
No, the best proof is the visual one where you draw a triangle with 1, 2, ..., n circles in each row, and another row below the last with n + 1 circles. Anything in the smaller triangle can be specified by two "coordinates" in the last row, obtained by dropping down parallel to the sides. Thus, $\binom{n + 1}{2}$ using the definition of "n + 1 choose 2"; to get a formula for that number you could use the argument in Dan's comment. This one is nice because it is both visual and a bijective proof, rather than computational. – Ryan Reich Sep 8 2011 at 15:31
3
How about this: write the numbers from $1$ to $n$ in row, then write them again in a row below, but backwards. Add up the columns, to get $n+1$ each time, so $n(n+1)$ in total, because there are $n$ columns. Divide by $2$. – euklid345 Sep 9 2011 at 4:22
I suggest Euler's polyhedron formula with application to the Platonic solids. One first observes that instead of polyhedra one can consider graphs in the plane, counting the unbounded region as a face. One observes next that one can just as well allow the edges to be pieces of curves. Then one observes that the formula $F - E + V = 2$ is preserved if one edge or one vertex is added, thus the formula is proved by induction. Then use the formula to prove that there are no regular polyhedra except the five Platonic solids as follows: faces can only be triangles, squares or pentagons, edges are always common to exactly two faces, and for the number of faces that meet in a vertex there are a number of possibilities that one checks. For each case, $E$ and $V$ can be expressed in terms of $F$, and the formula gives the possible values of $F$.
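The case check at the end can be written out mechanically: with $p$-gon faces and $q$ faces meeting at each vertex, $E = pF/2$ and $V = pF/q$, so Euler's formula forces $F = 4q/(2p + 2q - pq)$, which is positive for exactly five pairs $(p, q)$. A sketch:

```python
solids = []
for p in range(3, 7):          # faces are p-gons
    for q in range(3, 7):      # q faces meet at each vertex
        denom = 2 * p + 2 * q - p * q
        if denom > 0:          # equivalent to 1/p + 1/q > 1/2
            F = 4 * q // denom
            E = p * F // 2
            V = p * F // q
            assert V - E + F == 2
            solids.append((p, q, V, E, F))
print(solids)
# (3,3): tetrahedron, (3,4): octahedron, (3,5): icosahedron,
# (4,3): cube, (5,3): dodecahedron
```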
-
I've been collecting simple, often one-step, proofs.
http://www.cut-the-knot.org/proofs/index.shtml
Some I judge beautiful - these are listed separately.
-
I suggest the proof Archimedes wanted on his tombstone and its relatives: since two solids with the same horizontal slice area at every height have the same volume, it follows by Pythagoras that the volume of a cylinder equals the sum of the volumes of an inscribed cone and an inscribed solid hemisphere.
To generalize this, the volume generated by revolving a solid hemisphere around a planar axis in 4-space equals that generated by revolving a cylinder minus that generated by revolving a cone. Using the fact that the center of gravity of a cone is 1/4 of the way up from the base, one obtains the volume of a 4-sphere as $\pi^2 R^4/2$.
A generalization of the first computation is that of the volume of a bicylinder (intersection of two perpendicular cylinders), since it is the difference of the volumes of a cube containing the bicylinder and a square-based double pyramid also inscribed in the cube.
I find these beautiful, but of course that is subjective.
I also like Euclid's argument for the Pythagorean theorem, and for constructing a regular pentagon, but they are hard to reproduce here briefly.
-
For someone in high school, I think it's good to prove that the sum of the interior angles of a triangle is $\pi$ if they don't know why. Personally, I was never shown why this fact is true, and I feel that it's generally a bad idea to not know why something in math is true, especially when the answer is pretty. My favorite proof is to think about how the normal vector changes as you walk around the triangle -- it's nice because it generalizes to other shapes (which may not even be polygons).
-
This topological proof of the fundamental theorem of algebra is accessible to high school students, particularly those at the precalculus level.
There are two major problems with this. First, while the winding number is intuitive, it takes effort to define it rigorously. Second, you also want to establish the basic property that the winding number doesn't change as you deform a curve without going over the origin, which again is difficult to establish rigorously without topology. Without these details, you might call this a hand-waving argument instead of a proof. It's good to give references to where these results will be established rigorously, and to give arguments for other results which are more complete.
Nevertheless, I like presenting this proof for several reasons. I think it's beautiful. Geometrically, what $x\mapsto x^n$ does to the complex plane is easy to understand, but many students have little intuition about what this map does, only what polynomials look like on the real line. So, this argument doesn't just say that the statement is true, it is illuminating. The fundamental theorem of algebra is also a result students encountered in algebra, but they usually don't know why it's called a theorem. This is also an opportunity to talk a little about what is studied in more advanced areas of mathematics. It can lead into discussions of topology or the difficulty solving polynomials by radicals.
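One can even let students compute winding numbers numerically: sample $p(z)$ around a circle and add up the small changes in argument. On a tiny circle the image stays near $p(0)$ and winds $0$ times; on a huge circle $p$ behaves like $z^n$ and winds $n$ times, so somewhere in between the image must cross the origin. A sketch (polynomial and radii are my own example):

```python
import cmath, math

def winding_number(f, radius, samples=20000):
    # total change of arg f(z) as z runs around |z| = radius, in full turns
    total = 0.0
    prev = f(radius)                          # start at angle 0
    for k in range(1, samples + 1):
        z = radius * cmath.exp(2j * math.pi * k / samples)
        cur = f(z)
        total += cmath.phase(cur / prev)      # small angle increment in (-pi, pi]
        prev = cur
    return round(total / (2 * math.pi))

p = lambda z: z**3 - 2 * z + 2
print(winding_number(p, 0.01), winding_number(p, 100.0))  # -> 0 3
```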
-
You should certainly look at the two books by Ross Honsberger, "Mathematical Gems I" and "Mathematical Gems II". A favourite example of mine is the proof due to Conway that there are configurations of checkers below the half plane on an infinite board that allow you to move a checker four rows into the upper half plane, but not five rows.
The negative result is an ingenious argument using nothing more than the quadratic formula, but provides a great example of how to apply mathematics in unexpected contexts.
-
I like the lovely theorem in 19th century Euclidean Geometry as follows.
Let ABC be a triangle. Let D, E, F be points on BC, CA, AB respectively. Then the circumcircles of AFE, BDF, CDE meet at a point.
I like this because the proof uses the property of the angles of cyclic quadrilaterals, and its converse. Also if one wants to convince students of the necessity of proof, then one should start with a result which is surprising.
It is a good thing that this situation can be worked on for more implications. Let P, Q, R be the centres of the three circles just given. Then the triangle PQR is similar to the triangle ABC.
For all these reasons I think it is a pity that some of Euclidean Geometry is not in University courses, or often school courses, in order to acquaint students with something important in our mathematical heritage. Should a student get a degree in maths without knowing why the angle in a semicircle is a right angle?
-
I'm in high school and I loved the proof of the Fermat-Torricelli point of a triangle.
-
Morley's Theorem.
Wikipedia's proof is completely elementary and only involves trigonometric identities and Euclidean plane geometry.
There is also a proof by Alain Connes, based on affine geometry techniques. Of course it is a bit more technical, but again it involves math that doesn't go much beyond the high school level, and could be appreciated by the most gifted students
-
1
Thanks. Actually this example was given by Rota in a famous paper about the phenomenology of mathematical beauty as an example of a theorem that is surprising but not beautiful (in response to Hardy's claim that beauty arises from a feeling of surprise.) Of course, it is open to debate... Thanks for the Connes reference, I didn't know about that. – Manya Sep 8 2011 at 11:40
1
See also Conway's proof (linked at the Wikipedia article). – Todd Trimble Sep 8 2011 at 12:05
1
@Manya: Well, I guess it's a matter of taste. Personally, I do find this theorem beautiful. Thank you for the remark – Francesco Polizzi Sep 8 2011 at 12:40
1
But the question was asking for a beautiful proof, not a beautiful theorem. – euklid345 Sep 9 2011 at 4:51
1
P.S. (@ eucklid) Good point. Rota actually claimed that neither the theorem nor any of the proofs of it (thus far?) are beautiful. – Manya Sep 19 2011 at 8:16
In the general game "Poset Chomp" the first player always has a winning strategy. The proof is a one-line strategy stealing argument, hence nonconstructive. In fact, a winning strategy is unknown in most cases, which makes the result interesting and mysterious. For a good quick account see here.
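The strategy-stealing argument produces no strategy, but on small rectangular boards one can simply solve the game by exhaustive search and confirm the theorem; a brute-force sketch, representing a position by its non-increasing tuple of column heights with the poisoned square at the bottom-left:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(heights):
    # heights: non-increasing column heights; the bottom-left cell is poisoned.
    if heights == (1,) + (0,) * (len(heights) - 1):
        return False           # only the poisoned square is left: we must eat it
    for c in range(len(heights)):
        for h in range(heights[c]):
            # eat cell (row h, column c): truncate every column c' >= c to height h
            new = tuple(min(x, h) if i >= c else x for i, x in enumerate(heights))
            if new == (0,) * len(new):
                continue       # that move eats the poisoned square itself
            if not first_player_wins(new):
                return True    # move to a losing position for the opponent
    return False

for rows, cols in [(2, 2), (3, 4), (4, 5)]:
    assert first_player_wins((rows,) * cols)
print("first player wins on all boards tested")
```

The contrast is instructive: the search finds winning first moves for these small boards, while the one-line argument guarantees a win on every board without exhibiting any move at all.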
-
3
@GH: I'm aware of the history of chomp and of simple posets and the strategy-stealing argument. Have you worked with middle school students who were not selected to be competitors in a math competition? What percentage do you think will understand and be impressed by a nonconstructive existence result about an abstract game they have not seen before? When I tell members of the general public about mathematics, I try to relate it to concrete objects and situations I think they know beforehand. – Douglas Zare Sep 9 2011 at 0:08
Why not some elementary theorems of Euclidan geometry? As I recall, the more general and fundamental theorems were just taken as given in my schooling, but I think many of them can be given accessible and beautiful proofs. Here are some good ones:
1) The Pythagorean theorem. (many lovely proofs)
2) Parallelograms having congruent bases and heights have the same area. (Euclid's proof is pretty.)
3) Use 2 to derive that similar triangles have corresponding sides in common proportion.
4) Two distinct circles have at most 2 points of intersection.
5) Prove the formula for volume of a pyramid without using calculus.
-
4
What I absolutely dislike about 4) is how it cements the common misconception that mathematics is about giving painstakingly difficult proofs to intuitively obvious statements. Part 3) is only slightly better in this aspect. The rest are pretty good. – darij grinberg Sep 9 2011 at 11:50
1
The arguments are not so painful. #4 merely involves some observations relating the center of a circle, isosceles triangles, and the fact that two distinct lines intersect at most once. In any case I disagree with the sentiment. Part of the mathematical way of thinking is resisting the urge to accept things just because they seem obvious at first, and always demanding that your knowledge be put on a firmer footing. I believe this is the essential "life lesson" students should take from mathematics. Sadly it is not being imparted in today's secondary schools much. – mbsq Sep 9 2011 at 19:00
10
I disagree. In school, mathematical proofs are like castles built on sand - not only do most students never realize what they are for, but they often tend to be sloppy right up to flawed (not "flawed" in the sense of "informal", but flawed in the sense of arguments that wouldn't be accepted as a correct proof even in a published paper), and the idea that proofs can be interesting is totally missing (at best they are considered a necessary evil by students and teachers alike). Adding to this a "revelation" that mathematicians prove trivial things in complicated (for students, at least) ... – darij grinberg Sep 9 2011 at 21:04
6
... ways doesn't help. In reality, mathematics is maybe 1% about proving things that are intuitively obvious (even topology), and 99% about proving things that are either surprising or seem to be useful in proving surprising things. Skepticism is a good life lesson, but it is better taught by providing examples of false intuitively obvious assertions with counterexamples than by providing examples of correct intuitively obvious assertions with their seemingly redundant proofs. – darij grinberg Sep 9 2011 at 21:07
This is maybe ambitious, for the details are obviously not completely accessible at the high school level; but the beauty of the ideas is, and this video is really a superb example of popularization: Smale's theorem on the eversion of the 2-sphere and Thurston's construction.
-
Minkowski's theorem (every convex region in the plane of area greater than 4 that is symmetric about the origin contains a lattice point other than $(0,0)$) is not at all obvious (are you sure you can't squeeze a sufficiently large "blob of irrational slope" in there?) but has a beautiful, simple, and surprising geometric proof.
-
1
Do you want to give a hint about the proof? – Manya Nov 19 2011 at 12:22
1
Draw the region $R$ in the plane. Cut the plane into 2x2 squares by cutting along the lines $x=2i$ and $y=2j$ for all integers $i,j$; each square contains some part of $R$ (possible none.) Stack the squares on top of each other. Since $R$ has area greater than 4, there exist two squares whose parts of $R$ overlap. Write down what this means algebraically, apply symmetry and convexity, and construct the nontrivial lattice point. – unknown (google) Nov 20 2011 at 11:14
Those are pretty nice, and can be done at a fairly low level (say, from 12 years old onwards) :
• The proof of the formula "half base times height" for the area of a triangle, by first considering a right triangle and completing a rectangle, then considering an arbitrary triangle and breaking it in two along a height (two cases: inside (+) or outside (-)): it is a nice example of how mathematicians treat the general case by reduction to particular cases;
• Euclid's proof of the Pythagorean theorem using the previous formula, as in this animation.
-
http://mathoverflow.net/questions/32645?sort=votes
## Eigenvector centrality
I was wondering if you can calculate eigenvector centrality with undirected graphs and, if you can, what is the best means of doing so. I understand how to calculate the adjacency matrix and how to calculate its eigenvector (spectral) decomposition; I am just unaware how to combine these parts in order to calculate eigenvector centrality. Thanks in advance!
-
What's the matter with the "using the adjacency matrix to compute eigenvector centrality" section of the wikipedia article? en.wikipedia.org/wiki/Centrality Perhaps you are considering a more general notion than this? – Jon Bannon Jul 20 2010 at 14:46
Looking at en.wikipedia.org/wiki/… it seems that it suffices to find an eigenvector of the largest eigenvalue of the adjacency matrix. Then the eigenvector centrality of the $i$-th vertex is the $i$-th coordinate of such a vector. – Daniel Litt Jul 20 2010 at 14:46
## 1 Answer
The wikipedia article quoted by Jon Bannon mentions the power-iteration method as readily applicable, and in my experience (for connected graphs with degrees < 5) it is quite efficient, say starting with the vector with weight 1 for every site. And this wikipedia article mentions several other choices for measuring centrality, besides the "eigenvector centrality". But it does not mention some choices indicated in D. J. Klein, "Centrality Measure in Graphs", J. Math. Chem. 47 (2010) 1209-1223. There, the centrality measure is suggested to be related to a choice of metric or semimetric D on the graph. A couple of choices for D yield centrality measures very similar to common measures, and a new "resistive centrality" is noted to result in connection with the choice of D as the "resistance distance" metric.
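A minimal sketch of that power-iteration recipe (the example graph and the tolerance are my own choices):

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
    """Power iteration on the symmetric adjacency matrix A of an
    undirected graph: repeatedly multiply by A and renormalize.
    The i-th entry of the limit vector is the eigenvector
    centrality of vertex i."""
    v = np.ones(A.shape[0])   # weight 1 for every site, as above
    for _ in range(max_iter):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return w

# Triangle 0-1-2 with a pendant vertex 3 attached to vertex 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
c = eigenvector_centrality(A)  # vertex 0 comes out most central
```

(For bipartite graphs the plain iteration can oscillate between two directions; a common fix is to iterate with A + I instead of A.)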
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202911853790283, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/advanced-algebra/6896-homomorphism-print.html
# Homomorphism
• October 26th 2006, 10:38 AM
PauKelome
Homomorphism
Hi, new to the board.
I need help proving that f: Z5 to Z7 has no ring homomorphisms.
Thanks in anticipation.
• October 26th 2006, 04:26 PM
ThePerfectHacker
Quote:
Originally Posted by PauKelome
Hi, new to the board.
I need help proving that f: Z5 to Z7 has no ring homomorphisms.
Thanks in anticipation.
That is never true. Everything always has a "trivial-homomorphism".
Consider,
$f:\mathbb{Z}_5\to \mathbb{Z}_7$
$f(x)=0$
Is indeed a homomorphism.
Perhaps you meant an isomorphism.
• October 26th 2006, 06:08 PM
PauKelome
Yes that is what I mean.
• October 26th 2006, 06:12 PM
ThePerfectHacker
Quote:
Originally Posted by PauKelome
Yes that is what I mean.
Then the answer is trivial.
Since $|\mathbb{Z}_5|<|\mathbb{Z}_7|$
There is no way to create an isomorphism.
• October 27th 2006, 05:50 PM
PauKelome
OK. I don't think I got my question answered correctly. By my definition, if R and S are rings, then a ring homomorphism is a function f: R to S that adheres to the following properties:
f(a+b)=f(a)+f(b) for all a and b in R
f(ab)=f(a)f(b) for all a and b in R
f(1)=1.
So I need to prove that f: Z5 to Z7 and f: Z6 to Z7 do not have ring homomorphisms. Which conditions fail?
Thanks for any further help on this issue. Pau
• October 28th 2006, 02:55 PM
ThePerfectHacker
Quote:
Originally Posted by PauKelome
OK. I don't think I got my question answered correctly. By my definition, if R and S are rings, then a ring homomorphism is a function f: R to S that adheres to the following properties:
f(a+b)=f(a)+f(b) for all a and b in R
f(ab)=f(a)f(b) for all a and b in R
f(1)=1.
So I need to prove that f: Z5 to Z7 and f: Z6 to Z7 do not have ring homomorphisms. Which conditions fail?
Thanks for any further help on this issue. Pau
First, a ring homomorphism is,
$\phi:R\to R'$
Such that,
$\phi(a+b)=\phi(a)+'\phi(b)$
$\phi(ab)=\phi(a)\cdot ' \phi(b)$
It is not required that
$\phi(1)=1'$.
Yes, a ring homomorphism always preserves zero (that follows from additivity), and many authors also require it to preserve unity; but $\phi(1)=1'$ does not follow from the first two statements (the zero map satisfies both), so it is a matter of convention.
---
Now, returning to the question. Again, there does exist a ring homomorphism. The trivial-homomorphism can be made to exist between any two rings or groups.
Define,
$\phi:R\to R'$ as,
$\phi(x)=0$ for all $x\in R$.
Maybe you wish to show that no other homomorphism exists.
• October 28th 2006, 03:13 PM
Plato
This problem usually asks to show that there are no non-trivial homomorphisms between these two rings. One can show that $\phi (nx) = n\phi (x)\quad \& \quad \phi (x^n ) = \left[ {\phi (x)} \right]^n$ is true for any ring homomorphism. Use that to complete your problem.
• October 28th 2006, 03:25 PM
ThePerfectHacker
Quote:
Originally Posted by Plato
This problem usually asks to show that there are no non-trivial homomorphisms between these two rings. One can show that $\phi (nx) = n\phi (x)\quad \& \quad \phi (x^n ) = \left[ {\phi (x)} \right]^n$ is true for any ring homomorphism. Use that to complete your problem.
Assume,
$\phi:\mathbb{Z}_5\to\mathbb{Z}_7$
Is a homomorphism.
Then,
$\ker (\phi)$ is an ideal in $\mathbb{Z}_5$.
Since,
$\mathbb{Z}_5$ is a field it has no proper nontrivial ideals. Thus, $\ker (\phi)=\{0\} \mbox{ or }\mathbb{Z}_5$
If $\ker(\phi)=\{0\}$, then by the fundamental homomorphism theorem,
$\phi[\mathbb{Z}_5]\simeq \mathbb{Z}_5$
But this cannot happen: the image would be an additive subgroup of $\mathbb{Z}_7$ of order $5$, and by Lagrange's theorem $5$ does not divide $7$.
Thus,
$\ker(\phi)=\mathbb{Z}_5$, and so
$\phi[\mathbb{Z}_5]\simeq \{0\}$
Which means the function $\phi$ maps everything into a single element, which must be $\phi(x)=0$.
Which is exactly the trivial homomorphism.
(The full details are left to the reader to finish.)
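The conclusion is small enough to verify by brute force. Since an additive map out of the cyclic group Z_m is determined by the image of 1, checking every candidate c = f(1) is exhaustive (a verification sketch, not from the thread):

```python
def ring_homs(m, n):
    """Return all c in Z_n such that f(x) = (c * x) % n defines a
    (not necessarily unital) ring homomorphism from Z_m to Z_n.
    Every additive homomorphism out of Z_m has this form, so the
    enumeration misses nothing."""
    homs = []
    for c in range(n):
        f = lambda x: (c * x) % n
        additive = all(f((x + y) % m) == (f(x) + f(y)) % n
                       for x in range(m) for y in range(m))
        multiplicative = all(f((x * y) % m) == (f(x) * f(y)) % n
                             for x in range(m) for y in range(m))
        if additive and multiplicative:
            homs.append(c)
    return homs

# Both Z_5 -> Z_7 and Z_6 -> Z_7 admit only the trivial map.
trivial_only = ring_homs(5, 7) == [0] and ring_homs(6, 7) == [0]
```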
http://mathoverflow.net/questions/58299/a-infinity-structure-of-e-infinity-algebras
## A-infinity structure of E-infinity algebras
This is perhaps somewhat related to this question. Fix a field $k$ of characteristic $p>0$. Suppose that $A$ is an $E_\infty$-algebra over $k$. Then $A$ also has an $A_\infty$-algebra structure, and therefore so does its homology $HA$. Its homology is also graded commutative.
I'm looking for an extended version of graded commutativity, a result like this: if $m_n$ is any of the higher multiplications in the $A_\infty$ structure on $HA$, then for any $1 \leq j \leq n$, $$x m_n(a_1 \otimes \cdots \otimes a_n) = \pm m_n(a_1 \otimes \cdots \otimes x a_j \otimes \cdots \otimes a_n).$$ This is certainly true if $n=2$ by graded commutativity. What about for larger values?
(I'm tempted to tag any question about $E_\infty$-algebras as "commutative algebra", but I suppose that would be misleading...)
Edit: as Fernando points out in his comment, this is too much to expect in general. The $A_\infty$-algebra structure on $HA$ is not unique, so perhaps the right question is, are there conditions on $x$ and the $a_i$ so that, for some choice of $m_n$, $x m_n(\dots) = \dots$?
Along with Fernando's example, another one to consider is the mod $p$ cohomology of a cyclic group of order $p$, with $p$ odd. If $x$ is the generator of $H^1$ and $y$ is the generator of $H^2$, then $m_p(x^{\otimes p}) = \pm y$, so $x m_p(x^{\otimes p}) = \pm xy \neq 0$ while I think $m_p(x^2 \otimes x^{\otimes p-1}) = 0$, since $x^2=0$.
-
Is $x\in HA$ any element? If so then $m_n(a_1\otimes\cdots\otimes a_n)= \pm a_1\cdots a_n\cdot m_n(1\otimes\cdots\otimes 1)=0$ if the $A$-infinity structure is normalized (you can always assume this). – Fernando Muro Mar 13 2011 at 14:24
## 1 Answer
as Fernando points out in his comment, this is too much to expect in general. The A∞-algebra structure on HA is not unique
There is a unique $A_\infty$-algebra structure on $H(A)$ such that $H(A)$ and $A$ are weakly equivalent (i.e. $A_\infty$-quasi-isomorphic).
About your question, in characteristic 0 you could replace your $E_\infty$-algebra by a weakly equivalent DG commutative algebra... then on its cohomology you would get a $C_\infty$-structure (which is a strictly commutative $A_\infty$-structure).
In positive characteristic this is more difficult. But there is still the possibility to deal with divided power algebras (see e.g. http://math.univ-lille1.fr/~fresse/PartitionHomology.pdf on page 18 for a hint).
I know this is not really an answer to your question. But I hope it can help.
-
Damien, the structure is not unique. Perhaps you are misled by the fact that it is essentially unique, but there are tons precisely because of that. – Fernando Muro Apr 26 2011 at 12:11
By essentially unique, do you mean that it is unique up to a unique $A_\infty$-isomorphism? If so, then we agree :-) – DamienC Apr 26 2011 at 12:52
@Damien: Yes, exactly. – Fernando Muro Apr 26 2011 at 19:59
The non-uniqueness is important, because $m_n(a_1 \otimes \dots \otimes a_n)$ may have very different values for two different (but isomorphic) $A_\infty$ structures. Anyway, I will look at the citation you mentioned -- I mainly care about the positive characteristic case. – John Palmieri Apr 27 2011 at 20:42
http://unapologetic.wordpress.com/2008/08/18/power-series/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician
## Power Series
Prodded by some comments, I think I’ll go even further afield from linear algebra. It’s a slightly different order than I’d originally thought of, but it will lead to some more explicit examples when we’re back in the realm of linear algebra, so it’s got its own benefits.
I’ll note here in passing that mathematics actually doesn’t proceed in a straight line, despite the impression most people get. The lower-level classes are pretty standard, yes — natural-number arithmetic, fractions, algebra, geometry, calculus, and so on. But at about this point where most people peter out, the subject behaves more like an alluvial fan — many parallel rivulets carry off in different directions, but they’re all ultimately part of the same river. So in that metaphor, I’m pulling a bit of an avulsion.
Anyhow, power series are sort of like polynomials, except that the coefficients don’t have to die out at infinity. That is, when we consider the algebra of polynomials $\mathbb{F}[X]$ as a vector space over $\mathbb{F}$ it’s isomorphic to the infinite direct sum
$\displaystyle\mathbb{F}[X]\cong\bigoplus\limits_{k=0}^\infty\mathbb{F}X^k$
but the algebra of power series — written $\mathbb{F}[[X]]$ — is isomorphic to the infinite direct product
$\displaystyle\mathbb{F}[[X]]\cong\prod\limits_{k=0}^\infty\mathbb{F}X^k$
It’s important to note here that the $X^i$ do not form a basis here, since we can’t write an arbitrary power series as a finite linear combination of them. But really they should behave like a basis, because they capture the behavior of every power series. In particular, if we specify that $\mu(X^m,X^n)=X^{m+n}$ then we have a well-defined multiplication extending that of power series.
I don’t want to do all the fine details right now, but I can at least sketch how this all works out, and how we can adjust our semantics to talk about power series as if the $X^i$ were an honest basis. The core idea is that we’re going to introduce a topology on the space of polynomials.
So what polynomials should be considered “close” to each other? It turns out to make sense to consider those which agree in their lower-degree terms to be close. That is, we should have the space of tails
$\displaystyle\bigoplus\limits_{k=n+1}^\infty\mathbb{F}X^k$
as an open set. More concretely, for every polynomial $p$ with degree $n$ there is an open set $U_p$ consisting of those polynomials $q$ so that $X^{n+1}$ divides the difference $q-p$.
Notice here that any power series defines, by cutting it off after successively higher degree terms, a descending sequence of these open sets. More to the point, it defines a sequence of polynomials. If the power series’ coefficients are zero after some point — if it’s a polynomial itself — then this sequence stops and stays at that polynomial. But if not it never quite settles down to any one point in the space. Doesn’t this look familiar?
Exactly. Earlier we had sequences of rational numbers which didn’t converge to a rational number. Then we completed the topology to give us the real numbers. Well here we’re just doing the same thing! It turns out that the topology above gives a uniform structure to the space of polynomials, and we can complete that uniform structure to give the vector space underlying the algebra of power series.
So here’s the punch line: once we do this, it becomes natural to consider not just linear maps, but continuous linear maps. Now the images of the $X^k$ can’t be used to uniquely specify a linear map, but they will specify at most one value for a continuous linear map! That is, any power series comes with a sequence converging to it — its polynomial truncations — and if we know the values $f(X^k)$ then we have uniquely defined images of each of these polynomial truncations since each one is a finite linear combination. Then continuity tells us that the image of the power series must be the limit of this sequence of images, if the limit exists.
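The truncation picture is easy to play with concretely: keeping coefficients up to a fixed degree, the product of two power series is just the Cauchy product of their truncations (a small sketch; the geometric-series example is my own choice):

```python
def mul_trunc(a, b):
    """Cauchy product of two power series given as coefficient
    lists of the same length; the degree-k coefficient of the
    product is the sum of a[i] * b[k - i] over i = 0..k."""
    n = len(a)
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

# 1/(1 - X) has coefficients 1, 1, 1, ...; multiplying by 1 - X
# recovers 1, up to the truncation degree.
geom = [1] * 8
one_minus_x = [1, -1] + [0] * 6
product = mul_trunc(geom, one_minus_x)  # [1, 0, 0, 0, 0, 0, 0, 0]
```

Raising the truncation degree never changes the coefficients already computed, which is exactly the continuity phenomenon described in the post.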
Posted by John Armstrong | Algebra, Power Series, Ring theory
## 9 Comments »
1. While the introduction of topology here is easy to motivate (just as you explained), it’s also an instance of a general phenomenon, where topology and topological completions are used to extend dualities between structures ‘of finite type’ to more general dualities between “pro-objects” and “ind-objects”. A very famous example (which I am blogging about, very gradually) is Stone duality: there is a duality of finite type between finite sets and finite Boolean algebras, and this is extended to a duality between topological projective limits of finite sets (called Stone spaces) and general inductive limits of finite Boolean algebras, which are general Boolean algebras.
In the present case, we have a perfect duality between finite-dimensional algebras and finite-dimensional coalgebras. Then, on the ind-object side, the space of polynomials $P$ carries a coalgebra structure (with comultiplication $x^n \mapsto \sum_{i + j = n} x^i \otimes x^j$), which can be construed as an inductive limit of finite-dimensional coalgebras.
On the pro-object side, the vector space dual $P^*$ of the space of polynomials is, as you say in the post, the space of formal power series. The algebra structure on $P^*$ can be gotten either purely algebraically, defining the multiplication as a composite
$P^* \otimes P^* \to (P \otimes P)^* \to P^*$
where the first arrow is a canonical map and the second is the transpose of the comultiplication on $P$. Or, in line with the general phenomenon, the multiplication can be defined topologically (just as you have done), and this corresponds to taking a (dual) projective limit of (dual) finite-dimensional algebras, equipped with suitable uniform structures.
Comment by | August 18, 2008 | Reply
2. This is all true, except I haven’t defined the space of power series as the dual of the space of polynomials, though an astute reader could figure that much out from the general yoga of direct products and sums. Even so, it wouldn’t really help, since I haven’t remotely talked about comultiplications.
Still, good points.
Comment by | August 18, 2008 | Reply
3. [...] a little while we’re going to want to talk about “evaluating” a power series like we did when we considered polynomials as functions. But when we try to map into our base [...]
Pingback by | August 26, 2008 | Reply
4. [...] that a power series is like an infinite polynomial. In fact, we introduced a topology so we could see in any power [...]
Pingback by | August 27, 2008 | Reply
5. [...] Series Expansions Up to this point we’ve been talking about power series like , where “power” refers to powers of . This led to us to show that when we evaluate [...]
Pingback by | September 15, 2008 | Reply
6. [...] of Power Series Formally, we defined the product of two power series to be the series you get when you multiply out all the terms and [...]
Pingback by | September 22, 2008 | Reply
7. [...] and there I decided to stop what I was working on about linear algebra. Instead, I set off on power series and how power series expansions can be used to express analytic functions. Then I showed how power [...]
Pingback by | October 16, 2008 | Reply
8. Hello,
I’ve posted in sci.math.research the following question: Consider a power series sum a_n x^n that is convergent for all real x, thus defining a function f: R \to R.
Are there criteria for the a_n to decide whether f is bounded?
Can the topology you defined above give answers to the question how “sparse” the subspace of all bounded power series is within the set of all convergent power series (in terms of cardinality or measure or Baire category or dimensionality)?
Thank you!
Andreas
Comment by Andreas Rüdinger | April 2, 2009 | Reply
9. I can’t say as I know, Andreas. But maybe another commenter around here might?
Comment by | April 2, 2009 | Reply
http://quant.stackexchange.com/questions/7474/fitting-a-non-linear-ar-garch1-1-m-model
# Fitting a non linear AR + GARCH(1,1)-M model
I want to fit the following model to a time series:
$$y_{t}=\alpha_{0}+\alpha_{1}y_{t-1}+\alpha_{2}y_{t-1}^{2}+\lambda h_{t}+\varepsilon_{t}$$
$$h_{t}=\beta_{0}+\beta_{1}\varepsilon_{t-1}^{2}+\beta_{2}h_{t-1}$$
How can I do this with R or with any other statistical software?
Thanks
-
Sorry ... I missed the "non" in "non-linear" and the square ... I will delete my answer, it is not an answer to your question. – Richard Mar 8 at 14:54
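For what it's worth, one generic route (sketched here in Python rather than R; the initialization of h and the starting values below are illustrative choices, not a canned routine) is to write out the Gaussian likelihood of the recursion and optimize it numerically:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, y):
    """Gaussian negative log-likelihood of
        y_t = a0 + a1*y_{t-1} + a2*y_{t-1}**2 + lam*h_t + eps_t
        h_t = b0 + b1*eps_{t-1}**2 + b2*h_{t-1}."""
    a0, a1, a2, lam, b0, b1, b2 = params
    if b0 <= 0 or b1 < 0 or b2 < 0 or b1 + b2 >= 1:
        return np.inf              # keep h_t positive and stationary
    h = float(np.var(y))           # illustrative initialization of h_1
    eps = 0.0
    nll = 0.0
    for t in range(1, len(y)):
        h = b0 + b1 * eps**2 + b2 * h
        mean = a0 + a1 * y[t-1] + a2 * y[t-1]**2 + lam * h
        eps = y[t] - mean
        nll += 0.5 * (np.log(2 * np.pi * h) + eps**2 / h)
    return nll

rng = np.random.default_rng(0)
y = rng.standard_normal(400) * 0.1     # placeholder for the real series
x0 = [0.0, 0.1, 0.0, 0.0, 0.01, 0.1, 0.8]
res = minimize(neg_loglik, x0, args=(y,), method="Nelder-Mead",
               options={"maxiter": 500})
```

The same recursion translates line by line into R with `optim`.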
http://mathhelpforum.com/calculus/46114-finding-minima.html
Thread:
1. Finding minima
Hi,
Suppose we have an $m\times2$ matrix $A$, and we have
$B=\left[ {\begin{array}{*{20}c} {A_{11} - a} & {A_{12} - b} \\ {A_{21} - a} & {A_{22} - b} \\ {\vdots} & {\vdots} \\ {A_{m1} - a} & {A_{m2} - b} \\ \end{array}} \right]$.
My problem is to find the values of $a$ and $b$ such that $trace[(B^TB)^{-1}]$ is minimized,
$[a,b]=\arg\min_{a,b}trace[(B^TB)^{-1}]$.
Since $B^TB$ is a simple $2\times2$ matrix, I get
$trace[(B^TB)^{-1}]=\frac{f_a+f_b}{f_af_b-f_{ab}^2}$
where
$f_a = \sum_{i=1}^{m}(A_{i1}-a)^2$
$f_b = \sum_{i=1}^{m}(A_{i2}-b)^2$
$f_{ab} = \sum_{i=1}^{m}[(A_{i1}-a)(A_{i2}-b)]$
The maximum of $\frac{f_a+f_b}{f_af_b-f_{ab}^2}$ is easy to get, but who can help me with finding the minimum? I have been stuck for quite a while.....
Anyway, thanks a lot in advance,
Creed
2. up...
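Since no closed form was offered in the thread, a numerical sketch (random data and a starting point of my own choosing) can at least locate the minimizer for a given A:

```python
import numpy as np
from scipy.optimize import minimize

def trace_inv_gram(p, A):
    """trace[(B^T B)^{-1}] where B is the m-by-2 matrix obtained by
    subtracting a from the first column of A and b from the second."""
    a, b = p
    B = A - np.array([a, b])
    return np.trace(np.linalg.inv(B.T @ B))

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 2))          # placeholder data
start = A.mean(axis=0) + 0.1              # start near the column means
res = minimize(trace_inv_gram, start, args=(A,), method="Nelder-Mead")
```

One can then compare `res.x` against candidate closed-form guesses, such as the column means of A.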
http://math.stackexchange.com/questions/61547/rotation-of-a-vector-distribution-to-align-with-a-normal-vector
Rotation of a vector distribution to align with a normal vector
I generate a distribution of random points on a unit hemisphere whose pole is on the positive z-axis (the base lies in the x-y plane). Each point represents a directional vector $v$ in which a ray will be fired.
I wish to rotate this distribution of points to align the 'hemisphere' with a vector which is the surface normal vector ($N$) of a given surface.
Currently I complete this by converting the cartesian coordinates of $N$ to spherical coordinates as follows:
$\theta = \arctan(N_y/N_x)$
$\phi = \arccos(N_z)$
I then form a rotation matrix for rotating around the z and x axis and use it to transform the original vector $v$ to get my desired vector $v'$. $v'$ will now lie on a hemisphere which is aligned with the surface normal $N$.
Is there a better way to complete this operation without having to calculate the spherical coordinates of the normal vector, then compose a rotation matrix and perform a matrix multiplication? This function is used in a tight loop of some computationally expensive code, so it would be ideal to perform some fundamental optimisation.
Perhaps some properties of vectors could be used; it would be most beneficial if I could avoid having to calculate $\arccos$ and $\arctan$.
-
1. It might be better to use the two-argument arctangent for your application. 2. You do know that $\sin(\arctan(x,y))=\frac{y}{\sqrt{x^2+y^2}}$ and $\cos(\arctan(x,y))=\frac{x}{\sqrt{x^2+y^2}}$, don't you? – J. M. Sep 3 '11 at 8:58
I do use the two-argument arctan; however, I could use $\sin(\arctan(x,y))=\frac{y}{\sqrt{x^2+y^2}}$ to calculate the rotation matrix directly rather than calculating the angle from arctan and then the matrix coefficient using sin. – cubiclewar Sep 3 '11 at 10:04
Note that the convention of your computing environment might be backwards from the convention I use. The `atan2()` in your system might have the `y` first and the `x` last. – J. M. Sep 3 '11 at 10:07
1 Answer
Yes, you can do this without any trigonometric functions, and even without any square roots. Take a look at Rodrigues' formula for a rotation matrix given an angle $\theta$ and an axis along a unit vector $k$:
$$R=I\cos\theta+\sin\theta[k]_\times+(1-\cos\theta)kk^{\text T}\;,$$
where $I$ is the identity matrix and
$$[k]_\times=\begin{pmatrix}0&-k_z&k_y\\k_z&0&-k_x\\-k_y&k_x&0\end{pmatrix}\;.$$
You have not an axis and an angle, but a vector $N$ that you want to rotate the $z$ axis into. The axis should be perpendicular to this vector and the $z$ axis, and the angle should be the angle between this vector and the $z$ axis. Since the $z$ component of your vector is the cosine of the angle it forms with the $z$ axis, you already have $\cos\theta=N_z$. Now consider
$$N_\perp=\begin{pmatrix}0\\0\\1\end{pmatrix}\times N=\begin{pmatrix}-N_y\\N_x\\0\end{pmatrix}\;.$$
This vector is perpendicular to both $e_z$ and $N$, so its direction is along the desired rotation axis. Its magnitude is $\sin\theta$. So
$$\begin{eqnarray} N_\perp&=&\sin\theta k\;,\\ [N_\perp]_\times&=&\sin\theta[k]_\times\;,\\ \end{eqnarray}$$
where $k$ is a unit vector along the rotation axis. Now we have most of the ingredients of Rodrigues' formula. The only problem left is that we don't have $kk^{\text T}$ but $\sin^2\theta kk^{\text T}$. But $\sin^2\theta=1-\cos^2\theta$, and we have $\cos\theta$, so we can correct for that using only elementary arithmetic. Putting it all together, we have
$$\begin{eqnarray} R &=& I\cos\theta+[N_\perp]_\times+(1-\cos\theta)N_\perp N_\perp^{\text T}/(1-\cos^2\theta) \\ &=& I\cos\theta+[N_\perp]_\times+N_\perp N_\perp^{\text T}/(1+\cos\theta) \\ &=& IN_z+ \begin{pmatrix}0&0&N_x\\0&0&N_y\\-N_x&-N_y&0\end{pmatrix} +\frac1{1+N_z} \begin{pmatrix}N_y^2&-N_xN_y&0\\-N_xN_y&N_x^2&0\\0&0&0\end{pmatrix} \\ &=& \begin{pmatrix}N_z&0&N_x\\0&N_z&N_y\\-N_x&-N_y&N_z\end{pmatrix} +\frac1{1+N_z} \begin{pmatrix}N_y^2&-N_xN_y&0\\-N_xN_y&N_x^2&0\\0&0&0\end{pmatrix} \;. \end{eqnarray}$$
You can check that this is an orthogonal matrix that rotates a unit vector along the $z$ axis into $N$. If you organize the operations efficiently, you only need two negations, three additions, one division and five multiplications to calculate the rotation matrix.
Note that the result becomes undefined at $N_z=-1$, corresponding to the fact that there's no preferred choice of axis in this case. You can either arbitrarily choose an axis specifically for that case, e.g. the $x$ axis, or, since this might also cause numerical problems near $N_z=-1$, you can instead solve the problem for $-N_z$ whenever $N_z<0$, and then invert the resulting vectors. This will produce a mirror image of your distribution, but I'd assume that this is rotationally invariant around the $z$ axis anyway, and in that case the inversion wouldn't make a difference.
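The final expression drops straight into code; here is a short check (the test vector is chosen for this sketch) that the matrix is orthogonal and carries the unit z-vector to N:

```python
import numpy as np

def rotation_to(N):
    """Rotation matrix taking the unit z-vector to the unit vector
    N, via the closed form derived above (valid for N[2] != -1)."""
    Nx, Ny, Nz = N
    R = np.array([[Nz, 0.0, Nx],
                  [0.0, Nz, Ny],
                  [-Nx, -Ny, Nz]])
    R += np.array([[Ny * Ny, -Nx * Ny, 0.0],
                   [-Nx * Ny, Nx * Nx, 0.0],
                   [0.0, 0.0, 0.0]]) / (1.0 + Nz)
    return R

N = np.array([0.3, -0.4, np.sqrt(0.75)])   # a unit vector
R = rotation_to(N)                          # R @ [0, 0, 1] equals N
```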
-
Wow, thank you very much; from omitting the trigonometric functions I reduced my run time by around 40%. After a few days of searching for a method to simplify this I didn't really come up with anything. A good mathematician will save you every time! – cubiclewar Sep 3 '11 at 10:28
+1. This is a great answer. – Geoff Mar 18 at 19:21
http://mathoverflow.net/questions/87557?sort=newest
A simplicial complex which is not collapsible, but whose barycentric subdivision is
Does anyone know of a simplicial complex which is not collapsible but whose barycentric subdivision is?
Every collapsible complex is necessarily contractible, and subdivision preserves the topological structure, so we are certainly looking for a complex which is contractible, but not collapsible. The only complexes I know of which are contractible but not collapsible are the dunce cap and Bing's house with two rooms. Neither of these has any free faces, and so no iterated subdivision will result in a collapsible complex.
-
Main Theorem 5 of arxiv.org/pdf/1107.5789.pdf states that if a $d$-complex admits a geometric realization as a convex subset of $\mathbb{R}^d$, then its $(d-2)$-nd barycentric subdivision is collapsible. – Richard Stanley Feb 5 2012 at 3:08
1 Answer
Lickorish and Martin constructed, for each $r$, a triangulation of the $3$-ball whose $r$th barycentric subdivision collapses, but $(r-1)$th doesn't. The basic idea, going back to Furch and Bing, is to triangulate a cube with a knotted hole, where the missing knot has bridge index $2^r+1$, and then fill back a small part of the hole - so that topologically, no hole remains, but the $3$-ball now contains a knot triangulated by a single edge.
Added later: Kearton and Lickorish also constructed triangulations of the $n$-ball, $n\ge 3$, whose $r$th barycentric subdivision is not collapsible. On the other hand, every triangulation of a ball becomes collapsible after some number of barycentric subdivisions, according to a recent preprint by Adiprasito and Benedetti (see their Corollary 3.5).
-
... and it was known from the very beginning that every triangulation of a ball has a stellar subdivision that is collapsible (Theorem 7 in Simplicial spaces, nuclei, and $m$-groups). Whitehead's papers are always an inspiring reading. maths.ed.ac.uk/~aar/papers/jhcw9.pdf – Sergey Melikhov Jun 21 at 0:26
http://physics.stackexchange.com/questions/22066/are-there-more-bosons-or-fermions-in-the-universe/22068
# Are there more bosons or fermions in the universe?
The question is in the title: are there more bosons or fermions in the universe? Or is there the same number of bosons and fermions?
I think there is the same number but I don't know why exactly.
-
I don't think you can quantify it: there are virtual particles as well making both numbers $\infty$. But I think the answer lies in how many photons are given off in a single fusion reaction in a typical star. – Manishearth♦ Mar 8 '12 at 11:14
Knowing the average star color and average energy per reaction; I guess it could be calculated. – Manishearth♦ Mar 8 '12 at 11:15
Hmm, there are a lot of neutrinos out there ... – John Rennie Mar 8 '12 at 11:33
@JohnRennie aah yes we'd have to include those in our calculations. I could Google the data, but I'm no astrophysicist so I don't know how applicable the average values are(if they exist). – Manishearth♦ Mar 8 '12 at 12:04
Do you mean fundamental bosons and fermions, or can they be composite? For example, mesons are composite but photons are elementary. – Phil H Mar 8 '12 at 12:08
## 4 Answers
Arnold Neumaier mentioned "soft photons" and the "infrared problem"; this bears elaboration. Soft photons are photons with very very low energy, for example a photon can have a period of one year, a wavelength of one lightyear, and an energy of 1E-22 eV. These photons are unobservable by any means. In traditional quantum field theory, it is predicted that every time two electrons repel each other (for example), an infinite number of soft photons are created ("the infrared divergence problem"). This is not regarded as much of a "problem" because the total energy of the infinite number of photons is finite, as the energy of each photon can be arbitrarily small. The vast quantities of soft photons have no observable effect or consequence.
In reality, I imagine that the number of soft photons created by a scattering event is not (strictly speaking) infinite. For example maybe there's no such thing as a photon with wavelength larger than the visible universe. That's just a guess, I don't know. But even if there's some cutoff like that, I would certainly bet that the number of soft photons (and soft gravitons) is vastly more than the number of all other particles in the universe combined. Photons and gravitons are massless, so they can have arbitrarily low energy, which is not true of any other particle because of its nonzero rest-mass energy.
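As a quick back-of-envelope check of the figure quoted above (my own script, not part of the original answer): a photon with a period of one year, hence a wavelength of one light-year, carries $E = hc/\lambda$ of roughly $10^{-22}$ eV.

```python
# Energy of a photon with wavelength = 1 light-year, E = h c / lambda.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electron-volt

lightyear = c * 365.25 * 24 * 3600   # metres in one Julian year
E_eV = h * c / lightyear / eV
print(E_eV)             # ~1.3e-22 eV, consistent with the 1E-22 eV quoted
```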
Since photons and gravitons are both bosons, I would say "more bosons than fermions".
-
If one of the eigenvalues of the neutrino mass matrix is zero (which I believe is consistent with experiment) then neutrinos also have an infrared problem, and exist in infinite amounts. – Arnold Neumaier Mar 8 '12 at 15:06
The answer is no.
Neither the number of bosons nor the number of fermions nor their difference is conserved in nature. This means that these numbers change all the time.
Should they be equal by coincidence for some time, this would last only for a moment. Even such a coincidence would be extremely unlikely, given the huge number of particles in the universe.
Edit: I had added in the discussion to other answers that due to the infrared problem, the number of each kind of massless particles (in particular that of photons, but possibly also that of one species of neutrinos) is infinite.
This makes the answer to the question somewhat trivial: If there is a massless species of neutrinos (still a possibility - we now only know that not all neutrinos are massless), the answer would be yes, though not for an interesting reason, as both numbers are countably infinite and hence equal. On the other hand, if all neutrinos are massive, the answer would be no, as there are then infinitely many bosons but only finitely many fermions.
The real answer to the question is that counting particles makes no sense; particle number is not a physically relevant observable, and tells nothing interesting about the universe. (Except perhaps an entry in the Guinness Book of Records - but isn't the universe extremal in every respect?)
-
Broadly fermions are associated with matter, and bosons are associated with exchange (force mediation), at least if we are considering elementary particles. A given matter particle will produce and absorb many many exchange particles in its lifetime, so we can guess that there are more bosons than fermions.
It is not, however, fixed. A photon can be converted to an electron/positron pair, which would reduce the number of bosons by 1 and increase the number of fermions by 2. So at different times and places, the balance will be different, including at different stages of the evolution of the universe.
Immediately after the Big Bang, it is theorised that the universe was so small and dense that only photons existed; any matter particles created would be immediately obliterated. However, as the universe cooled, matter particles could survive long enough to form structures, until we have the cooler universe we see today. Perhaps, if the universe survived long enough and expanded far enough, particles would be so far apart that they would experience very little force and the balance would be in favour of fermions. But unless something changes significantly, the structures of matter - atoms, nuclei, crystals - are all stable and involve continual exchange of bosonic particles to mediate the forces that maintain them.
Finally, remember that at quantum scales everything is probabilistic; we cannot say that a photon exists or not in the classical sense until we measure its effect, so there can be many virtual particles with some probability that we do not measure for every particle that we do. For some fairly involved discussion of it, see Quantum Electro-Dynamics, and the electron self-interaction problem. It involves all the possible paths of electrons interacting with themselves, and held up fundamental quantum field theory for quite a while.
-
The question may be unanswerable because I'm pretty sure the number of photons is variable depending on your relativistic frame. At least that was an interesting observation that Feynman made, almost apologetically, in some of his early work; I'd have to look up the specific reference. (Hmm. Feynman's idea there would require variable-count photons to be generated in spin-cancelling pairs...)
-
The number of soft photons in a scattering event is in fact infinite due to the infrared problem. Thus the question is, strictly speaking, meaningless. – Arnold Neumaier Mar 8 '12 at 13:31
http://math.stackexchange.com/questions/12766/proof-that-if-group-n-and-g-n-are-p-groups-than-g-is-p-group
# Proof that if $N$ and $G/N$ are $p$-groups, then $G$ is a $p$-group
I have another question about the correctness of my proof. We want to show that if $N$ and $G/N$ are $p$-groups, then $G$ is a $p$-group.
Proof: $|N|=p^m$ and thus $|G|=p^m q$. Suppose that $(p^m,q)=1$; then in $G$ there exists an element of order $k$ such that $k \mid q$ and $(k,p^m)=1$ - call it $b \in G$. The element $bN \in G/N$ also has order $k$, and thus $k \mid \left| G/N \right|$, which contradicts the hypothesis that $G/N$ is a $p$-group.
Is it correct? Is there a simpler way to prove it? Thanks.
-
|G| = |N| |G/N|. – Qiaochu Yuan Dec 2 '10 at 17:25
It's true that $(bN)^k=N$, but it's possible that the order of bN is a divisor of $k$ and not $k$ itself. In particular, you need to deal with the case $bN=N$. – Grumpy Parsnip Dec 2 '10 at 17:27
If b is in G, n is the smallest integer such that b^n is in N, and (b^n) has order m, then ord(b) = nm. – Steve D Dec 2 '10 at 17:30
3
this assumes that the order of G is finite. If that is the case then Qiaochu's solution is best. – Sean Tilson Dec 3 '10 at 1:39
## 1 Answer
There is a slight mistake in your argument: a priori, you don't know if the order of $bN$ in $G/N$ is exactly $k$, but you do know that it is of order dividing $k$, hence relatively prime to $p$. In order to show that the order cannot be $1$, you need to use the fact that $N$ itself is a $p$-group (did you notice that you never used it?) so that $b\notin N$. Once you know that, then your contradiction would follow.
As to a simpler way, it depends on your definition of $p$-group! If to you a $p$-group is a group whose order is a power of $p$, then the very simplest way is simply to remember that for any group $G$ and any subgroups $H\subseteq K\subseteq G$, you have $[G:H]=[G:K][K:H]$ (cardinal multiplication in the case of infinite indices if necessary). So $|G|=[G:1]=[G:N][N:1] = |G/N||N|$ is a power of $p$, hence $G$ is a $p$-group.
However, there are other meanings of $p$-group: some authors define $G$ to be a $p$-group if for every $g\in G$ there exists $k\gt 0$ such that $g^{p^k}=1$ (that is, the order of every element is a power of $p$). Lagrange's and Cauchy's Theorems tell you that for finite groups, the two definitions coincide. But there are infinite groups that satisfy the second meaning (but obviously not the first); the Prüfer $p$-group, for example.
How do we prove the statement under this definition, for possibly infinite groups (it's also true)? Suppose $N$ is a $p$-group and normal in $G$, and $G/N$ is a $p$-group. Let $g\in G$. We want to show that the order of $g$ is some power of $p$. Look at $gN$ in $G/N$: since $G/N$ is a $p$-group by hypothesis, then there exists $k\gt 0$ such that $(gN)^{p^k} = eN$; that is, $g^{p^k}N = eN$. That means that $g^{p^k}\in N$. Now use the fact that $N$ is itself a $p$-group (under this definition) to deduce that the order of $g$ must be a power of $p$.
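As a tiny numerical illustration of the order bookkeeping $|G| = |N|\,|G/N|$ used above (my own sketch, not part of the answer; the choice $G = \mathbb{Z}/27$, $N = \langle 9 \rangle$ is arbitrary):

```python
# G = Z/27 under addition mod 27, N = the subgroup generated by 9.
p = 3
G = set(range(27))
N = {0, 9, 18}                                   # <9> inside Z/27
cosets = {frozenset((g + n) % 27 for n in N) for g in G}

# |G| = |N| * |G/N|, and every factor is a power of p = 3.
print(len(G), len(N), len(cosets))               # 27 3 9
```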
-
http://mathhelpforum.com/calculus/128463-diving-area-into-two-equal-regions.html
Thread:
1. dividing an area into two equal regions
So, the graph is $y=4-x^{2}$ and it is to be looked at in the first quadrant. In order to find the total area, I drew a line in the graph so that it would be separated into a triangle and a semi-circle. using Area of a Circle formula (divided by 2 since its a semi-circle), I got 2pi. I added that to the Area of a triangle formula and got 10.283. Now, is there an equation I would use to find the vertical line that cuts the region into two smaller regions whose areas are equal?
2. Originally Posted by TheMathTham
So, the graph is $y=4-x^{2}$ and it is to be looked at in the first quadrant. In order to find the total area, I drew a line in the graph so that it would be separated into a triangle and a semi-circle. using Area of a Circle formula (divided by 2 since its a semi-circle), I got 2pi. I added that to the Area of a triangle formula and got 10.283. Now, is there an equation I would use to find the vertical line that cuts the region into two smaller regions whose areas are equal?
a parabola does not separate into those two geometric figures.
$\textcolor{blue}{\int_0^k (4-x^2)\,dx = \int_k^2 (4-x^2)\,dx}$
3. Originally Posted by skeeter
a parabola does not separate into those two geometric figures.
$\textcolor{blue}{\int_0^k (4-x^2)\,dx = \int_k^2 (4-x^2)\,dx}$
ah ok. good piece of information to know.
So I found that $\int_{0}^{2}(4-x^{2})\,dx$ was equal to 5.333. So by dividing by 2, I found that 2.667 was the area of each region. Then I set up $F(a_{1})-F(b_{1})=F(a_{2})-F(b_{2})$.
$(4k-\tfrac{1}{3}k^{3})-(0)=(5\tfrac{1}{3})-(4k-\tfrac{1}{3}k^{3})$. Which means that $8k-\tfrac{2}{3}k^{3}=5\tfrac{1}{3}$.
How do I solve it from there?
4. $\int_0^k (4-x^2)\,dx = \frac{8}{3}$
$\left[4x - \frac{x^3}{3}\right]_0^k = \frac{8}{3}$
$4k - \frac{k^3}{3} = \frac{8}{3}$
$12k - k^3 = 8$
$0 = k^3 - 12k + 8$
using a calculator ... $k \approx 0.6946$
5. Originally Posted by skeeter
$\int_0^k (4-x^2)\,dx = \frac{8}{3}$
$\left[4x - \frac{x^3}{3}\right]_0^k = \frac{8}{3}$
$4k - \frac{k^3}{3} = \frac{8}{3}$
$12k - k^3 = 8$
$0 = k^3 - 12k + 8$
using a calculator ... $k \approx 0.6946$
Is there a way to find that out non-graphically?
6. Originally Posted by TheMathTham
Is there a way to find that out non-graphically?
sure ...
The "Cubic Formula"
... if you're ready to do the extensive algebra required.
7. Originally Posted by skeeter
sure ...
The "Cubic Formula"
... if you're ready to do the extensive algebra required.
...good ol' graphs
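For the record, there is a non-graphical middle ground between a calculator and the full cubic formula: a few Newton iterations on skeeter's cubic $k^3 - 12k + 8 = 0$ recover $k \approx 0.6946$ (my own sketch, not from the thread):

```python
def newton(f, df, x, tol=1e-12):
    """Newton's method: repeat x <- x - f(x)/df(x) until the step is tiny."""
    for _ in range(100):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda k: k**3 - 12*k + 8
df = lambda k: 3*k**2 - 12

k = newton(f, df, 1.0)    # start near the root that lies in (0, 2)
print(round(k, 4))        # 0.6946
```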
http://math.stackexchange.com/questions/153564/is-it-true-that-for-matrices-where-all-entry-are-lower-than-1-determinant-is-lo/153582
# Is it true that for matrices where all entries are less than 1, the determinant is less than 1 as well?
## Generic square matrix with nonnegative entries below 1
Consider a matrix $A=(a_{i,j})$ where $0 \leq a_{i,j} < 1$ for all $i,j$. It is important that all entries are nonnegative and strictly less than 1.
## Rows sum to a number lower than 1
Suppose that the entries in each row of $A$ sum to less than 1: $\sum_{j=1}^{n}a_{i,j} < 1$ for every $i$. (Sorry, maybe I did not specify it before and only wrote it in the formula: the condition is on the rows. Each row sums to a number less than 1.)
## Determinant...
Let us consider $\det(A)$ (determinant).
Is it true that $\det(A)<1$?
Or maybe that $|\det(A)| < 1$?
-
Perhaps you want to put absolute value signs around everything? Otherwise, consider $\begin{bmatrix}-100 & 0 \\ 0 & -100\end{bmatrix}$. – Rahul Narain Jun 4 '12 at 2:09
Yeah, all entries are positive. Please forgive me... – Andry Jun 4 '12 at 2:13
## 2 Answers
EDIT: Taking into account the condition that the sum of the entries be less than one, the determinant is a sum of $n!$ terms, each of which is at most $n^{-n}$, so the determinant is bounded by $n!/n^n$, which is certainly less than 1 (for $n\gt1$). Each term is less than $n^{-n}$ because it's a product of $n$ numbers that add up to less than 1, and you maximize the product by taking all the numbers equal to $1/n$.
(Never mind --- I just saw the part about the sum of all the entries, or maybe it's the sum of all the entries in each row, being less than 1.)
Are we only talking about $2\times2$ matrices? If not, then $$\pmatrix{a&b&0\cr0&c&d\cr e&0&f\cr}$$ will have determinant $acf+bde$ which can certainly exceed 1 even if all the variables stand for numbers between zero and one.
-
I am talking about the sum of row entries... In your case, did you take this into account in your explanation? Sorry, I am still reading; I want to be sure we are talking about the same thing. In my case I consider the sum of the entries in each row, not of all elements in the matrix – Andry Jun 4 '12 at 2:52
I edited my question because I specified the condition on rows only in the formula and not in words... It could be misleading. Very sorry for my carelessness – Andry Jun 4 '12 at 2:54
Why don't you go away and think about your question for a couple of days and come back when you're able to put into writing the actual question you want to ask instead of something with lots of conditions missing or incorrectly stated? While you're at it, look up the Hadamard bound on determinants, it might answer your question, depending, of course, on what the heck your question might really be. – Gerry Myerson Jun 4 '12 at 3:07
I can understand that due to my carelessness you had to go through some troubles in understanding what I was looking for. Actually the question is the one here now, no more edits. I simply had in my mind the matrix structure but failed in explaining it and providing good details. I apologized. This being said, I do not think to deserve your bad words as there are many other members in this community behaving really bad towards those who answer their questions. You could simply say to pay more attention, it would have been "more professional". – Andry Jun 4 '12 at 3:37
Each term in the sum isn't necessarily bounded by $n^{-n}$, take a constant multiple of the identity matrix for example.. making all terms ${1 \over n}$ actually minimizes things as the determinant becomes zero. – Zarrax Jun 4 '12 at 4:59
show 1 more comment
Note that $\sum_j a_{ij}^2 < \sum_j a_{ij} < 1$. So the magnitude of each row, viewed as a vector in ${\mathbb R}^n$, is less than one. The absolute value of the determinant of $A$ is the volume of the parallelepiped spanned by the rows, which is at most the product of the magnitudes of the row vectors, and therefore is less than one in this case.
If you want to do it algebraically, you can prove it by induction on the dimension, the $1$ by $1$ case being trivial. Then you can do a cofactor expansion along any $i$th row, getting that $$\det(A) = \sum_j (-1)^{i + j} a_{ij} \,\det(A_{ij})$$ Note that each matrix $A_{ij}$ also satisfies the conditions of the problem, so each $|\det(A_{ij})| < 1$ by the induction hypothesis. You then get $$|\det(A)| \leq \sum_j |a_{ij}|\,|\det(A_{ij})| < \sum_j |a_{ij}| < 1$$
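A quick empirical check of both arguments (my own sketch, not from the answer): for random matrices whose rows each sum to less than 1, the Hadamard-style bound (the product of the row norms) is itself below 1 and dominates $|\det A|$.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    n = int(rng.integers(2, 6))
    A = rng.random((n, n))
    A /= A.sum(axis=1, keepdims=True) + 0.01       # every row now sums to < 1
    hadamard = np.prod(np.linalg.norm(A, axis=1))  # product of row 2-norms
    d = abs(np.linalg.det(A))
    assert d <= hadamard + 1e-12 and hadamard < 1.0
print("ok: |det| <= product of row norms < 1 in every trial")
```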
-
I think you'll find that what you are using in the first paragraph is what's commonly referred to as the Hadamard bound. – Gerry Myerson Jun 4 '12 at 6:04
http://mathoverflow.net/revisions/5247/list
## Return to Answer
3 edited body
I believe the question you meant to ask in (2) is: For $S$ a surface, is there some theorem like Castelnuovo positivity, regarding the $4$-fold $S \times S$? The answer to this question is "There is an analogous theorem, called the Hodge index theorem, but it is more complicated."
Let me explain what the Hodge index theorem says. Let $X$ be a smooth, algebraic variety over $\mathbb{C}$, with a specified projective embedding. For the purposes of the Riemann hypothesis, you would want to be working over a field of finite characteristic instead, but many of the things I want to say are much more subtle, and are conjectures rather than theorems, in finite characteristic. You should think of $X$ as $S \times S$, where $S$ is the variety for which you want to prove the Riemann hypothesis.
The cohomology $H^k(X, \mathbb{C})$ breaks up in the Hodge decomposition $H^k = \bigoplus_{p+q=k} H^{p,q}$. For the purposes of the Riemann hypothesis, we only care about $H^{m,m}$, so I'll limit my discussion to that case. Now, our specified projective embedding $X \to \mathbb{P}^N$ gives us a map in cohomology in the other direction. $H^2(\mathbb{P}^N)$ is one dimensional and has a standard choice of generator called the hyperplane class; let $\omega$ be the image of this generator in $H^*(X)$. It turns out that $\omega$ lands in $H^{1,1}$.
Cupping with $\omega$ maps $H^{(m-1), (m-1)}$ to $H^{m,m}$. The hard Lefschetz theorem states that this map is injective when $m \leq \dim X$. Let's assume we're in this case, the other is related to this one by Poincare duality. So, $H^{m,m}$ has a filtration as $$H^{m,m} \supset \omega H^{(m-1), (m-1)} \supset \omega^2 H^{(m-2), (m-2)} \supset \cdots.$$ Abbreviate this as $$L^m \supset L^{m-1} \supset \cdots L^1 \supset L^0.$$
Define an inner product on $H^{m,m}$ by $$\langle f,g \rangle = \int \omega^{\dim X-2m} f g.$$ The Hodge index theorem says (in part) that this will be positive definite on $L^0$, negative definite on the orthogonal complement of $L^0$ within $L^1$, positive definite on the orthogonal complement of $L^1$ within $L^2$, and so forth. Let $M^i$ be the orthogonal complement of $L^{i-1}$ in $L^i$. (Not sure of the standard nomenclature here.) The case of $H^{1,1}$ of a surface is particularly easy, because $M^0$ is one-dimensional, spanned by $\omega$, and $M^1$ is the orthognonal complement of $M^0$.
It is relatively easy to prove Castelnuovo positivity from the Hodge index theorem for surfaces; see, for example Hartshorne exercise V.1.9. I have not seen anyone write out an analogue of Castelnuovo positivity for $S \times S$ when $S$ is higher dimensional. However, it is known how to adapt Weil's proof of the Riemann hypothesis to higher dimensional $S$, if one had an analogue of the Hodge index theorem for $S \times S$ in characteristic $p$. I've been told that a good reference for this is Kleiman's Algebraic Cycles and the Weil Conjectures but I have not read this myself.
Let me explain why it is difficult to extend the Hodge index theorem to finite characteristic. If $X$ is defined in characteristic $p$, then $H^k(X)$ must be interpreted as cohomology with coefficients in $\mathbb{Q}_{\ell}$ (or, nowadays, $\mathbb{Q}_p$). Since these fields aren't ordered, we can't talk about positive definiteness.
Weil dodges this obstacle by talking about the vector space of algebraic cycles. This is the $\mathbb{Q}$-vector space spanned by algebraic cycles, which I'll denote $A^{m}$. In characteristic $0$, it is a subspace of $H^{2m}(X, \mathbb{Q}) \cap H^{m,m}(X, \mathbb{C})$. The Hodge conjecture says that it is precisely this subspace. In any characteristic, we have a map $$A^m \to H^{2m}.$$ This map is either known or conjectured to be an injection, depending on exactly how you define $A^m$. Let's assume that it is an injection. The inner product $\langle, \rangle$ is $\mathbb{Q}$-valued on $A^m$, so it makes sense to talk about its signature restricted to subspaces of $A^m$.
I'm going to make a secret switch of notation here, and use $L^i$ and $M^i$ to now refer to constructions in $H^{2m}$ rather than $H^{m,m}$. In the end, we'll be interested in things like $A^m \cap L^i$ which, in characteristic $0$, would live in $H^{m,m}$ anyway. By making this switch, I avoid having to explain how the Hodge decomposition works (and doesn't) in characteristic $p$.
In the case where $X$ is a surface, the generator $\omega$ of $M^0$ lies in $A^m$. One can use this to show that $$A^m = (A^m \cap M^0) \oplus (A^m \cap M^1).$$ The analogue of the Hodge index theorem then says that $\langle, \rangle$ is positive definite on $A^m \cap M^0$ and negative definite on $A^m \cap M^1$.
In all higher dimensional cases, this falls apart. It is (I believe) not known that $\langle, \rangle$ is nondegenerate on $L^i$, so it is not known that we can define the $M^i$. It is certainly not known that $$A^m = \bigoplus (A^m \cap M^i).$$ And it is not known that $(-1)^i \langle, \rangle$ restricted to $A^m \cap M^i$ is positive definite. Grothendieck's standard conjectures assert that all of this works. This is a major, and challenging, field of research.
I'll close by mentioning a challenge that is more suited to a combinatorial algebraic geometer like me. Harry Tamvakis told me that he tried, and failed, to prove the hard Lefschetz and Hodge index theorems for grassmannians by brute force. Here the cohomology ring is given by well known formulas, so the difficulties are all combinatorial. I can't say this is an important problem, but it sounds fun.
2 added 79 characters in body
I believe the question you meant to ask in (2) is: For $S$ a surface, is there some theorem like the Castelnouvo positivity, regarding the $4$-fold $S \times S$? The answer to this question is "There is an analogous theorem, called the Hodge index theorem, but it is more complicated."
Let me explain what the Hodge index theorem says. Let $X$ be a smooth, algebraic variety over $\mathbb{C}$, with a specified projective embedding. For the purposes of the Riemann hypothesis, you would want to be working over a field of finite characteristic instead, but many of the things I want to say are much more subtle, and are conjectures rather than theorems, in finite characteristic. You should think of $X$ as $S \times S$, where $S$ is the variety for which you want to prove the Riemann hypothesis.
The cohomology $H^k(X, \mathbb{C})$ breaks up in the Hodge decomposition $H^k = \bigoplus_{p+q=k} H^{p,q}$. For the purposes of the Riemann hypothesis, we only care about $H^{m,m}$, so I'll limit my discussion to that case. Now, our specified projective embedding $X \to \mathbb{P}^N$ gives us a map in cohomology in the other direction. $H^2(\mathbb{P}^N)$ is one dimensional and has a standard choice of generator called the hyperplane class; let $\omega$ be the image of this generator in $H^*(X)$. It turns out that $\omega$ lands in $H^{1,1}$.
Cupping with $\omega$ maps $H^{(m-1), (m-1)}$ to $H^{m,m}$. The hard Lefschetz theorem states that this map is injective when $m \leq \dim X$. Let's assume we're in this case, the other is related to this one by Poincare duality. So, $H^{m,m}$ has a filtration as $$H^{m,m} \supset \omega H^{(m-1), (m-1)} \supset \omega^2 H^{(m-2), (m-2)} \supset \cdots.$$ Abbreviate this as $$L^m \supset L^{m-1} \supset \cdots L^1 \supset L^0.$$
I believe the question you meant to ask in (2) is: For $S$ a surface, is there some theorem like Castelnuovo positivity, regarding the $4$-fold $S \times S$? The answer to this question is "There is an analogous theorem, called the Hodge index theorem, but it is more complicated."
Let me explain what the Hodge index theorem says. Let $X$ be a smooth, algebraic variety over $\mathbb{C}$, with a specified projective embedding. For the purposes of the Riemann hypothesis, you would want to be working over a field of finite characteristic instead, but many of the things I want to say are much more subtle, and are conjectures rather than theorems, in finite characteristic. You should think of $X$ as $S \times S$, where $S$ is the variety for which you want to prove the Riemann hypothesis.
The cohomology $H^k(X, \mathbb{C})$ breaks up in the Hodge decomposition $H^k = \bigoplus_{p+q=k} H^{p,q}$. For the purposes of the Riemann hypothesis, we only care about $H^{m,m}$, so I'll limit my discussion to that case. Now, our specified projective embedding $X \to \mathbb{P}^N$ gives us a map in cohomology in the other direction. $H^2(\mathbb{P}^N)$ is one dimensional and has a standard choice of generator called the hyperplane class; let $\omega$ be the image of this generator in $H^*(X)$. It turns out that $\omega$ lands in $H^{1,1}$.
Cupping with $\omega$ maps $H^{(m-1), (m-1)}$ to $H^{m,m}$. The hard Lefschetz theorem states that this map is injective when $2m \leq \dim X$. Let's assume we're in this case; the other case is related to this one by Poincaré duality. So, $H^{m,m}$ has a filtration as $$H^{m,m} \supset \omega H^{(m-1), (m-1)} \supset \omega^2 H^{(m-2), (m-2)} \supset \cdots.$$ Abbreviate this as $$L^m \supset L^{m-1} \supset \cdots \supset L^1 \supset L^0.$$
Define an inner product on $H^{m,m}$ by $$\langle f,g \rangle = \int \omega^{\dim X-2m} f g.$$ The Hodge index theorem says (in part) that this will be positive definite on $L^0$, negative definite on the orthogonal complement of $L^0$ within $L^1$, positive definite on the orthogonal complement of $L^1$ within $L^2$, and so forth. Let $M^i$ be the orthogonal complement of $L^{i-1}$ in $L^i$. (Not sure of the standard nomenclature here.) The case of $H^{1,1}$ of a surface is particularly easy, because $M^0$ is one-dimensional, spanned by $\omega$, and $M^1$ is everything else.
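To make the signature pattern concrete, here is a minimal numeric illustration for the simplest interesting case, $X = \mathbb{P}^1 \times \mathbb{P}^1$ (my choice of example, not from the text above): on $H^{1,1}$, in the basis of the two rulings $f_1, f_2$, the intersection form has $f_1 \cdot f_1 = f_2 \cdot f_2 = 0$ and $f_1 \cdot f_2 = 1$, the hyperplane class of the Segre embedding is $\omega = f_1 + f_2$, and the class $f_1 - f_2$ spans the orthogonal complement of $\omega$.

```python
# Intersection form on H^{1,1}(P^1 x P^1) in the basis of the two rulings
# f1, f2:  f1.f1 = f2.f2 = 0 and f1.f2 = 1.
Q = [[0, 1],
     [1, 0]]

def pairing(u, v):
    # <u, Q v>, the intersection pairing in this basis
    return sum(u[i] * Q[i][j] * v[j] for i in range(2) for j in range(2))

omega = (1, 1)   # hyperplane class of the Segre embedding: f1 + f2
prim  = (1, -1)  # spans the orthogonal complement M^1 of omega

print(pairing(omega, omega))  # 2:  positive definite on M^0 = span(omega)
print(pairing(prim, prim))    # -2: negative definite on M^1
print(pairing(omega, prim))   # 0:  the two pieces are orthogonal
```

The signature $(+,-)$ is exactly the surface case of the Hodge index theorem: positive on the line spanned by $\omega$, negative on its orthogonal complement.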
It is relatively easy to prove Castelnuovo positivity from the Hodge index theorem for surfaces; see, for example, Hartshorne Exercise V.1.9. I have not seen anyone write out an analogue of Castelnuovo positivity for $S \times S$ when $S$ is higher dimensional. However, it is known how to adapt Weil's proof of the Riemann hypothesis to higher dimensional $S$, if one had an analogue of the Hodge index theorem for $S \times S$ in characteristic $p$. I've been told that a good reference for this is Kleiman's Algebraic Cycles and the Weil Conjectures, but I have not read this myself.
Let me explain why this is difficult. If $X$ is defined in characteristic $p$, then $H^k(X)$ must be interpreted as cohomology with coefficients in $\mathbb{Q}_{\ell}$ (or, nowadays, $\mathbb{Q}_p$). Since these fields aren't ordered, we can't talk about positive definiteness.
Weil dodges this obstacle by talking about the vector space of algebraic cycles. This is the $\mathbb{Q}$-vector space spanned by algebraic cycles, which I'll denote $A^{m}$. In characteristic $0$, it is a subspace of $H^{2m}(X, \mathbb{Q}) \cap H^{m,m}(X, \mathbb{C})$. The Hodge conjecture says that it is precisely this subspace. In any characteristic, we have a map $$A^m \to H^{2m}.$$ This map is either known or conjectured to be an injection, depending on exactly how you define $A^m$. Let's assume that it is an injection. The inner product $\langle, \rangle$ is $\mathbb{Q}$-valued on $A^m$, so it makes sense to talk about its signature restricted to sublattices of $A^m$.
I'm going to make a secret switch of notation here, and use $L^i$ and $M^i$ to now refer to constructions in $H^{2m}$ rather than $H^{m,m}$. In the end, we'll be interested in things like $A^m \cap L^i$ which, in characteristic $0$, would live in $H^{m,m}$ anyway. By making this switch, I avoid having to explain how the Hodge decomposition works (and doesn't) in characteristic $p$.
In the case where $X$ is a surface, the generator $\omega$ of $M^0$ lies in $A^m$. One can use this to show that $$A^m = (A^m \cap M^0) \oplus (A^m \cap M^1).$$ The analogue of the Hodge index theorem then says that $\langle, \rangle$ is positive definite on $A^m \cap M^0$ and negative definite on $A^m \cap M^1$.
In all higher dimensional cases, this falls apart. It is (I believe) not known that $\langle, \rangle$ is nondegenerate on $L^i$, so it is not known that we can define the $M^i$. It is certainly not known that $$A^m = \bigoplus (A^m \cap M^i).$$ And it is not known that $(-1)^i \langle, \rangle$ restricted to $A^m \cap M^i$ is positive definite. Grothendieck's standard conjectures assert that all of this works. This is a major, and challenging, field of research.
I'll close by mentioning a challenge that is more suited to a combinatorial algebraic geometer like me. Harry Tamvakis told me that he tried, and failed, to prove the hard Lefschetz and Hodge index theorems for grassmannians by brute force. Here the cohomology ring is given by well known formulas, so the difficulties are all combinatorial. I can't say this is an important problem, but it sounds fun.
http://math.stackexchange.com/questions/265009/hyperbolas-on-an-imaginary-graph?answertab=oldest
# Hyperbolas on an Imaginary Graph
My first question is what this type of graph (of $x-y-i$) is called, since I was unable to find any information about any such graph.
Now for the real question: I used the equation $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$, drew the graph on the $xy$ plane, and then added an imaginary axis and found the value $y = bi$. So I plotted it, and since it was symmetric I did the same for $-b$; then I saw that the values fell sharply, so now I think this graph will be an ellipse. Am I correct?
Is there any research about the $xy$ plane with an imaginary axis?
-
## 1 Answer
First of all, I would not call it the "imaginary axis" - the name needs to be clearer as to what the value on that axis represents. If I understand your question correctly, it is the imaginary $y$-axis; in other words, you're allowing $y$ to be a complex number $y=y_0+iy_1$, while you are still requiring $x$ to be real.
If I understand your question correctly, you are wondering if your plot of the solutions to $$\frac{x^2}{a^2}-\frac{(y_0+iy_1)^2}{b^2}=1$$ is correct. Well, I'd advise thinking about it like this: $$\frac{x^2}{a^2}-\frac{(y_0+iy_1)^2}{b^2}=\bigg(\frac{x^2}{a^2}-\frac{y_0^2}{b^2}+\frac{y_1^2}{b^2}\bigg)-i\bigg(\frac{2y_0y_1}{b^2}\bigg)=1$$ The only way this is possible is if $y_0=0$ or $y_1=0$; otherwise, the imaginary part of the left side is non-zero, while the imaginary part of the right side is zero.
Thus, you're looking for the solutions to $$\frac{x^2}{a^2}-\frac{y_0^2}{b^2}+\frac{y_1^2}{b^2}=1$$ where either $y_0=0$ or $y_1=0$. The above equation defines a hyperboloid of one sheet, and so you're looking for the intersection of that hyperboloid with the $xy_1$-plane (where $y_0=0$) and $xy_0$-plane (where $y_1=0$).
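To double-check the split into real and imaginary parts numerically, here is a small sketch; the particular values of $a$, $b$, $x$, $y_0$, $y_1$ below are arbitrary choices of mine, not from the question.

```python
# Numeric check of the split into real and imaginary parts, at sample values.
a, b = 2.0, 3.0
x, y0, y1 = 1.5, 0.7, -1.2

y = complex(y0, y1)
lhs = x**2 / a**2 - y**2 / b**2

re = x**2 / a**2 - y0**2 / b**2 + y1**2 / b**2  # claimed real part
im = -2 * y0 * y1 / b**2                        # claimed imaginary part

print(abs(lhs.real - re) < 1e-12)  # True
print(abs(lhs.imag - im) < 1e-12)  # True
```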
In conclusion: Your plot seems to be of the right form, though it is not centered correctly (the center of the whole thing should be at the origin).
-
http://mathhelpforum.com/advanced-algebra/18634-finite-group-element-prime-order.html
Thread:
1. Finite group with element of prime order
Let G be a finite group with more than one element. Show that G has an element of prime order.
2. Let $a\in G, \ ord(a)=n$.
If $n$ is prime, we're done.
Else, let $p$ be a prime divisor of $n$ and write $n=pm$.
Then $a^n=a^{pm}=(a^m)^p=e$.
Let $b=a^m\in G\Rightarrow b^p=e$.
Let $0<q<p$. Then $b^q=(a^m)^q=a^{mq}$.
But $mq<mp=n\Rightarrow a^{mq}\neq e$.
So $ord(b)=p$.
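The construction in this proof can be checked by brute force in a small concrete group. The sketch below is my own illustration, not from the thread: it works in the additive group $\mathbb{Z}_n$, where the order of $a$ is $n/\gcd(a,n)$, picks the smallest prime divisor $p$ of $\operatorname{ord}(a)$, and forms $b = ma$ where $\operatorname{ord}(a) = pm$.

```python
from math import gcd

def order(a, n):
    """Order of a in the additive group Z_n."""
    return n // gcd(a, n)

def prime_order_element(a, n):
    """Follow the proof: if ord(a) = pm with p prime, return b = m*a.
    Assumes a has order > 1 (i.e. a is not the identity)."""
    k = order(a, n)
    p = next(d for d in range(2, k + 1) if k % d == 0)  # smallest divisor >= 2 is prime
    m = k // p
    return (m * a) % n, p

b, p = prime_order_element(5, 12)   # ord(5) = 12 in Z_12, smallest prime divisor is 2
print(order(b, 12) == p)            # True: b has prime order
```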
3. Originally Posted by red_dog
Let $a\in G, \ ord(a)=n$.
If $n$ is prime, we're done.
Else, let $p$ be a prime divisor of $n$ and write $n=pm$.
Then $a^n=a^{pm}=(a^m)^p=e$.
Let $b=a^m\in G\Rightarrow b^p=e$.
Let $0<q<p$. Then $b^q=(a^m)^q=a^{mq}$.
But $mq<mp=n\Rightarrow a^{mq}\neq e$.
So $ord(b)=p$.
If I am reading this right, then, the only case left to show is to prove that no group can exist where the order of all of its elements is 1. (I admit this is practically trivial, but it wasn't addressed in the above proof.)
-Dan
4. Originally Posted by topsquark
If I am reading this right, then, the only case left to show is to prove that no group can exist where the order of all of its elements is 1. (I admit this is practically trivial, but it wasn't addressed in the above proof.)
-Dan
The only element of order 1 is the identity. He said "with more than 1 element (identity)".
http://physics.stackexchange.com/questions/38722/limit-on-space-time-dimension-from-susy/38941
# Limit on space-time dimension from susy
I read an argument saying that it would be impossible to write down a supersymmetric theory in more than 11 dimensions, this limit coming from the dimension of the spinor representation of the Clifford algebra, which goes as $2^{\frac{N}{2}}$ or $2^{\frac{N-1}{2}}$ for $N$ even or odd, respectively.
I haven't studied a lot of susy and I don't see how it wouldn't be possible to create a super-symmetric multiplet in higher dimensions as long as we add enough scalar fields (${\cal{N}} =1$ in my example) to match the fermionic degrees of freedom.
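The growth the question refers to is easy to tabulate. A minimal sketch: the even and odd cases combine into $2^{\lfloor N/2 \rfloor}$ (this counts complex components of a Dirac spinor; Majorana or Weyl conditions can halve the real count, which does not change the exponential growth).

```python
# Dimension of a Dirac spinor in N spacetime dimensions:
# 2^(N/2) for even N and 2^((N-1)/2) for odd N, i.e. 2^(N//2).
def dirac_spinor_dim(N):
    return 2 ** (N // 2)

for N in (4, 10, 11, 12):
    print(N, dirac_spinor_dim(N))   # 4 4 / 10 32 / 11 32 / 12 64
```

The point is that the fermionic side grows exponentially with $N$, while the number of components of any fixed bosonic field content grows only polynomially, so adding scalars cannot keep up indefinitely.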
-
1
You can go up to 12 dimensions, if you have 2 time dimensions. – Ron Maimon Nov 2 '12 at 17:36
## 1 Answer
Well, if we want massless supermultiplets, for instance massless gravitons, then the multiplet will also have to contain massless particles with spin greater than 2. Such particles have to be associated with a gauge symmetry, but it's not Yang-Mills as in massless spin-1, diffeomorphisms as in massless spin-2, or SUGRA as in massless 3/2. So what is that gauge symmetry physically?
-
There's an even stronger result if I recall correctly (the name escapes me at the moment): you can't have an interacting relativistic qft with a finite number of fundamental particles with spin greater than 2. (String theory evades this restriction by having an infinite tower of string modes.) – Michael Brown Jan 1 at 11:45
http://unapologetic.wordpress.com/2008/09/02/pointwise-convergence/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician
## Pointwise Convergence
When we evaluate a power series at a point we get a number if the series converges at that point. We even know that for each power series we have a disk where evaluation gives an absolutely convergent series at every point. In this view we regard a power series as the limit of a sequence of polynomials, evaluate each of the polynomials to get a sequence of numbers, and then take the limit of that sequence.
But what if we change it up. Let’s say we already know that our power series will converge with radius $R$. Then inside the disk $D_R$ of radius $R$ each polynomial defines a function, and evaluation of the power series defines another function. It makes sense to regard the latter as “the limit” of the sequence of the former. That is, we already have $s=\lim\limits_{n\rightarrow\infty}p_n$ as elements of the ring of power series $\mathbb{C}[[X]]$. But now we regard them as living in the ring $(D_R)^\mathbb{C}$ of complex-valued functions on the disk of radius $R$.
And we have a topology on the ring $D^\mathbb{C}$ of complex-valued functions on a domain $D$. Instead of defining this topology in terms of open sets as we usually do, we define it in terms of which nets converge to which points. In fact, we'll make do with sequences, since the extension to convergence of nets is straightforward.
The topology we have staring us in the face is the “pointwise” topology. That is, we say that a sequence $f_n$ of functions on $D$ converges to a function $f$ if and only if for every point $z\in D$ the evaluations converge to the evaluation of $f$: $\lim\limits_{n\rightarrow\infty}f_n(z)=f(z)$.
Alternately we can read this as a recipe: given a sequence of functions $f_n$, if for each point $z\in D$ the sequence $f_n(z)$ of complex numbers converges, then we declare the limiting function to be that function $f$ defined by $f(z)=\lim\limits_{n\rightarrow\infty}f_n(z)$. If at any point $z\in D$ the sequence $f_n(z)$ fails to converge, we declare the sequence of functions to fail to converge.
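As a concrete sketch of this recipe (my example, not from the post): take the polynomials $p_n(z) = \sum_{k=0}^n z^k$, whose pointwise limit on the open unit disk is $f(z) = 1/(1-z)$, and evaluate at a few sample points. Outside the disk the sequence of numbers diverges, so the sequence of functions fails to converge there.

```python
# Partial sums p_n(z) = sum_{k=0}^{n} z^k of the geometric series,
# compared pointwise with the limit f(z) = 1/(1 - z) on the unit disk.
def p(n, z):
    return sum(z**k for k in range(n + 1))

f = lambda z: 1 / (1 - z)

for z in (0.5, -0.9, 0.3 + 0.4j):        # sample points with |z| < 1
    print(abs(p(200, z) - f(z)) < 1e-8)  # True: evaluations converge

print(abs(p(50, 1.5)) > 1e6)             # True: diverges at z = 1.5, outside the disk
```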
Putting a topology on a space of functions marks the first dipping of our toes into the ocean of functional analysis. There's a lot out there, and we'll only be wading into the shallowest waters for now. Still, it gives a hint of the incredible depth that lies just beyond the breakers crashing on the shores of second-semester calculus.
http://math.stackexchange.com/questions/198/arent-constructive-math-proofs-more-sound?answertab=votes
# Aren't constructive math proofs more “sound”?
Since constructive mathematics allows us to avoid things like Russell's Paradox, why doesn't it replace traditional proofs? How do we know the "regular" kind of mathematics is free of paradox without a proof construction?
-
6
I'm voting down as I don't think this question makes sense. As far as I know there's no relationship between Russell's paradox and constructive mathematics, so I don't understand what the question is asking. – Noah Snyder Jul 21 '10 at 5:21
11
@Noah: There is, at least historically - Weyl (1921), "On the New Foundational Crisis of Mathematics", talks of paradoxes threatening the coherence of mathematics as an enterprise, and recommends intuitionistic or predicative mathematics as the solution. – Charles Stewart Jul 21 '10 at 8:25
2
@Noah, they are related in a strong sense. I suggest you read about the history of constructivism. Intuitionists were trying to find a safer foundations that would avoid paradoxes arising in classical mathematics and set theory like Russell's paradox. – Kaveh Nov 12 '11 at 17:53
In fact the Russell paradox is perfectly constructive. It shows that naïve set theory is inconsistent, and does so in an entirely constructive way. No excluded middle is used, because even intuitionistically an equivalence $P\iff\lnot P$ is a contradiction (where $P$ is $R\in R$ for the set $R=\{x\mid x\notin x\}$): one has $(P\to\lnot P)\implies\lnot P$ and $(\lnot P\to P)\implies\lnot\lnot P$, and of course $\lnot P\land\lnot\lnot P\implies\bot$). The problem is with the axioms of naïve set theory, not with the logic used. – Marc van Leeuwen Mar 4 at 12:53
## 6 Answers
Proof theorists have obtained several "relative consistency" proofs between classical and constructive theories. These show that if certain theories of classical mathematics are inconsistent, then corresponding theories of constructive mathematics are also inconsistent. These relative consistency results are proved constructively. They show that the consistency problem does not simply disappear if we switch to constructive mathematics.
One of the more famous relative consistency techniques uses a "double negation translation". This method assigns each formula $\phi$ of a system a corresponding formula $\phi^N$ (the "translation" of $\phi$). The exact definition of the translation varies from author to author, depending on the system at hand. But the name is somewhat accurate: the definition of $\phi^N$ involves adding additional negation symbols to $\phi$ in the right places.
In 1933, Gödel proved there is a translation $N$ of formulas of Peano arithmetic so that whenever a formula $\phi$ is provable in Peano arithmetic, the corresponding formula $\phi^N$ is provable in the constructive system of Heyting arithmetic. Moreover, if $\phi$ is of the form $A \land \lnot A$ then Gödel's translation assigns it the formula $\phi^N = A^N \land \lnot A^N$, which is still contradictory. This means that if Peano arithmetic is inconsistent, so is its constructive counterpart Heyting arithmetic. Gödel's proof is constructive, like you would hope.
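One common version of such a translation (conventions vary by author, as noted above: this is the Gödel-Gentzen negative translation) can be written as a small recursive function on formula trees. This is an illustrative sketch of my own, with formulas encoded as nested tuples.

```python
# Formulas as nested tuples: ('atom', name), ('not', A), ('and', A, B),
# ('or', A, B), ('implies', A, B), ('forall', x, A), ('exists', x, A).
def N(phi):
    """Godel-Gentzen negative translation of a formula."""
    tag = phi[0]
    if tag == 'atom':
        return ('not', ('not', phi))              # double-negate atoms
    if tag == 'not':
        return ('not', N(phi[1]))
    if tag in ('and', 'implies'):
        return (tag, N(phi[1]), N(phi[2]))        # these connectives commute
    if tag == 'or':                               # A or B  ~>  not(not A and not B)
        return ('not', ('and', ('not', N(phi[1])), ('not', N(phi[2]))))
    if tag == 'forall':
        return ('forall', phi[1], N(phi[2]))
    if tag == 'exists':                           # exists x A  ~>  not forall x not A
        return ('not', ('forall', phi[1], ('not', N(phi[2]))))
    raise ValueError(tag)

# The property used in the consistency argument: a contradiction translates
# to a contradiction.
A = ('atom', 'A')
contradiction = ('and', A, ('not', A))
print(N(contradiction) == ('and', N(A), ('not', N(A))))  # True
```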
So if we were only worried about consistency, there would be no advantage to working in Heyting arithmetic instead of Peano arithmetic. But people who work in constructive mathematics do not do it only for the sake of consistency. Constructive proofs carry more information than classical proofs, so constructive provability is interesting even to classical mathematicians.
You can read a little more about Gödel's result at this wikipedia article.
-
A whole bunch of things in mathematics are inherently nonconstructive. For instance, invariant theory--recall the famous quote by Gordan that Hilbert's mathematics was "theology." (A quote which, I believe, was in jest.) The Hahn-Banach theorem, a fundamental tool in functional analysis (and a great tool for proving all sorts of results, like approximation results--Runge's theorem, the Stone-Weierstrass theorem, and more) relies on the axiom of choice, and is consequently nonconstructive. The fact that any proper ideal in a ring is contained in a maximal ideal is frequently used in algebra, and yet it needs the axiom of choice. The use of ultraproducts in logic (or the construction of hyperreal numbers) is inherently nonconstructive: you can't just exhibit a nonprincipal ultrafilter on the natural numbers.
Basically, a lot of mathematics just doesn't work without Zorn's lemma, and this is equivalent to the axiom of choice.
-
While Hilbert's arguments regarding invariant theory were non-constructive, there are nowadays constructive proofs, I think. – Mariano Suárez-Alvarez♦ Aug 4 '10 at 2:08
1
Any proper ideal is contained in a maximal ideal... – KCd Aug 17 '10 at 2:21
Fixed, thanks . – Akhil Mathew Aug 17 '10 at 6:34
– AD. Oct 28 '10 at 20:49
"Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists. To prohibit existence statements and the principle of excluded middle is tantamount to relinquishing the science of mathematics altogether."
-David Hilbert
-
The distinction between constructive mathematics and traditional mathematics has nothing to do with Russell's Paradox.
Constructive mathematics simply requires working with one fewer basic postulate that many mathematicians have believed to be sensible and on which some proofs are based, namely the Axiom of Choice.
-
3
Although you're right that constructive mathematics has nothing to do with Russell's paradox, my understanding is that the key point that constructivists deny is the law of the excluded middle (not the Axiom of Choice, though most constructivists don't use that either). – Noah Snyder Jul 21 '10 at 0:19
2
@Noah: I've browsed the ultimate authority on everything (wikipedia) and it seems you're right on this. Grad school was a long time ago! I just remember one faculty member walking out on my thesis defense when I admitted to using the axiom of choice. – donroby Jul 21 '10 at 2:48
@Noah: There isn't really such a thing as "the" axiom of choice constructively. Higher-order intuitionistic logic has the existence of Skolem functions for all Pi-0-2 sentences, which over ZF is equivalent to the axiom of choice, and this choice principle is generally known as the axiom of choice in intuitionistic type theory, and is much used. – Charles Stewart Jul 21 '10 at 8:42
If we deal only with finite sets, there is no real difference between constructive and non-constructive proofs.
When dealing with infinite sets, many propositions cannot be proved without using non-constructive proofs; the Axiom of Choice (AC, in brief) or an equivalent proposition is required. Russell's paradox is not a problem per se; you just rule out certain collections of things as sets. The Banach-Tarski paradox (you may take a ball, "divide" it into a finite number of parts, translate and rotate them, and obtain two balls equal to the first) may be worse indeed. But few mathematicians would prefer not to do a lot of maths because AC is not allowed!
-
Here is one elementary result you can't prove without the Axiom of Choice (that I used to think was not part of constructive mathematics): let $\left\{X_i\right\}_{i\in I}$ be an arbitrary family of non-empty sets. Then its product $\prod_{i\in I} X_i$ is a non-empty set.
Everybody knows how to prove that for a family of two sets. If two sets $X$ and $Y$ are non-empty, then their cartesian product $X \times Y$ is non-empty too: since $X$ is non-empty, you can choose one element from it: $x \in X$. For the same reason, you can choose also one $y \in Y$. So you have an element $(x,y) \in X\times Y$ and therefore it is non-empty. Right?
For a finite family of non-empty sets $X_1, \dots , X_n$ this procedure still works: you choose one element in each set starting from the first: $x_1 \in X_1$. And then keep on doing the same for the rest: choose $x_2$ in the second set $X_2$, $x_3$ in the third one $X_3$,... And so on, till you reach $x_n \in X_n$. You have thus produced an element of the cartesian product: $(x_1, \dots , x_n) \in X_1 \times \dots \times X_n$.
Even if you had an infinite countable family of non-empty sets $X_1, X_2, \dots , X_n, \dots$, you could produce an element of their product $\prod_{n=1}^{\infty} X_n$ this way: choose successively elements $x_1 \in X_1$, $x_2 \in X_2$,..., $x_n \in X_n$... And you would obtain your element $(x_n)$ of your infinite cartesian product. Hence $\prod_{n=1}^{\infty} X_n$ is non-empty too if all the $X_n$, $n=1, 2, \dots$ are non-empty.
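The finite/countable algorithm described above is easy to write down, which makes it vivid what goes missing in the uncountable case: the code below walks the index set in order, and "in order" is exactly what an arbitrary index set need not provide. (The sample sets are my own.)

```python
# Build one element of a cartesian product of finitely many non-empty sets
# by choosing from each set in turn -- the procedure from the text.
def product_element(sets):
    """Return a tuple (x_1, ..., x_n) with x_i taken from sets[i]."""
    return tuple(next(iter(s)) for s in sets)

X = [{1, 2}, {'a'}, {3.5, 4.5}]
elem = product_element(X)
print(all(elem[i] in X[i] for i in range(len(X))))  # True: it lies in the product
```

For a countable family one could build the element lazily the same way; for an uncountable index set there is no such traversal, and that is precisely the gap the Axiom of Choice fills.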
Now, try to do the same with an arbitrary family of sets $X_i$, with the indexes $i$ belonging to an arbitrary indexing set $I$. That is, the set $I$ can be infinite and not countable. Cartesian products of this kind do exist in "nature". For instance, $I$ could be the set of real numbers $\mathbb{R}$. Take $X_i = \mathbb{R}$ for all $i \in \mathbb{R}$ too. This cartesian product $\prod_{i\in \mathbb{R}} X_i = \mathbb{R}^\mathbb{R}$ is the same as the set of all (not necessarily continuous) functions from $\mathbb{R}$ to $\mathbb{R}$.
Of course, you can prove that $\mathbb{R}^\mathbb{R}$ is non-empty by showing an element of it (for instance, $f(x) = x$ for all $x\in \mathbb{R}$), but how can you prove, in general, that the product of an arbitrary family of non-empty sets $\prod_{i\in I}X_i$ is non-empty too?
If you try to imitate what you have just done in the finite or countable cases, you'll find yourself asking: where do I start? Which is the first $i \in I$? Assuming there is a first $i$, which is the next one?
The answer is that, for an infinite, non-countable set of indexes $I$, there is in general no such thing as a first index $i$, nor a next one. So you can't use the previous algorithm to produce an element of the product $\prod_{i\in I} X_i$...
Unless you have something that allows you to choose one particular element $x_i \in X_i$ of each set, no matter how, even with no precise constructive algorithm as in the finite or countable cases.
If you were able to do that for no matter which set of indexes $I$, then you would have your element $(x_i)_{i\in I}$ and you would have proved that, if every $X_i \neq \emptyset$, so is their cartesian product $\prod_{i\in I} X_i$.
What allows you, for an infinite, non-countable set of indexes $I$, to choose those $x_i \in X_i$ for every $i\in I$? Well, this is exactly what the Axiom of Choice says.
-
3
People often say that the axiom of choice is "nonconstructive", but many systems of constructive math include the axiom of choice. One example is Martin-Lof type theory, which has very strong intuitionist bona fides. Errett Bishop famously wrote that the axiom of choice "is unique in its ability to trouble the conscience of the classical mathematician, but in fact it is not a real source of nonconstructivity in classical mathematics. A choice function exists in constructive mathematics, because a choice is implied by the very meaning of existence." The classical "or" is much more problematic. – Carl Mummert Aug 4 '10 at 19:42
@Carl. Ok. I've changed the first sentence. – Agustí Roig Aug 5 '10 at 11:25
A very interesting and detailed exposition for how a "constructive" proof seems to require AOC. But surely you don't need AOC simply to prove that an uncountable product of nonempty sets is non-empty. Once you prove it for finite or countably infinite products, an injective mapping of any of those into the uncountable product should suffice, no? – DavidW Aug 5 '10 at 19:59
@DavidW: Thanks. As for your question: I'm afraid not. Think about the case of the product of two sets X x Y. There is no map X ---> X x Y. For products you only have maps in the opposite direction X x Y ---> X. In fact, if you look at the proof of an arbitrary product being non-empty it is just the Axiom of Choice. I mean: the fact that an arbitrary product is non-empty, if all its components are non-empty, is equivalent to the Axiom of Choice since elements of the product are, by definition, choice functions! :-) – Agustí Roig Aug 6 '10 at 1:06
@DavidW: Partial correction to my previous statement. When I said "there is no map X ---> X x Y", I actually meant "no canonical map". Of course there are plenty of them: choose one element of Y, y_0, and you have one: x \mapsto (x, y_0). But focus on what I needed to do: I had to choose an element of Y! In the uncountable case you would need to do this... an infinite (uncountable) number of times. So you are using what? :-) – Agustí Roig Aug 6 '10 at 1:12
http://mathhelpforum.com/advanced-algebra/89198-module-notation.html
# Thread:
1. ## module notation
Hi. I'm trying to figure out what $M=Rm$ means in an assignment question (I won't post the question since its being marked). Some possibly relevant context:
$M$ is an $R$ module
$R$ is a PID
$\textrm{ann}_R(m) = (p)$ for some $p \in R$
2. Originally Posted by badgerigar
Hi. I'm trying to figure out what $M=Rm$ means in an assignment question (I won't post the question since its being marked). Some possibly relevant context:
$M$ is an $R$ module.
it means that $M$, as an $R$-module, is generated by $m \in M$; in other words, $M=Rm=\{rm : r \in R \}$.
http://www.exampleproblems.com/wiki/index.php/CoV20
CoV20
From Exampleproblems
Minimize $J(y)=\int 2\pi y \sqrt{1+y'^2} dx\,$
Our functional does not depend explicitly on x, so we can use the first integral $F - y' F_{y'} = c_1 \,$.
$y \sqrt{1+y'^2} - \frac{y y'^2}{\sqrt{1+y'^2}} = c_1$
$\frac{y}{\sqrt{1+y'^2}} = c_1$
$y' = \frac{\sqrt{y^2-c_1^2}}{c_1}$
$x = \int \frac{c_1 \, dy}{\sqrt{y^2-c_1^2}}$
Letting $y = c_1 \cosh t$, $dy = c_1 \sinh t \, dt$, we get:
$x = \int \frac{c_1^2 \sinh t \, dt}{c_1 \sinh t} = \int c_1 \, dt$
$x = c_1 t + c_2 \,$
$y = c_1 \cosh \frac{x-c_2}{c_1}$
This problem is equivalent to finding the shape of a hanging cable of uniform density with fixed endpoints. To minimize the total potential energy, we take as our functional the integral of the cable's height with respect to arc length.
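The result can be checked numerically. The sketch below (with arbitrary illustrative constants $c_1$, $c_2$, not taken from the problem) verifies that the catenary $y = c_1 \cosh\frac{x-c_2}{c_1}$ satisfies the first integral $y/\sqrt{1+y'^2} = c_1$ at several sample points:

```python
import math

# Illustrative constants (arbitrary choices)
c1, c2 = 2.0, 1.0

def y(x):
    return c1 * math.cosh((x - c2) / c1)

def yprime(x):
    # d/dx of c1*cosh((x-c2)/c1) is sinh((x-c2)/c1)
    return math.sinh((x - c2) / c1)

# The first integral y / sqrt(1 + y'^2) should equal c1 at every x,
# since 1 + sinh^2 t = cosh^2 t.
for x in [-3.0, 0.0, 0.5, 4.0]:
    value = y(x) / math.sqrt(1.0 + yprime(x) ** 2)
    assert abs(value - c1) < 1e-12
```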
Calculus of Variations
Main Page
http://mathoverflow.net/questions/49437?sort=votes
Why are so few operations with arity bigger than 2?
In the usual algebraic structures, like groups, rings, monoids, etc., or in algebras coming from logics, like Boolean algebras, Heyting algebras and the like, the operations are usually of arity 0 (constants), 1 or 2. My question is two-fold:
1. Provide examples of algebras arising naturally in some field (I'm mainly interested in algebras coming from logics, but I'm open to any field) with operations of arity 3 or bigger.
2. Is there any reason (more or less profound) why there are so few algebras with operations of arity bigger than 2?
Thank you in advance.
-
3
There are planar ternary rings, which are a way of re-encoding (coordinatized) projective planes (in the combinatorial sense): en.wikipedia.org/wiki/Planar_ternary_ring ; however these don't form a variety, which given the tag I expect you might want. – Harry Altman Dec 14 2010 at 21:47
8
I've always felt that our habits of 1. writing (and a written text is more or less 1-dimensional) and 2. using infix notations for the operations we like the most biased the game against big arity operations. To scientifically test this intuition of mine, one should rebuild a civilization where addition and multiplication are written in an other way (say, reverse Polish notation) and see what comes up. But who has the time? – Maxime Bourrigan Dec 15 2010 at 8:49
20 Answers
$\text{average}(x_1,\dots,x_n) = \dfrac{x_1 + \cdots + x_n}{n}.$
$\text{cross-ratio}(z_1,z_2;z_3,z_4) = \dfrac{(z_1-z_3)(z_2-z_4)}{(z_2-z_3)(z_1-z_4)}.$
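Both operations are easy to experiment with. As a sketch, the following checks numerically that the cross-ratio as defined above is invariant under a Möbius transformation $z \mapsto (az+b)/(cz+d)$ (the sample points and coefficients are arbitrary choices):

```python
# Sketch: the 4-ary cross-ratio is invariant under Mobius maps
# z -> (a z + b) / (c z + d) with ad - bc != 0.

def cross_ratio(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z2 - z3) * (z1 - z4))

def mobius(z, a, b, c, d):
    return (a * z + b) / (c * z + d)

# Arbitrary test points and an arbitrary invertible Mobius map
pts = [1 + 2j, -3 + 0.5j, 4 - 1j, 0.25 + 0.25j]
a, b, c, d = 2, 1j, 1, 3  # ad - bc = 6 - 1j, nonzero

before = cross_ratio(*pts)
after = cross_ratio(*(mobius(z, a, b, c, d) for z in pts))
assert abs(before - after) < 1e-9
```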
-
+1 for two such great examples, and for representing analysis in an algebra-laden topic. – Adam Hughes Dec 15 2010 at 8:01
2
I resent analysis in this algebra laden topic =P. – Harry Gindi Dec 15 2010 at 11:27
5
Analysis? Both operations are thoroughgoingly algebraic. Introduce some limiting procedure somewhere to get analysis. – Todd Trimble Dec 15 2010 at 11:43
I realize. I was just playing off of Adam's comment. – Harry Gindi Dec 15 2010 at 12:47
Maybe one could argue that I what I was "representing" was geometry, and then Harry Gindi wouldn't have to resent it. – Michael Hardy Dec 15 2010 at 15:43
First a trivial remark: if you have a binary operation you automatically have higher arity operations by nesting. Hence I would not say that there are fewer such algebras. But there is a sense in which that is cheating. Examples of these are some of the triple systems, say Lie triple systems, which are to symmetric spaces what Lie algebras are to Lie groups: namely, the best linear approximation.
Starting at least in the 1940s, the Russian algebraist AG Kurosh and his school sought to generalise many of the algebraic structures with a binary operation to an $n$-ary operation. This is explained in the paper/monograph Multioperator rings and algebras from 1969 as well as in work of Baranovic and Burgin from 1975 on Linear $\Omega$-algebras. Perhaps the best known example of this kind of structure are the $n$-Lie algebras introduced by VT Filippov in 1980.
3-Lie algebras had previously appeared in work of Nambu trying to generalise Hamiltonian mechanics by replacing the symplectic form by a closed 3-form. This line of work was continued by Takhtajan and collaborators.
In the last few years, $n$-ary Leibniz algebras (but mostly $n=3$) have been given lots of attention due to the unexpected rôle they play in the AdS$_4$/CFT$_3$ correspondence for M2-branes. Two years ago I gave some lectures on some of the underlying algebraic story at Nordita (Stockholm) and wrote them up. You may wish to peruse them for the references.
-
Higher arity operations appear quite naturally when homotopy theory enters the stage; e.g., $A_\infty$-algebras, $L_\infty$-algebras and $E_\infty$-algebras.
-
9
But of course, then you may as well ask "why are so many important $A_\infty$-algebras formal?" – Ben Webster♦ Dec 14 2010 at 21:57
Examples of important and non-trivial $L_\infty$-algebras appear as the 'Lie algebras' of higher terms in the Whitehead tower for $O(n)$ (or more generally simple, finite-dimensional Lie groups) which are naturally smooth $k$-groupoids (various $k$), applicable in string theory. – David Roberts Dec 14 2010 at 23:51
Operations of arity 3 naturally arise in universal algebra. For example, one strand of research is to characterize the properties of the lattice of congruences of a variety by the existence of special terms -- these usually have arity 3. For example, if a variety has a ternary operation m(x, y, z) such that m(x, y, y) = x and m(x, x, y) = y, then the lattice of congruences is modular. (The converse is not true, but there is a weaker statement involving ternary operations that is true.) Examples of this include groups ($m(x, y, z) = x y^{-1} z$) and vector spaces ($m(x, y, z) = x - y + z$).
The ternary operation for vector spaces has a natural geometric interpretation as vector addition in affine space, where vectors are not required to be based at the origin. If you draw a vector from y to x and a vector from y to z, then $m(x, y, z)$ is the vector from y to x + z. You can think of addition as defined by drawing a parallelogram $xyzw$. Then $m(x,y,z)=w$.
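The Mal'cev term for vector spaces described above can be sketched in a few lines (the sample vectors are arbitrary choices for illustration):

```python
# Sketch: the ternary Mal'cev operation m(x, y, z) = x - y + z on R^2,
# with the defining identities m(x, y, y) = x and m(x, x, y) = y.
# (For groups the analogous term is m(x, y, z) = x * y^{-1} * z.)

def m(x, y, z):
    return tuple(xi - yi + zi for xi, yi, zi in zip(x, y, z))

x, y, z = (1.0, 2.0), (0.0, -1.0), (3.0, 5.0)

assert m(x, y, y) == x
assert m(x, x, y) == y

# Parallelogram reading: m(x, y, z) is the fourth vertex w of the
# parallelogram with edges y->x and y->z, i.e. w = x + z - y.
assert m(x, y, z) == (4.0, 8.0)
```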
-
I was just about to say this myself... – Todd Trimble Dec 14 2010 at 23:07
I knew that I was racing against time with that answer... – arsmath Dec 14 2010 at 23:08
I agree with you in the Mal'cev terms, but this is not exactly what I'm asking, since they are not fundamental operations of the algebra, but rather terms defined using the fundamental operations of the algebra. – Carlos Sáez Dec 14 2010 at 23:10
8
But Carlos, the fundamental operation on an affine space (vector space without an assigned origin) is ternary: it takes a triple $(x, y, z)$ to the fourth point of a parallelogram. Similar remark for torsors. – Todd Trimble Dec 14 2010 at 23:13
'Twas the first thing I thought of too... – David Roberts Dec 14 2010 at 23:52
Any $k$-ary relation can be expressed in terms of binary relations by means of projection maps, i.e. introduce new objects which correspond to $n$-tuples of the original objects ($n \leq k$), and introduce binary projection relations (i.e. $P(x,y)$ iff x is the first $n-1$ coordinates of $y$). Then $k$-ary relations are equivalent to a unary relation on $k$-tuples, and the $k$-tuples are all expressible in terms of the original objects via the binary projections maps.
In brief, 2-ary relations are sufficiently expressive to handle all arities. (And similarly 2-ary functions can express all functions)
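A minimal sketch of this encoding for a 3-ary relation (the set `A` and the relation `R` below are arbitrary illustrative choices, not from the answer):

```python
# Sketch: re-expressing a 3-ary relation R on A^3 using only unary and
# binary data, by adding tuple objects and a binary projection relation P.

A = {0, 1, 2}
R = {(a, b, c) for a in A for b in A for c in A if (a + b) % 3 == c}

# New objects: all tuples over A of length 1, 2 or 3
tuples = ({(a,) for a in A}
          | {(a, b) for a in A for b in A}
          | {(a, b, c) for a in A for b in A for c in A})

# Binary projection relation P(x, y): "x is y with its last coordinate dropped"
P = {(t[:-1], t) for t in tuples if len(t) >= 2}

# The 3-ary relation becomes a unary predicate on length-3 tuples
unary_R = set(R)

def in_R(a, b, c):
    # Recover 3-ary membership using only P (binary) and unary_R (unary)
    t = (a, b, c)
    return ((a,), (a, b)) in P and ((a, b), t) in P and t in unary_R

assert in_R(1, 2, 0) and not in_R(1, 2, 1)
```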
-
Here's one I learned from Todd Trimble. Giving a set $X$ the structure of a compact Hausdorff space is the same as equipping $X$ with $J$-ary operations $X^J \to X$ for every set $J$, one for each ultrafilter $P$ on $J$, corresponding to the $P$-limit of a $J$-tuple of elements of $X$, with the appropriate compatibility relations.
-
3
Here all the finitary operations are mere projections, and the only interesting operations are all infinitary! – Todd Trimble Dec 14 2010 at 23:09
The composition law in a monoid is usually represented using a binary operation (multiplication) and a zeroary operation (unit), but I view it more naturally as an operation (say, a bracket) taking any finite list of arguments and being associative in the sense that brackets can be eliminated in any expression, for example we have the identities
[a,[],b,[[c,d],e],[f]]=[a,b,c,d,e,f]
and
[a]=a.
Then we can define a zeroary operation 1:=[] and a binary operation a*b:=[a,b], and recover the bracket from them, using identities such as [a,b,c,d]=[a,[b,[c,d]]]. These two operations satisfy the usual axioms of a monoid, and any two operations satisfying them can be extended to an associative finitary bracket.
I view the usual representation by a binary and a zeroary operation as an artifact for being able to produce simpler-looking proofs that the structures that we encounter are monoids.
My point is that naturally binary operations are not that common either! Perhaps an example is the Lie bracket.
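The finitary bracket can be sketched by folding a binary operation, as described; string concatenation below is an arbitrary illustrative monoid:

```python
# Sketch: a finitary associative bracket for a monoid, recovered from a
# binary operation and a unit by folding.

def bracket(args, op, unit):
    """[a1, ..., an] = a1 * a2 * ... * an, with [] = unit."""
    result = unit
    for a in args:
        result = op(result, a)
    return result

# Example monoid: strings under concatenation, with unit "".
def concat(a, b):
    return a + b

assert bracket([], concat, "") == ""
assert bracket(["a", "b", "c"], concat, "") == "abc"

# Nested brackets flatten, e.g. [a, [], b, [[c, d], e]] = [a, b, c, d, e]:
inner = bracket([bracket(["c", "d"], concat, ""), "e"], concat, "")
nested = bracket(["a", "", "b", inner], concat, "")
assert nested == "abcde"
```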
-
2
This is precisely the idea behind the notions of PROPs and operads, in fact. – Mariano Suárez-Alvarez Feb 5 2012 at 21:58
1
Excellent answer! I recall how disturbed I was by the abstract algebra textbook example of $\{0,2,4\}$ as a subring of the ring of integers modulo 6. I wanted to shout BUT THE DAMNED THING IS NOT CLOSED UNDER MULTIPLICATION but had to restrain myself since I was the teacher. Twenty years later I still haven't fully recovered from part (b) of the exercise, which was to show that this "ring" has a unit. NO IT DOESN'T. AND THEREFORE IT'S NOT A RING. – Johan Wästlund Feb 6 2012 at 9:42
It's a rng! I view the concept of ring as inspired in the example of the endomorphism ring of an abelian group, and the concept of rng as inspired in the example of the example of... an ideal in a ring. – Marcos Cossarini Feb 6 2012 at 9:58
Mariano has mentioned the connection to PROPs and operads, but I think it's more basic: The forgetul functor from monoids to sets is monadic. – Martin Brandenburg Feb 6 2012 at 11:05
2
Johan, is it because 4*4 is really 16? Gerhard "Ask Me About System Design" Paseman, 2012.02.06 – Gerhard Paseman Feb 6 2012 at 16:03
1. I'd say that a lot of "higher-dimensional mathematics" concerns spaces with operations of arbitrary finite arity. I'm thinking of things like planar algebras, operads, ...
Let me mention one area that I like, which are various things I'd call "associative", and rather than trying to give precise definitions I'll mention planar algebras. A planar algebra includes in the data a $k$-ary operation for every way to draw nonintersecting curves (that are either closed or end on a boundary) on a disk minus $k$ subdisks. These operations are required to compose in any way that you can stick a disk-minus-holes into a hole in another disk (with the requirement that any curves ending on the glued-along boundary components match up). Then there's also an associativity requirement that says that everything only depends on the topology of the diagram, not the geometry.
Anyway, it is possible to write any "planar operation" as a composition of binary operations (although you need infinitely many "basic" binary operations), but this is the wrong way to think about it, I claim. In particular, there's really no canonical choice how to write something as a composition of binaries.
2. From this point of view, let's now revisit usual associative multiplication. The associativity says nothing more nor less than: `ab c = a bc`. Drawn this way, it's clear that this is again a statement that "only the topology matters, not the geometry". But the point is that the usual multiplication is "one-dimensional", in that the ambient space where things like "a", "b", "c" are put is a line. (Compare planar algebras, which are inherently two-dimensional.) It took a while to invent two-dimensional mathematics, because we're used to thinking of "functions" acting consecutively in "time", and our experience is that "time" is one-dimensional. Anyway, the point is that if your mathematics is one-dimensional, then it's much easier to see how to break any one-dimensional picture into "basic" subpictures with only two things going on. I think this is the answer to your question 2, why most of the time we only think about 2-ary operations.
Finally, I'll mention that there's another direction you can go, which is to include "coalgebra" along with your algebra. By "algebra" I mean a theory with some "$k$-ary operations" that take in $k$ inputs and spit out one output. But "coalgebra" has operations that have multiple outputs. Coalgebraic operations are very important, especially in computing: you wouldn't want a computer program that only does one thing when you ran it, because then it couldn't also tell you that it had done it!
-
Lie and Jordan triple systems have arity 3. A Jordan triple system is a vector space whose additional structure is given by a triple product $$(x,y,z)\rightarrow \{x,y,z\}$$ that satisfies the identities $$\{u,v,w\} = \{u,w,v\}$$ and $$\{u,v,\{w,x,y\}\} = \{w,x,\{u,v,y\}\} + \{w, \{u,v,x\},y\} -\{\{v,u,w\},x,y\}.$$ Every Jordan algebra can be embedded in a Jordan triple system but the converse is not true. Any Jordan triple system is a Lie triple system with respect to the product $$[u,v,w] = \{u,v,w\} - \{v,u,w\}.$$ The structure of a Lie triple system is given by a bracket satisfying the identities $$[u,v,w] = -[v,u,w], \qquad [u,v,w] + [w,u,v] + [v,w,u] = 0$$ and $$[u,v,[w,x,y]] = [[u,v,w],x,y] + [w,[u,v,x],y] + [w,x,[u,v,y]].$$
-
Is this like the question why matrices are more common than multi-matrices? Feel free to flag this as spam because I don't have enough mathoverflow bucks to comment.
-
I don't think this is the same. – Andres Caicedo Dec 14 2010 at 22:17
1
@Andres: of course it is the same! – unknown (google) Feb 5 2012 at 23:39
I'm not convinced linear maps are more common than multilinear maps. In spite of the fact that any multilinear map can be written as a linear map by currying (en.wikipedia.org/wiki/Currying), many of the maps that arise in geometry are typically presented as multilinear. In typical presentations, derivatives, complex structures, and the Maurer-Cartan form are 1-linear, but metrics, symplectic forms, and Lie brackets are 2-linear, the Riemann curvature tensor is 3-linear, calibrations are k-linear (for any k), and volume forms are n-linear (in dimension n). – Vectornaut Feb 6 2012 at 3:19
Looking at my list above, it does seem that maybe 2-linear things are more common than linear things of other arities, which might be a better analogy to the original post. – Vectornaut Feb 6 2012 at 3:20
To add to the list of examples:
1. Heaps have a single ternary operation (identities on linked page). In short, a heap is to a group what an affine space is to a vector space: as soon as you pick an identity then you get a group.
2. Totally convex spaces, which are spaces that allow arbitrary convex combinations. Simple examples are the unit balls of normed vector spaces, but others such as $(0,1)$ exist.
3. Similarly, $C^*$-algebras, and there is a closely related theory for Banach algebras. See this page on the nLab where I started gathering together a few details on these.
To address the point as to why we often only use operations of arity at most 2, here's a neat little fact. Abstractly, we can consider operations of arbitrary arity with arbitrary identities, but in concrete situations the operations usually have a high level of compatibility. A common one to ask for is commutativity. This is commutativity of operations, which is ever-so-slightly different from what we normally think of as commutativity (though the two are very closely related). If we have a binary operation with a unit, then any operation that commutes with that operation (and its unit) turns out to be formed by iterating the binary operation. This is an easy generalisation of the Eckmann-Hilton argument. Therefore, once we start applying common identities, we find that we can often reduce the arity down to something palatable.
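Item 1 can be made concrete. A sketch, assuming the standard heap operation $[a,b,c] = ab^{-1}c$ on a group (here the integers mod 7 under addition, an arbitrary choice):

```python
# Sketch: the heap ternary operation [a, b, c] = a * b^{-1} * c on the
# group of integers mod 7 under addition (so b^{-1} is -b).

MOD = 7

def heap(a, b, c):
    return (a - b + c) % MOD

# Para-associativity: [[a,b,c],d,e] = [a,[d,c,b],e] = [a,b,[c,d,e]]
for a, b, c, d, e in [(1, 2, 3, 4, 5), (6, 0, 2, 5, 1)]:
    assert (heap(heap(a, b, c), d, e)
            == heap(a, heap(d, c, b), e)
            == heap(a, b, heap(c, d, e)))

# Picking any element e0 as identity recovers a group: x * y := [x, e0, y]
e0 = 3
def mul(x, y):
    return heap(x, e0, y)

x0 = 5
assert mul(x0, e0) == x0 and mul(e0, x0) == x0
```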
-
In an affine space $A$, the displacement (difference) between two points is a vector, and one can add a vector to a point, but not two points. However these can be replaced by a ternary operation in terms of points alone: the parallelogram rule $\nearrow : A \times A \times A \to A,\,\nearrow(p, a, b) = p+(a-b)$.
You can even add scalar multiplication of the difference into the bargain.
Why would you want to do this? Well affine spaces are more primitive than a vector space -- yet we use a vector space in defining them. To me the more natural approach is to define them without it, and watch the vector space (of displacements) drop out.
-
Perhaps you mean fundamental operations instead of operations. Others have noted that composition, projection, and changing one's point of view allows you to handle operations of higher arity.
I imagine that fundamental operations are usually of such low arity because we prefer simplicity. Doing the maximum or the sum of a tuple of numbers can be achieved by iterating the corresponding binary operation on certain parts of the tuple. Anything more gets uncomfortably complicated.
Having said that, there are examples like multi-linear functions (especially the determinant) that come up in various fields of analysis, not to mention infinitary operations like integration. Even then, we like to break things down into iterates of simpler terms, or compositions thereof.
William DeMeo has been doing many posts in MathOverflow in re universal algebra. He will probably suggest the majority function on the set {0,1}, varieties which have a ternary or 4-place discriminator term, ternary groups, and the like. He may also point to places in the literature where your question has been raised.
Gerhard "Memory Not So Good Lately" Paseman, 2010.12.14
-
You over-estimate my ability to keep up with MO :) I'm just now seeing this one. I like your answer, as well as Jose Figueroa-O'Farrill's. While I appreciate arsmath giving a nod to UA and Mal'cev, I don't know why he/she and others give examples involving terms. The question is clearly about fundamental operations. Ok, I see Todd Trimble has pointed out that some fundamental operations are ternary, but what algebras have a Mal'cev terms (or averages or cross-ratios) as fundamental operations? Maybe there are such algebras, in which case I apologize for my ignorance. – William DeMeo Apr 27 2011 at 9:29
William, if you should reread this thread, let me just say that the example of an affine plane that I mentioned in a comment on arsmath's answer has as its fundamental operation a ternary Mal'cev operation. – Todd Trimble Feb 7 2012 at 1:52
Someone already mentioned determinants. Here is a related $n$-ary operation, the vector product in dimension $n+1$: Fix a basis $b_1,\dots,b_{n+1}$ of $\mathbb R^{n+1}$. To $n$ elements $v_1,\dots,v_n$ of $\mathbb R^{n+1}$ assign the unique vector $v_{n+1}$ that is orthogonal to $v_1,\dots,v_n$, such that $v_1,\dots,v_{n+1}$ is of the same orientation as $b_1,\dots,b_{n+1}$ and such that the length of $v_{n+1}$ is the $n$-dimensional volume of the parallelopiped spanned by $v_1,\dots,v_n$.
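A numerical sketch of this $n$-ary vector product, computed by cofactor expansion along the first row (the example vectors are arbitrary; only orthogonality is checked here):

```python
# Sketch: the n-ary vector product in R^{n+1} via cofactor expansion.
# The k-th component is (-1)^k times the determinant of the input
# vectors with their k-th coordinate deleted, so that the dot product
# of the result with any input vector is a determinant with a repeated
# row, hence zero.
import itertools

def det(m):
    """Determinant by the Leibniz formula (fine for small matrices)."""
    n = len(m)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        sign = 1
        for i in range(n):          # sign of the permutation
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1.0
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def cross(vectors):
    """n vectors in R^{n+1} -> their generalized cross product."""
    n1 = len(vectors) + 1
    result = []
    for k in range(n1):
        minor = [[v[j] for j in range(n1) if j != k] for v in vectors]
        result.append((-1) ** k * det(minor))
    return result

vs = [[1.0, 0.0, 0.0, 2.0], [0.0, 1.0, 1.0, 0.0], [0.0, 0.0, 3.0, 1.0]]
w = cross(vs)
for v in vs:
    assert abs(sum(a * b for a, b in zip(v, w))) < 1e-9
```

For two vectors in $\mathbb{R}^3$ this reduces to the ordinary cross product.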
-
I'm not sure to understand the question: don't you consider differential geometry, with all its multilinear algebra, as a good source of examples? e.g. the volume form on a manifold is a quite important and natural multilinear operator.
-
Carlos is asking for examples of algebras with basic (or fundamental) operations of higher arity (in the specific universal algebra sense of the term "basic operation"). For example, in a (multiplicative) group $\langle G, \cdot, ^{-1}, 1 \rangle$ the basic operations are the binary multiplication, $\cdot$, the unary inverse operation, $^{-1}$, and the nullary (constant) operation, 1. – William DeMeo Apr 27 2011 at 11:40
In higher order Fourier analysis, there are d-dimensional parallelopiped structures which can be viewed as a $2^d-1$-ary relation of the form "given all but one vertex of a parallelopiped as input, return the final vertex as output". (Here "parallelopiped" should be interpreted in a suitably abstract sense, as a family of $2^d$-tuples obeying a certain number of axioms.) This point of view is taken for instance in this paper of Camarena and Szegedy, building on the earlier work of Host and Kra. In the d=2 case, this ternary operation is essentially equivalent to an additive operation once one fixes an origin (in which case the ternary operation becomes $(x,y,z) \mapsto x-y+z$). For d=3, these operations are governed by 2-step nilpotent groups, and more generally d-dimensional parallelopiped structures are governed by d-1-step nilpotent groups.
These parallelopiped structures can be viewed as the abstract foundation of the Gowers uniformity norms, and they also share some formal resemblance to cubic complexes, which are constructs that appear mostly in algebraic topology and are discussed for instance here. There is a simplicial version of the latter concept known as a Kan complex, but I do not know the details of how they are used. But I think Kan complexes come equipped with high arity relations of the general form "given data for all but one face of a simplex, supply the data for the remaining face (and also for the interior) of that simplex". Among other things, such structures can be used to define n-groups; see for instance my blog post on this topic.
-
2
Technically, Kan complexes merely assert the existence of horn fillers, without specifying them. However, there is a notion of "algebraic Kan complex" where one has chosen horn fillers; this was discussed at the n-Category Cafe a while back, and there is some information about them at the nLab: ncatlab.org/nlab/show/algebraic+Kan+complex – Todd Trimble Feb 6 2012 at 20:30
In knot theory, splicing generally has more than just one or two inputs.
Splicing with one input generates things like Whitehead doubles and cabling.
There are many $n$-ary operations. The first one noticed (historically) is the connect-sum. The issue one might have with the $n$-ary connect-sum is that it's generated by the 2-ary connect-sum, though there are examples of $2$-ary splices that are not connect-sums.
There is a countably infinite collection of $n$-ary splices for any $n \geq 1$; moreover, there are still countably infinitely many primitive ones for any $n \geq 1$, where primitive means "can't be expressed in terms of $j$-ary operations for $j < n$". These primitive splicing operations turn out to be specified (uniquely) by hyperbolic $(n+1)$-component links in the 3-sphere $L \subset S^3$, $L=L_0 \sqcup L_1 \sqcup \cdots \sqcup L_n$ such that the sublink $L_1 \sqcup \cdots \sqcup L_n$ is the trivial link. The hyperbolicity means $S^3 \setminus L$ has a complete hyperbolic structure of finite volume.
Splicing can be put in an operadic framework and this is the topic of one of my papers. So you can turn it into a purely algebraic formalism as well, by taking the homology of the space of all knots and the splicing operad, respectively.
It's not clear to me there's any reason for the seeming prevalence of 2-ary operations in mathematics. It appears to be more of an accident -- two things interacting is simpler, easier to contemplate.
-
I am surprised that nobody came up with median algebras. That is, algebras that are equipped with a single ternary (fundamental) operation (see http://en.wikipedia.org/wiki/Median_algebra for a definition and some references). They generalize distributive lattices since the median function $(x \vee y) \wedge (y \vee z) \wedge (z \vee x)$ of any distributive lattice gives rise to a median algebra. Although median algebras still have many of the nice properties of distributive lattices, the concept is more subtle and the category of median algebras is not equivalent to that of distributive lattices. So the idea of having this single ternary fundamental operation really gives you something new and, at least in my opinion, very interesting to look at.
To support my case: one might also be interested in median algebras since they have a beautiful duality with so-called Isbell spaces, first described by, you guessed correctly, John Isbell (the reference is given in the Wikipedia article mentioned above). An Isbell space is a bounded Priestley space that is also equipped with a certain (unary) complement operation.
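The median term is easy to play with. A sketch on a small distributive lattice of sets (the universe below is an arbitrary choice), checking the majority and symmetry identities of a median algebra:

```python
# Sketch: the median of three sets in the distributive lattice of subsets
# under union and intersection:
#   med(x, y, z) = (x | y) & (y | z) & (z | x)
from itertools import product

def med(x, y, z):
    return (x | y) & (y | z) & (z | x)

universe = {0, 1, 2}
subsets = [frozenset(s) for s in ([], [0], [1], [0, 1], [0, 1, 2])]

for x, y, z in product(subsets, repeat=3):
    assert med(x, x, y) == x                              # majority identity
    assert med(x, y, z) == med(z, x, y) == med(y, x, z)   # full symmetry
```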
-
I'd have thought it was just notational. When the arity is > 2, we usually end up coding the operands into a vector or tensor or whatever. The determinant mentioned above is an obvious example of that: a unary operation on a matrix, or a tensor acting on n vectors, depending on how we look at it.
-
Operations of arity greater than two pop up in the algebraic approach to CSP (the constraint satisfaction problem). See e.g. http://www.ams.org/mathscinet-getitem?mr=2470592 or http://www.ams.org/mathscinet-getitem?mr=2137072
-
http://math.stackexchange.com/questions/tagged/general-topology+algebraic-geometry
# Tagged Questions
1answer
83 views
### Niceness of the projection of a closed subscheme of affine space?
Let $k$ be an algebraically closed field, and suppose $C\subseteq \mathbb{A}^{n+m}_k$ is a closed subscheme. What can we say about the image under the projection \$\pi: \mathbb{A}^{n+m}_k\rightarrow ...
1answer
47 views
### studying the topology of a real algebraic set
Let $f_1,\ldots,f_n \in \mathbb{R}[x_1,\ldots,x_m]$ be polynomials with real coefficients and let $I$ be the ideal that they generate. Denote by $V_{\mathbb{R}}(I)$ the corresponding real variety, ...
2answers
55 views
### Looking for a (nonlinear) map from n-dimensional cube to an n-dimensional simplex
I am looking for a (nonlinear) map from n-dimensional cube to an n-dimensional simplex; to make it simple, assume the following figure which is showing a sample transformation for the case when $n=2$. ...
1answer
47 views
### topological properties of an algebraic set in the metric topology
Is there any good strategy of examining whether a given algebraic set is closed or dense in the metric topology of Euclidean space? For example suppose we are in $\mathbb{R}^3$ and consider the set ...
1answer
251 views
### $\mathbb A^n(k)$ and $\mathbb A^n(k)\setminus \{0\}$ are not homeomorphic
Let $k$ be an algebraic closed field. Why $\mathbb A^n(k)$ and $\mathbb A^n(k)\setminus\{0\}$ (for $n>1$) are not homeomorphic with respect to the Zariski topology?
1answer
27 views
### Lie bracket in local coordinates
Can you help with solving this? I have a manifold exam and I am working on it, but I have a problem with the Lie bracket. I am including what I did.
0answers
70 views
### Projective closure in the Zariski and Euclidean topologies
In Smith's An Invitation to Algebraic Geometry, following the definition of the projective closure of an affine variety, it was remarked that "the closure may be computed in either the Zariski ...
1answer
56 views
### Looking for a “prime-ish” family of subsets
Is there a nontrivial (what I mean is below) example of a compact Hausdorff space $X$ and a family $\mathscr{F}$ of subsets of $X$ with the following pair of properties? $\mathscr{F}$ is ...
0answers
78 views
### Relation between complex and real sphere
I want to understand relation between complex and real spheres. How to show? $S^1(\mathbb{C}) \approx \mathbb{R} \times S^1$ $S^3(\mathbb{C}) \approx \mathbb{R} \times S^3$ $\approx$ means homotopy ...
0answers
58 views
### Products of sites
Does the category of sites (i.e. small categories equipped with a Grothendieck topology) has products? Is there a connection to the product of locales (as discussed in Johnstone's Stone spaces, ...
1answer
58 views
### Open morphisms are dominant?
This seems very elementary but I haven't been able to prove it: If $f : X \to Y$ is an open map of irreducible topological spaces, then it is dominant (maps generic points to generic points). It ...
1answer
47 views
### Join and Zariski closed sets
A set in $\mathbb{C}^n$ is called Zariski-closed if it can be written as the set of zeroes of some set of polynomial equations V(f_1,...,f_m) = \left\{ z \in \mathbb{C}^n \mid f_1(z)=...=f_m(z)=0 ...
1answer
80 views
### A new(?) partial order on the set of continuous maps
Let $X,Y$ be topological spaces. Define a partial order on $\hom(Y,X)$ as follows: $f \leq g$ if $f^{-1}(U) \subseteq g^{-1}(U)$ for all open subsets $U \subseteq X$. Equivalently, $f(y)$ is a ...
1answer
189 views
### Equivalent definitions of Noetherian topological space
It is well known that we have many different definitions of noetherianity for rings. Namely, given a ring $R$, the following are equivalent: 1) every ideal of $R$ is finitely generated. 2) $R$ ...
2answers
74 views
### Fibers of the projection of a Zariski dense set are dense?
We will work over an infinite field $\Bbbk$. Let $U\subseteq\Bbbk^m\times\Bbbk^n=\Bbbk^{m+n}$ be a Zariski dense subset and for all $x\in\Bbbk^m$, consider U_x := \{\, y \in \Bbbk^n \mid (x,y)\in U ...
1answer
61 views
### Irreducibility preserved under étale maps?
I remember hearing about this statement once, but cannot remember where or when. If it is true i could make good use of it. Let $\pi: X \rightarrow Y$ be an étale map of (irreducible) algebraic ...
1answer
124 views
### Zariski topology analogue for non-algebraically closed fields
Let $k$ be a field and $\bar{k}$ its algebraic closure. The set $X$ of $n$-tuples over $\bar{k}$ can be given the Zariski topology in which the closed sets are the sets of zeros of sets of polynomials ...
1answer
64 views
### Can a closed subset of an affine scheme have empty interior?
I have an inclusion of closed subsets $V(J) \subset V(I)$ in an affine scheme $Spec(R)$ with the property that $V(I) = V(J) \cup \partial V(I)$. I would like to conclude that $V(J)=V(I)$. (Here ...
1answer
89 views
### Metric tensor of complex numbers & Hamiltonian Mechanics
The Euclidean $\mathbb{R}^2$ geometric space can be mapped onto $\mathbb{C}$. In other words I see it like this \vec{v} = x\vec{x}+y\vec{y} = x\vec{1}+y\vec{i}= \begin{bmatrix}x \\y\end{bmatrix} ...
1answer
182 views
### Fundamental group of multiplicative group in Zariski topology
What is the fundamental group of the multiplicative group of the complex numbers $\mathbb{G}_m(\mathbb{C})$ with respect to the Zariski topology. More precisely, what are the homotopy classes of ...
1answer
115 views
### Decomposition of Noetherian space into irreducible subsets
I am trying to relate two (maybe not) different decompositions of a noetherian topological space into irreducible subsets, given in Ravi Vakil's notes on algebraic geometry. Exercise 4.6.N : Let ...
2answers
129 views
### Sheafs and closed immersion
Let $f:X \rightarrow Y$ be a continuous map of topological spaces, such that it is closed immersion. Let $\mathfrak{F}$ and $\mathfrak{G}$ be sheafs on $X$ and $Y$ respectively. How to show, that ...
2answers
510 views
### What is algebraic geometry?
I am a second year physics undergrad, looking to explore some areas of pure mathematics. A word that often pops up on the internet is algebraic geometry. What is this algebraic geometry exactly? ...
2answers
78 views
### About the proof of the existence of a decomposition of subset of $\mathbb{A}^n$
The following is the proposition 7.4.11 in "Advanced Topics in Linear Algebra" by Kevin O'meara et.al. Proposition 7.4.11 Every subset $X$ of $\mathbb{A}^n$ has a decomposition \$X = X_1 ...
2answers
62 views
### Closed sets of a curve
I am guessing that the closed sets of a curve (i.e. an algebraic variety of dimension 1) are finite . How would one go about proving this? Is it trivial?! Thanks for any help.
2answers
57 views
### Equivalence conditions on induced Zariski topology
This is a homework problem and I am looking for clarification of some of my doubts. (not solutions) Let $X\subset A^n$ or $P^n$, where $X$ is a non-empty algebraic set. Open sets of $X$ are given ...
1answer
78 views
### Is this a surjection of rings? What am I doing wrong?
Let $Z\newcommand{\df}{:=}\df\newcommand{\C}{\mathbb C}\C$ and $T\df\C^\times$. Then, the coordinate ring of $Z$ is $\C[z]$ and that of $T$ is $\C[t,t^{-1}]$. Consider another copy of $T$ with ...
2answers
67 views
### What do I need to know to understand the completion of the field of rational functions of a non-singular projective curve?
So the title gives the jist of my question. Specifically, let $X$ be a non-singular projective curve, $P$ a point on $X$, $v_P$ the discrete valuation associated to the ring $\mathcal{O}_P$. Then I ...
2answers
239 views
### Irreducibility of an Affine Variety and its Projective Closure
Volume I of Shaferevich's Basic Algebraic Geometry has the following as an exercise: Show that the affine variety $U$ is irreducible if and only if its closure $\bar U$ in a projective space is ...
0answers
71 views
### Continuous choice of basis for subspaces
Consider the flag variety (or flag manifold, depending on who you are) $V=\mathrm {Fl} (3,\mathbb C)$ of complete flags of subspaces of $\mathbb C^3$. That is, an element of M is a tuple (L , P) ...
0answers
61 views
### Euler characteristic of structure sheaf of symmetric product
I recently asked about calculating the Euler characteristic of the symmetric square of a space. There we determined that for a sufficiently well-behaved space $X$ there is a formula \chi(X \times ...
1answer
106 views
### Euler characteristic of a quotient space
I have a question relating to an answer on MathOverflow.net. The cited answer says: Let $X$ be a topological space for which [the Euler characteristic] $\chi(X)$ is defined and behaves in the ...
3answers
521 views
### Zariski Open Sets are Dense?
Is it true than any nonempty open set is dense in the Zariski topology on $\mathbb{A}^n$? I'm pretty sure it is, but I can't think of a proof! Could someone possibly point me in the right direction? ...
0answers
110 views
### Computing the hypercohomology of a complex of acyclic sheaves
Let $K^{\bullet}$ be a cochain complex of sheaves of finite-dimensional vector spaces, I wanted to compute $\mathbb{H}^{\bullet}(X,K^{\bullet})$ = the hypercohomology of the complex $K^{\bullet}$, the ...
2answers
379 views
### The prime spectrum of a Dedekind Domain
Let $A$ be a Dedekind Domain, let $X = \operatorname{Spec}(A)$. Are all open sets in $X$ basic open sets? Thinking about the Zariski topology (in the classical sense) of a non-singular affine curve, ...
0answers
56 views
### Recovering the topology of an affine scheme from the specialization preorder
Let $A$ be a commutative ring. The specialization preorder on $\mathrm{Spec}(R)$ is given by \$\mathfrak{p} \prec \mathfrak{q} \Leftrightarrow \mathfrak{p} \in \overline{\{\mathfrak{q}\}} ...
5answers
858 views
### Why Zariski topology?
Why in algebraic geometry we usually consider the Zariski topology on $\mathbb A^n_k$? Ultimately it seems a not very interesting topology, infact the open sets are very large and it doesn't satisfy ...
1answer
377 views
### An interesting topological space with $4$ elements
There is an interesting topological space $X$ with just four elements $\eta,\eta',x,x'$ whose nontrivial open subsets are $\{\eta\},\{\eta'\},\{\eta,\eta'\}, \{\eta,x,\eta'\}, \{\eta,x',\eta'\}$. This ...
0answers
87 views
### Algebraic varieties and Hausdorff spaces
Let $(X,\mathcal O_X)$ be an algebraic prevariety, by definition, it is an algebraic variety iff the diagonal $\Delta(X)$ is closed in the product $X\times X$. The above property is equivalent to the ...
1answer
270 views
### How do mathematicians think about high dimensional geometry?
Many ideas and algorithms come from imagining points on 2d and 3d spaces. Be it in function analysis, machine learning, pattern matching and many more. How do mathematicians think about higher ...
2answers
122 views
### The notion of a germ in singularity theory
I quote from my lecture: Let $X$ be a topological space (think of $X=\mathbb{C}^n$ with the classical topology), $p\in X$, $A,B\subseteq X$. Then $A\sim B$ if there exists an open subset ...
0answers
55 views
### Continuity of a map of a topological space to a pro-topological space
Let $(X_i)$ be a projective system of topological spaces. Let $X$ be the projective limit of $X_i$. Let $G$ be a topological space. What does it mean for $G\to X$ to be continuous? My guess is that ...
2answers
271 views
### Zariski Topology question
Could you please give a hint how to show that the zariski topology on $\mathbb{A}^2$ is not the product topology on $\mathbb{A}^1\times\mathbb{A}^1$
3answers
178 views
### Zariski topology in the complex plane: an example
I want to find the closure under the zariski topology, of this set $\left\{ {\left( {x,y} \right) \in {\Bbb C}^2 ;\left| x \right| + \left| y \right| = 1} \right\}$ I have no idea what I can do
5answers
537 views
### Every subspace of a compact space is compact?
It seems as if every subspace of a compact topological space (equipped with its relative topology) had ought to be compact as well. Is this true in general? And in particular, I want to use the fact ...
2answers
215 views
### Explaining the motivation behind two different definitions of a generic point
This question is primarily regarding the definition of a generic point of a topological space that I came across in Qing Liu's Algebraic Geometry and Arithmetic Curves. First I will give the ...
3answers
179 views
### $\operatorname{Spec} (A)$ as a topological space satisfying the $T_0$ axiom
I have been spending a few days now proving the last bit of the following problem of Atiyah Macdonald: Prove that $X = \operatorname{Spec}(A)$ as a topological space with the Zariski Topology ...
1answer
132 views
### Compact Sets in Projective Space
Consider the projective space ${\mathbb P}^{n}_{k}$ with field $k$. We can naturally give this the Zariski topology. Question: What are the (proper) compact sets in this space? Motivation: I ...
0answers
159 views
### homework problem about the projective real space
Sorry to ask this problem, but I am very confused by it :/ . My course is on topology; the teacher said that we only need the definition of the quotient topology and of P_R^2 ...
1answer
91 views
### Curves on the projective plane
I have two little questions, I'm learning this, and I'm not accustomed yet )=. The questions are so simple. First define on $$R^3 - \left\{ {\left( {0,0,0} \right)} \right\}$$ the topology given ...
http://math.stackexchange.com/questions/225591/use-lagrange-multipliers-to-find-maximum-and-minimum-values
# Use Lagrange multipliers to find maximum and minimum values
I am having trouble understanding how to solve the problem below. Can anyone show me how to solve this? Here is the problem definition:
"Use Lagrange multipliers to find the maximum and minimum values of the function $f(x,y)=e^{xy}$ subject to the constraint $x^3+y^3=16$."
This is problem number 14.8.6 in the seventh edition of Stewart Calculus.
My work so far:
Use $\nabla f=\lambda\nabla g$ and $g(x,y)=16$
$f_x=ye^{xy}=\lambda g_x=\lambda 3x^2$
$f_y=xe^{xy}=\lambda g_y=\lambda 3y^2$
Solving these equations gives me $x=1, y=1, \lambda=\frac{e}{3}$
However, I am confused because $f(1,1)=e$, but $g(1,1)=2\ne 16$
How do I finish this problem correctly?
I think you might have an error in your calculations; I found $x=y=2$ and $\lambda=e^4/6$. – user12477 Oct 30 '12 at 22:48
@user12477 Thank you. I see how $x=y=2$ and $\lambda = \frac{e^4}{6}$ plug into the equation, but I do not see how to show the work to get those numbers without simply guessing them. Also, that is just one point. I have to find both maximum and minimum. Do you have any further suggestions? – CodeMed Oct 30 '12 at 22:55
## 1 Answer
Start with the equations that you have derived: \begin{eqnarray*} ye^{xy}&=&3\lambda x^2,\\ xe^{xy}&=&3\lambda y^2,\\ x^3+y^3&=&16. \end{eqnarray*} As a first step, show that none of $\lambda, x$ or $y$ can be zero. (If one of them is zero, then the first two equations show that all three must be zero, contradicting the third equation.) This is useful as we now know that we can divide by these terms at will.
Now comparing the first and second equations gives $x^3=y^3$, and since both are real, we get $x=y$. The third equation then gives $x=y=2$, and the first or second yields the value of $\lambda$. We find $f=e^4$ at this point, and it must be a maximum.
As regards the minimum, recall that the Lagrange multiplier method identifies the possible location of max/min points IF they exist. I don't think your example has a minimum: By taking $x^3 = N^3$ and $y^3=16-N^3$, where $N$ is a large positive number, the constraint $x^3+y^3=16$ is satisfied. But $xy\sim-N^2$ can be made arbitrarily large and negative, so that $e^{xy}$ can be made as close as we like to $0$, but of course would never equal zero. So we can say that the infimum $$\inf\{e^{xy}:x^3+y^3=16\}=0,$$ but the minimum $$\min\{e^{xy}:x^3+y^3=16\}$$ does not exist: for any candidate minimum at $(x_0,y_0)$, we can always find $(x_1,y_1)$ with $0<f(x_1,y_1)<f(x_0,y_0)$.
Another approach is to eliminate $y$ and to treat this as a single variable max/min problem for $h(x)=\exp[x(16-x^3)^{1/3}]$. This gives another way of understanding why there is no minimum point of the function.
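This single-variable reformulation is easy to probe numerically. The following sketch (mine, not part of the original thread) scans $h(x)=\exp[x(16-x^3)^{1/3}]$ on a grid, recovering the maximum $e^4$ at $x=2$ and showing that values far along the constraint get arbitrarily close to $0$:

```python
import math

def cbrt(v):
    # real cube root, valid for negative arguments as well
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def h(x):
    # f(x, y) = exp(x*y) restricted to the constraint x^3 + y^3 = 16,
    # i.e. with y = (16 - x^3)^(1/3) eliminated
    return math.exp(x * cbrt(16.0 - x ** 3))

# scan the constraint curve near the critical point
vals = [(h(i / 1000.0), i / 1000.0) for i in range(-5000, 5001)]
best, x_best = max(vals)
print(x_best, best)   # maximum at x = 2 with value e^4 ~ 54.598
print(h(50.0))        # far along the constraint the values approach 0
```

The real cube root has to be taken with care for negative arguments, hence the `cbrt` helper.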
Thank you very much. You state that $f(2,2)=e^4$ must be a maximum. Can you explain why? I think I understand the rest of your logic. +1 for helping me out. – CodeMed Oct 30 '12 at 23:42
– user12477 Oct 30 '12 at 23:53
http://math.stackexchange.com/questions/182130/complement-of-co-dense-set
# Complement of co-dense set.
Asaf's argument : (ZF) If $\mathbb{R}^k$ is a countable union of closed sets, then at least one has a nonempty interior
Let $X$ be a separable complete metric space. Let $D$ be a countable dense subset. Let $F$ be a closed subset with empty interior.
Here, how do I show that $D\setminus F$ is dense?
I've been trying to figure it out for a day and I'm still stuck.
The link actually points to Brian's answer... :-) – Asaf Karagila Aug 13 '12 at 18:43
## 4 Answers
Let $U$ be a non-empty open subset of $X$. Since $F$ is closed and has an empty interior we have that $U\setminus F$ is non-empty and open.
By density of $D$ we know that $D\cap(U\setminus F)=(D\setminus F)\cap U = (D\cap U)\setminus F$ is non-empty, therefore $D\setminus F$ is dense.
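As a concrete illustration of this argument (my own sketch, not part of the thread): take $X=[0,1]$, let $D$ be the dyadic rationals and let $F$ be a finite set, which is closed with empty interior. Around any point, any small ball still meets $D\setminus F$:

```python
from fractions import Fraction

# F: a finite subset of [0,1] -- closed with empty interior
F = {Fraction(1, 2), Fraction(1, 3), Fraction(2, 3)}

def dense_point_avoiding_F(x, eps):
    """Return a dyadic rational (a point of the countable dense set D)
    within eps of x that avoids F, witnessing that D minus F is dense."""
    k = 1
    while True:
        q = 2 ** k
        i0 = int(x * q)
        for i in (i0 - 1, i0, i0 + 1):
            d = Fraction(i, q)
            if abs(d - Fraction(x)) < eps and d not in F:
                return d
        k += 1

witnesses = {x: dense_point_avoiding_F(x, 1e-6) for x in (0.5, 1 / 3, 0.123)}
```

The point $0.5$ is itself in $F$, yet a dyadic neighbour within $10^{-6}$ avoids $F$, exactly as in the proof above.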
You surely mean: "Let $U$ be a non-empty open subset ..." :-) – celtschk Aug 13 '12 at 17:07
@celtschk: What did I write? :-) – Asaf Karagila Aug 13 '12 at 17:07
@Asaf: is it really an obvious thing that can be seen just by thought? I even had to take some time to check this. – Katlus Aug 13 '12 at 17:28
@Katlus: Say what? I don't understand your last comment. – Asaf Karagila Aug 13 '12 at 17:43
Choose an arbitrary nonempty open set $U$. Then $U$ is also Polish, $D\cap U$ is countable dense in $U$, and $F\cap U$ is a closed subset of $U$ with empty interior. So it is enough to show that $(D\cap U)\setminus F$ is nonempty. But if a closed set contains a dense set, it is the entire space, which is a contradiction.
Let $D$ be a countable dense subset. Let $F$ be a closed subset of $X$ such that $\text{Int}(F) = \emptyset$. Let $U$ be any nonempty open set. $U \cap (X - F)$ is a nonempty open set because if $U \cap (X - F) = \emptyset$, then $U \subset F$, which contradicts $\text{Int}(F) = \emptyset$. Since $U \cap (X - F)$ is nonempty open and $D$ is dense, there exists $d \in D$ such that $d \in U \cap (X - F)$. Hence there exists a $d \in D - F$ such that $d \in U$. $U$ is arbitrary so $D - F$ is dense.
Regardless of the topological space, $$\overline{ D \setminus F } = \overline{ D \cap ( X \setminus F ) } = \overline{ \overline{D} \cap ( X \setminus F ) } = \overline{ X \cap ( X \setminus F ) } = \overline{ X \setminus F } = X \setminus \mathrm{Int} ( F ) = X \setminus \emptyset = X.$$ (I use the result here.)
Really nice. Thank you – Katlus Aug 13 '12 at 17:49
http://mathoverflow.net/questions/93748/distribution-of-locations-after-performing-truncated-levy-walk
## distribution of locations after performing truncated lévy walk
Let $k$ be the total number of nodes distributed on a field. Each node performs a truncated Lévy walk (TLW) with the following characteristics:
• Motion speed: Motion speed remains constant, in our case it is the motion of pedestrians.
• Time interval of each displacement: The time interval of each displacement, $t_{fi}$ is directly related to the velocity and length of the displacement. $\xi_i$ is the length of the displacement and v=1 and $t_{fi} = \xi_i$.
• Direction of the displacement: the direction is uniformly distributed, with direction angle $\theta_i \in [0, 2\pi)$.
• Length of the displacement: In Truncated Lévy Walk, the length of the displacement is assumed to have a Levy distribution. For $\alpha \le 2$, the density function $f_l(x)$ can be approximated by $\frac{1}{|x|^{1+\alpha}}$. Since it is a truncated Levy Walk, the range of the displacement length does not vary between ($- \infty, + \infty$) but between ($0, \tau_\xi$). $\tau_\xi$ is the maximum allowed displacement length. The value of $\alpha$ is assumed to be between 0.7 and 1.7
• Pause interval: TLW includes a pause time, that takes place after the end of each displacement. The pause time $T_p$ has a levy distribution, which, similarly to the displacement length, can be approximated to $\frac{1}{T_p^{1 + \gamma}}$ with ${T_p} \in (0, \tau_p)$. $\tau_p$ is the maximum allowed pause interval. $\gamma$ is assumed to be equal to 0.7
• Complete displacement time: This time, $\Delta t_s$ is the sum of the displacement time and the pause time.
What I am trying to find is the resulting distribution of the locations of the $k$ nodes at time $t$, after each node has performed a truncated Lévy walk for a time interval equal to $t$. Does anyone have any recommendation as to how I can find the answer?
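There may be no closed form for this distribution, so one practical route is Monte Carlo: simulate many independent nodes and inspect the empirical distribution of positions at time $t$. Below is my own sketch with assumed parameter values; note that the stated density $\propto x^{-(1+\alpha)}$ is not integrable at $0$, so a small lower cutoff `x0` is added as an extra assumption:

```python
import math
import random

def sample_truncated_power_law(alpha, x0, tau, rng):
    # inverse-CDF sample of density proportional to x^(-(1+alpha)) on [x0, tau];
    # the lower cutoff x0 > 0 is an added assumption (the density is not
    # integrable at 0 without it)
    u = rng.random()
    a, b = x0 ** (-alpha), tau ** (-alpha)
    return (a - u * (a - b)) ** (-1.0 / alpha)

def tlw_position(t, alpha=1.2, gamma=0.7, x0=0.1, tau_xi=100.0, tau_p=10.0, rng=None):
    """Planar position of one node after a truncated Levy walk of duration t
    (unit speed, so flight time equals flight length; flights alternate with pauses)."""
    rng = rng or random.Random()
    x = y = clock = 0.0
    while True:
        length = sample_truncated_power_law(alpha, x0, tau_xi, rng)
        theta = rng.uniform(0.0, 2.0 * math.pi)   # uniform direction
        step = min(length, t - clock)             # cut the last flight at time t
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        clock += step
        if clock >= t:
            return x, y
        clock += sample_truncated_power_law(gamma, x0, tau_p, rng)  # pause
        if clock >= t:
            return x, y

# empirical snapshot of k = 1000 nodes at time t = 50
positions = [tlw_position(50.0, rng=random.Random(seed)) for seed in range(1000)]
```

Since the speed is $1$, no node can be farther than $t$ from the origin, which gives a cheap sanity check on the simulation.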
http://mathhelpforum.com/calculus/140328-limit-factorial-alternating-series.html
# Thread:
1. ## Limit of factorial in Alternating Series
I need to find out if the Alternating Series is convergent or divergent by the Alternating Series Test.
One condition is to check if the limit is zero.
I have problems finding the limit, since I have a term (2k-1)! in the denominator and I do not know how to handle this...
$$\sum^{\infty}_{k = 1} (-1)^{k-1}\frac{k!}{(2k-1)!}$$
$$\lim_{k \to \infty}\frac{k!}{(2k-1)!}$$
How can I write the fraction, so that things cancel out?
$$\frac{k!}{(2k-1)!} = \;?$$
For the following example I was able to write one term:
$$\frac{k!}{(k+1)!} = \frac{k(k-1)(k-2)(k-3)(k-4)\cdots(2)(1)}{(k+1)(k)(k-1)(k-2)(k-3)(k-4)\cdots(2)(1)} = \frac{1}{k+1}$$
Can someone please let me know how to do this with
$$\frac{k!}{(2k-1)!}$$
Thanks!
2. Originally Posted by DBA
You can do the same here and get $\frac{k!}{(2k-1)!}=\frac{1}{(k+1)(k+2)\cdots(2k-1)}$ as soon as $k\leq 2k-1$, i.e. $k\geq 1$.
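A quick exact check of this cancellation and of the resulting limit, using rational arithmetic (my own sketch, not part of the thread):

```python
from fractions import Fraction
from math import factorial, prod

def ratio(k):
    # k! / (2k-1)! as an exact rational
    return Fraction(factorial(k), factorial(2 * k - 1))

def telescoped(k):
    # 1 / ((k+1)(k+2)...(2k-1)); for k = 1 the product is empty, giving 1
    return Fraction(1, prod(range(k + 1, 2 * k), start=1))

# the two expressions agree exactly for every k >= 1
assert all(ratio(k) == telescoped(k) for k in range(1, 30))
print(float(ratio(20)))   # the terms shrink to 0, so the limit condition of the AST holds
```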
http://unapologetic.wordpress.com/2009/05/19/orthogonal-complementation-is-a-galois-connection/?like=1&source=post_flair&_wpnonce=d91e0e6d9a
# The Unapologetic Mathematician
## Orthogonal Complementation is a Galois Connection
We now know how to take orthogonal complements of subspaces in an inner product space. It turns out that this process (and itself again) forms an antitone Galois connection.
Let’s just quickly verify the condition. We need to show that if $U$ and $W$ are subspaces of an inner-product space $V$, then $U\subseteq W^\perp$ if and only if $W\subseteq U^\perp$. Clearly the symmetry of the situation shows us that we only need to check one direction. So if $U\subseteq W^\perp$, we know that $W^{\perp\perp}\subseteq U$, and also that $W\subseteq W^{\perp\perp}$. And thus we see that $W\subseteq U^\perp$.
So what does this tell us? First of all, it gives us a closure operator — the double orthogonal complement. It also gives a sense of a “closed” subspace — we say that $U$ is closed if $U^{\perp\perp}=U$.
But didn't we know that $U^{\perp\perp}=U$? No, that only held for finite-dimensional vector spaces. This now holds for all vector spaces. So if we have an infinite-dimensional vector space, its lattice of subspaces may not be orthocomplemented. But its lattice of closed subspaces will be! So if we want to use an infinite-dimensional vector space to build up some analogue of classical logic, we might be able to make it work after all.
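In finite dimensions the connection is easy to experiment with numerically. The sketch below (my own illustration, using real column spans in $\mathbb{R}^6$; it assumes NumPy is available) checks the defining implication, $U\subseteq W^\perp$ forces $W\subseteq U^\perp$, and the fact that every finite-dimensional subspace is closed under the double complement:

```python
import numpy as np

def complement(B, n, tol=1e-10):
    """Orthonormal basis (as columns) of the orthogonal complement in R^n
    of the column span of the n-by-k matrix B."""
    U, s, _ = np.linalg.svd(B, full_matrices=True)
    rank = int(np.sum(s > tol))
    return U[:, rank:]          # left singular vectors beyond the rank

def contained_in(A, B, tol=1e-10):
    # span(A) is inside span(B) iff orthogonal projection onto span(B) fixes A
    P = B @ np.linalg.pinv(B)
    return np.allclose(P @ A, A, atol=tol)

rng = np.random.default_rng(0)
n = 6
W = rng.standard_normal((n, 3))
U_sub = complement(W, n)[:, :2]                  # a subspace chosen inside W-perp
galois = contained_in(W, complement(U_sub, n))   # then W lies in U-perp

U_rand = rng.standard_normal((n, 2))
UPP = complement(complement(U_rand, n), n)       # the closure U-perp-perp
closed = contained_in(U_rand, UPP) and contained_in(UPP, U_rand)
print(galois, closed)
```

For a genuinely infinite-dimensional example the double complement can be strictly larger, which is exactly the point of the post.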
Posted by John Armstrong | Algebra, Linear Algebra
http://mathoverflow.net/questions/15493/morphism-closed-fibres-proper-proper
## morphism closed + fibres proper => proper?
Is a closed morphism with proper fibres proper?
## 3 Answers
The answer is no. Consider an integral nodal curve $Y$ over an algebraically closed field, normalize the node and remove one of the two points lying over the node. Then you get a morphism $f : X\to Y$ which is bijective (hence a homeomorphism), separated and of finite type, and the fibers are just (even reduced) points. But $f$ is not proper (otherwise it would be finite and birational, hence would coincide with the normalization map).
In the positive direction, you can look at EGA, IV.15.7.10.
[Add] There is an elementary way to see that $f$ is not proper just using the definition. Let $Y'\to Y$ be the normalization of $Y$. So $X$ is $Y'$ minus one closed point $y_0$. It is enough to show that the base change of $f$ to $X\times Y' \to Y \times Y'$ is not closed. Consider the closed subset $$\Delta=\left\lbrace (x, x) \mid x\in X \right\rbrace \subset X\times Y'.$$ Its image by $f_{Y'} : X\times Y' \to Y\times Y'$ is $\left\lbrace (f(x), x) \mid x\in X\right\rbrace$ which is the graph of $Y'\to Y$ minus one point $(f(y_0), y_0)$. So $f$ is not universally closed, thus not proper.
Any surjective morphism between two curves is closed and has proper fibres. Obviously not all of them are proper.
You are absolutely right. Even though this has nothing to do with the original question, the example with normalization just has some more nice properties (birational, unramified, universally injective...). – Qing Liu Nov 23 2010 at 20:46
@Qing Liu, absolutely. I did not mention this to contrast it to your solution, just to point out that the question is pretty far from having a whim of a chance to be true. Your solution points out that even asking a lot more would not be enough. Cheers. – Sándor Kovács Nov 23 2010 at 21:04
“The answer is no. Consider an integral nodal curve Y over an algebraically closed field, normalize the node and remove one of the two points lying over the node. Then you get a morphisme f:X→Y which is bijective (hence homeomorphic), separated and of finite type, and the fibers are just (even reduced) points. But f is not proper (otherwise it would be finite and birational hence coincides with the normalization map). ”
I am afraid that $f$ is not closed. We can see it as follows: choose a neighborhood $U$ of the pre-image of the node (the left one of the two points); then $A=X\setminus U$ is closed, but the image of $A$ is not closed, since $f(A)$ doesn't contain the node but contains a branch of $Y$ around the node.
@Xin-L: this is meant to be closed in the Zariski topology so in your example $A$ is a finite set and then so is its image, hence closed. You seem to be working in the Euclidean topology. – Sándor Kovács Nov 24 2010 at 5:16
http://mathoverflow.net/questions/31458/on-linear-independence-of-exponentials/108547
## Problem.
Let $\{\lambda_n\}_{n\in\mathbb N}$ be a sequence of complex numbers . Let's call a family of exponential functions $\{\exp (\lambda_n s)\}_{n\in\mathbb N}$ $F$-independent (where $F$ is either $\mathbb C$ or $\mathbb R$) iff whenever the series with complex coefficients
$$f(s)=\sum\limits_{n=1}^{\infty}a_n e^{\lambda_n s},\qquad s\in F,$$ converges to $f(s)\equiv 0$ uniformly on every compact subset of $F$, we have that $a_n=0$ for all $n\in\mathbb N$.
Question. Assume that a sequence $\{\exp (\lambda_n s)\}_{n\in\mathbb N}$ is $\mathbb C$-independent. Is it $\mathbb R$-independent?
## Background and motivation.
A particularly interesting case for applications is when $|\lambda_n|\sim n$. A.F. Leont'ev (whose work was mentioned in a previous MO question) proved that if $n=O(|\lambda_n|)$ then the corresponding family of exponentials is $\mathbb C$-independent (see also this note). It is relatively easy to construct a sequence of exponentials which is not $\mathbb C$-independent (see, e.g., here).
The question is related to the problem of uniqueness of solutions to the so-called gravity equation $$f(x+h)-f(x-h)=2h f'(x),\qquad x\in \mathbb R,$$ where $h>0$ is fixed. The equation appears in the study of radially symmetric central forces (the long history of the gravity equation and some known results are presented in this article by S. Stein).
Titchmarsh proved that an arbitrary solution to the gravity equation has the form $$f(x)=Ax^2+Bx+c+\sum\limits_{n=1}^{\infty}a_n e^{\lambda_n x},\qquad x\in \mathbb R,$$ where $a_n\in\mathbb C$, $n\in \mathbb N$ and $\lambda_n$ are the solutions of the equation $\sinh hz=hz$. Thanks to the Leont'ev result, the sequence $\{\exp (\lambda_n s)\}_{n\in\mathbb N}$ is $\mathbb C$-independent. If the answer to the question above is positive, then every sufficiently smooth function satisfying the gravity equation with two different $h_1$ and $h_2$ is a quadratic polynomial.
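As an aside (not from the original posts): the exponents $\lambda_n$ in Titchmarsh's expansion are the nonzero solutions of $\sinh hz = hz$, which are genuinely complex, and one can locate them numerically. Below is a hedged sketch for the case $h=1$ using Newton's method; the starting guess comes from the asymptotic relation $\sinh z \approx e^z/2$, so roots roughly satisfy $e^z = 2z$.

```python
import cmath

def newton_root(f, df, z0, tol=1e-12, max_iter=100):
    """Newton's method for a complex root of f, starting from z0."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / df(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Nonzero solutions of sinh(z) = z (the case h = 1) are complex; for large
# |z|, sinh(z) ~ exp(z)/2, which suggests starting guesses with slowly
# growing real part and imaginary part near a multiple of 2*pi.
f  = lambda z: cmath.sinh(z) - z
df = lambda z: cmath.cosh(z) - 1

lam = newton_root(f, df, 2.8 + 7.5j)   # guess near the first nonzero root
print(lam, abs(cmath.sinh(lam) - lam))
```

The residual printed at the end confirms $\sinh\lambda \approx \lambda$ to machine precision.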
-
I would be surprised if the answer is yes (but also very pleased to see such a wonderful theorem!), since local uniform convergence on $\mathbb{C}$ is much, much stronger than on $\mathbb{R}$. However, the special form of exponentials compensates a lot for lack of complex analytic contour integration trickery; so anyway, I'm convinced that the question is interesting! I'm aiming for a counterexample though...! – Zen Harper Jul 16 2010 at 16:41
Zen, thank you for the comment. I have a similar gut feeling about this. – Andrey Rekalo Jul 16 2010 at 16:57
In the original message, "A. F. Lavrent'ev" should be "A. F. Leont'ev". – Alexandre Eremenko Aug 4 at 11:06
@Alexandre Eremenko: Many thanks! I stand corrected. – Andrey Rekalo Aug 5 at 9:03
## 3 Answers
I have some partial answers.
I. It is not hard to construct a Dirichlet series $$f(z)=\sum_{n=1}^\infty a_ne^{\lambda_n z}$$ which converges to $0$ absolutely and uniformly on the real line but does not converge at some points of the complex plane. It is constructed as a sum of three series, $f=f_0+f_1+f_2.$ Let $f_1$ be a series with imaginary exponents $\lambda_n$ which converges to an entire function in the closed lower half-plane, but not in the whole plane. Such a series is not difficult to construct, see V. Bernstein, page 34 (full reference below), and there are simpler examples with ordinary Dirichlet series. Then put $f_2=\overline{f_1(\overline{z})}$, and $f_0=-f_1-f_2$. So all three functions are entire. Now, according to Leontiev, EVERY entire function can be represented by a Dirichlet series which converges in the whole plane. Thus we have a Dirichlet series $f_0+f_1+f_2$ which converges on the real line to $0$ but does not converge in the plane.
A counterexample to the original question also requires real coefficients; this I do not know how to do (for $f_0$).
II. It is clear from the work of Leontiev, that to obtain a reasonable theory, one has to restrict to exponents of finite upper density, $n=O(|\lambda_n|)$, otherwise there is no uniqueness in $C$. In the result I cited above the expansion of $f_0$ is highly non-unique.
Assuming finite upper density, I proved that if a series is ABSOLUTELY and uniformly convergent on the real line to zero, then all coefficients must be zero: http://www.math.purdue.edu/~eremenko/dvi/exp2.pdf. I don't know how to get rid of the assumption of absolute convergence.
But there is a philosophical argument in favor of absolute convergence: the notion of "linear dependence" should not depend on the ordering of vectors:-)
III. The most satisfactory result, in my opinion, is that of Schwartz. Let us say that the exponentials are S-linearly independent if none of them belongs to the closure of the linear span of the rest (in the topology of uniform convergence on compact subsets of the real line). Schwartz gave a necessary and sufficient condition for this: the points $i\lambda_k$ must be contained in the zero set of the Fourier transform of a measure with bounded support in $\mathbb R$.
(L. Schwartz, Theorie generale des fonctions moyenne-periodiques, Ann. Math. 48 (1947) 867-929.)
A complete explicit characterization of such sets is not known, but they have finite upper density, and many of their properties are understood. These Fourier transforms are entire functions of exponential type bounded on the real line. The link I gave above contains Schwartz's proof in English. S-linear dependence is also insensitive to the ordering of the functions, which is good.
IV. Vladimir Bernstein's book is "Leçons sur les progrès récents de la théorie des séries de Dirichlet", Paris 1933. This is the most comprehensive book on Dirichlet series, but unfortunately only with real exponents.
V. The application to the functional equation mentioned by the author of the problem is not a good justification for the study of the problem in such generality. The set of exponentials there is very simple, and certainly we have $R$-linear independence for SUCH a set of exponentials. Besides, the theorem stated as an application has been proved in an elementary way.
VI. Finally, I recommend changing the definition of $R$-linear independence by allowing complex coefficients (but equality to $0$ on the real line). Again, in the application mentioned in the original problem, THIS notion of $R$-uniqueness is needed: the function is real, but the exponentials are not real, thus the coefficients should not be real.
-
Thank you so much for the answer and reference. – Andrey Rekalo Oct 1 at 17:36
In the original text of my answer, the numeration of sections is natural: 1, 2, 3, 4, 5,... Why the web site produces something weird, I do not know. – Alexandre Eremenko Oct 1 at 21:09
Still thinking about the interesting question!
Not an answer, but too big for a comment.
To show what I meant in my comment to Daniel Litt's answer about the difference between uniform absolute convergence and ordinary uniform convergence:
I think (but it was a while ago when I thought about it) that there exist $u_0, u_1, \ldots$ such that $\sum_{n=0}^\infty u_n z^n$ converges uniformly on the set $\{ z \in \mathbb{C} : |z| \leq 1 \}$, but not uniformly absolutely.
Thus $\sum_{n=0}^\infty |u_n| = +\infty$, but also $$\forall \, \epsilon>0, \quad \exists \, N(\epsilon)>0 \quad \text{such that}: \qquad \forall \, |z| \leq 1, \quad \forall \, m \geq n > N(\epsilon), \qquad \left| \sum_{k=n}^{m} u_k z^k \right| < \epsilon.$$ This function $f(z) = \sum_{n=0}^\infty u_n z^n$ satisfies $$f \in A(D) \setminus W_+(D),$$ which shows in particular that the disc algebra $A(D)$, consisting of functions continuous on the closed unit disc and analytic on the open unit disc, is strictly larger than the Wiener algebra $W_+(D)$ of power series absolutely convergent on the closed unit disc.
So the problem is going to be pretty hard because we can't use absolute convergence (unless I'm being stupid or there is a clever trick exploiting the special structure of the exponential functions).
The easiest "solution" is just to ignore it (i.e. assume absolute convergence!) This gives a slightly different, but still interesting, question.
-
EDIT: The following actually assumes uniform absolute convergence on compact regions, and thus addresses a slightly different problem, as Zen Harper points out in the comments.
This is an observation which is too big to fit in comments. I claim that if there exists $M>0$ such that for all $n$, $|\Im \lambda_n| < M$, then the answer is yes ($\mathbb{C}$-independence implies $\mathbb{R}$-independence). In this case it's easy to see that uniform convergence on compact intervals of $\mathbb{R}$ implies uniform convergence on compact regions of $\mathbb{C}$.
To see this, write $\lambda_n=x_n+y_n i$ where $x_n, y_n$ are real, and write $s=c+di$. Then $$\sum_n |a_n e^{\lambda_n s}|=\sum_n |a_n|e^{x_n c-y_n d}$$ which can be compared to $$\sum_n |a_n| e^{x_n c}.$$
But then uniform convergence on compact regions of $\mathbb{C}$ implies that the limit function $f$ is holomorphic, and it vanishes identically on $\mathbb{R}$, so $f$ must be identically zero. But then applying $\mathbb{C}$-independence, we have that all the $a_n=0$.
This points in the direction of a counterexample when the $y_n$ are unbounded---if we let the $|y_n|$ tend to $\infty$ rapidly this argument fails dramatically.
-
Am I being stupid? Aren't you assuming the stronger property of uniform ABSOLUTE convergence (which is still interesting, but a different question), rather than just uniform convergence, on compact sets? Or is it obvious that they're equivalent in this example? – Zen Harper Jul 22 2010 at 23:23
Ah indeed I am. Oops. – Daniel Litt Jul 22 2010 at 23:50
http://www.sciforums.com/showthread.php?111586-Gravity-never-zero/page17
# Thread:
1. Is this the alternative definition of Entropy?
2. Originally Posted by Robbitybob1
Is this the alternative definition of Entropy?
Machta asserts that, starting with algorithmic entropy (or complexity) you can derive forms of Shannon and Gibbs entropy.
Usually in physics you start with thermodynamic entropy of a 'Gibbs ensemble'. Shannon entropy has the same formula, so they seem to be isomorphic, and so "heat is information".
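The link between algorithmic (Kolmogorov) complexity and entropy can be illustrated crudely with a compressor: compressed length is a computable upper-bound proxy for algorithmic complexity, which itself is uncomputable. A hedged sketch (not from the thread; zlib is only a proxy):

```python
import random
import zlib

def complexity_proxy(data: bytes) -> int:
    """Compressed size in bytes: a rough, computable upper bound on
    the algorithmic complexity of the data."""
    return len(zlib.compress(data, 9))

rng = random.Random(0)                                    # deterministic "random" source
ordered = b"a" * 4096                                     # highly ordered message
noisy = bytes(rng.getrandbits(8) for _ in range(4096))    # near-incompressible message

print(complexity_proxy(ordered), complexity_proxy(noisy))
```

The ordered message compresses to a few dozen bytes while the pseudo-random one barely compresses at all, mirroring the low-entropy/high-entropy distinction discussed above.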
3. Originally Posted by arfa brane
Machta asserts that, starting with algorithmic entropy (or complexity) you can derive forms of Shannon and Gibbs entropy.
Usually in physics you start with thermodynamic entropy of a 'Gibbs ensemble'. Shannon entropy has the same formula, so they seem to be isomorphic, and so "heat is information".
So you might understand entropy better than all of us. In what way does entropy play in the expansion of space?
4. The expansion of space reflects an increase in entropy. If you wish to talk in terms of entropy equals complexity an expanding universe increases the complexity of all energy signals; red shift. The entropy within the expansion is needed by the second law. If we take away the red shift and the changes in space and time the entropy is too low.
5. I think it's important to first of all understand what entropy is. It does seem to be defined in quite different ways depending on the context.
Shannon entropy is a measure of the probability of receiving a message from a set of messages. So this somehow is related to the probability of detecting a black hole somewhere with a given mass (??).
Another way to think about it is "difference", things (like messages) that are different have mutual information entropy.
Messages containing different characters have more entropy of information than messages with identical characters (i.e. the algorithm that produces the first kind is more complex).
Different sized black holes are an expression of gravitational entropy--the "message" is encoded in a way that we can't decode, except as I mentioned, we do know the sizes of these messages (all of which contain identical "characters").
Entropy is really quite a general concept. You can understand it informationally, or thermodynamically, or both.
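The point about identical versus different characters can be made concrete with the standard formula $H = -\sum_i p_i \log_2 p_i$ applied to a message's empirical character distribution (an illustrative sketch, not part of the thread):

```python
from collections import Counter
from math import log2

def shannon_entropy(msg: str) -> float:
    """Shannon entropy (bits per symbol) of the empirical
    character distribution of msg."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))  # 0.0 -- identical characters, no surprise
print(shannon_entropy("abcdabcd"))  # 2.0 -- four equally likely symbols
```

A message of identical characters carries zero entropy; one drawn uniformly from four symbols carries two bits per symbol.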
6. Another point of view: how much randomness is there in the way proteins are folded up?
The information contained in a protein's amino acid sequence dictates its three-dimensional structure. To quantitate the transfer of information that occurs in the protein folding process, the Kolmogorov information entropy or algorithmic complexity of the protein structure is investigated.
The algorithmic complexity of an object provides a means of quantitating its information content. Recent results have indicated that the algorithmic complexity of microstates of certain statistical mechanical systems can be estimated from the thermodynamic entropy.
In the present work, it is shown that the algorithmic complexity of a protein is given by its configurational entropy. Using this result, a quantitative estimate of the information content of a protein's structure is made and is compared to the information content of the sequence.
Additionally, the mutual information between sequence and structure is determined. It is seen that virtually all the information contained in the protein structure is shared with the sequence.
http://pre.aps.org/abstract/PRE/v54/i1/pR39_1
What can we conclude from this result? That the process of folding a sequence of amino acids into a protein is very efficient: it uses a shortest program length which this result says has the same complexity as the sequence it acts on. Protein folding is known to be mediated by enzymes, which are also proteins. A case of information "acting on itself".
7. There is another definition of entropy which comes into play, associated with work cycles. This applies to both gravitational work and enzymatic work. Entropy is also defined as inefficiency within the work cycles. If a process is 80% efficient, only 20% of the available energy goes into entropy. This is compared to 100% going into entropy if efficiency was not important. Many assumptions and definitions of entropy are not concerned with efficiency.
A common definition of entropy is connected to complexity. However, complexity and efficiency do not always go in the same direction, since higher efficiency means less entropy=complexity.
Let me give an example: the US government is a very complex organization. However, it is not very efficient. The low efficiency means it is based on higher entropy=complexity. If we wanted the government to become more efficient, we would need to simplify departments, for example consolidate them and reduce redundancy. The complexity=entropy falls, allowing more efficiency. The final result is still complex, but now we have less complexity=entropy.
If you look at an enzyme, the substrate fits into the active site like a lock and key. The enzyme does its catalytic activity (increases its own complexity). But after it is done it returns to step 1 (lowers that complexity back to step 1). The result of this entropy loss is its high efficiency. This is why it is called a catalyst.
I think in terms of work cycles like gravitational work and efficiency. Things can still become complex, but less than expected if we assume a fully random system that is totally inefficient. Humans are evolving but this is subtle. Two obvious changes are the loss of wisdom teeth and tonsils. The loss implies lowered complexity compared to retaining these, all else being equal. This is based on increasing efficiency, so entropy=complexity drops.
In the world of science that is based on chaos and randomness, there is a bias which can see complexity=entropy but cannot fully see efficiency at play, since efficiency is not exactly random as entropy suggests. Evolution is due to genetic changes which increase complexity and diversity. But natural selection is about efficiency and narrows this wide range of genetic complexity down to the best of the best.
Gravity does work, which is force over a distance. This work is not 100 percent inefficient. The level of efficiency places a limit on complexity=entropy. The pressure on earth due to gravity will not allow the same diffusion rates into space we get on the moon. This is a restriction in entropy=complexity. The second law requires additional sources of complexity=entropy such as the expansion.
8. I'm wondering if this discussion can get past entropy? Which ultimately wins in the long run.
Can gravity ever be zero?
9. The discussion of entropy is important to the discussion of zero gravity. At zero gravity there is zero work due to gravity. This means there is no entropy deficit (100% inefficiency) and the universe acts as though the formation of entropy is 100% random. There would be no need for the expansion of the universe to maintain the second law when we have perfect inefficiency.
Let me show this with an example. I will start with two tanks of compressed gas. The first I will open up into room 1. The expansion of the gas, as it lowers pressure, will increase entropy as it flows into the room. The energy for the entropy is contained within the pressure.
In the next room, or room 2, I connect a similar air tank to a work cycle. This work cycle is 80% efficient, with only 20% energy loss into entropy. We essentially steal 80% of the energy that, in the other room, went into entropy. At the exhaust end of our work device, the gas trickles out in a wimpy way (20% entropy) since I took away a lot of the energy it could have used for entropy.
The two rooms will act differently, using the similar gas cylinders exhausted to the air. The work cycle took energy away from room 2 entropy. If I open the door between the two rooms, since there is an entropy potential between the two rooms, I can direct the flow of the entropy. It will go from room 1 to room 2. The pressure will push it that way.
Zero gravity is analogous to room 1 where the expansion of the gas goes into 100% into entropy, unobstructed by a work cycle, scavenging energy. Higher gravity is connected to room 2 where work occurs. The flow of entropy goes from zero gravity to higher gravity due to the entropy potential between 100% inefficiency and the efficiency of the work cycle.
The gravitational work is slick because it creates entropy potential and therefore increases the odds for things to happen in a directed way. The zero gravity loses its 100% inefficiency since it loses entropy toward the work cycles of gravity. To maintain the second law we need a way to increase entropy: expansion.
10. WW: ". . . . when we have perfect inefficiency" . . . of which my spouse reminds me, constantly!! (<--humor here!)
11. Originally Posted by wlminex
WW: ". . . . when we have perfect inefficiency" . . . of which my spouse reminds me, constantly!! (<--humor here!)
I tend to agree with her!
12. ## Admit the hyperbolic black hole galactic gravitational field (HBHF) as a postulate th
Admit the hyperbolic black hole galactic gravitational field (HBHF) as a postulate that explains Dark Matter
I have shown that it is expedient and practical to admit the hyperbolic black hole galactic gravitational field (HBHF) as a postulate – that is, as a mere tentative logical premise. There are several ways in which it could be confirmed as a contender for a place in the cosmological pantheon of physical “law”. If it could be seen as a real cosmic rule, every single one of the phenomena that are now ascribed to “Dark Matter” can be more parsimoniously charged to the HBHF. This is also because, by extension, the HBHF can be used to characterize the hyper-excited “inflaton particle” in the false vacuum of the ultra-high energy “inflaton field” that is supposed to have sprung into existence as a probabilistic quantum fluctuation. It offers a new way to forge another link between quantum dynamics and relativity theory.
When enough such links are made, we shall obtain a quantum theory of relativity without having to tolerate the putative overbearing “grand unified theories” or “theories of everything” like superstring theory or quantum loop gravity. These seem to offer no advantage other than the grandeur of hyper-complexity and the safe haven of unfalsifiability. In other words, the HBHF might allow theorists to “get real”. So, it is practical and expedient to admit the HBHF as just such a postulate.
The HBHF, if it can be allowed, would further reinforce Inflation Theory by providing a mechanism for the transition of the excited inflaton HBHF particle/field to a “ground state” inverse square gravitational field. It implies how potential energy in the inflaton field might have powered inflation and how it may now be powering “reinflation”, the accelerating Hubble expansion of the universe in the current epoch. It would seem to require endorsement of the “Many Worlds” interpretation of quantum mechanics/dynamics because the HBHF must have pre-existed inflation in a sort of “metatime” in a “multiverse”. But, this is implied by Alan Guth’s inflation hypothesis anyway. And then, if the universe was once a quantum entity, then it still is – with profound implications and more opportunities to forge links with GR.
Incidentally, the HBHF can certainly be admitted according to common interpretations of some theorems of general relativity if spacetime, in the moments before inflation, was indeed regarded as “flat”. That is, the HBHF can certainly be allowed by GR if the HBHF inflaton field is restricted to two dimensions. This gives a new twist to inflation. It may mean that inflation involved “unpacking” our spatially 3-D universe from a more compactified 2-D version.
And then, the deep interior of black holes at their singularities (as physical realities) might be viewed as recompactifications of spacetime – reconvolutions to a strictly 2-D format wherein the HBHF can persist with no contradiction to conventional interpretations of GR. Then, in our multiverse, the galactic 2-D HBHF sibling set might define orbital planes for each and every entity in its purview.
That this galactic field must be defined as a disk shaped oblate spheroid means that its tidal influence on the central super-massive Black Hole (SMBH) must be concentrated in the plane of the galaxy. The mass of the disk may be thousands of times the mass of the SMBH so, its (mutual) effects on the SMBH are very substantial. Thus, Einstein’s theory of the relativistic non-symmetric gravitational field must be used to characterize it and that of the SMBH. Nobody has ever done this. And Birkhoff’s Theorem or its congeners simply do not exactly apply to any real BHs.
Simple geometry is used to define radiant flux and other quantities that are posited to emanate from a point source. An imaginary sphere is constructed around the source. An infinitesimally small area is defined on the surface of this sphere. Then the flux, the quantity of lines of force or light lines, through this fractional area must be proportional to 1/r^2, because the total area of the sphere is proportional to r^2 and the spherical enclosure envelopes all the flux. Using this definition to prove that gravity must be an inverse square (1/r^2) phenomenon uses circular reasoning, because it assumes as a premise that which is to be proven (it begs the question).
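The inverse-square relation described above can be checked in a few lines (an illustration with a hypothetical total output L, not part of the post):

```python
from math import pi

def flux_density(L: float, r: float) -> float:
    """Flux through unit area at radius r from a point source of total
    output L, spread over a sphere of area 4*pi*r^2."""
    return L / (4 * pi * r**2)

# Doubling the radius quarters the flux density:
print(flux_density(100.0, 2.0) / flux_density(100.0, 1.0))  # 0.25
```

The sphere's area grows as r^2, so the flux per unit area falls as 1/r^2.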
What if the source, even though it is a point, is assumed to be enclosed by an infinitesimally small space that is a very oblate spheroid by virtue of its extremely rapid rotation? What if this is the ultimate source, in fact. Then, what if this flux emission pattern is also very strongly oblately spheroidal? In addition, what if this flux was influenced by relativistic “frame dragging” and “thirring”? Also, what if the gravitational tidal influence of a galactic disk would also influence this spheroid to be even more oblate? The gravitational field of the disk must be perfectly coaxial and concurrent with the field of the SMBH. Its field must perfectly superpose. Then, the combined field must be treated in order to determine if there could be a hyperbolic field component. But, this combined field is even more “non-symmetric” and even more difficult to handle with GR, except by Einstein’s non-symmetric field theory, which has never been done. So, it is really impossible to prove by appeal to any theory or principle whatever whether the hyperbolic gravitational field is impossible. But, it is possible to appeal to strong geometric principles to argue that, indeed, it is possible.
I need a collaborator ! ! !
See more details at www.NeoCosmology.blogspot.com .
See the latest replies by Gary A on this forum under the titles:
Looking for a cosmology collaborator
No Trouble with Tribbles
13. Originally Posted by Gary A
Admit the hyperbolic black hole galactic gravitational field (HBHF) as a postulate that explains Dark Matter
Actually, it has been shown to you why this is incorrect. Carefully read the responses to your earlier threads.
14. ## "Cosmologists are always wrong, but never in doubt." - Lev Landau
Originally Posted by wellwisher
The expansion of space reflects an increase in entropy. If you wish to talk in terms of entropy equals complexity an expanding universe increases the complexity of all energy signals; red shift. The entropy within the expansion is needed by the second law. If we take away the red shift and the changes in space and time the entropy is too low.
"Cosmologists are always wrong, but never in doubt." - Lev Landau
Absolutely.
They overlook what is happening literally right in front of their very eyes. There is a supermassive black hole (SMBH) in the center of our galaxy. It contains at least 4 million solar masses and is still growing. Virtually all the galaxies in the universe (U) contain SMBHs. So, as the U expands cosmologists say entropy (S) is increasing and so, it is not conserved. Redshift is a symptom, not a cause. It is a sign of Hubble expansion. But the consolidation of matter into black holes all over the universe means that entropy is being sucked up by them (and is stored on the quasi-surface of the "event horizon") so that S is reduced in the whole rest of the universe by an amount that may be just equal to the S increase implied by Hubble expansion.
You might say, then, that S is still increasing if the entropy in black holes is counted in the audit. But, prophets say that the universe is proceeding toward a state wherein all the matter and energy in it will eventually reside in these black holes. They will all be SMBHs too, so they will become indistinguishable to even an observer who might see such a universe from a perspective in the multiverse in metatime.
This means that the whole universe will eventually collapse into just one representative SMBH and the U will effectively become a single particle once more. Especially, as the universe will eventually reach a state wherein galaxies and their embedded SMBHs will expand beyond "causal contact", the entropy of "the universe", as it must then be defined, must decline by at least as much as may have been gained by expansion.
So, entropy may be conserved after all and such conservation may be of crucial importance in discerning the details of the process of accelerating Hubble expansion according to the Friedmann equations under the FLRW relativistic metric. Friedmann uses an analog to the ideal gas equation with a work function, w and with the potential of defining and manipulating thermodynamic quantities like S, "delta S".
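For reference, the standard Bekenstein-Hawking formula makes the "entropy stored on the event horizon" idea quantitative: S = k_B c^3 A / (4 G hbar), with A the horizon area. A sketch (my addition, using standard SI constants; the 4-million-solar-mass figure is the one quoted in the post above) showing that this entropy scales as M^2:

```python
from math import pi

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

def bh_entropy(M: float) -> float:
    """Bekenstein-Hawking entropy (J/K) of a Schwarzschild black hole."""
    r_s = 2 * G * M / c**2     # Schwarzschild radius
    A = 4 * pi * r_s**2        # horizon area
    return k_B * c**3 * A / (4 * G * hbar)

M = 4e6 * M_sun                # roughly the Milky Way's central SMBH
print(bh_entropy(M))
print(bh_entropy(2 * M) / bh_entropy(M))  # 4.0 -- entropy scales as M^2
```

Because S grows as M^2, merging matter into ever-larger black holes is an enormous entropy sink, which is the quantitative backdrop to the argument above.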
No cosmologists are working on this as though gravitational lensing has induced a sort of intellectual myopia.
15. The large font really helps to convince people.
16. ## Yours is not proof.
Originally Posted by origin
Actually, it has been shown to you why this is incorrect. Carefully read the responses to your earlier threads.
Yours is not proof.
Carefully read my stuff and you will see why.
It is irrelevant anyway. A tentative postulate needs no proof. Otherwise, Galileo, Kepler and Copernicus should have been burned at the stake for heresy. The Pope should have been proud. All attempts to induce a paradigm shift should have failed and we should all be still living on a flat Earth.
17. Originally Posted by James R
According to general relativity, frame dragging occurs at any speed.
Is this true ?
18. Originally Posted by hansda
Is this true ?
Yes. It is just very difficult to measure.
Frame-dragging and the curvature of space or spacetime are not entirely the same thing. At least to the extent that the curvature of space is associated with gravitation.
The curvature of space is more like a description of the shape of space, defined by the presence of mass.
Frame-dragging is more like a motion of space associated with the motion of mass through it. While frame-dragging can affect the motion of objects in space it does not add to the gravitational attraction between objects.
There is also a bit of a trap in this description, in that frame-dragging is not space moving with matter. It is a very weak interaction. Any analogy we make can only suggest the relationship. Keeping that in mind, you might think of frame-dragging as the way a stick stirring a pot of honey, very slowly, pulls the honey in the direction of the stick's motion without making the honey swirl.
19. Originally Posted by OnlyMe
Yes. It is just very difficult to measure.
Frame-dragging and the curvature of space or spacetime are not entirely the same thing. At least to the extent that the curvature of space is associated with gravitation.
The curvature of space is more like a description of the shape of space, defined by the presence of mass.
Frame-dragging is more like a motion of space associated with the motion of mass through it. While frame-dragging can affect the motion of objects in space it does not add to the gravitational attraction between objects.
If your above statement is true, then Newton's First Law of Motion on inertia is no longer true, because motion of objects will be affected at any speed without any gravitational force.
There is also a bit of a trap in this description, in that frame-dragging is not space moving with matter. It is a very weak interaction. Any analogy we make can only suggest the relationship. Keeping that in mind, you might think of frame-dragging as the way a stick stirring a pot of honey, very slowly.., pulls the honey in the direction of the stick's motion without making the honey swirl.
20. Originally Posted by hansda
If your above statement is true, then Newton's First Law of Motion on inertia is no longer true, because motion of objects will be affected at any speed without any gravitational force.
Correct!
But not measurably. For locally flat space and time, where Newtonian Mechanics can still be applied, any effect on the motion of an object is trivial and insignificant.
Though there is little said about how special and general relativity affect the first law, Newton's second law of motion, F = ma, from the perspective of special relativity becomes $F = ma\gamma$, where $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ is the Lorentz factor, which acts as a modifier for the influence of relativistic velocities in this relationship.
It is more difficult to explain how the Lorentz factor figures into Newton's formula for momentum, p = mv, but it plays a similar role, limiting velocities to less than the speed of light, c.
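A quick numerical sketch of the Lorentz factor discussed above (the example values here are purely illustrative, not from the thread):

```python
import math

# The Lorentz factor, gamma = 1 / sqrt(1 - v^2/c^2), and relativistic
# momentum p = gamma * m * v, as discussed above.  As v -> c the factor
# blows up, which is what keeps massive objects below c.
c = 299_792_458.0  # speed of light, m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def momentum(m, v):
    return gamma(v) * m * v

for frac in (0.01, 0.5, 0.9, 0.99):
    print(frac, gamma(frac * c))
# At 1% of c, gamma is about 1.00005: the relativistic correction is
# invisible at classical speeds, consistent with the point above about
# the effect not being measurable.
```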
There is also little discussion as to how all of this affects Newton's third law of motion, "For every action, there is an equal and opposite reaction." If the motion of an object drags on space, then space must resist in some way. Some of the more recent attempts to explain inertia, as an interaction between a moving object and the zero-point field or vacuum energy of QM, begins to touch on this, at least indirectly... However, again this interaction remains insignificant at the classical velocities we are able to achieve at present.
sciforums.com
http://mathoverflow.net/questions/58507/how-was-the-importance-of-the-zeta-function-discovered/58537
## How was the importance of the zeta function discovered?
This question is similar to http://mathoverflow.net/questions/1880/why-do-zeta-functions-contain-so-much-information , but is distinct. If the answers to that question answer this one also, I don't understand why.
The question is this: with the benefit of hindsight, the zeta function has become the basis of a great body of theory, leading to generalizations of CFT, and the powerful Langlands conjectures. But what made the 19th century mathematicians stumble on something so big? After all, $\sum \frac{1}{n^s}$ is just one of many possible functions one can define that have to do with prime numbers. How and why was this a priori fancifully defined function recognized as being of fundamental importance?
-
+1 While I don't believe the related question of why the zeta function is so important has a good answer at this point, the 'how' question seems a reasonable and interesting one. I might have a few remarks to contribute, but I am looking forward to reading other answers first. – Minhyong Kim Mar 15 2011 at 10:49
Do you not believe that many other functions were considered? Even just by Euler? Look at the size of his collected works! Mathematicians consider many more things than turn out to be beautiful, or interesting, or useful, and those that prove their worth stick around for us to learn about them. I like to think that given enough time, each useful idea would be discovered by someone. I think a more interesting question is WHEN an idea will be discovered. It seems that many ideas "have their time", the almost simultaneous invention of calculus by Newton and Leibniz being the archetypical example – Barry Mar 15 2011 at 12:19
## 3 Answers
It was a classical problem going back to Mengoli to find a closed expression for the sum of inverse squares. This was solved by Euler, who saw more generally how to evaluate $\zeta(2k)$ at the positive even integers. Later, Euler "computed" the values of $\zeta(s)$ at negative integers as well and conjectured the functional equation of the zeta function. Euler also saw the connection with prime numbers and used the Euler factorization for estimating the number of primes up to $x$.
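Euler's Basel-problem evaluation, $\zeta(2) = \pi^2/6$, is easy to check numerically; a small sketch (the cutoff $N$ is arbitrary):

```python
import math

# Mengoli's problem / Euler's answer: zeta(2) = pi^2 / 6.  A numerical
# sanity check of the partial sums of the Dirichlet series.
def zeta_partial(s, N):
    return sum(1.0 / n ** s for n in range(1, N + 1))

N = 100_000
approx = zeta_partial(2, N)
exact = math.pi ** 2 / 6
print(approx, exact)  # the tail, exact - approx, is about 1/N
```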
Most of Euler's results were made rigorous by Dirichlet (his proof of the infinitude of primes in arithmetic progression was built on Euler's results) and Riemann (who interpreted $\zeta(s)$ as a function on the complex plane, proved the functional equation, and indicated how the number of primes is connected with zeroes of the zeta function). There are many more names that should be mentioned (Kummer, Dedekind, Mertens, Landau, ...).
In any case, it was Euler who stumbled upon the zeta function more or less by accident, and he already recognized its importance.
-
Andre Weil has an article called "Prehistory of the zeta function" (reviewed by Jutila on mathscinet). I read this article many years ago, but this is basically what I remember of its content. Apparently the divergence of the harmonic series was known in 1650. Euler computed the special values at even integers and derived some kind of a functional equation. He also proved the Euler product formula and gave a proof of the infinitude of prime numbers using the Euler product. Dirichlet defined general L functions that now bear his name but only for real s>1. Riemann extended the definition of the zeta function to all complex values and proved the functional equation. According to Weil there were other people who had proved functional equations for functions that were closely related to the zeta function (namely, Malmstén, Schlömilch and Clausen from the review), but perhaps Riemann's contribution is the singular paper that established the importance of the zeta function as an important object to study. Weil believes that Riemann was influenced by his discussion with Eisenstein.
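The Euler product mentioned above, $\zeta(s) = \prod_p (1 - p^{-s})^{-1}$, which links the zeta function to the primes, can also be compared numerically against the Dirichlet series; a small sketch with arbitrarily chosen cutoffs:

```python
# Euler's product formula, zeta(s) = prod over primes p of (1 - p^{-s})^{-1},
# compared against the Dirichlet series at s = 2.  Both truncations land
# close to pi^2/6 ~ 1.6449341.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for k in range(p * p, n + 1, p):
                sieve[k] = False
    return [p for p in range(2, n + 1) if sieve[p]]

s, cutoff = 2.0, 20_000
dirichlet = sum(1.0 / n ** s for n in range(1, cutoff + 1))
euler = 1.0
for p in primes_up_to(cutoff):
    euler *= 1.0 / (1.0 - p ** (-s))
print(dirichlet, euler)
```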
-
The divergence of the harmonic series was proven by Nicole Oresme in Questiones super geometriam Euclidis, around 1350. See plato.stanford.edu/entries/nicole-oresme – Stopple Mar 15 2011 at 15:18
The functional equation for Dirichlet's L-series for the quadratic character modulo 4 was discovered by Euler; before Landau made his work known, more general cases were covered by Malmsten et al. Precise references can be found in Landau's "Euler und die Funktionalgleichung der Riemannschen Zetafunktion", Bibl. Math. (3) 7 (1906), 69--79 as well as in Narkiewicz's book on the prime number theorem. – Franz Lemmermeyer Mar 15 2011 at 17:21
Here is a text which I remember I found nice:
http://www.dpmms.cam.ac.uk/~wtg10/zetafunction.ps
-
http://mathoverflow.net/questions/79253/two-essentially-different-concretizaions
two essentially different concretizations
It is sometimes emphasized that a "concrete category" is not a property of a category $C$, but rather a structure, i.e. a faithful functor from $C$ to $Set$. Thus, when people talk about a concrete category $C$ they really mean $C$ together with some implicitly defined and naturally understood functor from $C$ to $Set$ (most commonly, when the objects of $C$ are sets with some extra structure and the morphisms are some functions between those sets). This emphasis suggests that for a given category $C$, there might be several inequivalent concretizations (of course there might also be none, but this is less important for this discussion), where I take "equivalent" to mean the two concretizing functors are naturally isomorphic (this seems natural, but is it the "correct" definition?).
In light of this, I would like to see as interesting an example as possible of a category $C$ with two inequivalent concretizations. I guess there are tailor-made examples, with possibly even a finite category, though I must confess I didn't try to find such myself, so it might also be interesting to see such an example, but the most satisfying example would be of a category of some sort of "real life" mathematical structure with two meaningful inequivalent concretizations, each giving some different intuition about the category (perhaps this is too much to ask, but it is only to clarify what I mean by "interesting").
-
Compose with any faithful functor $\mathrm{Set} \to \mathrm{Set}$. [It will be only coincidence when the result will be isomorphic to the original functor] – Martin Brandenburg Oct 27 2011 at 14:21
@Martin: Sure. But the question is to give interesting examples. In other words, examples where one might blithely refer to "the" underlying-set functor as the "obvious" one, but on reflection it's not so obvious: that's how I read KotelKanim's second sentence. – Todd Trimble Oct 27 2011 at 17:24
There are some very nice examples given by several people here. It seems that the two most common "tricks" are either to compose with a faithful functor $Set\to Set$ or to pre-compose with a faithful functor to some other concrete category. Some of those are quite natural though. There's also Qiaochu's "yoneda style" example which I haven't thought through yet (I mean, it is perfectly good, but I don't no of any natural example of this yet). I will accept Theo's answer since it was the first and a very good one as well. Thanks to everyone. – KotelKanim Oct 28 2011 at 8:13
3 Answers
One of course has many examples. To define the terms, a concretization of a category $C$ should be a faithful functor $C \to \mathrm{Set}$. In examples, concretizing functors are never full, so I will not ask this, but I do not mind asking, say, that the concretizing functor reflect isomorphisms. Maybe you want moreover for "concretization" to have some adjointness properties with some "free" or "discrete" functor $\mathrm{Set} \to C$, but maybe not. But even if you do ask for this, then there are still many such functors. For example, take any concretizing functor $F : C \to \mathrm{Set}$; then probably the functor $2\times F$, which assigns to $x\in C$ the set $F(x) \sqcup F(x) = \lbrace 0,1\rbrace \times F(x)$, is also concretizing.
An example that comes up in nature is the following. I can concretize the category of finite-dimensional Lie algebras by assigning to each Lie algebra the underlying set of its underlying vector space. Instead, I can embed the category of finite-dimensional Lie algebras into the category of Lie groups (by assigning to each Lie algebra its connected simply-connected group), and then I can concretize the category of Lie groups by assigning to each its underlying set. To see that these functors are not isomorphic, consider their actions on any non-trivial morphism $\mathbb R \to \mathfrak{so}(3)$. Under the first concretization, this morphism turns into an injection from the set $\mathfrak c$ with cardinality continuum into itself. Under the second concretization, the morphism also becomes a map $\mathfrak c \to \mathfrak c$, but it is not an injection.
-
Here's an example with "natural" content.
Consider the category $Mat$ whose objects are natural numbers $n$ and whose morphisms $n \to m$ are $m \times n$ matrices with real entries. The "usual" concrete representation takes $n$ to (the underlying set of) $\mathbb{R}^n$.
But, for any finite $k$, the field $\mathbb{R}$ is Morita equivalent to the algebra $M_k$ of $k \times k$ matrices over $\mathbb{R}$, i.e., to $\hom(V, V)$ where $V$ is a $k$-dimensional vector space. What this means is that the category of finitely generated left projective modules over $\mathbb{R}$ is equivalent to the category of finitely generated left projective modules over $M_k$; the equivalence takes $\mathbb{R}^n$ to $V^n$, i.e., a vector space $W$ to the $M_k$-module $V \otimes_{\mathbb{R}} W$.
We have $Mat(m, n) \cong M_k-Mod(V^m, V^n)$. Thus, we may consider the functor $n \mapsto U(V^n)$ (the underlying set of the module $V^n$) as a completely natural concrete representation of $Mat$, no more and no less natural than the first, and yet it is completely different.
Many more such examples can be obtained by exploiting general Morita theory. The other example I was going to write about is based on an abstract equivalence between the category of Boolean algebras and the category of $k$-Post algebras for some fixed $k$, where the "usual" underlying set of a finite Boolean algebra has cardinality $2^n$ whereas the usual underlying set of the corresponding Post algebra under this equivalence has cardinality $k^n$. Thus, two functors
$$U: C \to Set$$
which lay equal claim to being considered "the" underlying set functor, but which assign different sets according to whether we think of an object of $C$ as a Boolean algebra or a Post algebra. (I have written about this in other MO answers, explaining the connection with Morita theory, here and here.)
-
Two representable functors $\text{Hom}(A, -), \text{Hom}(B, -)$ are naturally isomorphic if and only if $A, B$ are isomorphic, and it is not hard to come up with many examples of categories with two non-isomorphic objects $A, B$ such that $\text{Hom}(A, -), \text{Hom}(B, -)$ are both faithful. In some sense the obvious examples of this form are not "interesting," though. In all of the examples of $A, B$ I can think of there is an object $X$ such that $\text{Hom}(X, -)$ is faithful and $A = X \sqcup A', B = X \sqcup B'$ for some $A', B'$. Can anyone think of examples not of this form?
The following example might be "interesting." On the one hand $\text{Set}$ has the trivial concretization given by the identity functor $\text{Set} \to \text{Set}$ (which is representable and given by $\text{Hom}(1, -)$). On the other hand there is also a concretization sending a set $A$ to the power set $2^A$ and sending a function $f : A \to B$ to the function $$f : 2^A \ni S \mapsto \{ f(x) : x \in S \} \in 2^B.$$
This functor is not representable in $\text{Set}$, but $\text{Set}$ embeds into $\text{Rel}$ (the category of sets and relations) and there this functor is $\text{Hom}(1, -)$ again.
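The two concretizations of $\text{Set}$ in this answer can be sketched directly in code; a minimal illustration (the helper names are invented here):

```python
# A sketch of the two concretizations of Set described above: the identity
# functor versus the covariant powerset ("direct image") functor A |-> 2^A.
from itertools import combinations

def powerset(A):
    """All subsets of A, as frozensets."""
    elems = sorted(A)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

def direct_image(f):
    """The powerset functor on a morphism f: A -> B (f given as a dict)."""
    return lambda S: frozenset(f[x] for x in S)

A, B = {1, 2}, {1}
f = {1: 1, 2: 1}                 # the unique function A -> B
Pf = direct_image(f)

print(len(A), len(powerset(A)))  # 2 vs 4: the two functors disagree on objects
print(Pf(frozenset({1, 2})))     # frozenset({1})
# Faithfulness of the powerset functor: on singletons, {x} |-> {f(x)}
# recovers f, so distinct functions stay distinct.
```

Since the two functors do not even agree on cardinalities, no natural isomorphism between them is possible.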
-
http://en.wikipedia.org/wiki/Neutral_current
# Neutral current
Weak neutral current interactions are one of the ways in which subatomic particles can interact by means of the weak force. These interactions are mediated by the Z boson. The discovery of weak neutral currents was a significant step toward the unification of electromagnetism and the weak force into the electroweak force, and led to the discovery of the W and Z bosons.
## Definition
The neutral current that gives the interaction its name is that of the interacting particles. For example, the neutral-current contribution to the $\nu_{\mathrm{e}}\,\mathrm{e}^- \to \nu_{\mathrm{e}}\,\mathrm{e}^-$ elastic scattering amplitude is
$\mathfrak{M}^{\mathrm{NC}} \propto J_{\mu}^{\mathrm{(NC)}}(\nu_{\mathrm{e}}) \; J^{\mathrm{(NC)}\mu}(\mathrm{e^{-}})$
where the neutral currents describing the flow of the neutrino and of the electron are given by
$J^{\mathrm{(NC)}\mu}(f) = \bar{u}_{f}\gamma^{\mu}\frac{1}{2}\left(g^{f}_{V}-g^{f}_{A}\gamma^{5}\right)u_{f},$
and $g^{f}_{V}$ and $g^{f}_{A}$ are the vector and axial vector couplings for fermion $f$.
The Z boson can couple to any Standard Model particle, except gluons and photons. However, any interaction between two charged particles that can occur via the exchange of a virtual Z boson can also occur via the exchange of a virtual photon. Unless the interacting particles have energies on the order of the Z boson mass (91 GeV) or higher, the virtual Z boson exchange amounts to a tiny correction ($\sim (E/M_Z)^2$) to the amplitude of the electromagnetic process. Particle accelerators with the energies necessary to observe neutral current interactions and to measure the mass of the Z boson were not available until 1983.
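A small numerical sketch of this suppression, together with the standard tree-level couplings $g_V^f = T_3^f - 2Q_f\sin^2\theta_W$ and $g_A^f = T_3^f$ (the numerical values of $\sin^2\theta_W$ and $M_Z$ below are assumed inputs, not taken from this article):

```python
# Tree-level Standard Model neutral-current couplings,
#   g_V^f = T3_f - 2 Q_f sin^2(theta_W),   g_A^f = T3_f,
# plus the (E/M_Z)^2 suppression of virtual-Z exchange far below M_Z.
sin2_tw = 0.231   # weak mixing angle, sin^2(theta_W), assumed value
M_Z = 91.19       # Z boson mass in GeV, assumed value

def nc_couplings(T3, Q):
    """Vector and axial-vector couplings for a fermion with isospin T3, charge Q."""
    return T3 - 2.0 * Q * sin2_tw, T3

gV_nu, gA_nu = nc_couplings(+0.5, 0.0)   # neutrino: (0.5, 0.5)
gV_e, gA_e = nc_couplings(-0.5, -1.0)    # electron: (-0.038, -0.5)

for E in (0.001, 1.0, 10.0):             # energies in GeV
    print(E, (E / M_Z) ** 2)             # size of the Z correction vs. photon exchange
```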
On the other hand, Z boson interactions involving neutrinos have distinctive signatures: They provide the only known mechanism for elastic scattering of neutrinos in matter; neutrinos are almost as likely to scatter elastically (via Z boson exchange) as inelastically (via W boson exchange). Weak neutral currents were predicted in 1973 by Abdus Salam, Sheldon Glashow and Steven Weinberg,[1] and confirmed shortly thereafter in 1974, in a neutrino experiment in the Gargamelle bubble chamber at CERN.
## References
1. "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2008-09-10.
http://mathoverflow.net/questions/tagged/classical-groups
## Tagged Questions
0answers
130 views
### multidimensional rotation terminology
Given an element $g$ of the orthogonal group $O(n)$, is there a name for the subspace of $R^n$ that's fixed by $g$, and a name for the orthogonal complement of this space? (The la …
2answers
871 views
### Connectedness of the linear algebraic group SO_n
I apologize in advance if my question is too elementary for MO. It is a well known fact that the linear algebraic group $G = \mathsf{SO}_n$ is connected, and there exist a few dif …
1answer
807 views
### Existence of certain almost invariant functions related to amenability and piece-wise transformations
We would like very much to know the answer to the following question: Let $\|\cdot\|$ be any norm on $\mathbb{Z}^d$ and let $W(\mathbb{Z}^d)$ be the group of all bijections of …
1answer
930 views
### Alternate and symmetric matrices
Greetings to all ! Let me first confess that this question was mentionned to me by Bernard Dacorogna, who doesn't sail on MO. Let $A\in M_{2n}(k)$ be an alternate matrix. Say tha …
0answers
166 views
### Totally singular subspaces in orthogonal vector spaces
This is for all that are interested in classical groups and their representations. We are investigating the following situation: Let $V$ be $d$-dimensional $k$-vector space (where …
1answer
313 views
### Symplectic groups Sp_{2m}(2) as 2-transitive permutation (i.e. Galois) groups
Hello, I am looking for information about the symplectic groups $Sp_{2m}(2)$ as permutation group acting on quadratic forms. Consider the block matrices \[e=\begin{pmatrix}0&1 …
0answers
256 views
### What is the “positive part” of the unit ball in $M_n(R)$ ?
In ${\bf M}_n(\mathbb R)$, let us consider the usual operator norm $$\|A\|=\sup\frac{\|Ax\|}{\|x\|},$$ where $\|x\|$ is the Euclidian norm. The closed unit ball $B$ is the set of …
3answers
918 views
### What is the subgroup generated by involutions?
I was recently taking some notes on the Cartan-Dieudonné theorem: if $(V,q)$ is a nondegenerate quadratic space of finite dimension $n$ over a field of characteristic not $2$, then …
http://physics.stackexchange.com/questions/6985/constructing-the-caln-2-supersymmetric-non-abelian-chern-simons-theory
# Constructing the $\cal{N}=2$ supersymmetric non-Abelian Chern-Simon's theory
This is related to this earlier question I had asked.
I am using the so-called "Majorana" representation of gamma matrices in $2+1$ dimensions in which everything is real. After doing the dimensional reduction of the $\cal{N}=1$ supersymmetry transformations of the components of the vector superfield in $3+1$ dimensions, the supersymmetry transformations of the resulting $\cal{N}=2$ components of the vector superfield in $2+1$ dimensions are,
$$\delta F_A = i \bar{\alpha}^a\lambda_{Aa}$$ $$\delta D_A = i \bar{\alpha}^a\gamma_3^\mu D_\mu \lambda_{Aa}$$ $$\delta V_{A\mu} = i \bar{\alpha}^a\gamma_{3\mu}\lambda_{Aa}$$ $$\delta \lambda_{Aa} = -\frac{1}{2}f^3_{A\mu \nu}\gamma_3^{\mu \nu}\alpha_a + D_A\alpha^a + \gamma_3^\mu D_\mu F_A\alpha ^a$$
where $A,B,..$ are the gauge group indices, $f^3_{A\mu \nu}$ is the non-Abelian field strength and $\alpha$ is a spinor parameter whose components are raised and lowered as, $\alpha^1 = \alpha_2$ and $\alpha^2 = -\alpha_1$.
Using the above one can derive the following transformations for the possible terms in the intended super-Chern-Simons' theory,
$$\delta(Tr[FD]) = Tr[t_At_B] \{ i\bar{\alpha}^a \lambda _{Aa}D_B - i \bar{\alpha}^a \gamma_3^\mu\lambda_{Ba}\partial_\mu F_A + i\bar{\alpha}^a \gamma_3^\mu \lambda_{Ca} C_{BB'C}V_{B'\mu}F_A\}$$
$$\delta(Tr[\bar{\lambda}_a\lambda_a]) = 2Tr[t_A t_B]\{ \frac{1}{2}\bar{\alpha}_a\gamma_{3\rho}\lambda_{Ba}f^3_{A\mu\nu}\epsilon ^{\mu \nu \rho} + \bar{\alpha}^a\lambda_{Ba}D_A - \bar{\alpha}^a\gamma_3^\mu \lambda_{Ba}\partial_\mu F_A$$
$$- \bar{\alpha}^a\gamma_3^\mu \lambda_{Ba}C_{AB'C}V_{B'\mu}F_C \}$$
$$\delta(Tr[\epsilon^{\mu \nu \rho}(V_\mu \partial_\nu V_\rho)]) = -i\epsilon^{\mu \nu \rho}Tr[t_At_B]\bar{\alpha}^a\gamma_{3\rho}\lambda_{Aa}(\partial_\mu V_{B\nu} - \partial_\nu V_{B\mu})$$
$$\delta(Tr[\epsilon ^{\mu \nu \rho}V_\mu V_\nu V_\rho]) = -\frac{3}{2}\epsilon^{\mu \nu \rho}Tr[t_At_D]C_{DBC}\bar{\alpha}^a\gamma_{3\mu}\lambda_{Aa}V_{B\nu}V_{C\rho}$$
(where $t_A$ are a chosen basis in the lie algebra of the gauge group such that the structure constants are defined as, $[t_A,t_B]=iC_{DAB}t_D$)
It is clear that by choosing a coefficient of $-2$ for the $Tr[FD]$ and $i$ for the $Tr[\bar{\lambda}_a\lambda_a]$, some of the terms in the variation of the auxiliary fields can be cancelled, and some of the remaining terms of the variation of the fermionic term totally cancel the supersymmetric variation of the kinetic term of the gauge fields.
What remains are,
$$\delta(Tr[\epsilon^{\mu \nu \rho}V_\mu\partial _ \nu V_\rho + i \bar{\lambda}_a\lambda_a - 2FD]) = Tr[t_At_B]\{i\bar{\alpha}_a\gamma_{3\rho}\lambda_{Aa}C_{BCD}V_{C\mu}V_{D\nu}\epsilon^{\mu \nu \rho}$$ $$- 2i\bar{\alpha}^a\gamma_3^\mu \lambda_{Ba}C_{AB'C}V_{B'\mu}F_C - 2i\bar{\alpha}^a\gamma_3^\mu \lambda_{Ca}C_{BB'C}V_{B'\mu}F_A\}$$
and
$$\delta(Tr[\epsilon ^{\mu \nu \rho}V_\mu V_\nu V_\rho]) = -\frac{3}{2}Tr[t_At_B]\bar{\alpha}^a\gamma_{3\mu}\lambda_{Aa}C_{BCD}V_{C\nu}V_{D\rho}\epsilon^{\mu \nu \rho}$$
• It is not clear that a coefficient can be chosen for the last term so that the supersymmetric variation of the sum of the LHSs goes to zero.
The above terms seem to be structurally very different and hence it is not clear how they will cancel. For instance, the variation of the fermionic self-coupling term produces a coupling of the fermionic component, gauge field and auxiliary field. Such a term is not produced by the variation of the gauge field cubed term!
One expects the lagrangian should look something like,
$$Tr[\epsilon^{\mu \nu \rho}(V_\mu\partial _\nu V_\rho -i\frac{2}{3}V_\mu V_\nu V_\rho ) + i \bar{\lambda}_a\lambda_a - 2FD ]$$
I would like to get some help in establishing the above!
• One progress would be if the two terms with the structure constant actually cancel i.e,
if
$$Tr[t_At_B]\{C_{AB'C}\lambda_{Ba} V_{B'\mu}F_C + C_{BB'C}\lambda_{Ca}V_{B'\mu}F_A\} = 0$$
But the above is not clear!
NB. My structure constants are defined as $[t_A,t_B]=iC_{DAB}t_D$
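One way to probe the identity asked about above is numerically, under two extra assumptions that are not forced by the question: an orthonormal basis with $Tr[t_At_B] = \delta_{AB}$, and totally antisymmetric structure constants, e.g. $su(2)$ with $C_{DAB} = \epsilon_{DAB}$. Under those assumptions the bracket contracts a tensor antisymmetric under $A \leftrightarrow C$ with the symmetric combination $\lambda_{Aa}F_C + \lambda_{Ca}F_A$, and so vanishes:

```python
import numpy as np

# Numerical probe of the candidate identity
#   Tr[t_A t_B] { C_{A B' C} lam_B V_{B'} F_C + C_{B B' C} lam_C V_{B'} F_A } = 0
# with spinor and Lorentz indices suppressed (they are spectators here).
# Two extra assumptions, NOT forced by the question itself:
#   * orthonormal basis: Tr[t_A t_B] = delta_AB,
#   * totally antisymmetric structure constants: su(2), C_{DAB} = eps_{DAB}.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[j, i, k] = -1.0

rng = np.random.default_rng(0)
lam, V, F = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# With Tr[t_A t_B] = delta_AB the bracket collapses to
#   C_{A B' C} ( lam_A F_C + lam_C F_A ) V_{B'} :
# an antisymmetric tensor (in A <-> C) against a symmetric one.
term1 = np.einsum('abc,a,b,c->', eps, lam, V, F)   # C_{A B' C} lam_A V_{B'} F_C
term2 = np.einsum('abc,c,b,a->', eps, lam, V, F)   # C_{A B' C} lam_C V_{B'} F_A
total = term1 + term2
print(total)  # ~ 0: exact cancellation up to roundoff
```

This only settles the orthonormal, totally antisymmetric special case; the general statement still requires running the same symmetry argument basis-independently.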
-
## 2 Answers
Without going through and doing the calculation myself, I can only make some general comments.
Your index contractions seem a little weird. In the first term on the RHS of the $$\begin{align} &\delta(Tr[\epsilon^{\mu \nu \rho}V_\mu\partial _\nu V_\rho + i \bar{\lambda}_a\lambda_a - 2FD]) \\ = &Tr[t_At_B]\{i\bar{\alpha}_a\gamma_{3\rho}\lambda_{Aa}C_{ABC}V_{B\mu}V_{C\nu} - 2i\bar{\alpha}^a\gamma_3^\mu \lambda_{Ba}C_{AB'C}V_{B'\mu}F_C \} \,, \end{align}$$ the Lorentz indices are not contracted, are you missing a $\epsilon^{\mu \nu \rho}$? Also, in the same term you have triply repeated gauge indices (i.e. the same index appears 3 times), which is unpleasant.
If you fix the above, and maybe use the symmetry of the $Tr[t_At_D]$ term to move the $D$ index in the RHS of the cubic term $\delta(Tr[\epsilon ^{\mu \nu \rho}V_\mu V_\nu V_\rho])$, then maybe you can get it cancel the first term talked about above.
Finally, the second term on the RHS of the above displayed equation can not be canceled by anything else. So check your result for $\delta(Tr[\bar{\lambda}_a\lambda_a])$ - maybe the trouble term is meant to vanish... Are you sure that your Susy variations are correct? Have you got a reference (e.g. http://arxiv.org/abs/hep-th/9506170) you can check against?
-
you have triply repeated gauge indices, which is unpleasant ... @Simon this is the Einstein summation convention and is used all the time. Not sure what you mean by "unpleasant". – user346 Mar 16 '11 at 0:47
@Deepak: Normally you only sum over doubly repeated indices - sticking to this convention avoids strange errors. In some situations you can bend that rule, such as when you have a sign factor that depends on the index... – Simon Mar 16 '11 at 0:53
@Deepak: In the 2nd term you have B' indices, but in the first you simply repeat the A and B indices. I thought perhaps it was by mistake. You don't seem to have chosen a basis for your gauge group (or at least the metric for your basis), so there's no reason to assume that they are trace orthogonal. Maybe you should choose $tr(t_a t_b)\propto\delta_{ab}$ and simplify the expressions. – Simon Mar 16 '11 at 0:55
@Simon let me give a simple example. Given three vectors, the expression $\textbf{A}\cdot\textbf{B}\times\textbf{C}$ gives the volume of the parallelepiped defined by these vectors. One can also write this as $\epsilon^{abc} A_a B_b C_c$ in index notation. – user346 Mar 16 '11 at 0:57
@Deepak: I'm not stupid. – Simon Mar 16 '11 at 0:57
show 9 more comments
@Deepak: triply repeated is operationally well-defined as part of Einstein summation convention, but is meaningless in general: contractions are supposed to be group invariants, and in that sense such triple "contractions" are meaningless.
-
http://mathoverflow.net/revisions/121542/list
# Simple proof of the existence of Nash equilibria for 2-person games?
Is there a nice elementary proof of the existence of Nash equilibria for 2-person games?
Here's the theorem I have in mind. Suppose $A$ and $B$ are $m \times n$ matrices of real numbers. Say a mixed strategy for player A is a vector $p \in \mathbb{R}^m$ with
$$p_i \ge 0 , \quad \sum_i p_i = 1$$
and a mixed strategy for player B is a vector $q \in \mathbb{R}^n$ with
$$q_i \ge 0 , \quad \sum_j q_j = 1$$
A Nash equilibrium is a pair consisting of a mixed strategy $p$ for A and a mixed strategy $q$ for B such that:
1. For every mixed strategy $p'$ for A, $p' \cdot A q \le p \cdot A q$.
2. For every mixed strategy $q'$ for B, $p \cdot B q' \le p \cdot B q$.
(The idea is that $p \cdot A q$ is the expected payoff to player A when A chooses mixed strategy $p$ and B chooses $q$. Condition 1 says A can't improve their payoff by unilaterally switching to some mixed strategy $p'$. Similarly, condition 2 says B can't improve their expected payoff by unilaterally switching to some $q'$.)
Nash won the Nobel prize for a one-page proof of a more general theorem for $n$-person games here, but his proof uses Kakutani's fixed-point theorem, which seems like overkill, at least for the 2-person case. There is also a proof using Brouwer's fixed-point theorem; see here for the $n$-person case and here for the 2-person case. But again, this seems like overkill.
Earlier, von Neumann had proved a result which implies this one in the special case where $B = -A$: the so-called minimax theorem for 2-player zero-sum games. Von Neumann wrote:
As far as I can see, there could be no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax Theorem was proved.
I believe von Neumann used Brouwer's fixed point theorem, and I get the impression Kakutani proved his fixed point theorem in order to give a different proof of this result! Apparently when Nash explained his generalization to von Neumann, the latter said:
That's trivial, you know. That's just a fixed point theorem.
But you don't need a fixed point theorem to prove von Neumann's minimax theorem! There's a more elementary proof in an appendix to Andrew Colman's 1982 book Game Theory and its Applications in the Social and Biological Sciences. He writes:
In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.
This proof is indeed very elementary. The deepest fact used is merely that a continuous function assumes a maximum on a compact set - and actually just a very special case of this. So, this is very nice.
Unfortunately, the proof is spelt out in such enormous elementary detail that I keep falling asleep halfway through! And worse, it only covers the case $B = -A$.
Is there a good reference to an elementary but terse proof of the existence of Nash equilibria for 2-person games?
1
# Simple proof of the existence of Nash equilibria for 2-person games?
|
http://mathoverflow.net/revisions/93879/list
|
## Return to Question
2 added another reference.
I am talking about this in a course I am teaching, and hence am wondering: what are the various derivations of the values of the Riemann zeta function at even integers? There are two incredibly cool proofs in Don Zagier's paper (section 1), but there must be several other proofs floating around. Also, I recall reading that Euler originally proved the formula for $\zeta(2)$ by thinking of $\sin(x)$ as a polynomial -- has this argument been made rigorous since?
EDIT I did not realize that this was known as the "Basel Problem", so did not find @Yemon's answer myself. I conjecture, however, that the Robin Chapman list is incomplete, since I have found yet another proof, not contained in Robin's list, so maybe there are more yet out there...
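As a quick numerical sanity check of the Basel value (a hypothetical Python sketch, not one of the proofs in question): the partial sums of $\sum 1/n^2$, corrected by the integral-test tail estimate $\sum_{n>N} 1/n^2 \approx 1/N$, match $\pi^2/6$ to high accuracy.

```python
import math

# Partial sum of 1/n^2 plus the integral-test tail estimate 1/N;
# the remaining error is of order 1/(2N^2).
N = 100_000
partial = sum(1.0 / n**2 for n in range(1, N + 1))
print(abs(partial + 1.0 / N - math.pi**2 / 6) < 1e-8)  # True
```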
1
# Riemann zeta at even integers
|
http://mathoverflow.net/questions/71420/the-reverse-mathematics-of-writing-a-set-as-a-union/71440
|
## The Reverse Mathematics of writing a set as a union?
To be more precise, a countable collection of sets $(S_n)_{n \in \mathbb{N}}$ is encoded as the rows of some given set $S$, i.e., $S_n = S^{[n]}$. Furthermore, for any function $f$ from $\mathbb{N}$ to $2$, let $\bigcup_f S$ denote the union of the $S_n$ where $f(n) = 1$.
The question is: what is the strength of the following statement (over $\text{RCA}_0$)? For all $X$, if for all $m \in X$ there exists an $n$ such that $m \in S_n$ and $S_n \subset X$, then there exists an $f : \mathbb{N} \rightarrow 2$ such that $X = \bigcup_f S$.
Clearly $\text{ACA}_0$ can prove this. However, I cannot reverse this over $\text{RCA}_0$. If it helps, this property feels very much like a special collection principle. That is, for any $\Pi_1^0$ formula $\varphi(m,n)$ in free variables $m$ and $n$ : $(\forall m)(\exists n)\varphi(m,n) \Rightarrow (\exists X)[(\forall m)(\exists n)(n\in X \wedge \varphi(m,n)) \wedge (\forall n)(n \in X \Rightarrow (\exists m)\varphi(m,n))]$. So this asserts that a solution for every $m$ exists in $X$ and all the elements of $X$ are solutions for some $m$. With this, and using the $\Pi_1^0$ formula asserting that $S_n$ is a subset of $X$, I can prove the union property above. However, I am not sure if I can go the other way. I am not certain of the strength of this collection principle either.
Could someone tell me if the union property or the collection principle is equivalent to any well known systems over $\text{RCA}_0$ or how they relate to well-known systems. Thanks for any help.
-
## 3 Answers
Let $Y$ be a member of the Turing degree $[Y\hspace{.04 in}]$. $\;$ Define `$canhalt : \omega \times \omega \to \{\text{false},\text{true}\}$` by
$canhalt(s,t) \iff$
there exists an $s$-state $Y$-oracle machine that runs exactly $t$ steps if started on a blank tape
Define $pair : \omega \times \omega \to \omega$ to be the Cantor pairing function. $\; \; pair$ has a graph and is a bijection.
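(For concreteness, here is a standard Python sketch of the Cantor pairing function and its inverse; this is not specific to the argument below:)

```python
import math

# The Cantor pairing function: a bijection omega x omega -> omega,
# pair(s, t) = (s + t)(s + t + 1)/2 + t, together with its inverse.

def pair(s, t):
    return (s + t) * (s + t + 1) // 2 + t

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    t = z - w * (w + 1) // 2
    return w - t, t

# Round-trip check on a small grid.
print(all(unpair(pair(s, t)) == (s, t) for s in range(50) for t in range(50)))  # True
```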
There are only finitely many $m$-state $Y$-oracle machines, and these are easily enumerated,
so define $\langle S_0,S_1,S_2,S_3,...\rangle$ by
$((2\cdot n)\in S_{pair(s,t)}) \iff n=s$
and
$(((2\cdot n)+1)\in S_{pair(s,t)}) \iff (t\lt n$ and $canhalt(s,n))$
and note that for all $s$, `$\{t : canhalt(s,t)\}$` is finite.
Define $bb_Y : \omega \to \omega$ by `$bb_Y(s) = \operatorname{max}(\{t : canhalt(s,t)\})$`. $\;$ ($bb_Y$ does not necessarily have a graph)
Define `$E = \{n : n\, \text{ is even} \}$`. $\;$ By construction, for every member $n = 2m$ of $E$, $\; n\in S_{pair(m,bb_Y(m))} \subseteq E \;$.
Assuming the Union Principle, let $I$ be a subset of $\omega$ such that $\; \; \; \displaystyle\bigcup_{i\in I} \; S_i \; \; = \; \; E \; \; \;$.
By the construction of $\langle S_0,S_1,S_2,S_3,...\rangle$ and $I$, for all $s$ there exists $t$ such that $pair(s,t)\in I$,
and for all $s$ and $t$ if $pair(s,t)\in I$ then $bb_Y(s) \leq t$.
Let $\langle mach_0,mach_1,mach_2,mach_3,...\rangle$ be a reasonable enumeration of the $Y$-oracle machines. $\;$ Define $states : \omega \to \omega$ by $\; states(m) =$ the number of states in $mach_m \;$.
Since the enumeration is reasonable, $states$ has a graph.
For all $m$ and $t$, if $pair(states(m),t)\in I$ then
$mach_m$ halts within $t$ steps if started on a blank tape
$\implies$
$mach_m$ halts if started on a blank tape
$\implies$
$mach_m$ runs exactly a member of `$\{t : canhalt(states(m),t)\}$` steps if started on a blank tape
$\implies$
$mach_m$ halts within $bb_Y(states(m))$ steps if started on a blank tape
$\implies$
$mach_m$ halts within $t$ steps if started on a blank tape
Now, since the enumeration is reasonable, define `$H = \{m : mach_m\; \text{halts within}\; t\; \text{steps when started on a blank tape, where}\; pair(states(m),t)\in I \}$`. By the above, $[Y\hspace{.04 in}]' = [Y\hspace{.02 in}'] = [H\hspace{.02 in}]$ exists. $\;$ This works for all Turing degrees, so (RCA0 + Union Principle) proves all of ACA0. $\;$ Clearly ACA0 proves the Union principle, and ACA0 is stronger than RCA0.
Therefore the Union Principle is equivalent to ACA0 over RCA0.
-
It isn't possible to form the function $bb_Y(s)$ in $RCA_0$. When $Y = \emptyset$ this is the Busy Beaver function, which will not be in the $\omega$-model REC of $RCA_0$ because every function in that model is computable. It is true that for each $s$ the set $B_s = \{ t : canhalt(s,t)\}$ is finite, but there is no computable sequence of canonical indices for the sequence $(B_s)$, and this is what would be needed to define the bb function. By comparison, for each $s$ the set `$C_s = \{ 0 : s \text{ halts}\} \cup \{ 1 : s \text{ doesn't halt}\}$` is finite, but we can't form $f(s) = \max C_s$. – Carl Mummert Jul 27 2011 at 20:59
1
I retyped this in a different way as a community wiki post to help myself understand it. – Carl Mummert Jul 27 2011 at 22:07
1
OK. I sorted through the confusion. Your argument is basically correct, but you need to fix a few things to make it right. - The first step of your sequence of implications should not be there. - Your $H$ at the end is defined by a $\Sigma^0_1$-formula. You need to first use $I$ to define a function $h$ such that $(s,h(s)) \in I$. Then define $H$ to be the set {$m$ : $mach_m$ halts in $h(states(m))$ steps}. - Carl's objection to 'defining' $bb_Y$ is right. Try something like "consider the (external!) function $bb_Y$" to warn the reader that you're not claiming that $bb_Y$ actually exists. – François G. Dorais♦ Jul 27 2011 at 22:30
2
You don't need to follow my advice to the word. However, I strongly recommend that you do two things: (1) always announce what you're proving, and (2) always conclude your arguments. Once you start doing that, you will find that people will find your arguments much less confusing. – François G. Dorais♦ Jul 28 2011 at 1:40
1
The other tricky thing in this proof is showing that `$\{t : canhalt(s,t)\}$` is always bounded. This seems to require actually analyzing the complexity of the canhalt relation and associated functions, because just being in definable bijection with a bounded set is not good enough to ensure boundedness. The fact that not every machine will halt is another wrinkle. – Carl Mummert Jul 28 2011 at 2:12
Due to my own confusion, I had a hard time reading Ricky Demer's proof, but I think it is correct. I couldn't fit this remark in a comment so this is a community wiki post where I will try to rephrase the proof in a way that I can grasp more quickly. Maybe it will help others as well.
We work in $RCA_0$. To establish $ACA_0$ it is sufficient to prove that the range of each injective function exists. Let $f\colon \mathbb{N} \to \mathbb{N}$ be injective.
For each $i$ define ```$$
S_{(i,j)} = \{2i\} \cup \{ 2k+1 : j < k \land f(k) < i\}
$$``` The sequence `$\{ S_{(i,j)} : i,j \in \mathbb{N}\}$` is uniformly definable with a bounded-quantifier formula relative to $f$ so it can be formed in $RCA_0$.
Because $f$ is injective, for each $i$ the set `$\{ k : f(k) < i\}$` is bounded, and so for each $i$ there is a $j$ such that `$S_{(i,j)} = \{2i\}$`. To prove that the set is bounded seems to require an argument using bounded $\Sigma^0_1$ comprehension to form the set of elements less than $i$ in the range, then using quantifier-free bounding to show the range of this is bounded. (Is there an easier way?) In general, the "bounding principle" for a class of formulas $\Gamma$ says that the image of a bounded set of numbers under a $\Gamma$-definable function is bounded.
Let $E$ be the set of even numbers. By the Union Principle, there is a set $I$ such that $E = \bigcup_{(i,j) \in I} S_{(i,j)}$. Note that if $(i,j) \in I$ then $S_{(i,j)} = \{2i\}$. Also note that for every $i$ there is at least one $j$ such that $(i,j) \in I$. Given $i$, let $h(i)$ be the first $j$ such that $(i,j) \in I$. Since $$(\exists k)(f(k) = \ell) \iff (\exists k < h(\ell+1))(f(k) = \ell)$$ we can define the range of $f$ using only bounded quantifiers. Thus we can form the range of $f$ in $RCA_0$.
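To see the bounded search in action concretely (a plain computation, not something available inside $RCA_0$, where $h$ exists only via the Union Principle), here is a hypothetical Python sketch for an injective $f$ given on a finite initial segment. Note that the search here runs over $k \leq h(\ell+1)$, since the least $j$ with $S_{(i,j)} = \{2i\}$ is the largest $k$ with $f(k) < i$.

```python
# Finite illustration: f restricted to {0, ..., 5}.
f_vals = [5, 0, 7, 2, 9, 4]   # values f(0), ..., f(5) of an injective f

def h(i):
    # Least j with S_(i,j) = {2i}: the largest k with f(k) < i (0 if none).
    ks = [k for k, v in enumerate(f_vals) if v < i]
    return max(ks) if ks else 0

def in_range(ell):
    # ell is in ran(f) iff some k <= h(ell + 1) has f(k) = ell.
    return any(f_vals[k] == ell for k in range(h(ell + 1) + 1))

print([ell for ell in range(10) if in_range(ell)])  # [0, 2, 4, 5, 7, 9]
```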
-
@William Chan: I added a brief statement of what the bounding principle says. These sorts of principles are more well known in the study of fragments of first-order arithmetic, but it shows up occasionally in the context of reverse math. – Carl Mummert Jul 28 2011 at 1:50
Also, in my opinion this is just a rephrasing of Ricky Demer's proof, and if you would like to accept an answer I would prefer if you did not accept this one. – Carl Mummert Jul 28 2011 at 1:50
@Francois: thanks for rewording the last paragraph. – Carl Mummert Jul 28 2011 at 1:52
I only have a partial answer so far...
The Union Principle implies $\Sigma^0_1$-Separation (which is equivalent to Weak König's Lemma).
Let $h_0, h_1:\mathbb{N}\to\mathbb{N}$ be two functions with disjoint ranges. Define $$S_{2n+i} = \{ m : m = n \lor (\exists k \leq m)(h_i(k) = n)\}.$$ Note that either $S_{2n} = \{n\}$ or $S_{2n+1} = \{n\}$ (possibly both) so every set $X$ satisfies the precondition for the Union Principle.
Let $f_0,f_1:\mathbb{N}\to2$ be such that $\bigcup_{f_0} S_n$ is the set of even numbers and $\bigcup_{f_1} S_n$ is the set of odd numbers.
Note that if $f_0(4n) = 1$ then $2n$ is not in the range of $h_0$ and if $f_0(4n+1) = 1$ then $2n$ is not in the range of $h_1$. Since we must have either $f_0(4n) = 1$ or $f_0(4n+1) = 1$, the set $$X_0 = \{2n : f_0(4n) = 1\}$$ is such that all the even values of $h_1$ are in $X_0$ and none of the even values of $h_0$ are in $X_0$.
Similarly, if $f_1(4n+2) = 1$ then $2n+1$ is not in the range of $h_0$ and if $f_1(4n+3) = 1$ then $2n+1$ is not in the range of $h_1$. Since we must have either $f_1(4n+2) = 1$ or $f_1(4n+3) = 1$, the set $$X_1 = \{2n+1 : f_1(4n+2) = 1\}$$ is such that all the odd values of $h_1$ are in $X_1$ and none of the odd values of $h_0$ are in $X_1$.
It follows that $X_0 \cup X_1$ separates the ranges of $h_0$ and $h_1$.
-
|
http://physics.aps.org/articles/print/v2/81
|
# Viewpoint: Large rare patches of order in disordered boson systems
, BAE Systems, Advanced Information Technologies, 6 New England Executive Park, Burlington, MA 01803, USA
Published September 28, 2009 | Physics 2, 81 (2009) | DOI: 10.1103/Physics.2.81
The existence, through statistical fluctuation, of arbitrarily large regions with a certain order in an otherwise disordered system allows one to set bounds on various important thermodynamic properties.
The effects of disorder on the phases and phase transitions in the zero-temperature ground state of many-body quantum systems have been of intense interest for many decades. Perhaps most familiar is the transition from a metal to an insulator in noninteracting electronic systems, where increasing disorder in the atomic lattice interferes with an electron’s ability to hop coherently from site to site, eventually confining each to a finite volume, a phenomenon known as Anderson localization.
Analogous phenomena occur in Bose systems, such as $^4$He absorbed in porous media, or magnetically trapped atomic vapors in periodic or disordered optical potentials, but there is now a fascinating interplay between disorder, interactions (required in the absence of Pauli exclusion to avoid system collapse), and superfluidity [1]. The clean system is visualized in the top panel of Fig. 1, where one imagines populating a periodic potential with one boson per site (filling factor $n=1$). If the tunneling amplitude $J$ between sites is weak then, even in the absence of disorder, the onsite mutual hard-core repulsion $U$ localizes the effective single-particle wave functions to a finite size $\xi(J)$. The resulting phase is known as a Mott insulator, and is identified by the finite energy gap $\varepsilon(J)$ required to overcome the repulsion and add a particle (or hole) to the system. This phase is therefore also incompressible.
This system can now be driven into a superfluid phase in two distinct ways (bottom panel of Fig. 1). First, one may increase (or decrease) the chemical potential $\mu$, overcome the energy gap, and add a small density of particles (or holes). These particles are free to propagate coherently throughout the system, and effectively form a dilute Bose superfluid on top of a uniform insulating background. Alternatively, one may increase the hopping strength $J$ (or lower the repulsion $U$) so that the ratio $J/U$ exceeds a critical value $(J/U)_c$. The length $\xi(J)$ diverges, and particles and holes simultaneously gain sufficient mobility, tunneling coherently through each other to form a dense, strongly interacting superfluid. This phase transition is characterized by the existence of an extra particle-hole symmetry, and belongs to a different universality class [1] compared to the first scenario.
Consider now the addition of disorder, visualized in Fig. 2. If it is sufficiently bounded, a shrunken incompressible Mott phase with fixed filling $n=1$ still survives. A sufficiently large chemical potential can again be applied to overcome the energy gap and add extra particles (or holes) to the system. A key question now is “do these particles still form a superfluid?” At least for small hopping $J$, the answer must be no; the extra particles still see a residual random potential due to the distortion of the background Mott phase by the disorder. The usual Anderson localization arguments then imply that the effective single-particle energy states experienced by these particles must be localized, and the system remains insulating. However, the density can be varied continuously, so this phase, known as a Bose glass, is compressible.
As one continues to add particles or holes to the system, the background becomes gradually more smooth, the localization length increases, and the system eventually undergoes a superfluid transition (bottom panel of Fig. 2). A picture to keep in mind is that of isolated superfluid droplets that grow, join, and percolate to eventually span the system.
The question remaining, which has generated much controversy in recent years, is whether the Bose glass phase must completely surround the Mott lobe, or whether, in fact, a direct Mott-superfluid transition might take place at larger $J$, closer to the tip, or perhaps only through the tip. Strong arguments, based on this superfluid droplet picture, have been presented [1, 2] that forbid such a direct transition, but a rigorous proof remained elusive. In a recent paper appearing in Physical Review Letters [3], Lode Pollet, Nikolai Prokof’ev, and Boris Svistunov at the University of Massachusetts, US, and Matthias Troyer from ETH Zurich, Switzerland, have presented precisely such a proof. The previous arguments were based on proposals for the Mott phase boundary in terms of that of the periodic system, and the phase just beyond it. Unfortunately, this boundary is highly model dependent, and depends on reasonable, but unproven, assumptions about the influence of nonperturbative, finite amplitude disorder. The key insight in the rigorous proof is to focus instead on the superfluid phase boundary, and the nature of the phase just beyond it [3] (see, however, Ref. [4] where the implications of the proof for the Mott-Bose glass transition are addressed as well).
Pollet and his colleagues use so-called “large rare region” arguments that are based on the simple but powerful idea that a finite set of random variables will, with finite probability, take values within any specified range, no matter how restricted. Moreover, in an infinite, translation invariant system, any such finite probability event will take place infinitely often somewhere in the system. For example, independently assigned random site potentials will, through statistical fluctuations, give rise to arbitrarily large, arbitrarily near-uniform regions that mimic the properties of the bulk periodic system. Within the Bose glass phase, for example, there must exist arbitrarily large superfluid droplets. Though isolated, the known properties of these droplets allow one, for example, to prove that the superfluid excitation spectrum is gapless.
Taking this a step further, one may focus on large regions over which one probability distribution mimics another. Thus, for example, an infinite sequence of flips of a fair coin, will have arbitrarily long subsequences that appear to be those of an unfair coin, with, say, 2:1 (or any other) ratio of heads to tails. Applying this idea to the insulating phase near the superfluid transition line, it follows that there always exist large rare regions in which the disorder distribution mimics that of a system that lies on the superfluid side of its transition line. Thus the insulating phase must contain arbitrarily large superfluid droplets. Since the superfluid phase is compressible, this can be used to show that this phase not only has no energy gap, but is in fact compressible, i.e., it must be a Bose glass not a Mott phase [3, 4].
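The coin-flip version of this argument is easy to see numerically. In the following Python sketch (window widths and frequency thresholds are illustrative choices), windows of a long fair-coin sequence whose head frequency mimics a roughly 2:1-biased coin show up at every width tried, though they become rapidly rarer as the width grows:

```python
import random

# In a long run of fair coin flips, count disjoint windows whose head
# frequency mimics a biased (roughly 2:1) coin.
random.seed(0)
flips = [random.random() < 0.5 for _ in range(2_000_000)]

def count_biased_windows(width, lo=0.60, hi=0.73):
    # Scan disjoint windows and test for a biased-looking head ratio.
    return sum(
        1 for start in range(0, len(flips) - width + 1, width)
        if lo <= sum(flips[start:start + width]) / width <= hi
    )

c10, c50, c200 = (count_biased_windows(w) for w in (10, 50, 200))
print(c10, c50, c200)  # counts fall steeply with width, but none is zero
```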
Such large rare regions are the bane of Monte Carlo simulations, which necessarily are limited to (relatively small) finite domains. This easily explains how very rare compressible regions could be missed in what otherwise numerically looks like a Mott phase. The weak disorder limit in some studies also leads to situations where the superfluid compressibility is irresolvably small for entirely different reasons, again mimicking a Mott phase.
Field-theoretic approaches contain all “rare region effects” because disorder is fully averaged from the beginning, but choosing the form of the action that describes the correct universal “fixed point” structure of the model is a subtle issue [2]. One may consider disorder that preserves particle-hole symmetry on average, and leads to no change in average filling; or disorder that breaks particle-hole symmetry outright and leads to a net change in filling. The latter generates terms in the action that appear to provide a much stronger perturbation to the periodic model than those generated by the former. However, the rare regions arguments of Pollet et al. prove this to be illusory. The operative notion is “spontaneous restoration of symmetry”, and occurs in many systems [5, 6]—the most familiar being the restoration of up-down Ising symmetry at liquid-vapor critical points. Thus it turns out that the former choice describes the correct fixed point model, and the terms breaking global particle-hole symmetry actually disappear at the superfluid transition. A fundamental error often made, then, is to not bother including the weaker terms under the assumption that they are not important, thereby obtaining an incorrect description of both the insulating phase and the superfluid phase transition. The key observation is that apparently weak terms that break a fundamental symmetry of the problem can have an outsized effect, and this is indeed the case here.
### References
1. M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Phys. Rev. B 40, 546 (1989).
2. P. B. Weichman and R. Mukhopadhyay, Phys. Rev. B 77, 214516 (2008).
3. L. Pollet, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer, Phys. Rev. Lett. 103, 140402 (2009).
4. V. Gurarie, L. Pollet, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer, arXiv:0909.4593v1.
5. M. P. A. Fisher, Physica (Amsterdam) 177A, 553 (1991).
6. A. T. Dorsey and M. P. A. Fisher, Phys. Rev. Lett. 68, 694 (1992).
### Highlighted article
#### Absence of a Direct Superfluid to Mott Insulator Transition in Disordered Bose Systems
L. Pollet, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer
Published September 28, 2009 | PDF (free)
|
http://en.wikipedia.org/wiki/Current_density
|
# Current density
This page is about the electric current density in electromagnetism. For the probability current density in quantum mechanics, see Probability current.
In electromagnetism, and in related fields such as solid state physics and condensed matter physics, current density is the electric current per unit area of cross section. It is defined as a vector whose magnitude is the electric current per cross-sectional area. In SI units, the electric current density is measured in amperes per square metre.[1]
## Definition
Electric current density J is simply the electric current I (SI unit: A) per unit area A (SI unit: m2). Its magnitude is given by the limit:[2]
$J = \lim\limits_{A \rightarrow 0}\frac{I}{A}$
For current density as a vector J, the surface integral over a surface S, followed by an integral over the time duration t1 to t2, gives the total amount of charge flowing through the surface in that time (t2 − t1):
$q=\int_{t_1}^{t_2}\iint_S \bold{J}\cdot\bold{\hat{n}}{\rm d}A{\rm d}t$
The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface. For example, for charge carriers passing through an electrical conductor, the area is the cross-section of the conductor, at the section considered.
The vector area is a combination of the magnitude of the area through which the charge passes, A, and a unit vector normal to the area, $\bold{\hat{n}}$. The relation is $\bold{A} = A \bold{\hat{n}}$.
If the current density J passes through the area at an angle θ to the area normal $\bold{\hat{n}}$, then
$\bold{J}\cdot\bold{\hat{n}}= J\cos\theta$
where · is the dot product of the unit vectors. That is, the component of current density passing through the surface (i.e. normal to it) is J cos θ, while the component of current density tangential to the area is J sin θ; no current density actually passes through the area in the tangential direction. The only component of current density passing through the area is the normal (cosine) component.
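For a steady, uniform current density crossing a flat surface, the double integral above reduces to q = J A cos θ (t2 − t1). A short numerical sketch (all values are illustrative, not from the text):

```python
import math

# Charge through a flat surface for a steady, uniform current density:
# q = J * A * cos(theta) * (t2 - t1), the J·n form of the flux integral.
J = 2.0e6                    # current density magnitude, A/m^2
A = 1.0e-6                   # area, m^2 (1 mm^2)
theta = math.radians(60.0)   # angle between J and the surface normal
t1, t2 = 0.0, 10.0           # seconds

I_through = J * A * math.cos(theta)   # current crossing the surface, ≈ 1.0 A
q = I_through * (t2 - t1)             # total charge, ≈ 10.0 C
print(I_through, q)
```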
## Importance
Current density is important to the design of electrical and electronic systems.
Circuit performance depends strongly upon the designed current level, and the current density then is determined by the dimensions of the conducting elements. For example, as integrated circuits are reduced in size, despite the lower current demanded by smaller devices, there is a trend toward higher current densities to achieve higher device numbers in ever smaller chip areas. See Moore's law.
At high frequencies, current density can increase because the conducting region in a wire becomes confined near its surface, the so-called skin effect.
High current densities have undesirable consequences. Most electrical conductors have a finite, positive resistance, making them dissipate power in the form of heat. The current density must be kept sufficiently low to prevent the conductor from melting or burning up, the insulating material failing, or the desired electrical properties changing. At high current densities the material forming the interconnections actually moves, a phenomenon called electromigration. In superconductors excessive current density may generate a strong enough magnetic field to cause spontaneous loss of the superconductive property.
The analysis and observation of current density also is used to probe the physics underlying the nature of solids, including not only metals, but also semiconductors and insulators. An elaborate theoretical formalism has developed to explain many fundamental observations.[3][4]
The current density is an important parameter in Ampère's circuital law (one of Maxwell's equations), which relates current density to magnetic field.
In special relativity theory, charge and current are combined into a 4-vector.
## Calculation of current densities in matter
### Free currents
Charge carriers which are free to move constitute a free current density, which is given by expressions such as those in this section.
Electric current is a coarse, average quantity that tells what is happening in an entire wire. At position r at time t, the distribution of charge flowing is described by the current density:[5]
$\mathbf{J}(\mathbf{r}, t) = \rho(\mathbf{r},t) \; \mathbf{v}_\text{d} (\mathbf{r},t) \,$
where J(r, t) is the current density vector, vd(r, t) is the particles' average drift velocity (SI unit: m∙s−1), and
$\rho(\mathbf{r}, t)= qn(\mathbf{r},t)$
is the charge density (SI unit: coulombs per cubic metre), in which n(r, t) is the number of particles per unit volume ("number density") (SI unit: m−3), q is the charge of the individual particles with density n (SI unit: coulombs).
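To see the sizes involved in J = ρ v_d, here is a sketch for a copper wire; the carrier density n ≈ 8.5 × 10²⁸ m⁻³ is a textbook value assumed here, not taken from this article:

```python
# Illustrative check of J = rho * v_d for a 1 mm^2 copper wire carrying
# 1 A. The free-electron density of copper is an assumed textbook value.
e = 1.602e-19        # elementary charge, C
n = 8.5e28           # free-electron number density of copper, m^-3
I = 1.0              # current, A
A = 1.0e-6           # cross-section, m^2 (1 mm^2)

J = I / A            # current density magnitude, A/m^2
rho = n * e          # charge density of the carriers, C/m^3
v_drift = J / rho    # average drift speed, m/s

print(J)        # 1.0e6 A/m^2
print(v_drift)  # ~7.3e-5 m/s
```

The drift speed comes out well under a millimetre per second, which is why the huge carrier density n, not the carrier speed, accounts for everyday currents.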
A common approximation to the current density assumes the current simply is proportional to the electric field, as expressed by:
$\mathbf{J} = \sigma \mathbf{E} \,$
where E is the electric field and σ is the electrical conductivity.
Conductivity σ is the reciprocal (inverse) of electrical resistivity and has the SI units of siemens per metre (S m−1), and E has the SI units of newtons per coulomb (N C−1) or, equivalently, volts per metre (V m−1).
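A minimal sketch of J = σE, assuming a handbook conductivity for copper (σ ≈ 5.96 × 10⁷ S/m; this value is not stated in the article):

```python
# Minimal sketch of J = sigma * E with an assumed copper conductivity.
sigma_cu = 5.96e7        # electrical conductivity of copper, S/m
J = 1.0e6                # current density, A/m^2 (1 A through 1 mm^2)

E = J / sigma_cu         # electric field driving that density, V/m
print(E)                 # ~0.017 V/m: ordinary wires need tiny fields
```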
A more fundamental approach to calculation of current density is based upon:
$\mathbf{J} (\mathbf{r}, t) = \int_{-\infty}^t \mathrm{d}t' \int \mathrm{d}^3\mathbf{r}' \; \sigma(\mathbf{r}-\mathbf{r}', t-t') \; \mathbf{E}(\mathbf{r}',\ t') \,$
indicating the lag in response by the time dependence of σ, and the non-local nature of response to the field by the spatial dependence of σ, both calculated in principle from an underlying microscopic analysis, for example, in the case of small enough fields, the linear response function for the conductive behaviour in the material. See, for example, Giuliani or Rammer.[6][7] The integral extends over the entire past history up to the present time.
The above conductivity and its associated current density reflect the fundamental mechanisms underlying charge transport in the medium, both in time and over distance.
A Fourier transform in space and time then results in:
$\mathbf{J} (\mathbf{k}, \omega) = \sigma(\mathbf{k}, \omega) \; \mathbf{E}(\mathbf{k}, \omega) \,$
where σ(k, ω) is now a complex function.
In many materials, for example, in crystalline materials, the conductivity is a tensor, and the current is not necessarily in the same direction as the applied field. Aside from the material properties themselves, the application of magnetic fields can alter conductive behaviour.
### Polarization and magnetization currents
Currents arise in materials when there is a non-uniform distribution of charge.[8]
In dielectric materials, there is a current density corresponding to the net movement of electric dipole moments per unit volume, i.e. the polarization P:
$\mathbf{J}_\mathrm{P}=\frac{\partial \mathbf{P}}{\partial t}$
Similarly with magnetic materials, circulations of the magnetic dipole moments per unit volume, i.e. the magnetization M lead to volume magnetization currents:
$\mathbf{J}_\mathrm{M}=\nabla\times\mathbf{M}$
Together, these terms add up to the bound current density in the material (the resultant current due to movements of electric and magnetic dipole moments per unit volume):
$\mathbf{J}_\mathrm{b}=\mathbf{J}_\mathrm{P}+\mathbf{J}_\mathrm{M}$
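The magnetization term J_M = ∇ × M can be sanity-checked numerically. A minimal sketch, assuming a made-up magnetization M = (0, 0, M₀ y), for which ∇ × M = (M₀, 0, 0) analytically:

```python
# Finite-difference check of J_M = curl M for an assumed magnetization
# M = (0, 0, M0 * y); analytically curl M = (M0, 0, 0).
M0 = 3.0
h = 1e-6  # step for central differences

def M(x, y, z):
    return (0.0, 0.0, M0 * y)

def curl(F, x, y, z):
    # Central differences for all six partial derivatives of F.
    dFz_dy = (F(x, y + h, z)[2] - F(x, y - h, z)[2]) / (2 * h)
    dFy_dz = (F(x, y, z + h)[1] - F(x, y, z - h)[1]) / (2 * h)
    dFx_dz = (F(x, y, z + h)[0] - F(x, y, z - h)[0]) / (2 * h)
    dFz_dx = (F(x + h, y, z)[2] - F(x - h, y, z)[2]) / (2 * h)
    dFy_dx = (F(x + h, y, z)[1] - F(x - h, y, z)[1]) / (2 * h)
    dFx_dy = (F(x, y + h, z)[0] - F(x, y - h, z)[0]) / (2 * h)
    return (dFz_dy - dFy_dz, dFx_dz - dFz_dx, dFy_dx - dFx_dy)

J_M = curl(M, 0.5, 0.2, -1.0)
print(J_M)  # approximately (3.0, 0.0, 0.0)
```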
### Total current in materials
The total current is simply the sum of the free and bound currents:
$\mathbf{J} = \mathbf{J}_\mathrm{f}+\mathbf{J}_\mathrm{b}$
### Displacement current
There is also a displacement current corresponding to the time-varying electric displacement field D:[9][10]
$\mathbf{J}_\mathrm{D}=\frac{\partial \mathbf{D}}{\partial t}$
which is an important term in Ampere's circuital law, one of Maxwell's equations, since absence of this term would not predict electromagnetic waves to propagate, or the time evolution of electric fields in general.
## Continuity equation
Main article: Continuity equation
Since charge is conserved, current density must satisfy a continuity equation. Here is a derivation from first principles.[11]
The net flow out of some volume V (which can have an arbitrary shape but fixed for the calculation) must equal the net change in charge held inside the volume:
$\int_S{ \mathbf{J} \cdot \mathrm{d}\mathbf{A}} = -\frac{\mathrm{d}}{\mathrm{d}t} \int_V{\rho \; \mathrm{d}V} = - \int_V{ \frac{\partial \rho}{\partial t}\;\mathrm{d}V}$
where ρ is the charge density, and dA is a surface element of the surface S enclosing the volume V. The surface integral on the left expresses the current outflow from the volume, and the negatively signed volume integral on the right expresses the decrease in the total charge inside the volume. From the divergence theorem:
$\int_S{ \mathbf{J} \cdot \mathrm{d}\mathbf{A}} = \int_V{\mathbf{\nabla} \cdot \mathbf{J}\; \mathrm{d}V}$
Hence:
$\int_V{\mathbf{\nabla} \cdot \mathbf{J}\; \mathrm{d}V}\ = - \int_V{ \frac{\partial \rho}{\partial t} \;\mathrm{d}V}$
This relation is valid for any volume, independent of size or location, which implies that:
$\nabla \cdot \mathbf{J} = - \frac{\partial \rho}{\partial t}$
and this relation is called the continuity equation.[12][13]
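The differential form of the continuity equation can be verified numerically. A one-dimensional sketch with an assumed pair $\rho(x,t)=e^{-t}\sin x$ and $J(x,t)=-e^{-t}\cos x$, which satisfies $\partial J/\partial x = -\partial\rho/\partial t$ exactly:

```python
import math

# 1-D continuity check: dJ/dx + drho/dt should vanish for the assumed
# pair rho(x,t) = exp(-t) sin(x), J(x,t) = -exp(-t) cos(x).
def rho(x, t):
    return math.exp(-t) * math.sin(x)

def J(x, t):
    return -math.exp(-t) * math.cos(x)

h = 1e-5
x0, t0 = 0.7, 0.3
div_J = (J(x0 + h, t0) - J(x0 - h, t0)) / (2 * h)     # dJ/dx
drho_dt = (rho(x0, t0 + h) - rho(x0, t0 - h)) / (2 * h)

print(div_J + drho_dt)  # ~0: the continuity equation holds
```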
## In practice
In electrical wiring, the maximum current density can vary from 4A∙mm−2 for a wire with no air circulation around it, to 6A∙mm−2 for a wire in free air. Regulations for building wiring list the maximum allowed current of each size of cable in differing conditions. For compact designs, such as windings of SMPS transformers, the value might be as low as 2A∙mm−2.[14] If the wire is carrying high frequency currents, the skin effect may affect the distribution of the current across the section by concentrating the current on the surface of the conductor. In transformers designed for high frequencies, loss is reduced if Litz wire is used for the windings. This is made of multiple isolated wires in parallel with a diameter twice the skin depth. The isolated strands are twisted together to increase the total skin area and to reduce the resistance due to skin effects.
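Those allowed densities translate directly into minimum wire sizes. A back-of-envelope sketch using the ~6 A∙mm−2 free-air figure from the text (the 10 A design current is an assumption for illustration):

```python
import math

# Wire sizing from an allowed current density: the 6 A/mm^2 figure is
# the free-air limit quoted in the text; the 10 A load is made up.
I = 10.0             # design current, A
J_max = 6.0          # allowed current density, A/mm^2

A_min = I / J_max                        # minimum cross-section, mm^2
d_min = 2 * math.sqrt(A_min / math.pi)   # minimum diameter, mm

print(A_min)  # ~1.67 mm^2
print(d_min)  # ~1.46 mm
```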
For the top and bottom layers of printed circuit boards, the maximum current density can be as high as 35A∙mm−2 with a copper thickness of 35 µm. Inner layers cannot dissipate as much heat as outer layers; designers of circuit boards avoid putting high-current traces on inner layers.
In semiconductors, the maximum current density is given by the manufacturer. A common average is 1mA∙µm−2 at 25°C for 180 nm technology. Above the maximum current density, apart from the Joule effect, other effects such as electromigration appear at the micrometre scale.
In biological organisms, ion channels regulate the flow of ions (for example, sodium, calcium, potassium) across the membrane in all cells. Current density is measured in pA∙pF−1 (picoamperes per picofarad), that is, current divided by capacitance, a de facto measure of membrane area.
In gas discharge lamps, such as flashlamps, current density plays an important role in the output spectrum produced. Low current densities produce spectral line emission and tend to favour longer wavelengths. High current densities produce continuum emission and tend to favour shorter wavelengths.[15] Low current densities for flash lamps are generally around 1000A∙cm−2. High current densities can be more than 4000A∙cm−2.
## References
1. Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3
2. Richard P Martin (2004). Electronic Structure: Basic theory and practical methods. Cambridge University Press. ISBN 0-521-78285-6.
3. Alexander Altland & Ben Simons (2006). Condensed Matter Field Theory. Cambridge University Press. ISBN 978-0-521-84508-3.
4. Gabriele Giuliani, Giovanni Vignale (2005). Quantum Theory of the Electron Liquid. Cambridge University Press. p. 111. ISBN 0-521-82112-6.
5. Jørgen Rammer (2007). Quantum Field Theory of Non-equilibrium States. Cambridge University Press. p. 158. ISBN 0-521-87499-8.
6. Tai L Chow (2006). Introduction to Electromagnetic Theory: A modern perspective. Jones & Bartlett. pp. 130–131. ISBN 0-7637-3827-1.
7. Griffiths, D.J. (1999). Introduction to Electrodynamics (3rd ed.). Pearson/Addison-Wesley. p. 213. ISBN 0-13-805326-X.
http://polymathprojects.org/2009/10/27/research-thread-v-determinstic-way-to-find-primes/
# The polymath blog
## October 27, 2009
### (Research thread V) Deterministic way to find primes
Filed under: finding primes,research — Terence Tao @ 10:25 pm
It’s probably time to refresh the previous thread for the “finding primes” project, and to summarise the current state of affairs.
The current goal is to find a deterministic way to locate a prime in an interval $[z,2z]$ in time that breaks the “square root barrier” of $\sqrt{z}$ (or more precisely, $z^{1/2+o(1)}$). Currently, we have two ways to reach that barrier:
1. Assuming the Riemann hypothesis, the largest prime gap in $[z,2z]$ is of size $z^{1/2+o(1)}$. So one can simply test consecutive numbers for primality until one gets a hit (using, say, the AKS algorithm, any number of size $z$ can be tested for primality in time $z^{o(1)}$).
2. The second method is due to Odlyzko, and does not require the Riemann hypothesis. There is a contour integration formula that allows one to write the prime counting function $\pi(z)$ up to error $z^{1+o(1)}/T$ in terms of an integral involving the Riemann zeta function over an interval of length $O(T)$, for any $1 \leq T \leq z$. The latter integral can be computed to the required accuracy in time about $z^{o(1)} T$. With this and a binary search it is not difficult to locate an interval of width $z^{1+o(1)}/T$ that is guaranteed to contain a prime in time $z^{o(1)} T$. Optimising by choosing $T = z^{1/2}$ and using a sieve (or by testing the elements for primality one by one), one can then locate that prime in time $z^{1/2+o(1)}$.
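As a concrete (but unoptimised) illustration of the search loop in method 1, the sketch below scans upward from z and tests each candidate. A deterministic Miller–Rabin test with a fixed base set stands in for AKS here; that base set is known to suffice for all n below roughly 3 × 10²³:

```python
# Sketch of method 1: scan upward from z, testing each candidate.
# Deterministic Miller-Rabin with the fixed bases below stands in for
# AKS; this base set is proven sufficient for n < ~3e23.
_BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    if n < 2:
        return False
    for p in _BASES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:       # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in _BASES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False    # a witnesses compositeness
    return True

def find_prime_by_scan(z):
    """First prime >= z (one exists below 2z by Bertrand's postulate)."""
    n = z
    while not is_prime(n):
        n += 1
    return n

print(find_prime_by_scan(1000))  # 1009
```

Under RH the scan visits only $z^{1/2+o(1)}$ candidates in the worst case, each tested in polylogarithmic time, matching the square root barrier.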
Currently we have one promising approach to break the square root barrier, based on the polynomial method, but while individual components of this approach fall underneath the square root barrier, we have not yet been able to get the whole thing below (or even matching) the square root. I will sketch the approach (as far as I understand it) below; right now we are needing some shortcuts (e.g. FFT, fast matrix multiplication, that sort of thing) that can cut the run time further.
– The polynomial method –
The polynomial method begins with the following observation: in order to quickly find a prime in $[z,2z]$, it suffices to be able to quickly solve the prime decision problem: given a subinterval $[a,b]$ of $[z,2z]$, decide whether such an interval contains a prime or not. If one can solve this problem in, say, $z^{0.499+o(1)}$ time, then one can find a prime in this time also by binary search.
Actually, using Odlyzko’s method we can already narrow down to an interval of length $z^{0.501+o(1)}$ with a lot of primes in it in $z^{0.499+o(1)}$ time, so we only need to break the square root barrier for the decision problem for intervals $[a,b]$ of length $z^{0.501+o(1)}$ or less.
The decision problem is equivalent to determining whether the prime polynomial
$f(x) := \sum_{a \leq p \leq b} x^p$ (1)
is non-trivial or not, where $p$ ranges over primes in the interval $[a,b]$.
Now, the prime polynomial, as it stands, has a high complexity; the only obvious way to compute it is to enumerate all the primes from $a$ to $b$, which could take $z^{0.501+o(1)}$ time in the worst case. But we can improve matters by working modulo 2; note that as the coefficients of $f$ are either 1 or 0, it suffices to decide whether $f$ is non-trivial modulo 2.
The reason we do this is the observation that if $n$ is a natural number, then the number of solutions to the diophantine equation $n=lm$ with $1 \leq l < m$ is odd when n is prime, and usually even when n is composite. (There are some rare exceptions to this latter fact, when n contains square factors, but it seems likely that one can deal with these latter cases by Möbius inversion, exploiting the convergence of the sum $\sum_{d=1}^\infty \frac{1}{d^2}$.) So, the prime polynomial f modulo 2 is morally equal to the variant polynomial
$\tilde f(x) := \sum_{1 \leq l < m: a \leq lm \leq b} x^{lm}.$ (2)
So a toy problem would be to decide whether (2) was non-zero modulo 2 or not in time $z^{0.499+o(1)}$ or better.
The reason that (2) is more appealing than (1) is that the primes have disappeared from the problem. Instead, one is computing a sum over a fairly simple region $\Omega := \{ (l,m): 1 \leq l < m, a \leq lm \leq b \}$ bounded by two hyperbolae and two lines.
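The parity claim behind (2) is easy to check by brute force for small intervals. The sketch below enumerates the lattice points of $\Omega$ directly and compares the mod-2 coefficient of $x^n$ against primality of $n$:

```python
# Brute-force check of the parity observation behind (2): enumerate the
# lattice points (l, m) of Omega with 1 <= l < m and a <= l*m <= b, and
# compare the mod-2 coefficient of x^n against primality of n.
a, b = 20, 40

coeff_parity = {n: 0 for n in range(a, b + 1)}
for m in range(2, b + 1):
    for l in range(1, m):
        if a <= l * m <= b:
            coeff_parity[l * m] ^= 1    # work modulo 2

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(a, b + 1):
    tag = "prime" if is_prime(n) else "composite"
    print(n, coeff_parity[n], tag)
# Primes (23, 29, 31, 37) come out odd and squarefree composites come
# out even; composites with square factors (e.g. 20, 25, 28) are the
# "rare exceptions" noted above and may come out odd.
```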
The point is now this: if $\tilde f(x)$ vanishes modulo 2, then it also vanishes modulo $(2,g(x))$ for any low-degree polynomial g (degree $z^{0.01+o(1)}$ or better), and more generally $\tilde f(x^n)$ vanishes modulo $(2,g(x))$. Conversely (if one is lucky), if $\tilde f(x^n)$ vanishes modulo $(2,g(x))$ for sufficiently many $n, g$, then it should be that $\tilde f$ vanishes. So this leads to the following strategy:
• Goal 1: Find a collection of $n,g$ such that if $\tilde f(x^n)$ vanishes modulo $(2,g(x))$ for all the pairs $n,g$, then $\tilde f$ vanishes modulo 2.
• Goal 2: Find a way to decide whether $\tilde f(x^n)$ vanishes modulo $(2,g(x))$ for all the required $n,g$ in time $z^{0.499+o(1)}$ or better.
One way to achieve Goal 1 is to forget about $n$, and choose the $g$ so that the least common multiple of all the $g$ (modulo 2) cannot divide $\tilde f$. Since $\tilde f$ is basically a polynomial of degree $z^{0.501+o(1)}$ shifted by a monomial, one obvious way to proceed would be to pick more than $z^{0.501+o(1)}$ polynomials g. But then it looks unlikely that one can beat the square root barrier in Goal 2. Similarly if one varies n as well as g.
On the other hand, we have a partial result in Goal 2: for any fixed n and g, we can compute $\tilde f(x^n) \hbox{ mod } (2, g(x))$ in time below the square root barrier, e.g. in time $z^{0.49+o(1)}$. For instance, setting n=1 and $g(x)=x-1$, we can compute $\tilde f(1)$ in this time. Unfortunately a single n,g is not nearly enough to solve Goal 2 yet, so we either need a further advance on Goal 1, or some very efficient way to test non-vanishing of $\tilde f(x^n)$ modulo (2,g) for many pairs (n,g) at a time (e.g. by an FFT type approach).
The partial result is based on the fact that $\tilde f(x)$ has an arithmetic circuit complexity below the square root level, i.e. it can be expressed in terms of $O(z^{0.48+o(1)})$ (say) arithmetic operations. As such, for any low-degree g (say degree $O(z^{0.01+o(1)})$), $\tilde f(x^n) \mod (2,g(x))$ can be computed in $O(z^{0.49+o(1)})$ time (using fast multiplication for mod g arithmetic if necessary).
Let’s sketch how the circuit complexity result works. Recall that (2) is a sum over the geometric region $\Omega$. Using the geometric series formula, one can convert this sum over $\Omega$ to a sum over what is basically the boundary of $\Omega$. This boundary has $O( \sqrt{z} )$ points, so this shows that $\tilde f$ has an arithmetic circuit complexity of $z^{1/2}$ already. But one can do better by using the Farey sequence to represent the discrete hyperbolae that bound $\Omega$ by line segments. The sum over each line segment is basically a quadratic sum of the form $\sum_{n=n_0}^{n_1} x^{a n^2 + bn + c}$ for various coefficients $a,b,c$. It seems that one can factorise this sum as a matrix product and use ideas from the Strassen fast multiplication algorithm to give this a slightly better circuit complexity than the crude bound of $O( n_1 - n_0 )$; see these notes; there may also be other approaches to computing this quickly (e.g. FFT).
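The $O(\sqrt{z})$-boundary counting is easiest to see at the single point $x=1$, where evaluating $\tilde f(1)$ mod 2 reduces to counting the lattice points of $\Omega$. A sketch using Dirichlet's hyperbola method, which needs only $O(\sqrt{N})$ arithmetic operations rather than one per lattice point:

```python
import math

# Evaluating f~ at x = 1 mod 2 is the parity of |Omega|. Dirichlet's
# hyperbola method counts the lattice points under l*m <= N in
# O(sqrt(N)) steps instead of one step per point.
def divisor_sum(N):
    """D(N) = sum of tau(n) for 1 <= n <= N, via the hyperbola method."""
    if N <= 0:
        return 0
    r = math.isqrt(N)
    return 2 * sum(N // i for i in range(1, r + 1)) - r * r

def pairs_l_lt_m(N):
    """#{(l, m): 1 <= l < m, l*m <= N}; the squares l = m are removed."""
    return (divisor_sum(N) - math.isqrt(N)) // 2

a, b = 20, 40
omega_size = pairs_l_lt_m(b) - pairs_l_lt_m(a - 1)
print(omega_size, omega_size % 2)  # |Omega| and the parity f~(1) mod 2
```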
Where we’re still stuck right now is scaling up this single case of Goal 2 to the more general case we need. Alternatively, we need to strengthen Goal 1 by cutting down substantially the number of pairs (n,g) we need to test…
## 42 Comments »
1. [...] http://polymathprojects.org/2009/10/27/research-thread-v-determinstic-way-to-find-primes/ [...]
Pingback by — October 28, 2009 @ 5:08 pm
2. Regarding writing the prime generating function $f(X)$ as a sum $\sum_i \alpha_i(X)\beta_i(X)$, where the $\alpha_i$ and $\beta_i$ are sparse polynomials (say, $m$ terms each): I had a conversation with a friend of mine this past weekend who is an expert on real algebraic geometry, and who was in town for the FOCS conference here at Georgia Tech. He said that in the $R[x]$ context (the $\alpha_i, \beta_i$ are polynomials with real coefficients) this problem is a well-studied in terms of trying to generalize Descarte’s Rule of Sign. If a single polynomial has only $m$ terms, Descarte’s rule implies that it can have at most $O(m)$ non-zero real roots, and apparently it is a studied open problem to extend this to a sum of products of two sparse polynomials like we have (but in $R[x]$). Perhaps there are some techniques that algebraic geometers have developed that would be useful in bounding the number of roots of our analogous polynomials in $F_2[x]$, or at least good conjectures. I didn’t get the chance to ask him what is known on this problem, but I will soon, and report back if I discover anything on it…
Incidentally, *he* initiated the discussion of this problem — not me — as it pertained to one of the FOCS talks on arithmetic circuit complexity.
Comment by Ernie Croot — October 28, 2009 @ 6:40 pm
• Oops… I meant to say “bounding the number of roots… among $1, x, x^2, \dots, x^N$, $N \sim z^{0.49}$, in $F_2[x]/(2,g(x))$”.
Comment by Ernie Croot — October 28, 2009 @ 7:01 pm
• It seems that the result on Descartes Rule uses special properties of the reals, and probably won’t extend to the complex (or finite field) setting. Here is the problem I asked my friend:
—-
Problem. Suppose that $f_1, \dots, f_k, g_1, \dots, g_k$ are all polynomials
in $F_2[X]$ having $m$ terms each. Let $g(X)$ be an irreducible polynomial,
such that the order of $x \pmod{2, g(x)}$ is at least $m^2 k^2$, say. Let
$f(X) = f_1(X)g_1(X) + \dots + f_k(X)g_k(X).$
Show that $f(1), f(x), f(x^2), \dots, f(x^{2mk})$ can’t all be $0$ in $F_2[X]/(g(x))$.
—-
And here was his response:
One remark about the problem. The fact that there exists any bound at all in terms of m and k in the real case has to do with the fact that we are bounding the number of real roots. Of course, no such bounds exist for complex roots — this makes me suspect that standard tools of algebraic geometry are probably not very useful in this context.
An exponential bound (in m as well as k) in the real case follows from Khovansky’s theory of Pfaffian functions (you might want to look at the book “Fewnomials” by Khovansky).
But I do not think it can have applications in the finite field case where an exponential bound is probably obvious.
Comment by Ernie Croot — October 29, 2009 @ 10:19 pm
3. I am not sure if this would help, but would an approach based on derandomizing a probabilistic algorithm for finding primes help? I attended Midwest Theory Day and it seemed it may apply.
http://pages.cs.wisc.edu/~jkinne/research/KvMS_full_manuscript.pdf
Comment by Joshua Herman — December 17, 2009 @ 4:01 am
4. [...] then we had Polymath2 related to Tsirelson spaces in Banach space theory , an intensive Polymath4 devoted to deterministically finding primes that took place on a special polymathblog, a [...]
Pingback by — March 17, 2010 @ 7:34 pm
5. I’ve discussed this problem briefly with Noam Elkies (on the train) and Hendrik Lenstra. Basically, if I understood correctly, Elkies thinks that the sort of combinatorial argument I gave for evaluating $\sum_{n \le N} \tau(n)$ quickly is essentially familiar to some experts (especially Lenstra), but Lenstra isn't sure that it isn't new as such.
—
This project seems to be dead now, or at least in a coma. What shall we do?
Comment by Harald — April 23, 2010 @ 8:15 am
• Hmm, good question. The project has certainly made definite progress, by identifying the right question (namely, breaking the square root barrier) and identifying a promising approach (to determine whether pi(b)-pi(a) is non-zero). And we are able to compute the mod 2 residue of this quantity pi(b)-pi(a) below the square root barrier, thanks to your observation and Ernie’s manipulations, and it is tempting to push this F_2 calculation up to F_2[x], but at that point we seem to require a miraculous identity which we do not currently possess.
One possibility is to pass the torch by writing up the parity calculation and some of its variants (as well as some of the counterexamples and other auxiliary results achieved along the way), and speculate on a future strategy. This of course is less satisfying than a full solution, but this is the nature of the game, of course…
Comment by — April 23, 2010 @ 4:52 pm
• Harald, sorry but I don’t check the polymath blog all that often (about once every week or two now). I think the problem of quickly evaluating sum of tau(n) is probably _not that_ new, in the sense that it is the -type of thing- known to experts… but I’m always hearing that from people about this or that idea I tell them, and then at the end of the day, it turns out to really be new… so, I would be skeptical of what Elkies says. In any case, that is only _one part_ of our overall algorithms for locating primes quickly — the totality of all our ideas are certainly new, and should probably be written up
Certainly, our ideas applied to the F_2[x] case (quickly evaluate the generating function) are fairly complex, and should be written up, even if we don’t produce a fast algorithm to find primes (I, however, am very bad at writing up ideas — it takes me _forever_ to write things up, and even then they are not easy for people to understand. If we decide to write up the ideas, I would prefer not to be the person doing the actual writing… I will, however, help out by reading the draft, etc.)
…
I am still hopeful that we can make progress on the problem, and I have delayed working on it to focus on other things (principally writing up papers with Olof Sisask). I plan to work on it with some REU students this summer — actually maybe starting next week.
As I told you in an email, one extension of the ideas we have so far is to go to matrix rings, instead of F_2[x]: Basically, fix an n x n matrix A with entries in F_2[x], and then compute the prime generating function f at A — i.e. f(A) (mod 2, g(x)), for some low-degree g. Whereas it may take lots of evaluations in order to guarantee non-vanishing of f(x^j) (mod 2, g(x)), it may not take all that many when we use A — i.e. might only need f(A^j) (mod 2, g(x)) for a small number of j. I suppose one could diagonalize A (or put it into Jordan form) and convert the problem to polynomials, but perhaps if A has some special symmetry we can speed up the computations somewhat. That is one of the ideas I haven’t mentioned on the blog yet that I would like to explore.
Might there be some other structures we could use, besides matrix rings, where we can be guaranteed a huge speedup?
Comment by Ernie — April 27, 2010 @ 6:30 pm
• “other structures” e.g.: Use a Lie Algebra. I suppose the Lie bracket isn’t associative in general, so maybe that won’t work. What about endomorphism rings of elliptic curves?
Comment by Ernie — April 27, 2010 @ 7:06 pm
• These are interesting ideas, and would improve the “Goal 2″ side of things.
In my opinion, though, the bigger stumbling block right now is “Goal 1″, which I don’t see how to make good progress on. A good toy problem is this: suppose we have a polynomial f of degree $x^{0.501+o(1)}$ (say), and we want to know whether f mod 2 is non-zero or not. Let’s suppose that f has fairly small circuit complexity, e.g. $x^{0.1+o(1)}$ (to be very optimistic). This allows us to evaluate $f(x) \hbox{ mod } 2, g(x)$ or $f(A) \hbox{ mod } 2, g(x)$ reasonably quickly for any fixed $g$ or $A$ of low degree (e.g. degree at most $x^{0.1+o(1)}$). But I don’t see how to block the possibility that f is non-vanishing, but just happens to be divisible by all the g’s (and all the minimal polynomials of A’s) that we happen to test against, unless we use more than $x^{0.5}$ time or memory.
One amusing possibility here is to use P=BPP as a hypothesis. We figured out several months ago that P=BPP and a primality oracle were not sufficient, by themselves, to solve the problem; but P=BPP does allow for quick polynomial identity testing, which is basically the situation we’re in now…
Comment by — April 27, 2010 @ 8:46 pm
• Ah, my last comment was rubbish: we have a BPP algorithm to decide whether an interval contains a prime or not that runs in time $x^{0.499+o(1)}$, but to convert that algorithm into a P algorithm may well push one back over the square root barrier again.
Comment by — April 27, 2010 @ 9:17 pm
• First, let me say I don’t have time to comment much right now… maybe in a few days. But about Goal 1: I think if one could show that f(x^j) (mod 2, g(x)) doesn’t vanish for _some_ j up to $n^{0.51}$, then one can produce a k x k matrix A so that some f(A^j) doesn’t vanish where $j < n^{0.51}/k$. So, you can get a reduction by a factor of k, but at the expense of having to work with matrices, instead of polynomials. Maybe something will come of that.
And about circuit complexity: The particular circuits we use aren't just of “low complexity”, but they are also of “low depth”. That might make a huge difference. As I have stated in comment 2 above, there might be a way to represent our prime generating function efficiently as a sum of products of sparse polynomials. If so, then maybe there are some algebraic-geometric tricks that can be used to get a handle on the non-vanishing (as in Saugata Basu's comments about Pfaffians).
Comment by Ernie — April 28, 2010 @ 4:22 pm
• Ernie,
I think you would be everybody a great service by writing up a quick, clear explanation of what you have (or don’t have) on F_2[x]. Let’s leave matrix rings for later. The details can then be written up by other people (= us), provided that we can bug you.
Comment by Harald — April 30, 2010 @ 12:24 pm
• There is a start on this at
http://michaelnielsen.org/polymath1/index.php?title=Polynomial_strategy
and see also
http://people.math.gatech.edu/~ecroot/fast_strassen.pdf
I don’t know if things have advanced much beyond the stage when these notes were written.
Comment by — April 30, 2010 @ 3:52 pm
6. Here’s one idea that occurred to me some time ago (in fact, it was inspired by this mathoverflow comment: http://mathoverflow.net/questions/3820/how-hard-is-it-to-compute-the-number-of-prime-factors-of-a-given-integer/10062#10062).
We now know how to compute the parity of $\pi(x)$ in time $x^{.45}$. We would be done (by binary search) if we could quickly find an interval $[x,2x]$ that contains an odd number of primes. Of course, this seems difficult.
It seems, although I haven’t attempted to work this out, that we should be able to run a twisted version of the above algorithm to compute $\pi(x,a,q)$ also in time around $x^{.45}$ (with some reasonable dependence on $q$.) If this works, it then suffices to find some arithmetic progression with small modulus (say $q \leq x^{.01}$) such that its intersection with the interval $[x,2x]$ contains an odd number of primes.
It seems that this problem might be amenable to sieve theory. Specifically, it should suffice to show something like:
$\sum_{q \leq x^{.01}} \sum_{(a,q)=1} (\pi(x,a,q) \bmod 2 - 1)^2 \ll x^{o(.02)}$.
Now I'm not sure how to go about dealing with the sum on the left. One idea is to expand out the square and try to rewrite the terms involving $\pi(x,a,q) \bmod 2$ using twisted divisor sums (as used in the algorithm above) and then hope that finite Fourier analysis and the large sieve could be used.
Comment by — April 28, 2010 @ 9:56 pm
• The sieve inequality I suggested above is nonsense since we should expect the summand on the left to be 1 or 0 about half the time. Although, it may still be useful to try to show that some AP with small modulus has an odd number of primes.
Comment by — April 28, 2010 @ 11:34 pm
• Well, Ernie’s showed that the prime counting polynomial $\sum_{a \leq p \leq b} x^p \hbox{ mod } 2, g(x)$ can be computed in subquadratic time if the gap between $b$ and $a$ is say at most $N^{0.51}$ and g has degree at most $N^{0.01}$. In particular, setting $g(x)=x^q-1$ we get the residue counting functions $\pi(x,a,q)$ as coefficients.
Unfortunately, probabilistic heuristics suggest that it is indeed possible to have a dense subset of (say) $[N, N+N^{0.51}]$ which has an even number of elements in every residue class $a \hbox{ mod } q$ with $q \leq N^{0.01}$, since a random set will satisfy each of the $O(N^{0.02})$ conditions with probability about 1/2, and there are $2^{N^{0.51}}$ possibilities. Of course, the primes themselves only occupy an extremely small portion of the configuration space, and heuristically the above scenario should not actually occur for the prime counting function, but I have no idea how to formally prove this.
Comment by — April 30, 2010 @ 3:58 pm
7. I have found 3 algorithms which determine primes without knowledge of previous
primes. Try these with an efficient program with which you are comfortable. I
use Fortran, a 1990 compiler, and run from DOS (the command prompt). Very fast.
Comment by — May 5, 2010 @ 7:56 pm
8. Things seem to have ground to a halt. I do think that what we’ve got is interesting and deserves to be recorded, though; it should all be made into a short paper. Given that it will presumably be a collective work, wouldn’t it make sense for a third person (with the help of fourth or fifth persons…) to take care of some of the writing and the final editing?
I do sense Ernie has several interesting things to say that aren’t entirely clear to me yet, though. Anything cleanly stated coming from that direction would be deeply appreciated.
Comment by Harald Helfgott — June 10, 2010 @ 2:33 pm
• I think I’m inclined to agree that it may indeed be best to publish what we have. I wrote up a tentative outline of what the short paper could contain at
http://michaelnielsen.org/polymath1/index.php?title=Finding_primes#Outline_of_possible_paper
I think we could have an appendix doing Odlyzko's method to localise $\pi(x)$ to accuracy $O(x^{0.501})$ in a bit more detail, as the references are not so well known. I was thinking of starting with the more elementary discussion of computing the parity of $\pi(b)-\pi(a)$, and then moving to the more general discussion of the circuit complexity of $\sum_{a < p < b} t^p \bmod 2$; strictly speaking the latter subsumes the former, but I think for pedagogical reasons it would be good to discuss the former first (and this is how we found this approach historically).
As you know, I've got a number of other writing obligations to deal with in the near future, but I might be able to get a quick skeleton of such a paper in the near future. I was thinking to use Subversion to manage the task of editing the paper collectively; the previous approach of putting the raw LaTeX files on the wiki for Polymath1 was very clunky, and my experience so far with Subversion has been quite positive. It does require a small amount of software setup on each contributor's end, though.
Comment by — June 11, 2010 @ 12:25 am
• OK, I started writing an outline of a possible paper. Right now it has an introduction and several of the easier observations, and begins to touch the parity of pi(x) stuff, but I didn’t write anything on the more recent stuff on the prime polynomial mod 2, g. The LaTeX files can be found at
http://www2.xp-dev.com/sc/browse/86755/
If any of you are interested in working on the paper, if you can get an (free) xp-dev account and email me then I can add you to the list of editors on the project (though you’ll need to download subversion in order to check out a local copy of the paper and edit it on your own computer, see
http://sbseminar.wordpress.com/2008/06/18/subverting-the-system/
). Or else we can improvise by email or something.
Comment by — June 12, 2010 @ 5:34 am
• I signed up for a Subversion account and sent you an email.
Comment by — June 13, 2010 @ 7:38 pm
9. I have not had the chance to meet with my REU students yet to discuss polymath4. That should happen this week (one of my students was away in Hungary, so we agreed not to meet until June). I did type up a short note explaining in a little more detail how the polynomial approach works; unfortunately, I cannot put it on my website just now, as my password expired (I need to go in to the dept. and change it from my office — I am away from my office right now). Too bad wordpress doesn’t allow file attachments. Let me instead try to write it out here… perhaps you can cut-and-paste-and-latex it:
\documentclass[12pt]{article}
\usepackage{amssymb}
\newcommand{\F}{{\mathbb F}}
\newcommand{\eps}{\varepsilon}
\title{The generating function of primes mod 2}
\begin{document}
\maketitle
\section{Introduction}
Fix a polynomial $f(x) \in \F_2[x]$ of degree $d$. We wish to compute
$$
\sum_{p \leq N} x^p \pmod{2,f(x)}
$$
quickly. We will explain how to do this (actually, a related problem --- we confine to short intervals)
using at most $N^{1/2-\eps}$ or so bit operations.
\bigskip
Using the obvious generalization of our work for computing the parity of $\pi(x)$, it suffices to
show that we can compute
$$
\sum_{n \leq N} \tau(n) x^n \pmod{2,f(x)}
$$
in time $N^{1/2-\delta}$, for some $\delta > 0$. The larger $\delta > 0$ is in our algorithm, the larger
$\eps > 0$ will be.
\bigskip
Now what's the idea? Well, let's first think about the idea for the parity of $\pi(x)$: Recall that
what we essentially had to do was compute
\begin{equation} \label{squareroot_sum}
\sum_{n \leq \sqrt{N}} \lfloor N/n \rfloor
\end{equation}
substantially faster than the trivial algorithm, which takes time $N^{1/2 + o(1)}$. The key observation
was that we can compute the sum of fractional parts
$$
\sum_{x \leq n \leq x+q} \{N/n\},
$$
to within an error of $1/2$, for certain $x$ and $q$, very quickly --- essentially time $O(1)$, which is faster by a factor of $q$
over the trivial algorithm. What allows us to do this is the fact that $N/n$ can be ``linearized'', in the sense that
$$
|\{N/n\} - \{N/x - at/q\}|\ <\ 1/2q,
$$
where $n = x+t$, and where $N/x^2 \sim a/q$.
So really all we need to compute is
$$
\sum_{x \leq n \leq x+q} \{N/x - at/q\}.
$$
(And, we need to fuss over certain exceptional cases.) And this is easily handled.
The analogue of the sum (\ref{squareroot_sum}) in the polynomial context is
\begin{equation} \label{insteadof}
\sum_{d \leq \sqrt{N}} \sum_{m \leq N/d} x^{dm}\ \pmod{2,f(x)}.
\end{equation}
Now of course we can use the geometric series formula (or other identities) to compute this in time
$N^{1/2+o(1)}$, so, as in the non-polynomial case, that is the bound to beat.
Of course evaluating {\it that} sum quickly would be overkill as far as locating primes quickly.
Remember that upon using Odlyzko's algorithm, all we {\it really} need to be able to do is
to locate a prime quickly inside an interval $[N - N^{0.51}, N]$, since we can always localize
to some interval like that containing primes, using only $N^{0.49}$ operations. The sort of sum
we would need to evaluate in order to compute this ``short interval'' generating function is
\begin{equation} \label{dN}
\sum_{d \leq \sqrt{N}} \sum_{(N - N^{0.51})/d \leq m \leq N/d} x^{dm} \pmod{2,f(x)},
\end{equation}
since it's not hard to imagine that computing sums such as this can be used to compute
$$
\sum_{N-N^{0.51} < n \leq N} \tau(n) x^n \pmod{2,f(x)}
$$
efficiently, which can be used to compute
$$
\sum_{N-N^{0.51} < p \leq N \atop p\ {\rm prime}} x^p \pmod{2,f(x)}
$$
efficiently.
And as a bonus, it turns out that upon confining ourselves to such short intervals (width $N^{0.51}$) we can beat
the trivial algorithm!
\bigskip
But now how do we do this --- how do we beat the trivial bound of $N^{1/2+o(1)}$ for computing (\ref{dN})
efficiently (using the geometric series formula)? Well, the first thing to do is to split off the small $d$'s from the rest; that is,
we just compute
$$
\sum_{d \leq N^{1/2-\delta}} \sum_{(N-N^{0.51})/d \leq m \leq N/d} x^{dm} \pmod{2,f(x)}
$$
using the geometric series formula. So far, we have only consumed $N^{1/2-\delta+o(1)}$ operations
(OK, we lost a $N^{o(1)}$ --- not important; we can just change $\delta$ to compensate). What remains, then, is
\begin{equation} \label{dm51}
\sum_{N^{1/2-\delta} < d \leq N^{1/2}} \sum_{(N-N^{0.51})/d \leq m \leq N/d} x^{dm} \pmod{2,f(x)}.
\end{equation}
\end{equation}
Now of course we don't have a function like $\lfloor N/n \rfloor$ to play with that we can linearize. But we {\it can}
linearize {\it something} here: The first thing to note is that even if we just added every term together, and didn't
use the geometric series formula, we would get an upper bound of
$$
N^{1/2+o(1)} \cdot (N^{0.51}/N^{1/2-\delta})\ =\ N^{0.51+\delta+o(1)}
$$
for the running time. So, all we have to do is to improve this by a factor $N^{0.1+\delta+o(1)}$, and we are in business.
Now we need an observation: Notice that if
$$
m/d\ =\ a/q + O(1/q^2),
$$
then
$$
|dm - (d-tq)(m+ta)|\ \ll\ t(d/q) + t^2 aq,
$$
which is quite a bit smaller than $dm$, provided that $q$ is much smaller than $d$, and provided that $t$ isn't
``too big''. So, if $dm$ were not too near the endpoints of the interval $[N-N^{0.51}, N]$, then we would expect that
for ``$t$ not too big'', $(d-tq)(m+ta)$ is also in that interval.
What this allows us to do is to take the sum in (\ref{dm51}), and decompose it into a bunch of sums that look like
\begin{equation} \label{t0t1}
\sum_{t_0 \leq t \leq t_1} x^{(d-tq)(m+ta)} \pmod{2,f(x)}.
\end{equation}
Okay... but how does this help? Well, that's where another trick comes in: It turns out that there is a way to convert
the problem of evaluating sums such as this one into a polynomial interpolation problem, and then one can apply
Strassen's algorithm. Once the dust has settled, one arrives at an algorithm to evaluate (\ref{t0t1}) in time at
most $(t_1 - t_0)^{0.9 + o(1)}$ --- so, there is a saving by a factor $(t_1 - t_0)^{0.1}$, which is all we need.
What about this polynomial interpolation algorithm? Well, you can find it by going to the following note:
\begin{verbatim}
http://www.math.gatech.edu/~ecroot/fast_strassen.pdf
\end{verbatim}
The vast majority of what makes my algorithm ``technical and complex'' is that there are lots of little cases to
consider. E.g.\ what happens if $dm$ really is ``near the endpoints''... how do we show that can't happen too often
to hurt us?
\end{document}
Comment by Ernie Croot — June 14, 2010 @ 5:16 am
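The sum $\sum_{n \le \sqrt{N}} \lfloor N/n \rfloor$ in Ernie's note controls the parity of $\sum_{n \le N} \tau(n)$ through Dirichlet's hyperbola identity. A quick brute-force check of that identity (illustration only, not the fast algorithm; names are mine):

```python
import math

def tau_sum_direct(N):
    # sum of the divisor function tau(n) over n <= N, by brute force
    return sum(sum(1 for d in range(1, n + 1) if n % d == 0)
               for n in range(1, N + 1))

def tau_sum_hyperbola(N):
    # Dirichlet's hyperbola identity:
    # sum_{n <= N} tau(n) = 2 * sum_{d <= sqrt(N)} floor(N/d) - floor(sqrt(N))^2,
    # so the parity of sum tau(n) is read off the square-root-length sum
    r = math.isqrt(N)
    return 2 * sum(N // d for d in range(1, r + 1)) - r * r
```

The fast algorithms in the note then aim to compute the right-hand side (and its short-interval variants) in time well below $N^{1/2}$.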
• There is a small detail that should be corrected in the above: The fraction $m/d$ should come within $O(1/qQ)$ of the fraction $a/q$, where $1 \leq q \leq Q$, and $Q$ is to be carefully chosen — it will be some small power of $N$.
Comment by Ernie Croot — June 14, 2010 @ 2:43 pm
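Behind the displayed bound $|dm - (d-tq)(m+ta)| \ll t(d/q) + t^2 aq$ is an exact algebraic identity: expanding gives $dm - (d-tq)(m+ta) = t(qm-ad) + t^2 aq$, and when $m/d$ is within $O(1/qQ)$ of $a/q$ the quantity $qm-ad$ is small. A brute-force confirmation of the identity (illustration only; names are mine):

```python
def linearization_error(d, m, a, q, t):
    # the quantity bounded in the note: dm - (d - tq)(m + ta)
    return d * m - (d - t * q) * (m + t * a)

# exact identity: dm - (d - tq)(m + ta) = t(qm - ad) + t^2 * a * q,
# so the error is << t*(d/q) + t^2*a*q whenever qm - ad = O(d/q)
identity_holds = all(
    linearization_error(d, m, a, q, t) == t * (q * m - a * d) + t * t * a * q
    for d in range(1, 8) for m in range(1, 8)
    for a in range(1, 5) for q in range(1, 5) for t in range(-3, 4)
)
```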
• I’ve converted this to a LaTeX file on the subversion repository at
http://www2.xp-dev.com/sc/browse/86755/
and also took the liberty of adding the fast Strassen file too for ease of reference.
Comment by — June 16, 2010 @ 1:10 am
• Terry, the note that I listed below is better — it took account of some other typos (and the 1/qQ issue). It is here:
http://www.math.gatech.edu/~ecroot/prime_generate.tex
Comment by Ernie Croot — June 16, 2010 @ 6:29 am
10. [...] By kristalcantwell The results of Polymath4 are being written up. See this post. The latex files for the paper are [...]
Pingback by — June 14, 2010 @ 6:12 pm
11. I just had another thought on the “polynomial approach” to the problem: Instead of computing
$\sum_{N-N^{0.51} < p < N} x^p \pmod{2,f(x)}$   (1)
I think it might be possible to ALSO give a quick-running algorithm for
$\sum_{N-N^{0.51} < p < N} x^{p^2} \pmod{2,f(x)}$   (2)
(And then also replace polynomials rings with matrix rings, or other sorts of rings.)
The idea is that the "linearization steps" in the algorithm for handling (1) should also apply to (2), though I have not checked this carefully yet; the "linearization" is basically just Weyl differencing, so it should work. In fact, perhaps one can replace the $p^2$ here with ANY BOUNDED DEGREE INTEGER POLYNOMIAL. Why would that be useful? Well, maybe it will help with Goal 1. Certainly, it would give a new way to approach it, because now we not only get to try to locate $n$ and $f$ such that $g(x^n)$ doesn't vanish mod $(2,f(x))$, but we also get to CHOOSE the polynomial in the exponent.
Perhaps one can go even further here, and replace polynomials in the exponent with sums of exponential functions like $2^p$, say (I forget whether Weyl differencing works with exponentials like that. I know sum-product estimates can be used to replace Weyl differencing in such a context, so maybe they can also be used here somehow…)
Ok, I sense nobody is seriously working on polymath4 anymore… so I will save this for my REU students…
Comment by Ernie Croot — June 14, 2010 @ 11:05 pm
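The quantity in (2) can at least be computed by brute force, which pins down what the fast algorithm would have to reproduce (illustration only; the $\F_2[x]$ arithmetic is encoded in integer bitmasks, and all names are mine):

```python
def carryless_mul(a, b):
    # product of two F_2[x] polynomials encoded as integer bitmasks
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def f2_mod(poly, f):
    # reduce an F_2[x] polynomial modulo f(x)
    df = f.bit_length() - 1
    while poly.bit_length() - 1 >= df:
        poly ^= f << (poly.bit_length() - 1 - df)
    return poly

def x_pow_mod(e, f):
    # x^e mod (2, f(x)) by repeated squaring
    result, base = 1, 2  # the polynomials 1 and x
    while e:
        if e & 1:
            result = f2_mod(carryless_mul(result, base), f)
        base = f2_mod(carryless_mul(base, base), f)
        e >>= 1
    return result

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_square_poly(A, B, f):
    # sum over primes p in (A, B] of x^(p^2), reduced mod (2, f(x))
    acc = 0
    for p in range(A + 1, B + 1):
        if is_prime(p):
            acc ^= x_pow_mod(p * p, f)
    return acc
```

For example, with $f(x) = x^3 + x + 1$ (bitmask `0b1011`) the element $x$ has multiplicative order 7, so only $p^2 \bmod 7$ matters.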
12. I have put up a revised, pdf version of the “short note” above on the polynomial algorithm. To see this note, go here:
http://www.math.gatech.edu/~ecroot/prime_generate.pdf
Comment by Ernie Croot — June 15, 2010 @ 11:11 pm
• Thanks! I will try to incorporate it into the short article that was already being drafted. If nothing else, it should serve as a good introduction to the REU project.
Comment by — June 16, 2010 @ 3:37 pm
13. In the end, are we using Subversion, or are we improvising things over email. I would like to go on contributing, but, since I am traveling, installing new software seems non-trivial.
Comment by Harald — June 18, 2010 @ 5:39 pm
• Well, I’ve set up a Subversion repository for the project at
http://www2.xp-dev.com/sc/browse/86755/
so you can _download_ the files at any time whenever you have internet access, but to upload anything you either need to install the Subversion software, or email me with any modified files etc. Either way should work fine, though the former way would of course be more direct.
Comment by — June 18, 2010 @ 6:28 pm
14. My first “.” should have been a “?”.
Comment by Harald — June 18, 2010 @ 5:44 pm
15. We never got around to figuring out whether the method in Vinogradov’s exercises is original to Vinogradov, or whether it appeared before somewhere. (Vinogradov never gives references in his textbook, not even to his own work.) Has anybody got some sort of sense of what is the case here?
Comment by Harald Helfgott — July 17, 2010 @ 6:08 pm
16. Hi there,
I plotted a graph of prime numbers against whole numbers on the y-axis. The graph was initially exponential and became more and more linear, parallel to the x-axis, as we moved away from the origin.
Therefore I am sure a point will come after which we can't come up with any prime number, and that point would be very interesting as well, because at that point we can understand the splitting of a digit into two.
Thanking you
Shail
Comment by Shailendra — May 14, 2011 @ 9:10 am
17. A sponsor is looking for a girl for a real relationship. Criteria: 18 to 20 years old, height up to 175 cm, light hair, model build. Personal assistant, tel. 79262036777, Ruslan
tags:
dating classifieds
online dating
Comment by — May 18, 2011 @ 6:20 pm
18. Meet someone at Love Znakomstva.tk – a dating site for those who are searching: for a romantic relationship, friendly attachment, or just no-strings chat. Register, post your photo, and within a few minutes dozens, maybe hundreds, of letters will pour into your inbox with offers to meet, be friends, or love.
Comment by — July 14, 2011 @ 2:09 am
19. I have found a similar question that might help a bit? Let K be given and S the set of polynomials with positive coefficients. Find a polynomial of degree k which is irreducible in S adjoin [x] (which is fairly simplistic because all you need to show is that the coefficients add up to a prime number strictly greater than K and all coefficients must be 1 or greater). I don’t know if my proof is original but it certainly works. Suppose by way of contradiction, f(x) has degree K and all coefficients are 1 or greater where p is prime and f(1) = p > K. Suppose f(x) = g(x)h(x) where both g(x) and h(x) have positive coefficients. Then f(1) = g(1)h(1). But this is impossible unless [g(1) = 1 and h(1) = p] or [h(1) = 1 and g(1) = p]. Since the sum of coefficients of h(1) = 1 or p, we will assume not p and prove h(1) is invalid. Clearly if h(1) = 1, h(x) = x^m, and we could divide through to get a new f(x) to which we repeat this process of dividing by h(x) until h(x) is 1 and f(x) is irreducible.
(So the easy way to come up with a polynomial is to have x^n + bx^(n-1) + 2x^(n-2) + 2^(n-3) … +2 where b is even and 1 + 2*(n-2) + b is prime and so f(1) = a prime and f(x) is irreducible under Eisenstein’s Criterion).
Now suppose we have for each K at least one corresponding f(x). How do we know that for some constant m f(m) is prime for some not-so-general and very-integery m AND m does not change for each f(x)? Find the least prime number, q, which is greater than any coefficients for any of the f(x) AND q must also be greater than K. The coefficients of the f(x) when put together side by side should form a number base q that is at least pseudoprime.
Examples: x^2 + 2x + 2. f(1) = 5. 122 base 5 is 37. Also 122 base 3 is 17.
x^2 + 4x + 2. 142 base 5 is 47. 142 base 7 is 79.
x^10 + 4x^9 + 2x^8 + 2x^7 + 2x^6 + 2x^5 + 2x^4 + 2x^3 + 2x^2 + 2x + 2. Which according to Wolfram Alpha when x = 11 the expression is prime. 35840804903.
To summarize this problem has already been solved in the S adjoin x world if by prime we mean irreducible. For any K, f(x) = x^K + 3 works (with Eisenstein’s). I was hoping something really cool would pop out of the polynomials.
I apologize as this response was a bit fuzzy (as my brain was not feeling perfect). I have not checked what kind of numbers one would want to plug in for f(x) = x^K + p to spit out numeric primes.
Comment by — December 23, 2011 @ 10:17 am
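The base-$q$ examples in the comment above can be checked mechanically: reading the coefficient list side by side as a base-$q$ numeral is just evaluating the polynomial at $x = q$ by Horner's rule (a small sketch; names are mine):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def read_in_base(coeffs, q):
    # read a coefficient list (highest degree first) as a base-q numeral,
    # i.e. evaluate the polynomial at x = q by Horner's rule
    value = 0
    for c in coeffs:
        value = value * q + c
    return value

# x^2 + 2x + 2 read in bases 5 and 3; x^2 + 4x + 2 in bases 5 and 7
checks = [read_in_base([1, 2, 2], 5),   # 122 base 5
          read_in_base([1, 2, 2], 3),   # 122 base 3
          read_in_base([1, 4, 2], 5),   # 142 base 5
          read_in_base([1, 4, 2], 7)]   # 142 base 7
```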
20. Howdy all!
Just thought I’d let you know that I’ve proven Andrica’s Conjecture by re-examining the Sieve of Eratosthenes:
http://www.ugcs.caltech.edu/~kel/MPP/AndricaConjectureTrue.pdf
The proof shows that, at the n-th step of the sieve, the largest gap generated is at most $2 p_{n-1}$, from which Andrica follows. I call the state of elimination at the n-th step $\kappa_n$.
So, right off the bat, you can find a prime in $N^{1/2}$ steps, even without the Riemann Hypothesis. (Note that this isn’t an asymptotic result, like $N^{0.525}$; it holds for all primes.)
However, I believe we can do a better search in practice. A naive generation of $\kappa_n$ up to $N$ requires $O(\log N)$ computation and $O(N)$ storage. At that point, the density of primes is around
$M_n = \prod_{i=1}^{n} \frac{p_i - 1}{p_i} \sim \frac{1}{e^\gamma \log p_n}$
But a more sophisticated search would pre-calculate where $N$ lay in $\kappa_n$ and only keep around the requisite intervals, namely those that lie around a Biggest Resolution of $p_B = p_n$. Check out Theorems 2.8 and 2.9 for more details.
Btw, I’m firmly convinced Cramer’s Conjecture is true. I have developed a conditional proof that shows if Cramer’s Conjecture were true, then all the constellation infinity conjectures must be true simultaneously. Were Cramer true, then primes are packed incredibly tight and with much more regularity than we can currently show. The side effect of the conditional proof is that we could not only find _a_ prime after $N$ in log time, we could find the _exact next prime_ in log time.
I hope this helps you in your quest for deterministic prime finding!
Comment by — December 23, 2011 @ 2:31 pm
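The density claim $M_n \sim 1/(e^\gamma \log p_n)$ in the comment above is Mertens' third theorem; a quick numerical sanity check (illustration only; names are mine):

```python
import math

def mertens_ratio(x):
    # prod_{p <= x} (1 - 1/p) compared against e^(-gamma) / log(x);
    # by Mertens' third theorem this ratio tends to 1 as x grows
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    prod = 1.0
    for n in range(2, x + 1):
        if all(n % d for d in range(2, math.isqrt(n) + 1)):
            prod *= 1.0 - 1.0 / n
    return prod * math.exp(gamma) * math.log(x)
```

Even at $x = 10^4$ the ratio is already within about half a percent of 1.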
21. Is the algorithm given by Ribenboim in “Selling Primes” (My Numbers, My Friends), which constructs arbitrarily large proven primes, not sufficient?
Comment by Anonymous — May 14, 2013 @ 10:36 pm
• If you are referring to the algorithm in http://www.math.sunysb.edu/~moira/mat331-spr10/papers/1995%20RibenboimSelling%20Primes.pdf , the problem is that one needs to find (for a given prime p), an integer k for which 2kp+1 is prime. Such integers should exist in relative abundance, but there is no known rapid deterministic way of actually getting one’s hands on such a k other than by trying different k one at a time. (The section on “Feasibility of the algorithm” discusses this point.)
Comment by — May 14, 2013 @ 10:49 pm
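The bottleneck step described above can be made concrete (a sketch; names are mine): finding a $k$ with $2kp+1$ prime is easy in practice, but the loop below — trying $k$ one at a time — is exactly the step with no known fast deterministic replacement.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def least_k(p, limit=10 ** 6):
    # search for the least k with 2*k*p + 1 prime, by trying k = 1, 2, 3, ...
    # heuristically such k are abundant, but no fast deterministic way to
    # produce one is known
    for k in range(1, limit):
        if is_prime(2 * k * p + 1):
            return k
    return None
```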
http://planetmath.org/grouphomomorphism
# group homomorphism
Let $(G,\ast)$ and $(K,\star)$ be two groups. A group homomorphism is a function $\phi\colon G\to K$ such that $\phi(s\ast t)=\phi(s)\star\phi(t)$ for all $s,t\in G$.
A composition of group homomorphisms is again a homomorphism.
Let $\phi\colon G\to K$ a group homomorphism. Then the kernel of $\phi$ is a normal subgroup of $G$, and the image of $\phi$ is a subgroup of $K$. Also, $\phi(g^{n})=\phi(g)^{n}$ for all $g\in G$ and for all $n\in\mathbb{Z}$. In particular, taking $n=-1$ we have $\phi(g^{{-1}})=\phi(g)^{{-1}}$ for all $g\in G$, and taking $n=0$ we have $\phi(1_{G})=1_{K}$, where $1_{G}$ and $1_{K}$ are the identity elements of $G$ and $K$, respectively.
Some special homomorphisms have special names. If the homomorphism $\phi\colon G\to K$ is injective, we say that $\phi$ is a monomorphism, and if $\phi$ is surjective we call it an epimorphism. When $\phi$ is both injective and surjective (that is, bijective) we call it an isomorphism. In the latter case we also say that $G$ and $K$ are isomorphic, meaning they are basically the same group (have the same structure). A homomorphism from $G$ on itself is called an endomorphism, and if it is bijective then it is called an automorphism.
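A concrete sketch of these properties, using the additive groups $\mathbb{Z}_{12}$ and $\mathbb{Z}_4$ and the reduction map $\phi(x) = x \bmod 4$ (my own choice of example):

```python
# phi: Z_12 -> Z_4, phi(x) = x mod 4, is a homomorphism of additive groups
G = list(range(12))
phi = lambda x: x % 4

# the defining property phi(s * t) = phi(s) * phi(t), written additively
assert all(phi((s + t) % 12) == (phi(s) + phi(t)) % 4 for s in G for t in G)

# the kernel is a (normal) subgroup of G, here {0, 4, 8}
kernel = [g for g in G if phi(g) == 0]
assert kernel == [0, 4, 8]

# phi(1_G) = 1_K and phi(g^{-1}) = phi(g)^{-1}: identities go to identities,
# inverses to inverses (n = 0 and n = -1 cases of phi(g^n) = phi(g)^n)
assert phi(0) == 0
assert all(phi((-g) % 12) == (-phi(g)) % 4 for g in G)
```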
## Mathematics Subject Classification
20A05 Axiomatics and elementary properties
## Comments
### Re: proofs
By all means, please add the proofs. While things like biographies of politicians (unless those politicians also happened to be mathematicians) are off topic, proofs of mathematical statements, however humble, are definitely on topic. Even if this isn't the most profound theorem, it is definitely worth having for completeness, and it could be of use to someone new to algebra or needing a formal proof of this fact which most mathematicians take for granted.
### Re: proofs
Look at the "latest additions" column on the right of the page.
## Info
Owner: yark
Added: 2001-11-08 - 21:24
Author(s): yark
## Versions
(v27) by unlord 2013-05-17
(v26) by yark 2013-05-17
http://math.stackexchange.com/questions/125806/possible-error-about-properties-of-boundary-points-in-simmonss-topology-and-mod
# Possible error about properties of boundary points in Simmons's Topology and Modern Analysis
GF Simmons, Introduction to Topology and Modern Analysis Section 11, Pg 68-69
Let $X$ be a metric space and $A$ a subset of $X$. A point in $X$ is called a boundary point of $A$ if each open sphere centered on the point intersects both $A$ and $A'$, and the boundary of $A$ is the set of all boundary points. This concept possesses the following properties:
(1) The boundary of $A$ equals $A \cap A'$;
(2) The boundary of $A$ is a closed set;
(3) $A$ is closed $\iff$ it contains its boundary
The first property is wrong, I suppose? Otherwise all boundaries would be empty sets. Any idea what a correct replacement for that property might be? For example, did the author actually intend to say that
(1) The boundary of $A$ equals $\bar{A} \cap A'$
where $\bar{A}$ means the closure of $A$.
If by $A'$ one means the complement of $A$, then the correct replacement is: $\partial A = \overline{A}\cap \overline{A'}$ ($\partial A$ is a common notation in geometry for the boundary). – William Mar 29 '12 at 6:48
## 1 Answer
It looks as if he meant $\overline{A} \cap \overline{A^\prime}$. Have a look here.
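A concrete instance of the corrected formula (my own example): take $A = (0,1)$ inside $X = \mathbb{R}$, with $A'$ the complement of $A$.

```latex
% with A = (0,1) \subseteq \mathbb{R}:
%   \overline{A} = [0,1], \qquad \overline{A'} = (-\infty,0] \cup [1,\infty),
% so the boundary is
\[
  \partial A = \overline{A} \cap \overline{A'}
             = [0,1] \cap \bigl( (-\infty,0] \cup [1,\infty) \bigr)
             = \{0,1\},
\]
% whereas the misprinted A \cap A' is empty for every A (with A' the complement).
```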
http://mathoverflow.net/questions/92883/small-4-chromatic-coin-graphs/92931
Small 4-chromatic coin graphs
A coin graph is a graph that can be represented by a set of disjoint, except possibly touching, unit disks in the plane (i.e. the disks are the vertices and the edges correspond to the pairs that touch each other). It's easy to show by induction that $\chi(G)\leq4$ for every coin graph $G$, as there's always a vertex of degree at most 3.
My question is: what is the smallest order (i.e. the number of vertices) of a 4-chromatic coin graph?
In this paper by Erdos http://www.renyi.hu/~p_erdos/1987-27.pdf there is a coin graph of order 19 that is 4-chromatic (see Figure 1), but I doubt it's the smallest one (it was constructed for a different purpose, having to do with the independence number). The question I asked was proposed for an IMO competition in 1979; see p. 138, question 73 in Djukic, Jankovic, Matic, Petrovic: The IMO Compendium (there is no solution there, however).
Clearly, coin graphs are also unit distance graphs, for the definition see http://en.wikipedia.org/wiki/Unit_distance_graph. The smallest 4-chromatic unit-distance graph is probably the Moser spindle http://en.wikipedia.org/wiki/Moser_spindle that has 7 vertices. There is a similar notion of matchstick graphs: those are unit distance graphs drawn in the plane with non-crossing straight-line segments, see http://en.wikipedia.org/wiki/Matchstick_graph Note that the Moser spindle is NOT a matchstick graph, although it's planar and unit-distance.
The second (related) question is: what is the smallest order of a 4-chromatic matchstick graph?
I think the answer (to the second question) is 8.
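The inductive argument in the first paragraph is effectively a degeneracy (greedy) coloring: peel off a minimum-degree vertex, then color in reverse removal order. A sketch (my own code and example graph, the "flower" of one coin surrounded by six touching coins):

```python
def greedy_four_color(adj):
    # peel off a minimum-degree vertex at each step (for a coin graph that
    # degree is at most 3), then color in reverse removal order: each vertex
    # then sees at most 3 already-colored neighbors, so 4 colors suffice
    remaining = set(adj)
    order = []
    while remaining:
        v = min(remaining, key=lambda u: sum(w in remaining for w in adj[u]))
        remaining.remove(v)
        order.append(v)
    color = {}
    for v in reversed(order):
        used = {color[w] for w in adj[v] if w in color}
        color[v] = next(c for c in range(4) if c not in used)
    return color

# one unit disk (0) surrounded by six touching unit disks (1..6)
flower = {0: [1, 2, 3, 4, 5, 6],
          1: [0, 2, 6], 2: [0, 1, 3], 3: [0, 2, 4],
          4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 5, 1]}
coloring = greedy_four_color(flower)
```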
I can do 11 for coin graphs. take a coin surrounded by 6 coins and delete one outer coin. The two outer coins adjacent to the deleted coin must have the same color in a $3$-coloring. Now take two of these $6$-coin configurations, identify one of the same-color coins, and make the other two touch. Don't know if this is best possible. – Flo Pfender Apr 2 2012 at 10:50
Ok, I can do $9$ now. One can replace one of the $6$-vertex configurations above by the $4$-vertex diamond from the Moser spindle. – Flo Pfender Apr 2 2012 at 10:56
if i understand correctly, your second construction is a matchstick graph, but not a coin graph, right? – puzzly Apr 2 2012 at 12:22
oops, yes, you are correct. So I can do no better than $11$ for coin graphs. – Flo Pfender Apr 2 2012 at 12:52
3 Answers
Flo's example with 11 is best possible. Let $G$ be a minimal coin graph with chromatic number four. Then $G$ does not have a cut vertex, nor a vertex of degree at most two. Moreover, there does not exist a separation $(A, B)$ of $G$ of order two with $G[A \cap B]$ an edge (as opposed to a pair of non-adjacent vertices). Otherwise we could 3-color $G[A]$ and $G[B]$ and glue the colorings together to get a coloring of $G$.
Since $G$ is 2-connected, every face is bounded by a cycle. Consider the cycle $C$ bounding the infinite face. The cycle $C$ must be induced, as otherwise there is a 2-separation whose cut set is an edge. $G$ has no vertex of degree two, so every vertex of $C$ has a neighbor in $V(G) - V(C)$ (and specifically, there is at least one such vertex).
If there is exactly one vertex in $V(G) - V(C)$, then it is adjacent to every vertex of $C$, and $G$ is a wheel on 7 vertices, which is 3-colorable. Thus, there exist at least two vertices in $V(G) - V(C)$. However, then $|V(C)| \ge 8$. If $|V(G)| \le 10$, then $|V(C)| = 8$, and there are exactly two vertices in $V(G) - V(C)$. It follows that the graph $G$ must be equal to an 8-cycle $C$ with vertices $v_1, \dots, v_8$ and two additional vertices $x, y$, each adjacent to a subset of the vertices $\{v_1, \dots, v_8\}$.
For each of the vertices $x$ and $y$, their neighbors must form a subpath of $C$, say $P_x$ and $P_y$. The paths $P_x$ and $P_y$ can intersect only at their endpoints. Given that $G$ is a coin graph, $|V(P_x)| \le 5$ and $|V(P_y)| \le 5$, and we see now that there are two possible cases: either $|V(P_x)| = |V(P_y)| = 5$ and $P_x$ and $P_y$ have both endpoints in common, or alternatively, $|V(P_x)| = |V(P_y)| = 4$ and $P_x$ and $P_y$ are disjoint. In either case, the resulting graph is 3-colorable, a contradiction.
Great, sounds convincing enough, except that I don't see (at least not without a bit of geometry) how $|V(G)-V(C)| \geq 2$ implies $|V(C)| \geq 8$. Did you have in mind a trivial way to see that? – puzzly Apr 3 2012 at 12:58
Yes, that part was a little bit hand-wavey. Fix a layout of a coin graph containing a cycle C and two additional vertices contained in the disc bounded by C. Let C' be the piecewise linear curve in the plane defined by the center of the coins of V(C). We may assume that the disc bounded by C' is convex, and so we may assume the two coins not in C are touching. If we place 8 coins around the interior pair of coins as tightly as possible, we see there exists a curve C'' of length 8 contained in the disc bounded by C'. Thus, C' has length at least 8, and if exactly 8 then C' = C'' – Paul Wollan Apr 3 2012 at 15:28
well $C''$ is not unique. Notice that you can shift the $8$ coins around the two middle coins a bit, so that either both middle coins have degree $4$, or both have degree $5$. But I do think that your proof shows that the $11$-coin configuration is unique. $3$ internal vertices would require a $9$-cycle, and there are only two ways an induced $9$-cycle can have each vertex touch at least one of the two central vertices, and one of these ways is $3$-colorable. – Flo Pfender Apr 3 2012 at 16:28
Thanks Flo, that's right. So, the final claim in the proof that we must be equal to two adjacent vertices x and y each adjacent to five of the boundary vertices was not correct. I fixed the proof above. – Paul Wollan Apr 4 2012 at 14:31
Here is Flo Pfender's 11-coin graph:
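The graph can also be verified by machine. Below is my own encoding of the construction as I read it from Flo Pfender's comments (two "flower minus one petal" gadgets sharing the petal $w$, plus one extra contact), together with a backtracking check that it is 4-chromatic:

```python
def is_k_colorable(adj, k, col=None, v=0):
    # backtracking search for a proper k-coloring of vertices 0..n-1
    if col is None:
        col = [-1] * len(adj)
    if v == len(adj):
        return True
    for c in range(k):
        if all(col[u] != c for u in adj[v]):  # uncolored neighbors are -1
            col[v] = c
            if is_k_colorable(adj, k, col, v + 1):
                return True
            col[v] = -1
    return False

# labels: 0 = cA, 1..4 = a1..a4, 5 = w (shared petal), 6..9 = b2..b5, 10 = cB
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5),       # cA touches its petals
         (1, 2), (2, 3), (3, 4), (4, 5),               # petal path a1-...-w
         (10, 5), (10, 6), (10, 7), (10, 8), (10, 9),  # cB touches its petals
         (5, 6), (6, 7), (7, 8), (8, 9),               # petal path w-...-b5
         (1, 9)]                                       # extra contact a1-b5
adj = [[] for _ in range(11)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
```

In any 3-coloring each gadget forces both endpoints of its petal path to share a color, so $a_1$, $w$, and $b_5$ would all agree — contradicting the contact $a_1 b_5$.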
I think Paul's proof also shows that my construction on 9 vertices is uniquely optimal for matchstick graphs.
By the same argument, the cycle $C$ bounding the infinite face must be induced, and every vertex on it has a neighbor inside the cycle.
If there is only one vertex inside $C$, the graph is the wheel on $7$ vertices, but then it is $3$-colorable.
By planarity, notice that the neighborhoods of the internal vertices on $C$ are paths which can only intersect in the end points. Thus, $G$ consists of a number (one for each internal vertex) of partial wheels, where neighboring partial wheels either overlap in a point or are connected by an edge.
If there are at least three internal vertices, as every internal vertex has degree at least $3$, this implies that $C$ has at least $6$ vertices. But in fact, $6$ is not possible, as three diamonds cannot be arranged in a triangle.
So there are exactly $2$ internal vertices. Again, $C$ cannot contain only $6$ vertices, as otherwise the two internal vertices would have to be in the same place (and the resulting graph would be $3$-colorable anyway), so $C$ contains at least $7$ vertices. A short analysis of the possible sizes of the partial wheels shows that there are exactly two possible configurations ($2$ partial wheels with $5$ vertices each, or one with $4$ and one with $6$, overlapping in one vertex), and only the second one is not $3$-colorable.
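The $3$-colorability claims above are easy to machine-check. As a sanity check (my own, not part of the thread), here is a brute-force proper-coloring test in Python, applied to the wheel on $7$ vertices mentioned in the answer:

```python
from itertools import product

def is_k_colorable(n, edges, k):
    """Brute force: does the graph on vertices 0..n-1 admit a proper k-coloring?"""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n))

# Wheel on 7 vertices: hub 0 joined to every vertex of the 6-cycle 1..6.
wheel7 = [(0, i) for i in range(1, 7)] + \
         [(i, i % 6 + 1) for i in range(1, 7)]

print(is_k_colorable(7, wheel7, 3))  # True: the 6-cycle is even, plus a third color for the hub
```

This only scales to small graphs ($k^n$ colorings), but that is all the case analysis above needs.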
-
http://physics.stackexchange.com/questions/tagged/homework+torque
# Tagged Questions
1answer
65 views
### How is torque equal to moment of inertia times angular acceleration divided by g?
How is the following relation true $$\tau = \large\frac{I}{g} \times \alpha$$ where $\tau$ is torque, $I$ is moment of inertia, $g= 9.8ms^{-2}$, and $\alpha=$ angular acceleration.
1answer
36 views
### Pendulum system: how is derived the output as Energy?
Good day to everyone, I want to understand in which way the "Energy equation" is been implemented to this pendulum system. $x_1(t)$: The angular position of the mass $x_2(t)$: The angular velocity ...
1answer
31 views
### Find the bending moment of a pole attached to a moving block
I'm having trouble with the following problem. What I've done so far: x-y is the usual coordinate system. $a=\frac{F}{m}=\frac{800}{60}$ and the y component of this is $a_y=a\sin{60^\circ}$. To ...
1answer
58 views
### Direction of the torque
In each one of the following figures there's a pole of length $1.2 \text{m}$ and there's a force $\vec F = 150 \text{N}$ acting on it. Determine the torque that is created by the force relative ...
1answer
111 views
### Calculating the acceleration of a car
I'm trying to calculate the maximum acceleration a car can achieve with the current gear ratio. I am ignoring drag forces and friction to keep it simple. I'm doing this by: calculating the torque ...
0answers
241 views
### Neglecting friction on a pulley?
So, this is how the problem looks: http://www.aplusphysics.com/courses/honors/dynamics/images/Atwood%20Problem.png Plus, the pulley is suspended on a cord at its center and hanging from the ceiling. ...
0answers
67 views
### Physics Problem concerning Torque and nonuniform motion [closed]
If the earth was spinning on its axis and it weighed 100 N with a radius of 42 m, and the earth spun 1 revolution every 24 hours, how many rockets would it take to counteract the earth spin? How do ...
1answer
141 views
### How can the torque a bicycle experiences be calculated based on the center of gravity, weight and a force?
The position center of gravity of a bicycle and its rider is known, and the distance from it to the point of contact of the front wheel with the ground, in terms of horizontal and vertical distance (x ...
0answers
134 views
### Torque required to rotate sand mixer [closed]
the given data is- Mixer having one impeller(rotor) rotating in horizontal plane about vertical axis. capacity of mixer to design $C = 250kg$ rotor dia. $D = 45" = 1143mm$ friction coefficient ...
0answers
125 views
### How do you determine the torque caused by the mass of a lever?
Suppose we have two objects sitting on two side of a lever, and the lever also has a mass, and those objects have masses. Then how we can balance $\sum τ$? This is what I have done: ...
1answer
139 views
### Relationship between torques and centre of mass or centre of gravity [closed]
1) A wardrobe is $2$m high and $1.6$m wide. When empty, it has a mass of $110$kg and its centre of gravity is $0.8$m above the centre of its base. What is the minimum angle through which it must be ...
2answers
232 views
### Force applied off center on an object
Assume there is a rigid body in deep space with mass $m$ and moment of inertia $I$. A force that varies with time, $F(t)$, is applied to the body off-center at a distance $r$ from its center of mass. ...
0answers
25 views
### Determine the dilation temperature so as to double the speed
There's a metallic rod of length $l_1$ which is spinning around a vertical ax which passes through its center. The ends of the rod are spinning with $\omega_1$ angular speed. Determine the temperature ...
1answer
326 views
### Calculating torque in a structure
I posted this on math stack exchange but realize it is more a physics question. I have a structure which is set up as shown in the image. A weight hangs from point A with mass $m$. Joint B is free ...
0answers
258 views
### Torque required to rotate a cement mixer..? [closed]
I need to design a motor to rotate a cement mixer which should mix one cubic meter. So, I calculated the required volume to be 1600 liters as it is an horizontal cylinder. Consider that the mixer ...
3answers
809 views
### Proving angular momentum is conserved for a particle moving in a central force field $\vec F =\phi(r) \vec r$
A problem I am trying to work out is as follows: A particle moves in a force field given by $\vec F =\phi(r) \vec r$. Prove that the angular momentum of the particle about the origin is constant. ...
0answers
55 views
### Two particles rest [closed]
Two particles rest a distance $d$ apart on the edge of a table. One of the particles, of mass $m$, falls off the edge and falls vertically. Using the other particle (still at rest) as the origin, ...
0answers
107 views
### Load vs Leverage vs Friction on a floating pivot [closed]
If I had a standard 'cherry-picker' type of crane (lifting device), 7' high, with a 7' boom, suspending 1000# at its end,its vertical support mounted atop a 6" diameter ball-type pivot: how much force ...
1answer
361 views
### Can anyone solve this simple static equilibrium problem? [closed]
A bridge is 50m long, has a mass of 20,000kg, and rests on two pivots, A and B. The distance between A and the left side of the bridge is 10m. The distance between B and the right side of the bridge ...
1answer
451 views
### Gravitational torque about a bolt that a mass is hanging from [closed]
A uniform rectangle sign h=20.0cm high and w=11.0cm wide loses three of its four support bolts(at points p_1, P_3, and p_4) and rotates into the position as shown, with p_1 directly over p_3. It is ...
2answers
286 views
### Finding work done by rotational force?
A disk with a rotational inertia of 5.0 kg·m2 and a radius of 0.25 m rotates on a fixed axis perpendicular to the disk and through its center. A force of 2.0 N is applied tangentially to the ...
2answers
288 views
### How do you choose the locations of forces when calculating moments?
A uniform bridge of weight 1200kN and of length 17m rests on supports at each end which are 1m wide. A stationary lorry of weight 60kN has its centre of mass 3.0m from the centre of the bridge. ...
1answer
334 views
### if a force is 1 newton metre, what is it at 2 meters?
If I have a force, say 24 kg/cm what would that equate to at 2cm? I would like to know the formulae for calculating this. For example. If a motor can hold an object of 24kg at 1cm from its pivot ...
1answer
980 views
### Moment calculation [closed]
Consider image below. The weight of the fire-fighter is 840 N. What is the torque of the fire-fighter's weight about P and what is the value of the force C which cancels out the torque?
3answers
1k views
### Torque homework
We have learned that Torque is equal to a force that is perpendicular to a radius (displacement); however, I just cannot grasp one of the study questions we received: A hammer thrower accelerates ...
http://www.physicsforums.com/showthread.php?t=601410
Physics Forums
## Waterflow out of a tank
Hi, these are some questions about fluid dynamics (mostly). There are three somewhat connected questions here; I will try to organize them as best I can.
1. The problem statement, all variables and given/known data
A cylindrical tank filled with water stands on a table; the tank has a small hole in its side at the very bottom (not underneath). The tank is placed on the table so that the water coming out of the hole drops directly to the floor (it won't touch the table).
Variables:
Height of the table (height from the hole in the tank to the ground) = H
Height of the water level inside the tank = h
(inside) Radius of the hole at the bottom of the tank = r
(inside) Radius of the tank = R
Speed at which the water level inside the tank drops = V
Speed of the water flowing out of the hole = v
Distance along the floor, from the point directly beneath the hole to where the water hits the floor = L.
I've added a sketch of the variables.
The air pressure is constant everywhere, g is gravity acting in the negative y direction, and the density of the water is constant.
Q1: What is the relation between the speed v and the speed V?
Q2: The speed V is related to the time dependence of the height h by some derivative. What is the relation?
Q3: What is the length L as a function of the height h?
2. Relevant equations
- Bernoulli's equation for an incompressible liquid (see attempt at solution).
3. The attempt at a solution
A1: The relation can be given by $\rho t v \pi r^2 = \rho t V \pi R^2 = constant$,
which means $vr^2 = VR^2 = constant$. I don't know what more I need to do in order to show the relation; is this enough?
A2: I don't know how to solve this.
A3: I used Bernoulli's equation to get a model for v: $v = \sqrt{2gh}$. So now I have the speed v out of the hole (I think), but I don't know how to get further from there. It is supposed to be a function of the height h, so I think I'm not supposed to mix time into it, in which case it would be easier.
I'm mostly interested in a small push in the right direction, and any help would be much appreciated=)
Edit: I forgot to say the mass is conserved!
Q2: The speed V is related to the time dependence of the height h by some derivative. What is the relation? V is the speed at which the water level drops. Look at its units. Which derivative in the variable h has the same units and, of course, defines the same thing? Q3: What is the length L as a function of the height h? This is a typical projectile-shot-off-a-cliff problem.
Quote by LawrenceC: Q2: The speed V is related to the time dependence of the height h by some derivative. What is the relation? V is the speed at which the water level drops. Look at its units. Which derivative in the variable h has the same units and, of course, defines the same thing? Q3: What is the length L as a function of the height h? This is a typical projectile-shot-off-a-cliff problem.
Hmm, okay, so given that my A1 is correct, I would think I could solve Q2 like this:
Velocity is the derivative of position with respect to time, so if h is the displacement:
$\frac{dh}{dt} = V(t)$
If I am correct in A1, then $vr^2 = VR^2$, which means $\frac{vr^2}{R^2} = V(t)$, which again means $\frac{dh}{dt} = \frac{vr^2}{R^2}$. Is this correct?
And for Q3: I haven't really done any cannonball-off-a-cliff problems, but I would solve it as:
$v(t) = \sqrt{2gh} \Rightarrow position = t\sqrt{2gh}$, which represents the position of the water in the x-direction as a function of time. And:
$acc = -g \Rightarrow velocity = -gt \Rightarrow position = -\frac{1}{2}gt^2$, which represents the position of the water in the y-direction, giving me:
$position = x(t) = t\sqrt{2gh} \boldsymbol{i} - \frac{1}{2}gt^2 \boldsymbol{j}$, which I guess would be nice for a graph. But how do I go from there? Would it be possible to solve it as a quadratic equation (?):
$x(t) = -gt^2 + \sqrt{2gh}t + H$, and pick the positive root?
Q2: The speed V is related to the time dependence of the height h by some derivative. What is the relation?
Unless I am misunderstanding the question, I would simply answer by saying V = dh/dt.
For the water passing through the air part, first determine how long it takes the water to reach the ground. The basic assumption is that the horizontal velocity is constant. The portions of these problems are solved separately. You already have written the correct formula to determine the time for it to reach the ground - y-direction computation for t. Solve for t and use it to determine range L.
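Putting LawrenceC's two steps together (Torricelli's exit speed $v=\sqrt{2gh}$, then the free-fall time $t=\sqrt{2H/g}$) gives the range $L = vt = 2\sqrt{hH}$. A minimal numeric sketch (not from the thread; the sample heights are my own illustrative values):

```python
import math

g = 9.81   # gravity, m/s^2
h = 0.5    # water level above the hole, m (illustrative value)
H = 1.0    # height of the hole above the floor, m (illustrative value)

v = math.sqrt(2 * g * h)   # exit speed from Bernoulli's equation (Torricelli)
t = math.sqrt(2 * H / g)   # time to fall height H, from H - g t^2 / 2 = 0
L = v * t                  # horizontal range; simplifies to 2*sqrt(h*H)

print(L, 2 * math.sqrt(h * H))
```

Note that g cancels out, so the range depends only on the two heights.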
Quote by LawrenceC: Q2: The speed V is related to the time dependence of the height h by some derivative. What is the relation? Unless I am misunderstanding the question, I would simply answer by saying V = dh/dt. For the water passing through the air part, first determine how long it takes the water to reach the ground. The basic assumption is that the horizontal velocity is constant. The portions of these problems are solved separately. You already have written the correct formula to determine the time for it to reach the ground (the y-direction computation for t). Solve for t and use it to determine the range L.
I'm gonna work a bit more on it tomorrow=) Thanks for all the help!
http://mathoverflow.net/questions/83370/two-constructions-for-buz
## Two constructions for BU×Z
Consider the following two ways of getting the zeroth space in the $K$-theory spectrum $BU \times \mathbb{Z}$:
1) Take the groupoid of finite dimensional complex inner product spaces with isometries as morphisms and apply Quillen $S^{-1}S$-construction to it. This amounts to forming a new category, whose objects are pairs $(V_+, V_-)$. A morphism $(V_+, V_-) \to (W_+,W_-)$ is an equivalence class of triples $(A, f_+, f_-)$, where $A$ is another finite dimensional inner product space and $f_{\pm} \colon V_{\pm} \oplus A \to W_{\pm}$ is an isometric isomorphism (i.e. a morphism in the former category). The equivalence relation identifies isomorphic objects $A$ and $B$ and the corresponding maps $f_{\pm}^A$ and $f_{\pm}^B$. All this can be found in a paper by Daniel Grayson and is also sketched in section 7 of this paper. Take the nerve of this new category to get the first model.
2) According to Segal we could also take the category of finite length chain complexes of finite dimensional vector spaces together with chain homotopy equivalences as morphisms and take the nerve of that to get $BU \times \mathbb{Z}$. (This is part of the $\Gamma$-space construction of $BU_{\otimes}$.)
How are the two models related?
I tried to construct a functor from the category in 2) to the one in 1), that has the chance of being an equivalence, but failed so far. Taking the homology of the chain complexes in 2) yields a functor that ends up in 1), but only "sees" the morphisms where $A = 0$, since chain homotopy equivalences always provide isomorphisms when taking homology. Nevertheless: Is this the right thing to consider?
-
The current policy, I think, is not to use TeX in titles because scripts do not process it. I edited the title accordingly. – Dmitri Pavlov Dec 13 2011 at 23:26
You might only get a span (or even zig-zag) of weak equivalences... – David Roberts Dec 13 2011 at 23:57
@Dmitri: Thanks! I wasn't aware of that. – Ulrich Pennig Dec 14 2011 at 7:18
I guess it's just language, but the first sentence grates. There are lots of ways of getting the zeroth space of BU x Z and these are two of them. Your first sentence implies (for me, at any rate) that these are the only two. – Andrew Stacey Dec 14 2011 at 8:07
@Andrew: Thanks, I changed the first sentence. – Ulrich Pennig Dec 14 2011 at 8:53
http://mathoverflow.net/questions/19420?sort=oldest
## what mistakes did the Italian algebraic geometers actually make?
It's "well-known" that the 19th century Italian school of algebraic geometry made great progress but also started to flounder due to lack of rigour, possibly in part due to the fact that foundations (comm alg etc) were only just being laid, and possibly (as far as I know) due to the fact that in the 19th century not everyone had come round to the axiomatic way of doing things (perhaps in those days one could use geometric plausibility arguments and they would not be shouted down as non-rigorous and hence invalid? I have no real idea about how maths was done then).
But someone asked me for an explicit example of a false result "proved" by this school, and I was at a loss. Can anyone point me to an explicit example? Preferably a published paper that contained arguments which were at the time at least partially accepted by the community as being OK but in fact have holes in? Actually, to be honest I'd probably prefer some sort of English historical summary of such things, but I do have access to (living and rigorous) Italian algebraic geometers if necessary ;-)
EDIT: A few people have posted solutions which hang upon the Italian-ness or otherwise of the person making the mathematical mistake. It was not my intention to bring the Italian-ness or otherwise of mathematicians into the question! Let me clarify the underlying issue: a friend of mine, interested in logic, asked me about (a) Grothendieck's point of view of set theory and (b) a precise way that one could formulate the statement that he "made algebraic geometry rigorous". My question stemmed from a desire to answer his.
-
Are you aware of this question which has a similar flavor? mathoverflow.net/questions/17352/… – jc Mar 26 2010 at 13:39
@jc: no ;-) Thanks! That question turned out to have a narrower remit I guess. – Kevin Buzzard Mar 26 2010 at 13:54
ftp.mcs.anl.gov/pub/qed/archive/209 This illuminating email by David Mumford is a concise example of how a modern algebraic geometer might feel about the work of the Italian school. – bhwang Mar 26 2010 at 19:14
Perhaps *the 19th century Italian school of algebraic geometry* should be the the 20th century... – Chandan Singh Dalawat Feb 11 2011 at 9:16
## 6 Answers
As for a result that was not simply incorrectly proved, but actually false, there is the case of the Severi bound(*) for the maximum number of singular double points of a surface in P^3. The prediction implies that there are no surfaces in P^3 of degree 6 with more than 52 nodes, but in fact there are such surfaces in P^3 with 64 nodes (and this is optimal).
(*) Francesco Severi; "Sul massimo numero di nodi di una superficie di dato ordine dello spazio ordinario o di una forma di un iperspazio." Ann. Mat. Pura Appl. (4) 25, (1946). 1--41.
-
There is also Severi's incorrect proof of the irreducibility of the spaces of plane curves of degree d with r nodes. A proof of the result was given by Harris (ams.org/mathscinet/search/…). – damiano Mar 26 2010 at 16:20
I think I am forced to accept this answer as being precisely what I was looking for! However I find all the answers interesting: one thing this has going for it is that it has the most precise references in. Note to damiano: there exist people without access to mathscinet, and for them your answer might be rather more cryptic than it could be. – Kevin Buzzard Mar 27 2010 at 12:55
I have added an explicit reference to the paper by Severi. The reference for Harris is: Joe Harris; "On the Severi problem." Invent. Math. 84 (1986), no. 3, 445--461. – damiano Mar 28 2010 at 11:22
[Added disclaimer: What follows is the product of probably faulty memory combined with a limited understanding in the first place, so should be taken with a grain of salt.]
Dear Kevin,
I believe that Brill--Noether of curves gives the kind of examples you are looking for. (My understanding, probably imperfect if not completely wrong, is that they made certain general position arguments about existence of linear systems that were just wrong, because they didn't realize that certain kinds of geometric condition were universal, and so, although they look special, are in fact general.)
You might try looking at the old papers of Harris (or maybe Eisenbud and Harris) about linear systems on curves.
Also, the introduction (by Zariski) to Zariski's collected works is interesting. He began in the Italian school, but then became instrumental in introducing algebraic tools.
Also, I think that the newest edition of his book on algebraic surfaces (a report on the results of the Italian school) has annotations by Mumford, which are very illuminating with regard to the differences and similarities between the Italian style and a more modern style.
P.S. Here's a way to imagine the kind of error one could make in general position arguments (although obviously any actual such error made by the Italians would be many times more subtle): Let $P_1,\ldots,P_8$ be eight points. Choose two elliptic curves $E_1$ and $E_2$ passing through the 8 points, and now try to choose them in general position (with respect to the property of containing the 8 points) so that the 9th point of intersection is in general position with regard to the $P_i$. This might seem plausibly possible if you don't think it through, but of course is in fact impossible, because the 8 given points uniquely determine the 9th one. (The possible $E_i$ lie in a pencil.) My impression is that the Italians made errors of that sort, but in much more subtle contexts.
-
A technical objection: while B-N theory does fit the bill of far-sightedness but imprecision in algebraic geometry, both Alexander von Brill and Max Noether were in fact German... (And perhaps they were followers of Riemann in the way they thought about families of curves?) – Tim Perutz Mar 26 2010 at 13:51
"eight points"-->"eight points in P^2" of course. I see your point! I can believe that "generic" would have been a stumbling-block before we understood what a generic point really was. – Kevin Buzzard Mar 26 2010 at 13:51
PS @Tim: fair point! But of course my question wasn't specifically anti-italian ;-), I just wanted to see how one could make mistakes in algebraic geometry if one wasn't really too fussed about technical results in commutative algebra. – Kevin Buzzard Mar 26 2010 at 13:53
Dear Tim, Thank you for pointing this out. My understanding of this field, both now and historically, is pretty hazy; I was trying to remember a comment of Joe Harris (either made in person or in a paper; I don't remember which anymore) about an error of the Italians in studying the moduli of curves. Now that I think again, I wonder if I am confusing a memory of reading Joe Harris with a memory of reading Mumford. I will leave my response, since it may still be helpful for someone, but will add a disclaimer. – Emerton Mar 26 2010 at 18:03
I had the impression that there were false claims concerning rationality of certain Fano varieties, but I don't have any specific references on hand. For a more definite example, take a look at the introduction to Mumford's "Rational equivalence of 0-cycles on surfaces". In this paper, he disproves something that Severi took as self-evident.
-
Ironically, Mumford used Severi's ideas to do so. – jvp Mar 26 2010 at 14:44
Yes, which he acknowledges. To quote, "Now after criticizing Severi like this, I have to admit...". I've always admired Mumford's scholarship and sense of fairness. Thanks to which we have "Castelnuovo-Mumford" regularity... – Donu Arapura Mar 26 2010 at 15:13
Fano's list of 3-dimensional "Fano varieties" (so named by V.A.Iskovskikh) missed an entire class, of genus 12 if I recall correctly. This list was made complete later by Iskovskikh and Mukai-Umemura.
-
Of course, we all know great mathematicians who constantly make mistakes even now, and not because of foundations.
In any case, it's not as if "long dead Italian algebraic geometers" is a category of people who were all uniformly bad. For example, Enriques was notoriously careless, while Castelnuovo was much more scrupulous (I may be wrong, but as far as I know he has not made any real mistake). I remember reading of a competition for a paper on the resolution of singularities of surfaces; Castelnuovo and Enriques were on the committee. Beppo Levi presented his famous paper on the resolution of singularities for surfaces; Enriques asked him for a couple of examples and was convinced; Castelnuovo was not. The discussion got heated. Enriques exclaimed "I am ready to cut my head if this does not work" and Castelnuovo replied "I don't think that would prove it either".
-
I should perhaps stress again that my question was most definitely not supposed to be an "anti-Italian" rant! It was an attempt to get some insight, on my part, of how one can do algebraic geometry badly if one doesn't have a big dollop of commutative algebra to back it up. Although I accepted damiano's answer, in truth I think Emerton told me the most: the point is that the closer a "generic point" is to a vague idea than to a non-closed point on a scheme, the more likely you are to be in trouble. – Kevin Buzzard Mar 28 2010 at 8:09
I understand, and most certainly I was not offended. My point is that the general opinion that the old Italian algebraic geometers made mistakes because they did not have the proper foundations may be roughly right, but also simplistic. It it true that the Italian school went slowly astray, as discussed in Mumford's very interesting email message; but how much of it was due to personalities of the leaders of the school (particularly Severi) and how much to lack of proper foundations, I'll leave to others more competent than me to answer. – Angelo Mar 28 2010 at 10:31
As I feel it (with not enough historical competence to prove it) in this and similar ages, the successive rigorous foundation came as an answer, after a call raised by the possibilities offered by new ideas and new fields that were disclosed to mathematicians. The same happened in different times with analysis and set theory. Mathematicians are not intrinsically rigorous; the rigor always came as a safety tool. – Pietro Majer Jun 24 2010 at 8:56
A beautiful survey article on the Italian school, with a discussion of several errors of all kinds by Severi, can be found in
• The Legacy of Niels Henrik Abel (Oslo 2002), Springer-Verlag 2004: Brigaglia, Ciliberto, Pedrini, The Italian school of algebraic geometry and Abel's legacy, 295--347
-
There is also a longer article by Brigaglia and Ciliberto, "Italian algebraic geometry between the two world wars" (originally a chapter in a book on Italian mathematics of the interwar period), translated into English and published as Queen's Papers in Pure and Applied Mathematics, vol 100, 1995, Kingston, Ontario – Victor Protsak Aug 16 2010 at 4:51
http://www.physicsforums.com/showthread.php?t=108706
Physics Forums
## unbiased estimator
Hi, I'm working on the following problem and I need some clarification:
Suppose that a sample is drawn from a $$N(\mu,\sigma^2)$$ distribution. Recall that $$\frac{(n-1)S^2}{\sigma^2}$$ has a $$\chi^2$$ distribution. Use theorem 3.3.1 to determine an unbiased estimator of $$\sigma$$
Theorem 3.3.1 states:
Let X have a $$\chi^2(r)$$ distribution. If $$k>-\frac{r}{2}$$ then $$E(X^k)$$ exists and is given by:
$$E(X^k)=\frac{2^k(\Gamma(\frac{r}{2}+k))}{\Gamma(\frac{r}{2})}$$
My understanding is this:
The unbiased estimator equals exactly what it's estimating, so $$E\left(\frac{(n-1)S^2}{\sigma^2}\right)$$ is supposed to be $$\sigma^2$$, which is 2(n-1).
Am I going the right way here?
CC
Ok, so after hours of staring at this thing, here's what I did: I let k=1/2 and r=n-1, so the thing looks like this: $$E[S]=\sigma\sqrt{\frac{2}{n-1}}\,\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-1}{2})}$$ so I use the property of the gamma function that says: $$\Gamma(\alpha)=(\alpha-1)!$$ which leads to: $$E[S]=\sigma\sqrt{\frac{2}{n-1}}(n-1)$$ So now do I just flip over everything on the RHS, leaving $$\sigma$$ by itself, and that's the unbiased estimator, i.e. $$\sqrt{2(n-1)}E[S]=\sigma$$? Any input will be appreciated. CC
OK, anyone who looked and ran away, here at last is the solution: (finally) $$E[S]=\sigma\sqrt{\frac{2}{n-1}}\,\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n-1}{2})}$$ is indeed correct; however, my attempt to reduce the RHS with the properties of the Gamma function was wrong. The unbiased estimator is obtained by isolating the $$\sigma$$ on the RHS and then using properties of the expectation to get: $$E\left(\sqrt{\frac{n-1}{2}}\,\frac{\Gamma(\frac{n-1}{2})}{\Gamma(\frac{n}{2})}\,S\right)=\sigma$$ So at last it has been resolved. WWWWEEEEEEEEEEEeeeeeeee CC
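For what it's worth, the correction can be sanity-checked numerically. Here is a sketch in Python (the name `c4` for the factor $$E[S]/\sigma$$ follows quality-control convention; the sample size, seed, and number of replications are arbitrary choices, not from the thread):

```python
import math
import random

def c4(n):
    """E[S] = c4(n) * sigma for a size-n sample from N(mu, sigma^2)."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def unbiased_sigma_hat(sample):
    """S divided by c4(n), so that its expectation is exactly sigma."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / c4(n)

random.seed(0)
n, sigma, reps = 10, 2.0, 20000
est = sum(unbiased_sigma_hat([random.gauss(0.0, sigma) for _ in range(n)])
          for _ in range(reps)) / reps
print(round(c4(10), 4))  # 0.9727: S underestimates sigma by about 2.7% at n = 10
print(round(est, 2))     # close to sigma = 2.0
```

The thread's factor $$\sqrt{\frac{n-1}{2}}\,\frac{\Gamma(\frac{n-1}{2})}{\Gamma(\frac{n}{2})}$$ is exactly $$1/c_4(n)$$.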
http://stats.stackexchange.com/questions/tagged/random-generation%20hypothesis-testing
# Tagged Questions
3answers
373 views
### Testing data against a known distribution
I asked a question similar to this a while ago, and the general answer was "your question is too vague". So let me try again with a little more detail... I have written a program which generates ...
1answer
151 views
### How different random number generators can be more similar than identical ones?
I have 6 random number generators. They are "black boxes", i.e. I do not know if they are the same or different. For example I do not know if they provide the same arithmetic averages and/or root mean ...
1answer
509 views
### Scrambling and correlation in low discrepancy sequences (Halton/Sobol)
I am currently working on a project where I generate random values using low discrepancy / quasi-random point sets, such as Halton and Sobol point sets. These are essentially $d$-dimensional vectors ...
8answers
475 views
### Testing random variate generation algorithms
Which methods are used for testing random variate generation algorithms?
http://mathhelpforum.com/calculus/66286-mechanics-stuck-sum-simple-harmonic-motion-help.html
Thread:
1. Mechanics stuck on a sum on simple harmonic motion:help:
A particle P is moving on a straight line with S.H.M. of period pi/3 s. Its maximum speed is 5 m/s. Calculate the amplitude of the motion and the speed of P 0.2s after passing through the centre of oscillation.
My workings:
T=2pi/w So w=6
x=0, v=5
5^2=6^2(a^2-0) so a=5/6 (matched with book)
t=0.2s
v=-awsinwt=-5/6*6sin (1.2)=4.66
The answer given in book is 1.81.
Somebody help. What did go wrong with this very simple sum?
2. Originally Posted by ssadi
A particle P is moving on a straight line with S.H.M. of period pi/3 s. Its maximum speed is 5 m/s. Calculate the amplitude of the motion and the speed of P 0.2s after passing through the centre of oscillation.
My workings:
T=2pi/w So w=6
x=0, v=5
5^2=6^2(a^2-0) so a=5/6 (matched with book)
t=0.2s
v=-awsinwt=-5/6*6sin (1.2)=4.66
The answer given in book is 1.81.
Somebody help. What did go wrong with this very simple sum?
x = (5/6) sin (6t) NOT (5/6) cos (6t).
You should think about why ......
3. Originally Posted by ssadi
A particle P is moving on a straight line with S.H.M. of period pi/3 s. Its maximum speed is 5 m/s. [...] The answer given in book is 1.81. Somebody help. What did go wrong with this very simple sum?
Interesting. I've never heard of this before, but according to wikipedia, simple harmonic motion is given by:
$x(t)=A\cos{(2\pi ft+\phi)}$
where $x(t)$ is displacement, $t$ is time, $A$ is amplitude, $f$ is frequency, and $\phi$ is phase.
We also note that period $T$ is given by:
$T=\frac{1}{f}$
This means that $f=\frac{3}{\pi}$. So:
$x(t)=A\cos{(2\pi \frac{3}{\pi}t+\phi)}=A\cos{(6t+\phi)}$
To find velocity, we take the derivative:
$v(t)=x'(t)=-6A\sin{(6t+\phi)}$
To find velocity extrema, we take the derivative of $v(t)$ and set it equal to zero:
$v'(t)=a(t)=-36A\cos{(6t+\phi)}$
$-36A\cos{(6t+\phi)}=0$
$6t+\phi=\frac{(2n-1)\pi}{2}$
$t=\frac{(2n-1)\pi-2\phi}{12}$
Let's plug that into our velocity function:
$-6A\sin{[6\frac{(2n-1)\pi-2\phi}{12}+\phi]}=5$
$A=-\frac{5}{6\sin{[\frac{(2n-1)\pi}{2}]}}$
$A=\{-\frac{5}{6\sin{[\frac{\pi}{2}]}},-\frac{5}{6\sin{[\frac{3\pi}{2}]}}\}$
$A=\{-\frac{5}{6},\frac{5}{6}\}$
Since the amplitude must be a positive value, we can just say:
$A=\frac{5}{6}$
Now, the center of oscillation is another way of saying $x(t)=0$. So:
$x(t)=\frac{5}{6}\cos{(6t+\phi)}=0$
$\cos{(6t+\phi)}=0$
$6t+\phi=\frac{(2n-1)\pi}{2}$
$t=\frac{(2n-1)\pi-2\phi}{12}$
Let's let $t=0$ and $n=1$:
$0=\frac{[2(1)-1]\pi-2\phi}{12}$
$\phi=\frac{\pi}{2}$
And we plug that into our velocity function:
$v(t)=-5\sin{(6t+\frac{\pi}{2})}$
Then plug in $t$:
$v(0.2)=-5\sin{(6[0.2]+\frac{\pi}{2})}\approx-1.81$
But of course we know that speed is relative, and so the value is actually:
$v(t_0+0.2)\approx\pm1.81\;|\;x(t_0)=0$
4. Originally Posted by mr fantastic
x = (5/6) sin (6t) NOT (5/6) cos (6t).
You should think about why ......
Tell me why; I am new to the chapter and am not yet acclimatised.
5. Originally Posted by hatsoff
Interesting. I've never heard of this before, but according to wikipedia, simple harmonic motion is given by: $x(t)=A\cos{(2\pi ft+\phi)}$ [...] $v(t_0+0.2)\approx\pm1.81\;|\;x(t_0)=0$
I used the formulas from book:
x=acoswt
v=-awsinwt
a=-aw^2coswt=-w^2(acoswt)=-w^2x
Believe me, the sum isn't supposed to be that long
6. Originally Posted by ssadi
I used the formulas from book:
x=acoswt
v=-awsinwt
a=-aw^2coswt=-w^2(acoswt)=-w^2x
Believe me, the sum isn't supposed to be that long
Your book's function is just a simplification where the phase is zero. The important thing to remember is that you're not looking for $v(0.2)$. Rather, you're looking for $v(t_0+0.2)\;|\;x(t_0)=0$.
7. Originally Posted by ssadi
Tell me why; I am new to the chapter and am not yet acclimatised.
From the data given in the question, you can assume the centre of motion to be at x = 0 and for the particle to start from this position (ie. x = 0 at t = 0). This avoids the detailed calculations that have been provided ....
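The difference between the two conventions is easy to check numerically. A short Python sketch (variable names are mine, not from the thread):

```python
import math

T = math.pi / 3      # period
w = 2 * math.pi / T  # angular frequency: 6 rad/s
A = 5.0 / w          # amplitude from v_max = A*w = 5, i.e. 5/6 m

t = 0.2

# Starting from the centre of oscillation at t = 0: x = A sin(wt),
# so v = A*w*cos(wt).
v_sin = A * w * math.cos(w * t)
print(round(abs(v_sin), 2))  # 1.81, the book's answer

# The x = A cos(wt) convention instead starts at an extreme point:
v_cos = -A * w * math.sin(w * t)
print(round(abs(v_cos), 2))  # 4.66, the OP's value
```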
http://unapologetic.wordpress.com/2010/10/07/commutant-algebras-in-general/?like=1&_wpnonce=76a2d9cd23
# The Unapologetic Mathematician
## Commutant Algebras in General
And in my hurry to get a post up yesterday afternoon after forgetting to in the morning, I put up the wrong one. Here’s what should have gone up yesterday, and yesterday’s should have been now.
Now we can describe the most general commutant algebras. Maschke’s theorem tells us that any matrix representation $X$ can be decomposed as the direct sum of irreducible representations. If we collect together all the irreps that are equivalent to each other, we can write
$\displaystyle X\cong m_1X^{(1)}\oplus m_2X^{(2)}\oplus\dots\oplus m_kX^{(k)}$
where the $X^{(i)}$ are pairwise-inequivalent irreducible matrix representations with degrees $d_i$, respectively. We calculate the degree:
$\displaystyle\deg X=\sum\limits_{i=1}^k\deg\left(m_iX^{(i)}\right)=\sum\limits_{i=1}^km_id_i$
Now, can a matrix in the commutant algebra send a vector from the subspace isomorphic to $m_iX^{(i)}$ to the subspace isomorphic to $m_jX^{(j)}$? No, and for basically the same reason we saw in the case of $X^{(i)}\oplus X^{(j)}$. Since it’s an intertwinor, it would have to send the whole $\mathbb{C}[G]$-orbit of the vector — a submodule isomorphic to $X^{(j)}$ — into the target subspace $m_jX^{(j)}$, but we know that that submodule itself has no submodules isomorphic to $X^{(i)}$.
And so any such matrix must be the direct sum of one matrix in each commutant algebra $\mathrm{Com}_G\left(m_iX^{(i)}\right)$. But we know that these matrices are of the form $M_{m_i}\boxtimes I_{d_i}$. And so we can write
$\displaystyle\mathrm{Com}_G(X)=\left\{\bigoplus\limits_{i=1}^k(M_{m_i}\boxtimes I_{d_i})\bigg\vert M_{m_i}\in\mathrm{Mat}_{m_i}(\mathbb{C})\right\}$
which has dimension
$\displaystyle\dim\mathrm{Com}_G(X)=\sum\limits_{i=1}^km_i^2$
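For readers who like to see this concretely, here is a small numerical sketch with numpy. The $X_i$ below are arbitrary matrices standing in for the images of a single group element under each irrep (not a genuine representation); the point is only that blocks of the form $M_{m_i}\boxtimes I_{d_i}$ commute with blocks $I_{m_i}\boxtimes X_i$, and that the parameter count matches the dimension formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two inequivalent "irreps" of degrees d = [2, 3] with
# multiplicities m = [2, 1].
d, m = [2, 3], [2, 1]
X = [rng.standard_normal((di, di)) for di in d]

def block_diag(blocks):
    """Assemble a block-diagonal matrix from a list of square blocks."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

# Representation matrix: direct sum over i of m_i copies of X_i,
# i.e. blocks of the form I_{m_i} (x) X_i.
R = block_diag([np.kron(np.eye(mi), Xi) for mi, Xi in zip(m, X)])

# A generic commutant element: direct sum of blocks M_{m_i} (x) I_{d_i}.
M = [rng.standard_normal((mi, mi)) for mi in m]
C = block_diag([np.kron(Mi, np.eye(di)) for Mi, di in zip(M, d)])

print(np.allclose(R @ C, C @ R))   # True: C commutes with R
print(sum(mi ** 2 for mi in m))    # 5 = 2^2 + 1^2, the commutant's dimension
```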
http://math.stackexchange.com/questions/tagged/game-theory+nash-equilibrium
# Tagged Questions
0answers
38 views
### Nash equilibria in 3-player game
Consider 3-player game. Players $x,y,z$, each player has two strategies. $x$: $x_1$ and $x_2$, $y$: $y_1$ and $y_2$, $z:z_1$ and $z_2$. The outcome of the game are represented by the triple ...
3answers
73 views
### nash equilibirum help! seems tricky
Any advice for finding all nash equilibrium for this symmetric game? (B,B) looks like one but I feel like there are more. I tried looking for strictly dominant strategies, but only A weakly dominates ...
1answer
42 views
### Is there an example of zero-sum game that has a Nash equilibrium which is not subgame perfect?
As a refinement of Nash equilibrium, it is known that not all Nash equilibria are subgame perfect. But it seems to me in zero-sum games of perfect information, Nash equilibrium coincides with subgame ...
0answers
25 views
### Algorithm to verify that a weak Nash equilibrium is an ESS, or a strict Nash equilibrium
Is there any algorithm that might assist me in checking whether a weak Nash equilibrium in a signalling game is also an Evolutionarily Stable Strategy, or a strict Nash?
0answers
50 views
### Game theory: Efficient and stable mechanisms
I am having some trouble understanding the notion of efficient and stable mechanisms in game theory. Could someone explain both concepts informally?
1answer
29 views
### Question on the construction of mapping from space of strategy profile into itself in Nash(1951)
To appeal to Brouwer fixed point theorem, Nash(1951) constructed a continuous mapping $\operatorname{T}$ from strategy profile space into inself: For player $i$, the probability of a pure strategy ...
0answers
64 views
### Unexpected hanging paradox maxmin strategies
I have a question about strategies of the players of Unexpected hanging paradox (I am very sorry for a long topic, topic exist already for a while, during this time I try to develop idea how to solve ...
1answer
98 views
### Finding the payoff matrix of a game
A two player zero-sum game can be represented by a $m\times n$ payoff matrix $M$ having $m$ rows and $n$ columns with values in $[0,1]$. The value $M(x,y)$ represent the payoff given to player $1$ ...
2answers
114 views
### Cournot-Nash Equilibrium in Duopoly
This is a homework question, but resources online are exceedingly complicated, so I was hoping there was a fast, efficient way of solving the following question: There are 2 firms in an industry, ...
1answer
38 views
### Theorem that stable equilibria in iterated games are equivalent to coalition-based static equilibria
Consider an $n$-player nonzero sum finite game $G$. I have a vague recollection of a wonderful paper proving an equivalence between (1) steady state Nash equilibria of $G$ played countably many times ...
1answer
232 views
### Nash equilibria and best response functions
a) Let $G=(A,u)$ be a strategic game such that, for each $i \in N$ $A_i$ is a nonempty, convex, compact subset of $R^{m_i}$ $u_i$ is continuous For each $a_{-i}$, $u_i(a_{-i}, . )$ is quasi-concave ...
1answer
127 views
### Prove set of Nash equilibria is closed?
Is this even possible with just the formal definition of a Nash equilibrium, that is, without any additional conditions, such as the utility function is continuous? Thanks.
2answers
108 views
### What is the Nash Equilibrium of the Monty Hall Problem?
The Monty Hall problem or paradox is famous and well-studied. But what confused me about the description was an unstated assumption. Suppose you're on a game show, and you're given the choice of ...
0answers
73 views
### Comparing Nash equilibrium and Pareto optimal actions
Suppose that $(x_{i}, x_{j})$ identify actions for two players $(i,j)$. If we define Pareto optimal actions by $$h(x_i) +h(x_j)- \eta[p(x_i)+p(x_j)]=2\gamma$$ and Nash equilibrium actions by ...
1answer
125 views
### Finding mixed Nash equilibria in continuous games
I'm taking my first (graduate-level) game theory class. I understand how to find Nash equilibria in simple games, such as those given in finite tables, and can see (usually) how to find the mixed ...
2answers
188 views
### Are all Nash equilibrium pure strategies also Nash equilibrium mixed strategies.
while going over wiki page on Battle of the Sexes game I found something funny. This game has two pure strategy Nash equilibria, one where both go to the opera and another where both go to the ...
0answers
243 views
### Mixed Strategy Nash Equilibrium of Rock Paper Scissors with 3 players?
It seems like most game theory tutorials focus on 2-player games and often algorithms for finding Nash equilibria break down with 3+ players. So here is a simple question: Is ...
0answers
255 views
### Analytically solving (calculating Nash equilibrium for) 3-player extensive form games
Let's say we extend the popular half-street Kuhn poker variant to 3 players. The rules would be as follows: ...
0answers
45 views
### Can the Nash bargaining solution be applied in repeated game?
I am trying to develop a model involving two agents who interact strategically to set an optimal time for a joint work. These agents will have to meet repeatedly. I want to derive the optimal time for ...
2answers
178 views
### Does chess have more Nash equilibria than you can find through backwards induction?
All equilibria found with backwards induction on a tree of a perfect information game are Nash equilibria, but in general the reverse is not true: ...
1answer
67 views
### vickery auction question(second-price auction)
The question is as follow, Alice and Bob would both like to own the same manuscript. The manuscript is worth 5 million to Alice and worth 3 million to Bob. The present owner of the manuscript ...
1answer
39 views
### Correlated Equilibrium - Transforming a non-linear objective function into a linear one
I am trying to transform a non-linear objective function into a linear one, in order to create a LP. How might I go about to do this (I have never taken a course in linear programming). I have that I ...
1answer
77 views
### subgame perfect nash equilibrium for war of attrition
the question is as follow: suppose that two players are playing war of attrition, that means both of them could choose either to fight or quit, if either one of them quit, the game ends, and if ...
2answers
761 views
### cournot competition with N-firms
The question is as follow: Here is how we can think of N-firm Cournot competition. Assume all the firms have the same marginal cost C > 0. Firm 1 chooses Q1, Firm 2 chooses Q2, and so on. The market ...
1answer
529 views
### cournot equilibrium and stackelberg equilibrium question
Question is as follow: there are 2 firms that want to enter the apple juice market in country A. There are no existing firms in the market or potential entrants. They need to decide on yearly ...
1answer
95 views
### mixed strategy nash equilibrium question!
Suppose the game consists of only $2$ players, player $1$ and player $2$, and each of them has only $2$ strategies to choose between. This gives a $2$ by $2$ payoff matrix. Player $2$ has no ...
1answer
122 views
### Question on mixed nash equilibrium!
The question is as follows: Think of the Golden Ball game. Now player 1 is money-minded and jealous, and player 2 is very good-hearted, so the payoff matrix is follows: ...
2answers
63 views
### Is equilibrium selection in zero sum game trivial?
Does a zero sum game always has a unique payoff, whatever the nash equilibrium selected is ? even with mixed strategies ? If so, what is the proof ?
2answers
398 views
### Subgame Perfect Nash Equilibrium
My homework question is summarized below: There are 7 players (say P1,P2,...,P7) trying to split 100 dollars. The game starts with P1 proposing an allocation of the 100 dollars to each ...
1answer
244 views
### Iterated prisoners dilemma with discount rate and infinite game averages
Suppose we have two players who are perfectly rational (with their perfect rationality common knowledge) playing a game. On round one both players play in a prisoners dilemma type game. With payoffs ...
1answer
211 views
### Finding Nash Equilibria with Calculus
The problem is summarized as: There are two players. Player 1's strategy is h. Player 2's strategy is w. Both of their ...
1answer
284 views
### Unable to find Nash equilibria in mixed strategies
Here is the strategic form game: Player 2 Left Middle Right Top 2,2 0,0 1,3 Player 1 Middle 1,3 3,0 1,0 ...
0answers
52 views
### what exactly does symmetric game and symmetric equilibrium mean?
I am confused about the ideas of a symmetric game and symmetric equilibrium of a game under the following conditions. 1) pure strategy Nash equilibrium 2) Nash bargaining game where players set a ...
1answer
802 views
### Mixed-strategy Nash equilibria
I didn't find in books, so I'm asking - Mixed-strategy Nash equilibria is always only one or doesn't exist for the one certain game? And I know that there can be several(and can not be at all) pure ...
1answer
154 views
### Find the Nash Equilibrium for a Cournot Game
Consider a Cournot game with $2$ firms. Firm $i$ has constanct marginal cost $C_i$, where $C_1 \lt C_2$. Inverse demand is linear: $p(q)=A-q$ (where $A \gt 2C_2 - C_1$). Find the Nash Equilibrium.
2answers
122 views
### Am I correct in thinking this game has neither Nash Equilibria nor dominant strategies?
I've taken this example from some lecture slides. The slides state there is no Nash equilibrium. I suspect there is also no dominant strategy for either player. Is this true? Two players $i$ and $j$ ...
1answer
72 views
### Nash equilibria of mixed strategies
I am given the following game to find nash equilibria in pure and mixed strategies: $\begin{pmatrix}& & Little John &\\ & & c & w \\Big John & c & (5,3) & (4,4) \\ ...
2answers
1k views
### Cournot Nash Equilibrium Between Two Firms
Suppose we have two firms with specialized, but similar products. Suppose market demand for the two products is: $$p_1(q_1,q_2)=a-bq_1-dq_2$$ $$p_2(q_1,q_2)=a-bq_2-dq_1$$ where $d \in (-b,b)$. Suppose ...
1answer
289 views
### Finding Nash equilibria using Support Enumeration
Chapter 3 of the Book "Algorithmic Game Theory" introduces an algorithm (page 8 of that PDF) to find mixed Nash equilibria for a bimatrix game $(A, B)$, which I struggle to understand. ($M$ and $N$ ...
1answer
117 views
### Mixed strategy nash equilibria in from 2xN bimatrix form
I'm looking for a way of finding (manually!) mixed strategy Nash equilibria in a 2xN game. Calling player 1 the player with two strategies and player 2 the one with N strategies, I've constructed ...
1answer
135 views
### Apply game theory/Nash equilibrium in computer security scenario
I want to apply the game theory to a scenario in security . where two ppl the optimal outcome of a game is one where no player has an incentive to deviate from his or her chosen strategy after ...
1answer
220 views
### Meaning of a partial derivative here?
I am given a 'tariff' function for two countries, $i=1, 2$. Both players can select a tariff between 0 and 100. If player $i$ selects $x_i$ and player $j$ selects $x_j$, country $i$ gets a payoff of ...
1answer
250 views
### Finding Nash equilibrium aka finding where lines intersect
I am tagging this as multivariable calculus because it potentially involves taking partial derivatives. I am working on some mathematical treatment for Cournot duopoly models (not homework, just ...
1answer
172 views
### Algebraically finding a Nash equilibrium
Here's the problem that relates to a whole class of problems to which I am trying to figure out a general solution. Given two players 1 and 2 who can select a number from the interval $[0, 1]$, ...
3answers
219 views
### Is there experimental evidence that people ever play mixed Nash equilibrium in real games?
Have any studies been done that demonstrate people (not game theorists) actually using mixed Nash equilibrium as their strategy in a game?
2answers
224 views
### Newspaper competition
A newspaper launches a competition. It said that readers should submit one number between 1 and 1000. A £2000 prize would be awarded to the person that got the closest to 2/3 of the mean of all the ...
1answer
865 views
### Symmetric nash equilibrium
I was reading this paper on position auctions for web ads. Basically, there are N slots each with an expected number of clicks (in a particular time period) $x$. Each agent makes a bid $B_i$ of how ...
http://physics.stackexchange.com/questions/34924/physical-meaning-of-the-sign-basis-in-quantum-mechanics/34980
# Physical meaning of the sign basis in quantum mechanics
If we take a hydrogen atom as qubit, let
$\lvert0\rangle$ = unexcited state
$\lvert1\rangle$ = excited state
then what is the meaning of measuring the qubit value in the sign basis? If the atom may only be in excited or unexcited state, but $\lvert+\rangle$ and $\lvert-\rangle$ are superpositions of those states — then what would the outcome of the measurement be — also a superposition of $\lvert+\rangle$ and $\lvert-\rangle$? Can anyone please help to understand the idea behind the sign basis?
-
## 2 Answers
As the measurement postulate says, if you projectively measure a qubit, initially in a state $|\psi\rangle$, in the basis $\{|+\rangle,|-\rangle\}$, you will get the state $|+\rangle$ with probability $|\langle+|\psi\rangle|^2$, and similarly for $|-\rangle$.
For the particular implementation you mention, a two-level atom whose eigenstates are the logical $|0\rangle,|1\rangle$ states, there is no general, useful, real physical quantity${}^1$ represented by the operator $$X=|0\rangle\langle1|+|1\rangle\langle0|$$ whose eigenstates are $|+\rangle$ and $|-\rangle$ (check it!). To do a projective measurement on that basis, the standard (though not necessarily unique) procedure is to apply a $\pi/2$ Rabi pulse which will bring $|+\rangle$ to $|0\rangle$ and $|-\rangle$ to $|1\rangle$, and measure in the computational basis. One can then apply an inverse pulse if needed.
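As a quick numerical sanity check of that procedure, here is a small sketch using numpy (the Hadamard matrix stands in for the $\pi/2$ pulse, and the state $|\psi\rangle$ is an arbitrary example): measuring directly in $\{|+\rangle,|-\rangle\}$ gives the same probabilities as rotating first and then measuring in the computational basis.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Hadamard: maps |+> -> |0> and |-> -> |1>, playing the role of the pulse
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

psi = 0.6 * ket0 + 0.8 * ket1  # an arbitrary normalized state

# Direct projective measurement in the sign basis
p_plus = abs(plus @ psi) ** 2
p_minus = abs(minus @ psi) ** 2

# Equivalent: rotate with H, then measure in the computational basis
phi = H @ psi
print(round(p_plus, 2), round(abs(phi[0]) ** 2, 2))   # 0.98 0.98
print(round(p_minus, 2), round(abs(phi[1]) ** 2, 2))  # 0.02 0.02
```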
There are other implementations, however, where this basis has a more physical significance. For example, if your logical states are the up and down states of a spin-$\frac{1}{2}$ particle measured along the $z$ direction, then $X$ is the spin along the $x$ direction (which is no coincidence).
${}^1$ For any given atom, though, you can probably find detectable physical properties of interest. If, say, $|0\rangle$ is an $s$ state and $|1\rangle$ is a $p_z$ state, which may very well be the case, you'll find that the $|\pm\rangle$ states are localized towards either pole. A measurement of position above/below the $xy$ plane will closely approximate an $X$ measurement in most such circumstances. Similarly, a measurement of momentum going to positive or negative $z$ will approximate a measurement along $Y=i|0\rangle\langle1|-i|1\rangle\langle0|$, whose eigenstates $|\pm i\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm i|1\rangle)$ look like $e^{\pm ikz}$ near the origin.
-
It is not true that the atom may be only in either unexcited or excited state; as everywhere in quantum mechanics, the atom may be in any superposition of those. However, those states are not eigenstates of the hydrogen atom, therefore the superposition will evolve (in the qubit picture, you'll get an intrinsic $z$-rotation of the qubit). Also note that the lifetime of your (evolving) superposition will not be longer than the lifetime of the excited state (and the emission of the photon on decay certainly counts as measurement in the eigenbasis, because the photon will only be emitted for the excited state).
However, superpositions of two longer-lived excited states of atoms are indeed used to implement qubits.
Note that, of course, you get the $\lvert+\rangle$ state in the atom as sum of the wave functions of the ground state and the excited state (times normalization, of course). Thus you can easily calculate what the state "looks like" in space (that of course depends on the specific excited state you've chosen).
-
The question was about the outcome of the measurement. Measurements collapse superpositions, so it's not helpful to inform that atoms can be in superpositions before measurement. What is being asked is what is being measured, physically. – user1247 Aug 26 '12 at 20:51
@user1247: After an ideal measurement the atom is in the state you measured, that's how an ideal measurement is defined. Before the measurement it probably wasn't. – celtschk Aug 26 '12 at 20:53
Isn't the question how the measurement is made, not what the wave function might look like after it is made? – user1247 Aug 26 '12 at 21:19
In other words, what physically corresponds to the operator |+->? – user1247 Aug 26 '12 at 21:35
http://math.stackexchange.com/questions/280157/number-theory-and-congruency/280161
# Number Theory and Congruency
I have the following problem: $$2x+7\equiv 3 \pmod{17}$$
I know HOW to do this problem. It's as follows:
$$2x\equiv 3-7\pmod{17}\\ x\equiv-2\equiv 15\pmod{17}$$
But I have no idea WHY I'm doing that. I don't really even understand what the problem is asking, I'm just doing what the book says to do. Can someone explain what this problem is asking and what I'm finding? Thanks
-
I know that x=6.5 isn't a solution, but why? Is it just understood that we don't accept fractional solutions? I read this problem as "I took a number x, multiplied it by 2, added 7, and got a number y. I then divided y by 17, and found the remainder to be 3". How does the restriction on x having to be an integer enter into the problem? – barrycarter Jan 16 at 18:58
## 6 Answers
Starting with the simpler question: I took a number, multiplied it by 2, added 7, and got 3. What is the number?
The answer, as you point out, is -2.
How is the actual question different?: I took an INTEGER x (fractions not allowed), multiplied it by 2, added 7 and got a number y. When I divide y by 17, the remainder is 3. What is x?
Does the original answer still work? If we take x=-2, divide it by 17 and take the remainder, we get 15. Now, we multiply by 2 to get 30, and add 7 to get y=37. Is this a valid solution? When we divide y by 17, the remainder is 3. So, yes y=37 is a solution. So the answer is x=-2.
Are there other solutions? Let's try x=35. If we multiply x by 2 and add 7, we get 77. Then, if we divide by 17, and take the remainder, we get 9. So, x=35 is not a solution.
If we try many integers for x, we'll find that x must be one of the following:
...., -36, -19, -2, 15, 32, 49, ....
In other words, any multiple of 17 plus 15 will work for x, and ONLY those numbers will work. Of course, fractions like 13/2 (ie 6.5) will also work, but we don't include those in modulo problems.
So the "solution set" (not just a single solution) is:
{x = 15 + 17n, any integer n}
Another way of saying the above is: "x=15 (mod 17)" [with 3 lines in the equal sign, standing for "congruence"].
Of course, you don't have to use trial and error to find x. As you noted, you actually only need to solve for one value of x, and the others will differ by multiples of 17.
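The trial-and-error above is easy to mechanize. A small sketch (plain Python, nothing assumed beyond the congruence itself) that checks every residue class modulo 17:

```python
# Try every residue class 0..16 as a candidate for x in 2x + 7 = 3 (mod 17).
solutions = [x for x in range(17) if (2 * x + 7) % 17 == 3]
print(solutions)  # [15]

# Any integer congruent to 15 mod 17 works too, e.g. ..., -19, -2, 15, 32, ...
assert all((2 * (15 + 17 * n) + 7) % 17 == 3 for n in range(-5, 6))
```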
-
You are just solving a linear equation in one variable, except that you are doing it modulo $17$, that is, in the field $\mathbf{Z} / 17 \mathbf{Z}$.
-
And maybe it is worth noting 17 is prime? – Peter Tamaroff Jan 16 at 20:06
The $\bmod{17}$ congruence classes are represented by $0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16$. You're trying to find out which of those classes $x$ belongs to. That's why you're doing that.
-
So if x=-2...what exactly does that tell me? – Charlie Yabben Jan 16 at 17:17
It tells you which of the $17$ congruence classes $x$ belongs to. – Michael Hardy Jan 16 at 18:56
The first two lines are solving a linear equation as you are used to. I suspect the problem is the conversion $-2 \equiv 15 \pmod {17}$. In modular arithmetic we regard numbers that differ by the modulus as equivalent. If you plug your solution in, you get $2\cdot 15 +7=37$. Now we subtract off all the multiples of $17$ we can and get $2 \cdot 15+7=37 \equiv 3 \pmod {17}$.
-
Your original question is entirely equivalent (for $x,y\in \mathbb Z$) to: $$2x+7=3+17y$$ with the perspective that you want to know the value of $x$ and are not particularly interested in the value of $y$. Note that if you have found a solution $(x,y)$ and you set $x'=x+17k$ then:$$2x'+7=2x+34+7=3+17y+34=3+17(y+2)=3+17y'$$
where $y'=y+2$. So you can always find a solution with $0 \leq x \leq 16$.
Now we start on your solution, noting that $y$ must be even, so that we can put $y=2z$:$$2x+7=3+34z$$$$2x=-4+34z$$$$x=-2+17z$$
And we choose $z$ to give us the least nonnegative value of $x$, though this equation gives us the whole family of solutions.
Basically the notation of modular arithmetic enables the distinct roles of $x$ and $y$ to be formalised in such a way that these calculations can be done more efficiently and presented more economically, without attention to detail which turns out to be irrelevant.
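The "divide both sides by 2" step can be made literal in code via the modular inverse of $2$ modulo $17$. A sketch using Python's three-argument `pow`, which accepts a negative exponent for exactly this purpose (Python 3.8+):

```python
m = 17
inv2 = pow(2, -1, m)      # modular inverse of 2 mod 17, since 2 * 9 = 18 = 1 (mod 17)
x = ((3 - 7) * inv2) % m  # 2x = 3 - 7 (mod 17)  =>  x = (3 - 7) * inv2 (mod 17)
print(inv2, x)            # 9 15
assert (2 * x + 7) % m == 3
```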
-
You are essentially finding the value of the congruence class of $17$ to which $x$ belongs. The congruence classes for $\mod{17}$ are all integers $k$ such that $0 \le k \lt 17$: $k\in \{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16\}$.
You have found that $x$ belongs to the equivalence class of $k = 15 = [15]_{\mod 17}$. This equivalence class contains infinitely many integers: $[15]_{\mod 17} = \{..., -19, -2, 15, 32, 49, ...\}:\;$ $x$ can be any number that when divided by 17 gives an integer quotient and a remainder of $15$, but we typically use its equivalence class representative as the solution to identify the class it belongs to.
The congruence classes of, say, $17$ are defined so that for each class represented by $k$, the class consists of all integers $n = 17m + k$, where $m$ is the quotient and $k$ is the remainder when $n$ is divided by $17$.
You're looking for the remainder $k$ when $x$ is divided by $17$. $x$ can actually be any number that has a remainder of $15$ when divided by $17$.
In your problem you are solving to find the value $x = k$ for which it is true that $$2x + 7 \equiv 3 \pmod {17} \implies x\equiv 15 \pmod {17}$$ $$\implies (x - 15) \equiv 0 \pmod {17} \implies x = k = 15$$ so that $x = k = 15 \implies x - k$ has no remainder when divided by 17.
-
http://polymathprojects.org/2009/07/27/proposal-boshernitzans-problem/
# The polymath blog
## July 27, 2009
### Proposal: Boshernitzan’s problem
Filed under: polymath proposals — Terence Tao @ 2:32 am
Another proposal for a polymath project is the following question of Michael Boshernitzan:
Question. Let $x_1, x_2, x_3, \ldots \in {\Bbb Z}^d$ be a (simple) path in a lattice ${\Bbb Z}^d$ which has bounded step sizes, i.e. $0 < |x_{i+1}-x_i| < C$ for some C and all i. Is it necessarily the case that this path contains arbitrarily long arithmetic progressions, i.e. for each k there exists $a, r \in {\Bbb Z}^d$ with r non-zero such that $a, a+r, \ldots, a+(k-1)r \in \{x_1,x_2,x_3,\ldots\}$?
The d=1 case follows from Szemerédi’s theorem, as the path has positive density. Even for d=2 and k=3 the problem is open. The question looks intriguingly like the multidimensional Szemerédi theorem, but it is not obvious how to deduce it from that theorem. It is also tempting to try to invoke the Furstenberg correspondence principle to create an ergodic theory counterpart to this question, but to my knowledge this has not been done. There are also some faint resemblances to the angel problem that has recently been solved.
In honour of mini-polymath1, one could phrase this problem as a grasshopper trying to jump forever (with bounded jumps) in a square lattice without creating an arithmetic progression (of length three, say).
It is also worth trying to find a counterexample if C, d, k are large enough. Note that the continuous analogue of the problem is false: a convex curve in the plane, such as the parabola $\{ (x,x^2): x \in {\Bbb R}\}$, contains no arithmetic progressions, but is rectifiable. However it is not obvious how to discretise this example.
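The objects involved are easy to experiment with computationally. Below is a hedged sketch of a generic $k$-term-progression checker for a finite set of lattice points (the function name and representation are mine, not from the post). It illustrates the parabola remark: integer points on $y=x^2$ contain no 3-term progression, while lattice points on a line obviously do. (As noted, though, the discretised parabola has unbounded step sizes, so it says nothing about the bounded-step question.)

```python
def contains_ap(points, k):
    """True if some k points of `points` form an arithmetic progression
    a, a+r, ..., a+(k-1)r with r nonzero.  `points`: iterable of int tuples."""
    pts = set(points)
    for a in pts:
        for b in pts:
            r = tuple(bi - ai for ai, bi in zip(a, b))
            if all(c == 0 for c in r):
                continue  # r must be nonzero
            if all(tuple(ai + j * ri for ai, ri in zip(a, r)) in pts
                   for j in range(k)):
                return True
    return False

parabola = [(x, x * x) for x in range(-20, 21)]
line = [(x, 0) for x in range(10)]
print(contains_ap(parabola, 3), contains_ap(line, 3))  # False True
```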
In short, there seem to be a variety of tempting avenues to try to attack this problem; it may well be that many of them fail, but the reason for that failure should be instructive.
Andrew Mullhaupt informs me that this question would have applications to short pulse rejected Boolean delay equations.
## 22 Comments »
1. A very small remark, though one that does have some bearing on how one might approach the problem, is that the d=1 case follows even from van der Waerden’s theorem, since each interval of size C (on at least one side of the origin) must contain a point in the set. One can therefore partition into equal intervals and colour them according to which member belongs to the set.
If it really is the case that no counterexample is known for any C, d and k, then that certainly sounds a tempting thing to think about.
Comment by — July 27, 2009 @ 1:34 pm
• I can see, by the way, that it is going to be difficult to restrain participants from making useful comments on a proposed project before the project really starts :-). But perhaps having a limited number of such comments serve as a good “warm-up” before the project begins, somewhat analogous to the preliminary laps an indoor cyclist takes before an Olympic race. Perhaps what we can do here in this pre-polymath stage is adhere much more strictly to the rule that no offline thinking about the problem is allowed.
Comment by — July 27, 2009 @ 2:51 pm
• I think there’s a big difference between making remarks that occur to one immediately (in the above case, I was familiar with the general principle that sets with bounded gaps “only need van der Waerden” so didn’t have to think) and making remarks that are a genuine attempt to get things going.
As for offline thinking, there is an argument for allowing it to a limited extent. Consider, for example, the case of DHJ. If I had said that anybody who wanted to was welcome to spend a month trying to solve the problem, then it is fairly unlikely that anybody would have, but at the end of the month there might have been more people who were familiar enough with the relevant ideas to be able to follow the discussion and participate. However, there could be a problem that everybody saved up their preliminary ideas and the result was an even huger burst of initial activity than we actually experienced. So in the end, perhaps the best solution is to have no offline thinking but to take very seriously the need for a lot of expository work to accompany the research.
Comment by — July 27, 2009 @ 4:02 pm
I think there are only 5 levels of reply possible, and when you reply at the 6th level it bumps the highest-level reply to the next numbering, sort of like the RSK algorithm.
Comment by — July 27, 2009 @ 5:46 pm
• Not only does the d=1 case follow from vdW, it *implies* vdW. Consider a coloring of the naturals with r colors (r minimial) and without long APs. Clearly, all r colors are used infinitely often. The set of naturals colored red either have bounded gaps (and so has long APs), or it has arbitrarily long gaps. By looking at the r-1 coloring of ever longer segments between reds, and applying the usual compactness-type argument, we get an r-1 coloring of the naturals, and by induction we’re done.
Comment by Kevin O'Bryant — July 29, 2009 @ 2:47 am
2. Gil, I have set the comment depth at 10. I would imagine that this would be adequate for just about any conceivable discussion…
Comment by — July 27, 2009 @ 6:04 pm
3. Two concerns about this problem are: (1) Is the problem sufficiently well known; (2) As this problem is fairly closely related to polymath1 there is a “strategic” question if it is a good idea to have the next project in the same area, and if the same general area is chosen how this problem is compared with others.
An advantage of the problem is that the answer is not known, it can go both ways, so it has a different nature from polymath1, polymath2 and the minipolymath.
If there are a variety of promising avenues it may be useful to write them down (even if some are fairly obvious to experts) and this can be useful to a researcher working on this problem or even on other problems in this area, regardless of whether the problem is chosen.
Comment by — July 27, 2009 @ 9:33 pm
4. The following very nice related problem was posted and solved at the MathLinks forum a while back (see http://www.mathlinks.ro/viewtopic.php?t=5294).
“On the plane with lattice points, there is a frog. First it is on the point (0,0). At each second, if it is on point (x,y), it goes to point (x+1,y) or (x,y+1). Prove that for each n there are n collinear points on the frog’s path.”
The general Boshernitzan question for d=2, if true, would solve the above problem directly. It might be a good idea to study the ‘frog’ problem first.
Comment by Raghu — July 28, 2009 @ 6:15 pm
5. I asked a variation of this question at the Western Number Theory Conference in 2001, see http://www.math.colostate.edu/~achter/wntc/problems/problems2001.pdf , problem 000:12. Tom Brown had worked on this and related questions, and pointed out that the answer is known to be “no”. In particular (quoting the pdf file referenced above; I haven’t read the article itself):
F. M. Dekking, Strongly nonrepetitive sequences and progression-free sets, JCT-A 27 (1979) 181–185, MR 81b:05027.
Dekking shows that there is an infinite sequence of plane lattice points where each gap is (0, 1) or (1, 0) such that no 5 points are in AP.
Comment by Kevin O'Bryant — July 29, 2009 @ 2:33 am
• It looks to me like Dekking solves the problem (in the negative), doesn’t he? The way he phrases it is that he builds infinite words, in an alphabet of r letters, so that no n consecutive blocks are permutations of each other. If we turn these letters into steps in the unit directions in Z^r, then this gives paths that avoid n+1 term arithmetic progressions. Dekking gives counter-examples for (r,n) = (4,2) and (3,3). He states that (25,2) was done by Evdokimov http://www.ams.org/mathscinet-getitem?mr=234842 and (2,5) by Justin http://www.ams.org/mathscinet-getitem?mr=301119 .
Am I missing something?
Comment by — July 29, 2009 @ 12:53 pm
• Ah, I see, the problem wasn’t open for all (k,d).
In that case, let me make a suggestion: Terry says that the case (k,d) = (3,2) is open. Dekking has solved (3,4). Could we modify Dekking’s solution so that its projection to the plane remains a solution?
Comment by — July 29, 2009 @ 1:05 pm
• If the following is true:
Dekking shows that there is an infinite sequence of plane lattice points where each gap is (0, 1) or (1, 0) such that no 5 points are in AP.
Don’t we have a complete solution to the problem as posted?
If dimension is one we are done.
If we have two linearly independent possible steps we can use them in place of the gaps (0,1) and (1,0) in Dekking's proof and limit arithmetic progressions to length 5, which solves the problem as stated, since we are trying to block arbitrarily long arithmetic progressions and we can in fact block all those of length six.
Comment by kristalcantwell — July 29, 2009 @ 4:02 pm
6. In addition to Kevin, I had also asked a related sort of question in the past (to Olof Sisask), though I didn’t publish it anywhere. You can find the note I sent to Olof at
http://www.math.gatech.edu/~ecroot/dendrites.tex
Comment by Ernie Croot — July 29, 2009 @ 1:37 pm
• If it doesn’t turn out to have an unexpectedly easy solution, then that’s a very nice question!
Comment by — July 29, 2009 @ 2:15 pm
• Thanks! The third question in the note I feel is probably quite hard, but the second one can surely be worked out easily. I suppose one can think of the third problem as a type of “Turan-type analogue” to Michael B.’s problem.
On an unrelated matter, I am glad to hear about the paper of Dekking on paths without 5APs. I had come to that problem myself, and had asked two grad students here at Georgia Tech (my student Evan Borenstein and Michael Lacey’s student Bill McClain) about whether they could construct such paths (without 5APs), but we didn’t ever produce one, though we sort of knew how to do it.
Comment by Ernie Croot — July 29, 2009 @ 2:38 pm
• Can anyone post the Dekking paper or give a short summary of the construction? I couldn’t find the paper on the web and my (like most) library doesn’t have access to JCT volumes as far back as 1979.
Comment by Anonymous — July 29, 2009 @ 3:26 pm
• It was the third one that I was referring to. I like the way it seems to invite a curious mixture of topological arguments with more conventional density ones — indeed, so curious that it isn’t obvious at all how a proof could go if the answer was positive.
Comment by — July 29, 2009 @ 4:16 pm
7. I’ve set up a rudimentary wiki page for this problem at
http://michaelnielsen.org/polymath1/index.php?title=Boshernitzan’s_problem
It also incorporates some information sent to me by email by Michael Boshernitzan. As always, further contributions are welcome (currently, for instance, Croot’s problems are not on the wiki).
Comment by — August 3, 2009 @ 1:12 pm
“It is also worth trying to find a counterexample if C, d, k are large enough. Note that the continuous analogue of the problem is false: a convex curve in the plane, such as the parabola $\{ (x,x^2): x \in {\Bbb R}\}$, contains no arithmetic progressions, but is rectifiable. However it is not obvious how to discretise this example.”
If the problem is changed so integers are replaced by real numbers then it is still false, as there are only a finite number of points forming lines and an uncountable number of points that are real numbers distance C from the original point.
Comment by kristalcantwell — August 3, 2009 @ 7:15 pm
9. In all of the negative results given by Dekking, the counterexamples have step sizes that are not only bounded, but are actually all equal to one. However, in the case (d,k)=(2,3), one can verify by hand that if a counterexample exists, it must have step sizes greater than one.
For the two remaining cases, I wrote a little algorithm to find long paths which avoid AP’s, assuming we only allow a step size of one. It found paths of length > 80 for both (d,k)=(2,4) and (d,k)=(3,3) in a matter of minutes. This seems to suggest that it is possible to construct a counterexample in these cases. Unfortunately, the constructed paths don’t appear to follow any obvious pattern.
Comment by Kevin V. — August 4, 2009 @ 7:27 am
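For readers who want to reproduce a search of this kind, here is a simplified, hedged sketch (my own reimplementation, not the algorithm referred to in the comment above): a depth-first search over unit steps in $\mathbb{Z}^2$, backtracking whenever a step would revisit a point or complete a $k$-term progression in the set of visited points.

```python
def has_kap_with(points, p, k):
    """Does adding p to `points` create a k-term AP containing p?
    Any such AP has a member q adjacent to p in the progression, with
    difference r = p - q, so scanning pairs (p, q) suffices."""
    pts = set(points)
    for q in pts:
        r = (p[0] - q[0], p[1] - q[1])
        # Slide a length-k window of difference r over positions including p.
        for start in range(-(k - 1), 1):
            window = [(p[0] + (start + j) * r[0], p[1] + (start + j) * r[1])
                      for j in range(k)]
            if all(w == p or w in pts for w in window):
                return True
    return False

def search(target, k=4, steps=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """DFS for a simple unit-step path of `target` points in Z^2 with no k-AP."""
    path = [(0, 0)]
    def extend():
        if len(path) == target:
            return True
        x, y = path[-1]
        for dx, dy in steps:
            p = (x + dx, y + dy)
            if p not in path and not has_kap_with(path, p, k):
                path.append(p)
                if extend():
                    return True
                path.pop()
        return False
    return path if extend() else None
```

For small targets (a few dozen points) this naive version finishes quickly; longer paths, like those mentioned in the comment, would want an incremental AP check and a visited-set rather than the linear scans used here.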
10. Another question of this type: Let $x_i$ be lattice points in $d$ dimensions, and suppose that $\{x_{i+1}-x_i \}$ is a finite set. Must the set $\{x_i\}$ contain arbitrarily large symmetric subsets?
The set $S$ is symmetric if there is a $c$ (not necessarily in $S$) such that $S=c-S$. For example, arithmetic progressions are symmetric.
Comment by Kevin O'Bryant — August 9, 2009 @ 8:28 am
• Oops, this has been solved, too:
MR1881964 (2003b:05153)
Banakh, T. O.(UKR-LVV-MM); Kmit, I. Ya.(UKR-LST-NMP); Verbitsky, O. V.(UKR-LVV-MM)
On asymmetric colorings of integer grids. (English summary)
Ars Combin. 62 (2002), 257–271.
Comment by Kevin O'Bryant — August 17, 2009 @ 6:06 am
http://mathematica.stackexchange.com/questions/tagged/combinatorics+partitions
# Tagged Questions
### Partition a set into $k$ non-empty subsets
The Stirling number of the second kind is the number of ways to partition a set of $n$ objects into $k$ non-empty subsets. In Mathematica, this is implemented as ...
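Whatever system one uses, the count itself follows the standard recurrence $S(n,k)=S(n-1,k-1)+k\,S(n-1,k)$: the $n$th element either forms a block of its own or joins one of the $k$ existing blocks. A minimal Python sketch (the function name is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of ways to partition an n-set into k non-empty subsets."""
    if n == k:
        return 1          # includes the convention S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    # nth element: its own block, or inserted into one of the k blocks.
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

print(stirling2(4, 2))  # 7
```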