http://gilkalai.wordpress.com/2012/11/12/karim-adiprasito-flag-simplicial-complexes-and-the-non-revisiting-path-conjecture/?like=1&source=post_flair&_wpnonce=8149a33672
Gil Kalai's blog

## Karim Adiprasito: Flag simplicial complexes and the non-revisiting path conjecture

Posted on November 12, 2012. This post is authored by Karim Adiprasito.

The past months have seen some exciting progress on diameter bounds for polytopes and polytopal complexes, both in the negative and in the positive direction. Jesus de Loera and Steve Klee described simplicial polytopes which are not weakly vertex decomposable, and the existence of non weakly k-vertex decomposable polytopes for k up to about $\sqrt{d}$ was proved by Hähnle, Klee, and Pilaud in the paper Obstructions to weak decomposability for simplicial polytopes.

In this post I want to outline a generalization of a beautiful result of Billera and Provan in support of the Hirsch conjecture. I will consider the simplicial version of the Hirsch conjecture, dual to the classic formulation of the Hirsch conjecture. Furthermore, I will consider the Hirsch conjecture, and the non-revisiting path conjecture, for general simplicial complexes, as opposed to the classical formulation for polytopes.

Theorem [Billera & Provan '79]: The barycentric subdivision of a shellable simplicial complex satisfies the Hirsch conjecture.

The barycentric subdivision of a shellable complex is vertex decomposable. The Hirsch diameter bound for vertex decomposable complexes, in turn, can be proven easily by induction. This is particularly interesting since polytopes, the objects for which the Hirsch conjecture was originally formulated, are shellable. So while in general polytopes do not satisfy the Hirsch conjecture, their barycentric subdivisions always do! That was great news!

Shellability is a strong combinatorial property that enables us to decompose a complex nicely, so it does not come as a surprise that it can be used to give some diameter bounds on complexes. Surprisingly, however, shellability is not needed at all! And neither is the barycentric subdivision!

A simplicial complex Σ is called flag if it is the clique complex of its 1-skeleton. It is called normal if it is pure and if, for every face F of Σ of codimension two or more, Lk(F,Σ) is connected.

Theorem (Adiprasito and Benedetti): Any flag and normal simplicial complex Σ satisfies the non-revisiting path conjecture and, in particular, it satisfies the Hirsch conjecture.

This generalizes the Billera–Provan result in three ways:

– The barycentric subdivision of a simplicial complex is flag, but not all flag complexes are obtained by barycentric subdivisions.

– Shellability imposes strong topological and combinatorial restrictions on a complex: a shellable complex is always homotopy equivalent to a wedge of spheres of the same dimension, and even if a pure complex is topologically nice (if, for example, it is a PL ball) it may not be shellable, as classic examples of Goodrick, Lickorish and Rudin show. Being normal still poses a restriction, but it admits a far wider class of complexes. For example, every triangulation of a (connected) manifold is normal, and so are all homology manifolds.

– Instead of proving the Hirsch conjecture, we actually obtain the stronger conclusion that the complex satisfies the non-revisiting path conjecture, which for a given complex implies the Hirsch conjecture.

A geometric proof of our theorem appeared in a recent paper "Metric geometry and collapsibility" with Bruno Benedetti. I will give here a short combinatorial proof.
### Construction of a combinatorial segment

Preliminaries: Lk(F,Σ) shall denote the link of a face F of Σ, and St(F,Σ) shall denote the star of F in Σ. Let $d_\Sigma(x,y)$ denote the distance between two vertices in the 1-skeleton of Σ, and let $d_\Sigma(S,T)$ denote the distance between two vertex sets S, T in Σ. Let $p_\Sigma(S,T)$ denote the pairs of points in S, T that realize the distance $d_\Sigma(S,T)$. Let $p_\Sigma(x,T)$ denote the set of vertices of T realizing the distance $d_\Sigma(\{x\},T)$, and let $g_\Sigma(x,T)$ denote the set of vertices y in Lk(x,Σ) with the property that $d_\Sigma(\{y\},T)+1=d_\Sigma(\{x\},T)$. A vertex path shall mean a path in the 1-skeleton of Σ, and facet path is short for facet-ridge path.

Part 1: From a facet X to a vertex set S. We construct a facet path Γ from a facet X of Σ to a subset S of the vertex set of Σ, i.e. a facet path from X to a facet intersecting S, with the property that S is intersected by the path Γ only in the last facet of the path.

If Σ is 1-dimensional, choose a shortest vertex path realizing the distance $d_\Sigma(X,S)$. The edges in that path, including X, give the desired facet path. If Σ is of a dimension d larger than 1, set $X_0:=X$, and proceed as follows:

1. If $X_0$ intersects S, stop the algorithm. If not, proceed to step 2.

2. Let $x_0$ be any vertex of $X_0$ that minimizes the distance to S. Set $S_0=p_\Sigma(x_0,S)$. Using the construction for dimension d-1, we can construct a facet path in $\mathrm{Lk}(x_0,\Sigma)$ from the facet $\mathrm{Lk}(x_0,X_0)$ to the vertex set $g_\Sigma(x_0,S_0)$. By considering the join of the elements of that path with $x_0$, we obtain a facet path from $X_0$ to the vertex set $g_\Sigma(x_0,S_0)$. Call the last facet of the path $X_1$, and the vertex of $g_\Sigma(x_0,S_0)$ it intersects $x_1$.

Repeat the procedure with $x_{i+1}$ instead of $x_i$, $X_{i+1}$ instead of $X_i$, and $S_{i+1}=p_\Sigma(x_{i+1},S)$ instead of $S_i$. The process stops once the facet path constructed intersects S.

Part 2: From a facet X to another facet Y. Using Part 1, construct a facet path from X to a vertex z of the vertex set of Y, and let Z denote the last facet of the path. If Σ is of dimension 1, complete the path to a facet path from X to Y by adding the facet Y to the path. If Σ is of dimension d greater than 1, apply the (d-1)-dimensional construction to construct a facet path in Lk(z,Σ) from Lk(z,Z) to Lk(z,Y), and lift this to a facet path in Σ by joining the elements of the path with z. This finishes the construction. We call the facet paths constructed combinatorial segments.

### The combinatorial segment is non-revisiting.

We start off with some simple observations and notions for combinatorial segments:

1. A combinatorial segment Γ comes with a path $(x_0, x_1, \dots , x_m)$ (see Part 1 of the construction). This is a shortest path in the 1-skeleton, realizing the distance $m = d_\Sigma(X,S)$, called the thread t of the combinatorial segment Γ.

2. Every facet of the combinatorial segment Γ is associated to a vertex of the thread as follows: F intersects the thread t, and there is a unique i such that F contains $x_i$ in t, but not $x_{i+1}$. Call $x_i$ the pearl of F in t.

We consider a combinatorial segment and its thread with the natural order from X to S resp. from X to Y.

Lemma S: If F is a facet of Γ, where F has pearl $x_i$ in t, and v is a vertex of Γ such that
F and $x_{i+1}$ lie in St(v,Σ), then the first facet G of Γ whose pearl is $x_{i+1}$ is a facet of St(v,Σ) as well, and the part $\Gamma_{FG}$ of the combinatorial segment from F to G lies in St(v,Σ).

Proof: The lemma is clear if v is in t (i.e. v coincides with $x_i$ or $x_{i-1}$). To see the case v not in t, we can use induction on the dimension of Σ. For 1-dimensional complexes, this is again clear. If Σ is of dimension d larger than 1, consider the (d-1)-complex $\Sigma'=\mathrm{Lk}(x_i,\Sigma)$. $F'=\mathrm{Lk}(x_i,F)$ is a facet of the combinatorial segment $\Gamma' =\mathrm{Lk} (x_i,\Gamma)$. Since the complex $\mathrm{St}(v,\Sigma)$ contains $x_{i+1}$ and since Σ is flag, we obtain that St(v,Σ') contains $x_{i+1}$. Furthermore, F' is clearly contained in St(v,Σ'). Thus, by the induction hypothesis, the portion $\Gamma'_{F'G'}$ of $\Gamma'=\mathrm{Lk}(x_i,\Gamma)$ from F' to the first facet G' of Γ' containing $x_{i+1}$ is contained in St(v,Σ'). Since the combinatorial segment $\Gamma_{FG}$ in the relevant part from $F=x_i\ast F'$ to $G=x_i\ast G'$ is obtained from $\Gamma'_{F'G'}$ by join with $x_i$ (i.e. $\Gamma_{FG}=x_i\ast \Gamma'_{F'G'}$), we have the desired statement. This finishes the proof of the Lemma.

This suffices to prove that a combinatorial segment Γ must be non-revisiting:

Proof of the Theorem: Consider a combinatorial segment Γ that connects a facet X with a facet Y of Σ. Let A, B be any two facets of Γ, with pearls $x_i,\, x_j$ in t respectively, that both lie in the star of a vertex v in Σ. Then the part $\Gamma_{AB}$ of Γ from A to B (B coming, w.l.o.g., after A in Γ) lies entirely in the star St(v,Σ) of v. To see this, there are two cases to consider:

If i=j: this case follows directly from Lemma S, since B is somewhere between A and the first facet G of Γ to be associated with $x_{i+1}$, so $\Gamma_{AB}\subset\Gamma_{AG}\subset \mathrm{St}(v,\Sigma)$, as desired.

If i<j: in this case, we have i+1=j. Now, St(v,Σ) includes $x_j=x_{i+1}$ since it includes B, and if C is the first facet of Γ associated to $x_{i+1}$, then $\Gamma_{AC}$ lies in St(v,Σ) by Lemma S, and, since $\Gamma_{CB}$ lies in St(v,Σ) by the argument for i=j, we have that $\Gamma_{AB}=\Gamma_{AC}\cup \Gamma_{CB}\subset \mathrm{St}(v,\Sigma)$, as desired.

Thus, for $v \in \Sigma$ and $A, B \in \Gamma \cap \mathrm{St}(v,\Sigma)$, we have $\Gamma_{AB}\subset \Gamma \cap \mathrm{St}(v,\Sigma)$, finishing the proof that Γ is non-revisiting.
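As an aside on the construction in Part 1: its 1-dimensional base case is just a shortest-path computation in a graph, which can be done by breadth-first search. A minimal sketch in Python; the adjacency structure and function name are illustrative scaffolding, not notation from the post:

```python
from collections import deque

def shortest_vertex_path(adj, sources, targets):
    # Multi-source BFS from the vertices of the facet X (`sources`) to the
    # vertex set S (`targets`); the edges along the returned path, together
    # with X, give the facet path of the 1-dimensional construction.
    targets = set(targets)
    seen = set(sources)
    queue = deque((v, [v]) for v in sources)
    while queue:
        v, path = queue.popleft()
        if v in targets:
            return path  # realizes d_Sigma(X, S)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append((w, path + [w]))
    return None  # S not reachable from X
```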
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 68, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8382309079170227, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4132/markowitz-mean-variance-optimization-as-error-maximization/4136
# Markowitz mean-variance optimization as "error maximization"

I hear it said a lot that standard MV optimization "maximizes errors". But I can't find a good explanation for what exactly they mean by this "maximization" of estimation error. I understand that if you simulate $500$ matrices of returns $T-t$ months into the future from $t$ (now) to $T$ (future), and you do MV optimization on each matrix at $T$ to arrive at $500$ frontiers, then these will differ wildly from the MV optimization at $t$. (Figure 1 here). But what's this saying? -

@user2921: You can accept one of the answers if you are satisfied by it :-) – vonjd Jan 28 at 16:18

## 3 Answers

I think the original reference to mean-variance portfolios being "error maximizing portfolios" is:

Michaud, R. (1989). "The Markowitz Optimization Enigma: Is Optimization Optimal?" Financial Analysts Journal 45(1), 31–42.

The reason is that even small changes in the estimated means can result in huge changes in the whole portfolio structure. Have a look at this new piece from Andrew Ang, which explains this quite well ("4.1 Sensitivity to Inputs", p. 26-27): Mean-Variance Investing by Andrew Ang

EDIT: For a different perspective see this paper from Mark Kritzman: Are Optimizers Error Maximizers? Hype versus reality? From the abstract:

Small input errors to mean-variance optimizers often lead to large portfolio misallocations when assets are close substitutes for one another. In fact, when the assets are close substitutes, the return distribution of the presumed optimal portfolio is actually similar to the distribution of the truly optimal portfolio. Contrary to conventional wisdom, therefore, mean-variance optimizers usually turn out to be robust to small input errors when sensitivity is measured properly.

A free version can be found on pages 165-168: Here. -

Do you have any references that you could refer me to that deal with the reasons why re-sampling means that estimation error is less problematic? – user2921 Sep 17 '12 at 11:42

2 +1. There's also a 2006 paper by Sebastian Ceria and Robert Stubbs that illustrates this with an example. They are both at Axioma, so you can also find some more research there. – Quant Guy Sep 17 '12 at 12:48

1 I think of Michaud's resampling as an application of Bayesian techniques. Under the Black-Litterman/Entropy Pooling framework, if you take views on every asset and perform unconstrained optimization with equal confidence in the views, then the optimal portfolio is an average of the individual portfolios you would get if you had full confidence in them. You could come up with arbitrary views by sampling from $\mu_{r}\sim N\left(\mu,\frac{\Sigma}{T}\right)$ where $T$ is the number of observations. More observations, less estimation error. – John Sep 17 '12 at 21:14

One of the most salient empirical examples of "error maximization" is provided by Chopra and Ziemba (1993):

Chopra, Vijay K., and William T. Ziemba. 1993. "The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice." Journal of Portfolio Management, vol. 19, no. 2 (Winter): 6–11.

The authors compare the performance of mean-variance optimization using (a) historical data and traditional sample estimators against a portfolio formed with (b) perfect information of the future. Comparing the performance of (a) relative to the clairvoyant portfolio (b), the authors find:

1. Using historical returns to estimate the covariance matrix is sufficient.

2.
Using historical returns to estimate the mean return incurs a massive performance shortfall. Thus, using a shrinkage estimator, or simply setting all returns equal to a constant $\hat{\mu}_i = c$ $\forall i$ (equivalent to the minimum variance portfolio), is a superior alternative. -

Let $\mu$ and $\Sigma$ be the expected return vector and covariance matrix for a mean-variance optimization. For a standard, unconstrained, utility-based optimization, it can be shown that the optimal weights equal $$w=\frac{1}{\lambda}\Sigma^{-1}\mu$$ where $\lambda$ is an arbitrary risk aversion coefficient. In order to measure the sensitivity of the weights to the expected return, you can calculate $$\frac{\partial w}{\partial\mu}=\frac{1}{\lambda}\Sigma^{-1}$$ Because of the nature of the inverse of the covariance matrix, this formula suggests that even small changes in $\mu$ tend to lead to large changes in the portfolio weights. -

John: Could you please expound on "the nature of the inverse of the covariance matrix"? Thank you. – vonjd Sep 18 '12 at 11:52

1 Well, imagine that there is one element in the covariance matrix, so the inverse is one over the variance of the asset. If the standard deviation is 20%, the inverse of the variance is 25. This is why small changes in the means matter. – John Sep 18 '12 at 13:58
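A quick numerical illustration of the sensitivity formula above; a minimal sketch in Python/numpy with made-up inputs (two close-substitute assets with correlation 0.95, and an arbitrary risk-aversion coefficient):

```python
import numpy as np

# Two assets that are close substitutes: 20% vol each, correlation 0.95.
sigma, rho, lam = 0.20, 0.95, 4.0
Sigma = sigma**2 * np.array([[1.0, rho],
                             [rho, 1.0]])

def mv_weights(mu):
    # Unconstrained mean-variance solution w = (1/lambda) * Sigma^{-1} mu.
    return np.linalg.solve(Sigma, mu) / lam

print(mv_weights(np.array([0.08, 0.08])))  # ~ [ 0.26,  0.26]
print(mv_weights(np.array([0.09, 0.08])))  # ~ [ 0.90, -0.35]
```

A one-percentage-point bump in a single estimated mean flips the second asset from a long position to a short one: the "error maximization" effect in miniature.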
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8843872547149658, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/73130/if-a-sequence-converges-in-l2-and-we-compose-every-function-with-a-non-singular-f
## If a sequence converges in L2 and we compose every function with a non-singular function, does it still converge?

Let $F:(0,1)\rightarrow (0,1)$ be a non-singular function with respect to the Lebesgue measure $\mu$ (so $\mu\sim\mu \circ F$). Let $\lbrace f_n : n\in \mathbb{N}\rbrace\subset L^{2}([0,1])$ be a sequence of simple integrable functions and $f\in L^{2}([0,1])$ such that $f_n\rightarrow f$ in the 2-norm. Is it true that also $f_n\circ F\rightarrow f\circ F$? If not, what are the conditions on $F$ under which this implication holds? -

5 This question is at the homework level in a basic real analysis course. Did you try math.stackexchange.com? MO is devoted to research-level questions. – Bill Johnson Aug 18 2011 at 14:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183359742164612, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/78723/how-does-the-backward-forward-algorithm-work-if-there-is-no-end
# How does the backward/forward algorithm work if there is no end?

I'm using Jason Eisner's spreadsheet to understand HMMs better. There's a box at the top that has a transition matrix. I see the Cold day and Hot day options, but don't understand why there's a stop option there (it has a 10% chance of occurring). I think it's needed for the backwards part of the algorithm, but I'm not sure what I can replace it with if there is no end. I want to feed data to train the model with no specific end date. I am trying to train a model based on this question, and all options are equally likely at first, so I'm trying to take them and give them all the same percent chance. Can anyone help me understand this? -

Afaik you stop once your changes are smaller than some $\varepsilon$, assuming that this signifies convergence. – Raphael Nov 3 '11 at 22:03

I don't understand; I'm kind of learning as I go along. Can you explain it in layman's terms? – Lostsoul Nov 4 '11 at 19:40

Forward-backward iterates towards something like a maximum likelihood estimator (only a local maximum, though). If it converges, chances are you will keep getting infinitesimally small improvements not worth your computing time. Therefore, you stop computing when your probabilities have stabilised enough for your taste. – Raphael Nov 5 '11 at 13:29

Ahh, I understand. Can you post that as an answer so I can accept it? – Lostsoul Nov 5 '11 at 14:36

## 1 Answer

The forward-backward algorithm is used to train HMM probabilities from observations iteratively. It moves towards something like a maximum likelihood estimator (only a local maximum, though). If it converges, chances are you will keep getting infinitesimally small improvements not worth your computing time until changes fall below your floating point precision. Therefore, you stop computing when your probabilities have stabilised enough for your taste. -
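In code, the stopping rule described in the answer is just a tolerance check between iterations. A minimal sketch in Python; `em_step` is a hypothetical helper standing in for one full forward-backward (Baum-Welch) pass:

```python
def train_hmm(params, observations, eps=1e-6, max_iter=1000):
    # Iterate forward-backward passes until the log-likelihood improvement
    # falls below eps; no end-of-sequence state is needed for this rule.
    prev_ll = float("-inf")
    for _ in range(max_iter):
        params, ll = em_step(params, observations)  # hypothetical helper
        if ll - prev_ll < eps:
            break
        prev_ll = ll
    return params
```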
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273986220359802, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/28538/high-school-double-lens-optics-question
# High school double lens optics question

This is a first year high school homework question (in the Finnish high school), and I'm having serious trouble solving it. I apologize for possibly non-standard terms: I'm doing the translation from Finnish to English, and all of the words may not be how they would appear in an English physics book. This was originally posted to math SO. The following paragraph no longer applies after the move:

(I am aware that this is the Math and not the Physics stackexchange. I decided to ask this here, because I believe that my trouble lies mainly in the mathematical execution and not the physical understanding. Math also seems significantly more active, thus improving my chances of getting an answer. If that is required or strongly suggested, I will copy this to Physics and either remove it from here or leave it, whichever is desired.)

The question: A convex lens creates a real image from an object on the main axis, $85.0$ cm from the object. When a concave lens is placed between the object and the convex lens, $65.0$ cm from the object, the image moves to $140.0$ cm from the object. What is the focal length of the concave lens?

No other information is given. Knowledge of the equation $1/a + 1/b = 1/f$ is assumed, as well as the way multi-lens systems work (the image of a previous lens is the object of the next lens).

I have approached the problem by creating three lens equations:

$1/a_1 + 1/b_1 = 1/f_1$, where $a_1$ is the distance from the object to the convex lens, $b_1$ is the distance from the convex lens to the first image, and $a_1 + b_1 = 85$ cm

$1/a_2 + 1/b_2 = 1/f_2$, where $a_2 = 65.0$ cm is the distance from the object to the concave lens and $b_2$ is the distance from the concave lens to the virtual image produced

$1/a_3 + 1/b_3 = 1/f_3$, where $a_3$ is the distance from the virtual image to the convex lens, $b_3$ is the distance from the convex lens to the (second) real image, and $f_3 = f_1$, since it's the same lens

In addition, there are a few geometrical equations that can be extracted: $$\begin{align*} &a_1 + b_3 = 140 \text{ cm}\\ &a_2 - b_2 + a_3 + b_3 = 140 \text{ cm} \end{align*}$$ leading to: $$b_2 = a_2 + a_3 - a_1$$

I know that the key is to calculate $b_2$, but it's proving to be difficult. The farthest I have got by combining the $b_2$ equation above and $f_1 = f_3$ is $$b_2=\frac{a_3(140\text{ cm}-a_1)}{a_1(85\text{ cm}-a_1)}\cdot 85\text{ cm}-75\text{ cm}$$ However, there are still $2$ unknowns, which is $2$ too many. I have the feeling that I am not seeing some geometrical equation that would somehow allow me to calculate the values of $a_1$ and $a_3$. It is also possible that the wording allows the assumption that the second real image is equal in size to the first real image, but 1) I can't see that helping too much and 2) from the way it's worded in Finnish I wouldn't dare make that assumption in an exam, for example.

The correct answer, according to the book, is $f_2 = -27.3$ cm. Any hints about what to try are welcome, as well as a more complete solution. I have tried to solve this for at least two hours, checking every geometrical relation I can think of and just generally trying to manipulate the various equations in a way that would help me get forward. -

Please check that I correctly transcribed the equations. – Brian M. Scott May 17 '12 at 22:45

I'm going to migrate this question to the physics.SE site. There will be a link that appears below the question here that you can follow to the new location of your question.
If you need help associating an account on physics.SE, you can flag your question for moderator attention, and someone over there will help out. – Zev Chonoles May 18 '12 at 2:35

Thank you Brian, everything seems to be correct. – DohnJoe May 18 '12 at 7:20

A note about the expected difficulty level: this was classified as a "challenging" question, meaning that it should be solvable by a good student in a classroom or exam environment in about 45 minutes of hard work. – DohnJoe May 18 '12 at 7:43

Great example of how a homework question should be asked. The way your translated question is worded, one could also interpret it to mean that the convex lens is always at 65 cm in both cases. Unfortunately this also leads to 2 unknowns in 1 equation. – ptomato May 18 '12 at 8:19

## 2 Answers

I think we need to assume that the concave lens is placed up against the convex lens, both of which are $65$ cm from the object. Otherwise, there is not enough information given to solve the problem.

Hint: When lenses are against each other, their strengths (the reciprocals of their focal lengths) add. Thus, $$\frac{1}{65\text{ cm}}+\frac{1}{20\text{ cm}}=\frac{1}{f_{\text{convex}}}$$ $$\frac{1}{65\text{ cm}}+\frac{1}{75\text{ cm}}=\frac{1}{f_{\text{convex}}}+\frac{1}{f_{\text{concave}}}$$ -

Unfortunately, this does not yield the correct answer. I get $f_{\text{concave}} = 15.789$ cm – DohnJoe May 18 '12 at 7:41

Erm, my bad, it DOES yield the correct answer as soon as one remembers how the minus sign works... I am still baffled by this, however, because absolutely nothing in the wording of the text indicates this kind of an assumption. We don't usually use that kind of "indirect" assumption, such as "I have to assume this, otherwise this is unsolvable". – DohnJoe May 18 '12 at 8:03

@DohnJoe: The original question was simply stated poorly. Since the answers match, the assumption I made was evidently correct. However, if the problem were stated better, no such assumption would be necessary. – robjohn May 18 '12 at 17:49

You're right, there's not enough information to solve the problem. You can see this by considering one of the variables to be an unspecified but known value, and solving the equations in terms of that variable. For example, let the distance from the original object to the convex lens be $x$, and then you can solve for the focal length of the concave lens as a function of $x$. (I'll omit the actual solution so you can do it for practice.) When you do that, you can then plug in some sample values of $x$ and see what kind of results come out. In this case, you'll find that any value of $x$ between $65\text{ cm}$ and $75\text{ cm}$ yields a valid answer for the focal length of the concave lens, and the range of these values is $-27.27\text{ cm}$ to $0\text{ cm}$. With the information given, there is nothing that allows you to identify any one value in that range as the correct answer. -
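A quick arithmetic check of the two equations in the first answer; a small Python sketch, with distances in cm and the two lenses assumed to sit together 65 cm from the object:

```python
# Image distances measured from the lens pair at 65 cm from the object.
b1 = 85.0 - 65.0    # first image: 85 cm from the object
b2 = 140.0 - 65.0   # second image: 140 cm from the object

P_convex = 1/65.0 + 1/b1   # 1/f of the convex lens alone
P_pair   = 1/65.0 + 1/b2   # combined power with the concave lens added
f_concave = 1.0 / (P_pair - P_convex)
print(f_concave)           # -27.27..., matching the book's -27.3 cm
```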
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9682170152664185, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4262352
Physics Forums

## Phase relation between current and electromagnetic field generated

Dear Forumers,

I am having a bit of a problem understanding the phase relation between a current source and the generated electromagnetic field components. Assume a very small current element (a very small current running in the x direction, essentially an electric dipole) in a non-homogeneous lossy periodic medium. The only knowledge about the medium is the mu and epsilon profile plus the eigenvalues of the medium. The structure is like a waveguide, so most of the radiation is expected to couple to one of the eigenvalues. What is the phase relation between the generated electromagnetic field (dominant eigenvalue) and the driving current (assuming phasor fields)? It must be somehow related to the tangential and normal electric and magnetic fields of the dominant eigenvalue. Could it be independent of the medium, simply 1 or -1?

Best

From Maxwell's equations, for a time-harmonic field, it appears that the electric field may lead the current by 90 degrees, since the time derivative of the electric field equals the curl of the magnetic field minus the source current. But this is only at the source point; due to the retardation of the fields, there is another phase shift that arises as the fields propagate. Let us take the z component of the electric field from a z-directed point source current. The field for a unity current source is: $$E_z = \frac{i\omega\mu}{4\pi k^2} \left[ ik - \frac{1+k^2z^2}{r^2} - \frac{3ikz^2}{r^2} + \frac{3z^2}{r^3} \right] \frac{e^{ikr}}{r^2}$$ So we find that different parts of the field are 90 degrees and 180 degrees out of phase with the current even before we take into account the spatial phase shift. As you go away from the source, though, only the first term remains and you have a field that is 180 degrees out of phase plus a spatial phase shift. So in the situation where you have a waveguide, you also have to contend with the superposition of the reflections, which would make it even more difficult. My guess is you will have a hard time determining a rule for this.
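To make the far-field statement concrete: per the answer above, the distant field is 180 degrees out of phase with the current, plus the retardation phase $kr$. A minimal sketch in Python; the function and sample numbers are illustrative only:

```python
import numpy as np

def far_field_phase(r, wavelength):
    # Phase of E_z relative to the driving current in the far field:
    # 180 degrees from the leading i*k term (i * i = -1), plus the
    # spatial retardation k*r from the exp(i*k*r) factor.
    k = 2 * np.pi / wavelength
    return (np.pi + k * r) % (2 * np.pi)

# e.g. 1 m from the source at a 30 cm wavelength (~1 GHz):
print(np.degrees(far_field_phase(1.0, 0.3)))
```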
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.894616961479187, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/229058/elementary-question-on-modular-arithmetic/229066
# Elementary question on modular arithmetic

I know this is a very simple and dumb question; I just cannot come to understand it. The problem is: why and how does this happen in mathematics? $$-5 \pmod 4 = 3$$ I know how to get this for positive numbers, but how does it work for negative ones? I need an explanation of what happens in the background when solving this. Is it the distance from $0$ to $4$? -

1 By the definition of modular arithmetic, $4$ must divide $-5 - x$. Any $x$ for which $4\mid(-5 - x)$ is a solution. Since we usually want to end up with the least positive choice for $x$, if we let $x = 3$, we have $4\mid(-5 - 3)$, i.e., $4\mid -8$. – amWhy Nov 4 '12 at 18:52

Add or subtract multiples of $4$ until you get back into the "ballpark". – littleO Nov 4 '12 at 18:53

## 3 Answers

Since you seem to be using "mod" as a binary operator rather than as the context for a congruence relation, let's define "mod" precisely: assuming $b > 0$, $$a \bmod b = a - b\lfloor a/b \rfloor$$ That is, $a \bmod b$ denotes the distance to $a$ from the largest multiple of $b$ that is not greater than $a$. If you imagine the "number line" with the multiples of $b$ all marked out, then $a \bmod b$ is the distance to the point $a$ from the closest marked point on its left.

In your particular case, of $-5 \bmod 4$, note that the list of all integer multiples of $4$ is: $$\dots, -20, -16, -12, -8, -4, 0, 4, 8, 12, 16, 20, 24, \dots$$ In this list, the largest number (multiple of $4$) that is to the left of $-5$ is $-8$. And the distance from $-8$ to $-5$ is $3$; that is why we say that $-5 \bmod 4 = 3$. (This is exactly the same way we would calculate $5 \bmod 4$: in the list, the largest number that is to the left of $5$ is $4$, and the distance from $4$ to $5$ is $1$, so we say $5 \bmod 4 = 1$.) -

Thanks ShreevatsaR, the best explanation! Thanks a lot. – doniyor Nov 4 '12 at 19:03

$n \bmod m$ is the distance between $n$ and $k$, the largest multiple of $m$ such that $k \le n$. In this case, the largest multiple of $4$ which is not larger than $-5$ is $-8$. Therefore $$-5 \bmod 4 = |-5 - (-8)| = 3$$ -

Great @pedrosorio, thanks a lot. – doniyor Nov 4 '12 at 19:08

$$c \equiv b \pmod{a}$$ is shorthand notation to denote $a \mid (c-b)$, or equivalently that $b$ is the remainder when $c$ is divided by $a$. Typically, for convenience, people take $b \in \{0,1,2,\ldots,a-1\}$. In your case, you want to evaluate $-5 \pmod{4}$. By that I assume you want to find $b \in \{0,1,2,3\}$ such that $-5 \equiv b \pmod{4}$, i.e. we want to find $b \in \{0,1,2,3\}$ such that $4 \mid (-5-b)$. Since $4 \mid (-5-b)$, we have that $4 \mid (5 + b)$ and hence $4 \mid (1+b)$. Hence, we get that $b=3$. Therefore, $$-5 \equiv 3 \pmod{4}$$

To interpret $b \pmod{a}$ as a distance, in the sense you mean, it is the distance of $b$ from the largest multiple of $a$ no larger than $b$, i.e. the largest multiple of $a$ that falls to the left of $b$, or $b$ itself, on the real number line. I have tried to illustrate this with a couple of diagrams below. The first picture indicates $b\pmod{a}$ when $b$ falls between $4a$ and $5a$. The second picture indicates $b\pmod{a}$ when $b$ falls between $-3a$ and $-2a$. -
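As a quick cross-check of the definition $a \bmod b = a - b\lfloor a/b \rfloor$: Python's `%` operator happens to use exactly this floored convention for a positive modulus, so it agrees with the answers above. A small sketch:

```python
def mod(a, b):
    # a mod b = a - b*floor(a/b); for integers, // is floor division.
    return a - b * (a // b)

print(mod(-5, 4), -5 % 4)  # 3 3
print(mod(5, 4), 5 % 4)    # 1 1
```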
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 65, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9497672915458679, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/9238/find-the-origin-and-the-destination-of-a-trip-from-a-serie-of-tickets
# Find the origin and the destination of a trip from a series of tickets

I was asked to design an algorithm that solves the following problem: Consider a trip from city A to city B, made of several train rides through other cities in between. With access to the unordered list of train tickets, from which you can read the cities of departure and arrival of each ride, find A and B.

Unfortunately, I was unable to give an efficient algorithm when I needed to (in pseudo code), but once I got home, I came up with this answer (in JavaScript).

````
var tickets = [
  {from: 'Paris',  to: 'Berlin'},
  {from: 'London', to: 'Paris'},
  {from: 'Zurich', to: 'Milan'},
  {from: 'Berlin', to: 'Zurich'}
];

function getTrip(tickets) {
  var ticket = tickets.shift();
  var trip = {from: ticket.from, to: ticket.to};
  while (tickets.length > 0) {
    ticket = tickets.shift();
    if (ticket.from == trip.to) {
      // Ticket extends the known trip at its end.
      trip.to = ticket.to;
    } else if (ticket.to == trip.from) {
      // Ticket extends the known trip at its start.
      trip.from = ticket.from;
    } else {
      // Ticket doesn't connect yet; retry it later.
      tickets.push(ticket);
    }
  }
  return trip;
}

var trip = getTrip(tickets);
console.log('The trip was from %s to %s', trip.from, trip.to);
````

While this might look like a very simple problem, I am still curious to see if there is a more efficient solution (for both time and space), or simply considerations I completely overlooked. This does not need to be in JavaScript (especially if another language gives greater control over some aspects of the problem). -

Look at the tickets' dates and times. (Ok, just joking.) – petervaz Jan 28 at 15:26

You should consider giving feedback on the answers. – Baboon Jan 28 at 23:25

## 5 Answers

Several answers have already been given, but I still think this is somewhat cuter. Construct a bidirectional map $m: \mathcal{C} \to \mathcal{C}$ from cities to cities. This map means that if $m(x) = y$, then there is a path of tickets such that you start in $x$, take an arbitrarily long path (using only your tickets), and end up in $y$. The map encodes connected components with entry and exit points. Now, the algorithm is as follows: On ticket $(u,v)$,

• if $m(v)$ exists, let $m(u) = m(v)$ and delete $m(v)$,

• if $m^{-1}(u)$ exists, say $m^{-1}(u) = w$, let $m(w) = v$, and

• else add $m(u) = v$.

What you end up with should be one entry in $m$, $m(s) = t$, where $s$ is the starting city and $t$ is the destination. This is under the assumption that the tickets you have form a path. -

Using a hash table, record for each city mentioned how many times it was mentioned as a source and how many times as a destination. The source of the trip is the city that has appeared as a source an odd number of times, and similarly for the destination of the trip. That's linear time and space (with high probability).

Edit: As Jan comments, the statistic that should be odd is the total number of times that the city has appeared. If a city appeared an odd number of times, then it is the source if it appeared as a source more than as a destination, and vice versa for the destination. -

It's not a linear time algorithm. (Especially in space.) – Saeed Amiri Jan 28 at 8:59

@SaeedAmiri: Why is it not linear time or space (on average, when using hash tables)? The size of the task is the number of tickets, each ticket is only processed once, there can't be more sources/destinations than tickets, so the number of entries is also at most linear. – Jan Hudec Jan 28 at 9:09

Actually, for an oriented graph, you need to compare the number of times the city appears as a source to the number of times it appears as a destination.
– Jan Hudec Jan 28 at 9:10

If A != B (which is what I reasoned with), A could be the city that appeared $0$ times as a destination, and B the one that appeared $0$ times as a departure, right? – Antoine Lassauzay Jan 29 at 5:26

Anyhow, the interviewer who asked me this question said there was a better approach for the time when I proposed a similar solution, because it involved two loops: one to create the map, and one to find A and B in the map. I am not sure how to formally measure whether your solution is superior to the solutions given by Pål GD or me. – Antoine Lassauzay Jan 29 at 5:30

Yuval's answer is on the right track, but since the graph is oriented, you need to compare the number of times a city appears as an arrival with the number of times it appears as a departure. The overall departure is the city that appears more times as a departure than as an arrival (usually once as a departure and never as an arrival), and the overall arrival is the city that appears more times as an arrival than as a departure (again, usually once as an arrival and never as a departure). -

The train tickets form a directed walk. The departure place will have one more leaving arc than entering arcs, and the destination one more entering arc than leaving ones. -

You don't have to reorder everything: A is the ticket that has no matching departure; B is the ticket that has no matching destination. When you're looking for efficiency, abuse the specifics of the system. -

With a linear or quadratic algorithm? – Pål GD Jan 28 at 18:39

@PålGD I'm saying you don't need an algorithm. – Baboon Jan 28 at 18:42

Thanks for your answer. This was my first idea conceptually, but I don't understand how you could find A and B without an algorithm? – Antoine Lassauzay Jan 29 at 5:08

It depends what you consider an algorithm. Going through a collection once to find an element matching a boolean condition is trivial. – Baboon Jan 29 at 9:36

@Baboon: It is an algorithm, albeit a trivial one. – vonbrand Jan 29 at 12:42
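The counting idea from the answers above fits in a few lines. A minimal sketch (in Python for brevity; it assumes the tickets form a single path with A different from B):

```python
from collections import Counter

def get_trip(tickets):
    # +1 each time a city is a departure, -1 each time it is a destination;
    # every intermediate city nets to 0, leaving A at +1 and B at -1.
    balance = Counter()
    for t in tickets:
        balance[t["from"]] += 1
        balance[t["to"]] -= 1
    origin = next(c for c, n in balance.items() if n == 1)
    destination = next(c for c, n in balance.items() if n == -1)
    return origin, destination

tickets = [
    {"from": "Paris", "to": "Berlin"},
    {"from": "London", "to": "Paris"},
    {"from": "Zurich", "to": "Milan"},
    {"from": "Berlin", "to": "Zurich"},
]
print(get_trip(tickets))  # ('London', 'Milan')
```

This makes a single pass over the tickets plus one pass over the counter, so it is linear in time and space, unlike the retry loop in the question, which can degrade to quadratic time.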
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516053795814514, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Focal_length
# Focal length

[Figure: The focal point F and focal length f of a positive (convex) lens, a negative (concave) lens, a concave mirror, and a convex mirror.]

"Rear focal distance" redirects here. For lens-to-film distance in a camera, see Flange focal distance.

The focal length of an optical system is a measure of how strongly the system converges or diverges light. For an optical system in air, it is the distance over which initially collimated rays are brought to a focus. A system with a shorter focal length has greater optical power than one with a long focal length; that is, it bends the rays more strongly, bringing them to a focus in a shorter distance. In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with a wider angle of view. On the other hand, in applications such as microscopy, in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection.

## Thin lens approximation

For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens. For a converging lens (for example a convex lens), the focal length is positive, and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative, and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens.

## General optical systems

[Figure: Thick lens diagram]

For a thick lens (one which has a non-negligible thickness), or an imaging system consisting of several lenses and/or mirrors (e.g., a photographic lens or a telescope), the focal length is often called the effective focal length (EFL), to distinguish it from other commonly used parameters:

• Front focal length (FFL) or front focal distance (FFD) is the distance from the front focal point of the system to the vertex of the first optical surface.[1][2]

• Back focal length (BFL) or back focal distance (BFD) is the distance from the vertex of the last optical surface of the system to the rear focal point.[1][2]

For an optical system in air, the effective focal length (f and f′) gives the distance from the front and rear principal planes (H and H′) to the corresponding focal points (F and F′). If the surrounding medium is not air, then the distance is multiplied by the refractive index of the medium (n is the refractive index of the substance from which the lens itself is made; n1 is the refractive index of any medium in front of the lens; n2 is that of any medium in back of it). Some authors call these distances the front/rear focal lengths, distinguishing them from the front/rear focal distances, defined above.[1]

In general, the focal length or EFL is the value that describes the ability of the optical system to focus light, and is the value used to calculate the magnification of the system. The other parameters are used in determining where an image will be formed for a given object position.
For the case of a lens of thickness d in air, and surfaces with radii of curvature R1 and R2, the effective focal length f is given by: $\frac{1}{f} = (n-1) \left[ \frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)d}{n R_1 R_2} \right],$ where n is the refractive index of the lens medium. The quantity 1/f is also known as the optical power of the lens. The corresponding front focal distance is: $\mbox{FFD} = f \left( 1 + \frac{ (n-1) d}{n R_2} \right),$ and the back focal distance: $\mbox{BFD} = f \left( 1 - \frac{ (n-1) d}{n R_1} \right).$ In the sign convention used here, the value of R1 will be positive if the first lens surface is convex, and negative if it is concave. The value of R2 is positive if the second surface is concave, and negative if convex. Note that sign conventions vary between different authors, which results in different forms of these equations depending on the convention used.

For a spherically curved mirror in air, the magnitude of the focal length is equal to the radius of curvature of the mirror divided by two. The focal length is positive for a concave mirror, and negative for a convex mirror. In the sign convention used in optical design, a concave mirror has negative radius of curvature, so $f = -{R \over 2}$, where $R$ is the radius of curvature of the mirror's surface. See Radius of curvature (optics) for more information on the sign convention for radius of curvature used here.

## In photography

[Figure: Photos taken with 28 mm, 50 mm, 70 mm, and 210 mm lenses: an example of how lens choice affects angle of view. The photos were taken by a 35 mm camera at a fixed distance from the subject.]

Camera lens focal lengths are usually specified in millimetres (mm), but some older lenses are marked in centimetres (cm) or inches. Focal length (f) and field of view (FOV) of a lens are inversely proportional. For a rectilinear lens, FOV = 2 arctan (x / (2 f)), where x is the diagonal of the film.

When a photographic lens is set to "infinity", its rear nodal point is separated from the sensor or film, at the focal plane, by the lens's focal length. Objects far away from the camera then produce sharp images on the sensor or film, which is also at the image plane.

[Figure: Images of black letters in a thin convex lens of focal length f are shown in red. Selected rays are shown for letters E, I and K in blue, green and orange, respectively. Note that E (at 2f) has an equal-size, real and inverted image; I (at f) has its image at infinity; and K (at f/2) has a double-size, virtual and upright image.]

To render closer objects in sharp focus, the lens must be adjusted to increase the distance between the rear nodal point and the film, to put the film at the image plane. The focal length ($f$), the distance from the front nodal point to the object to photograph ($S_1$), and the distance from the rear nodal point to the image plane ($S_2$) are then related by: $\frac{1}{S_1} + \frac{1}{S_2} = \frac{1}{f}$. As $S_1$ is decreased, $S_2$ must be increased. For example, consider a normal lens for a 35 mm camera with a focal length of $f=50 \text{ mm}$. To focus a distant object ($S_1\approx \infty$), the rear nodal point of the lens must be located a distance $S_2=50 \text{ mm}$ from the image plane. To focus an object 1 m away ($S_1=1000 \text{ mm}$), the lens must be moved 2.6 mm further away from the image plane, to $S_2=52.6 \text{ mm}$.
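A quick check of the worked example above, solving the thin-lens equation for $S_2$; a small Python sketch, with distances in mm:

```python
def image_distance(S1, f):
    # Thin-lens equation 1/S1 + 1/S2 = 1/f, solved for S2.
    return 1.0 / (1.0 / f - 1.0 / S1)

f = 50.0
print(image_distance(1e9, f))     # ~50.0: distant object, so S2 is about f
print(image_distance(1000.0, f))  # 52.63...: object 1 m away
```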
The focal length of a lens determines the magnification at which it images distant objects. It is equal to the distance between the image plane and a pinhole that images distant objects the same size as the lens in question. For rectilinear lenses (that is, with no image distortion), the imaging of distant objects is well modelled as a pinhole camera model.[3] This model leads to the simple geometric model that photographers use for computing the angle of view of a camera; in this case, the angle of view depends only on the ratio of focal length to film size. In general, the angle of view depends also on the distortion.[4]

A lens with a focal length about equal to the diagonal size of the film or sensor format is known as a normal lens; its angle of view is similar to the angle subtended by a large-enough print viewed at a typical viewing distance of the print diagonal, which therefore yields a normal perspective when viewing the print;[5] this angle of view is about 53 degrees diagonally. For full-frame 35 mm-format cameras, the diagonal is 43 mm and a typical "normal" lens has a 50 mm focal length. A lens with a focal length shorter than normal is often referred to as a wide-angle lens (typically 35 mm and less, for 35 mm-format cameras), while a lens significantly longer than normal may be referred to as a telephoto lens (typically 85 mm and more, for 35 mm-format cameras). Technically, long focal length lenses are only "telephoto" if the focal length is longer than the physical length of the lens, but the term is often used to describe any long focal length lens.

Due to the popularity of the 35 mm standard, camera–lens combinations are often described in terms of their 35 mm equivalent focal length, that is, the focal length of a lens that would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a 35 mm-equivalent focal length is particularly common with digital cameras, which often use sensors smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given angle of view, by a factor known as the crop factor.

## See also

• Depth of field
• f-number or focal ratio
• Dioptre
• Focus (optics)

## References

1. ^ a b c John E. Greivenkamp (2004). Field Guide to Geometrical Optics. SPIE Press. pp. 6–9. ISBN 978-0-8194-5294-8.
2. ^ a b Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. pp. 148–9. ISBN 0-201-11609-X.
3. Jeffrey Charles (2000). Practical Astrophotography. Springer. pp. 63–66. ISBN 978-1-85233-023-1.
4. Leslie Stroebel and Richard D. Zakia (1993). The Focal Encyclopedia of Photography (3rd ed.). Focal Press. p. 27. ISBN 978-0-240-51417-8.
5. Leslie D. Stroebel (1999). View Camera Technique. Focal Press. pp. 135–138. ISBN 978-0-240-80345-6.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8869529962539673, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/111981/apparent-contradiction-using-mayer-vietoris-for-sheaf-cohomology?answertab=votes
# Apparent contradiction using Mayer-Vietoris for sheaf cohomology

Trying to solve Exercise 2.7 b) of Chapter III of Hartshorne's Algebraic Geometry, I got stuck on an apparent contradiction. The exercise asks to prove that $H^1(S^1, \mathcal{R})=0$, where $S^1$ is the circle with its usual topology and $\mathcal{R}$ is the sheaf of continuous real-valued functions. I thought to apply the Mayer-Vietoris sequence for cohomology with supports to the closed subsets of $S^1$: $U:=\{(\cos\theta, \sin\theta): \theta \in [0,\pi]\},$ $V:=\{(\cos\theta, \sin\theta): \theta \in [\pi,2\pi]\}.$ Their intersection is the pair of points $\{(1,0),(-1,0)\}$.

We observe that $\mathcal{R}$ is a flasque sheaf (i.e. if $U\subseteq V$ are open sets then the restriction map $\mathcal{R}(V) \rightarrow \mathcal{R}(U)$ is a surjection); thanks to Exercise 2.3 c) of the same chapter, all the cohomology groups of degree greater than $0$ of a flasque sheaf are trivial. (This would solve the exercise by itself, but just keep thinking in this direction for a moment...) Recalling that $H^0_Y(X,\mathcal{F})=\Gamma_Y(X,\mathcal{F})$, we get the (short) exact sequence: $0 \rightarrow \Gamma_{U \cap V}(S^1, \mathcal{R}) \rightarrow \Gamma_U(S^1, \mathcal{R})\oplus\Gamma_V(S^1, \mathcal{R}) \rightarrow \Gamma(S^1, \mathcal{R}) \rightarrow 0.$

Taking a closer look at the groups involved, I noticed that $\Gamma_{U \cap V}(S^1, \mathcal{R})\simeq 0$, because the germ of a continuous function at a point which is zero outside of that point must be zero at the point as well, by continuity. For the other terms, $\Gamma_U(S^1, \mathcal{R}) \simeq \Gamma_V(S^1, \mathcal{R}) \simeq \mathcal{C}([0,1])_{\partial=0}$, the group of continuous functions on $[0,1]$ which are zero on the boundary. And $\Gamma(S^1, \mathcal{R}) \simeq \mathcal{C}([0,1])_{per}$, the group of continuous functions on $[0,1]$ such that $f(0)=f(1)$.

From the sequence above we deduce that $\mathcal{C}([0,1])_{per} \simeq \mathcal{C}([0,1])_{\partial=0}\oplus \mathcal{C}([0,1])_{\partial=0}$. But really I don't see it (I tried to do something with the Borsuk-Ulam theorem, but the matter is I don't think it is true). Can someone please explain where my error lies? Or, otherwise, how the groups above are isomorphic? -

## 1 Answer

It is not true that $\mathcal R$ is flasque: the function $1/(x-1)$ on the circle minus $(1,0)$ cannot be extended continuously to the circle. Moreover $H^1(S^1,\mathcal R)=H_{DR}^1(S^1,\mathbb R)=\mathbb R$, because De Rham cohomology coincides with derived functor cohomology on paracompact spaces. And this is what Hartshorne writes in a later printing: you seem to have an old version with a typo. -

Thank you for your help! Actually my edition of Hartshorne's (ISBN 0-387-90244-9) clearly asks to prove $H^1(S^1,\mathcal{R})=0$; it is probably a typo corrected in other editions. – Giovanni De Gaetano Feb 22 '12 at 11:40

2 Dear @Student: ah, I see. Sorry for having said you had misread the exercise. I have edited my post to remove that unfair assertion. – Georges Elencwajg Feb 22 '12 at 13:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451755285263062, "perplexity_flag": "head"}
http://mathoverflow.net/questions/113840/for-consecutive-primes-a-lt-b-lt-c-prove-that-ab-ge-c/113843
## For consecutive primes $a\lt b\lt c$, prove that $a+b\ge c$.

For consecutive primes $a\lt b\lt c$, prove that $a+b\ge c$. I cannot find a counter-example to this. Do we know if this inequality is true? Alternatively, is this some documented problem (solved or unsolved)?

-

7 Just to complement the responses below: The prime number theorem says that the $n$-th prime is asymptotically $n\log n$, whence your sum $a+b$ is asymptotically $2c$. So your inequality holds for large $c$ without any calculation; in fact $2.001 c>a+b>1.999 c$ for large $c$. – GH Nov 20 at 0:10

(Of course 2.001 can be replaced by 2 unconditionally.) – Charles Feb 10 at 4:13

## 3 Answers

Yes, this is true. In 1952, Nagura proved that for $n \geq 25$, there is always a prime between $n$ and $(6/5)n$. Thus, let $p_k$ be a prime at least 25. Then $p_k+p_{k+1} > 2p_k$. But by Nagura's result we have that $p_{k+2} \leq \frac{36}{25} p_k < 2p_k$. It is easy to verify the conjecture for small values of $p$.

-

Hard to believe that math as modern as 1952 is needed in order to prove such an elementary-sounding statement. The 1850 Bertrand–Chebyshev theorem almost, but not quite, does the job. – Ben Crowell Nov 19 at 21:58

1 @Ben: I think the statement quoted from 1952 is elementary and can be proved in much the same way as Bertrand-Chebyshev. – GH Nov 20 at 0:04

You can get away with only using work of Chebyshev for large enough $a$: Let $f(n) = \sum \log p$ over all primes $p$ up to $n$ (usually denoted $\theta(n)$). If $a+b<c$ then $c>2a$ and so there's at most one prime between $a+1$ and $2a$, hence $f(2a)-f(a) < \log(2a)$. He showed that $f(a) < a\log 4$, and he proved a bound $\pi(N) > 0.9N/\log N$ for $N$ large, so we should have $f(a) \geq 0.7a$ for $a$ large. Then for such $a$ we have $\log(2a) > f(2a)-f(a) \geq (1.4-\log 4)a > 0.0137a$, which is impossible if $a$ is large in the above sense and at least 505. – Steven Sivek Nov 20 at 0:42

@Steven: Thanks for this argument. I believe Chebyshev proved $f(a)<a \log 4$ with a better constant than $\log 4$. The factor $\log 4$ comes from Erdős's elegant proof based on $\prod_{n<p<2n}p\leq\binom{2n}{n}$. Apologies in advance if I am wrong here. – GH Nov 20 at 1:52

Ramanujan (1919), see Eq. (18): $$\pi(x) - \pi(x/2) \ge 2 \quad \text{ for } x\ge 11.$$ Whence, with $x= 2p_k$ for $p_k \ge 7$, $$p_{k+2} \le 2 p_k \lt p_k+p_{k+1},$$ and $5\le 2+3$, $7\le 3+5$, $11 \le 5+7$.

-

As a matter of fact, P. L. Chebyshev knew already that for any $\epsilon > \frac{1}{5}$, there exists an $n(\epsilon) \in \mathbb{N}$ such that for all $n\geq n(\epsilon),$ $\pi((1+\epsilon)n)-\pi(n)>0.$ In [2], one can find a short report on the problem of determining the smallest $n(\epsilon)$ explicitly once $\epsilon$ has been fixed.

References

[1] P. L. Chebyshev. Mémoire sur les nombres premiers. Mémoires de l'Acad. Imp. Sci. de St. Pétersbourg, VII, 1850.

[2] H. Harborth & A. Kemnitz. Calculations for Bertrand's Postulate. Mathematics Magazine, 54 (1), pp. 33-34.

-
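Since the small cases are dispatched in one line above, here is a brute-force sanity check (my own sketch in Python; the bound $10^6$ is an arbitrary choice) that $a+b\ge c$ holds for all consecutive prime triples in that range:

````
def sieve(limit):
    """Return the list of primes below `limit` (simple Eratosthenes sieve)."""
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, limit, n):
                is_prime[m] = False
    return [n for n, p in enumerate(is_prime) if p]

primes = sieve(10**6)
# Check a + b >= c for every triple of consecutive primes a < b < c.
violations = [(a, b, c) for a, b, c in zip(primes, primes[1:], primes[2:]) if a + b < c]
print(violations)  # expected: []
````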
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9143898487091064, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/33097/relation-between-coordinates-and-frames-of-reference/33229
# Relation between coordinates and frames of reference

I always get a little uneasy that all the theories I can think of (at least since Newton) are constructed in a way such that they would be true in heaven and on earth ... but we can never go everywhere and test it out. So here is the question:

Is there some good justification for implementing something like the principle of relativity in scientific theories, other than that it has worked well so far?

Some more motivation: We have an understanding of different places in space (and time) and what different velocities are. Like imagine me and my droogs cruising our skateboards down the neighborhood and there is a truck driving in the other direction. I see a cactus on the roadside and I wonder how the trucker in his ride sees it. Now in the maths, space $\vec x$ and time $t$ represent physical space and physical time. And if I know my coordinates, the form of the plant and its location and orientation in space, I can find out what I see and also what the trucker from his position sees. A coordinate transformation (replacing some letters on a piece of paper with some other letters in a systematic way) is conventionally interpreted as taking the data from one "perspective" and transforming it into "another perspective". It's supposed to be a fruitful approach to physics to consider only the observable quantities.

Maybe I interpret the principle of relativity the wrong way, but I find it funny that a theory tells me there are spacetime events where I can never get to (outside the light cone). And simultaneously I'm guaranteed that if I were there I would also be able to do physics and come to the right conclusions. At the very least, I feel this is somewhat redundant - why not drop it?

-

## 7 Answers

but I find it funny that a theory tells me there are spacetime events where I can never get to (outside the light cone). At the very least, I feel this is somewhat redundant - why not drop it?

Mathematical theories do not come à la carte, i.e. they are not patched-together, cut-as-you-go constructs. Theories are axiomatic, self-consistent and self-sustained. They arise and are accepted because they explain, usually, a large number of observations to great accuracy. A theory is either invalidated by disagreeing with some data, or is consistent with all known data until further experimental research. Now mathematics being what it is, the theories are extended to regions that are not physically accessible, and one accepts the conclusions since the theory fits the known regions.

The specific example you use is not a particularly useful one, since calculations are done off the light cone in Feynman diagrams and now even more complicated calculations, which in the end are absolutely consistent with data to high accuracy, since the light cone excursions are virtual. Even if one could construct a theory where only the inside of the light cone were mathematically described, it would be a wrong theory for particle physics data.

-

But in standard quantum field theory (like the Standard Model and its extensions) space-time coordinates are not linked to observables, unlike in classical mechanics of point particles or in quantum mechanics (regarding spatial coordinates). However, mean lifetimes and mean free paths are connected to physical measurements of time and length, and these respect causality owing to the cluster decomposition principle. (I think Anna v does know this, but maybe the reader doesn't.)
– drake Aug 5 '12 at 17:46

To expand upon Alfred's point: The scientific method basically involves making assumptions and using them to make testable predictions. The results of your tests on those predictions are taken as evidence for or against your assumptions. In theoretical physics, many of these fundamental assumptions regard the basic symmetries of spacetime, such as the principle of relativity (Lorentz invariance). So, in that sense, we can't ever hope to derive Lorentz invariance, in the same way as we can't derive $F=ma$ from any deeper principle of classical mechanics. If it were possible to derive Lorentz invariance from a more fundamental assumption, as Alfred says, we would just be moving our problem one level down.

Thus the fundamental assumptions can only be justified by our tests on the predictions that can be made from them. In the case of the principle of relativity, a very small number of assumptions can be used to make a great many rich and interesting predictions, all of which have been found to be true. This is taken as evidence that the principle of relativity is indeed observed in Nature.

This is the reason a lot of time, money and effort is spent looking for violations of symmetries in particle physics. For example, until a few decades ago CP (charge-parity) symmetry was thought to be a fundamental symmetry of nature. When it was discovered that it is not, new physics had to be developed. So you see, it's entirely possible (though unlikely) that the principle of relativity may turn out not to be an exact symmetry at some point in the future. In that case it will be superseded by a more fundamental theory, in the same way that Einstein's relativity overthrew Galilean relativity, which was a more naive interpretation of the same principle.

-

It's nice when theories are based on a simple idea. I don't know of any God-given reason why the universe should be basically simple, but I find it an immensely appealing idea that it is. To take your specific example, the trucker's view of the cactus is obviously different from yours. OK, so what is his point of view? To explain it the only thing you need to know is that the line element: $$ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2$$ is an invariant, i.e. every observer will calculate the same value of $ds^2$ for the spacing between any two points $(t, x, y, z)$ and $(t + dt, x + dx, y + dy, z + dz)$. This single fact tells you everything about special relativity, and knowing it tells you not only how the trucker's view differs from yours, but how every observer anywhere in the universe will see the cactus.

It's certainly true that when we look more deeply into the equation for the line element we discover odd things like the breakdown of simultaneity, time dilation, length contraction, the twin paradox and so on. But we don't choose to believe SR because of all the weird stuff. We choose to believe it because it's so beautifully simple (and of course because it works :-).

-

Is there some good justification for implementing something like the principle of relativity in scientific theories, other than that it has worked well so far?

This is not so much an answer as it is an observation: if there were some good justification (for the principle of relativity in scientific theories) other than "it works", would not that justification then be the actual principle at work, and thus also subject to your question?

-

Only under the condition I'd know of it. – Nick Kidman Jul 31 '12 at 19:08
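The invariance of the line element claimed two answers up can be checked numerically (my own sketch, not part of any answer; units with $c=1$ and the boost velocity $0.8$ are arbitrary choices), in Python:

````
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

# Interval between two nearby events, before and after a boost.
dt, dx, dy, dz = 0.5, 0.3, 0.1, -0.2
v = 0.8
dt2, dx2 = boost(dt, dx, v)          # y and z are unchanged by an x-boost

ds2_original = -dt**2 + dx**2 + dy**2 + dz**2
ds2_boosted = -dt2**2 + dx2**2 + dy**2 + dz**2
print(ds2_original, ds2_boosted)     # agree up to floating-point rounding
````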
I think your question belongs more to philosophy than it does to physics.

1) "......but we can never go everywhere and test it out."

Actually you don't need to. The physics that you get here will also work there, because your being here is mere accident. You can always tell whether your physics belongs only to here if here and there differ physically. It can hardly be that the physics we discover is limited only to where we live and that we go on living with that forever, for we would know that it is incomplete or wrong when we find it disagreeing with some other physics we are sure does not depend on any particular here.

2) "Is there some good justification for implementing something like the principle of relativity in scientific theories, other than that it has worked well so far?"

Suppose you at once forget there is anything such as the principle of relativity, or suppose that you just don't know it. Then you do some physics. Then you do some experiments or go cruising on skateboards with your droogs. You'll find that your physics does not quite describe the happenings in your lab or the neighbourhood. It is not that we first found the principle of relativity and then went around labs and neighbourhoods seeing whether it "works" or not. The principle was self-emerging. It does not require any justification. And if you really find any justification, it would simply be the actual principle at work, as Alfred pointed out; but then you would be back to where you started, and the justification would again seem to you as something at "work".

3) ".....that a theory tells me there are spacetime events where I can never get to (outside the light cone). And simultaneously I'm guaranteed that if I were there I would also be able to do physics and come to the right conclusions."

The problem is you can't be "there". There is no point in discussing or speculating about the physics "there".

-

Since you wrote:

Now in the maths, space $\vec x$ and time $t$ represent physical space and physical time.

I will assume that you are talking about the Galilean and the Special Principle of Relativity, because the hole argument prevents that interpretation in General Relativity, and the latter case has almost nothing to do with the former two. I will just give an answer to the question: why do we impose invariance under Lorentz transformations when we are looking for new physical laws? (This is what one really imposes, and not the principle of relativity; in principle, there could exist (and there do exist) more transformations that connect inertial observers.)

It is totally based on a derivation of the Lorentz and Galilei transformations begun by W. von Ignatowsky (see here for references) in 1911, which is unfortunately little-known. Since in this derivation these transformations can be derived without making reference to any specific physical phenomenon or law, like Maxwell theory or motion on inclined planes (this is the most important property of this derivation), and they mostly rely on properties of space and time, one can claim that implementing Lorentz invariance is equivalent to assuming some features of space and time. So, as usual with physical explanations, this answer simply brings your question to another, perhaps deeper, question: Why is space-time homogeneous and isotropic?

Consider two inertial observers. Assume:

1. Space and time are homogeneous (there are no special points in space or time).

2. Isotropy of space (no special directions in space).

3.
The transformations have a group law (there is one transformation that connects one observer with himself; if two observers are connected by one transformation, and the second one is connected with a third observer, then the first and the third are also connected). This requirement is due to the equivalence of inertial observers.

4. If two events happen at the same place for one observer, their time order must be the same for all observers.

Then the most general transformations between observers are the Lorentz and Galilei transformations (the latter arising as a limiting case of the former). The relativity or absoluteness of simultaneity permits one to physically distinguish one from the other.

Footnote: Of course, space and time are neither homogeneous nor isotropic in the real world, but they are in the domain where the Relativity Principle holds.

Added: Here http://arxiv.org/abs/gr-qc/0107091 you can find more details and references about what I have tried to summarize. You will find that the interpretations of the natural coordinates are readings of clocks and rods. Note that this derivation does not assume that spacetime is a metrical space. Indeed, as is well known, Galilei spacetime is not compatible with a four-dimensional metric tensor.

-

First, it was hard for me to get your questions, so my answer will likely be terribly unrelated. But I hope I'll provide some points that will help you.

We have an understanding of different places in space (and time) and what different velocities are.

Maybe I'm being too nagging and it doesn't really matter here, but there are no such things as places. There are bodies (objects, or call them as you like) and relations between them. These relations determine whether two given bodies can affect each other or not. Moreover, if the two bodies can affect each other, these relations come into play to say how large the effect is.

Maybe I interpret the principle of relativity the wrong way, but I find it funny that a theory tells me there are spacetime events where I can never get to (outside the light cone). And simultaneously I'm guaranteed that if I were there I would also be able to do physics and come to the right conclusions. At the very least, I feel this is somewhat redundant - why not drop it?

Suppose you are not in the right relations to observe the event, but later (your later) you can observe its consequences. You can pretend you were there and do the physics to model how it was and see how it predicts what you have observed. That's a kind of archaeology: you know the rules, you know the results, you reconstruct the history. Though you can try to model the future as well.

UPD: I've just remembered the thought that if everything is determined, there is in some sense no time. So to the extent one can predict (things are determined), space-time restrictions on our knowledge are weaker.

-
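Assumption 3, the group law, can also be illustrated concretely for boosts (again my own sketch, with $c=1$): composing two boosts along one axis gives the boost at the relativistically added velocity.

````
import numpy as np

def boost_matrix(v):
    """2x2 Lorentz boost in (t, x) coordinates, units where c = 1."""
    gamma = 1.0 / np.sqrt(1.0 - v * v)
    return np.array([[gamma, -gamma * v],
                     [-gamma * v, gamma]])

u, v = 0.6, 0.7
composed = boost_matrix(u) @ boost_matrix(v)
w = (u + v) / (1 + u * v)          # relativistic velocity addition
print(np.allclose(composed, boost_matrix(w)))  # True: 1D boosts form a group
````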
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9573407769203186, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/18569/asymptotic-expansion-negative-powers?answertab=oldest
# Asymptotic expansion, negative powers

The question was inspired by this discussion: How to expand a function into a power series with negative powers?

I am interested in the asymptotic behavior of a function at infinity:

````
f[r_]:=(0.04962 Exp[-2 r] (-1.000+r))/((0.06119+(Exp[-2 r])^(2/3))^2 r)
````

or (TeX) $$f(r)=\frac{0.04962 e^{-2 r} (r-1.000)}{\left(\left(e^{-2 r}\right)^{2/3}+0.06119\right)^2 r}$$

I tried `Series[f[r],{r,0,10}]` for an expansion in negative powers at infinity (as suggested) and got: $$-\frac{0.04406}{r}+0.02146+0.02106 r+0.004405 r^2-0.001355 r^3-0.001205 r^4-0.0003607 r^5-\left(8.402\times 10^{-6}\right) r^6+0.00004149 r^7+0.00001982 r^8+\left(3.921\times 10^{-6}\right) r^9-\left(6.018\times 10^{-7}\right) r^{10}+O\left(r^{11}\right)$$

It seems like the function decays faster than $1/r^n$ and the expansion is meaningless. But what does the term $-\frac{0.04406}{r}$ mean then? The function is strictly positive at infinity and I am kind of confused by that. Does this mean that the asymptotic form of the function is something plus the term $-\frac{0.04406}{r}$, which effectively gives the observed behavior? Can someone clarify it? How can one explain the term $-\frac{0.04406}{r}$?

-

1 1) You forgot the underscore after r in the definition of f. Probably not essential in this case. 2) You have round brackets around r in the `Series` call. 3) In the answer you linked to, the series is developed at infinity; you use 0. – Sjoerd C. de Vries Jan 27 at 18:52

1 $-0.04406 = 0.04962(-1.000) / (1 + 0.06119)^2$ exactly as it should: this is the limiting behavior of $r f(r)$ as $r$ approaches $0$. – whuber Jan 27 at 19:08

`Series[f[r],{r,0,10}]` is equivalent to `Series[f[r],{1/r,Infinity,10}]`, isn't it? We cannot use negative powers in `Series`, though. – molkee Jan 27 at 19:20

I am just wondering whether or not this term has anything to do with the behavior of the function at infinity. – molkee Jan 27 at 19:34

It goes to zero ~ 13.25*Exp[-2*r]. – Daniel Lichtblau Jan 27 at 20:50

## 1 Answer

It is not quite clear why you expand the expression around 0 if you want to study its behavior at infinity. I trust that it is not its limit at infinity that you are interested in, since this limit is clear without any calculations. You probably need one or a few of the largest terms. I would in this case go to a new variable $x=e^{-2r}$, which tends to zero when $r$ goes to infinity, and rewrite the expression in its terms:

````
f[r_] := (0.04962 Exp[-2 r] (-1.000 + r))/((0.06119 + (Exp[-2 r])^(2/3))^2 r);
g[x_] := f[r] /. {E^(-2 r) -> x, r -> -Log[x]/2} // Simplify
ss = Series[g[x], {x, 0, 2}] // Normal
````

which yields

````
x^(5/3) (-433.157 - 866.314/Log[x]) + x (13.2524 + 26.5049/Log[x]) + x^(7/3) (10618.3 + 21236.7/Log[x])
````

And go back to the variable r:

````
ss /. {x -> Exp[-2*r], a_/Log[x] -> a/(-2 r)} // PowerExpand
````

This brings the following:

````
E^(-14 r/3) (10618.3 - 10618.3/r) + E^(-2 r) (13.2524 - 13.2524/r) + E^(-10 r/3) (-433.157 + 433.157/r)
````

You are right that there are both terms with exponentials and terms with $1/r$, but the most important is the term containing the exponential with the smallest decrement.

-
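As a numerical cross-check of the leading term (my own sketch, not part of the thread; the constant $13.2524\approx 0.04962/0.06119^2$ comes from the expansion above), the answer predicts $f(r)\approx 13.2524\,e^{-2r}(1-1/r)$ for large $r$:

````
import math

def f(r):
    return (0.04962 * math.exp(-2 * r) * (r - 1.0)) / (
        (0.06119 + math.exp(-2 * r) ** (2 / 3)) ** 2 * r)

for r in (5.0, 10.0, 20.0):
    leading = 13.2524 * math.exp(-2 * r) * (1 - 1 / r)
    print(r, f(r) / leading)   # the ratio tends to 1 as r grows
````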
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.884768009185791, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/25031/reconstructing-a-fraction-from-its-first-digits/25036
## Reconstructing a fraction from its first digits

It is not difficult to see that any reduced fraction $\frac{p}{q}$, where $0 < p < q$ and both $p$ and $q$ have at most $N$ digits (where $N$ is a fixed integer), can be reconstructed from its first $2N$ digits. In other words, if we let ${\cal F}_N= \lbrace (p,q) \mid 0 < p < q < {{10}^N} \rbrace$ and define the mapping $f : {\cal F}_N \to { \mathbb N}$ by $f(p,q)=\left\lfloor \frac{10^{2N}p}{q} \right\rfloor$, then $f$ is injective. So there is a left inverse $g$ such that $g(f(p,q))=(p,q)$ for any $(p,q) \in {\cal F}_N$.

What is the best way to compute $g$ effectively? There's always brute-force search, of course, but ...

-

1 Is this a terminating or a repeating decimal? If it's a terminating decimal, it's a trivial solution: just multiply by $10^n$ for your numerator and stick the $10^n$ in the denominator and reduce. If it's a repeating decimal, just multiply $p/q$ by $10^n$ and subtract $p/q$ to get your repeating portion, then divide by $(10^n - 1)$ to get your fraction. Then reduce. Example: 0.123456789... Multiply by $10^9$ to get 123456789.123456789... Subtract the repeating portion to get 123456789. Divide by $(10^9 - 1)$ to get 123456789/999999999. Reduce to 13717421/111111111. – Gabriel Benamy May 17 2010 at 16:59

@Gabriel, consider 1/7, which has decimal 0.142857 repeating. This can be reconstructed from just 0.14 if we let N=1. – jc May 17 2010 at 17:05

Ah, now I fully understand what the question is. So you're saying that every reduced $a/b$ is unique up to the first $2n$ digits, where $n = \lceil \log_{10}(\max(a,b)) \rceil$ (the number of base-10 digits of the larger of $a,b$)? That's an interesting question... – Gabriel Benamy May 17 2010 at 17:10

This is reminiscent of the Berlekamp-Massey algorithm, which finds an LFSR (linear feedback shift register) of length $n$ producing a binary stream of length $2n$. I'll bet that you can modify the B-M algorithm to solve this problem. en.wikipedia.org/wiki/… – Victor Miller May 17 2010 at 17:22

## 4 Answers

Taking the continued fraction approximations of your decimal expansion until the denominators get larger than $10^N$ ought to work.

Edit: Let me add that you have to do a tiny bit more work to get the best rational approximants from the continued fraction, and that's probably the algorithm that should be used. See http://en.wikipedia.org/wiki/Continued_fraction#Best_rational_approximations

-

+1 for continued fractions. From this, you should be able to easily reproduce a simple poly(N)-time algorithm, which furthermore can be used to easily obtain improved approximations at each step without having to 'unravel' the continued fraction anew at each stage (i.e. by not explicitly formulating them as continued fractions). – Niel de Beaudrap May 17 2010 at 19:09

What would happen if I gave the algorithm, say, .19, which is NOT generated by any $a/b$ where $a,b < 10$? – Gabriel Benamy May 18 2010 at 22:27

1 You'll get the closest approximation possible with $a,b<10$. In the case of 0.19, the approximants are 1/5, 1/(5+1/3)=3/16, 1/(5+1/(3+1))=4/21, 1/(5+1/(3+1/(1+1/4)))=19/100. Try playing around with maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/… – jc May 18 2010 at 23:57

Note that my comment refers to the "best rational approximation" algorithm in my edited answer, and not the original continued fraction truncation algorithm. Try e.g.
0.08, whose continued fraction truncation would be 0 (the next convergent is 1/12), but whose best approximation (with $a,b$ strictly $<10$) is really 1/9. – jc May 19 2010 at 0:13

Henry Pollak has written a nice series of articles about how, given a positive decimal number, one can construct a rational fraction that is approximately equal to it. The first of these articles appeared in COMAP's (Consortium for Mathematics and Its Applications) newsletter Consortium, and can be found at this link:

http://webmail.comap.com/www.comap.com/pdf/749/Cons92.pdf

while the second article is here:

http://webmail.comap.com/www.comap.com/pdf/1004/C95.pdf

and the last article:

http://ns.comap.com/www.comap.com/pdf/1028/Con96.pdf

-

Say I have the number x=0.282051282, and I want to know which fraction that is. Here is an algorithm:

• 0/1 < x < 1/0: add the numerators and add the denominators to get 1/1. Compare x to 1/1:
• 0/1 < x < 1/1: add the numerators and add the denominators to get 1/2. Compare x to 1/2:
• 0/1 < x < 1/2: add the numerators and add the denominators to get 1/3. Compare x to 1/3:
• 0/1 < x < 1/3: add the numerators and add the denominators to get 1/4. Compare x to 1/4:
• 1/4 < x < 1/3: add the numerators and add the denominators to get 2/7. Compare x to 2/7:
• 1/4 < x < 2/7: add the numerators and add the denominators to get 3/11. Compare x to 3/11:
• 3/11 < x < 2/7: add the numerators and add the denominators to get 5/18. Compare x to 5/18:
• 5/18 < x < 2/7: add the numerators and add the denominators to get 7/25. Compare x to 7/25:
• 7/25 < x < 2/7: add the numerators and add the denominators to get 9/32. Compare x to 9/32:
• 9/32 < x < 2/7: add the numerators and add the denominators to get 11/39. Compare x to 11/39:
• x = 11/39

-

1 What do you expect the upper bound on the run-time of this algorithm would be? It seems to me that if the number is $(10^N - 2)/(10^N - 1)$, we get a sequence of lower bounds 1, 1/2, 2/3, 3/4, 4/5, ... which would take exponentially long to converge! – Niel de Beaudrap May 17 2010 at 17:22

The number of steps of the algorithm is exactly equal to the sum of all the partial quotients occurring in the continued fraction expansion of 0.282051282. This algorithm can certainly be improved to directly produce the subsequence 0/1, 1/3, 1/4, 2/7, 11/39, which is just the continued fraction expansion of 0.282051282. So my answer is not that different from jc's answer. – André Henriques May 17 2010 at 19:32

@André Henriques: No, your algorithm does not take the same time; it is not even poly-time equivalent, as is evident from the example I give above. Specifically: in the case of 99998/99999, the continued fraction algorithm stops in just two iterations (precisely 1/1 and 99998/99999). While this is indeed a subsequence of the almost-hundred-thousand-long sequence of values 1/1, 1/2, 2/3, ..., 99998/99999 obtained by your method, the two processes are not meaningfully equivalent. Producing a 'supersequence' is not sufficient for algorithmic equivalence. – Niel de Beaudrap May 17 2010 at 20:37

Interpret "can certainly be improved" in my above comment to mean "can be replaced by a different algorithm". – André Henriques May 18 2010 at 6:41

The nicest answers to your fraction come from taking the partial convergents to the continued fraction expansion for the decimal.

-
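To make the recovery concrete, here is a sketch in Python (my own illustration; `Fraction.limit_denominator` performs the best-rational-approximation step discussed above, and applying it to the midpoint of the truncation interval makes the recovery exact, since any two fractions with denominators below $10^N$ are more than $10^{-2N}$ apart):

````
from fractions import Fraction
from math import gcd

def recover(code, N):
    """Invert f(p,q) = floor(10**(2N) * p / q) for 0 < p < q < 10**N.

    The true value p/q lies in [code/10**(2N), (code+1)/10**(2N)), and it is
    the unique fraction with denominator < 10**N in that interval, hence the
    best rational approximation to the interval's midpoint."""
    midpoint = Fraction(2 * code + 1, 2 * 10 ** (2 * N))
    return midpoint.limit_denominator(10 ** N - 1)

N = 1
code = (10 ** (2 * N) * 1) // 7           # f(1,7) = 14, i.e. the digits "14"
print(recover(code, N))                    # Fraction(1, 7)

# Exhaustive round-trip check for N = 2.
N = 2
print(all(recover((10 ** (2 * N) * p) // q, N) == Fraction(p, q)
          for q in range(2, 10 ** N) for p in range(1, q) if gcd(p, q) == 1))
````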
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8846787810325623, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/140150/show-that-the-ring-of-all-rational-numbers-which-when-written-in-simplest-form/140910
# Show that the ring of all rational numbers, which when written in simplest form have an odd denominator, is a principal ideal domain.

Show that the ring of all rational numbers $m/n$ with $n$ an odd integer is a principal ideal domain. We haven't really discussed principal ideal domains. I've heard that this is easy, but I just lack the basic knowledge of what a principal ideal domain is.

-

5 Sorry, but... if you don't have the "basic knowledge of what a principal ideal domain is", then how can you possibly expect a proof showing that something is a principal ideal domain to be "easy"? Even a basic reader would be hard if you tried to read it in Russian and you didn't know the alphabet! I could tell you that this is easy because the ring in question is a localization of a PID, but that's unlikely to be "easy" or enlightening to you... – Arturo Magidin May 2 '12 at 23:40

## 3 Answers

Hint $\$ Generally, an ideal in a PID is generated by any element $\rm\:b\:$ having the least number of prime factors. Indeed, such a minimal $\rm\:b\in I\:$ divides every other element $\rm\:c\in I,\:$ otherwise $\rm\:d = \gcd(b,c)\:|\:b\:$ properly, so $\rm\:d\:$ has fewer primes than $\rm b,\:$ and $\rm (b,c) = (d)\subset I,$ contra minimality of $\rm b.$

In your ring $\rm\:D,\:$ every odd prime $\rm\:p\:$ is a unit since $\rm\:1/p \in D.\:$ So the only prime that survives in $\rm\:D\:$ is $\rm\:p=2.\:$ Thus, by the above, an ideal of $\rm\:D\:$ is generated by any one of its elements having the least number of factors of $2.\:$ Thus every ideal has the form $\rm\:(2^n),\:$ hence every ideal is principal.

Remark $\:$ Implicit in the above is the following pretty generalization of the Euclidean algorithm to arbitrary PIDs. The Dedekind-Hasse criterion states that a domain $\rm\:D\:$ is a PID iff given any $\rm\:0\ne b,c \in D,\:$ either $\rm\:b\:|\:c\:$ or there exists a $\rm D$-linear combination of $\rm\:b,c\:$ that's smaller than $\rm b,\:$ where size is measured by naturals (or any ordinal), so that induction (or descent) works. It is clear that such a domain must be a PID, since the smallest element in an ideal must divide all others. Conversely, since a PID is a UFD, an adequate metric is the number of prime factors (since if $\rm\:b\nmid c\:$ then their gcd $\rm\:d\:$ must have fewer prime factors; for if $\rm\:(b,c) = (d)\:$ then $\rm\:d\:|\:b\:$ properly, else $\rm\:b\:|\:d\:|\:c\:$ contra hypothesis). Notice Euclidean descent by the Division Algorithm is just a special case, hence Euclidean $\Rightarrow$ PID ($\Rightarrow$ {UFD, Bezout} $\Rightarrow$ GCD).

-

An integral domain is a commutative ring with unity that has no zero divisors. A Principal Ideal Domain is an integral domain in which every ideal is principal; that is, $R$ is a Principal Ideal Domain if and only if for every ideal $I$ of $R$ there exists $a\in R$ such that $I = (a) = aR = \{ax\mid x\in R\}$.

Examples of PIDs are: the integers (if $I$ is an ideal, then either $I=(0)$, or $I=(a)$ where $a$ is the smallest positive integer in $I$; a consequence of the division algorithm); polynomials with coefficients in $\mathbb{Q}$, in $\mathbb{R}$, in $\mathbb{C}$, or more generally in any field (again, a consequence of the division algorithm for polynomials); any field (the only ideals are $(0)$ and $(1)$); and others.

Let $R$ be the ring of all rational numbers which, when written in lowest terms, have odd denominator.
Note that this is the same as the ring of all rationals that can be written with an odd denominator, since if $\frac{a}{b}$ is not in lowest terms and $b$ is odd, then the reduced fraction $\frac{m}{n}$ with $\frac{a}{b}=\frac{m}{n}$ has $n|b$, and so $n$ is also odd.

Let $I$ be an ideal of $R$, and assume that $I\neq (0)$. If $0\neq\frac{a}{b}\in I$ with $b$ odd, let $n$ be the largest nonnegative integer such that $2^n|a$. I claim that $2^n=\frac{2^n}{1}\in I$: indeed, write $a = 2^nc$ with $c$ an odd integer. Since $\frac{a}{b}\in I$ and $\frac{b}{c}\in R$, then $\frac{b}{c}\frac{a}{b} = \frac{a}{c} = \frac{2^n}{1}\in I$.

Now let $S$ be the collection of all nonnegative integers $n$ such that $2^n\in I$; since $I\neq (0)$, then $S$ is nonempty. Let $m$ be the smallest element of $S$. I claim that $I = (2^m) = (\frac{2^m}{1})$.

Clearly, $2^m\in I$ by construction, so $(2^m)\subseteq I$. Let $\frac{a}{b}\in I$ with $b$ odd; write $a=2^kc$ with $c$ odd. Then $k\geq m$, because from what we saw above, $\frac{2^kc}{b}\in I$ with $b$ and $c$ odd implies $2^k\in I$, hence $k\in S$, hence $k\geq m$. Therefore, $\frac{2^{k-m}c}{b} \in R$, and $\frac{a}{b} = \frac{2^{k-m}c}{b}\frac{2^m}{1}\in (2^m)$; that is, $I\subseteq (2^m)$.

Thus, $I$ is principal; this proves that all ideals of $R$ are principal. $\Box$

What's behind the argument is that every odd integer is a unit, so multiplication by units does not affect ideals in $R$; the only thing that matters are the primes of $\mathbb{Z}$ that are not units when considered in $R$, and the only such prime is $2$. This is a consequence of the fact that $R$ is the localization of $\mathbb{Z}$ at the multiplicative set of odd integers; every ideal of $R$ is the extension of an ideal of $\mathbb{Z}$, and since every ideal of $\mathbb{Z}$ is principal, so is every extension of an ideal of $\mathbb{Z}$, hence so is every ideal of $R$.

-

I very much appreciate you giving the definition in a clear manner. The examples also help a lot. I will have to look over the proof some more, but I am going to take the way to write the definition to try to make some other connections. Very much appreciated! – Nick Thomas May 3 '12 at 1:12

Perhaps we can attack your problem from the following point of view, using some commutative algebra: Let $R$ be your ring of all rationals which in lowest terms have an odd denominator. Then clearly this contains $\Bbb{Z}$, since any integer can be viewed as sitting inside $R$ via the map that sends $x \mapsto \frac{x}{1}$; clearly $1$ is an odd number and this fraction is always in its lowest terms. Now we can view $R$ as sitting inside of $\Bbb{Q}$ too, which is the fraction field of the integers. It follows, since $$\Bbb{Z} \subset R \subset \Bbb{Q}$$ and $\Bbb{Z}$ is a PID (Principal Ideal Domain), that $R$ is also a PID. This result that I just used is proved here: A subring of the field of fractions of a PID is a PID as well.

-
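A small computational illustration of this structure (my own sketch in Python): every nonzero element of $R$ is a unit times $2^v$, where $v$ is the exponent of 2 in the numerator, so any finite list of generators collapses to $(2^{\min v})$.

````
from fractions import Fraction

def v2(x: Fraction) -> int:
    """2-adic valuation of a nonzero element of R (odd-denominator fractions).

    Since the denominator is odd, all factors of 2 sit in the numerator."""
    n, v = x.numerator, 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# Each generator g equals (unit) * 2**v2(g): the cofactor g / 2**v2(g)
# has odd numerator and odd denominator, hence is invertible in R.
gens = [Fraction(12, 5), Fraction(40, 7), Fraction(6, 11)]
for g in gens:
    unit = g / 2 ** v2(g)
    is_unit = unit.numerator % 2 == 1 and unit.denominator % 2 == 1
    print(g, "=", f"2^{v2(g)} *", unit, "| cofactor invertible in R:", is_unit)

# So the ideal (12/5, 40/7, 6/11) equals (2**min_v) = (2**1) = (2).
print("ideal generator: 2 **", min(v2(g) for g in gens))
````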
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 115, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9522867202758789, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1297/desirable-s-box-properties
# Desirable S-box properties

What desirable properties should an S-box have? My current standard selection process is to just pick them at random and verify that they fit the following criteria:

• The probability that any random two bits $S[a]_b$ and $S[c]_d$ are equal (for any random $a$, $b$, $c$ and $d$) is 50%.
• The probability that any random two bits $S[a]_n$ and $a_n$ are equal (for any random $a$ and $n$) is 50%.
• No entries exist such that $S[a] = a$
• No entries exist such that $S[a] = \bar{a}$

Are there any other important properties that need to be applied?

Edit: My reason for asking is that I wish to combine this S-box design with a CBC-mode cipher, as discussed in this question.

-

My rationale is that `S[a] = a` provides no benefit and `S[a] = !a` will always maintain the same bit "pattern" as its input. Since the idea of an S-box is to provide "confusion" (as defined by Shannon), it seems reasonable to ensure that neither of these cases is allowed. I may, however, be incorrect. – Polynomial Nov 23 '11 at 14:41

The output of the cipher has the avalanche property and appears random, but the construction of the S-box is not random. It's a case of not allowing any correlation, rather than specifying that a particular output is not allowed. – Polynomial Nov 23 '11 at 14:52

It doesn't really explain why they made the choices they did, though. It just says "this is the S-box and these are the choices we made". I'm really looking for answers that provide both an explanation of the facts and the reasoning behind making the choices. – Polynomial Nov 23 '11 at 15:05

## 2 Answers

The following information about the DES S-box might be useful (taken from here):

DES Design Criteria

• There were 12 criteria used, resulting in about 1000 possible S-boxes, of which the implementers chose 8.
• These criteria are CLASSIFIED SECRET.
• However, some of them have become known.

The following are design criteria:

R1: Each row of an S-box is a permutation of 0 to 15.

R2: No S-box is a linear or affine function of the input.

R3: Changing one input bit to an S-box results in changing at least two output bits.

R4: S(x) and S(x+001100) must differ in at least 2 bits.

The following are said to be caused by design criteria:

R5: $S(x) \ne S(x \oplus 11ef00)$ for any choice of the bits $e$ and $f$.

R6: The S-boxes were chosen to minimize the difference between the number of 1's and 0's in any S-box output when any single input bit is held constant.

R7: The S-boxes chosen require significantly more minterms than a random choice would require.

For Rijndael, things were different, as the S-box in Rijndael had to meet certain requirements mathematically and cryptanalytically.

-

Could you explain criteria R5 and R7, please? And is criterion R2 essentially "No S[x] must exist for x where the result is a rotation of x, e.g. `01010010 -> 10010100`"? – Polynomial Nov 24 '11 at 6:58

@Polynomial: There are many more linear and affine functions than just rotations. Basically, R2 says (assuming they mean linear/affine over $\{0,1\}$) that no S-box may be writable as $S(x) = a_0 \oplus a_1x_1 \oplus \dotsb \oplus a_nx_n$, where $x_1 \dotsc x_n$ are the bits of $x$ and $a_0 \dotsc a_n$ are arbitrary bitstrings. – Ilmari Karonen Nov 24 '11 at 12:16

The answer is: it depends. It depends on how you plan to use your S-box. Presumably you are going to use your S-box in some block cipher.
In that case, you have to look at what properties you need from the S-box, and then generate the S-box accordingly. You can't separate the design of the S-box from the design of the rest of the cipher. There is no universal set of criteria that make for a good S-box. For instance, AES had one set of criteria for their S-boxes. DES had a totally different set of criteria. -
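For what it's worth, the random-selection process described in the question is easy to prototype (my own sketch in Python; the tolerance 0.05 is an arbitrary cutoff, and note that the first criterion, equality of two random output bits, holds automatically in aggregate for any bijective S-box, since its outputs range over all values exactly once):

````
import random

def random_sbox(n_bits=8):
    """Random bijective 8-bit S-box (a shuffled lookup table)."""
    table = list(range(2 ** n_bits))
    random.shuffle(table)
    return table

def passes(s, n_bits=8, tol=0.05):
    size = 2 ** n_bits
    mask = size - 1
    # No fixed points S[a] == a, no complemented fixed points S[a] == ~a.
    if any(s[a] == a or s[a] == a ^ mask for a in range(size)):
        return False
    # S[a]_n should agree with a_n for about half of all inputs, per bit n.
    for n in range(n_bits):
        agree = sum((((s[a] ^ a) >> n) & 1) == 0 for a in range(size))
        if abs(agree / size - 0.5) > tol:
            return False
    return True

# Keep drawing until a candidate passes (usually a modest number of tries).
sbox, tries = random_sbox(), 1
while not passes(sbox):
    sbox, tries = random_sbox(), tries + 1
print(f"found after {tries} tries; first entries: {sbox[:8]}")
````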
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486016035079956, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/50550/list
# Asymptotic density of provable statements in ZFC

This question is in response to one of the questions asked here. The OP wanted to know if the percentage of statements provable from ZFC tended to some value, and if so, what it was. In particular, the OP seemed interested in determining a possible asymptotic density: $\lim_{n \rightarrow \infty} \frac{T(n)}{n}$, where $T(n)$ represents the number of statements provable from ZFC with Gödel number at most $n$, and where the Gödel coding is bijective between the set of natural numbers and formulas in the language of set theory. Of course, the Gödel coding should be computable in the sense that the set of the codes for the ZFC theorems should be semi-decidable. (I.e., we should be able to list the theorems of ZFC with a computer program.)

One problem that immediately comes to mind is the possibility that ZFC is inconsistent, whereby all statements would be provable, making the asymptotic density trivially have value 1. Another potential problem that I can see is the fact that the provability relation is computably enumerable but not computable (unless ZFC is inconsistent). Therefore, let's fix a proof system and instead consider the function $T_n(n)$, which will represent the number of statements with Gödel number at most $n$ provable in at most $n$ lines from the finite collection of axioms of ZFC all having Gödel number at most $n$. The relevant question then seems to be:

What behavior does the function $T_n(n)/n$ exhibit for large $n$?

EDIT: Another problem, pointed out by Chris and Carl, is that we don't have a canonical numbering of the formulas. To try to salvage the meaningfulness of this question, let's fix a coding that corresponds to Gödel's original numbering. In this way, we assign each symbol a natural number and encode a formula with $n$ symbols having respective codes $c_1, c_2, \ldots, c_n$ from left to right as $\prod_{i=1}^n p_i^{c_i}$, where $p_i$ represents the $i^{th}$ prime. Then, to ensure that the coding is bijective, compose this function with the map on valid Gödel numbers sending the $i^{th}$ least code to $i$. Of course, the question is still dependent on the rules of inference of our proof system, but fix an accordingly natural one as well.
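As a toy illustration of the first stage of this coding (my own sketch in Python; the symbol table is a hypothetical assignment, not Gödel's actual one, and the final bijectivization step of ranking the valid codes is omitted):

````
from math import prod
from sympy import prime

# Hypothetical symbol codes -- any injective assignment of symbols to
# positive integers would do.
SYMBOL_CODE = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def godel_number(formula: str) -> int:
    """Encode a string of symbols as prod_i p_i ** c_i."""
    return prod(prime(i + 1) ** SYMBOL_CODE[sym]
                for i, sym in enumerate(formula))

# 'S0=S0' -> 2**2 * 3**1 * 5**3 * 7**2 * 11**1
print(godel_number('S0=S0'))  # 808500
````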
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336852431297302, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/113085/indian-claims-finding-new-cube-root-formula/113205
# Indian claims finding new cube root formula

Indian claims finding new cube root formula

It has eluded experts for centuries, but now an Indian, following in the footsteps of Aryabhatt, one of the earliest Indian mathematicians, claims to have worked out a simple formula to find any number's cube root.

Nirbhay Singh Nahar, a retired chemical engineer and an amateur mathematician, claims he has found a formula that will help students and applied engineers to work out the cube roots of any number in a short time. "Give me any number - even, odd, decimals, a fraction...and I will give you the cube root using a simple calculator to just add and subtract within a minute and a half. We do have methods and patterns, but no formula at the moment. Even the tables give cube roots of 1 to 1,000, not of fractions or of numbers beyond 1,000, for which people have to use scientific calculators," Nahar, who retired as an engineer from Hindustan Salts Ltd at Sambhar (Rajasthan), said.

Is there any sense to this claim? Is it possible to have an algorithm that gives the cube root which uses only additions and subtractions?

-

10 "I will not disclose the formula till it is patented because I want the credit for my work to go to India." It's hard to talk about math that you can't see. – Dylan Moreland Feb 25 '12 at 1:02

22 "Writing to Prime Minister to arrange meeting with world's top mathematicians" (for a cube root calculation shortcut!). In the footsteps of Aryabhat. Hmm... I am thinking of changing my name till this question is deleted :-) – Aryabhata Feb 25 '12 at 1:37

11 I'm voting to reopen. OP's actual question "Is it possible to have an algorithm that gives the cube root which uses only additions and subtractions?" is perfectly valid without knowing the details of Nahar's claimed algorithm. – Austin Mohr Feb 25 '12 at 2:15

13 In Mathematical Cranks, Underwood Dudley describes a number of amateur mathematicians making comparable claims. They usually turn out to be methods for iteratively computing successive rational approximations to the desired cube root (or whatever). Even though the authors are wont to describe their work as solving age-old impenetrable riddles, such methods are completely without practical use, since computers and inexpensive calculators can already compute cube roots for us. Often the methods are not even new, the authors just having rediscovered something that was known already. – Henning Makholm Feb 25 '12 at 3:38

6 If one wants to merely get better and better approximations of cube roots, then as readers of Surely You're Joking, Mr Feynman! may recall, this can be done on an abacus... – user16299 Feb 25 '12 at 9:00

## 3 Answers

If by "formula", Nirbhay (the former engineer featured in the article) means "closed form formula", then the answer is obviously no, because the answers would all have to be rational, whereas e.g. the cube root of 2 is not. However, if by "formula" Nirbhay means "algorithm which computes the digits in sequence, which converges on the answer and terminates in the case of integers and other terminating fractions which are perfect cubes", then the answer is yes, if you allow either simple multiplications, or perform it in base 2. (Of course, you can reduce multiplication to addition if you like, but beyond small integers or shifting numbers by place values, that feels like cheating.) No innovation is required; it's a simple modification of the digit-by-digit algorithm for computing square roots.
The intuition behind this algorithm is that at each point you are trying to get better and better approximations to $X$ by constructing rational numbers $Y_1, Y_2, Y_3, \ldots$ having more and more digits, whose cubes approximate $X$ from below. We chip away at the error between $X$ and the cubes of these approximations by trying to "complete the cube". Each new digit of $X$ that we consider allows us to try to chip away more of the difference between $Y_j^3$ and $X$, by defining $Y_{j+1} = Y_j + \delta$ for a suitable increment $\delta$; we then determine the error involved in the new approximation by computing $Y_{j+1}^3 = Y_j^3 + 3\delta Y_j^2 + 3\delta^2 Y_j + \delta^3$. The trick is how to do this with simple arithmetic operations; but it's not too difficult.

I demonstrate the algorithm in binary, because that's the base where it would be easiest in practice to compute the cube roots. The generalization to arbitrary bases (e.g. decimal) is an exercise for the reader. To simplify the presentation, we may assume without loss of generality that the number $X$ whose cube root we compute is less than 8. We may repeatedly divide by 8 until this is the case; when we have obtained the cube root, simply multiply the answer the same number of times by 2. This is just observing that $\sqrt[3]{8^{-n} \cdot X\;} = 2^{-n} \cdot \sqrt[3]{X\;}$, and allows me to emphasize that the algorithm works just as well for fractions as for integers. Because of how the algorithm works, it looks somewhat as though it's starting with a number in octal, and obtaining a root in binary (for square roots it looks like it produces a binary root from a number in quartal); but of course we can represent the base 8 "digits" by groups of three binary digits.

Let $X = x_0 + x_1/8^1 + x_2/8^2 + \cdots$, where each $x_j$ ranges from 0 to 7. We compute binary digits $y_j$ representing $Y = y_0 + y_1/2^1 + y_2/2^2 + \cdots$ such that $Y^3 = X$, as follows.

• If $x_0 = 0$, set $y_0 = 0$; otherwise, set $y_0 = 1$. Let $R := x_0 - y_0$.
• Set $A = B = C = y_0$.
• Set $j = 1$, and repeat the following until the desired number of digits is obtained (or until $R = 0$, provided all the remaining digits $x_j, x_{j+1}, \ldots$ are also zero):

1. Set $D = 8(R + C) + x_j$ (this is just the integer formed by the octal digits $x_0 x_1 \ldots x_j$), and consider the largest binary digit $y_j$ such that $$D \;\;\geqslant\;\; y_j^3 + 6Ay_j^2 + 12By_j + 8C.$$ Of course, we're working with binary digits; all the powers of $y_j$ are either zero or one. So what we're really testing is whether or not $$D \;\;\geqslant\;\; 1 + 2A + 4A + 4B + 8B + 8C,$$ where multiplication by 2, 4, and 8 can be realized by shifting integers one, two, or three places (tacking on zeros). If the inequality above holds, set $y_j = 1$; otherwise set $y_j = 0$.

2. The integer $A$ will actually be the number whose binary representation is the bit-sequence $y_0 y_1 \ldots y_{j-1}$ in order; $B$ is equal to $A^2$, and $C$ is equal to $A^3$. To maintain this invariant for the next iteration $j$, we compute new values $A'$, $B'$, and $C'$ as updated values. For $A$, we define $$A' = 2A + y_j \;.$$ For $B'$, we take advantage of the fact that we have $B$ and $A$: $$\begin{align*} B' &= (A')^2 = (2A + y_j)^2 = 4A^2 + 4Ay_j + y_j^2 \\&= 4B + 4Ay_j + y_j^2 \;. \end{align*}$$ Similarly, for $C'$: $$\begin{align*} C' &= (A')^3 = (2A + y_j)^3 = 8A^3 + 12A^2y_j + 6Ay_j^2 + y_j^3 \\&= 8C + 8By_j + 4By_j + 4Ay_j^2 + 2Ay_j^2 + y_j^3 \;. \end{align*}$$ Again, multiplications by zero, one, or powers of two are trivial in binary representation. We then update $A := A'$, $B := B'$, and $C := C'$.

3.
We update $R$, which is meant to represent the error of $C$ as an approximation to the integer formed by the octal digits of $X$ consumed so far: we set $$R' = \bigl(8(R + C) + x_j\bigr) - C' = D - C' \;,$$ and then set $R := R'$.

• If ever we obtain $R = 0$ with all of the remaining digits of $X$ equal to zero, we stop (with an exact solution).

In fact, it is not very difficult to modify this algorithm to obtain a procedure for extracting fourth roots, fifth roots, or $n$th roots for any constant $n$ (though the number of parameters you have to carry around with you, and the sizes of the additions you have to carry out, will grow as you work harder and harder to avoid explicitly performing any multiplications).

-

Bravo, but somehow I doubt this is what Mr Nahar has in mind. – Gerry Myerson Feb 26 '12 at 3:43

10 Quite possibly he has in mind something much more suitably mysterious, using extremely clever ideas from ancient Indian mathematical tradition. I doubt that we shall ever learn what his algorithm is, so we must be content with the half-dozen known fast algorithms for computing cube roots. – Niel de Beaudrap Feb 26 '12 at 10:22

To put things in context, I'll first present a straightforward method inspired by the classical evaluation of square roots (briefly: "if we know that $a^2 \le N <(a+1)^2$ then the next digit $d$ will have to satisfy $(10a+d)^2 \le 10^2 N <(10a+d+1)^2$. This means that we want the largest digit $d$ such that $(20a+d)d\le 10^2(N-a^2)$"):

To evaluate the cube root of $N$, let's suppose that $a^3 \le N <(a+1)^3$; then the next digit $d$ will have to satisfy $(10a+d)^3 \le 10^3 N <(10a+d+1)^3$. So we want the largest digit $d$ such that $\left(30a(10a+d)+d^2\right)d \le 10^3(N-a^3)$.

To get a feeling for this method let's evaluate $\sqrt[3]{2}$ starting with $N=2,\ a=1$:

$\begin{array} {r|l} 2.000.000 & 1\\ \hline \\ -1.000.000 & 1.25\\ 1.000.000 & \\ -728.000 & \\ 272.000 & \\ -225.125 & \\ 46.875 & \\ \end{array}$

$a=1$, so the first decimal must satisfy $(30(10+d)+d^2)d \le 1000$, that is, $d=2$. $a=12$, and the second decimal must satisfy $(360(120+d)+d^2)d \le 272000$, so that $d=5$. (Let's notice that this is 'nearly' $360\cdot 120\cdot d \le 272000$, so that $d=5$ or $d=6$: we don't really need to try all the digits!)

I could have continued, but observed that for $d=6$ the evaluation returns $272376$, so that the relative error on $d$ is $\epsilon_1 \approx \frac{376}{272376+360\cdot 6^2}\approx 0.001318$, giving $d\approx 5.9921$ and the solution $\sqrt[3]{2}\approx 1.259921$.

Now let's give a chance to Nirbhay Singh Nahar's method described here. Let's consider $N=2000$; then $x=1\cdot 10=10$. The NAHNO approximate formula is: $$A= \frac 12\left[x+\sqrt{\frac{4N-x^3}{3x}}\right]= \frac 12\left[10+\sqrt{\frac{4\cdot 2000-10^3}{3\cdot 10}}\right]\approx 12.6376$$

Doesn't look very good...
Now let's give Nirbhay Singh Nahar's method, described here, a chance. Consider $N=2000$; then $x=1\cdot 10=10$, and the NAHNO approximate formula is: $$A= \frac 12\left[x+\sqrt{\frac{4N-x^3}{3x}}\right]= \frac 12\left[10+\sqrt{\frac{4\cdot 2000-10^3}{3\cdot 10}}\right]\approx 12.6376$$ Doesn't look very good...

Let's give the formula a second chance by providing a much better value $x=12.5$; then the formula returns $A=12.5992125$, not so far from $\sqrt[3]{2000}= 12.59921049894873\cdots$. But $x=12.5$ is really near the solution, so let's compare this method with Newton's iterations $\displaystyle x'=x-\frac{x^3-N}{3x^2}$:

$x_0=12.5\to x_1=12.6\to x_2=12.599210548\cdots \to x_3=12.5992104989487318\cdots$

EDIT: I missed the 'Precise Value of Cube Root' using the following formula: $$P=A\,\frac{4N-A^3}{3N}$$ (I updated the picture and added this formula as well as the third Newton iteration.)

The approximate NAHNO formula is better than the first Newton iteration but weaker than the second. The precise NAHNO formula is beaten only by the third Newton approximation, as you may see in the picture (the curves are, from top to bottom: first Newton iteration, approximate NAHNO, second Newton iteration, precise NAHNO, third Newton iteration; the NAHNO curves are darker, the vertical scale is logarithmic, and 'lower is better'). The vertical axis shows $\ \log \left| \frac {A(N)}{N^{\frac 13}}-1\right|$ for $N$ in $(1000,50000)$. The vertical lines are values of $N$ such that $2\sqrt[3]{N}$ is an integer (where the initial estimate is nearly the solution).

So, considered as approximate formulas, the NAHNO formulas are rather good and could be made more precise with a better first approximation (especially for $x$ between $1$ and $2.5$, more values should be provided in the table). Avoiding extravagant claims could be an advantage too! :-)
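As a quick numerical companion (an editorial sketch, using the formulas exactly as quoted above), one can reproduce the comparison:

```python
from math import sqrt

N, x = 2000.0, 12.5

A = 0.5 * (x + sqrt((4 * N - x**3) / (3 * x)))   # approximate NAHNO formula
P = A * (4 * N - A**3) / (3 * N)                 # 'precise' NAHNO correction

def newton(x):                                    # one Newton step for x^3 = N
    return x - (x**3 - N) / (3 * x**2)

x1 = newton(x); x2 = newton(x1); x3 = newton(x2)
print(A, P)                  # 12.59921... and the corrected value
print(x1, x2, x3)            # 12.6, 12.599210548..., 12.59921049894873...
print(2000 ** (1 / 3))       # 12.599210498948732
```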
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.931533932685852, "perplexity_flag": "middle"}
http://cstheory.stackexchange.com/questions/14349/motivation-for-developing-shortest-path-simplex-algorithms
# Motivation for Developing Shortest Path Simplex Algorithms

I'm reading Efficient Shortest Path Simplex Algorithms by Donald Goldfarb, Jianxiu Hao and Shen-Roan Kai, who considered "the specialization of the primal simplex algorithm to the problem of finding a tree of directed shortest paths from a given node to all other nodes in a network of n nodes or finding a directed cycle of negative length. Two efficient variants of this shortest path simplex algorithm are analyzed and shown to require at most $(n − 1)(n − 2)/2$ pivots and $O(n^3)$ time."

I'm trying to find the motivation for this article and wonder: isn't the Bellman-Ford algorithm good enough? It works in $O(nm)$ time, which is good for the type of graph the above problem deals with.

## 1 Answer

A major open problem in mathematical programming is designing a strongly polynomial time linear programming algorithm. A related problem is whether any variant of the simplex algorithm runs in strongly polynomial time. It makes sense to first prove strongly polynomial time bounds for variants of simplex applied to problems for which we already know strongly polynomial time algorithms exist.
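For reference, here is a minimal sketch of Bellman-Ford (an editorial illustration, not from the paper), the $O(nm)$ baseline the question cites; like the simplex variants in the paper, it either produces the shortest-path distances or detects a directed cycle of negative length:

```python
def bellman_ford(n, edges, source):
    """n nodes labeled 0..n-1, edges given as (u, v, w) triples; O(nm) overall."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                 # n-1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one extra pass finds negative cycles
        if dist[u] + w < dist[v]:
            return None                    # reachable negative-length cycle
    return dist
```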
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171903729438782, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/05/23/tensor-products-and-bases/
# The Unapologetic Mathematician

## Tensor Products and Bases

Since we're looking at vector spaces, which are special kinds of modules, we know that $\mathbf{Vec}(\mathbb{F})$ has a tensor product structure. Let's see what this means when we pick bases.

First off, let's remember what the tensor product of two vector spaces $U$ and $V$ is. It's a new vector space $U\otimes V$ and a bilinear (linear in each of two variables separately) function $B:U\times V\rightarrow U\otimes V$ satisfying a certain universal property. Specifically, if $F:U\times V\rightarrow W$ is any bilinear function, it must factor uniquely through $B$ as $F=\bar{F}\circ B$. The catch here is that when we say "linear" and "bilinear" we mean that the functions preserve both addition and scalar multiplication. As with any other universal property, such a tensor product will be uniquely defined up to isomorphism.

So let's take finite-dimensional vector spaces $U$ and $V$, and bases $\left\{e_i\right\}$ of $U$ and $\left\{f_j\right\}$ of $V$. I say that the vector space with basis $\left\{e_i\otimes f_j\right\}$, together with the bilinear function $B(e_i,f_j)=e_i\otimes f_j$, is a tensor product. Here the expression $e_i\otimes f_j$ is just a name for a basis element of the new vector space. Such elements are indexed by the set of pairs $(i,j)$, where $i$ indexes a basis for $U$ and $j$ indexes a basis for $V$.

First off, what do I mean by the bilinear function $B(e_i,f_j)=e_i\otimes f_j$? Just as for linear functions, we can define bilinear functions by defining them on bases. That is, if we have $u=u^ie_i\in U$ and $v=v^jf_j\in V$, we get the vector

$B(u,v)=B(u^ie_i,v^jf_j)=u^iv^jB(e_i,f_j)=u^iv^je_i\otimes f_j$

in our new vector space, with coefficients $u^iv^j$.

So let's take a bilinear function $F$ and define a linear function $\bar{F}$ by setting

$\bar{F}(e_i\otimes f_j)=F(e_i,f_j)$

We can easily check that $F$ does indeed factor as desired, since

$\bar{F}(B(e_i,f_j))=\bar{F}(e_i\otimes f_j)=F(e_i,f_j)$

so $F=\bar{F}\circ B$ on basis elements. By bilinearity, they must agree for all pairs $(u,v)$. It should also be clear that we can't define $\bar{F}$ any other way and hope to satisfy this equation, so the factorization is unique.

Thus if we have bases $\left\{e_i\right\}$ of $U$ and $\left\{f_j\right\}$ of $V$, we immediately get a basis $\left\{e_i\otimes f_j\right\}$ of $U\otimes V$. As a side note, we immediately see that the dimension of the tensor product of two vector spaces is the product of their dimensions.

Posted by John Armstrong | Algebra, Linear Algebra
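As a small numerical illustration of the last point (an editorial addition, not part of the post), the Kronecker product realizes the basis elements $e_i\otimes f_j$ as concrete vectors, and the dimension count comes out as expected:

```python
import numpy as np

e = np.eye(3)   # basis of U, dim U = 3
f = np.eye(2)   # basis of V, dim V = 2

# the 3 * 2 = 6 vectors e_i (x) f_j form a basis of a 6-dimensional space
basis = np.array([np.kron(e[i], f[j]) for i in range(3) for j in range(2)])
print(basis.shape)                       # (6, 6)
print(np.linalg.matrix_rank(basis))      # 6: linearly independent

# bilinearity: B(u, v) = kron(u, v) has coefficients u^i v^j
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
print(np.allclose(np.kron(u, v), np.outer(u, v).ravel()))  # True
```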
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169302582740784, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/131219-mathematical-induction.html
# Thread:

1. ## Mathematical Induction

Prove by mathematical induction that 7^(2n) - 48n - 1 is a multiple of 2304.

I am OK with the first and second steps, but I am confused with the third, i.e. when substituting n = k + 1.

Thanks a lot

2. Originally Posted by yobacul (as above)

Hint: Note that $7^{2(k + 1)} - 48(k + 1) - 1 = 7^{2k + 2} - 48k - 48 - 1 = 7^{2k + 2} ~\underbrace{{\color{red} - 7^2 \cdot 48k + 7^2 \cdot 48k}}_{\text{this is zero}}~ - 48k - 7^2$

3. Originally Posted by yobacul (as above)

hi yobacul,

F(k): $7^{2k}-48k-1$ is a multiple of 2304.
F(k+1): $7^{2(k+1)}-48(k+1)-1$ is a multiple of 2304 if the kth one is.
Does the hypothesis F(k) cause this to be true?

$7^{2k+2}-48k-48-1=7^2\left(7^{2k}\right)-48k-49$
$=7^2\left(7^{2k}\right)-48k-7^2=7^2\left(7^{2k}-1\right)-48k$

Now express $-48k$ as a multiple of $7^2$. To do this we need to subtract another $48(48k)$ {and therefore also add that amount}:

$7^2\left(7^{2k}-1\right)-(48k)-48(48k)+48(48k)$
$=7^2\left(7^{2k}-1\right)-49(48k)+48(48k)$
$=7^2\left(7^{2k}-48k-1\right)+48^2k$

The final term is $48^2k=2304k$; therefore, the link from term to term is established. Hence, if F(k) is valid, F(k+1) also is.
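A quick numerical sanity check (not a substitute for the proof) of both the claim and the identity established in post 3, namely F(k+1) = 49 F(k) + 2304 k:

```python
def F(k):
    return 7**(2 * k) - 48 * k - 1

for k in range(1, 20):
    assert F(k) % 2304 == 0                    # the claim itself
    assert F(k + 1) == 49 * F(k) + 2304 * k    # the inductive identity
print("verified for k = 1, ..., 19")
```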
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8911421895027161, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/210922-compactification-print.html
Compactification

• January 7th 2013, 10:01 AM
Siron

Hi,

Let $X$ be a non-compact topological space and define $X^{*}=X \cup \{\infty\}$ with the topology
$\mathcal{T}^{*}=\{U \subset X^{*}\mid X \cap U \in \mathcal{T} \ \mbox{and} \ (\infty \in U \Rightarrow X \setminus U \ \mbox{is compact and closed})\}.$
How can I prove that $X^{*}$ is compact? I know the proof with covers and subcovers, but I was wondering how I can prove it using only filters (that is, using the fact that $X^{*}$ is compact if and only if every ultrafilter converges).

Thanks in advance!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.90438312292099, "perplexity_flag": "middle"}
http://twofoldgaze.wordpress.com/category/data-visualization/
Sharpen the twofold gaze, perception and sight.

# Category Archives: Data Visualization

## Mathematics Visualization

Posted on April 13, 2011

First, I should make long-overdue mention of a visualization done by Rachel Binx using some of the data I had for tagging of papers as a way of figuring out the relationships between different mathematics fields. I've never met her, but she is apparently "a feisty young woman operating out of the bay area." She was kind enough to let me know about it last year. So now, I'm letting you guys know about it (all three of you). The visualization is interactive and you can check it out by clicking here.

It can be hard to keep up with a blog, especially when you have a lot of other things going on ... for instance, being the Editor-in-chief of a second blog. So, I will try not to neglect this one. I will also, probably, do something cheap on occasion, like link to posts on the other blog.

Posted in Data Visualization

## A Map of Math?

Posted on March 13, 2010

There has been some recent discussion at Reddit of an attempt I made to turn some data from arXiv.org into a visualization of the relationships between research areas in mathematics. I was never completely pleased with the dataset. Continue reading →

Posted in Data Visualization, Fun, Mathematics

## Planarity

Posted on November 29, 2009

I recently discovered the game Planarity. The goal of the game is to take a graph and put it in a form where none of the edges cross, that is to say, to prove that it is planar. If you don't know what I'm saying, or you think it sounds boring, or you think it sounds complicated, go play it anyway. It's lots of fun.

Posted in Data Visualization, Fun, Mathematics Tagged graph theory, Planarity

## The Monty Hall Problem

Posted on November 7, 2009

When I was younger, I would often play a game with myself: He knows. I know he knows. He knows I know he knows. I pondered these sentences, turning them over slowly in my mind, thinking that eventually I would have a feel for them. Unsurprisingly, for the mind of a little boy, the examples that came to mind were adversarial. He tried to out-know me, as I tried to out-know him. So, I could see this as a game of countering strategies. When I felt I had an intuition built up, I tried for more: I know he knows I know he knows. He knows I know he knows I know he knows! I know he knows I know he knows I know he knows!! He knows I know he knows I know he knows I know he knows!!! I know he knows I know he knows I know he knows I know he knows!!!! There was always a point at which the sentences attained true meaninglessness. It was at that point that I gave things a rest until next time. Continue reading →

Posted in Data Visualization, Mathematics Tagged math, Mathematics, Monty Hall problem, probability, visualisation, visualization

## An Attempt at Mapping Mathematics

Posted on November 1, 2009

Update: A new version of this diagram can be found at the bottom of the article.

On October 25th, Terry Tao wrote about the idea of creating a display visualizing the applications of mathematics. Soon after, there was a lot of activity on MathOverflow, collecting possible uses of different fields of mathematics. The original idea came from a visualization which illustrated the uses of the elements of the Periodic Table.

I started writing this blog because of an overlap which I perceived between Tufte's view of information display and mathematical thought.
Tufte's insight was mainly that effective information display requires an exacting form of clarity of expression, where what is extraneous is identified as deleterious and is therefore stripped away. In this view there is little room for ornamentation for the sake of ornamentation. Whether in computer code or in good writing, mathematical or otherwise, there is always something to be said for concise and clear expression. Continue reading →

Posted in Data Visualization, Mathematics Tagged classification, coding, data, fotosketcher, knowledge work, math, Mathematica, Mathematics, models, science, thoughts, visualisation, visualization

## Information and Display

Posted on May 5, 2009

There was a point not too long ago when people were really excited about the display of information. This excitement was epitomized for me by Edward Tufte's work in information display. It brought home to me a unity in the several ways of transcribing information from the way we understand it to the page. When I write, read, program, draw, manipulate equations or design graphical displays, there is some unity in what needs to be done. I should have clarity, of thought and of expression; and this requires both a clarity of understanding and a facility with the tools at hand.

I am being abstract, so I'll just say what brought on this discussion: good displays of information don't always have to be flashy; sometimes a list will do. Terry Tao has a really cool breakdown of several numbers that are relevant to the current discussion of the economic climate. I think it would have taken deliberate effort on my part to have come away without understanding the relative proportions of the US budget and the numbers concerning the housing crisis better. The list succeeds by bridging the key difficulty in understanding the financial numbers: that they are simply too large. It surmounts this difficulty by analogizing what we understand poorly with what we understand well.

This all seems very straightforward and basic, and yet we have probably each seen literally millions of examples of graphics, advertisements, textbook illustrations, or journal article figures where the designer did not accomplish this. He or she was not sure what the key difficulty was and failed to make the information as understandable as it could have been. I am sure I am guilty of this myself. This is because displaying information is not a straightforward task and there is often room for improvement.

Posted in Data Visualization, Nontechnical

## Matrix multiplication

Posted on December 20, 2007

Have you ever wondered what you are doing when you multiply a vector by a matrix? There is a lot more to this idea than I will discuss here. However, I wanted to share this image, which gives quite a bit of geometric intuition. The blue area represents a set of randomly generated points which lie within a circle of radius 1 centered at the origin. I randomly generated a 2 by 2 matrix and multiplied the coordinates of all of the points shown in blue by that matrix. The result was the coordinates of all the red points shown in the top left of the illustration. If I again multiply all the red points in the previous image by the same matrix, I get the red points in the top center image. If I repeat this process a few times I get the above illustration. The orientation of the ellipse and the change in scale are related to the eigenvectors and eigenvalues of the matrix I generated.

When you multiply the points in a unit circle by a 2 by 2 matrix, the result is always an ellipse (possibly a circle), where the lengths of the semi-major and semi-minor axes are the singular values of the matrix (not, in general, its eigenvalues). This idea generalizes to any m by n matrix, which maps the unit sphere in n-dimensional space to an ellipsoid in m-dimensional space whose semi-axis lengths are the square roots of the eigenvalues of the matrix multiplied by its transpose, that is, the singular values of the matrix.

Posted in Data Visualization
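The claim about the unit circle can be checked numerically; here is a small NumPy sketch (an editorial illustration, not the post's original code) comparing the image of the unit circle with the singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))                  # a random 2x2 matrix

theta = np.linspace(0, 2 * np.pi, 2000)
circle = np.vstack([np.cos(theta), np.sin(theta)])
image = A @ circle                               # the image is an ellipse

radii = np.linalg.norm(image, axis=0)
print(radii.max(), radii.min())                  # semi-axes of the ellipse
print(np.linalg.svd(A, compute_uv=False))        # the singular values agree
```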
## Which Root?

Posted on December 20, 2007

I thought it might be interesting to see the previous image coloured according to the root to which the iteration eventually converges. Here, for simplicity, I concentrate on a single root whose location is circled. All points which converge to this root are colored red, with the intensity of the color representing the same information as in previous diagrams. Points which converge to all other roots are colored blue. I also have a version of this image where the root is not identified, which I find more aesthetically appealing. Continue reading →

Posted in Data Visualization

## Another Chaotic search

Posted on December 19, 2007

Newton's method is a simpler way of finding a root. The formula is

$x_{n+1}=x_n - \frac{f(x_n)}{f^{'}(x_n)}$

However, the behavior of the algorithm in finding roots is anything but simple. Above, I display the behavior of this algorithm on the complex region where the real part and the imaginary part of each complex number are between -1 and 1. The coloring scheme is identical to the previous post.

Posted in Data Visualization

## A Chaotic search

Posted on December 10, 2007

As consumers of mathematical and computational intellectual capital, we often find ourselves implementing algorithms which we do not understand. With massive computer resources, the number of methods discovered which seem to work but whose mechanisms remain shrouded in mystery is steadily growing. The behavior of some algorithms can astound us with their beautiful perplexity. Below, I display the number of steps (dark for small numbers and light for large numbers) to convergence of Halley's method being used to solve

$\displaystyle x^7+x^2-1=0$

which has 7 solutions. The areas of nonuniform color are clearly chaotic. For each point in the image, I implemented the following iteration

${x_{n+1} = x_n-\frac{F(x_n)}{F^{'}(x_n)-\frac{F^{''}(x_n)F(x_n)}{2F^{'}(x_n)}}}$

which terminates when the solution is within machine precision of a root of the equation. The number of steps n is then stored. The numbers on which I executed the algorithm are in the region where the real part and imaginary part of the number are between -1 and 1. Continue reading →

Posted in Data Visualization
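A minimal sketch of the experiment (an editorial reconstruction, not the author's original code): count Halley iterations to convergence for $x^7+x^2-1$ on a grid over the square with real and imaginary parts between -1 and 1.

```python
import numpy as np

def halley_steps(z, max_iter=60, tol=1e-12):
    for n in range(max_iter):
        f = z**7 + z**2 - 1
        if abs(f) < tol:
            return n
        fp = 7 * z**6 + 2 * z
        if fp == 0:                              # avoid a rare division by zero
            return max_iter
        z = z - f / (fp - (42 * z**5 + 2) * f / (2 * fp))  # Halley's iteration
    return max_iter

N = 200                                  # modest resolution; pure Python is slow
xs = np.linspace(-1, 1, N)
steps = np.array([[halley_steps(complex(x, y)) for x in xs] for y in xs])
# rendering 'steps' as an image (dark = few steps) reproduces the figure
```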
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9438283443450928, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/81910/list
# When does the blow-up of $CP^2$ at N points embed in $CP^4$?

Write $X_N$ for this blow-up. Place the N points in 'general position' as needed.

Then $X_6$ embeds in $CP^3$ as a smooth cubic surface. (See, e.g., Griffiths and Harris.) But there is no other $N$ (except $N=0$) for which $X_N$ embeds in $CP^3$. (Proof: the topology of the blow-up disagrees with that of a smooth surface of degree $d$ in $CP^3$; see Gompf-Stipsicz, p. 21.) On the other hand, $X_N$ embeds in $CP^5$ simply because any smooth algebraic surface $X$ so embeds. (Harris, 'Algebraic Geometry, a first course', p. 193.)

Embarrassingly, I don't even know the answer for $N=1$, where $X_1$ is the first Hirzebruch surface! (I'm betting it does embed.)

Motivation: This question began in an attempt to better understand the 27 lines on the cubic and my initial surprise at how the construction described in GH of $X_6$ yielded a smooth surface in $CP^3$, and how all such surfaces arise through that construction by varying the 6 points. I am hoping answers might help me understand the moduli of blow-ups as I move the N points about the plane, and orient me as a novice to algebraic surfaces.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276527166366577, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/92383/list
These answers look a bit complicated, so maybe there is something obviously wrong with the following argument:

Every embedded two-sphere $\Sigma \subset S^2 \times {\mathbb R}^2$ is displaceable: there is a one-parameter group (or family) of homeomorphisms $\varphi_t$ from $S^2 \times {\mathbb R}^2$ to itself such that $\varphi_T (\Sigma)$ is disjoint from $\Sigma$ for some (large) $T$. Indeed, just translate in the second variable far enough. However, it is impossible to displace the zero section of $TS^2$ because its self-intersection number is $2$.

I read somewhere that to distinguish the homeomorphism type of homotopic spaces one could look at the homotopy invariants of configuration spaces. I wonder: is the homotopy type of the (two-point) configuration space $C_2(S^2 \times {\mathbb R}^2)$ different from that of $C_2(TS^2)$?

Edit. It turns out that the answer to the preceding question is yes, as is nicely explained here by Paolo Salvatore. This provides yet another way of proving that $S^2 \times {\mathbb R}^2$ and $TS^2$ are not homeomorphic.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486321806907654, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/01/25/coproduct-root-systems/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician

## Coproduct Root Systems

We should also note that the category of root systems has binary (and thus finite) coproducts. They both start the same way: given root systems $\Phi$ and $\Phi'$ in inner-product spaces $V$ and $V'$, we take the direct sum $V\oplus V'$ of the vector spaces, which makes vectors from each vector space orthogonal to vectors from the other one.

The coproduct $\Phi\amalg\Phi'$ root system consists of the vectors of the form $(\alpha,0)$ for $\alpha\in\Phi$ and $(0,\alpha')$ for $\alpha'\in\Phi'$. Indeed, this collection is finite, spans $V\oplus V'$, and does not contain $(0,0)$. The only multiples of any given vector in $\Phi\amalg\Phi'$ are that vector and its negative. The reflection $\sigma_{(\alpha,0)}$ sends vectors coming from $\Phi$ to each other, and leaves vectors coming from $\Phi'$ fixed, and similarly for the reflection $\sigma_{(0,\alpha')}$. Finally,

$\displaystyle\begin{aligned}(\beta,0)\rtimes(\alpha,0)=\beta\rtimes\alpha&\in\mathbb{Z}\\(0,\beta')\rtimes(0,\alpha')=\beta'\rtimes\alpha'&\in\mathbb{Z}\\(\beta,0)\rtimes(0,\alpha')=(0,\beta')\rtimes(\alpha,0)=0&\in\mathbb{Z}\end{aligned}$

All this goes to show that $\Phi\amalg\Phi'$ actually is a root system. As a set, it's the disjoint union of the two sets of roots. As a coproduct, we do have the inclusion morphisms $\iota_1:\Phi\rightarrow\Phi\amalg\Phi'$ and $\iota_2:\Phi'\rightarrow\Phi\amalg\Phi'$, which are inherited from the direct sum of $V$ and $V'$. This satisfies the universal condition of a coproduct, since the direct sum does. Indeed, if $\Psi\subseteq U$ is another root system, and if $f:V\rightarrow U$ and $f':V'\rightarrow U$ are linear transformations sending $\Phi$ and $\Phi'$ into $\Psi$, respectively, then $(a,b)\mapsto f(a)+f'(b)$ sends $\Phi\amalg\Phi'$ into $\Psi$, and is the unique such transformation compatible with the inclusions.

Interestingly, the Weyl group of the coproduct is the product $\mathcal{W}\times\mathcal{W}'$ of the Weyl groups. Indeed, for every generator $\sigma_\alpha$ of $\mathcal{W}$ we get a generator $\sigma_{(\alpha,0)}$, and for every generator $\sigma_{\alpha'}$ of $\mathcal{W}'$ we get a generator $\sigma_{(0,\alpha')}$. And the two families of generators commute with each other, because each one only acts on the one summand.

On the other hand, there are no product root systems in general! There is only one natural candidate for $\Phi\times\Phi'$ that would be compatible with the projections $\pi_1:V\oplus V'\rightarrow V$ and $\pi_2:V\oplus V'\rightarrow V'$. It's made up of the points $(\alpha,\alpha')$ for $\alpha\in\Phi$ and $\alpha'\in\Phi'$. But now we must consider how the projections interact with reflections, and it isn't very pretty. The projections should act as intertwinors. Specifically, we should have

$\displaystyle\pi_1(\sigma_{(\alpha,\alpha')}(\beta,\beta'))=\sigma_{\pi_1(\alpha,\alpha')}(\pi_1(\beta,\beta'))=\sigma_\alpha(\beta)$

and similarly for the other projection. In other words

$\displaystyle\sigma_{(\alpha,\alpha')}(\beta,\beta')=(\sigma_\alpha(\beta),\sigma_{\alpha'}(\beta'))$

But this isn't a reflection! Indeed, each reflection has determinant $-1$, and this is the composition of two reflections (one for each component), so it has determinant $1$. Thus it cannot be a reflection, and everything comes crashing down.

That all said, the Weyl group of the coproduct root system is the product of the two Weyl groups, and many people are mostly concerned with the Weyl group of symmetries anyway.
And besides, the direct sum is just as much a product as it is a coproduct. And so people will often write $\Phi\times\Phi'$ even though it's really not a product. I won't write it like that here, but be warned that that notation is out there, lurking.

## Comments »

1. Doesn't the determinant argument leave open the possibility of (2n+1)-way products? A more general method might be to look at the dimensions of the various eigenspaces.

Comment by Chad | January 25, 2010 | Reply

2. Yes, Chad, the argument doesn't cover odd-arity products, but a more technical argument like you suggest can cover those cases. Even without going down that road, though, it should be clear that we definitely don't have all finite products, and so whatever remains (by your proposal: nothing) is far from the natural notion that coproducts are. But it still doesn't explain the ubiquitous use of $\times$ to denote coproduct root systems, even or odd. Indeed, it's easy to find lists of two-dimensional root systems that include (if not start with) "$A_1\times A_1$".

Comment by | January 25, 2010 | Reply

• Whoever said coproducts had to look like addition and products had to look like multiplication? In categories that behave more like $\text{Set}^{op}$ than $\text{Set}$, such as $\text{CRing}$, exactly the opposite is true.

Comment by | January 25, 2010 | Reply

3. Qiaochu, of course they don't have to look like that. But the use is so widespread that when one sees $\times$ one expects products. More substantively, nobody really says what they mean by $\times$ in these contexts. It's just taken as obvious. And they definitely don't talk about coproducts (which they should!)

Comment by | January 25, 2010 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 56, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9501542448997498, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/79644-exponential-inverse.html
# Thread:

1. ## Exponential Inverse

OK. I have to show that f(z) = exp(1 - 2iz) does not have an inverse function. I am trying to prove that f(z) is not one-to-one, but am having a hard time coming up with two values that, when inserted in f(z), result in the same answer. Any help would be great.

Thanks, Bex

2. Setting
$s=f(z)=e^{1-2\cdot i\cdot z}$
we derive
$\ln s= 1-2\cdot i\cdot z \rightarrow z=\frac {1-\ln s}{2\cdot i}$
which is the requested inverse function.
Kind regards, $\chi$ $\sigma$

3. Thanks, but you have shown the inverse of f(z); the question, however, asks me to prove that f(z) does NOT have an inverse.

4. Originally Posted by bex23 (as above)

$f(z) = f(z + n \pi)$ where $n$ is an integer.

5. Originally Posted by mr fantastic
$f(z) = f(z + n \pi)$ where $n$ is an integer.

So this would give $f(z+n\pi)=e^{1-2iz-2n\pi i}=f(z)$, and because of the different values of $n$, f(z) is never one-to-one and thus has no inverse.
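A quick numerical check of the periodicity used above (an editorial addition): f(z) = exp(1 - 2iz) satisfies f(z + nπ) = f(z), so f cannot be one-to-one.

```python
import cmath

def f(z):
    return cmath.exp(1 - 2j * z)

z = 0.3 + 0.7j
for n in range(1, 4):
    print(abs(f(z + n * cmath.pi) - f(z)))   # all ~ 0 up to rounding
```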
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255234599113464, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/953/analogue-to-covering-space-for-higher-homotopy-groups/1508
## Analogue to covering space for higher homotopy groups?

The connection between the fundamental group and covering spaces is quite fundamental. Is there any analogue for higher homotopy groups? It doesn't make sense to me that one could make a branched cover over a set of codimension 3, since, I guess, my intuition is all about 1-D loops, and not spheres.

## 5 Answers

There's certainly a homotopy-theoretic analogue. A universal cover of a connected space $X$ is (up to homotopy) a simply connected space $X'$ and a map $X' \to X$ which is an isomorphism on $\pi_n$ for $n \geq 2$. We could next ask for a 2-connected cover $X''$ of $X'$: a space $X''$ with $\pi_k X'' = 0$ for $k \leq 2$ and a map $X'' \to X'$ which is an isomorphism on $\pi_n$ for $n \geq 3$. The homotopy fiber of such a map will have a single nonzero homotopy group, in dimension 1; it will be a $K(\pi_2 X, 1)$. (For the universal cover the fiber was the discrete space $\pi_1 X = K(\pi_1 X, 0)$.) An example is the Hopf fibration $K(\mathbb{Z}, 1) = S^1 \to S^3 \to S^2$. Geometrically it's harder to see what's going on with the 2-connected cover than with the universal cover, because fibrations with fiber of the form $K(G, 1)$ are harder to describe than fibrations with discrete fibers (covering spaces).

- Thanks! I hadn't thought of it in that way before, but it makes perfect sense. – jc Oct 17 2009 at 23:51

Just like there is a universal cover of every space, there is a natural $n$-connected space $X_n$ that maps to any space $X$. To construct this space, you can add cells of dimension $n+2$ and higher to $X$ to get a space $Y$ together with a map $X \to Y$ which is an isomorphism on $\pi_i$ for $i \leq n$, but such that $\pi_i(Y)=0$ for $i>n$. The homotopy fiber $X_n \to X$ of this map is then the "$n$-connected cover" of $X$; $X_n$ is $n$-connected but has the same homotopy groups as $X$ above $n$, as can easily be seen from the long exact sequence of the fibration. Details of this, as well as a proof of uniqueness of the $n$-connected cover, are in Hatcher starting on page 410.

More generally, if you started with an $(n-1)$-connected space, you could both kill the homotopy groups of $X$ above $n$ and kill a subgroup of $\pi_n(X)$, and then the homotopy fiber would be an "$n$-cover" of $X$ corresponding to that subgroup of $\pi_n(X)$.

- There is not a universal cover for every space. One needs conditions, such as being semi-locally simply connected. – AH Oct 18 2009 at 0:21
- I think Eric is probably using the word "space" the way I use it, namely, a cofibrant-fibrant object of whatever model category you like which is Quillen equivalent to spaces. – Reid Barton Oct 21 2009 at 0:32
- Or the way most people use it, namely "a manifold or CW complex or some similarly nice thing". =) – Tom Church Oct 21 2009 at 2:16

Rather than just taking homotopy groups for a single dimension, you can also think about the kind of algebraic entity that detects homotopy in two consecutive dimensions, or indeed any number of consecutive dimensions. In this discussion, following on from an earlier post, we're looking at the fundamental 2-group which picks up homotopy in dimensions 1 and 2.

The previous answers gave analogues to the universal covering space and looked at the homotopy groups of these analogues.
However, the analogy to the $n=1$ case is not complete: while $\pi_1(X)$ classifies the automorphisms over $X$ of the universal covering space, the $\pi_n(X)$ don't classify the automorphisms over $X$ of these $n$-connected analogues. In fact, the higher $\pi_n(X)$ don't seem to classify anything other than homotopy classes of maps $S^n \to X$.

This is one of the motivations for using $n$-groupoids as invariants of spaces; see the discussion here, right before the references: http://ncatlab.org/nlab/show/fundamental+group+of+a+topos or, starting on page 17, with a nice story on the invention of the higher homotopy groups and the desire for a non-abelian homology: http://www.intlpress.com/hha/v1/n1/a1/

- $\pi_n(X)$ will classify something when $X$ is $(n-1)$-connected, though (after all there's no other information present in the $n$-groupoid). – Reid Barton Oct 21 2009 at 1:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278342127799988, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/65729?sort=newest
## What are “perfectoid spaces”?

This talk is about a theory of "perfectoid spaces", which "compares objects in characteristic p with objects in characteristic 0". What are those spaces, and where can one read about them?

Edit: A bit more info can be found in Peter Scholze's seminar description and in Bhargav Bhatt's.
Edit: Peter Scholze posted yesterday this beautiful overview on the arxiv.
Edit: Peter Scholze posted today this new survey on the arxiv.

- They were defined by the speaker of this talk in his recent work. So I guess the best place to start is math.uni-bonn.de/people/scholze . – Chris Wuthrich May 22 2011 at 20:45
- I found nothing to read there. – Thomas Riepe May 22 2011 at 20:50
- It might be the case that there is nowhere that one can read about them yet. I learnt about them from a talk that Scholze gave in Nottingham: they are, vaguely speaking, "perfections of affinoid algebras", that is, take an algebra like $\mathbf{Q}_p\langle T\rangle$ and then adjoin all $p^n$th roots of $T$. But it's a bit more delicate than this really. If you really want to know then just email Scholze. Or wait patiently until the author writes something! – Kevin Buzzard May 22 2011 at 20:58
- @Riepe: thanks for asking the question! it elicited Scholze's response! – SGP May 31 2011 at 22:29

## 2 Answers

Update: The lecture notes of the CAGA lecture series on perfectoid spaces at the IHES can now be found online, cf. http://www.ihes.fr/~abbes/CAGA/scholze.html.

It seems that it's my job to answer this question, so let me just briefly explain everything. A more detailed account will be online soon.

We start with a complete non-archimedean field $K$ of mixed characteristic $(0,p)$ (i.e., $K$ has characteristic $0$, but its residue field has characteristic $p$), equipped with a non-discrete valuation of rank $1$, such that (and this is the crucial condition) Frobenius is surjective on $K^+/p$, where $K^+\subset K$ is the subring of elements of norm $\leq 1$. Some authors, e.g. Gabber-Ramero in their book on Almost ring theory, call such fields deeply ramified (they do not require that they are complete, anyway). Just think of $K$ as the completion of the field $\mathbb{Q}_p(p^{1/p^\infty})$, or alternatively as the completion of the field $\mathbb{Q}_p(\mu_{p^\infty})$.

In this situation, one can form the field $K^\prime$, given as the fraction field of $K^{\prime +} = \lim_{\leftarrow} K^+/p$, where the transition maps are given by Frobenius. Concretely, in the first example it is given by the completion of $\mathbb{F}_p((t^{1/p^\infty}))$, where $t$ is the element $(p,p^{\frac 1p},p^{\frac 1{p^2}},\ldots)$ in $K^{\prime +}=\mathrm{lim}_{\leftarrow} K^+/p$.

Now we have the following theorem, due to Fontaine-Wintenberger in the examples I gave, and deduced from the book of Gabber-Ramero in general:

Theorem: There is a canonical isomorphism of absolute Galois groups $G_K\cong G_{K^\prime}$.

At this point, it may be instructive to explain this theorem a little, in the example where $K$ is the completion of $\mathbb{Q}_p(p^{1/p^\infty})$ (this assumption will be made whenever examples are discussed). It says that there is a natural equivalence of categories between the category of finite extensions $L$ of $K$ and the category of finite extensions $L^\prime$ of $K^\prime$. Let us give an example, say $L^\prime$ is given by adjoining a root of $X^2 - 7t X + t^5$.
Basically, the idea is that one replaces $t$ by $p$, so that one would like to define $L$ as the field given by adjoining a root of $X^2 - 7p X + p^5$. However, this is obviously not well-defined: If $p=3$, then $X^2 - 7t X + t^5=X^2 - t X + t^5$, but $X^2 - 7p X + p^5\neq X^2 - p X + p^5$, and one will not expect that the fields given by adjoining roots of these different polynomials are the same. However, there is the following way out: $L^\prime$ can be defined as the splitting field of $X^2 - 7t^{1/p^n} X + t^{5/p^n}$ for all $n\geq 0$, and if we choose $n$ very large, then one can see that the fields $L_n$ given as the splitting field of $X^2 - 7p^{1/p^n} X + p^{5/p^n}$ will stabilize as $n\rightarrow \infty$; this is the desired field $L$. Basically, the point is that the discriminant of the polynomials considered becomes very small, and the difference between any two different choices one might make when replacing $t$ by $p$ become comparably small. This argument can be made precise by using Faltings's almost mathematics, as developed systematically by Gabber-Ramero. Consider $K\supset K^+\supset \mathfrak{m}$, where $\mathfrak{m}$ is the maximal ideal; in the example, it is the one generated by all $p^{1/p^n}$, and it satisfies $\mathfrak{m}^2 = \mathfrak{m}$, because the valuation on $K$ is non-discrete. We have a sequence of localization functors: $K^+$-mod $\rightarrow$ $K^+$-mod / $\mathfrak{m}$-torsion $\rightarrow$ $K^+$-mod / $p$-power torsion. The last category is equivalent to $K$-mod, and the composition of the two functors is like taking the generic fibre of an object with an integral structure. In this sense, the category in the middle can be seen as a slightly generic fibre, sitting strictly between an integral structure and an object over the generic fibre. Moreover, an object like $K^+/p$ is nonzero in this middle category, so one can talk about torsion objects, neglecting only very small objects. The official name for this middle category is $K^{+a}$-mod: almost $K^+$-modules. This category is an abelian tensor category, and hence one can define in the usual way the notion of a $K^{+a}$-algebra (= almost $K^+$-algebra), etc. . With some work, one also has notions of almost finitely presented modules and (almost) etale maps. In the following, we will often need the notion of an almost finitely presented etale map, which is the almost analogue of a finite etale cover. Theorem (Tate, Gabber-Ramero): If $L/K$ finite extension, then $L^+/K^+$ is almost finitely presented etale. Similarly, if $L^\prime/K^\prime$ finite, then $L^{\prime +}/K^{\prime +}$ is almost finitely presented etale. Here, $L^+$ is the valuation subring of $L$. As an example, assume $p\neq 2$ and $L=K(p^\frac 12)$. For convenience, we look at the situation at a finite level, so let $K_n=\mathbb{Q}_p(p^{1/p^n})$ and $L_n=K_n(p^\frac 12)$. Then $L_n^+ = K_n^+[X] / (X^2 - p^{1/p^n})$. To check whether this is etale, look at $f(X)= X^2 - p^{1/p^n}$ and look at the ideal generated by $f$ and its derivative $f^\prime$. This contains $p^{1/p^n}$, so in some sense $L_n^+$ is etale over $K_n^+$ up to $p^{1/p^n}$-torsion. Now take the limit as $n\rightarrow \infty$ to see that $L^+$ is almost etale over $K^+$. 
Now we can prove the theorem above: finite etale covers of $K$ = almost finitely presented etale covers of $K^+$ = almost finitely presented etale covers of $K^+/p$ [because (almost) finite etale covers lift uniquely over nilpotents] = almost finitely presented etale covers of $K^{\prime +}/t$ [because $K^+/p = K^{\prime +}/t$, cf. the example] = almost finitely presented etale covers of $K^{\prime +}$ = finite etale covers of $K^\prime$.

After we understand this theory on the base, we want to generalize to the relative situation. Here, let me make the following claim.

Claim: $\mathbb{A}^1_{K^\prime}$ "equals" $\lim_{\leftarrow} \mathbb{A}^1_K$, where the transition maps are the $p$-th power map.

As a first step towards understanding this, let us check it on points. Here the claim says that $K^\prime = \lim_{\leftarrow} K$. In particular, there should be a map $K^\prime\rightarrow K$ given by projection to the last coordinate, which I usually denote $x^\prime\mapsto [x^\prime]$ (because it is related to Teichmüller representatives), and again this can be explained in an example: Say $x^\prime = t^{-1} + 5 + t^3$. Basically, we want to replace $t$ by $p$, but this is not well-defined. But we have just learned that this problem becomes less serious as we take $p$-power roots. So we look at $t^{-1/p^n} + 5 + t^{3/p^n}$, replace $t$ by $p$, get $p^{-1/p^n} + 5 + p^{3/p^n}$, and then we take the $p^n$-th power again, so that the expression has a chance of being independent of $n$. Now, it is in fact not difficult to see that $\lim_{n\rightarrow \infty} (p^{-1/p^n} + 5 + p^{3/p^n})^{p^n}$ exists, and this defines $[x^\prime]\in K$. Then the map $K^\prime\rightarrow \lim_{\leftarrow} K$ is given by $x^\prime\mapsto ([x^\prime],[x^{\prime 1/p}],[x^{\prime 1/p^2}],\ldots)$. In order to prove that this is a bijection, just note that $K^{\prime +} = \lim_{\leftarrow} K^{\prime +}/t^{p^n} = \lim_{\leftarrow} K^{\prime +}/t = \lim_{\leftarrow} K^+/p \leftarrow \lim_{\leftarrow} K^+$. Here, the last map is the obvious projection, and in fact is a bijection, which amounts to the same verification as that the limit above exists. Afterwards, just invert $t$ to get the desired identification.

In fact, the good way of approaching this stuff in general is to use some framework of rigid geometry. In the papers of Kedlaya and Liu, where they are doing extremely related stuff, they choose to work with Berkovich spaces; I favor the language of Huber's adic spaces, as this language is capable of expressing more (e.g., Berkovich only considers rank-$1$-valuations, whereas Huber considers also the valuations of higher rank). In the language of adic spaces, the spaces are actually locally ringed topological spaces (equipped with valuations), and affinoids are open, in contrast to Berkovich's theory, making it easier to glue; there is an analytification functor $X\mapsto X^{\mathrm{ad}}$ from schemes of finite type over $K$ to adic spaces over $K$ (similar to the functor associating to a scheme of finite type over $\mathbb{C}$ a complex-analytic space). Then we have the following theorem:

Theorem: We have a homeomorphism of underlying topological spaces $|(\mathbb{A}^1_{K^\prime})^{\mathrm{ad}}|\cong \lim_{\leftarrow} |(\mathbb{A}^1_K)^{\mathrm{ad}}|$.

At this point, the following question naturally arises: both sides of this homeomorphism are locally ringed topological spaces, so is it possible to compare the structure sheaves?
There is the obvious problem that on the left-hand side we have rings of characteristic $p$, whereas on the right-hand side we have rings of characteristic $0$. How can one possibly pass from one side to the other?

Definition: A perfectoid $K$-algebra is a complete Banach $K$-algebra $R$ such that the set of power-bounded elements $R^\circ\subset R$ is open and bounded and Frobenius induces an isomorphism $R^\circ/p^{\frac 1p}\cong R^\circ/p$.

Similarly, one defines perfectoid $K^\prime$-algebras $R^\prime$, putting a prime everywhere and replacing $p$ by $t$. The last condition is then equivalent to requiring that $R^\prime$ be perfect, whence the name. Examples are $K$, any finite extension $L$ of $K$, and $K\langle T^{1/p^\infty}\rangle$, by which I mean: Take the $p$-adic completion of $K^+[T^{1/p^\infty}]$, and then invert $p$. Recall that in classical rigid geometry, one considers rings like $K\langle T\rangle$, which is interpreted as the ring of convergent power series on the closed disc $|x|\leq 1$. Now in the example of the $\mathbb{A}^1$ above, we take $p$-power roots of the coordinate, so after completion the rings on the inverse limit are in fact perfectoid.

In characteristic $p$, one can pass from usual affinoid algebras to perfectoid algebras by taking the completed perfection; the difference between the two is small, at least as regards topological information on associated spaces: Frobenius is a homeomorphism on topological spaces, and even on etale topoi. [This is why we don't have to take $\lim_\leftarrow \mathbb{A}^1_{K^\prime}$: It does not change the topological spaces. In order to compare structure sheaves, one should however take this inverse limit.]

The really exciting theorem is the following, which I call the tilting equivalence:

Theorem: The category of perfectoid $K$-algebras and the category of perfectoid $K^\prime$-algebras are equivalent. The functor is given by $R^\prime = (\lim_{\leftarrow} R^\circ/p)[t^{-1}]$.

Again, one also has $R^\prime = \lim_{\leftarrow} R$, where the transition maps are the $p$-th power map, giving also the map $R^\prime\rightarrow R$, $f^\prime\mapsto [f^\prime]$.

There are two different proofs for this. One is to write down the inverse functor, given by $R^\prime\mapsto W(R^{\prime \circ})\otimes_{W(K^{\prime +})} K$, using the map $\theta: W(K^{\prime +})\rightarrow K$ known from $p$-adic Hodge theory. The other proof is similar to what we did above for finite etale covers: perfectoid $K$-algebras = almost $K^{+}$-algebras $A$ such that $A$ is flat, $p$-adically complete and Frobenius induces an isomorphism $A/p^{1/p}\cong A/p$ = almost $K^+/p$-algebras $\overline{A}$ such that $\overline{A}$ is flat and Frobenius induces an isomorphism $\overline{A}/p^{\frac 1p}\cong \overline{A}$, and then going over to the other side. Here, the first identification is not difficult; the second relies on the astonishing fact (already in the book by Gabber-Ramero) that the cotangent complex $\mathbb{L}_{\overline{A}/(K^+/p)}$ vanishes, and hence one gets unique deformations of objects and morphisms. At least on differentials $\Omega^1$, one can believe this: Every element $x$ has the form $y^p$ because Frobenius is surjective; but then $dx = dy^p = p\,dy = 0$ because $p=0$ in $\overline{A}$.

Now let me just briefly summarize the main theorems on the basic nature of perfectoid spaces.
First off, an affinoid perfectoid space is associated to an affinoid perfectoid $K$-algebra, which is a pair $(R,R^+)$ consisting of a perfectoid $K$-algebra $R$ and an open and integrally closed subring $R^+\subset R^\circ$ (it follows that $\mathfrak{m} R^\circ\subset R^+$, so $R^+$ is almost equal to $R^\circ$; in most cases, one will just take $R^+=R^\circ$). Then also the categories of affinoid perfectoid $K$-algebras and of affinoid perfectoid $K^\prime$-algebras are equivalent. Huber associates to such a pair $(R,R^+)$ a topological space $X=\mathrm{Spa}(R,R^+)$ consisting of continuous valuations on $R$ that are $\leq 1$ on $R^+$, with the topology generated by the rational subsets $\{x\in X\mid \forall i: |f_i(x)|\leq |g(x)|\}$, where $f_1,\ldots,f_n,g\in R$ generate the unit ideal. Moreover, he defines a structure *pre*sheaf $\mathcal{O}_X$, and the sub*pre*sheaf $\mathcal{O}_X^+$, consisting of functions which have absolute value $\leq 1$ everywhere.

Theorem: Let $(R,R^+)$ be an affinoid perfectoid $K$-algebra, with tilt $(R^\prime,R^{\prime +})$. Let $X=\mathrm{Spa}(R,R^+)$, with $\mathcal{O}_X$ etc., and $X^\prime = \mathrm{Spa}(R^\prime,R^{\prime +})$, etc.

i) We have a canonical homeomorphism $X\cong X^\prime$, given by mapping $x$ to $x^\prime$ defined via $|f^\prime(x^\prime)| = |[f^\prime] (x)|$. Rational subsets are identified under this homeomorphism.

ii) For any rational subset $U\subset X$, the pair $(\mathcal{O}_X(U),\mathcal{O}_X^+(U))$ is affinoid perfectoid with tilt $(\mathcal{O}_{X^\prime}(U),\mathcal{O}_{X^\prime}^+(U))$.

iii) The presheaves $\mathcal{O}_X$, $\mathcal{O}_X^+$ are sheaves.

iv) For all $i>0$, the cohomology group $H^i(X,\mathcal{O}_X)=0$; even better, the cohomology group $H^i(X,\mathcal{O}_X^+)$ is almost zero, i.e. $\mathfrak{m}$-torsion.

This allows one to define general perfectoid spaces by gluing affinoid perfectoid spaces. Further, one can define etale morphisms of perfectoid spaces, and then etale topoi. This leads to an improvement on Faltings's almost purity theorem:

Theorem: Let $R$ be a perfectoid $K$-algebra, and let $S/R$ be finite etale. Then $S$ is perfectoid and $S^\circ$ is almost finitely presented etale over $R^\circ$.

In particular, no sort of semistable reduction hypothesis is required anymore. Also, the proof is much easier, cf. the book project by Gabber-Ramero. Tilting also identifies the etale topoi of a perfectoid space and its tilt, and as an application, one gets the following theorem.

Theorem: We have an equivalence of etale topoi of adic spaces: $(\mathbb{P}^n_{K^\prime})^{\mathrm{ad}}_{\mathrm{et}}\cong \lim_{\leftarrow} (\mathbb{P}^n_K)^{\mathrm{ad}}_{\mathrm{et}}$. Here the transition maps are again the $p$-th power map on coordinates.

Let me end this discussion by mentioning one application. Let $X\subset \mathbb{P}^n_K$ be a smooth hypersurface. By a theorem of Huber, we can find a small open neighborhood $\tilde{X}$ of $X$ with the same etale cohomology. Moreover, we have the projection $\pi: \mathbb{P}^n_{K^\prime}\rightarrow \mathbb{P}^n_K$, at least on topological spaces or etale topoi. Within the preimage $\pi^{-1}(\tilde{X})$, it is possible to find a smooth hypersurface (of possibly much larger degree) $X^\prime$. This gives a map from the cohomology of $X$ to the cohomology of $X^\prime$, thereby comparing the etale cohomology of a variety in characteristic $0$ with the etale cohomology of a variety in characteristic $p$. Using this, it is easy to verify the weight-monodromy conjecture for $X$.
Yeah... And so we watch math history in the making. – Olivier May 31 2011 at 18:17

@Scholze: Many thanks for this! @Olivier: well put indeed! – SGP May 31 2011 at 21:57

Yes, for the moment I suspect watching the video of the IAS talk is your best bet (though if you're in the neighborhood of Bonn, I think Scholze is giving a seminar this coming Tuesday, the 24th). – D. Savitt May 23 2011 at 0:50

Thanks, but I'd prefer reading to watching videos. – Thomas Riepe May 23 2011 at 10:17

I also found this video some weeks ago and found it very interesting. – Martin Brandenburg May 23 2011 at 14:51

@Riepe: in addition to the items you have linked to (in the main question), there is also the work of Kedlaya-Liu (see www-math.mit.edu/~kedlaya/math/…) as well as the work of Gabber-Ramero and Andreatta-Iovita referred to by Scholze in his talk. – SGP May 23 2011 at 19:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 223, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9308424592018127, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/67883/linear-programming-single-optimal-solution
# Linear Programming - Single Optimal Solution

Is it correct to state that if a linear objective function is not parallel to any of the constraints, then there is a single optimal solution at some vertex of the polytope?

There can be more than one optimal solution; it needn't be single. – Bhargav Sep 27 '11 at 9:06

Even if the objective function is not parallel to any of the constraints? Can you construct such an example with 2 variables? – Michael Sep 27 '11 at 9:17

## 1 Answer

This is incorrect. To construct a class of simple counterexamples in three dimensions, consider problems of the form $Ax \le b, \, x \ge 0$ where $A$ is any positive $2 \times 3$ matrix of rank 2 and $b$ is a positive 2-vector. This defines a non-empty bounded feasible set in the first octant. Now consider objective functions of the form $x \mapsto cx$ where $c$ is a convex combination of the two rows of $A$. The solution set of the maximization problem is the line segment where $x \ge 0$ and $Ax = b$, but the objective function is not parallel to any of the constraints. You may get multiple solutions in any situation where $c$ is in the space spanned by the rows of $A$.

You are right. Thank you very much for the answer. But is there a way I can use LP to find some vertex of a given polytope? – Michael Sep 27 '11 at 13:57

Ask a new question; asking questions in comments is not advisable on the site. – Bhargav Sep 29 '11 at 1:03
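To make the answer's construction concrete, here is a small numerical check in R (my illustration, not part of the original thread; it assumes the lpSolve package is available). The objective is the average of the two rows of $A$, so every point of the segment $\{x\geq 0,\ Ax=b\}$ attains the same optimal value, even though the objective is not parallel to either constraint.

```
# a concrete instance of the class of counterexamples above (lpSolve assumed)
library(lpSolve)
A <- rbind(c(1, 1, 1),
           c(1, 2, 3))                 # positive, rank 2
b <- c(1, 2)                           # positive 2-vector
cvec <- 0.5 * A[1, ] + 0.5 * A[2, ]    # convex combination of the rows
sol <- lp("max", cvec, A, rep("<=", 2), b)  # x >= 0 is implicit in lpSolve
sol$objval                             # 1.5
sol$solution                           # one optimal vertex
# the whole segment x(t) = (t, 1 - 2t, t), 0 <= t <= 1/2, is optimal:
t <- seq(0, 0.5, 0.1)
colSums(cvec * rbind(t, 1 - 2 * t, t)) # all equal to 1.5
```

The solver reports a single vertex, but the check in the last line shows a continuum of optima, as the answer predicts.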
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.898040235042572, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/121775-modulo-squares-cubes.html
# Thread: modulo squares and cubes

1. If we look at the field F_7* modulo the squares, then we can say that this is isomorphic to Z/2 by sending the squares to 0, and the non-squares to 1 (at least we think this is the reason that it is isomorphic to Z/2). Now we want to look at F_7* modulo cubes; we think this is isomorphic to Z/3, but we don't know why. We can send the cubes to 0, but how do we know which elements are mapped to 1 and 2?

2. Originally Posted by lizzy

   > If we look at the field F_7* modulo the squares, then we can say that this is isomorphic to Z/2 by sending the squares to 0, and the non-squares to 1 (at least we think this is the reason that it is isomorphic to Z/2). Now we want to look at F_7* modulo cubes; we think this is isomorphic to Z/3, but we don't know why. We can send the cubes to 0, but how do we know which elements are mapped to 1 and 2?

   Since $a^3\in\{-1,0,1\}\!\!\pmod 7$ for every $a$, we have $a^3\in\{-1,1\}\!\!\pmod 7$ for $a\in\left(\mathbb{F}_7\right)^*$. Also, if $\phi: \left(\mathbb{F}_7\right)^*\rightarrow \mathbb{Z}_3$ is such a homomorphism, it must be that $\phi(ab)=\phi(a)+\phi(b)$, where the product on the left side is modulo 7, whereas the sum on the right side is modulo 3. But then $\phi(a)=\phi(b)\!\!\!\pmod 3\Longleftrightarrow \phi(a)-\phi(b)=0\!\!\!\pmod 3$ $\Longleftrightarrow \phi(ab^{-1})=0\!\!\!\pmod 3\Longleftrightarrow ab^{-1}=\pm 1\!\!\!\pmod 7$ $\Longleftrightarrow a=\pm b\!\!\!\pmod 7$. From here you can now build your function $\phi$, taking into account, of course, that it must be $\phi(\pm 1)=0$, since only $\pm 1$ are cubes in $\left(\mathbb{F}_7\right)^*$...

   Tonio
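A quick computational check of all of this (my addition; the choice of generator $g=3$ for $\mathbb{F}_7^*$ is one possibility, not something fixed by the thread):

```
# cubes in F_7^*: only +/-1, so the subgroup of cubes has order 2
(1:6)^3 %% 7                                # 1 1 6 1 6 6

# an explicit phi: discrete log base the generator g = 3, reduced mod 3
pows <- sapply(1:6, function(k) 3^k %% 7)   # 3 2 6 4 5 1
phi  <- function(x) match(x, pows) %% 3
sapply(1:6, phi)                            # phi(1) = phi(6) = 0: cubes map to 0
# verify the homomorphism property phi(ab) = phi(a) + phi(b) mod 3
all(outer(1:6, 1:6, Vectorize(function(a, b)
  phi((a * b) %% 7) == (phi(a) + phi(b)) %% 3)))
```

The kernel of this $\phi$ is exactly $\{1,6\}=\{\pm 1\}$, the cubes, so the quotient is indeed $\mathbb{Z}/3$.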
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9612922072410583, "perplexity_flag": "head"}
http://mathoverflow.net/questions/28672/explicit-invariants-under-change-of-basis-of-maps-v-to-v-otimes-v/28708
Explicit invariants (under change of basis) of maps $V \to V \otimes V$

It is standard to construct numbers associated to a linear transformation $f: V \to V$ of a finite-dimensional vector space which are invariant under change of basis. The coefficients of the characteristic polynomial are such, and it is quite simple to see for example that the trace is invariant by equating it with the map which takes $\sum v_i \otimes w_i \in V \otimes V^* \cong Hom(V, V)$ and then evaluates $\sum w_i(v_i)$.

My question is: can one explicitly associate quantities to maps $V \to V \otimes V$ which are invariant under change of basis? A simple dimension count indicates there should be plenty of invariants, since the space of orbits will have dimension roughly $d^3 - d^2$, where $d$ is the dimension of $V$, but I have yet to construct a single one.

Sorry for the confusion. By "quantity" I mean a non-constant function from $Hom(V, V\otimes V)^{GL(V)}$ to a familiar set like the ground ring (I am most interested in the rational numbers, but one could extend to the complex numbers if needed), the natural numbers, $\{0,1\}$, etc. For this I don't even care about continuity (e.g. rank also is of interest), just "computability" (a vague term, sorry) by hand in small cases and computer in large ones, and having a large enough collection to hopefully distinguish the maps I am interested in. – Dev Sinha Jun 19 2010 at 9:52

OK, it sounds like you are interested in classifying the orbits of the $GL(V)$-action on $\operatorname{Hom}(V,V\otimes V)$, as in Tom's answer, together with numerical invariants that distinguish them. If that's indeed the case, since all other answers were dealing with algebraic invariants instead, your best bet would be to post another question clearly stipulating it. – Victor Protsak Jun 22 2010 at 1:49

5 Answers

The answer depends on what you mean by "quantities"! As José demonstrated, there are no polynomial absolute invariants, essentially because of homogeneity; however, there are plenty of rational ones, obtained as fractions $F/G,$ where $F$ and $G$ are polynomial relative invariants of the same weight $k$:

$$F(g\cdot f)=(\det g)^k F(f),\ G(g\cdot f)=(\det g)^k G(f), \quad g\in GL(V).$$

Your dimension count is one of the common false beliefs. In fact, sanity is restored when working with rational functions, for

$$\operatorname{tr\ deg} K(X)^G=\operatorname{tr\ deg} K(X)-\dim O_x,$$

where $X$ is an irreducible algebraic variety over an algebraically closed field $K$ of char 0 with an action of an algebraic group $G$, $O_x=G\cdot x$ is a generic orbit, $G_x$ is the stabilizer of $x$, and $\dim O_x=\dim G-\dim G_x.$ In fact, rational invariants always separate generic orbits.[1]

What goes wrong with polynomial invariants is that orbits need not be Zariski closed. In your example, due to the presence of dilations, the Zariski closure of every orbit contains zero. A $G$-invariant polynomial function $F$ is constant on any orbit $O$ and hence its value at any point $x\in O$ is equal to $F(0),$ so $F$ is constant. Another way to resolve the issue is to replace the group $G$ with its subgroup $[G,G]$, which is the intersection of the kernels of all 1-dimensional representations of $G$; thus replace $GL(V)$ with $SL(V)$ in the example.
After the $GL(V)$-equivariant identification $Hom(V,V\otimes V)\simeq V^*\otimes V\otimes V$ and polarization, which replaces a homogeneous degree $d$ polynomial function on $W$ with a multilinear map with $d$ arguments $W^{\otimes d}\to K$, the question reduces to finding multilinear $SL(V)$-invariants. Classical invariant theory shows that they are all obtained by composing $GL(V)$-invariant contractions $V^* \otimes V\to K,$ $\xi\otimes v\mapsto \xi(v)$ and expansions $K\to V^*\otimes V,$ $1\mapsto \sum e_i^*\otimes e_i$, permutations, symmetrizations, antisymmetrizations, and the $SL(V)$-invariant determinants $V^{\otimes k} \to K,$ $v_1\otimes\ldots\otimes v_k\mapsto \det[v_1|\ldots|v_k],$ $k=\dim V.$ Explicit formulas are a bit messy.

Footnotes

[1] See Vinberg and Popov's article on invariant theory in the Algebraic Geometry 4 volume of the yellow Russian Math Encyclopaedia.

No nontrivial linear one exists, I'm afraid. The underlying reason why the trace exists is that there is a $\mathrm{GL}(V)$-equivariant endomorphism of $V$: namely, the identity endomorphism. By contrast, there cannot be any $\mathrm{GL}(V)$-equivariant linear maps $V \to V \otimes V$. For definiteness, let us consider $V$ a real vector space. Consider the action of the $\mathbb{R}^\times$ subgroup of $\mathrm{GL}(V)$ consisting of scalar matrices. Then if $f:V \to V \otimes V$ is linear, $$f(\lambda v) = \lambda f(v)$$ for any $v \in V$ and $\lambda \in \mathbb{R}^\times$. On the other hand, equivariance would say that $$\lambda \cdot f(v) = \lambda^2 f(v).$$ You can only reconcile both if $f$ is identically zero.

Edit: Unwisely, I had assumed linearity. (The OP did mention other polynomial invariants.) As pointed out in comments to this answer, there are of course rational invariants.

So there's no linear invariant. And for linear maps $V\to V$ the trace is the only linear invariant. But the question was not about linear invariants alone. For linear maps $V\to V\otimes V$ there are in fact no nonconstant polynomial invariants, by the same sort of scaling argument; but shouldn't there be some invariants that are rational functions of the coefficients? – Tom Goodwillie Jun 19 2010 at 1:07

Tom: You've anticipated my answer which I have been conscientiously typing for the past 120 minutes. Due to time constraints and annoying markup bugs, I had to give up on an explicit example. – Victor Protsak Jun 19 2010 at 1:26

Thanks to both -- I've edited my answer to reflect your comments. – José Figueroa-O'Farrill Jun 19 2010 at 1:38

What is meant by the fascinating statement that "the underlying reason why Trace exists is that there is a GL(V)-equivariant endomorphism $V \to V$"? Even if not terribly deep this would be a nice remark to understand. – T. Jun 22 2010 at 21:35

This argument does not work over $\mathbb{F}_2$, as far as I can tell; does anything interesting happen in that case? – Daniel Litt Jul 13 2010 at 14:35

This is essentially a rephrasing of Victor Protsak's reply (which I read too late). In characteristic zero the answer is in principle given by classical invariant theory. Let $(e^i)$, $i=1,...,n$, be a basis for $V$. Then coordinates on the space $Hom(V,V \otimes V)$ are given by tensors $c^i_{jk}$.
Such a tensor sends the vector $\lambda_i e^i$ to $c^i_{jk}\lambda_i e^j \otimes e^k$ (using the summation convention for repeated indices). In addition let $\epsilon^{i_1,...,i_n}$ be the tensor which is $1$ if $i_1,...,i_n$ form an even permutation of $1,...,n$, which is $-1$ for an odd permutation, and which is zero otherwise. Similarly for $\epsilon_{i_1,...,i_n}$. $SL(V)$ invariants are now given by expressions in $c,\epsilon$ which have no free indices (it is possible to do this via graphs). One has to be careful since many expressions are zero for symmetry reasons. There are clearly no linear invariants. If $\dim(V)=2$ then I believe $\epsilon^{kl}c^i_{kj}c^j_{il}$ is an honest quadratic invariant. Another one is $\epsilon^{kl}c^i_{ik}c^j_{lj}$, i.e. the determinant of the left partial trace with the right partial trace.

EDIT: The two-dimensional case can also be viewed in a less abstract way. If $\dim V=2$ then $V\cong V^\ast$ as $SL(V)$ representations. Furthermore by Clebsch-Gordan we have $V^* \otimes V\otimes V\cong S^3V\oplus V\oplus V$. This is the problem of finding the generating concomitants for binary cubic forms. The answer can be found in Grace and Young. In this setting I see only one quadratic invariant, namely the determinant between the two copies of $V$. This means that the two quadratic invariants identified above are linearly dependent, which does not appear obvious. But they are indeed. If you write them out explicitly you get (up to sign) $c^1_{11}c^1_{12}-c^1_{21}c^1_{11}+c^1_{12}c^2_{12}-c^2_{21}c^1_{21}+c^2_{12}c^2_{22} -c^2_{22}c^2_{21}$

Jose's argument proves that there are no polynomial invariants, i.e. no invariants that are polynomial in the matrix coefficients with respect to a given basis. Victor suggests that there should be rational invariants. Here's an easy one. Let $\alpha : V \to V \otimes V$ be your function. Then there are roughly four (not necessarily symmetric) bivectors $\beta: k \to V\otimes V$ obtained by tracing $\alpha^{\otimes 2} : V^{\otimes 2} \to V^{\otimes 4}$ in various different ways. For generic $\alpha$, this bivector $\beta$ will be nondegenerate, i.e. it will have an "inverse" $\beta^{-1}: V\otimes V \to k$, defined by declaring that one of the traces of $\beta\otimes \beta^{-1}$ is the identity map $V \to V$. If $\beta^{-1}$ exists, then it is a (nonsymmetric) inner product on $V$. Now, $\alpha$ determines two natural vectors $\operatorname{tr}_L(\alpha),\operatorname{tr}_R(\alpha)\in V$, by tracing in the two different ways. So some invariants are the squared lengths of $\operatorname{tr}_L(\alpha),\operatorname{tr}_R(\alpha)$ and their two inner products with respect to $\beta^{-1}$.

I am a bit suspicious. Any rational $GL(V)$-invariant, when written as a quotient of polynomial semi-invariants, must somehow involve determinants of order $\dim V.$ Are they hidden within your "squared length with respect to $\beta^{-1}$" prescription? (A minor terminological point: bivectors are skew-symmetric.) – Victor Protsak Jun 19 2010 at 7:47

I've been thinking about the case when $V$ is $2$-dimensional. Here, I claim, is a parametrization of the $GL(V)$-orbits of 'generic' linear maps $F:V\to V\otimes V$: To numbers $p, a, b, c$ we associate a map which acts on a basis $v, w$ by:

$F(v)= -p(v\otimes v)+ w\otimes w$

$F(w)= (v\otimes w-w\otimes v)+ a(v\otimes v)+b(v\otimes w+w\otimes v)+ c(w\otimes w)$.

To put a generic $F$ in this form, write it as $F_++F_-$, symmetric plus antisymmetric.
$F_-$ is given by $x\mapsto v\otimes x-x\otimes v$ for a unique $v\in V$. Assume $v\ne 0$. Choose $w$ such that $v,w$ is a basis, but remember we are free to replace $w$ by $sv+tw$ for any scalars $s$ and $t\ne 0$. Think of the symmetric tensor $F_+(v)$ as a homogeneous quadratic polynomial in indeterminates $v,w$. Assume it has a $w^2$ term. Replacing $w$ by a suitable $tw$ we can make that term $w^2$. Now replacing $w$ by a suitable $sv+w$ ("completing the square") we can eliminate the $vw$ term. So $F_+(v)= -pv^2+w^2=-p(v\otimes v)+ w\otimes w$ for some $p$. And $F_+(w)= av^2+2bvw+cw^2=a(v\otimes v)+b(v\otimes w+w\otimes v)+ c(w\otimes w)$ for some $a,b,c$. This gives the formulas above for $F$. I used up all the choices I had except that $w$ can still be changed to $-w$, which would change $p,a,b,c$ to $p,-a,b,-c$. So the "parametrization" is actually two-to-one.
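The quadratic invariant $\epsilon^{kl}c^i_{kj}c^j_{il}$ proposed above is easy to test numerically. Here is a minimal R sketch (my own check, under the assumed conventions that `cc[i,j,k]` stores $c^i_{jk}$ and that a change of basis $g$ acts by $f\mapsto (g\otimes g)\circ f\circ g^{-1}$); it verifies invariance under a determinant-one transformation:

```
eps <- matrix(c(0, -1, 1, 0), 2, 2)   # eps[1,2] = 1, eps[2,1] = -1
inv1 <- function(cc) {                # eps^{kl} c^i_{kj} c^j_{il}
  s <- 0
  for (i in 1:2) for (j in 1:2) for (k in 1:2) for (l in 1:2)
    s <- s + eps[k, l] * cc[i, k, j] * cc[j, i, l]
  s
}
set.seed(1)
cc <- array(rnorm(8), c(2, 2, 2))     # a random map V -> V (x) V
g  <- matrix(c(0.7, -0.3, 0.5, (1 + 0.5 * (-0.3)) / 0.7), 2, 2)  # det(g) = 1
gi <- solve(g)
ccp <- array(0, c(2, 2, 2))           # transformed tensor:
for (i in 1:2) for (m in 1:2) for (n in 1:2)
  for (a in 1:2) for (b in 1:2) for (d in 1:2)
    ccp[i, m, n] <- ccp[i, m, n] + gi[a, i] * g[m, b] * g[n, d] * cc[a, b, d]
c(inv1(cc), inv1(ccp))                # the two numbers agree
```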
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 139, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9242238402366638, "perplexity_flag": "head"}
http://mathoverflow.net/questions/8069?sort=votes
## Spin structures on 7-dimensional spherical space forms

Background

Let $M$ be a spin manifold and let $\Gamma$ be a finite group acting freely and isometrically on $M$ in such a way that $M/\Gamma$ is a smooth riemannian manifold. The quotient will be spin if and only if the action of $\Gamma$ on $M$ lifts to the spin bundle. For reasons having to do with $11 = 7 + 4$, I got interested in $M=S^7$ with the round metric. There is a unique spin structure on $S^7$ and the spin bundle is $$\mathrm{Spin}(7) \to \mathrm{Spin}(8) \to S^7.$$

A while back, together with one of my students, we investigated which smooth quotients $S^7/\Gamma$ are spin and how many inequivalent spin structures they admit. This boils down to determining the isomorphic lifts of $\Gamma \subset \mathrm{SO}(8)$ to $\mathrm{Spin}(8)$. There are lots of finite subgroups $\Gamma \subset \mathrm{SO}(8)$ acting freely on $S^7$, which are listed in Wolf's Spaces of constant curvature, and to our surprise (this does not happen with $S^5$, say) we found that all quotients $S^7/\Gamma$ are spin, although they do not all have the same number of spin structures. Our results were obtained by a case-by-case analysis, but we always remained with the sneaky suspicion that there ought to be a simple topological explanation.

Question

Is there one? Perhaps based on the parallelizability of $S^7$? Thanks in advance.

Edit I'm answering Chris's questions in the first comment below. The problem is indeed the existence of a subgroup $\Gamma' \subset \mathrm{Spin}(8)$ such that the obvious square commutes, i.e. such that the two composites $\Gamma' \to \Gamma \to \mathrm{SO}(8)$ and $\Gamma' \to \mathrm{Spin}(8) \to \mathrm{SO}(8)$ agree, and where the first map $\Gamma' \to \Gamma$ is an isomorphism. This is the same as lifting $\Gamma \to \mathrm{SO}(8)$ via the spin double cover. The simplest counterexample for $S^5$ is to take any freely acting cyclic subgroup $\Gamma \subset \mathrm{SO}(6)$ of even order.

Two questions: (1) is this equivalent to asking whether the homomorphism $\Gamma \to SO(8)$ lifts to Spin(8)? (2) What is the counterexample in the case of the five-sphere $S^5$? I have a sketch of a proof, but it seems to work too generally so something must be wrong. – Chris Schommer-Pries Dec 9 2009 at 17:02

I will answer this as an edit to the question, if that's alright. – José Figueroa-O'Farrill Dec 9 2009 at 17:37

## 2 Answers

Here is a partial answer. If the order of $\Gamma$ is odd, then this is a trivial application of transfer maps. You have described your manifold as a quotient $\pi:S^7 \to M = S^7/\Gamma$, and hence $S^7$ is a covering space of $M$. The transfer map is a wrong-way map in cohomology, $\tau^* : H^* (S^7) \to H^* (M)$, which exists for cohomology in, say, $\mathbb{Z}/2$-coefficients. The composition $\tau^* \pi^*$ is multiplication by the order of $\Gamma$, which in this case is an isomorphism when the order of $\Gamma$ is odd. But since the cohomology of $S^7$ vanishes in degrees 1 and 2, this proves that these groups also vanish for $M$; hence $w_1(M)=w_2(M)=0$, so $M$ has a spin structure, and it is unique because spin structures form a torsor over $H^1(M;\mathbb{Z}/2)=0$. The more interesting case is when $\Gamma$ is 2-primary. For example, why does $\mathbb{R}P^7$ have a spin structure? I suspect that your intuition is spot on and that it has to do with the framing of $S^7$.

Thanks! Nice argument.
Most relevant groups, though, have even order :) In fact, the only freely acting subgroups of SO(8) which have odd order are those which are an extension of an odd-order cyclic group by another odd-order cyclic group (the orders satisfying some arithmetic condition). Can you point me to a reference for the transfer maps, by the way? – José Figueroa-O'Farrill Dec 7 2009 at 21:19

You don't really need the transfer for this: the map $S^7/\Gamma \to B\Gamma$ classifying the covering space is 6-connected, so if the group has odd order $S^7/\Gamma$ has no $\mathbb{F}_2$-cohomology to support Stiefel–Whitney classes. – Oscar Randal-Williams Dec 7 2009 at 22:26

@Oscar: You still need to know that the $\mathbb{Z}/2$ cohomology of $B\Gamma$ vanishes, and this is usually proven with transfers. @Jose: Most standard references have a chapter on transfers. The standard reference for me is Allen Hatcher's Algebraic Topology, which is available online from his website and has a chapter on transfers. (I think he does it for integral cohomology, but the $\mathbb{Z}/2$ case is exactly the same.) – Chris Schommer-Pries Dec 8 2009 at 0:15

On the other hand, since we only need to know that the low-dimensional cohomology groups (1 and 2) of $B\Gamma$ vanish, we can probably avoid transfers by looking at explicit interpretations of group cohomology. $H^1 (\Gamma, \mathbb{Z}/2) = Hom(\Gamma, \mathbb{Z}/2)$ vanishes if $\Gamma$ is of odd order, so we just need to calculate that there are no non-trivial $\mathbb{Z}/2$ central extensions of $\Gamma$ to show that $H^2$ vanishes. This is probably a standard calculation. – Chris Schommer-Pries Dec 8 2009 at 12:18

I would suspect triality is involved. The two spinor representations of spin(8) have the same dimension as the fundamental vector representation of spin(8); for all other spin groups the representations don't have the same dimension, and I believe this is related to triality. Here is an article on triality: http://en.wikipedia.org/wiki/Triality It has references to more material. Also see this article on SO(8): http://en.wikipedia.org/wiki/SO(8)

Triality is indeed an automorphism of $\mathrm{Spin}(8)$ which relates the vector and the two half-spinor representations. I am not sure how to use it to lift $\Gamma$, though. Since $\Gamma$ is a subgroup of $\mathrm{SO}(V)$ where $V$ is the vector representation, then triality maps it isomorphically to a subgroup of $\mathrm{SO}(\Delta_\pm)$, where $\Delta_\pm$ are the half-spinor reps. Is it clear that either of these two subgroups lifts to $\mathrm{Spin}(8)$? I'll have to think about it some more, but it looks promising, so thanks! You've got my +1. – José Figueroa-O'Farrill Dec 7 2009 at 20:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.943244993686676, "perplexity_flag": "head"}
http://darrenjw.wordpress.com/tag/carlo/
# Darren Wilkinson's research blog

Statistics, computing, Bayes, stochastic modelling, systems biology and bioinformatics

## Posts Tagged 'Carlo'

04/06/2011

### Java libraries for (non-uniform) random number simulation

Anyone writing serious Monte Carlo (and MCMC) codes relies on having a very good and fast (uniform) random number generator and associated functions for generation of non-uniform random quantities, such as Gaussian, Poisson, Gamma, etc. In a previous post I showed how to write a simple Gibbs sampler in four different languages. In C (and C++) random number generation is easy for most scientists, as the (excellent) GNU Scientific Library (GSL) provides exactly what most people need. But it wasn't always that way… I remember the days before the GSL, when it was necessary to hunt around on the net for bits of C code to implement different algorithms. Worse, it was often necessary to hunt around for a bit of free FORTRAN code, and compile that with an F77 compiler and figure out how to call it from C. Even in the early Alpha days of the GSL, coverage was patchy, and the API changed often. Bad old days… But those days are long gone, and C programmers no longer have to worry about the problem of random variate generation – they can safely concentrate on developing their interesting new algorithm, and leave the rest to the GSL.

Unfortunately for Java programmers, there isn't yet anything quite comparable to the GSL in Java world. I pretty much ignored Java until Java 5. Before then, the language was too limited, and the compilers and JVMs were too primitive to really take seriously for numerical work. But since the launch of Java 5 I've been starting to pay more interest. The language is now a perfectly reasonable O-O language, and the compilers and JVMs are pretty good. On a lot of benchmarks, Java is really quite comparable to C/C++, and Java is nicer to code, and has a lot of impressive associated technology. So if there was a math library comparable to the GSL, I'd be quite tempted to jump ship to the Java world and start writing all of my Monte Carlo codes in Java. But there isn't. At least not yet.

When I first started to take Java seriously, the only good math library with good support for non-uniform random number generation was COLT. COLT was, and still is, pretty good. The code is generally well-written, and fast, and the documentation for it is reasonable. However, the structure of the library is very idiosyncratic, the coverage is a bit patchy, and there doesn't ever seem to have been a proper development community behind it. It seems very much to have been a one-man project, which has long since stagnated. Unsurprisingly then, COLT has been forked. There is now a Parallel COLT project. This project is continuing the development of COLT, adding new features that were missing from COLT, and, as the name suggests, adding concurrency support. Parallel COLT is also good, and is the main library I currently use for random number generation in Java. However, it has obviously inherited all of the idiosyncrasies that COLT had, and still doesn't seem to have a large and active development community associated with it. There is no doubt that it is an incredibly useful software library, but it still doesn't really compare to the GSL.

I have watched the emergence of the Apache Commons Math project with great interest (not to be confused with Uncommons Math – another one-man project).
I think this project probably has the greatest potential for providing the Java community with their own GSL equivalent. The Commons project has a lot of momentum, the Commons Math project seems to have an active development community, and the structure of the library is more intuitive than that of (Parallel) COLT. However, it is early days, and the library still has patchy coverage and is a bit rough around the edges. It reminds me a lot of the GSL back in its Alpha days. I'd not bothered to even download it until recently, as the random number generation component didn't include the generation of gamma random quantities – an absolutely essential requirement for me. However, I noticed recently that the latest release (2.2) did include gamma generation, so I decided to download it and try it out. It works, but the generation of gamma random quantities is very slow (around 50 times slower than Parallel COLT). This isn't a fundamental design flaw of the whole library – generating Gaussian random quantities is quite comparable with other libraries. It's just that an inversion method has been used for gamma generation. All efficient gamma generators use a neat rejection scheme. In case anyone would like to investigate for themselves, here is a complete program for gamma generation designed to be linked against Parallel COLT:

```
import java.util.*;
import cern.jet.random.tdouble.*;
import cern.jet.random.tdouble.engine.*;

class GammaPC {

    public static void main(String[] arg) {
        // Mersenne Twister as the underlying uniform generator
        DoubleRandomEngine rngEngine = new DoubleMersenneTwister();
        Gamma rngG = new Gamma(1.0, 1.0, rngEngine);
        long N = 10000;
        double x = 0.0;
        for (int i = 0; i < N; i++) {
            // 1000 gamma draws per line of output
            for (int j = 0; j < 1000; j++) {
                x = rngG.nextDouble(3.0, 25.0);
            }
            System.out.println(x);
        }
    }

}
```

and here is a complete program designed to be linked against Commons Math:

```
import java.util.*;
import org.apache.commons.math.*;
import org.apache.commons.math.random.*;

class GammaACM {

    public static void main(String[] arg) throws MathException {
        RandomDataImpl rng = new RandomDataImpl();
        long N = 10000;
        double x = 0.0;
        for (int i = 0; i < N; i++) {
            // 1000 gamma draws per line of output
            for (int j = 0; j < 1000; j++) {
                x = rng.nextGamma(3.0, 1.0 / 25.0);
            }
            System.out.println(x);
        }
    }

}
```

The two codes do the same thing (note that they parameterise the gamma distribution differently). Both programs work (they generate variates from the same, correct, distribution), and the Commons Math interface is slightly nicer, but the code is much slower to execute. I'm still optimistic that Commons Math will one day be Java's GSL, but I'm not giving up on Parallel COLT (or C, for that matter!) just yet…

17/05/2011

## Introduction

In the previous post I explained how one can use an unbiased estimate of marginal likelihood derived from a particle filter within a Metropolis-Hastings MCMC algorithm in order to construct an exact pseudo-marginal MCMC scheme for the posterior distribution of the model parameters given some time course data. This idea is closely related to that of the particle marginal Metropolis-Hastings (PMMH) algorithm of Andrieu et al (2010), but not really exactly the same.
This is because for a Bayesian model with parameters $\theta$, latent variables $x$ and data $y$, of the form

$\displaystyle p(\theta,x,y) = p(\theta)p(x|\theta)p(y|x,\theta),$

the pseudo-marginal algorithm which exploits the fact that the particle filter's estimate of likelihood is unbiased is an MCMC algorithm which directly targets the marginal posterior distribution $p(\theta|y)$. On the other hand, the PMMH algorithm is an MCMC algorithm which targets the full joint posterior distribution $p(\theta,x|y)$. Now, the PMMH scheme does reduce to the pseudo-marginal scheme if samples of $x$ are not generated and stored in the state of the Markov chain, and it certainly is the case that the pseudo-marginal algorithm gives some insight into why the PMMH algorithm works. However, the PMMH algorithm is much more powerful, as it solves the "smoothing" and parameter estimation problem simultaneously and exactly, including the "initial value" problem (computing the posterior distribution of the initial state, $x_0$). Below I will describe the algorithm and explain why it works, but first it is necessary to understand the relationship between marginal, joint and "likelihood-free" MCMC updating schemes for such latent variable models.

### MCMC for latent variable models

#### Marginal approach

If we want to target $p(\theta|y)$ directly, we can use a Metropolis-Hastings scheme with a fairly arbitrary proposal distribution for exploring $\theta$, where a new $\theta^\star$ is proposed from $f(\theta^\star|\theta)$ and accepted with probability $\min\{1,A\}$, where

$\displaystyle A = \frac{p(\theta^\star)}{p(\theta)} \times \frac{f(\theta|\theta^\star)}{f(\theta^\star|\theta)} \times \frac{p({y}|\theta^\star)}{p({y}|\theta)}.$

As previously discussed, the problem with this scheme is that the marginal likelihood $p(y|\theta)$ required in the acceptance ratio is often difficult to compute.

#### Likelihood-free MCMC

A simple "likelihood-free" scheme targets the full joint posterior distribution $p(\theta,x|y)$. It works by exploiting the fact that we can often simulate from the model for the latent variables $p(x|\theta)$ even when we can't evaluate it, or marginalise $x$ out of the problem. Here the Metropolis-Hastings proposal is constructed in two stages. First, a proposed new $\theta^\star$ is sampled from $f(\theta^\star|\theta)$ and then a corresponding $x^\star$ is simulated from the model $p(x^\star|\theta^\star)$. The pair $(\theta^\star,x^\star)$ is then jointly accepted with ratio

$\displaystyle A = \frac{p(\theta^\star)}{p(\theta)} \times \frac{f(\theta|\theta^\star)}{f(\theta^\star|\theta)} \times \frac{p(y|{x}^\star,\theta^\star)}{p(y|{x},\theta)}.$

The proposal mechanism ensures that the proposed $x^\star$ is consistent with the proposed $\theta^\star$, and so the procedure can work provided that the dimension of the data $y$ is low. However, in order to work well more generally, we would want the proposed latent variables to be consistent with the data as well as the model parameters.

#### Ideal joint update

Motivated by the likelihood-free scheme, we would really like to target the joint posterior $p(\theta,x|y)$ by first proposing $\theta^\star$ from $f(\theta^\star|\theta)$ and then a corresponding $x^\star$ from the conditional distribution $p(x^\star|\theta^\star,y)$.
The pair $(\theta^\star,x^\star)$ is then jointly accepted with ratio

$\displaystyle A = \frac{p(\theta^\star)}{p(\theta)} \frac{p({x}^\star|\theta^\star)}{p({x}|\theta)} \frac{f(\theta|\theta^\star)}{f(\theta^\star|\theta)} \frac{p(y|{x}^\star,\theta^\star)}{p(y|{x},\theta)} \frac{p({x}|y,\theta)}{p({x}^\star|y,\theta^\star)} = \frac{p(\theta^\star)}{p(\theta)} \frac{p(y|\theta^\star)}{p(y|\theta)} \frac{f(\theta|\theta^\star)}{f(\theta^\star|\theta)}.$

Notice how the acceptance ratio simplifies, using the basic marginal likelihood identity (BMI) of Chib (1995), and $x$ drops out of the ratio completely in order to give exactly the ratio used for the marginal updating scheme. Thus, the "ideal" joint updating scheme reduces to the marginal updating scheme if $x$ is not sampled and stored as a component of the Markov chain. Understanding the relationship between these schemes is useful for understanding the PMMH algorithm. Indeed, we will see that the "ideal" joint updating scheme (and the marginal scheme) corresponds to PMMH using infinitely many particles in the particle filter, and that the likelihood-free scheme corresponds to PMMH using exactly one particle in the particle filter. For an intermediate number of particles, the PMMH scheme is a compromise between the "ideal" scheme and the "blind" likelihood-free scheme, but is always likelihood-free (when used with a bootstrap particle filter) and always has an acceptance ratio leaving the exact posterior invariant.

### The PMMH algorithm

#### The algorithm

The PMMH algorithm is an MCMC algorithm for state space models jointly updating $\theta$ and $x_{0:T}$, like the algorithms above. First, a proposed new $\theta^\star$ is generated from a proposal $f(\theta^\star|\theta)$, and then a corresponding $x_{0:T}^\star$ is generated by running a bootstrap particle filter (as described in the previous post, and below) using the proposed new model parameters, $\theta^\star$, and selecting a single trajectory by sampling once from the final set of particles using the final set of weights. This proposed pair $(\theta^\star,x_{0:T}^\star)$ is accepted using the Metropolis-Hastings ratio

$\displaystyle A = \frac{\hat{p}_{\theta^\star}(y_{1:T})p(\theta^\star)q(\theta|\theta^\star)}{\hat{p}_{\theta}(y_{1:T})p(\theta)q(\theta^\star|\theta)},$

where $\hat{p}_{\theta^\star}(y_{1:T})$ is the particle filter's (unbiased) estimate of marginal likelihood, described in the previous post, and below. Note that this approach tends to the perfect joint/marginal updating scheme as the number of particles used in the filter tends to infinity. Note also that for a single particle, the particle filter just blindly forward simulates from $p_\theta(x^\star_{0:T})$ and that the filter's estimate of marginal likelihood is just the observed data likelihood $p_\theta(y_{1:T}|x^\star_{0:T})$, leading precisely to the simple likelihood-free scheme. To understand the scheme for an arbitrary finite number of particles, $M$, one needs to think carefully about the structure of the particle filter.

#### Why it works

To understand why PMMH works, it is necessary to think about the joint distribution of all random variables used in the bootstrap particle filter. To this end, it is helpful to re-visit the particle filter, thinking carefully about the resampling and propagation steps. First introduce notation for the "particle cloud": $\mathbf{x}_t=\{x_t^k|k=1,\ldots,M\}$, $\boldsymbol{\pi}_t=\{\pi_t^k|k=1,\ldots,M\}$, $\tilde{\mathbf{x}}_t=\{(x_t^k,\pi_t^k)|k=1,\ldots,M\}$.
Initialise the particle filter with $\tilde{\mathbf{x}}_0$, where $x_0^k\sim p(x_0)$ and $\pi_0^k=1/M$ (note that $w_0^k$ is undefined). Now suppose at time $t$ we have a sample from $p(x_t|y_{1:t})$: $\tilde{\mathbf{x}}_t$. First resample by sampling $a_t^k \sim \mathcal{F}(a_t^k|\boldsymbol{\pi}_t)$, $k=1,\ldots,M$. Here we use $\mathcal{F}(\cdot|\boldsymbol{\pi})$ for the discrete distribution on $1:M$ with probability mass function $\boldsymbol{\pi}$. Next sample $x_{t+1}^k\sim p(x_{t+1}^k|x_t^{a_t^k})$. Set $w_{t+1}^k=p(y_{t+1}|x_{t+1}^k)$ and $\pi_{t+1}^k=w_{t+1}^k/\sum_{i=1}^M w_{t+1}^i$. Finally, propagate $\tilde{\mathbf{x}}_{t+1}$ to the next step… We define the filter's estimate of likelihood as $\hat{p}(y_t|y_{1:t-1})=\frac{1}{M}\sum_{i=1}^M w_t^i$ and $\hat{p}(y_{1:T})=\prod_{i=1}^T \hat{p}(y_t|y_{1:t-1})$. See Doucet et al (2001) for further theoretical background on particle filters and SMC more generally.

Describing the filter carefully as above allows us to write down the joint density of all random variables in the filter as

$\displaystyle \tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1}) = \left[\prod_{k=1}^M p(x_0^k)\right] \left[\prod_{t=0}^{T-1} \prod_{k=1}^M \pi_t^{a_t^k} p(x_{t+1}^k|x_t^{a_t^k}) \right].$

For PMMH we also sample a final index $k'$ from $\mathcal{F}(k'|\boldsymbol{\pi}_T)$, giving the joint density

$\displaystyle \tilde{q}(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1})\pi_T^{k'}.$

We write the final selected trajectory as

$\displaystyle x_{0:T}^{k'}=(x_0^{b_0^{k'}},\ldots,x_T^{b_T^{k'}}),$

where $b_t^{k'}=a_t^{b_{t+1}^{k'}}$, and $b_T^{k'}=k'$. If we now think about the structure of the PMMH algorithm, our proposal on the space of all random variables in the problem is in fact

$\displaystyle f(\theta^\star|\theta)\tilde{q}_{\theta^\star}(\mathbf{x}_0^\star,\ldots,\mathbf{x}_T^\star,\mathbf{a}_0^\star,\ldots,\mathbf{a}_{T-1}^\star)\pi_T^{{k'}^\star},$

and by considering the proposal and the acceptance ratio, it is clear that detailed balance for the chain is satisfied by the target with density proportional to

$\displaystyle p(\theta)\hat{p}_\theta(y_{1:T}) \tilde{q}_\theta(\mathbf{x}_0,\ldots,\mathbf{x}_T,\mathbf{a}_0,\ldots,\mathbf{a}_{T-1}) \pi_T^{k'}.$

We want to show that this target marginalises down to the correct posterior $p(\theta,x_{0:T}|y_{1:T})$ when we consider just the parameters and the selected trajectory. But if we consider the terms in the joint distribution of the proposal corresponding to the trajectory selected by $k'$, this is given by

$\displaystyle p_\theta(x_0^{b_0^{k'}})\left[\prod_{t=0}^{T-1} \pi_t^{b_t^{k'}} p_\theta(x_{t+1}^{b_{t+1}^{k'}}|x_t^{b_t^{k'}})\right]\pi_T^{k'} = p_\theta(x_{0:T}^{k'})\prod_{t=0}^T \pi_t^{b_t^{k'}},$

which, by expanding the $\pi_t^{b_t^{k'}}$ in terms of the unnormalised weights, simplifies to

$\displaystyle \frac{p_\theta(x_{0:T}^{k'})p_\theta(y_{1:T}|x_{0:T}^{k'})}{M^{T+1}\hat{p}_\theta(y_{1:T})}.$

It is worth dwelling on this result, as this is the key insight required to understand why the PMMH algorithm works. The whole point is that the terms in the joint density of the proposal corresponding to the selected trajectory exactly represent the required joint distribution modulo a couple of normalising constants, one of which is the particle filter's estimate of marginal likelihood.
Thus, by including $\hat{p}_\theta(y_{1:T})$ in the acceptance ratio, we knock out the normalising constant, allowing all of the other terms in the proposal to be marginalised away. In other words, the target of the chain can be written as proportional to

$\displaystyle \frac{p(\theta)p_\theta(x_{0:T}^{k'},y_{1:T})}{M^{T+1}} \times \text{(Other terms...)}$

The other terms are all probabilities of random variables which do not occur elsewhere in the target, and hence can all be marginalised away to leave the correct posterior

$\displaystyle p(\theta,x_{0:T}|y_{1:T}).$

Thus the PMMH algorithm targets the correct posterior for any number of particles, $M$. Also note the implied uniform distribution on the selected indices in the target. I will give some code examples in a future post.
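As a stop-gap, here is a minimal R sketch of the algorithm just described, for an assumed toy linear-Gaussian model $x_0\sim N(0,1)$, $x_t=\theta x_{t-1}+N(0,1)$, $y_t\sim N(x_t,1)$, with a random walk proposal and, for simplicity, a flat prior on $\theta$. The model, names and tuning constants are illustrative only; this is not the code referred to above.

```
pfilter <- function(theta, y, M = 100) {
  # bootstrap particle filter: returns the log of the unbiased marginal
  # likelihood estimate and a single sampled trajectory
  paths <- matrix(rnorm(M), M, 1)     # column 1 holds x_0 ~ p(x_0)
  w <- rep(1 / M, M)
  ll <- 0
  for (t in 1:length(y)) {
    a <- sample(M, M, replace = TRUE, prob = w)   # resample ancestors
    paths <- paths[a, , drop = FALSE]
    xnew <- theta * paths[, t] + rnorm(M)         # propagate
    paths <- cbind(paths, xnew)
    wt <- dnorm(y[t], xnew, 1)                    # unnormalised weights
    ll <- ll + log(mean(wt))          # log p-hat(y_t | y_{1:t-1})
    w <- wt / sum(wt)
  }
  k <- sample(M, 1, prob = w)         # select one trajectory using pi_T
  list(ll = ll, x = paths[k, ])
}

pmmh <- function(y, iters = 5000, rw = 0.1) {
  theta <- 0
  cur <- pfilter(theta, y)
  out <- numeric(iters)
  for (i in 1:iters) {
    thetap <- theta + rnorm(1, 0, rw)             # propose theta*
    prop <- pfilter(thetap, y)                    # propose x*_{0:T} and p-hat
    # flat prior and symmetric proposal, so A is the ratio of marginal
    # likelihood estimates; note the old estimate is re-used, not redrawn
    if (log(runif(1)) < prop$ll - cur$ll) {
      theta <- thetap
      cur <- prop
    }
    out[i] <- theta
  }
  out
}

# simulate some data and run the sampler
set.seed(1)
n <- 50
x <- numeric(n); xprev <- rnorm(1)
for (t in 1:n) { x[t] <- 0.8 * xprev + rnorm(1); xprev <- x[t] }
y <- x + rnorm(n)
hist(pmmh(y), 30)   # posterior for theta, concentrated near 0.8
```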
15/05/2011

## The pseudo-marginal approach to MCMC for Bayesian inference

In a previous post I described a generalisation of the Metropolis-Hastings MCMC algorithm which uses unbiased Monte Carlo estimates of likelihood in the acceptance ratio, but is nevertheless exact, when considered as a pseudo-marginal approach to "exact approximate" MCMC. To be useful in the context of Bayesian inference, we need to be able to compute unbiased estimates of the (marginal) likelihood of the data given some proposed model parameters with any "latent variables" integrated out.

To be more precise, consider a model for data $y$ with parameters $\theta$ of the form $\pi(y|\theta)$ together with a prior on $\theta$, $\pi(\theta)$, giving a joint model

$\displaystyle \pi(\theta,y)=\pi(\theta)\pi(y|\theta).$

Suppose now that interest is in the posterior distribution

$\displaystyle \pi(\theta|y) \propto \pi(\theta,y)=\pi(\theta)\pi(y|\theta).$

We can construct a fairly generic (marginal) MCMC scheme for this posterior by first proposing $\theta^\star \sim f(\theta^\star|\theta)$ from some fairly arbitrary proposal distribution and then accepting the value with probability $\min\{1,A\}$ where

$\displaystyle A = \frac{\pi(\theta^\star)}{\pi(\theta)} \frac{f(\theta|\theta^\star)}{f(\theta^\star|\theta)} \frac{\pi(y|\theta^\star)}{\pi(y|\theta)}.$

This method is great provided that the (marginal) likelihood of the data $\pi(y|\theta)$ is available to us analytically, but in many (most) interesting models it is not. However, in the previous post I explained why substituting in a Monte Carlo estimate $\hat\pi(y|\theta)$ will still lead to the exact posterior if the estimate is unbiased in the sense that $E[\hat\pi(y|\theta)]=\pi(y|\theta)$. Consequently, sources of (cheap) unbiased Monte Carlo estimates of (marginal) likelihood are of potential interest in the development of exact MCMC algorithms.

## Latent variables and marginalisation

Often the reason that we cannot evaluate $\pi(y|\theta)$ is that there are latent variables in the problem, and the model for the data is conditional on those latent variables. Explicitly, if we denote the latent variables by $x$, then the joint distribution for the model takes the form

$\displaystyle \pi(\theta,x,y) = \pi(\theta)\pi(x|\theta)\pi(y|x,\theta).$

Now since

$\displaystyle \pi(y|\theta) = \int_X \pi(y|x,\theta)\pi(x|\theta)\,dx,$

there is a simple and obvious Monte Carlo strategy for estimating $\pi(y|\theta)$ provided that we can evaluate $\pi(y|x,\theta)$ and simulate realisations from $\pi(x|\theta)$. That is, simulate values $x_1,x_2,\ldots,x_n$ from $\pi(x|\theta)$ for some suitably large $n$, and then put

$\displaystyle \hat\pi(y|\theta) = \frac{1}{n}\sum_{i=1}^n \pi(y|x_i,\theta).$

It is clear by the law of large numbers that this estimate will converge to $\pi(y|\theta)$ as $n\rightarrow \infty$. That is, $\hat\pi(y|\theta)$ is a consistent estimate of $\pi(y|\theta)$. However, a moment's thought reveals that this estimate is not only consistent, but also unbiased, since each term in the sum has expectation $\pi(y|\theta)$. This simple Monte Carlo estimate of likelihood can therefore be substituted into a Metropolis-Hastings acceptance ratio without affecting the (marginal) target distribution of the Markov chain. Note that this estimate of marginal likelihood is sometimes referred to as the Rao-Blackwellised estimate, due to its connection with the Rao-Blackwell theorem.

### Importance sampling

Suppose now that we cannot sample values directly from $\pi(x|\theta)$, but can sample instead from a distribution $\pi'(x|\theta)$ having the same support as $\pi(x|\theta)$. We can then instead produce an importance sampling estimate for $\pi(y|\theta)$ by noting that

$\displaystyle \pi(y|\theta) = \int_X \pi(y|x,\theta)\frac{\pi(x|\theta)}{\pi'(x|\theta)}\pi'(x|\theta)\,dx.$

Consequently, samples $x_1,x_2,\ldots,x_n$ from $\pi'(x|\theta)$ can be used to construct the estimate

$\displaystyle \hat{\pi}(y|\theta) = \frac{1}{n}\sum_{i=1}^n \pi(y|x_i,\theta) \frac{\pi(x_i|\theta)}{\pi'(x_i|\theta)},$

which again is clearly both consistent and unbiased. This estimate is often written

$\displaystyle \hat{\pi}(y|\theta) = \frac{1}{n}\sum_{i=1}^n \pi(y|x_i,\theta) w_i,$

where $w_i=\pi(x_i|\theta)/\pi'(x_i|\theta)$. The weights, $w_i$, are known as importance weights.

### Importance resampling

An idea closely related to that of importance sampling is that of importance resampling, where importance weights are used to resample a sample in order to equalise the weights, often prior to a further round of weighting and resampling. The basic idea is to generate an approximate sample from a target density $\pi(x)$ using values sampled from an auxiliary distribution $\pi'(x)$, where we now suppress any dependence of the distributions on model parameters, $\theta$.

First generate a sample $x_1,\ldots,x_n$ from $\pi'(x)$ and compute weights $w_i=\pi(x_i)/\pi'(x_i),\ i=1,\ldots,n$. Then compute normalised weights $\tilde{w}_i=w_i/\sum_{k=1}^n w_k$. Generate a new sample of size $n$ by sampling $n$ times with replacement from the original sample with the probability of choosing each value determined by its normalised weight.

As an example, consider using a sample from the Cauchy distribution as an auxiliary distribution for approximately sampling standard normal random quantities. We can do this using a few lines of R as follows.

```
n=1000
xa=rcauchy(n)
w=dnorm(xa)/dcauchy(xa)
x=sample(xa,n,prob=w,replace=TRUE)
hist(x,30)
mean(w)
```

Note that we don't actually need to compute the normalised weights, as the `sample` function will do this for us.
Note also that the average weight will be close to one. It should be clear that the expected value of the weights will be exactly 1 when both the target and auxiliary densities are correctly normalised. Also note that the procedure can be used when one or both of the densities are not correctly normalised, since the weights will be normalised prior to sampling anyway. Note that in this case the expected weight will be the (ratio of) normalising constant(s), and so looking at the average weight will give an estimate of the normalising constant.

Note that the importance resampling procedure is approximate. Unlike a technique such as rejection sampling, which leads to samples having exactly the correct distribution, this is not the case here. Indeed, it is clear that in the $n=1$ case, the final sample will be exactly drawn from the auxiliary and not the target. The procedure is asymptotic, in that it improves as the sample size increases, tending to the exact target as $n\rightarrow \infty$.

We can understand why importance resampling works by first considering the univariate case, using correctly normalised densities. Consider a very large number of particles, $N$. The proportion of the auxiliary samples falling in a small interval $[x,x+dx)$ will be $\pi'(x)dx$, corresponding to roughly $N\pi'(x)dx$ particles. The weight for each of those particles will be $w(x)=\pi(x)/\pi'(x)$, and since the expected weight of a random particle is 1, the sum of all weights will be (roughly) $N$, leading to normalised weights for the particles near $x$ of $\tilde{w}(x)=\pi(x)/[N\pi'(x)]$. The combined weight of all particles in $[x,x+dx)$ is therefore $\pi(x)dx$. Clearly then, when we resample $N$ times we expect to select roughly $N\pi(x)dx$ particles from this interval. This corresponds to a proportion $\pi(x)dx$, corresponding to a density of $\pi(x)$ in the final sample. Obviously the above argument is very informal, but can be tightened up into a reasonably rigorous proof for the 1d case without too much effort, and the multivariate extension is also reasonably clear.

## The bootstrap particle filter

The bootstrap particle filter is an iterative method for carrying out Bayesian inference for dynamic state space (partially observed Markov process) models, sometimes also known as hidden Markov models (HMMs). Here, an unobserved Markov process, $x_0,x_1,\ldots,x_T$, governed by a transition kernel $p(x_{t+1}|x_t)$ is partially observed via some measurement model $p(y_t|x_t)$ leading to data $y_1,\ldots,y_T$. The idea is to make inference for the hidden states $x_{0:T}$ given the data $y_{1:T}$. The method is a very simple application of the importance resampling technique. At each time, $t$, we assume that we have an (approximate) sample from $p(x_t|y_{1:t})$ and use importance resampling to generate an approximate sample from $p(x_{t+1}|y_{1:t+1})$.

More precisely, the procedure is initialised with a sample from $x_0^k \sim p(x_0),\ k=1,\ldots,M$ with uniform normalised weights ${w'}_0^k=1/M$. Then suppose that we have a weighted sample $\{x_t^k,{w'}_t^k|k=1,\ldots,M\}$ from $p(x_t|y_{1:t})$. First generate an equally weighted sample by resampling with replacement $M$ times to obtain $\{\tilde{x}_t^k|k=1,\ldots,M\}$ (giving an approximate random sample from $p(x_t|y_{1:t})$). Note that each sample is independently drawn from $\sum_{i=1}^M {w'}_t^i\delta(x-x_t^i)$.
Next propagate each particle forward according to the Markov process model by sampling $x_{t+1}^k\sim p(x_{t+1}|\tilde{x}_t^k),\ k=1,\ldots,M$ (giving an approximate random sample from $p(x_{t+1}|y_{1:t})$). Then for each of the new particles, compute a weight $w_{t+1}^k=p(y_{t+1}|x_{t+1}^k)$, and then a normalised weight ${w'}_{t+1}^k=w_{t+1}^k/\sum_i w_{t+1}^i$.

It is clear from our understanding of importance resampling that these weights are appropriate for representing a sample from $p(x_{t+1}|y_{1:t+1})$, and so the particles and weights can be propagated forward to the next time point. It is also clear that the average weight at each time gives an estimate of the marginal likelihood of the current data point given the data so far. So we define

$\displaystyle \hat{p}(y_t|y_{1:t-1})=\frac{1}{M}\sum_{k=1}^M w_t^k$

and

$\displaystyle \hat{p}(y_{1:T}) = \hat{p}(y_1)\prod_{t=2}^T \hat{p}(y_t|y_{1:t-1}).$

Again, from our understanding of importance resampling, it should be reasonably clear that $\hat{p}(y_{1:T})$ is a consistent estimator of ${p}(y_{1:T})$. It is much less clear, but nevertheless true, that this estimator is also unbiased. The standard reference for this fact is Del Moral (2004), but this is a rather technical monograph. A much more accessible proof (for a very general particle filter) is given in Pitt et al (2011).

It should therefore be clear that if one is interested in developing MCMC algorithms for state space models, one can use a pseudo-marginal MCMC scheme, substituting in $\hat{p}_\theta(y_{1:T})$ from a bootstrap particle filter in place of $p(y_{1:T}|\theta)$. This turns out to be a simple special case of the particle marginal Metropolis-Hastings (PMMH) algorithm described in Andrieu et al (2010). However, the PMMH algorithm in fact has the full joint posterior $p(\theta,x_{0:T}|y_{1:T})$ as its target. I will explain the PMMH algorithm in a subsequent post.
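To make the algorithm concrete, here is a minimal R sketch of a bootstrap particle filter for a toy model — the model, the function name `bpf` and all parameter values are my own illustrative assumptions, not anything from the posts above:

```
# Toy model (assumed): x_0 ~ N(0,1), x_{t+1}|x_t ~ N(x_t,1), y_t|x_t ~ N(x_t,1)
bpf<-function(y,M=1000)
{
  x=rnorm(M)                 # sample from p(x_0); initial weights uniform
  ll=0                       # accumulates log phat(y_{1:T})
  for (t in 1:length(y)) {
    x=rnorm(M,x,1)           # propagate each particle forward
    w=dnorm(y[t],x,1)        # weight with the measurement density
    ll=ll+log(mean(w))       # log phat(y_t|y_{1:t-1}) = log of the average weight
    x=sample(x,M,prob=w,replace=TRUE)  # resample to equalise the weights
  }
  ll
}
y=rnorm(20)                  # synthetic data, purely for illustration
bpf(y)
```

Exponentiating the returned log-likelihood gives the estimate $\hat{p}(y_{1:T})$ discussed above, which is the quantity that can be plugged into a pseudo-marginal scheme.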
20/09/2010

## Motivation and background

In this post I will try and explain an important idea behind some recent developments in MCMC theory. First, let me give some motivation. Suppose you are trying to implement a Metropolis-Hastings algorithm, as discussed in a previous post (required reading!), but a key likelihood term needed for the acceptance ratio is difficult to evaluate (most likely it is a marginal likelihood of some sort). If it is possible to obtain a Monte-Carlo estimate for that likelihood term (which it sometimes is, for example, using importance sampling), one could obviously just plug it in to the acceptance ratio and hope for the best. What is not at all obvious is that if your Monte-Carlo estimate satisfies some fairly weak property then the equilibrium distribution of the Markov chain will remain exactly as it would be if the exact likelihood had been available. Furthermore, it is exact even if the Monte-Carlo estimate is very noisy and imprecise (though the mixing of the chain will be poorer in this case). This is the "exact approximate" pseudo-marginal MCMC approach. To give credit where it is due, the idea was first introduced by Mark Beaumont in Beaumont (2003), where he was using an importance sampling based approximate likelihood in the context of a statistical genetics example. This was later picked up by Christophe Andrieu and Gareth Roberts, who studied the technical properties of the approach in Andrieu and Roberts (2009). The idea is turning out to be useful in several contexts, and in particular, underpins the interesting new Particle MCMC algorithms of Andrieu et al (2010), which I will discuss in a future post. I've heard Mark, Christophe, Gareth and probably others present this concept, but this post is most strongly inspired by a talk that Christophe gave at the IMS 2010 meeting in Gothenburg this summer.

## The pseudo-marginal Metropolis-Hastings algorithm

Let's focus on the simplest version of the problem, where we want to sample from a target p(x) using a proposal q(x'|x). As explained previously, the required Metropolis-Hastings acceptance ratio will take the form

A = p(x')q(x|x') / [p(x)q(x'|x)].

Here we are assuming that p(x) is difficult to evaluate (usually because it is a marginalised version of some higher-dimensional distribution), but that a Monte-Carlo estimate of p(x), which we shall denote r(x), can be computed. We can obviously just substitute this estimate into the acceptance ratio to get

A = r(x')q(x|x') / [r(x)q(x'|x)],

but it is not immediately clear that this will lead to the Markov chain having an equilibrium distribution that is exactly p(x). It turns out that it is sufficient that the likelihood estimate r(x) is non-negative and unbiased, in the sense that E(r(x))=p(x), where the expectation is with respect to the Monte-Carlo error for a given fixed value of x. In fact, as we shall see, this condition is actually a bit stronger than is really required.

Put W=r(x)/p(x), representing the noise in the Monte-Carlo estimate of p(x), and suppose that W ~ p(w|x) (note that in an abuse of notation, the function p(w|x) is unrelated to p(x)). The main condition we will assume is that E(W|x)=c, where c>0 is a constant independent of x. In the case of c=1, we have the (typical) special case of E(r(x))=p(x). For now, we will also assume that W>=0, but we will consider relaxing this constraint later.

The key to understanding the pseudo-marginal approach is to realise that at each iteration of the MCMC algorithm a new value of W is being proposed in addition to a new value for x. If we regard the proposal mechanism as a joint update of x and w, it is clear that the proposal generates (x',w') from the density q(x'|x)p(w'|x'), and we can re-write our "approximate" acceptance ratio as

A = w'p(x')p(w'|x')q(x|x')p(w|x) / [wp(x)p(w|x)q(x'|x)p(w'|x')].

Inspection of this acceptance ratio reveals that the target of the chain must be (proportional to) p(x)wp(w|x). This is a joint density for (x,w), but the marginal for x can be obtained by integrating over the range of W with respect to w. Using the fact that E(W|x)=c, the integral is p(x)∫w p(w|x)dw = c p(x), which is clearly a density proportional to p(x), and this is precisely the target that we require. Note that for this to work, we must keep the old value of w from one iteration to the next – that is, we must keep and re-use our noisy r(x) value to include in the acceptance ratio for our next MCMC iteration – we should not compute a new Monte-Carlo estimate for the likelihood of the old state of the chain.

## Examples

We will consider again the example from the previous post – simulation of a chain with a N(0,1) target using uniform innovations.
Using R, the main MCMC loop takes the form

```
pmmcmc<-function(n=1000,alpha=0.5)
{
  vec=vector("numeric", n)
  x=0
  oldlik=noisydnorm(x)
  vec[1]=x
  for (i in 2:n) {
    innov=runif(1,-alpha,alpha)
    can=x+innov
    lik=noisydnorm(can)
    aprob=lik/oldlik
    u=runif(1)
    if (u < aprob) {
      x=can
      oldlik=lik   # keep and re-use the old noisy estimate - do not re-estimate it
    }
    vec[i]=x
  }
  vec
}
```

Here we are assuming that we are unable to compute dnorm exactly, but instead only a Monte-Carlo estimate called noisydnorm. We can start with the following implementation

```
noisydnorm<-function(z)
{
  dnorm(z)*rexp(1,1)
}
```

Each time this function is called, it will return a non-negative random quantity whose expectation is dnorm(z). We can now run this code as follows.

```
plot.mcmc<-function(mcmc.out)
{
  op=par(mfrow=c(2,2))
  plot(ts(mcmc.out),col=2)
  hist(mcmc.out,30,col=3)
  qqnorm(mcmc.out,col=4)
  abline(0,1,col=2)
  acf(mcmc.out,col=2,lag.max=100)
  par(op)
}
metrop.out<-pmmcmc(10000,1)
plot.mcmc(metrop.out)
```

MCMC output and convergence diagnostics

So we see that we have exactly the right N(0,1) target, despite using the (very) noisy noisydnorm function in place of the dnorm function. This noisy likelihood function is clearly unbiased. However, as already discussed, a constant bias in the noise is also acceptable, as the following function shows.

```
noisydnorm<-function(z)
{
  dnorm(z)*rexp(1,2)
}
```

Re-running the code with this function also leads to the correct equilibrium distribution for the chain. However, it really does matter that the bias is independent of the state of the chain, as the following function shows.

```
noisydnorm<-function(z)
{
  dnorm(z)*rexp(1,0.1+10*z*z)
}
```

Running with this function leads to the wrong equilibrium. However, it is OK for the distribution of the noise to depend on the state, as long as its expectation does not. The following function illustrates this.

```
noisydnorm<-function(z)
{
  dnorm(z)*rgamma(1,0.1+10*z*z,0.1+10*z*z)
}
```

This works just fine. So far we have been assuming that our noisy likelihood estimates are non-negative. This is clearly desirable, as otherwise we could wind up with negative Metropolis-Hastings ratios. However, as long as we are careful about exactly what we mean, even this non-negativity condition may be relaxed. The following function illustrates the point.

```
noisydnorm<-function(z)
{
  dnorm(z)*rnorm(1,1)
}
```

Despite the fact that this function will often produce negative values, the equilibrium distribution of the chain still seems to be correct! An even more spectacular example follows.

```
noisydnorm<-function(z)
{
  dnorm(z)*rnorm(1,0)
}
```

Astonishingly, this one works too, despite having an expected value of zero! However, the following doesn't work, even though it too has a constant expectation of zero.

```
noisydnorm<-function(z)
{
  dnorm(z)*rnorm(1,0,0.1+10*z*z)
}
```

I don't have time to explain exactly what is going on here, so the precise details are left as an exercise for the reader. Suffice to say that the key requirement is that it is the distribution of W conditioned to be non-negative which must have the constant bias property – something clearly violated by the final noisydnorm example.

Before finishing this post, it is worth re-emphasising the issue of re-using old Monte-Carlo estimates. The following function will not work (exactly), though in the case of good Monte-Carlo estimates it will often work tolerably well.
```
approxmcmc<-function(n=1000,alpha=0.5)
{
  vec=vector("numeric", n)
  x=0
  vec[1]=x
  for (i in 2:n) {
    innov=runif(1,-alpha,alpha)
    can=x+innov
    lik=noisydnorm(can)
    oldlik=noisydnorm(x)   # the old likelihood is re-estimated afresh each iteration - this is what breaks exactness
    aprob=lik/oldlik
    u=runif(1)
    if (u < aprob) {
      x=can
    }
    vec[i]=x
  }
  vec
}
```

In a subsequent post I will show how these ideas can be put into practice in the context of a Bayesian inference example.
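As a quick empirical check — a sketch, assuming the first noisydnorm and the plot.mcmc function defined above — one can run this approximate scheme and compare its diagnostics with the exact pseudo-marginal version:

```
noisydnorm<-function(z) { dnorm(z)*rexp(1,1) }
approx.out<-approxmcmc(10000,1)
plot.mcmc(approx.out)   # compare with the exact version; any departure from N(0,1)
                        # may be small here, but the scheme is no longer exact
```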
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 181, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280805587768555, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/111263-easy-answer-but-i-dont-get.html
# Thread: This is an easy answer but I don't get it

1. OK, $t_{ab}=ax+b$. I have to prove this is a subgroup of S4. Now, I've already proved closure by composing $t_{ab}=ax+b$ with $t_{cd}=cx+d$ to yield $(ac)x+(ad+b)$. Now, the identity wrt composition has $a=1$ and $b=0$. What is the inverse!? How would you find the inverse wrt composition here? Thanks!

2. Originally Posted by sfspitfire23 (quoted in full above)

A subgroup of...who?? Anyway, it must be that $T_{\alpha \beta}T_{ab}=x \Longrightarrow (a\alpha)x+(a\beta +b)=x$. Well, now just solve the above for $\alpha\,,\,\,\beta$.

Tonio

3. So there indeed will be 2 inverses?

4. Originally Posted by sfspitfire23: "So there indeed will be 2 inverses?"

Of course not: this is a group. What gave you that idea?

Tonio
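For completeness, solving Tonio's equations explicitly (a short worked step, not part of the original thread): $a\alpha = 1$ and $a\beta + b = 0$ give $\alpha = 1/a$ and $\beta = -b/a$, so each $t_{ab}$ has the single inverse $t_{ab}^{-1}(x) = \frac{x-b}{a}$ — two parameters, but only one inverse element, which is presumably the source of the "2 inverses" confusion.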
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433265924453735, "perplexity_flag": "middle"}
http://physics.aps.org/articles/v5/131
# Viewpoint: Modeling Quantum Field Theory

Jeff Steinhauer, Department of Physics, Technion–Israel Institute of Technology, Technion City, Haifa 32000, Israel

Published November 26, 2012 | Physics 5, 131 (2012) | DOI: 10.1103/Physics.5.131

An analog of the dynamical Casimir effect has been achieved, where phonons replace photons and thermal fluctuations replace vacuum fluctuations.

#### Acoustic Analog to the Dynamical Casimir Effect in a Bose-Einstein Condensate

J.-C. Jaskula, G. B. Partridge, M. Bonneau, R. Lopes, J. Ruaudel, D. Boiron, and C. I. Westbrook

Published November 26, 2012 | PDF (free)

Empty space is constantly fluctuating with virtual photons, which come into existence and vanish almost immediately. While these virtual photons are all around us, they cannot be observed directly. However, in a special kind of environment with spatial or temporal inhomogeneity, virtual photons can become real, observable photons by means of a variety of effects. Unfortunately, creating such environments can be exceedingly difficult. The challenge can be made easier by using a condensed-matter analog to the vacuum and its photon modes [1]. In Physical Review Letters, Jean-Christophe Jaskula and colleagues at the University of Paris-Sud, France, report that they have created such an analog for the dynamical Casimir effect, in which a rapidly changing resonator (Fig. 1) produces real particles [2]. In addition to being a condensed-matter system, their observation is an analogy in another way: The real particles they observe originate from thermal fluctuations rather than quantum fluctuations of the vacuum. Their work opens the door for the observation of the quantum vacuum version, in their condensed-matter analog system.

The phenomenon studied by Jaskula and co-workers was studied previously by Engels and colleagues [3], but the interpretation was strictly classical. The real particles created were referred to as Faraday waves, oscillatory patterns that appear at half of the driving frequency. Now, Jaskula and colleagues [2] show that the waves have pair correlations in momentum space, thus making the connection with quantum-mechanical pair production and the dynamical Casimir effect.

The real dynamical Casimir effect (not the analog effect) was observed in Ref. [4]. However, such observations of the production of real particles are scarce due to the experimental challenges. For each effect, including the dynamical Casimir effect, these challenges are formidable. In the Schwinger effect, for example, a homogeneous electric field can pull apart pairs of oppositely charged virtual particles [5]. The electric field should be strong enough to give an acceleration of $mc^3/\hbar$, where $m$ is the mass of the particles. Thus, to produce an electron-positron pair, an electric field of $10^{18}\ \mathrm{V/m}$ is required, giving an acceleration of $10^{29}\ \mathrm{m/s^2}$. To put this in perspective, if this acceleration were maintained in the laboratory reference frame, the electron would reach the speed of light from rest within a distance of $10^{-13}\ \mathrm{m}$.

The event horizon of a black hole can also convert pairs of virtual particles (such as photons) to real particles, which are referred to as Hawking radiation [6]. One of the members of the pair has negative energy, and the other positive. Within the event horizon, the negative energy photon of the virtual pair can exist indefinitely, allowing the positive energy photon to exist also. This real photon travels away from the black hole as Hawking radiation.
Unfortunately, the radiation is too weak to observe with current technology. Creating or finding a very small black hole would help the effort. On the other hand, virtual photons can be detected by accelerating the detector of the photons (the Unruh effect) [7]. In the reference frame of the detector, the virtual photons of the vacuum will appear to be a thermal distribution of real photons. In other words, the virtual photons are Doppler shifted into reality. A detector accelerating at $10^{20}\ \mathrm{m/s^2}$ would measure a radiation temperature of only $1\ \mathrm{K}$.

Another way to detect the virtual photons is to rapidly change the nature of the vacuum. In the dynamical Casimir effect, a resonator has a discrete spectrum of eigenmodes [8]. These modes are populated with the virtual vacuum fluctuations. One such mode is illustrated in Fig. 1. Suddenly, the length of the resonator is changed very rapidly, at a speed which is a significant fraction of the speed of light (the experimental challenge). The change is too fast to be adiabatic, so the population of the virtual vacuum fluctuations is amplified. The extra population consists of real, observable particles.

As we can see, it is a challenge to convert virtual particles into real, observable particles. In all cases, the experimental parameters which must be achieved are formidable. But what if we could replace the speed of light with the speed of sound? In a Bose-Einstein condensate, phonons could play the role of the photons, and the condensate itself could play the role of the quantum vacuum. This is the idea of the condensed-matter analog [1]. Following the suggestion of Carusotto et al. [9], Jaskula and colleagues used a cigar-shaped Bose-Einstein condensate as a resonator for the analog of the dynamical Casimir effect [2].

In the experiment of Jaskula et al., the Bose-Einstein condensate was confined by focused laser light. The atoms forming the condensate were attracted to the bright light like insects to a lamp. In one experiment, the authors suddenly increased the laser intensity by a factor of $2$, which caused an abrupt increase in the speed of sound in the condensate, and a sudden decrease in the resonator length, as indicated in Fig. 1. Each thermally populated mode was unable to follow the sudden change adiabatically. This resulted in the production of pairs of phonons with equal and opposite momenta, and a wide distribution of momenta was observed. In another experiment, the laser intensity was modulated sinusoidally, with a variation of about $10\%$. This resulted in pairs of phonons with frequencies equal to half of the modulation frequency, thus demonstrating the connection between the dynamical Casimir effect and parametric down-conversion of nonlinear optics [9].

The ongoing study of the dynamical Casimir effect is part of our effort to convince ourselves that empty space is truly filled with virtual particles. If they are really there, then we want to see them in the real vacuum, as well as in a Bose-Einstein condensate analog of vacuum.
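To put the quoted Unruh temperature in context, here is a quick back-of-envelope calculation (my own illustration, not part of the article, using the standard expression $T = \hbar a / (2\pi c k_B)$; all values in SI units):

```
hbar=1.055e-34   # reduced Planck constant, J s
c=3e8            # speed of light, m/s
kB=1.381e-23     # Boltzmann constant, J/K
a=1e20           # detector acceleration, m/s^2
hbar*a/(2*pi*c*kB)   # ~0.4 K, of the order of the 1 K quoted above
```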
### References

1. W. G. Unruh, "Experimental Black-Hole Evaporation?" Phys. Rev. Lett. 46, 1351 (1981).
2. J-C. Jaskula, G. B. Partridge, M. Bonneau, R. Lopes, J. Ruaudel, D. Boiron, and C. I. Westbrook, "Acoustic Analog to the Dynamical Casimir Effect in a Bose-Einstein Condensate," Phys. Rev. Lett. 109, 220401 (2012).
3. P. Engels, C. Atherton, and M. A. Hoefer, "Observation of Faraday Waves in a Bose-Einstein Condensate," Phys. Rev. Lett. 98, 095301 (2007).
4. C. M. Wilson, G. Johansson, A. Pourkabirian, M. Simoen, J. R. Johansson, T. Duty, F. Nori, and P. Delsing, "Observation of the Dynamical Casimir Effect in a Superconducting Circuit," Nature 479, 376 (2011).
5. R. Brout, S. Massar, R. Parentani, and Ph. Spindel, "A Primer for Black Hole Quantum Physics," Phys. Rep. 260, 329 (1995).
6. S. W. Hawking, "Black Hole Explosions?" Nature 248, 30 (1974).
7. W. G. Unruh, "Notes on Black-Hole Evaporation," Phys. Rev. D 14, 870 (1976).
8. V. V. Dodonov, "Current Status of the Dynamical Casimir Effect," Phys. Scr. 82, 038105 (2010).
9. I. Carusotto, R. Balbinot, A. Fabbri, and A. Recati, "Density Correlations and Analog Dynamical Casimir Emission of Bogoliubov Phonons in Modulated Atomic Bose-Einstein Condensates," Eur. Phys. J. D 56, 391 (2010).

### About the Author: Jeff Steinhauer

Jeff Steinhauer is an associate professor at the Technion – Israel Institute of Technology. He specializes in condensed matter aspects of Bose-Einstein condensation, such as the Josephson effect and various types of excitations. This combination is not a coincidence, considering the subjects of his postdoctoral and doctoral research. Specifically, he did postdocs in atomic physics at the Weizmann Institute in Rehovot, Israel, as well as at MIT. In his doctoral work, he studied the Josephson effect and vortices in superfluid helium-3 and helium-4 at Berkeley, although his Ph.D. was officially from UCLA.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9030414819717407, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/4799/how-to-count-photons?answertab=active
# How to count photons

How are photons counted? What is the experimental setup used to count photons from a laser or even a lamp? Of course, in the case of the lamp, I would be able to count only the photons that pass through an area sensor at a particular observation point. If at all possible, a setup that can be done at home is preferred--one that avoids expensive instruments as much as possible (I may be crazy to even think this is possible). The key point is: We can't count photons like we count sheep. So how do we infer from the effects of single photons and count from there? I can start with $E = \frac{hc}{\lambda}$, and all other manifestations of energy, like heat or motion.

- 2 Dear Kit, you might have a look at en.wikipedia.org/wiki/Photomultiplier , rp-photonics.com/photon_counting.html and the references on these sites. Greets – Robert Filter Feb 8 '11 at 9:39

A laser emits too many photons to count them individually, but you can measure the output power and divide it by the energy of a single photon. – gigacyan Feb 8 '11 at 9:58

## 2 Answers

A single photon can easily be detected by a photomultiplier. The basic idea is that a photon hitting a metal plate in the tube ejects an electron from the metal plate by the photoelectric effect. An electric field inside the photomultiplier then accelerates the electron until it slams into another metal plate, releasing a bunch of electrons. These are then accelerated to a third metal plate, etc. The end result is a sizable current we can measure. The Wikipedia article is quite good and has more detail. You might be able to build a crude one at home with a lot of dedication, but it's a delicate device requiring a vacuum and quality electronics. These devices work for IR to UV light.

We can also measure individual photons with a scintillation counter. I used these in a couple of undergraduate labs to detect x-ray radiation from nuclear processes. They work by detecting when a photon (usually high-energy) ionizes an atom in some particular substrate, so they're tuned to detect photons at certain ranges of frequencies. The scintillator does not directly detect these photons, but converts them to several lower-energy photons that we can detect with other means to infer the high-energy photons' presence, so you'll need some more electronics to go with it. Still, we were able to watch single-photon events get counted in lab. (Thanks dmckee for clarification in comments.)

The "rod" photoreceptors in your eye may be able to detect single photons. So you can detect single photons at home without any equipment, under the right circumstances.

One device you can fairly easily build at home is a cloud chamber, but it will detect mostly $\alpha$ and $\beta$ radiation rather than photons. However, you might see trails from high-energy photons (gamma radiation). There should be lots of sets of instructions on the web for how to build one.

- Nice, but I'm going to quibble about the scintillator: that's a mechanism to generate a bunch of lower energy (usually visible band) photons from one high energy one. Then you go about counting the visible photons with a PMT, MCP, or high QE photodiode. It might be better to talk about ionization detectors (Geiger tubes, proportional tubes and more sophisticated wire chambers) in that context. – dmckee♦ Feb 8 '11 at 20:40

@dmckee Yes, good point, thank you. – Mark Eichenlaub Feb 8 '11 at 20:44

1 The eye photoreceptors will react to a single photon (i.e. the rhodopsin in the rods), but the brain will not register it.
It needs around 5 or so to register in consciousness. – Gordon Feb 8 '11 at 22:04

@dmckee: So the scintillator is some sort of down-converter? – Kit Feb 9 '11 at 1:04

@Kit: Scintillators convert some of the energy lost by ionizing particles passing through them into light. High energy photons scatter electrons along their path, and it's their energy that the scintillator responds to. – dmckee♦ Feb 9 '11 at 2:31

Here is a reference for counting photons in a cavity with very sophisticated equipment: http://hal.archives-ouvertes.fr/docs/00/16/56/29/PDF/Gerlin_et_al_2.pdf

A good CCD camera basically works at the single photon level. Amateur astronomers who photograph faint stars by stacking images are working at this level. You can get more info from Sky & Telescope or Astronomy magazines.

- +1 for CCDs. Even the cheap ones can respond to single photons, but their quantum efficiency is not impressive. – dmckee♦ Feb 8 '11 at 20:41

Would it be possible to add a link showing how good a CCD camera is? That would answer the questioner's question on home brew single photon counters. I'll +1 when I see the addition. – Carl Brannen Feb 12 '11 at 0:44

The right graph shows a measurement that was taken on a camera with 1 electron per pixel read noise: physics.stackexchange.com/questions/12058/…. The gain used in this experiment wasn't maximum. If you can live with low dynamic range -- you'll have to if you want to see single photons -- you can increase the EM gain by maybe a factor 20. But then CIC (clock induced charge) is an important problem. If you can live with 100kHz readout speed you can use a cheap CCD camera. I tried this one: starlight-xpress.co.uk/products.htm. – whoplisp Jul 9 '11 at 10:00
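Following up gigacyan's comment above, here is a quick back-of-envelope calculation of the photon rate from a small laser (my own illustration; the power and wavelength values are assumptions):

```
P=1e-3          # optical power in watts (1 mW, assumed)
lambda=532e-9   # wavelength in metres (green laser, assumed)
h=6.626e-34     # Planck constant, J s
c=3e8           # speed of light, m/s
P*lambda/(h*c)  # ~2.7e15 photons per second - far too many to count individually
```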
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9245514273643494, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/121389/find-a-convex-hull-that-contains-given-points/121396
## Find a convex hull that contains given points?

Suppose that you have vectors $a_{1},...,a_{m}$ in $\mathbb{R}^n$. Can you choose $n$ of them, say $v_{1},...,v_{n}$, such that $a_{1},...,a_{m}$ belong to the convex hull of $(cn)v_{1},-(cn)v_{1},...,(cn)v_{n},-(cn)v_{n}$ for some constant $c$? My idea was to use the Gram-Schmidt process, where at step $i$ I choose the vector $u_{i}$ with the maximum Euclidean norm. The answer would be those $a_{i}$'s that maximize the Euclidean norm at each step. Any ideas? Thanks

- Even the case where we have $(c\cdot n)v_{1},-(c\cdot n)v_{1},...,(c\cdot n)v_{n},-(c\cdot n)v_{n}$ might help. Thanks! – unknown (yahoo) Feb 10 at 10:30

1 Except for Anton, the others didn't understand the question, I guess. First of all, $c$ must be independent of $n$. Also you may assume that $a_{1},...,a_{m}$ span $R^n$, and then you have to find (if they exist) $n$ (as the dimension) vectors $v_{1},...,v_{n}$ such that $a_{1},...,a_{m}$ belong to the convex hull of $(c\sqrt{n})v_{1},-(c\sqrt{n})v_{1},...$. Anton, the fact is the number of initial vectors is $m = O(n^2)$, so I want a sparse set of them... Thanks again! – unknown (yahoo) Feb 11 at 0:32

1 Can you do this for $m=n+1$? – Anton Petrunin Feb 11 at 2:18

1 Sure: since $a_{1},...,a_{n+1}$ span $R^n$ we get that $\lambda_{1}a_{1}+...+\lambda_{n} a_{n} + \lambda_{n+1}a_{n+1} =0$ where the scalars are not all zero. Then divide by the one with maximum absolute value, say with index max: $a_{max} = \sum_{i\neq max} \frac{\lambda_{i}}{\lambda_{max}}a_{i}$ with $|\frac{\lambda_{i}}{\lambda_{max}}|\leq 1$. Hence $a_{max}$ can be written as a convex combination of $nv_{1},-nv_{1},...,nv_{n},-nv_{n}$. Actually I want to solve it (if I can) for $(c\cdot n)v_{1},-(c\cdot n)v_{1},...$ – unknown (yahoo) Feb 11 at 7:07

I changed my answer. – Anton Petrunin Feb 11 at 19:41

## 3 Answers

Take $v_1,v_2,\dots,v_n$ which span a parallelepiped of maximal volume. If

$$a_i=x_1\cdot v_1+\dots+x_n\cdot v_n$$

then $|x_k|\le 1$; otherwise exchanging $v_k$ for $a_i$ would increase the volume. Hence $a_i$ belongs to the convex hull of $\{\pm n\cdot v_{i}\}$; i.e. $c=1$.

Below is the original answer to the original question.

If you agree to choose $N=\tfrac{n{\cdot}(n+1)}{2}$ points then you can get $c=1$. Take the ellipsoid of smallest volume which contains all $\{a_i\}$. You may assume that ${a_i}$ is in generic position; in this case at most $N$ of the points lie on the boundary of the ellipsoid; take them as $\{v_i\}$. If one of the $a_i$ does not lie in the convex hull of $\{\pm\sqrt{n}\cdot v_i\}$ then you can decrease the volume of the ellipsoid, by pushing it in one direction and expanding in all the orthogonal directions.

- Anton: The last version does not work since some $|x_k|$ could be (much) larger than $1$ and some (much) smaller. It is easy to construct examples with $n=2$ and $m=3$. – Misha Feb 11 at 22:33

2 @Misha: I do not think you are right; if $|x_i|>1$ once then you can increase the volume. – Anton Petrunin Feb 12 at 5:53

1 @Anton: Can you explain why by exchanging $u_{k}$ and $a_{i}$ the volume is increased? – unknown (yahoo) Feb 12 at 10:06

Observe that $v_{1},...,v_{n}$ are not orthonormal... – unknown (yahoo) Feb 12 at 10:18

1 The volume is the absolute value of the determinant of the matrix $A$ which has as columns the vectors $u_{1},...,u_{n}$.
If you change $u_{1}$ with $a_{i} = x_{1}u_{1}+...+x_{n}u_{n}$ (say $|x_{1}|\geq 1$) then the volume of the new parallelepiped will be $|x_{1}||det(A)|$ — is that right? I believe you are correct!!! – unknown (yahoo) Feb 12 at 10:29

This reduces to the problem of finding a set of linearly independent vectors with maximum cardinality. There are in general many such sets, but any of them is a solution if you pick $c$ large enough. Then the convex hull is an $n$-orthoplex (AKA cross-polytope). If you make $c$ big, it will include any set of points in the span of the set, including in particular $a_1,...,a_m$. Some googling reveals that an algorithm for finding such a set is here. Or are you trying to produce a solution with minimum $c$? This is a much more interesting question, by which I mean that I don't know the answer (:-).

- 1 I think the question is whether a constant $c$ exists for which a solution is possible for all choices of $m$, $n$, $a_1,\ldots,a_m$. – Yoav Kallus Feb 10 at 16:13

This is just to say that I don't think choosing the vectors with the maximum Euclidean norm will suffice. In the example below, the two longest vectors (red) are collinear, and the hull derived from them—regardless of $c$—will be a line segment, which cannot contain the original (green) points.

- the idea is to choose pairs $\pm v_i$ of maximal norm, IMHO... – Dima Pasechnik Feb 10 at 14:44

@Dima: Would not this example still lead to collinear $v_i$? – Joseph O'Rourke Feb 10 at 15:47
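The exchange argument in Anton Petrunin's answer can be written out as a one-line determinant identity (my own elaboration, not part of the thread): if $a_i = \sum_j x_j v_j$ and we substitute $a_i$ into the $k$-th slot, then by multilinearity

$$\det(v_1,\dots,v_{k-1},a_i,v_{k+1},\dots,v_n) = \sum_j x_j \det(v_1,\dots,v_{k-1},v_j,v_{k+1},\dots,v_n) = x_k \det(v_1,\dots,v_n),$$

since every term with $j\neq k$ has a repeated column and vanishes. So the swap scales the volume by $|x_k|$, and maximality of the volume forces $|x_k|\le 1$ for all $k$. Then $\sum_k |x_k| \le n$, and writing $a_i = \sum_k \frac{|x_k|}{n}\,\big(\operatorname{sgn}(x_k)\, n v_k\big)$ plus a leftover weight on $0 = \tfrac12(n v_1) + \tfrac12(-n v_1)$ exhibits $a_i$ as a convex combination of the points $\pm n v_k$.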
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9253247380256653, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/51523/translation-invariance-without-momentum-conservation
# Translation Invariance without Momentum Conservation?

Instead of the actual gravitational force, in which the two masses enter symmetrically, consider something like

$$\vec F_{ab} = G\frac{m_a m_b^2}{|\vec r_a - \vec r_b|^2}\hat r_{ab}$$

where $\vec F_{ab}$ is the force on particle $a$ due to particle $b$ and the units of $G$ have been adjusted. Whenever the masses are unequal, the forces are not equal and opposite, violating Newton's third law and conservation of momentum in the process. As momentum conservation has been violated, my understanding is that translation invariance should be violated as well by this force. But the force law still depends only on separations rather than absolute coordinates, so the physics seems to be translation invariant. What am I getting wrong?

- Why is there a square of $m_b$ in your equation? – hwlau Jan 18 at 4:10

3 @hwlau it's postulating an alternate form for the gravitational force, just to have an example of an asymmetric force for the question. – David Zaslavsky♦ Jan 18 at 4:24

Dear Mike, off-topic. I am not 100% sure but I think that the vectors with indices have smaller arrows above the letter only, as in $\vec r_a$ (simply $\backslash{\rm vec}\,\,r\underline{ }a$) and not a big arrow above everything which you achieved by braces around $r_a$. – Luboš Motl Jan 18 at 6:07

Thanks Lubos, I agree this looks more normal. Having thought about it, $\hat{r_{ab}}$ was probably also non-standard. – Mike Jan 18 at 6:22

## 2 Answers

Momentum conservation doesn't automatically follow from translation invariance. That only happens because of special features of physical laws, so if you want to prove that translation invariance implies conservation of momentum, you'll need to use some principles of physics to do it. Make up new laws that break those principles and you can indeed have translation invariance without momentum conservation.

The principle you need is called "least action". You need to be able to write the physical law in a way so that it's minimizing something, like, for example, light taking the fastest path between points (and minimizing travel time). This is a simple example of what to minimize; others are more complex. In general we create a function called the action that takes as its inputs the history of the entire physical system over some time and outputs a number. Whichever motion of the system minimizes the action subject to some boundary conditions is the true motion. Most physical laws can be written this way, including Newtonian gravity.

Your law, though, can't. The reason is we can't come up with an action that makes sense. If the formula for it involves $m_a m_b^2$, well, one problem is that the universe has no way to decide which mass is which, so it would be very strange indeed! Even if there were a way to decide the one on the left is "b", for example, we'd be stuck with that. That one on the left would always be the one that's squared. In your proposed law, we always square the "to" mass, but the action formula, even if it knows the masses are different, has no idea which one is "from" and "to", because it can only see the entire system. You couldn't get from least action to a way to treat the masses with this particular asymmetry.

So you're right. That law does violate conservation of momentum and it does have translation invariance, but the piece you're missing is that it's a strange law that doesn't obey some basic rules that real laws do.
The most-accessible introduction to these ideas is in Feynman's The Character of Physical Law, or this lecture he gave: http://www.youtube.com/watch?v=zQ6o1cDxV7o The argument of interest comes near the end, 45 or 50 minutes in.

- 2 While I agree with the answer, I would be interested in knowing: 1) How can we prove that his postulate doesn't correspond to a Lagrangian? (without using a circular argument) 2) I am not sure what physical "sense" action has even in systems that have one. Would like to hear some elaboration on that. – Sankaran Jan 18 at 4:58

Can you explain where the argument I already gave is circular? It's hand-wavy, but I think it is not logically flawed and it's obvious enough how to be more precise if you wish. – Mark Eichenlaub Jan 18 at 5:11

1 – Mark Eichenlaub Jan 18 at 5:13

I mean the argument that the universe cannot decide which one is which sounds right, but I don't see how that implies a lack of action (purely in terms of a mathematical/logic conclusion). The left-right argument seems to invoke lack of translational symmetry, which is what we are after showing(?) – Sankaran Jan 18 at 5:14

1 No, the left-right argument has nothing to do with translational symmetry. I don't see how you get that. It's just saying there's a function L(G,m_a,m_b,r_a,r_b,v_a,v_b,t) and we look at things like $\partial L/\partial r_a$. Here m_a is a constant. If the formula has m_a^2 in it, it will be that particular mass that is squared when we look at the equation of motion for a or for b. There's nothing in this about translations. – Mark Eichenlaub Jan 18 at 5:17

Your forces are always equal. It is the accelerations that are unequal in the case of unequal masses. The situation is similar to the Coulomb interaction. The total momentum is conserved. There is no problem here.

EDIT: As Michael Brown kindly pointed out, the forces are implied to be different. Then indeed momentum conservation does not hold. The situation is similar to that with a known motion of a "sourcing body" $\vec{r}_b (t)$: although the force on a probe body at $\vec{r}_a$ depends only on the relative distance $|\vec{r}_a - \vec{r}_b(t)|$, the momentum is not conserved (neither is the energy).

- 4 -1 because you apparently didn't notice the $m_b^2$ factor that was the focus of the question. The OP isn't asking about Newton's law applied to bodies with different masses, he is proposing a new law where the forces are unequal as a counter-example to the claim that translation invariance alone is sufficient to imply momentum conservation. – Michael Brown Jan 18 at 13:54

@MichaelBrown: Indeed, if $\vec{F}_{ab}\ne \vec{F}_{ba}$, then it is a different situation. Then it is very similar to a particle in an external force where the momentum is not conserved. – Vladimir Kalitvianski Jan 18 at 13:58

Removed the -1 now. – Michael Brown Jan 19 at 12:45
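A compact way to connect the two answers (my own sketch, not part of the thread): suppose the force law came from some translation-invariant two-body potential $V(\vec r_a - \vec r_b)$. Then

$$\vec F_{ab} = -\nabla_{\vec r_a} V(\vec r_a - \vec r_b) = +\nabla_{\vec r_b} V(\vec r_a - \vec r_b) = -\vec F_{ba},$$

so the forces are automatically equal and opposite and total momentum is conserved. The proposed law has $|\vec F_{ab}| \propto m_a m_b^2$ but $|\vec F_{ba}| \propto m_b m_a^2$, different magnitudes for unequal masses, so no such potential — and hence no standard action — can produce it, consistent with the least-action argument above.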
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492945671081543, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/55892-show-has-no-proper-subgroups-finite-index-print.html
# Thread: show has no proper subgroups of finite index

• October 26th 2008, 05:27 PM — mandy123

Show that Q, the group of rational numbers under addition, has no proper subgroups of finite index, but Z has.

• October 26th 2008, 07:36 PM — NonCommAlg

Quote: Originally Posted by mandy123 (quoted in full above)

Let N be a subgroup of $\mathbb{Q}$ with $[\mathbb{Q}:N]=n.$ Since $(\mathbb{Q},+)$ is abelian, N is normal and hence $\mathbb{Q}/N$ is a group of order $n.$ So: $\mathbb{Q}=n\mathbb{Q} \subseteq N \subseteq \mathbb{Q}.$ Thus: $N=\mathbb{Q}. \ \ \Box$

Is this really not in your lecture notes, that every nonzero subgroup of $\mathbb{Z}$ has finite index?!!!
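Unpacking the two facts used in this answer (my own elaboration, not part of the thread): since $\mathbb{Q}/N$ has order $n$, Lagrange's theorem gives $n(x+N)=N$ for every $x\in\mathbb{Q}$, i.e. $nx\in N$, so $n\mathbb{Q}\subseteq N$; and $n\mathbb{Q}=\mathbb{Q}$ because every $x\in\mathbb{Q}$ can be written as $x=n\cdot\frac{x}{n}$ with $\frac{x}{n}\in\mathbb{Q}$. For the $\mathbb{Z}$ part, $n\mathbb{Z}$ is a proper subgroup of index $n$ for any $n\ge 2$.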
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277861714363098, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/236319/first-order-differential-equation
# First order differential equation

I'm clueless on how to solve the following question...

$xe^y\frac {dy}{dx} = e^y +1$

What I've done is...

$\frac {dy}{dx} = \frac 1x + \frac {1}{xe^y}; \frac {dy}{dx} - \frac {1}{xe^y} = \frac 1x$

Find the integrating factor...

$v(x) = e^{P(x)}; \text{ where } P(x) = \int p(x)dx \Rightarrow P(x) = \int \frac 1x dx = \ln|x| \\v(x) = e^{P(x)} = e^{\ln|x|} = x; \\ y = \frac {1}{v(x)} \int v(x)q(x) dx = \frac 1x \int {x}{\frac 1x} dx = 1+c$

I know I made a mistake somewhere. Would someone advise me on this??

- solve rather $x\,dz/dx=z+1$ and then set $z=e^y$ – user8268 Nov 13 '12 at 9:50

## 1 Answer

You have $xe^y\frac{dy}{dx}=e^y+1$, so, by dividing both sides by $x$ and by $e^y+1$, we get

$$\begin{align*}\frac{e^y}{e^y+1}\frac{dy}{dx}=\frac1x\hspace{5pt}&\Rightarrow \hspace{5pt}\frac{e^y}{e^y+1}dy=\frac1xdx\hspace{5pt}\Rightarrow \hspace{5pt}\int\frac{e^y}{e^y+1}dy=\int\frac1xdx \\ &\Rightarrow\hspace{5pt}\ln(e^y+1)=\ln (C|x|)\hspace{5pt}\Rightarrow \hspace{5pt}e^y+1=C|x|\end{align*}$$
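To finish the computation off (my own final step, assuming $C|x|>1$ on the interval of interest), the solution can be made explicit:

$$y = \ln\big(C|x| - 1\big),$$

and differentiating confirms it: $\frac{dy}{dx} = \frac{C|x|/x}{C|x|-1} = \frac{e^y+1}{x e^y}$, which is the original equation rearranged.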
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380807876586914, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php/Newton's_Basin
# Newton's Basin

### From Math Images

Newton's Basin
Fields: Fractals and Calculus
Image Created By: Ashley T.
Website: Fractal Foundation

Newton's Basin is a visual representation of Newton's Method, which is a procedure for estimating the root of a function.

# Basic Description

Animation Emphasizing Roots

This image is one of many examples of Newton's Basin or Newton's Fractal. Newton's Basin is based on a calculus technique called Newton's Method, a procedure Newton developed to estimate roots (or solutions) of equations. Each pixel in a Newton's Basin corresponds to a unique coordinate, or point. The colors in a Newton's Basin usually correspond to each individual root of the equation, and can be used to infer where each root is located. Each color region reflects the set of points which, after undergoing iteration with the equation describing the fractal, will eventually get closer and closer to the value of the root associated with that color. The animation emphasizes the roots in a Newton's Basin, whose equation clearly has three roots. The image featured at the top of this page is also a Newton's Basin with three roots.

# A More Mathematical Explanation

Note: understanding of this explanation requires: *Calculus

The image at the top of this page is a visual representation of Newton's Method in calculus expanded into the complex plane.

### Newton's Method

Newton's Method in calculus is a procedure to find roots of polynomials, using an estimated value as a starting point. Newton devised an iterated method (animated to the right) with the following steps:

1. Estimate a starting x-value ($x_0$) on the graph near to the root
2. Find the tangent line at that starting x-value
3. Find the root of the tangent line
4. Using the tangent's root as the new starting x-value ($x_{n}, x_{n+1},...$), iterate the method to find a better estimate

The results of this method lead to very close estimates of the root of the polynomial. Newton's Method can also be expressed algebraically as follows, where $x_n$ is the nth estimate:

$f'(x_n) = \frac{\mathrm{\Delta y}}{\mathrm{\Delta x}} = \frac{0 - y_n}{x_{n+1} - x_n}$

$f'(x_n) = \frac{f(x_n)}{x_n - x_{n+1}}$

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$

### Newton's Basin

Newton Basin with 5 Roots

To produce an interesting fractal, the Newton Method needs to be extended to the complex plane. Newton's Basin is created using a complex polynomial (that is, a polynomial whose coefficients may be complex), such as $p(z) = z^3 - 2z + 2$, where z is in the form a + bi, with real and/or complex roots. In addition, each root in a Newton's Basin fractal is usually given a distinctive color. Thus, the fractal on the right is generated by a polynomial with a total of five roots colored magenta, yellow, red, green, and blue. Every pixel in the image represents a complex number. Each complex number is applied to the equation and iterated continually, with the output of the previous iteration becoming the input of the next iteration.
This iteration is done by using the same equations discussed in the previous Newton's Method section, where x is now a complex number z, y is now a complex number p, and $z_n$ is the nth estimate:

$f'(z_n) = \frac{\mathrm{\Delta p}}{\mathrm{\Delta z}} = \frac{f(z_n)}{z_n - z_{n+1}}$

$z_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}$

#### Coloring

$f(z) = z^5 - 1$

If the iterations lead the complex number to converge towards a particular root, the pixel is colored according to the color of that root. If the iterations lead to a loop and not a root, then the pixel is colored black because the complex number does not converge. Each root has a set of complex numbers (or pixels) that converge to the root (algebraically, this set would include all of the $z_0$ values referenced above). This set of coordinates is called the root's basin of attraction, which is where the name of this fractal comes from. In addition, some images include shading in each basin. The shading is determined by the number of iterations it takes each pixel to converge to its root, and it allows us to see the location of the root more clearly. The darker the shading of a pixel is, the more iterations it requires for that pixel to converge to its respective root.

#### An Example

For example, the image below, as well as the image at the top of the page, was created from the equation $p(z) = z^3 - 2z + 2$. Since this equation is a 3rd degree complex polynomial, it has three roots, two of which are complex:

$z_1 = -1.7693$
$z_2 = 0.8846 + 0.5897i$
$z_3 = 0.8846 - 0.5897i$

The resulting map of these solutions is shown to the right. You can see that the Newton's Basin created from this complex polynomial has three roots (yellow, blue, and green) that correspond to the solution map.

#### Self-Similarity

As with all other fractals, Newton's Basin exhibits self-similarity. The video below is an interactive representation of the continual self-similarity displayed by the Newton's Basin shown in an above section with a root degree of 5, $f(z) = z^5 - 1$. Towards the end of the video, you will notice that the pixels are no longer adequate to continue magnifying the image...however, the fractal still goes on.
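The colouring procedure described above is easy to try out. Below is a minimal R sketch (R matches the code used elsewhere in this document; everything in it is my own illustration, not part of the original page) that generates the basins of $p(z) = z^3 - 2z + 2$ on a grid, colouring each point by the root its Newton iteration approaches and leaving non-converging points blank:

```
p<-function(z) z^3 - 2*z + 2
dp<-function(z) 3*z^2 - 2
roots=polyroot(c(2,-2,0,1))   # coefficients in ascending order: 2 - 2z + z^3
n=400
xs=seq(-2,2,length.out=n)
z=outer(xs,xs,function(x,y) complex(real=x,imaginary=y))
for (i in 1:50) z=z-p(z)/dp(z)          # Newton iteration on every pixel at once
d=sapply(roots,function(r) abs(z-r))    # distance of each pixel to each root
basin=apply(d,1,which.min)              # index of the nearest root
basin[abs(p(z))>1e-6]=NA                # non-converging pixels (attracted to a cycle) left blank
image(xs,xs,matrix(basin,n,n),col=c("yellow","blue","green"))
```

The blank pixels correspond to the black regions described above: for this particular polynomial, Newton's iteration sends some starting points into the cycle $0 \to 1 \to 0$ rather than to a root.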
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123554229736328, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/1975/is-a-smooth-closed-surface-in-euclidean-3-space-rigid
## Is a smooth closed surface in Euclidean 3-space rigid?

Classical theorem of Cohn-Vossen: A closed convex surface in Euclidean 3-space cannot be deformed isometrically. Robert Connelly found an example of a polyhedral surface that can be deformed isometrically. A metal hinged model of it can be found at IHES. But what about an arbitrary not-necessarily-convex smooth closed surface? Is it necessarily rigid? Or maybe it might be possible to make a smooth version of Connelly's example? It's easy to make smooth "hinges". The real challenge is finding a smooth model of the vertices, which is where two or more hinges meet.

## 2 Answers

On the Springer Online Encyclopedia there's a relevant article here: http://eom.springer.de/T/t092810.htm It says that a theorem due to Kuiper in 1955 implies that no smooth closed surface in R^3 is C^1-isometrically rigid. I think the reference is: N.H. Kuiper, On C^1-isometric embeddings, Indag. Math. XVII, (1954) 545-556 and 683-689. On the other hand it says that nothing is known for the C^2 case, and a book on Open Problems in Geometry also says that as of 1994 it is still open, see http://books.google.fr/books?id=S5CD-YceX6QC&pg=PA62 As for polyhedra, Schlenker has a rigidity criterion for non-convex ones, preprint here: http://www.math.univ-toulouse.fr/~schlenker/texts/rcnp.pdf (sadly lacks the drawings; the published reference is Discrete and Computational Geometry, 33 (2005):2, 207-221). (I'm no expert on this, just some googling).

- By the way, the infinitesimal rigidity criterion mentioned here is still partly open. It states that any convex polyhedron in $R^3$ with vertices in convex position which is "decomposable" (can be cut in convex polyhedra with no new vertex) is infinitesimally rigid. With Ivan Izmestiev we proved that this holds under an additional hypothesis of "codecomposability". Without this additional hypothesis it's unknown whether the criterion holds. – Jean-Marc Schlenker Jul 16 2011 at 5:43

Apparently the question is still open for smooth enough surfaces and deformations (that is, at least $C^2$). Mike Anderson wrote a preprint claiming to prove local rigidity of smooth enough surfaces, but it was later withdrawn. Idjad Sabitov and his collaborators have been working on this question, developing for instance a theory of higher-order isometric deformations, see e.g. Sabitov, I. Kh. Local theory of bendings of surfaces [MR1039820 (91c:53004)]. Geometry, III, 179–256, Encyclopaedia Math. Sci., 48, Springer, Berlin, 1992. He conjectures that local rigidity holds for analytic surfaces.

- Thanks for these references. I did not know about Mike Anderson's attempt. – Deane Yang Jul 16 2011 at 12:25

1 It is worth noting that the smooth case is equivalent to the local uniqueness of a solution to a PDE of mixed type, which is an extremely difficult question. Even if you try to take a more geometric approach, you have to confront the difficulties implied by this. Basically, the way a positively curved surface bends is different from how a negatively curved surface bends, and you have to somehow match up the bending where curvature changes sign. The only way I can see around this is to find some miraculous integral identity that implies rigidity.
But non-rigidity seems more interesting to me. – Deane Yang, Jul 16 2011 at 12:31

Right -- but it depends how one looks at it. One possibility (used by Anderson, for instance, in the preprint mentioned above) is to consider deformations of the metric in the domain bounded by the surface, under the condition that it remains flat (or of constant curvature). In this setting the PDE is elliptic, but it's the boundary condition (on the surface) which is not too well behaved. – Jean-Marc Schlenker, Jul 16 2011 at 20:19

Yes, that was a cool idea by Anderson, but in the end you can't escape the non-ellipticity and its consequences. I often wish mathematicians could get at least some credit for cool ideas, even if they don't work. – Deane Yang, Jul 17 2011 at 21:24

I suspect the idea is older than Anderson's preprint. It comes up naturally if you think of the infinitesimal rigidity of convex surfaces in Euclidean (or hyperbolic) 3-space and try to extend it either to hyperbolic 3-manifolds with smooth, strictly convex boundary -- Thurston had conjectured that they are determined by the induced metric on the boundary, and it turned out to be true -- or in higher dimensions to Einstein metrics with convex boundary on the ball (there the question is still open, I think). I think people trying to prove rigidity of hyperbolic convex cores had tried this. – Jean-Marc Schlenker, Jul 18 2011 at 17:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363179802894592, "perplexity_flag": "head"}
http://www.citizendia.org/Quantum_mechanics
For a generally accessible and less technical introduction to the topic, see Introduction to quantum mechanics.

Quantum mechanics (QM or quantum theory) is a physical science dealing with the behavior of matter and energy on the scale of atoms and subatomic particles.

Fig. 1: The wavefunctions of an electron in a hydrogen atom possessing definite energy (increasing downward: n = 1, 2, 3, ...) and angular momentum (increasing across: s, p, d, ...). Brighter areas correspond to higher probability density for a position measurement. Wavefunctions like these are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are indeed modes of oscillation as well: they possess a sharp energy and thus a sharp frequency. The angular momentum and energy are quantized, and take on only discrete values like those shown (as is the case for resonant frequencies in acoustics).

Quantum mechanics is the study of mechanical systems whose dimensions are close to or below the atomic scale, such as molecules, atoms, electrons, protons and other subatomic particles.
Quantum mechanics is a fundamental branch of physics with wide applications. Quantum theory generalizes classical mechanics and provides accurate descriptions for many previously unexplained phenomena such as black body radiation and stable electron orbits. The effects of quantum mechanics are typically not observable on macroscopic scales, but become evident at the atomic and subatomic level. There are, however, exceptions to this rule, such as superfluidity.

## Overview

The word "quantum" comes from the Latin for "how much" or "amount". In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that waves have discrete energy packets (called quanta) that behave in a manner similar to particles led to the branch of physics dealing with atomic and subatomic systems which we today call quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational chemistry, quantum chemistry, particle physics, and nuclear physics.
The foundations of quantum mechanics were established during the first half of the twentieth century by Werner Heisenberg, Max Planck, Louis de Broglie, Albert Einstein, Niels Bohr, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Wolfgang Pauli and others. Some fundamental aspects of the theory are still actively studied.

Quantum mechanics is essential for understanding the behavior of systems at atomic length scales and smaller. For example, if Newtonian mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus, making stable atoms impossible. However, in the natural world electrons normally remain in stable orbitals around the nucleus, in apparent defiance of classical electromagnetism.
Quantum mechanics was initially developed to provide a better explanation of the atom, especially the spectra of light emitted by different atomic species. The quantum theory of the atom was developed as an explanation for the electron's remaining in its orbital, which could not be explained by Newton's laws of motion or by Maxwell's laws of classical electromagnetism.

In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function (sometimes referred to as orbitals in the case of atomic electrons), and more generally by elements of a complex vector space. This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, with arbitrary accuracy. For instance, electrons may be considered to be located somewhere within a region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. It should be stressed that the electron itself is not spread out over such cloud regions; it is either in a particular region of space, or it is not. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle.

The other exemplar that led to quantum mechanics was the study of electromagnetic waves such as light.
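As a rough numerical illustration of the uncertainty principle (a minimal sketch, not from the article; the 0.1 nm confinement width below is an assumed, atom-scale value), the relation Δx Δp ≥ ħ/2 gives a lower bound on the momentum spread of a confined electron:

```python
# Minimal sketch: lower bound on momentum spread from Delta_x * Delta_p >= hbar/2.
# The 0.1 nm confinement width is an assumed, atom-scale value.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

delta_x = 1e-10                 # assumed position uncertainty: 0.1 nm
delta_p = hbar / (2 * delta_x)  # minimum momentum uncertainty, kg*m/s
v = delta_p / m_e               # corresponding velocity spread, m/s

print(f"delta_p >= {delta_p:.3e} kg m/s  (velocity spread ~ {v:.3e} m/s)")
```

The resulting velocity spread, of order 10^5 m/s, is why a classical picture of a sharply localized, slowly moving electron inside an atom cannot be maintained.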
When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or quanta, Albert Einstein exploited this idea to show that an electromagnetic wave such as light could be described by a particle called the photon, with a discrete energy dependent on its frequency. This led to a theory of unity between subatomic particles and electromagnetic waves, called wave–particle duality, in which particles and waves were neither one nor the other, but had certain properties of both. While quantum mechanics describes the world of the very small, it is also needed to explain certain "macroscopic quantum systems" such as superconductors and superfluids.

Broadly speaking, quantum mechanics incorporates four classes of phenomena that classical physics cannot account for: (i) the quantization (discretization) of certain physical quantities, (ii) wave–particle duality, (iii) the uncertainty principle, and (iv) quantum entanglement. Each of these phenomena is described in detail in subsequent sections.

## History

Main article: History of quantum mechanics

The history of quantum mechanics began essentially with the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck that any energy is radiated and absorbed in quantities divisible by discrete "energy elements" E, each proportional to the frequency ν with which it radiates energy, as defined by the following formula:

$E = h \nu = \hbar \omega$

where h is Planck's constant.
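As a concrete instance of the Planck relation (a minimal sketch, not from the article; the 500 nm wavelength below is an assumed value, typical of green visible light), the energy of a single photon follows from E = hν with ν = c/λ:

```python
# Minimal sketch: photon energy from the Planck relation E = h*nu, with nu = c/lambda.
# The 500 nm wavelength is an assumed value (green visible light).
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light in vacuum, m/s

wavelength = 500e-9   # assumed wavelength, m
nu = c / wavelength   # frequency, Hz
E = h * nu            # photon energy, J
print(f"nu = {nu:.3e} Hz, E = {E:.3e} J = {E / 1.602176634e-19:.2f} eV")
```

The answer, roughly 2.5 eV per photon, is the scale of energy that matters in the photoelectric effect discussed next.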
Although Planck insisted that this was simply an aspect of the absorption and radiation of energy and had nothing to do with the physical reality of the energy itself, in 1905, to explain the photoelectric effect (first observed in 1839), i.e. that shining light on certain materials can eject electrons from the material, Albert Einstein postulated, based on Planck's quantum hypothesis, that light itself consists of individual quanta, which later came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing and testing, and thus the entire field of quantum physics.

## Relativity and quantum mechanics

The modern world of physics is notably founded on two tested and demonstrably sound theories of general relativity and quantum mechanics—theories which appear to contradict one another. The defining postulates of both Einstein's theory of relativity and quantum theory are indisputably supported by rigorous and repeated empirical evidence. However, while they do not directly contradict each other theoretically (at least with regard to primary claims), they are resistant to being incorporated within one cohesive model.

Einstein himself is well known for rejecting some of the claims of quantum mechanics. While clearly inventive in this field, he did not accept the more philosophical consequences and interpretations of quantum mechanics, such as the lack of deterministic causality and the assertion that a single subatomic particle can occupy numerous areas of space at one time. He was also the first to notice some of the apparently exotic consequences of entanglement and used them to formulate the Einstein-Podolsky-Rosen paradox, in the hope of showing that quantum mechanics has unacceptable implications.
This was in 1935, but in 1964 John Bell showed (see Bell inequality) that Einstein's assumption that quantum mechanics is correct, but has to be completed by hidden variables, was based on wrong philosophical assumptions. According to Bell's paper and the Copenhagen interpretation (the common interpretation of quantum mechanics by physicists for decades), and contrary to Einstein's ideas, quantum mechanics is

• neither a "realistic" theory (since quantum measurements do not read off pre-existing properties, but rather prepare properties)
• nor a local theory (essentially not, because the state vector $|\psi\rangle$ determines the probability amplitudes at all sites simultaneously, $|\psi\rangle\to\psi(\mathbf r), \forall \mathbf r$).

The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments by which one can measure the state of one particle and instantaneously change the state of its entangled partner, although the two particles can be an arbitrary distance apart; however, this effect does not violate causality, since no transfer of information happens. These experiments are the basis of one of the most topical applications of the theory, quantum cryptography, which works well, although only at small distances (typically ${\le}$ 1000 km), and has been on the market since 2004.

There do exist quantum theories which incorporate special relativity—for example, quantum electrodynamics (QED), which is currently the most accurately tested physical theory [1]—and these lie at the very heart of modern particle physics. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those applications. However, the lack of a correct theory of quantum gravity is an important issue in cosmology.

## Attempts at a unified theory

Main article: Quantum gravity

Inconsistencies arise when one tries to join the quantum laws with general relativity, a more elaborate description of spacetime which incorporates gravitation.
Resolving these inconsistencies has been a major goal of twentieth- and twenty-first-century physics. Many prominent physicists, including Stephen Hawking, have labored in the attempt to discover a "Grand Unification Theory" that combines not only different models of subatomic physics, but also derives the universe's four forces—the strong force, electromagnetism, the weak force, and gravity—from a single force or phenomenon.

## Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to a very high degree of accuracy. Thus, the current logic of the correspondence principle between classical and quantum mechanics is that all objects obey the laws of quantum mechanics, and classical mechanics is just the quantum mechanics of large systems (or a statistical quantum mechanics of a large collection of particles). The laws of classical mechanics thus follow from the laws of quantum mechanics in the limit of large systems or large quantum numbers.

The main differences between classical and quantum theories have already been mentioned above in the remarks on the Einstein-Podolsky-Rosen paradox. Essentially, the difference boils down to the statement that quantum mechanics is coherent (addition of amplitudes), whereas classical theories are incoherent (addition of intensities). Thus, quantities such as coherence lengths and coherence times come into play. For microscopic bodies the extension of the system is certainly much smaller than the coherence length; for macroscopic bodies one expects it to be the other way round.
This is in accordance with the following observations: many "macroscopic" properties of "classical" systems are direct consequences of the quantum behavior of their parts. For example, the stability of bulk matter (which consists of atoms and molecules that would quickly collapse under electric forces alone), the rigidity of this matter, and its mechanical, thermal, chemical, optical and magnetic properties are all results of the interaction of electric charges under the rules of quantum mechanics.

Because the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with extremely fast-moving or extremely tiny particles, the laws of classical "Newtonian" physics remain accurate in predicting the behavior of surrounding ("large") objects—of the order of the size of large molecules and bigger. Despite the proposal of many novel ideas, the unification of quantum mechanics—which reigns in the domain of the very small—and general relativity—a superb description of the very large—remains, tantalizingly, a future possibility. (See quantum gravity, string theory.)

## Theory

There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory proposed by the Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics: matrix mechanics (invented by Werner Heisenberg)[2] and wave mechanics (invented by Erwin Schrödinger). In this formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum.
Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom). Generally, quantum mechanics does not assign definite values to observables. Instead, it makes predictions about probability distributions; that is, the probability of obtaining each of the possible outcomes from measuring an observable. Naturally, these probabilities will depend on the quantum state at the instant of the measurement.

There are, however, certain states that are associated with a definite value of a particular observable. These are known as "eigenstates" of the observable ("eigen" can be roughly translated from German as inherent or characteristic). In the everyday world, it is natural and intuitive to think of everything being in an eigenstate of every observable: everything appears to have a definite position, a definite momentum, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of the position or momentum of a certain particle in a given region of space in a finite time; rather, it only provides a range of probabilities of where that particle might be. Therefore, it became necessary to use different words for (a) the state of something having an uncertainty relation and (b) a state that has a definite value. The latter is called the "eigenstate" of the property being measured.

For example, consider a free particle. In quantum mechanics, there is wave–particle duality, so the properties of the particle can be described as a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape extending over all of space, called a wave function. The position and momentum of the particle are observables. The uncertainty principle states that the position and the momentum cannot both be known simultaneously with infinite precision. However, one can measure just the position of a moving free particle, creating an eigenstate of position with a wavefunction that is very large at a particular position x and almost zero everywhere else.
If one performs a position measurement on such a wavefunction, the result x will be obtained with almost 100% probability; in other words, the position of the free particle will almost be known. This is called an eigenstate of position (mathematically more precisely, a generalized eigenstate, or eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. An eigenstate of momentum, on the other hand, has the form of a plane wave. It can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate. If the particle is in an eigenstate of momentum, then its position is completely blurred out.

Usually, a system will not be in an eigenstate of whatever observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously become an eigenstate (or generalized eigenstate) of that observable. This process is known as wavefunction collapse. It involves expanding the system under study to include the measurement device, so that a detailed quantum calculation would no longer be feasible and a classical description must be used. If one knows the wave function at the instant before the measurement, one can compute the probability of collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When one measures the position of the particle, it is impossible to predict with certainty the result that will be obtained. It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.

Wave functions can change as time progresses. An equation known as the Schrödinger equation describes how wave functions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity, like a classical particle with no forces acting on it.
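To put a number on the wavelength relation λ = h/p (a minimal sketch, not from the article; the electron speed of 10^6 m/s below is an assumed, non-relativistic value):

```python
# Minimal sketch: de Broglie wavelength lambda = h / p for a free electron.
# The speed 1e6 m/s is an assumed, non-relativistic value.
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg

v = 1e6                 # assumed speed, m/s
p = m_e * v             # momentum, kg*m/s
lam = h / p             # de Broglie wavelength, m
print(f"p = {p:.3e} kg m/s, lambda = {lam * 1e9:.3f} nm")
```

The sub-nanometer wavelength this yields is comparable to atomic spacings, which is why electron diffraction off crystals made the wave nature of matter observable.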
However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain. This also has the effect of turning position eigenstates (which can be thought of as infinitely sharp wave packets) into broadened wave packets that are no longer position eigenstates; see the sketch after this paragraph block.

Some wave functions produce probability distributions that are constant in time. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1). (Note that only the lowest angular momentum states, labeled s, are spherically symmetric.)

The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, it makes a definite prediction of what the wavefunction will be at any later time. During a measurement, by contrast, the change of the wavefunction into another one is not deterministic but unpredictable, i.e., random. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Interpretations of quantum mechanics have been formulated to do away with the concept of "wavefunction collapse"; see, for example, the relative state interpretation. The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.
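The spreading of a free wave packet can be made quantitative. For a minimum-uncertainty Gaussian packet, a standard textbook result (not derived in this article) is that the position uncertainty grows as σ(t) = σ₀ √(1 + (ħt / (2mσ₀²))²). A minimal sketch, assuming an electron-mass packet with a 0.1 nm initial width:

```python
# Minimal sketch: spreading of a minimum-uncertainty Gaussian wave packet.
# Standard result: sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2).
# Electron mass and the 0.1 nm initial width are assumed values.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg
sigma0 = 1e-10           # assumed initial position uncertainty, m

def sigma(t):
    """Position uncertainty of the packet after time t (seconds)."""
    return sigma0 * math.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)

for t in (0.0, 1e-16, 1e-15, 1e-14):
    print(f"t = {t:.0e} s: sigma = {sigma(t):.3e} m")
```

Run on these numbers, the packet's width grows several-fold within a femtosecond, illustrating why sharply localized states do not stay localized.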
### Mathematical formulation

Main article: Mathematical formulation of quantum mechanics

In the mathematically rigorous formulation of quantum mechanics, developed by Paul Dirac and John von Neumann, the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors") residing in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system), well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space. The exact nature of this Hilbert space depends on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes.

Each observable is represented by a Hermitian (more precisely, self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian, the operator corresponding to the total energy of the system, generates the time evolution.
The inner product between two state vectors is a complex number known as a probability amplitude. During a measurement, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitude between the initial and final states. The possible results of a measurement are the eigenvalues of the operator, which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.

The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the wave-like behavior of quantum states.

It turns out that analytic solutions of Schrödinger's equation are available for only a small number of model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the hydrogen molecular ion and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than hydrogen, defies all attempts at a fully analytic treatment.
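This measurement recipe is easy to exercise numerically in a finite-dimensional state space. A minimal sketch (the observable, Pauli σ_x, and the state below are arbitrary assumed examples, not from the article): diagonalize a Hermitian operator, project the state onto each eigenvector, and square the amplitudes; with a second Pauli matrix one can also check that non-commuting observables have a nonzero commutator.

```python
# Minimal sketch: measurement probabilities from the spectral decomposition
# of a Hermitian observable. The observable (Pauli sigma_x) and the state
# are arbitrary assumed examples.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)   # observable to measure
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # a second, non-commuting observable

psi = np.array([1, 0], dtype=complex)   # assumed state |0>, an eigenstate of sigma_z

eigvals, eigvecs = np.linalg.eigh(sigma_x)   # spectral decomposition
amps = eigvecs.conj().T @ psi                # probability amplitudes <e_i|psi>
probs = np.abs(amps) ** 2                    # Born rule: |amplitude|^2

for val, p in zip(eigvals, probs):
    print(f"outcome {val:+.0f} with probability {p:.2f}")   # 0.50 each

# [sigma_x, sigma_z] != 0: the algebraic form of the uncertainty principle.
print(np.allclose(sigma_x @ sigma_z, sigma_z @ sigma_x))    # False
```

The probabilities sum to 1 by construction, since the eigenvectors of a Hermitian operator form an orthonormal basis.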
There exist several techniques for generating approximate solutions. In the method known as perturbation theory, for instance, one uses the analytic results for a simple quantum mechanical model to generate results for a more complicated model related to the simple one by, for example, the addition of a weak potential energy. Another method is the "semi-classical equation of motion" approach, which applies to systems for which quantum mechanics produces only weak deviations from classical behavior; the deviations can then be calculated from the classical motion. This approach is important in the field of quantum chaos. An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over histories between initial and final states; it is the quantum-mechanical counterpart of action principles in classical mechanics.

### Interactions with other scientific theories

The fundamental rules of quantum mechanics are very broad. They assert that the state space of a system is a Hilbert space and that the observables are Hermitian operators acting on that space, but they do not tell us which Hilbert space or which operators, or whether such a description even exists. These must be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical physics when a system moves to higher energies or, equivalently, larger quantum numbers. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can therefore start from an established classical model of a particular system and attempt to guess the underlying quantum model that gives rise to the classical model in the correspondence limit.

Unsolved problems in physics: In the correspondence limit of quantum mechanics, is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the superposition of states and wavefunction collapse, give rise to the reality we perceive?

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation.
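The idea behind perturbation theory can be checked numerically in a small matrix model. A minimal sketch (the 3-level Hamiltonian and perturbation below are arbitrary assumed examples): to first order, the ground-state energy of H = H0 + λV shifts by λ⟨0|V|0⟩, and the exact answer differs from this only at order λ².

```python
# Minimal sketch: first-order perturbation theory on an assumed 3-level system.
# Compare the exact ground energy of H0 + lam*V with E0 + lam*<0|V|0>.
import numpy as np

H0 = np.diag([0.0, 1.0, 2.0])        # assumed unperturbed Hamiltonian (diagonal)
V = np.array([[0.1, 0.3, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.2, 0.0]])      # assumed weak Hermitian perturbation
lam = 0.05

E_exact = np.linalg.eigvalsh(H0 + lam * V)[0]   # exact ground-state energy
E_first = H0[0, 0] + lam * V[0, 0]              # ground state of H0 is (1, 0, 0)
print(f"exact: {E_exact:.6f}, first order: {E_first:.6f}, "
      f"difference: {E_exact - E_first:.2e}")    # difference is O(lam**2)
```

Halving λ should shrink the discrepancy roughly fourfold, the telltale signature of a second-order error.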
While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field rather than to a fixed set of particles. The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction.

The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential $-\frac{e^2}{4 \pi \epsilon_0} \frac{1}{r}$. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of the subnuclear particles, quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory known as electroweak theory.
It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity, the most accurate theory of gravity currently known, and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

## Derivation of quantization

The particle in a 1-dimensional potential energy box is the simplest example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy inside a certain interval and infinite potential energy everywhere outside that interval. For the 1-dimensional case in the $x$ direction, the time-independent Schrödinger equation can be written as[3]:

$- \frac {\hbar ^2}{2m} \frac {d ^2 \psi}{dx^2} = E \psi.$

The general solutions are:

$\psi = A e^{ikx} + B e ^{-ikx}, \;\;\;\;\;\; E = \frac{k^2 \hbar^2}{2m},$

or, rewriting the exponentials,

$\psi = C \sin kx + D \cos kx. \;$

The presence of the walls of the box restricts the acceptable solutions of the wavefunction. At each wall:

$\psi = 0 \; \mathrm{at} \;\; x = 0,\; x = L.$

Consider $x = 0$:

• $\sin 0 = 0$ and $\cos 0 = 1$, so to satisfy $\psi = 0$ the cosine term must be removed: $D = 0$.

Now consider $\psi = C \sin kx$:

• At $x = L$, $\psi = C \sin kL$.
• If $C = 0$, then $\psi = 0$ for all $x$, which would conflict with the Born interpretation.
• Therefore $\sin kL = 0$ must be satisfied, giving $kL = n \pi$ with $n = 1,2,3,4,5,\dots$

Thus $n$ must be an integer, which shows the quantization of the energy levels: substituting $k = n\pi/L$ into the expression for $E$ gives

$E_n = \frac{n^2 \pi^2 \hbar^2}{2mL^2}.$
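As a quick numerical illustration of the formula just derived (a sketch; the choice of an electron in a 1 nm box is illustrative and not taken from the text above):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
L    = 1e-9              # box length: 1 nm (illustrative choice)
eV   = 1.602176634e-19   # joules per electronvolt

def E_n(n):
    """Energy levels E_n = n^2 pi^2 hbar^2 / (2 m L^2) for a particle in a box."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * L**2)

for n in (1, 2, 3):
    print(n, E_n(n) / eV, "eV")   # the ground state comes out near 0.38 eV
```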
## Applications

Quantum mechanics has had enormous success in explaining many of the features of our world. The individual behaviour of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons and others) can often only be satisfactorily described using quantum mechanics.

Quantum mechanics has strongly influenced string theory, a candidate for a theory of everything (see reductionism). It is also related to statistical mechanics.

Quantum mechanics is important for understanding how individual atoms combine covalently to form chemicals or molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. (Relativistic) quantum mechanics can in principle mathematically describe most of chemistry. Quantum mechanics can provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and by approximately how much. Most of the calculations performed in computational chemistry rely on quantum mechanics.

Much of modern technology operates at a scale where quantum effects are significant. Examples include the laser, the transistor, the electron microscope, and magnetic resonance imaging. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics.

Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to develop quantum cryptography, which will allow guaranteed secure transmission of information.
A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum states over arbitrary distances.

In many devices, even the simple light switch, quantum tunneling is vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up, in the case of the light switch, of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling to erase their memory cells.

## Philosophical consequences

Main article: Interpretation of quantum mechanics

Since its inception, the many counter-intuitive results of quantum mechanics have provoked strong philosophical debate and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated.

The Copenhagen interpretation, due largely to the Danish theoretical physicist Niels Bohr, is the interpretation of quantum mechanics most widely accepted amongst physicists. According to it, the probabilistic nature of quantum mechanics predictions cannot be explained in terms of some other deterministic theory, and does not simply reflect our limited knowledge. Quantum mechanics provides probabilistic results because the physical universe is itself probabilistic rather than deterministic.
Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement. (Hence his famous quote, "God does not play dice with the universe.") He held that there should be a local hidden variable theory underlying quantum mechanics and that, consequently, the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the EPR paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local theories. Experiments have been taken as confirming that quantum mechanics is correct and that the real world must be described in terms of nonlocal theories.

The writer C. S. Lewis viewed quantum mechanics as incomplete, because notions of indeterminism did not agree with his religious beliefs.[4] Lewis, a professor of English, was of the opinion that the Heisenberg uncertainty principle was more of an epistemic limitation than an indication of ontological indeterminacy, and in this respect believed similarly to many advocates of hidden variables theories. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view.

The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a "multiverse" composed of mostly independent parallel universes.
This is not accomplished by introducing some new axiom to quantum mechanics, but on the contrary by removing the axiom of the collapse of the wave packet: all the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical (not just formally mathematical, as in other interpretations) quantum superposition. (Such a superposition of consistent state combinations of different systems is called an entangled state.) While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can observe only the universe we inhabit, i.e. the consistent state contribution to the mentioned superposition. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, the parallel universes will never be accessible to us, making them physically meaningless.

This inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was measured originally. This is completely impractical, but even if one could theoretically do this, it would destroy any evidence that the original measurement took place (including the physicist's memory).

## Notes

1. ^ Life on the lattice: The most accurate theory we have
2. ^ Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born has been obfuscated.
A 2005 biography of Born details his role as the creator of the matrix formulation of quantum mechanics. This was recognized in a paper by Heisenberg, in 1940, honoring Max Planck. See: Nancy Thorndike Greenspan, "The End of the Certain World: The Life and Science of Max Born" (Basic Books, 2005), pp. 124-128 and 285-286.
3. ^ Derivation of particle in a box, chemistry.tidalswan.com
4. ^ Does God Play Dice? Archived lecture by Professor Stephen Hawking, Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge. Retrieved on 2007-09-07.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9216607809066772, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=526039
Physics Forums

Canonical Commutation Relations: Why?

Virtually every treatment of quantum mechanics brings up the canonical commutation relations (CCR); they go over what the Poisson bracket is and how it relates to phase space / Hamiltonian mechanics, and then say "then, you replace that with ih times the commutator, and replace the dynamical variables with Hermitian operators, and TADA! You now have TEH QUANTUM MACHANICHS"... Sometimes there's a vague attempt to explain it away by pointing to the Fourier duality of the canonical variables (p/q), or, if they really want to put the argument to rest, they say "by the Stone-von Neumann theorem". But... that doesn't really help me. The Wikipedia article on Stone-von Neumann seems to have a lot of information, but I can't really make out a real explanation / derivation from it. So -- can anyone explain this to a n00b? Here are a few points that I'm curious about in particular:

- What does a Poisson bracket have to do with a (anti-)commutator?
- I get how, if you multiply a commutator by h, at a classical level it would look like the operators did indeed commute -- but why that particular value? And where does the imaginary unit come in?
- What does this have to do with the change from a finite-dimensional phase space to an infinite-dimensional Hilbert space, and the change from variables in this phase space to Hermitian operators on the Hilbert space?

...though these are by no means anywhere near the only questions -- they're more to just maybe get the conversation started. Also, any pointers to books / references would be great. Thanks, Justin

This has to do with the Noetherian association of observables (which may be conserved under a given dynamics) with generators of transformations (which may be symmetry transformations if the dynamics conserves the corresponding observable, a la Noether's theorem). In short, there is a Lie group structure common to classical and quantum mechanics (the Lie group whose Lie algebra of generators provides possible components for the Hamiltonian, which in turn is the generator of time evolution of the system).

In classical mechanics the representation of this transformation group is as action conserving flows of points in phase space, that is to say the dynamic flow from one state to another state. These will be (action area conserving) diffeomorphisms of phase space generated by the Poisson bracket action of the Hamiltonian function. The Poisson bracket is the representative of the Lie product for this representation. In quantum mechanics one moves to a linear representation, and the Lie product is necessarily the commutator product of operators in the associative algebra in which the group is represented.

It all comes down to the fact that e.g. momenta rotate as vectors in both quantum and classical mechanics. Angular momenta generate these rotations. Likewise, momenta generate translations. Once you establish the dynamic group generated by the observables, the main thing left to resolve is how these generators are represented and the information content in the measurements (spectrum and logic/probability inferences for subsequent measurements).

Quote by jambaugh:
In classical mechanics the representation of this transformation group is as action conserving flows of points in phase space, that is to say the dynamic flow from one state to another state. These will be (action area conserving) diffeomorphisms of phase space generated by the Poisson bracket action of the Hamiltonian function. The Poisson bracket is the representative of the Lie product for this representation. ...

Extraordinarily well-put. I think I at least understand the line of argument now. You wouldn't be able to suggest a reference that could fill out the rest of the details, would you? Or even better, anything more broadly on this nexus of group/representation theory and mechanics (quantum or otherwise)? It seems everything I find on the topic is either totally abstract, or 100% specific to the development of the standard model. At the other extreme, books on advanced mechanics (in the mold of Abraham / Marsden) that brush on this are a dime a dozen, but I haven't seen anything that breaks it down at that level (e.g. the Poisson bracket as a representation of the group of translations / the commutator as the unique linear representation of the same), let alone anything that goes into any depth on the topic.

Sakurai introduces the momentum operator as the generator of translations and then derives the commutation relations from that. He doesn't go in depth, but at least they don't fall from the sky. (He does the same for the Hamiltonian, which leads to the Schrödinger equation.)

Quote by jjustinn: Extraordinarily well-put. I think I at least understand the line of argument now. You wouldn't be able to suggest a reference that could fill out the rest of the details, would you? Or even better, anything more broadly on this nexus of group/representation theory and mechanics (quantum or otherwise)? ...

Hmmm... For me this kind of gelled out of an understanding of QM and looking back at CM. I built my picture up trying to parse "Quantization of Gauge Systems" by Marc Henneaux and Claudio Teitelboim. I doubt it is a good starting point, but the detailed discussion of constrained systems and general covariance in the classical and quantum mechanical settings provides an environment in which to build one's understanding. Possibly other, more introductory references on constrained systems would be better (this one is what I have). Note that in constrained systems one is working on a sub-manifold of phase space, and to keep to that manifold one modifies the Poisson bracket to a Dirac bracket. Maybe I ought to try to write up something like "An Introduction to Modern Physics, via Noether and Lie". My MS in Math was in non-linear PDEs looking at symmetry group methods, and my PhD work was in physics looking at Lie group deformations, trying to find new physics in altering the implicit group structure. So I kinda have a mathematical group centered perspective.

Quote by kith: Sakurai introduces the momentum operator as the generator of translations and then derives the commutation relations from that.

Quote by jambaugh: ...I built my picture up trying to parse "Quantization of Gauge Systems" by Marc Henneaux and Claudio Teitelboim. ...

You might start with the wikipedia article on the Dirac bracket, and its related links. Thanks to both of you for the suggestions... I'll check them out.

Quote by jambaugh: Maybe I ought to try to write up something like "An Introduction to Modern Physics, via Noether and Lie".
That sounds like something that would be both interesting and invaluable. It seems like there's a void in this area -- that is, a rigorous and complete introduction to mechanics (quantum or otherwise) via infinitesimal transformations / Lie theory. It seems like everyone either glosses over it, using bits and pieces of results as intermediate steps in proofs, or assumes that you already understand it completely...

Hi, I have a basic question regarding the commutator brackets. Basically, what does the syntax mean? I understand that in general, [a,b] = ab - ba, but I am confused as to how to make sense of that with fields. I am currently reading up on the quantization of the Klein-Gordon equation, and I ran into such a problem. I attached a jpg file that has the rest (the book I got the equation from leaves out h-bars). Any help you can give will be most appreciated! Thanks.

Quote by sampo: Hi, I have a basic question regarding the commutator brackets. [...]

I think there was a typo; it should have read:

$[\phi(x,t),\pi(y,t)]=i\delta(x-y)$

This is the field commutation relation, where one is "second quantizing", i.e. quantizing a field. You can think of the classical $\phi(x,t)$ as an abstract displacement over time of a model system located at position x. Dual to that displacement is a canonical momentum $\pi(x,t)$. You can think of the coordinates x (or y) here as a continuous index, just as an n-dimensional system with $[q^k, p_j] = i\delta^k_j$ has indices j or k.

This comes from modeling, say, a bosonic field in terms of an array of harmonic oscillators located at each point of space. Each harmonic oscillator has its own coordinate and momentum. The coordinate value for the harmonic oscillator at x is $\phi(x)$ and its momentum is $\pi(x)$. You then have a field of classical systems which you then quantize to get a quantum field. Local couplings between these systems at each point allow them to propagate the field waves which, when quantized, become the bosons.

It is important to realize that the $\phi(x,t)$ here never was, nor is, a wave-function. It starts as a classical field (note it first appears in a classical Lagrangian) in a model which is quantized to yield the quantum field.

Thank you! That helps.

Ballentine's QM textbook works out some justification for this along the lines suggested above. I recently found a good explanation in "Quantum Field Theory - A Modern Perspective" by Nair.

I am trying to understand the canonical quantization of the Klein-Gordon equation and I am quite mixed up. I have been going over the same equations for over a week now and getting nowhere. My main textbook is Quantum Field Theory: a Modern Introduction by Michio Kaku, but I have been scouring the web as well. Attached is my work. If anyone can help point out where I am going wrong, it will be MOST appreciated. Thanks. I could only attach three images, so here are the other two... Apparently my first three images didn't attach. Woops.
Quote by jjustinn: That sounds like something that would be both interesting and invaluable. It seems like there's a void in this area -- that is, a rigorous and complete introduction to mechanics (quantum or otherwise) via infinitesimal transformations / Lie theory. [...]

"Rotations, Quaternions, and Double Groups" by Simon Altmann provides an exceptionally thorough and all-encompassing description from a geometric perspective (the author is a mathematician). Spinors are well described, along with their relationships to quaternions, the Pauli matrices, infinitesimal transformations, Lie theory, representations and projections.

Thanks, I just ordered the book off of Amazon.

Okay, I think I am comfortable with the mathematical derivation of the creation and annihilation operators. But I am still having problems understanding how both the position and momentum fields, and the creation and annihilation operators, commute. I guess my question boils down to functionality. If all of these are functions onto the complex plane, how come their values don't commute? Attached are my questions in more explicit detail. Thanks.
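To make the thread's opening question concrete, here is a minimal symbolic check (a sketch using sympy; the test wavefunction $f$ is arbitrary) that $\hat{p} = -i\hbar\, d/dx$ and $\hat{x}$ satisfy $[\hat{x},\hat{p}] = i\hbar$ on wavefunctions, alongside the classical statement $\{q,p\}=1$ that it quantizes:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True)
f = sp.Function('f')(x)            # an arbitrary test wavefunction

def p(g):
    """Momentum operator p = -i*hbar*d/dx acting on a wavefunction g."""
    return -sp.I * hbar * sp.diff(g, x)

print(sp.simplify(x*p(f) - p(x*f)))   # -> I*hbar*f(x), i.e. [x, p] = i*hbar on any f

# Classical counterpart: the canonical Poisson bracket {q, p} = 1
q, pc = sp.symbols('q p_c')
poisson = lambda F, G: sp.diff(F, q)*sp.diff(G, pc) - sp.diff(F, pc)*sp.diff(G, q)
print(poisson(q, pc))                 # -> 1
```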
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406446814537048, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/139888-standard-deviation-problem.html
# Thread: Standard Deviation Problem

1. ## Standard Deviation Problem

I would appreciate your help on the following problem: A mason is contracted to build a patio retaining wall. Plans call for the base of the wall to be a row of 50 10-inch bricks, each separated by $\frac{1}{2}$-inch-thick mortar. Suppose that the bricks used are randomly chosen from a population of bricks whose mean length is 10 inches and whose standard deviation is $\frac{1}{32}$ inch. Also, suppose that the mason, on the average, will make the mortar $\frac{1}{2}$ inch thick, but the actual dimension varies from brick to brick, the standard deviation of the thickness being $\frac{1}{16}$ inch. What is the standard deviation of the length of the first row of the wall?

2. Originally Posted by hasanbalkan: [the problem above]

The length of the wall is:

$L=(X_1+ ... + X_{50}) + (Y_1+ ... + Y_{49})$

where the $X$'s are RVs corresponding to the lengths of the 50 bricks and the $Y$'s are RVs corresponding to the thicknesses of the mortar between consecutive bricks, and these are all independently distributed. The rest is just routine. CB
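For completeness, the "routine" step spelled out (under the independence assumption above, variances add):

$\operatorname{Var}(L) = 50\left(\tfrac{1}{32}\right)^2 + 49\left(\tfrac{1}{16}\right)^2 = \tfrac{50}{1024} + \tfrac{196}{1024} = \tfrac{246}{1024} \approx 0.2402,$

so the standard deviation of the row length is $\sqrt{246/1024} \approx 0.49$ inch.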
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306778907775879, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2338/big-o-notation-encryption-algorithms/2339
# Big-O Notation: Encryption Algorithms

I am currently completing a dissertation concerning the encryption of data through a variety of cryptographic algorithms. I have spent much time reading journals and papers, but as yet have been unable to find any record of their performance complexity. Would anyone have an idea of the Big-O complexity of the following algorithms?

• RSA
• DES
• Triple DES (which I would expect to be of the same order as DES)
• AES
• Blowfish

Thank you in advance; if you could provide a link to a reputable and citable source, it would be very much appreciated.

## 1 Answer

Most of these algorithms (i.e. the block ciphers DES, Triple DES, AES, Blowfish) normally work only on a fixed block size and take approximately the same time independently of the input; thus they are $O(1)$. If you put them into a mode of operation to encrypt longer messages, you usually get $O(m)$ complexity, where $m$ is the message size, as you have $O(m)$ blocks of data to encrypt. (One could design modes of operation with different complexity, but they have to touch at least each input bit once to be reversible, thus $O(m)$ is a minimum. Also, with $O(m)$ block cipher calls you can do enough to make it secure, so there is no point in making it slower.)

Two more notes on specific ciphers:

• Yes, Triple DES usually needs thrice the computing power of DES, but this then gets $O(1)$ or $O(m)$, too.
• Blowfish is known for its quite slow key schedule (which takes as long as encrypting about 4 KB of data), but this is still $O(1)$.

Thus, $O$-notation is not really an interesting thing to look at in block ciphers. It gets a bit more interesting when we look at algorithms with a varying input size.

For the asymmetric algorithm RSA, we have the public (and private) key modulus $n$, and its size $k = [\log_2 n]$ in bits can be considered a security parameter. (The private exponent $d$ is of similar size, while the public exponent $e$ is usually some small number like $3$ or $65537 = 2^{16}+1$.) The message size is then limited by $O(k)$, too. Encryption and decryption are both modular exponentiations of the plaintext or ciphertext modulo $n$, with the respective exponents. With the square-and-multiply algorithm, encryption needs $O(1)$ and decryption $O(k)$ multiplications, and a similar number of modular reductions, each of $k$-bit or $2k$-bit numbers ... which means about $O(k^2)$ or $O(k^3)$ elementary operations (with a quite small constant, as you use the word size built into your processor). Decryption can be sped up by storing the factors of $n$, but this still gives only a constant factor, I think (i.e. it reduces the $k$ in the formulas). RSA also uses one of various padding schemes, but these should be in $O(k)$ and thus not contribute to the complexity.
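As a concrete companion to the square-and-multiply claim above, here is a minimal sketch of the algorithm (my own illustrative implementation, not from any particular library): the loop runs once per bit of the exponent and performs at most two modular multiplications per iteration, which is where the $O(k)$ multiplication count for a $k$-bit exponent comes from.

```python
def modexp(base, exp, n):
    """Square-and-multiply: O(k) modular multiplications for a k-bit exponent."""
    result = 1
    base %= n
    while exp > 0:
        if exp & 1:                  # multiply step, only for set bits of exp
            result = (result * base) % n
        base = (base * base) % n     # square step, once per bit
        exp >>= 1
    return result

# agrees with Python's built-in modular exponentiation
assert modexp(42, 65537, 10**9 + 7) == pow(42, 65537, 10**9 + 7)
```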
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475336074829102, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/38040/killing-vectors-for-so3-rotational-symmetry/38074
# Killing vectors for SO(3) (rotational) symmetry

I am reading a paper$^1$ by Manton and Gibbons on the dynamics of BPS monopoles. In it, they write the Atiyah-Hitchin metric for a two-monopole system. The first part is for the one-monopole moduli manifold, and the other terms describe a 4-dimensional hyperkähler surface which is SO(3) symmetric, parameterized by the Euler angles. They obtain two sets of SO(3) Killing vectors. What is the systematic way to obtain these two sets? What are the equations involved?

$$\xi^R_1=-\cot{\theta}\cos{\psi}\frac{\partial}{\partial{\psi}}-\sin{\psi}\frac{\partial}{\partial{\theta}}+\frac{\cos{\psi}}{\sin{\theta}}\frac{\partial}{\partial{\phi}}$$
$$\xi^R_2=-\cot{\theta}\sin{\psi}\frac{\partial}{\partial{\psi}}+\cos{\psi}\frac{\partial}{\partial{\theta}}+\frac{\sin{\psi}}{\sin{\theta}}\frac{\partial}{\partial{\phi}}$$
$$\xi^R_3=\frac{\partial}{\partial{\psi}}$$

and the other set by

$$\xi^L_1=\cot{\theta}\cos{\phi}\frac{\partial}{\partial{\phi}}+\sin{\phi}\frac{\partial}{\partial{\theta}}-\frac{\cos{\phi}}{\sin{\theta}}\frac{\partial}{\partial{\psi}}$$
$$\xi^L_2=\cot{\theta}\sin{\phi}\frac{\partial}{\partial{\phi}}-\cos{\phi}\frac{\partial}{\partial{\theta}}-\frac{\sin{\phi}}{\sin{\theta}}\frac{\partial}{\partial{\psi}}$$
$$\xi^L_3=-\frac{\partial}{\partial{\phi}}$$

References: $^1$ G. W. Gibbons and N. S. Manton, Classical and quantum dynamics of BPS monopoles, Nucl. Phys. B274 (1986) 183.

- Is this better suited for Maths SE? – ramanujan_dirac Sep 22 '12 at 21:41
- If you know the metric and use the natural connection, you solve $\mathcal{L}_X g_{ab}=X_{a;b}+X_{b;a}=0$. These are the vector fields that, if you follow them, leave the metric unchanged. – kηives Sep 22 '12 at 23:22

## 1 Answer

The specific form of the Killing vectors depends on the parameterization of the group element; from the notation (and the results), one can deduce that the Euler angle parameterization has been used:

$g = \exp(i\sigma_3 \psi) \exp(i \sigma_1 \theta) \exp(i \sigma_3 \phi)$

where the sigmas are the generators of rotations with respect to Cartesian axes in the three-dimensional representation. The two sets of Killing vectors correspond to the left and right actions of $SO(3)$ on itself, which preserve the invariant metric. I'll describe the case of the left action, for example. The basic equation defining the left Killing vectors is $K_A^L g = \sigma_A g$ (for the right action, $K_A^R g = g \sigma_A$; I'll drop the superscript from now on, understanding that it is a left action). $K_A$ is a differential operator:

$K_A = K_A^{\phi} \frac{\partial}{\partial \phi} +K_A^{\theta} \frac{\partial}{\partial \theta} +K_A^{\psi} \frac{\partial}{\partial \psi}$

For convenience we shall call $x^1 = \phi$, $x^2 = \theta$, $x^3 = \psi$. So our task is to compute $K_A^j$. In order to do that, we recall that the Maurer-Cartan one-form $g^{-1} dg$ is a Lie-algebra-valued one-form, i.e.,

$m = g^{-1} dg = i a_j^A \sigma_A dx^j$

(with the summation convention). Thus, the first task to be done is to explicitly compute the coefficients $a_j^A$; this is done by computing the derivatives in the given parameterization.
If we contract this form with a Killing vector, we obtain:

$<K_A, m> = i K_A^j a_j^B \sigma_B = g^{-1} K_A g = g^{-1} \sigma_A g$

Using the orthogonality relations $tr(\sigma_A \sigma_B) = \delta_{AB}$, we obtain:

$K_A^j a_j^B = tr(\sigma_B g^{-1} \sigma_A g)$

Thus, by solving this system of linear equations, or equivalently by inverting the matrix $a$, we get the formula for the Killing vector components:

$K_A^j = (a^{-1})_B^j tr(\sigma_B g^{-1} \sigma_A g)$

In summary, one needs to compute the coefficient matrix of the Maurer-Cartan form, invert it, compute the traces required by the last equation, and then obtain the Killing vector components by matrix multiplication.

- Sir, thanks for the reply. Unfortunately, I don't know anything about the Maurer-Cartan form. I am aware of the so-called Killing equation $V^\lambda \partial_\lambda g_{\mu\nu} + \partial_\mu V^\lambda g_{\lambda\nu} + \partial_\nu V^\lambda g_{\mu\lambda} = 0$, which comes from $\nabla_\mu V_\nu + \nabla_\nu V_\mu=0$. Can one solve this problem using the above-mentioned Killing equation rather than the method proposed above? – ramanujan_dirac Sep 23 '12 at 10:04
- cont. For the computation of the Killing vectors according to the given Wikipedia page, one needs in advance to construct the invariant metric. Furthermore, given the invariant metric, the Killing vector components satisfy differential equations which are harder to solve than the algebraic equations in the method described in the answer. – David Bar Moshe Sep 23 '12 at 10:44
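As a sanity check on the $\xi^R$ formulas above, one can verify symbolically that they close into $\mathfrak{so}(3)$ under the coordinate Lie bracket $[X,Y]^i = X^j\partial_j Y^i - Y^j\partial_j X^i$. A minimal sympy sketch (the component ordering $(\theta,\phi,\psi)$ is my own bookkeeping choice; the overall signs of the structure constants depend on conventions):

```python
import sympy as sp

theta, phi, psi = sp.symbols('theta phi psi')
coords = (theta, phi, psi)

def bracket(X, Y):
    """Lie bracket of vector fields given as component tuples over coords."""
    return tuple(sp.simplify(sum(X[j]*sp.diff(Y[i], coords[j])
                                 - Y[j]*sp.diff(X[i], coords[j])
                                 for j in range(3)))
                 for i in range(3))

# components (d/dtheta, d/dphi, d/dpsi) of the xi^R fields above
xiR1 = (-sp.sin(psi), sp.cos(psi)/sp.sin(theta), -sp.cot(theta)*sp.cos(psi))
xiR2 = ( sp.cos(psi), sp.sin(psi)/sp.sin(theta), -sp.cot(theta)*sp.sin(psi))
xiR3 = (0, 0, 1)

print(bracket(xiR1, xiR2))   # -> (0, 0, -1), i.e. -xiR3
print(bracket(xiR3, xiR1))   # -> -xiR2
```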
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8935897350311279, "perplexity_flag": "head"}
http://mathoverflow.net/questions/29907
## Which tensor fields on a symplectic manifold are invariant under all Hamiltonian vector fields?

Consider a connected symplectic manifold $(M, \omega)$ of dimension $m=2n$. A few preliminary reminders (mostly to fix the notation): A vector field $X$ is symplectic if its flow preserves the symplectic form, i.e. $L_X \omega = 0$, where $L_X$ denotes the Lie derivative with respect to $X$. The Cartan formula shows that this is equivalent to the 1-form $i_X\omega = \omega(X, -)$ being closed. The Hamiltonian vector field associated to a smooth function $f$ is the vector field determined by $\omega(X_f, -) = df$; any symplectic vector field is locally Hamiltonian. The questions I'm interested in are of a local nature, so we don't have to worry about the distinction.

Question 1: Which differential forms are invariant under all Hamiltonian flows (meaning $L_{X_f}\alpha = 0$ for all smooth functions $f$)? Clearly, the symplectic form itself generates a truncated polynomial algebra (isomorphic to $\mathbb{R}[x]/(x^{n+1})$) inside $\Omega^*(M)$ which is invariant under all Hamiltonian flows. But is it possible that there are others? I believe I have shown that there are no invariant 1-forms using a horrible calculation in local (Darboux) coordinates, but I'm not sure if this method is suitable for higher degrees. In the even degrees, we know that the answer is not 0, and I can't see how to prove that an invariant $2d$-form is necessarily a constant multiple of $\omega^d$.

Question 2: What can one say about more general tensor fields on $M$? I am especially interested in the sections of the symmetric powers of $TM$ (i.e. symmetric multivector fields). The proof that no 1-forms are invariant is easily adapted to proving that no vector fields are invariant, but again, I'm not sure if this generalizes.

Question 3: Suppose we have a subalgebra $A\subset C^\infty(M)$ with the property that for each $p\in M$, $\{df_p \mid f\in A\} = T^*_pM$ (in other words, the Hamiltonian vector fields associated to the functions in $A$ realize every tangent vector on $M$). Do the answers to Questions 1 and 2 change if we only insist that the forms/tensor fields should be invariant under the Hamiltonian vector fields associated to the elements of $A$? It is certainly important that we still have a whole algebra of functions available; on $\mathbb{R}^{2n}$, any constant-coefficient tensor field is invariant under the Hamiltonian vector fields associated to the coordinate functions $x_j, y_j$ (which, up to a sign, are just the corresponding coordinate vector fields $\partial/\partial y_j, \partial/\partial x_j$). The proof that no invariant vector field exists also requires one to consider the Hamiltonians associated to $x_j^2$ and $y_j^2$.

## 2 Answers

Any symplectic linear transformation of $T_xM$ is locally realizable as a Hamiltonian vector field; thus for Questions 1 and 2 one can profitably use the representation theory of the symplectic group.

FACT (Lefschetz decomposition) Let $W$ be a $2n$-dimensional symplectic vector space, $\bigwedge^\ast W$ its exterior algebra, and $\omega\in\bigwedge^2 W$ the invariant two-form.
Exterior multiplication by $\omega$ and contraction with $\omega$ define a pair of $Sp(W)$-equivariant graded linear transformations $L, \Lambda$ of $\bigwedge^\ast W$ into itself, of degrees $2$ and $-2$, and let $H=\deg-n$ be the graded degree $0$ map acting on $\bigwedge^k$ as multiplication by $k-n.$ Then $L,H,\Lambda$ form the standard basis of the Lie algebra $\mathfrak{sl}_2$ acting on $\bigwedge^\ast W$, and the actions of $Sp(W)$ and $\mathfrak{sl}_2$ are the commutants of each other. See, for example, Roger Howe, Remarks on classical invariant theory.

Corollary Every homogeneous $Sp(W)$-invariant element of $\bigwedge^\ast W$ is a multiple of $\omega^k$ for some $0\leq k\leq n.$

Since, conversely, every polynomial in $\omega$ is invariant under the Hamiltonian vector fields, this gives a full description of the invariant differential forms. For Question 2, locally every invariant tensor must reduce to an $Sp(W)$-invariant element of the tensor algebra. For the special case of symmetric tensors, the answer is trivial:

FACT Under the same assumptions, the $k$th symmetric power $S^k W$ is a simple $Sp(W)$-module (non-trivial for $k>0$).

The general case can be handled using similar considerations from classical invariant theory. A more involved question of describing the invariant local tensor operations on symplectic manifolds (an analogue of the well-known problem of invariant local operations on smooth manifolds, such as the exterior differential or the Schouten bracket) was considered in an old article by A. A. Kirillov.

Just some preliminary thoughts (which I think should work), via a Darboux basis. Firstly, in local coordinates, the coordinate functions generate the coordinate derivatives as their Hamiltonian vector fields. Now, for the coordinate derivatives, their Lie action on tensors is simply partial differentiation of the tensor coefficients in the Darboux coordinates, so this shows that any invariant tensor, when written in local coordinates, must have constant coefficients. Thus it suffices to show at one point that the coefficient is trivial.

Now observe that the Hamiltonian vector field associated with the function $x_j^2 + y_j^2$ is the rotation vector field in the $x_j$-$y_j$ plane. There are no rotationally invariant 1-forms in $T_0\mathbb{R}^2$, and the only rotationally invariant two-form is $dx\wedge dy$. By a counting argument, for forms of odd degree there will be at least "an odd coordinate out" that doesn't pair into $dx_j\wedge dy_j$ pairs. By the rotational symmetry, any such term must have a vanishing coefficient. So there are no odd invariant forms.

Now, let us write $w_i = dx_i\wedge dy_i$. For forms of even degree, the argument above shows that an invariant form must pointwise be a linear combination of terms of the form $w_i \wedge w_j \wedge ... \wedge w_k$. Now, the function $x_i y_j - x_j y_i$ generates the vector field that simultaneously rotates in the $x_i$-$x_j$ plane and the $y_i$-$y_j$ plane. This implies that pointwise an invariant form must be obtained from a symmetric polynomial in the $w_i$. Using $w_i^2 = 0$, this should imply that the only invariant even forms are given by powers of $\omega$.
So I think this allows me to answer Question 1 in the affirmative (in the sense that the only invariant forms are the obvious ones) and Question 3 in a manner different from how it was posed (namely, that locally it suffices to consider the functions $x_i, y_i, x_i^2 + y_i^2, x_iy_j - x_j y_i$ to settle the problem for forms).

For arbitrary tensor fields the question is a bit more delicate, since there are larger classes of objects invariant under rotations. For example, the tensor field $\sum dx_i\otimes dx_i + dy_i \otimes dy_i$ in local coordinates is invariant under all of the operations I've considered above. (Of course, as Deane noted below in the comments, the infinitesimal symmetries of this tensor form a finite-dimensional family, so by a dimension-counting argument it cannot be invariant under all Hamiltonian vector fields.) It is, however, not clear to me how to rule out general tensor fields using a fixed, finite-dimensional set of vector fields, as I've done above for forms.

This is a nice discussion, where you have reduced to looking at the standard symplectic structure on $R^n$. So can't you rule out the standard flat metric (mentioned in the last paragraph) by simply using a Hamiltonian vector field that is not an infinitesimal isometry? More generally, isn't it the case that if the group of local diffeomorphisms preserving a tensor is finite-dimensional, then it can't be invariant under all Hamiltonian vector fields? – Deane Yang Jun 29 2010 at 17:17

@Deane: you are absolutely correct. I wrote that last paragraph with half a mind on whether it suffices to consider only a finite-dimensional set of vector fields for ALL (say symmetric) tensors. I shall update the answer to reflect your observation. – Willie Wong Jun 29 2010 at 18:45
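For reference, the $\mathfrak{sl}_2$-triple behind the Lefschetz decomposition used in the first answer satisfies the standard commutation relations (with $H$ acting on $\bigwedge^k$ as multiplication by $k-n$, as above):

$$[H,L]=2L, \qquad [H,\Lambda]=-2\Lambda, \qquad [L,\Lambda]=H.$$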
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913439154624939, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/201964/the-smallest-compactification
# The smallest compactification

I've been reading about compactifications, and I am very familiar with the Stone–Čech compactification, which can be thought of as the largest compactification of a space. For a locally compact space $X$, I'm familiar with the one-point compactification being the smallest compactification of $X$. I was wondering: for an arbitrary space, can we find a smallest compactification?

## 2 Answers

Theorem 3.5.12 in Engelking: If in the family $\mathscr{C}(X)$ of all compactifications of a non-compact Tikhonov space $X$ there exists an element $cX$ which is smallest with respect to the order $\le$, then $X$ is locally compact and $cX$ is equivalent to the Alexandroff compactification $\omega X$ of $X$.

If $cX$ is a compactification of $X$, I'm going to identify $X$ with $c[X]$.

Proof. Suppose that $X$ has a smallest compactification $cX$, and suppose that there are distinct points $x$ and $y$ in the remainder $cX\setminus X$. Let $Y=cX\setminus\{x,y\}$. $Y$ is an open subspace of $cX$, so it's locally compact and has a one-point compactification $\omega Y$. But $\omega Y$ is a compactification $c_1 X$ of $X$, and by hypothesis $cX\le c_1 X$, so there is a continuous surjection $f:c_1 X\to cX$ such that $f\upharpoonright X=\mathrm{id}_X$. $X$ is dense in $Y\subseteq\omega Y=c_1 X$, and $\mathrm{id}_Y\upharpoonright X=\mathrm{id}_X=f\upharpoonright X$, so $\mathrm{id}_Y=f\upharpoonright Y$. In other words, $f$ is a continuous function from $c_1 X$, a compactification of $Y$, onto $cX$, another compactification of $Y$, that fixes $Y$ pointwise. But this is clearly impossible, since $|c_1 X\setminus Y|=1$ and $|cX\setminus Y|=2$. Thus, $cX\setminus X$ is a singleton, $X$ is locally compact, and it's straightforward to check that $cX=\omega X$. $\dashv$

Thanks for this! – Nate Eldredge Sep 25 '12 at 12:52

According to Wikipedia, the answer is no. More precisely, a noncompact Tychonoff space has a smallest compactification if and only if it has a one-point compactification (if and only if it is locally compact). So it suffices to find a noncompact Tychonoff space which is not locally compact; $\mathbb{Q}$ is probably the most familiar example.

Wikipedia cites Engelking as a reference; I'd be interested in seeing a sketch of the proof if anyone would care to post it. – Nate Eldredge Sep 25 '12 at 3:10

@Nate: See my answer. – Brian M. Scott Sep 25 '12 at 6:14
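To see the "smallest" compactification concretely in the locally compact case, take $X = (0,1)$: the one-point compactification closes the interval up into a circle, while the Stone–Čech compactification is famously enormous,

$$\omega X \cong S^1, \qquad |\beta X \setminus X| = 2^{\mathfrak{c}},$$

and every other compactification of $X$ sits between these two, being a quotient of $\beta X$ that maps onto $\omega X$.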
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327743649482727, "perplexity_flag": "head"}
http://mathoverflow.net/questions/tagged/groebner-bases
## Tagged Questions

1 answer, 33 views

### How to recover the ideal from a Gröbner basis of the kernel of ann(x)

M -> ann(x) I can find the Gröbner basis of the kernel of ann(x) and need the final step to recover this basis to an ideal. As I know, eliminate is not for all cases; what is the general …

3 answers, 225 views

### Gröbner bases for power series rings (reference request)

Hello, Could you help me with a reference to elementary properties of Gröbner bases in rings of formal power series over a field? I am especially interested in generic initial i …

0 answers, 56 views

### How to find a Gröbner basis from numeric data or statistical data

My goal is to find a Gröbner basis from data and then reverse reduce to find a monomial. Find a Gröbner basis from numeric data; does reverse reduce exist? Any similar idea …

0 answers, 32 views

### How to minimize a cost function in integer programming with Gröbner bases

Below is Maple code: 3*rho1 - 2*rho2 + rho3 - rho4 = -1 4*rho1 + rho2 - rho3 = 5 original without cost function: with(Groebner): K := {y1-(x1^3)(x2^4),y2-(x2^(1+2))(w^ …

2 answers, 222 views

### Solving the Field Membership Problem using Gröbner Bases

Is there an easy way to determine whether a set of elements in a field generates the whole field or only a subfield? Specifically, I have a subfield of $k(x,y)$ described in terms …

0 answers, 226 views

### Testing isomorphism of finitely generated algebras

Let $A=\mathbf{Q}[x_1,\ldots,x_n]$ be the polynomial ring in $n$ variables over the rational numbers. Let $B=\mathbf{Q}[f_1,\ldots,f_r]$ and $C=\mathbf{Q}[g_1,\ldots,g_s]$ be two f …

1 answer, 229 views

### Reasonable implementation of finding Gröbner bases over non-field coefficient rings

Gröbner bases are usually considered in the ring of polynomials over a field. However, there are useful definitions and algorithms for Gröbner bases over other coefficient rings; s …

2 answers, 231 views

### Monomial orderings in noncommutative setting

An ordering of monomials in the free associative algebra $k\langle x_1,\ldots,x_n\rangle$ is called a monomial ordering (EDIT: it seems that an equally common term used in this con …

0 answers, 134 views

### Bounding the degrees in a Bézout relation for integer polynomials

Let $A$ and $B$ be two polynomials in $\mathbf Z[X]$ which generate $\mathbf Z[X]$, that is, assume that there exist polynomials $U$ and $V$ in $\mathbf Z[X]$ such that $A \cdot$ …

1 answer, 529 views

### Existence of a real-valued solution to a system of multivariate polynomial equations

Given a system of multivariate polynomial equations, is there a way to determine if it has a solution in a given field (for instance the set of all reals)? I don't care what the s …

1 answer, 226 views

### What does the d-slice of a weighted polynomial algebra look like?

This question comes from the explicit construction of a smooth projective model of a hyperelliptic curve. Nevertheless it is fully elementary and, to me, more interesting than hype …

1 answer, 471 views

### PBW theorem over a Q-algebra, without freeness or flatness

Let $k$ be a commutative ring with $1$. Let $L$ be a $k$-Lie algebra, which is not necessarily free as a $k$-module. Let $S\left(L\right)$ denote the symmetric algebra of $L$ (over …

2 answers, 732 views

### Numerical solution for a system of multivariate polynomial equations

Hi all, I have a system of 6th-order polynomial equations in 4 variables $q_1, q_2, q_3, q_4$ (i.e. polynomials with all the terms such as $q_1^6, q_2^6, q_2^4 q_3^2$): $P_k(q_1,$ …

2 answers, 392 views

### Nonstandard monomial orders?
Are there any articles/books/examples where a non-standard monomial order is used? What are the applications of these monomial orders? In particular, uses in Gröbner bases and var …

1 answer, 304 views

### Can we make Buchberger's algorithm faster for a given ideal if we are allowed to vary the monomial order?

Suppose we have a finite set of generators for an ideal $I \subset R := \Bbbk[x_1,\dotsc, x_n]$, where $\Bbbk$ is a field. If we choose a monomial ordering, then Buchberger's algo …
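Since most of the questions above turn on actually computing Gröbner bases relative to a monomial order, here is a minimal sketch of such a computation in Python with SymPy; the ideal and the chosen orderings are arbitrary illustrative choices:

```python
from sympy import groebner, symbols

x, y = symbols("x y")

# An arbitrary illustrative ideal: I = (x^2 + y^2 - 1, x - y).
polys = [x**2 + y**2 - 1, x - y]

# Buchberger's algorithm runs relative to a monomial order; different
# orders generally give different reduced Groebner bases of the same ideal.
for order in ("lex", "grlex", "grevlex"):
    print(order, groebner(polys, x, y, order=order))
```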
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8565625548362732, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/245777/how-to-find-an-example-of-matrix-a-that-satisfies-a-1-frac1n-a-wh/245788
# How to find an example of a matrix $A$ that satisfies $A^{-1} = \frac{1}{n} A$, where $A = [a_{ij}]_{n \times n}$?

How to find an example of a matrix $A$ that satisfies $A^{-1} = \frac{1}{n} A$, where $A \in n \times n$? For example, if $A= \left( \begin{array}{ccc} 1 & 1 & 1\\ 1 & i & i^2\\ 1 & i^2 & i^4 \\ \end{array} \right)$, then $A^{-1}=\frac{1}{3} \overline{A}$.

Hint: if $A$ is the diagonal matrix $\alpha I$, what is $A^{-1}$? – TonyK Nov 27 '12 at 16:32

You said "$A\in n\times n$". I have never seen notations like this. Did you mean $A$ is an $n\times n$ matrix, or its entries are positive integers in $\{1,2,\ldots,n\}$, or something else? – user1551 Nov 27 '12 at 17:49

I meant that $A = [a_{ij}]_{n \times n}$ – alvoutila Nov 27 '12 at 19:16

## 3 Answers

Your matrix satisfies the polynomial equation $A^2 - nI = 0$, so its minimal polynomial divides $x^2 - n = (x - \sqrt{n})(x + \sqrt{n})$. If the minimal polynomial is $x \mp \sqrt{n}$, the only matrices which satisfy the criteria are $A=\pm\sqrt{n}I$. Otherwise the minimal polynomial splits as the full product, and you can take any matrix similar to a diagonal matrix with entries $\pm\sqrt{n}$ on the diagonal.

Follow-up question: give a counterexample to $AA = A^2$ ($A \in n \times n$). – alvoutila Nov 27 '12 at 16:54

@alvoutila: What is AA? – Phira Nov 27 '12 at 16:56

$AA=A \cdot A$ where $\cdot$ is the matrix product. – alvoutila Nov 27 '12 at 19:12

$AA$ is always $A^2$. $A^2$ is shorthand for $AA$, so I'm not sure what you mean. – EuYu Nov 27 '12 at 19:41

To me $A^2= \left( \begin{array}{ccc} 1^2 & 1^2 & 1^2\\ 1^2 & i^2 & i^4\\ 1^2 & i^4 & i^8 \\ \end{array} \right) \neq A \cdot A$. – alvoutila Nov 27 '12 at 21:03

The easiest way is to let $A$ be a multiple of the identity. These are easy to invert.

Symmetric Hadamard matrices (for example, those from the Sylvester construction) all have this property, since then $HH^{\mathsf T} = HH = nI$.
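A quick numerical sanity check of the two constructions suggested in the answers (a sketch in Python/NumPy; the size $n=4$ is an arbitrary choice):

```python
import numpy as np

n = 4

# Scalar example: A = sqrt(n) * I gives A^{-1} = (1/n) A.
A = np.sqrt(n) * np.eye(n)
assert np.allclose(np.linalg.inv(A), A / n)

# Symmetric (Sylvester) Hadamard matrix: H = H^T and H @ H.T = n I,
# hence H^{-1} = (1/n) H as well.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(H2, H2)                # 4 x 4 Sylvester Hadamard matrix
assert np.allclose(H @ H.T, n * np.eye(n))
assert np.allclose(np.linalg.inv(H), H / n)
```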
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8814713358879089, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/109752/line-segments-intersecting-jordan-curve/155309
# Line segments intersecting a Jordan curve

I have thought about this problem for a week without success. Is there a set $A\subset \mathbb{R}^2$ such that

• the boundary of $A$, $\partial A$, is a Jordan curve, and

• for any $B\in \operatorname{int} A\ne\emptyset$ and $C\in \operatorname{ext} A\ne\emptyset$, the line segment $BC$ intersects $\partial A$ infinitely many times?

Any ideas?

Fascinating question. If there is such an example, the intersection of the Jordan curve with any line cannot contain an isolated point. For otherwise you could take a point to either side of the isolated point and get a line segment joining inside and outside which hits the curve once. Also the curve is closed, so the intersection set with any line must be a closed subset of the line with no isolated points. This means it is a perfect set, and a theorem says that it is therefore uncountable. – Grumpy Parsnip Feb 16 '12 at 2:40

So an example must hit every line joining interior and exterior uncountably many times. – Grumpy Parsnip Feb 16 '12 at 2:41

I have very little intuition about this; can a line segment intersect the boundary of the Koch snowflake finitely many times? – Miha Habič Feb 16 '12 at 9:50

@MihaHabič: yes, it can hit it in one point at a "corner." – Grumpy Parsnip Feb 16 '12 at 12:13

$A=\partial A=S^1$ works fine, but you'd probably prefer $\operatorname{int} A$ to be nonempty ;) – savick01 Feb 18 '12 at 23:29

## 3 Answers

This is not an answer, only some ideas. Let $A$ be a Jordan domain, and let $f\colon \mathbb D\to A$ be a conformal map of the unit disk onto $A$. By Carathéodory's theorem $f$ extends to a homeomorphism of the closures. One says that $f$ is twisting at a point $\zeta\in\partial\mathbb D$ if for every curve $\Gamma\subset \mathbb D$ ending at $\zeta$ the following holds: $$\liminf_{z\to\zeta}\ \arg(f(z)-f(\zeta))=-\infty, \quad \limsup_{z\to\zeta}\ \arg(f(z)-f(\zeta))=+\infty,\quad (z\in\Gamma)$$ (This definition is from the book Boundary behaviour of conformal maps by Pommerenke. Some sources have different wording. The definition is unchanged if one replaces $\Gamma$ by a radial segment.)

If $f$ is twisting at $\zeta$, there is no line segment that crosses $\partial A$ only at $f(\zeta)$. Indeed, the preimage of such a line segment would be a curve along which $\arg(f(z)-f(\zeta))$ is constant. Pommerenke's book presents several results on twisting points and gives pointers to the literature. The message is that $f$ can have a lot of twisting points. Of course, it is impossible for $f$ to be twisting at every point of $\partial\mathbb D$. (Consider any disk contained in $A$ whose boundary touches $\partial A$.) But what we want is for every point of $\partial A$ to be twisting either on the inside or on the outside (i.e., for the conformal map onto the interior or onto the exterior).

My profound lack of knowledge of complex dynamics suggests that the Julia set of the quadratic polynomial $p(z)=z^2+\lambda z$ could have this property when $|\lambda|<1$ and $\mathrm{Im}\,\lambda\ne 0$. Indeed, in this case the polynomial has two Fatou components, and the boundary between them is a Jordan curve, indeed a quasicircle. The curve appears to be twisting as expected (see this applet, which takes polynomials in the form $z^2+c$. Here $c=\lambda/2-\lambda^2/4$ with $|\lambda|<1$, so we are in the main cardioid of the Mandelbrot set). Maybe someone who understands complex dynamics can tell if this Julia set is indeed an example.

I am not sure, but here's an idea.
Take the graph of $x \sin(1/x)$ and join the two zeroes around $\pm 0.15$ by a big enough curve. Take the points to be $(0.12, 0)$ and $(-100, 0)$.

EDIT: I misunderstood; I thought it was required only for a single pair of points, while the OP wants this to happen for every pair of points.

I also thought the solution might be some kind of fractal. – Jaakko Seppälä Feb 15 '12 at 21:21

I believe the OP wants a curve which works for every pair of points. – Grumpy Parsnip Feb 16 '12 at 2:29

That's right, Jim. I'm not a native English speaker, and I thought that "for any" means "for all". – Jaakko Seppälä Feb 16 '12 at 9:40

Actually, rereading the question I can see this. I misunderstood. My mistake! Sorry. – aelguindy Feb 16 '12 at 9:45

Such a set does exist. See http://mathoverflow.net/questions/100025/how-many-times-line-segments-can-intersect-a-jordan-curve

Nice construction by Anton. You should accept this as an answer instead of my half-baked remarks. – user31373 Jun 19 '12 at 20:42

Okay. I thought it was not polite to accept one's own answers. – Jaakko Seppälä Jun 19 '12 at 21:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318065047264099, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51204/is-there-a-crash-course-book-on-abelian-varieties-e-g-an-introduction-for-ph/51232
## Is there a "crash-course" book on Abelian varieties (e.g., an introduction for physicists)?

Hello,

In our (rather applied) theoretical physics research, we have encountered an important class of problems which seem to require an understanding of Abelian functions (unfortunately, this subject is not a part of the standard theoretical physics education, so we know little). I would like to learn the basics on my own to see if there are some results that we can actually use in our research.

Is there a sort of "crash course" on the practical aspects of the subject? I am looking for a book with the minimal level of "abstraction"; e.g., one that would not include words such as "morphism" etc., but would include examples of practical calculations and/or a list of key technical tools. Thank you.

Harry - I apologize if I offended you by trying to avoid "morphisms." I'm sure that they are very useful in learning the real meaning of things, but let's just say that I am not intellectually adept enough to go into such depths; I just would like to get a feeling for the range of technical tools, if any, that the modern theory provides to simple-minded people such as I. My naive impression (based on googling things for 2 hours) was that Abelian functions and Abelian varieties are not unrelated. Whatever the theory is called that generalizes elliptic functions, I am looking for a book on it. – Victor Galitski Jan 5 2011 at 16:32

Victor, Abelian varieties and Abelian functions are indeed related. The latter are, very roughly, functions on the former. Even though I'm an algebraic geometer, I don't agree that you need to master the whole subject to make use of this class of functions. Hopefully the references below will get you started. Good luck. – Donu Arapura Jan 5 2011 at 16:40

Harry, I don't think you should interpret Victor's request regarding the word "morphism" so literally. Probably in Victor's situation, he wants to know about computations involving things like, e.g., explicit power series. So he is just looking for books that spend less time on "abstract" stuff that he probably doesn't need for his present purposes. – Kevin Lin Jan 5 2011 at 17:55

There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy. – Emerton Jan 5 2011 at 18:31

I think Harry should note that David Mumford, for example, knew when to be abstract, when concrete in geometry. Preconceptions don't help, let's all agree. – Charles Matthews Jan 5 2011 at 19:06

## 11 Answers

You could try looking at the first chapter of Mumford's book Abelian Varieties. I forget whether or not it uses the word morphism, but it does adopt a resolutely complex-analytic viewpoint, which will probably be (close to) the viewpoint you want. He begins with a discussion of complex tori, i.e. quotients of the form $\mathbb C^g/\Lambda$, where $\Lambda$ is a lattice of rank $2g$, and then considers the problem of embedding such a quotient into some complex projective space. The existence of such an embedding is what distinguishes abelian varieties from random complex tori, and is intimately related to the theory of theta functions. (Indeed, theta functions provide the embedding.) I think that Mumford probably does use the terminology of line bundles.
Line bundles and their sections come into the picture because to give an embedding of some variety $X$ into projective space, you need (more or less) to choose some homogeneous coordinates on $X$. But since the homogeneous coordinates of a point are not quite well-defined (they are only well-defined up to a scalar), the homogeneous coordinates are not quite functions, but only functions well-defined up to a certain scalar transformation, which is exactly what makes them sections of a line bundle. In the case of abelian varieties, you can pull these sections back from $\mathbb C^g/\Lambda$ to $\mathbb C^g$, where they do become honest functions, but instead of being invariant under $\Lambda$ (if they were invariant under such translations, they would descend to honest functions on the abelian variety $\mathbb C^g/\Lambda$, which they are not), they transform by some scalar when you translate the argument by an element of $\Lambda$. When you figure out exactly what the right scalar transformation law is, you find that you are talking about theta functions! My memory is that Mumford does explain this fairly concretely in his first chapter. Even if it doesn't stand alone as an explanation, it may be a helpful bridge between the very classical description that you are likely to find most accessible and the viewpoint that you will find in most of the modern mathematical literature.

Dear Matthew, I have been wanting to say this for some time now, and this seems as good an opportunity as any to say it: browsing MO, I am repeatedly blown away by the amount of lucid insight in down-to-earth terms that you manage to squeeze into a few hundred characters! – Alex Bartel Jan 6 2011 at 5:03

You should also have a look at Mumford's Lectures on Theta, Ch. 1 and Ch. 2. They are extremely concrete. – Jon Yard Jan 6 2011 at 5:46

Dear Alex, Thanks for your kind words. Regards, Matthew – Emerton Jan 6 2011 at 6:58

I would like to second Alex's comment! Also, I looked at the first chapter of Mumford's book, and it has the word "morphism" exactly once. – Kevin Lin Jan 6 2011 at 9:52

Topics in Complex Function Theory, Abelian Functions and Modular Functions of Several Variables by C. L. Siegel is a standard reference using complex function theory. There are older works (e.g. by H. F. Baker) that may give an approach to a given problem by formulae, and not all aspects of that concrete theory are easily found in the modern literature. That said, it's a deceptive subject. I have to assume you are familiar with classical elliptic function theory, or else there is really no way to understand what's going on. Roughly speaking, because zeroes and poles of functions of two or more complex variables are never isolated points, it can be quite hard to generalise a result you want from elliptic functions to abelian functions. In other words, the zeroes and poles of functions carry geometry, and cannot be used to do book-keeping in such a simple-minded way. One solution to that issue is to express everything in terms of theta-functions.
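To make the transformation behaviour described above concrete in the one-variable case: the classical theta function

$$\theta(z,\tau)=\sum_{n\in\mathbb Z}e^{\pi i n^2\tau+2\pi i nz} \qquad (\operatorname{Im}\tau>0)$$

is invariant under $z\mapsto z+1$ but only quasi-periodic under $z\mapsto z+\tau$:

$$\theta(z+\tau,\tau)=e^{-\pi i\tau-2\pi iz}\,\theta(z,\tau),$$

which is exactly the kind of scalar transformation law under the lattice $\Lambda=\mathbb Z+\mathbb Z\tau$ that marks $\theta$ as a section of a line bundle on $\mathbb C/\Lambda$ rather than an honest function.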
There one's luck changes: basically the one-dimensional and higher-dimensional cases are both ruled in the same way by a type of Heisenberg group, and (as David Mumford showed) you can see everything in the theory ultimately coming back to a form of the Stone–von Neumann theorem. Now reading Mumford's papers is not the crash course you are looking for! That probably doesn't exist. It's just a reassurance that there is an underlying structure to the underlying equations (you can't expect abelian varieties to be complete intersections, outside a few classical cases). It is probably true to say that abelian function theory doesn't have an adequate literature, in fact.

Thank you, Charles - just ordered Siegel's book. I am familiar with elliptic functions, and what you said about theta-functions and higher-dimensional generalizations sounds very relevant. In fact, that's exactly what we've done: re-expressed a key quantity in our non-string theory in terms of theta-functions, and we'd like to proceed further. Reading the math literature, we get the impression that there exist many technical results and methods, but we admittedly don't understand the terminology etc. that seems to contain them. Any useful reference (e.g. an applied math review or proceedings) would help. – Victor Galitski Jan 5 2011 at 16:13

There is plenty more that can and has been said for small d or the hyperelliptic case. If you could post more on the context, it might be possible to give more of a steer. – Charles Matthews Jan 7 2011 at 8:12

For some more context, try also mathoverflow.net/questions/26551/…. – Charles Matthews Jan 7 2011 at 10:57

You could try to look at the following book: From Number Theory to Physics. Edited by M. Waldschmidt, P. Moussa, J. M. Luck and C. Itzykson. Springer-Verlag, Berlin, 1992. xiv+690 pp. ISBN: 3-540-53342-7.

This book arose from a conference held in Les Houches in 1989, whose aim was to bring together number theorists and theoretical physicists. It contains in particular a long article, Introduction to compact Riemann surfaces, Jacobians, and abelian varieties, by Jean-Benoît Bost. I really like this article, and I think it deserves to be known more widely. Here is the MathSciNet review of this article, by H. Lange:

« The article contains an introduction to the theory of abelian varieties for physicists. This aim is taken seriously: the author uses a language which should be familiar to theoretical physicists. There are three chapters: Compact Riemann surfaces, Jacobians, and General abelian varieties. Riemann surfaces are introduced as conformal classes of $C^\infty$-metrics on an oriented two-dimensional differentiable manifold. The subject of Riemann surfaces may then be seen as the study of conformally invariant properties of two-dimensional Riemannian manifolds. This is the reason why Riemann surfaces occur in some topics in physics, e.g. string theory or conformal field theory. Cohomology groups are introduced as Dolbeault cohomology groups of line bundles. For the physicist this means that $H^1(X,L)$ can be interpreted as the zero modes of the adjoint of the operator $\overline\partial_L$. With these definitions the main results of the theory, such as the Riemann-Roch theorem, Serre duality and the Hodge decomposition, are proven using regularizing operators and Fredholm theory. The Jacobian $J(X)$ of a compact Riemann surface $X$ is introduced as the set of $\overline\partial$-connections on a $C^\infty$ line bundle modulo the action of the group $C^\infty(X,{\bf C}^*)$.
The author shows that there are isomorphisms to the Albanese variety and the Picard variety of $X$, thus providing a proof of the Abel-Jacobi theorem. In the third section the basics of general abelian varieties are given. The paper also contains some interesting historical digressions, for example a sketch of Abel's original approach to Abel's theorem. »

The material about abelian varieties (the third chapter of the article) is quite comparable to the beginning of Mumford's book, pointed out by Emerton. Finally, I think that the study of abelian varieties can hardly be dissociated from the study of Riemann surfaces, because historically abelian varieties appeared as Jacobians of Riemann surfaces.

A very classical introduction is Swinnerton-Dyer's Analytic Theory of Abelian Varieties (London Mathematical Society Lecture Note Series 14). Another good place to start is M. Schlichenmaier, An Introduction to Riemann Surfaces, Algebraic Curves and Moduli Spaces, Theoretical and Mathematical Physics, Springer-Verlag 2007 (2nd ed.).

There's a nice, short introduction to abelian varieties over C by Mike Rosen in Arithmetic Geometry, Springer-Verlag, 1986, Chapter 4.

The book by Polishchuk should be very helpful, and close in spirit to the nonexistent crash course on Mumford's paper discussed by Charles Matthews in his response: http://www.cambridge.org/gb/knowledge/isbn/item1169225/?site_locale=en_GB

I worry this would be a hard read for a theoretical physicist. – Pete L. Clark Jan 5 2011 at 19:22

I agree with Pete. The algebraic part of this book uses many results from algebraic geometry without reference. Similarly, the analytic part of the book would probably require being read in parallel with a book on complex manifolds. – solbap Jan 5 2011 at 19:56

I found it very explicit and formula-based in a way that a physicist (such as myself?) could easily latch onto. – Eric Zaslow Jan 5 2011 at 22:32

@Eric Zaslow: I hate to get ad hominem here, but: you may call yourself a physicist (and I see that your PhD was in physics), but Northwestern University calls you Professor of Mathematics. Again, I worry that a less mathematically sophisticated individual than yourself might have trouble with Polishchuk's book. But it's no problem: the point here is to present an array of options, and your answer certainly contributes to that. – Pete L. Clark Jan 6 2011 at 0:48

Ha ha... but when I collaborated with Polishchuk I was less than two years out of my physics Ph.D.! You're right, though. I wonder what the intended application is, and what the questioner thinks. – Eric Zaslow Jan 6 2011 at 2:00

I'm surprised that no one has mentioned Abelian Varieties by Milne as yet.

I would add Olivier Debarre's Complex Tori and Abelian Varieties. It does not cover the theory of Abelian functions in as great a depth as many of the other sources cited. But, compared to the others, it begins at an elementary level, it is concrete, and it is short. In total the book is about 100 pages, and the chapters most relevant here are 4, 5, and 6, which take up about 40 pages. For learning this subject I think this book is a good supplement to reading a more advanced source, because Debarre is likely to spend more time on aspects other sources might skim over. Another book perhaps worth mentioning is Complex Abelian Varieties by Lange and Birkenhake. It is much longer and covers much more material. It assumes some algebraic geometry background, but I would say no more than Mumford's book.
It also deals only with the complex theory, and it is fairly concrete.

Mumford's "Tata Lectures on Theta II" might be close to what you want. It discusses applications of abelian varieties, more precisely Jacobians, to solving various differential equations of interest to physicists.

There are a couple of relevant paragraphs in Basic Algebraic Geometry by Shafarevich, which I recommend, as well as a historical appendix. E.g., he gives an example to show that, unlike in the one-dimensional case, there exist lattices in $C^2$ for which not even any nonconstant meromorphic functions can be periodic (Chapter VIII.1). In addition to Siegel and Swinnerton-Dyer, Mumford's other theta functions book is one I would recommend: volume 1 of the three-volume series, "Tata Lectures on Theta I". Also, in Lecture IV of his Curves and Their Jacobians, now included in his "red book", at least the first 8 pages give an invaluable and succinct introduction to principally polarized abelian varieties and their "moduli" (classifying spaces).

And let's get "morphisms" out of the way, since the word appears even in Swinnerton-Dyer. In one sense it is a generic version of the several terms automorphism, isomorphism, homomorphism, homeomorphism, diffeomorphism, ... i.e. a map that preserves some structure to be specified. In algebraic geometry it usually means a structure-preserving map that is defined everywhere, as "holomorphic" is used in complex analysis to mean meromorphic and having no poles. So in Swinnerton-Dyer it probably means a holomorphic map, but it is often defined where it is used.

Here is another reference: Lectures on Riemann Surfaces, proceedings of the College on Riemann Surfaces, International Centre for Theoretical Physics, Trieste, Italy, 9 Nov.-18 Dec. 1987; editors M. Cornalba, X. Gomez-Mont, A. Verjovsky. Published 1989 by World Scientific, Singapore and Teaneck, NJ. Contents: Complex analytic theory of Teichmüller space / R.M. Porter; Riemann surfaces, moduli, and hyperbolic geometry / S.A. Wolpert; Gauge theory on Riemann surfaces / N.J. Hitchin; Graph curves and curves on K3 surfaces / R. Miranda; Koszul cohomology and geometry / M.L. Green; Constructing the moduli space of stable curves / I. Morrison; Meromorphic functions and cohomology on a Riemann surface / X. Gomez-Mont; The theorems of Riemann-Roch and Abel / M. Cornalba; The Jacobian variety of a Riemann surface and its theta geometry / R. Smith; Families of varieties and the Hilbert scheme / C. Ciliberto and E. Sernesi; A sampling of vector bundle techniques in the study of linear series / R. Lazarsfeld; Moduli of curves and theta-characteristics / M. Cornalba; Some algebraic geometrical methods in string theory / L. Dabrowski and C. Reina; Lectures on stable curves / F. Bardelli. "Revised versions of most of the original notes, covering the entire spectrum of the subjects touched upon during the College, from the foundational ones to areas of very active current research" (p. v).

I found Lang's book very readable, but the more analytic Conforto, "Abelsche Funktionen und algebraische Geometrie", or this one may fit your interests better. Here is a link to an encyclopedia article.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363176822662354, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=80356
Physics Forums

## Pauli Matrices

My other problem is: Consider now the space of 2x2 complex matrices. Show that the Pauli matrices $$|I\rangle = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad |\sigma_x\rangle = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad |\sigma_y\rangle = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad |\sigma_z\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$ form an orthonormal basis for this space when k=1/2. To spare yourself from having to compute 10 different matrix products, I recommend that you write out what the inner product is for general matrices A and B first.

I'm really lost on this one! Any help getting me started would be greatly appreciated!

Do you know what it means for them to be orthonormal? I think it's safe to assume that the underlying field is C (otherwise you're being asked to prove a falsehood), so it suffices to show that these four matrices are orthonormal, and it will follow that they form an orthonormal basis. Do you know how to check if a set is orthonormal? Since I don't know the inner product you're working with, I can't help you, but if you know the inner product and you know the definition of "orthonormal" you should be able to do this.

Well, the inner product is defined as |A||B|cos(angle). I don't have numbers for A & B. He says to use the inner product for "general" 2x2 matrices A & B. Also, I must not understand orthonormal very well. If two vectors are orthonormal, then they are perpendicular. The dot product, which I thought was the same as the inner product, would be zero.

The only inner product on 2x2 matrices that I've ever really used is $$(A, B) \equiv \frac{1}{2} \textrm{Tr}( A B^{\dagger}),$$ where Tr(A) is the trace of A and $B^\dagger$ is the Hermitian conjugate of B.

He did give us that formula, but how do I use that without actual numbers for A & B? Should I make I = A and sigma x = B and then sigma y = C and sigma z = D? Then I have $\langle A|B\rangle = \frac12\mathrm{Trace}(A^\dagger B)$ (where $\dagger$ = Hermitian conjugate) and $\langle C|D\rangle = \frac12\mathrm{Trace}(C^\dagger D)$? So, for the real numbers, i.e. the matrices of I and sigma x & y, the complex conjugate of any real number is that real number. So, the Hermitian conjugate for I = A is the same matrix. The trace is 1 and the trace of sigma x = B is zero. So the inner product $\langle A|B\rangle$ is zero? Which makes sense if they are orthonormal, right? The Hermitian conjugate of sigma y = C is also the same matrix, because you reverse the signs of the $i$'s for the complex conjugate and then you transpose, ending up with the same matrix you started with. So, again, the trace of C is zero and then the inner product $\langle C|D\rangle$ is also zero. Am I even remotely on the right track here???

Let $A$ be $\begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix}$ and $B$ be $\begin{pmatrix} b_1 & b_2 \\ b_3 & b_4 \end{pmatrix}$. Well, you should know how to find the Hermitian conjugate of B, you should know how to multiply A and B*, and you should know how to find the trace of AB*. So basically you have a formula that's "ready-to-go", and you just need to know the entries of your matrices in order to compute the inner product; you won't have to do the multiplication or look at the diagonal each time.

If you express these matrices with respect to some basis, then you will be expressing each as a 4-tuple. If we have a matrix A, then we can call the corresponding 4-tuple [A]. Then [A]*M[C] defines an inner product so long as M is a 4x4 positive definite matrix.
If we take M = 0.5I, then the defined inner product will be like qbert's, but we can make many different choices for M.

Quote by blanik: He did give us that formula, but how do I use that without actual numbers for A & B?

What do you mean? You do have actual numbers for all the matrices.

Quote by blanik: Should I make I = A and sigma x = B and then sigma y = C and sigma z = D? Then I have $\langle A|B\rangle = \frac12\mathrm{Trace}(A^\dagger B)$ and $\langle C|D\rangle = \frac12\mathrm{Trace}(C^\dagger D)$?

Why are we talking about C, D, etc.? Yes, you apply the provided definition of the inner product to each pair of matrices and see if they give you zero.

Quote by blanik: So, for the real numbers, i.e. the matrices of I and sigma x & y, the complex conjugate of any real number is that real number.

$\sigma_y$ is not a real matrix. But the rest is true.

Quote by blanik: So, the Hermitian conjugate for I = A is the same matrix. The trace is 1 and the trace of sigma x = B is zero. So the inner product $\langle A|B\rangle$ is zero? Which makes sense if they are orthonormal, right?

It should, but that's not the way to find the inner product. You are misapplying the definition. The inner product is (half) the trace of the product, not (half) the product of the traces. First multiply, then find the trace.

Quote by blanik: The Hermitian conjugate of sigma y = C is also the same matrix, because you reverse the signs of the $i$'s for the complex conjugate and then you transpose, ending up with the same matrix you started with.

Correct. In fact, all the matrices are Hermitian, since in each case $a_{ij} = (a_{ji})^*$. If this were in a physics course, you would come across more reasons why these particular matrices want to be Hermitian (or self-adjoint).

Quote by blanik: So, again, the trace of C is zero and then the inner product $\langle C|D\rangle$ is also zero.

Again, you must reverse the order of those operations.

Quote by blanik: Am I even remotely on the right track here???

You're getting there.

This reminds me of one of those facts about the Pauli spin matrices (and more generally, Clifford algebras) that no one ever talks about. Normally one models the spin of elementary particles with spinors, in this case those 2x1 vectors, and one treats the matrices as operators. It would be a little cleaner if the matrices were all you needed. It turns out that it is possible to model spin with the matrices alone. Using the usual basis for the Pauli spin matrices (or any other basis you prefer, or you can refuse to specify a basis and treat it in Clifford algebra form), let $\hat{u} = u_x\hat{x} + u_y \hat{y} + u_z\hat{z}$, where $\hat{u}$ has length one. For a particle with spin in the $+\hat{u}$ direction, associate the particle with a matrix as follows: $$|{+}1/2_u\rangle\langle{+}1/2_u| \equiv (1 + u_x\sigma_x + u_y\sigma_y + u_z\sigma_z)/2.$$ The matrices so defined are "idempotent", which means that when you square them, you get back the same value. To compute the inner product, $\langle A | B \rangle$, compute the usual complex-conjugate product and take the trace. The inner product so computed is not quite the same as the inner product computed from spinors, but in certain ways it is an improvement. From a physical point of view, what we can measure about spin states are probabilities. For example, the probability that a spin-1/2 particle that is oriented in the +z direction will give +1/2 when measured in some other direction.
When computing the transition probability between two spinors, say A and B, one computes $$|\langle A | B \rangle |^2.$$ That is, one computes the spinor inner product, and then takes the magnitude and squares it. For the idempotent matrices defined above, one uses the matrix inner product and then takes the magnitude; there is no need to square it. The result is the same, but using the idempotents eliminates the extra squaring operation. Here's the (somewhat trivial) calculation of the inner product between two such matrices, one associated with spin +1/2 in the +z direction, the other with spin +1/2 in the $+\hat u$ direction, with $\hat{u} = \cos(\theta)\hat{z} + \sin(\theta)\hat{x}$: $$\textrm{tr}\left(\left(\begin{array}{cc}1 & 0 \\ 0 & 0 \end{array}\right) \left(\begin{array}{cc}1+\cos(\theta) & \sin(\theta) \\ \sin(\theta) & 1-\cos(\theta) \end{array}\right)/2\right) = (1 + \cos(\theta))/2.$$ While this is obvious in the above case, it generalizes to any two of these idempotent matrices representing spin in whatever directions you'd like. (Try it if you don't believe me, or better, use $\sigma_x$ notation and the result will be obvious.) Perhaps the idempotents are a more basic method of modeling elementary particles than spinors. Carl
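For readers who want to check the orthonormality claim (and CarlB's idempotency claim) numerically, here is a minimal sketch in Python/NumPy, using qbert's inner product $(A,B)=\frac{1}{2}\textrm{Tr}(AB^\dagger)$; the unit vector chosen at the end is arbitrary:

```python
import numpy as np

# The four matrices from the problem: the identity and the Pauli matrices.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]

def inner(A, B):
    # <A|B> = (1/2) Tr(A B^dagger), the inner product used in the thread.
    return 0.5 * np.trace(A @ B.conj().T)

# Gram matrix of the four basis elements: should be the 4x4 identity.
gram = np.array([[inner(A, B) for B in basis] for A in basis])
assert np.allclose(gram, np.eye(4))

# CarlB's matrix: P = (1 + u.sigma)/2 with |u| = 1 satisfies P @ P = P.
u = np.array([0.6, 0.0, 0.8])          # an arbitrary unit vector
P = 0.5 * (I2 + u[0]*sx + u[1]*sy + u[2]*sz)
assert np.allclose(P @ P, P)
```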
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9215313196182251, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/76176/enumerating-all-subgroups-of-the-symmetric-group/79139
# Enumerating all subgroups of the symmetric group

Is there an efficient way to enumerate the unique subgroups of the symmetric group? Naïvely, for the symmetric group $S_n$ of order $\left | S_n \right | = n!$, there are $2^{n!}$ subsets of the group members that could potentially form a subgroup. In addition, many of these subgroups are going to be isomorphic to each other. I feel that the question has an easy answer in terms of the conjugacy classes, but I don't see how. If the answer generalizes to all finite groups, please elaborate!

Have you looked at the software called GAP? It has such an enumeration routine, and it seems to be reasonably fast. – Ryan Budney Oct 26 '11 at 20:55

By "unique", do you mean "up to isomorphism"? Given that every group of order $n$ is isomorphic to a subgroup of $S_n$, I would expect it to be hard, not easy, to enumerate all subgroups of $S_n$ up to isomorphism. Enumerating all subgroups of, say, $S_{64}$, would include enumerating all groups of order up to $64$; given that there are 294 groups of order 64, it seems like a tall order. – Arturo Magidin Oct 26 '11 at 20:57

@Bill: It's actually more than enumerating all groups of order $n$ and less, because you also have subgroups of $S_n$ of order more than $n$. – Arturo Magidin Oct 26 '11 at 21:15

@Arturo: The group $S_{64}$ has considerably more elements than there are atoms in the universe. The fact that there are 294 groups of size 64 is going to be your least problem, and does not preclude the possibility of efficient subgroup enumeration algorithms (where the running time should be measured as a function of $|S_n|$ and not of $n$). – Alex B. Oct 27 '11 at 0:13

I think conjugacy classes would be relevant if you were interested in normal subgroups (which are necessarily unions of conjugacy classes). – Joel Cohen Oct 27 '11 at 1:02

## 1 Answer

The number of distinct subgroups of the symmetric group on $n$ points is given for $n \leq 13$ in oeis:A005432, the number of conjugacy classes of subgroups is oeis:A000638 for $n \leq 18$, and the number of (abstract) isomorphism classes amongst the subgroups is oeis:A174511 for $n \leq 10$ (I get 894 for n=11, 2065 for n=12, 3845 for n=13, and I think 7872 for n=14). To give a feel for these numbers, I include them in a table below for $n \leq 15$. I also include the number of transitive subgroups of $S_n$, since this is a very different number. The number of conjugacy classes is also known as the number of permutation groups (transitive and intransitive alike). As far as I know, combining the transitive groups to form intransitive groups involves an enormous amount of book-keeping and calculation and so has not been done (the numbers of transitive groups are known up into the 30s and maybe up to $n \leq 63$ by now). I do not include the naive estimate of $2^{n!}$, since for $n=5$ one gets 1329227995784915872903807060280344576, which is quite a bit bigger than the number of subgroups, which is 156.

$$\begin{array}{r|rrrrrrrrrr} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \#sub & 1 & 2 & 6 & 30 & 156 & 1455 & 11300 & 151221 & 1694723 & 29594446 \\ \#ccs & 1 & 2 & 4 & 11 & 19 & 56 & 96 & 296 & 554 & 1593 \\ \#iso & 1 & 2 & 4 & 9 & 16 & 29 & 55 & 137 & 241 & 453 \\ \#trn & 1 & 1 & 2 & 5 & 5 & 16 & 7 & 50 & 34 & 45 \\ \end{array}$$ $$\begin{array}{r|rrrrr} n & 11 & 12 & 13 & 14 & 15 \\ \hline \#sub & 404126228 & 10594925360 & 175238308453 & 5651774693595 & ? \\ \#ccs & 3094 & 10723 & 20832 & 75154 & 159129 \\ \#iso & 894 & 2065 & 3845 & 7872 & ?
\\ \#trn & 8 & 301 & 9 & 63 & 104 \\ \end{array}$$

No known method is particularly "efficient" in $n$; otherwise one would have calculated these quite a bit further. To find the number of subgroups given the conjugacy classes of subgroups, one takes a representative of each conjugacy class of subgroups and sums the indices of the normalizers. In particular, #sub is not much harder than #ccs to calculate, but it is much, much larger and less useful.

The asymptotics of these numbers look rather different from these early terms; they are given in Pyber (1993) and Pyber-Shalev (1997):

• $2^{\left(\tfrac1{16}+o(1)\right)n^2} \leq \#\text{sub} \leq 24^{\left(\tfrac16+o(1)\right)n^2}$, with the lower bound conjectured to be tight; in other words, $\log(\#\text{sub}) = \Theta(n^2)$.

• $\log(\#\text{ccs}) = \Theta(n^2)$ as well, because a subgroup can have at most $n!$ conjugates, and $n!$ is comparatively tiny.

• $C^{n^2/\log(n)} \leq \#\text{iso}$ for some $C>1$.

The lower bounds are mostly obtained by considering $p$-subgroups, which dominate once $n$ is sufficiently large. The upper bound requires the CFSG to control the insoluble subgroups. I didn't see an upper bound for #iso, but of course one can use #iso ≤ #ccs.

The numbers in the row #trn are the numbers of conjugacy classes of transitive subgroups (see this answer for the count for $S_4$). – joriki Mar 30 '12 at 8:37
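The counting identity mentioned above is $\#\text{sub}=\sum_{[H]}[S_n:N_{S_n}(H)]$, summed over conjugacy-class representatives. For very small $n$ the table entries can also be reproduced directly; here is a brute-force sketch in plain Python (not an efficient method, and it leans on the fact that every subgroup of $S_n$ for $n \leq 4$ is generated by at most two elements):

```python
from itertools import permutations, product

def compose(p, q):
    # Composition of permutations stored as tuples: (p*q)(i) = p(q(i)).
    return tuple(p[i] for i in q)

def generated(a, b, identity):
    # Closure of {a, b} under composition; in a finite group this is <a, b>.
    elems = {identity}
    grew = True
    while grew:
        grew = False
        for g in (a, b):
            for h in list(elems):
                x = compose(g, h)
                if x not in elems:
                    elems.add(x)
                    grew = True
    return frozenset(elems)

def count_subgroups(n):
    G = [tuple(p) for p in permutations(range(n))]
    e = tuple(range(n))
    # Every subgroup of S_n for n <= 4 is 2-generated, so pairs suffice here.
    return len({generated(a, b, e) for a, b in product(G, repeat=2)})

print(count_subgroups(3), count_subgroups(4))  # 6 30, matching the table
```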
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431244730949402, "perplexity_flag": "head"}
http://mathoverflow.net/questions/44125/what-is-a-good-introductory-text-for-moduli-theory/44267
## What is a good introductory text for moduli theory?

Hi everyone. I am looking for an introductory textbook on moduli theory. As background in algebraic geometry, I have read Hartshorne, Chapters 1-4. Could you please suggest some good books or a roadmap for studying moduli theory, with an emphasis on the arithmetic aspects? What about Mumford's GIT: is it an introductory textbook? Thank you very much!

If this question remains open it surely should be community wiki. – Grétar Amazeen Oct 29 2010 at 13:15

To Artie Prendergast-Smith: sorry, I have modified the sentence. – kiseki Oct 29 2010 at 13:31

(Deleted my comments, which no longer make sense) – Artie Prendergast-Smith Oct 29 2010 at 14:18

Mumford's GIT (in its several editions) is a fundamental source but certainly not an introductory textbook in my view. – Jim Humphreys Oct 29 2010 at 19:24

## 12 Answers

Ian Morrison wrote up some nice lectures in the book Lectures on Riemann Surfaces, World Scientific, the proceedings of the College on Riemann Surfaces held in 1987 at the ICTP in Trieste. They were intended as an informal introduction to the two detailed treatments mentioned below by Mumford (l'Enseignement) and Gieseker (Tata).

There are notes on Ravi Vakil's web page for his course on deformations and moduli: http://math.stanford.edu/~vakil/727/index.html

There is a nice treatment of the Chow coordinates of a projective variety in Chapter 1 of the book Basic Algebraic Geometry by Shafarevich. This is very elementary and readable. There is a good discussion of the existence of the Hilbert scheme in Mumford's book Lectures on Curves on an Algebraic Surface, Annals of Math Studies #59. Sophisticated, but we were able to use it in a seminar long ago, and got some good insight from it.

Mumford (notes by Morrison) first wrote up the case of stable curves in Stability of Projective Varieties, in l'Enseignement Mathématique, 1977, based on an idea of Gieseker. Then Gieseker himself presented his version at the Tata Institute in Bombay (TIFR), and wrote it up in their series of lectures on mathematics and physics, #69, 1982. The original presentation of the concept of stable curves, due to Alan Mayer and David Mumford, is in talks by Mayer and Mumford at the Woods Hole conference, 1964, available on James Milne's web site at Michigan, or that of Roy Smith (mathwonk) at the University of Georgia.

As I recall, even the detailed works by Mumford (GIT, Enseignement) always include some introductory examples and motivation that anyone can read, so one should not shy away from the actual definitive works completely. In regard to the fine recommendations above, Mukai's is actually a textbook as requested, and not a monograph like most of my recommendations here, but that of course makes it longer.

For beginners, I would observe that the Chow approach is to characterize a projective variety by the set of all lines meeting it, thus getting a subset of the Grassmannian of lines, while the Hilbert approach is to describe a variety by the set of all hypersurfaces of fixed large degree containing it, thus getting a subspace of the vector space of those polynomials, another Grassmannian.
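The simplest instance of this parametrization idea makes the projective structure visible immediately: hypersurfaces of degree $d$ in $\mathbb{P}^n$ are themselves the points of a projective space,

$$\{\text{degree-}d \text{ hypersurfaces in } \mathbb{P}^n\} \longleftrightarrow \mathbb{P}\big(H^0(\mathbb{P}^n,\mathcal{O}(d))\big) \cong \mathbb{P}^N, \qquad N = \binom{n+d}{d} - 1,$$

so in this case the parameter space comes with a natural projective compactification before any quotient is taken.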
Then to characterize abstract varieties, one first chooses some natural projective embedding, say by a multiple of the canonical class, then considers the corresponding Hilbert or Chow scheme, and tries to collapse together all the different embeddings of the same variety, in GIT by taking a quotient by a group action. This then leads to singularities at orbits which are smaller than usual, i.e. at points with non-trivial isotropy coming from automorphisms of the variety. These isotropy groups are included in the data of a moduli "stack", but were always considered informative even earlier.

Since the subject is huge, it helps also to know which aspect is of interest. A moduli space is usually the set of isomorphism classes of objects of a given type. Hence, aside from foundational subtleties, it "exists" as a set. Then the problem is to give it more structure and to prove it has some nice properties. In algebraic geometry one often tries to give it structure as an algebraic space, scheme, or quasi-projective variety, perhaps progressively in that order. So the first job would be to define a natural structure as an abstract topological space or even an abstract scheme.

Next one wants to capture this structure by some "moduli". Classically, "moduli" are numbers that distinguish non-isomorphic objects, i.e. numerical invariants such as projective coordinates in a field, so this translates into the stage of giving a structure of quasi-projective variety. This requires finding embedding functions, or sections of line bundles, which are constant on equivalence classes. If the equivalence classes are orbits of a group action, one seeks functions constant on orbits, i.e. "invariants", and this is the subject of "invariant theory". Since algebraic projective mappings are continuous, their level sets are closed, so the geometric invariant theory problem arises of determining which orbits are closed. This leads into various concepts of "stability" of objects under a given action, and also, since closure is a relative notion, of determining certain unstable subsets to exclude so that the remaining orbits become closed. This is the subject studied by Mumford, in which he adapted ideas of Hilbert.

Finally, one wants to find a good geometric compactification of the given moduli space, since the set of isomorphism classes of a given type is seldom compact. The method of parametrizing moduli spaces by subsets of Hilbert schemes yields a natural compactification, since Hilbert schemes are projective, but since all isomorphism classes of the original type were already present before compactifying, it is unclear what geometric objects the new points added correspond to. This leads to the challenge of identifying the Hilbert scheme compactification with a more abstract compactification which adds in degenerate versions of the original geometric objects. These abstract objects are called, perhaps, "moduli stable" objects of the original kind, and one must show this abstract compact space can be identified with some version of the projectively compactified Hilbert scheme one. The concept of (moduli) stable curves was introduced by Mayer and Mumford, and the next job was to show they give a good abstract separated compactification of M(g). This is presumably the content of the paper of Deligne and Mumford. The proof that they in fact give a natural projective compactification in the Hilbert scheme GIT sense is apparently accomplished in the references of Mumford and Gieseker above.
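For concreteness, the Mayer-Mumford notion just mentioned can be stated in one line: for genus $g \geq 2$, a stable curve is

$$C \ \text{connected, projective, with at worst nodal singularities, and} \ |\operatorname{Aut}(C)| < \infty,$$

where the finiteness of the automorphism group amounts to asking that every smooth rational component of $C$ meet the other components in at least three points.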
Aside from these global aspects of moduli there are local questions, such as: what is the dimension of a (component of a) moduli space, or what is its tangent space? These are the concern of "deformation theory", i.e. the local variations of structure of a given object. Here also one distinguishes deformations of the original objects, usually non-singular varieties or manifolds, as in the works of Kodaira, from deformations of the degenerate objects included at the boundary of the compactification, i.e. deformations of singularities. For the latter there is a nice Tata lecture note by M. Artin, and a recent book by Greuel, Lossen, and Shustin. All sources rely fundamentally on the unpublished 1964 PhD thesis of M. Schlessinger at Harvard.

After all these foundations are settled, it remains to compute invariant properties of the resulting moduli spaces: their singularities, canonical class, Kodaira dimension, Picard group, cohomology, Chow ring, rational curves in them, and so on. For M(g)bar this is still very much in progress.

However, it seems to me most answers, especially mine, are oriented to geometric questions as opposed to the requested arithmetic ones. Should one suggest some works, say by Faltings and Chai?

-
thank you so much for your suggestions – kiseki Nov 1 2010 at 15:58

Read Katz-Mazur, "Arithmetic moduli of elliptic curves" (and for your purposes you can ignore the last chapter, even though it was their motivation for writing the book).

-
I was going to give K-M as an answer too until I saw BCnrd had got there first. It's a fabulous book for the formalism of moduli problems. Big though. – Kevin Buzzard Oct 29 2010 at 19:42

As for somewhat informal (and mostly brief) introductions to moduli spaces, see

• Introduction to moduli spaces by Han-Bom Moon
• The Moduli Space of Curves and Its Tautological Ring by Ravi Vakil in the Notices of the AMS
• An introduction to the moduli spaces of curves by Maarten Hoeve
• Rational Points on Moduli Spaces of Curves by Dave Jensen
• A minicourse on moduli of curves by E. Looijenga

See also the BAMS paper Perturbations, deformations, and variations (and "near-misses") in geometry, physics, and number theory by Barry Mazur, in particular Sections 4 and 5.

-
A very readable introduction is S. Mukai, An Introduction to Invariants and Moduli
-
Huybrechts-Lehn, The Geometry of Moduli Spaces of Sheaves
-
Harris-Morrison, "Moduli of Curves".
-
10 It gives a nice overview, but it isn't terribly precise. For a case in point, see their discussion of stacks. – Donu Arapura Oct 29 2010 at 17:31
I have to agree. Also, the construction of the Hilbert scheme is barely sketched, on page 8. That's a pretty huge shortcut for someone who has just finished reading Hartshorne. I'd suggest learning about this first, for example in Mumford or FGA Explained. Maybe it can be a good book to look at from time to time to measure how familiar you are getting with the general picture. Plus, I find the font used in "Moduli of Curves" really bloated, and this doesn't help reading it. – YBL Apr 27 2011 at 0:26

An Invitation to Quantum Cohomology: Kontsevich's Formula for Rational Plane Curves by Kock and Vainsencher is a really nice introductory book if you also like enumerative problems.
The first part of the book gives a nice introduction to moduli problems with special emphasis on the moduli space of pointed stable curves, $M_{g,n}$. There is also a neat proof of Kontsevich's formula and a chapter on Gromov-Witten invariants.

-
I like the classic Mumford D., Suominen K., "Introduction to the theory of moduli".
-
Another introduction to moduli: C. S. Seshadri, "Theory of Moduli", Proceedings of Symposia in Pure Mathematics, Vol. XXIX (Algebraic Geometry - Arcata 1974), pp. 263-304, American Mathematical Society.
-
I haven't read it (and indeed do not know any GIT), but I think I have read somewhere that Peter Newstead's "Introduction to Moduli Problems and Orbit Spaces" is supposed to be easier to read than Mumford's book on GIT.
-
I have read most of Newstead this term. It is easy to read, barely mentioning schemes, and introducing moduli through basic examples of classifying endomorphisms of a finite-dimensional vector space. Eventually one is led to studying moduli of vector bundles of fixed rank and degree over a curve. With references to [GIT] profuse throughout the text, Newstead feels almost like a prolonged introduction to Mumford's GIT. It might not have the desired arithmetic flavour, but I would recommend it as an introduction to moduli. It may be difficult to find a copy, though.
-
In the late 50s, Grothendieck gave a series of Bourbaki lectures called "Technique de descente et théorèmes d'existence en géométrie algébrique" (available on NUMDAM). These were later compiled in "Fondements de la géométrie algébrique" (FGA). He discussed descent, existence theorems in formal geometry, Hilbert schemes, and Picard schemes. Recently, a series of lectures by Illusie, Nitsure, Vistoli and others, explaining all this in modern language for students, was edited into the book "Fundamental Algebraic Geometry: Grothendieck's FGA Explained". This is a good introduction to many tools used in moduli theory.
-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277774095535278, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/29635/find-a-linear-transformation-such-that-2-vectors-are-mapped-to-these-two-values
# Find a linear transformation such that 2 vectors are mapped to these two values

Find a linear transformation $T\colon \mathbb{R}^2 \to P_2(\mathbb{R})$ such that $T(1,2) = 3 + 4x^2$ and $T(3,5) = 1 - 2x + 3x^2$.

I was thinking of approaching the problem the following way:
$$T \left( \begin{matrix} 1 \\ 2 \end{matrix} \right) = \left( \begin{matrix} 3 \\ 0 \\ 4 \end{matrix} \right)$$
$$T \left( \begin{matrix} 3 \\ 5 \end{matrix} \right) = \left( \begin{matrix} 1 \\ -2 \\ 3 \end{matrix} \right)$$
$$\text{Let } T = \left( \begin{matrix}A & B \\ C & D \\ E & F \end{matrix} \right) \text{ then }$$
$$\begin{align*} A + 2B &= 3\\ C + 2B &= 0\\ E + 2F &= 4\\ 3A + 5B &= 1\\ 3C + 5D &= -2\\ 3E + 5F &= 3 \end{align*}$$
But converting this system of equations into a matrix and performing row reduction gave me
$$T = \left( \begin{matrix} -13 & 8\\-16 & 46/5 \\-14&9 \end{matrix}\right)$$
which, when checked against the two given untransformed vectors, does not give me the right result.

In addition, the question asks to prove that this linear transformation $T$ is unique. How do I show that a transformation is indeed unique?

-
For future reference, questions should be asked as such. Using the imperative mode implies that you are giving orders. – JavaMan Mar 29 '11 at 2:15
The second equation you derived for $T$ is incorrect: it should be $C+2D=0$, not $C+2B=0$, and your solution does not satisfy this. – Arturo Magidin Mar 29 '11 at 4:20
I'm sorry if my question sounded like a command, that was most certainly not my intention. Thank you Arturo for pointing this out, it appears I was too tired to notice this stupid mistake. – Millianz Mar 29 '11 at 15:37

## 1 Answer

Note that $(1,2)$ and $(3,5)$ form a basis of $\mathbb{R}^2$. A linear transformation is completely determined by its action on a basis, so that gives uniqueness.

More explicitly: Since $(0,1) = 3(1,2) - (3,5)$ and $(1,0)=-5(1,2)+2(3,5)$, knowing what $T$ does to $(1,2)$ and to $(3,5)$, and knowing that $T$ is linear, tells you what $T$ does to $(1,0)$ and to $(0,1)$, which tells you what $T$ does to everything, and gives you an easy way to write down what $T$ does to any vector $(a,b)$ (in terms of what it does to $(1,0)$ and to $(0,1)$).

Since everything will be forced by the values of $T(1,2)$ and of $T(3,5)$, if $U$ were any linear transformation that has the same values at $(1,2)$ and at $(3,5)$ as $T$ does, then it would also agree with $T$ on $(1,0)$ and on $(0,1)$, and therefore $T$ and $U$ would agree everywhere, that is, $T=U$. (This is the "standard" way of proving uniqueness: assume you have two objects [in this case, linear transformations] that have the properties you want, and show that this implies that they are equal.)

The moral here is: if you know what a linear transformation does to a basis, you know what the linear transformation does to everything. (And you can define a linear transformation by specifying what it does to a basis.)

-
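For concreteness, here is the computation the answer describes, carried out explicitly (a worked check added here, not part of the original thread). Using $(1,0) = -5(1,2) + 2(3,5)$ and $(0,1) = 3(1,2) - (3,5)$:
$$T(1,0) = -5(3+4x^2) + 2(1-2x+3x^2) = -13 - 4x - 14x^2,$$
$$T(0,1) = 3(3+4x^2) - (1-2x+3x^2) = 8 + 2x + 9x^2,$$
so with respect to the basis $\{1, x, x^2\}$ of $P_2(\mathbb{R})$,
$$T = \left( \begin{matrix} -13 & 8 \\ -4 & 2 \\ -14 & 9 \end{matrix} \right),$$
and one checks directly that $T(1,2) = 3 + 4x^2$ and $T(3,5) = 1 - 2x + 3x^2$, as required. Note this agrees with the matrix in the question except in the middle row, which is exactly where the miscopied equation $C+2B=0$ (instead of $C+2D=0$) entered.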
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9631763100624084, "perplexity_flag": "head"}
http://mathhelpforum.com/calculators/86272-should-i-upgrade-83-89-next-semester.html
Thread: Should I upgrade from an 83 to an 89 for next semester?

1. Right now I have a TI-83 that I have had since eighth grade. At this point (wrapping up my first semester of Calculus) I use it infrequently -- mostly to find decimal values and to double-check my arithmetic before I submit work. My university "recommends, but does not require, a calculator capable of advanced functions (such as a TI-89) for use as a learning tool to aid students in familiarizing themselves with the material and concepts presented. However, no student will be allowed access to a calculator of any kind during any test or assignment which contributes to the student's final grade." Thus far I have put off upgrading in part because of the expense, and in part because I was worried about becoming dependent on the calculator's capabilities. However, at this point I am starting to wonder if not having one is going to put me at a disadvantage as the material becomes harder. What is your opinion -- is a TI-89 a valuable learning tool or an expensive crutch?

2. Originally Posted by sinewave85 [...]
If the calculator is not used in the exam then you're not disadvantaged by not having one. Most things you might like to do (checking assignment work etc.) can be done using on-line freeware.

3. Originally Posted by sinewave85 [...]
Definitely not. I have used a TI-82 since high school (I'm now a sophomore) and it has been more than enough until now. Furthermore, we are not allowed to use any calculator in maths classes, though in physics we are.

4. Hello sinewave85! This is a very good question.
I was in a similar boat, as I had a TI-83 for 9th-11th grade and got my TI-89 my senior year of high school. The TI-89 definitely has some amazing features that can be useful, if applied the right way. If not, like you said, it becomes a crutch. Whether that would happen or not would be up to you. Like others have said, I agree that you would not be at a disadvantage to other students if you didn't buy one. You would just need to find other ways of achieving the same things, which isn't too difficult.

Cliffnotes:
1) I would buy one if it is affordable for you
2) If you don't want to or don't have the money now, it won't hurt you
3) It will only become a crutch if you allow it to

Jameson

5. Originally Posted by The Second Solution [...]
Thanks for the advice! I should look into what kind of freeware is out there.

6. Originally Posted by Jameson [...]
Thanks so much for all the advice! My main interest in one is, as The Second Solution said, the ability to check over my work -- both major steps of complex problems and finished solutions. The cost is significant for me, but so is the time. I find myself spending almost as much time going back over my work line by line as I do working the problems in the first place, and it adds up to a lot of time each week spent on my one math course. With relatively low-intensity freshman courses, I have found the time, but I worry that may change as I get into higher-level work. If I could, for instance, enter something like this $\frac{d}{dx}\left(\sin^{-1}(xy) + \frac{\pi}{2} = \cos^{-1}(y)\right)$ and see quickly if I had the right answer, that would be great. It is not so much that I worry about abusing it (putting questions through the calculator without first doing the work on paper) but rather becoming so attached to it psychologically that I feel lost without it. Since I am a distance student, about 95% of my final grade is one four-hour test at the end of the semester -- a bad time to be switching habits. As I said, thanks to all for the advice. I felt kind of silly asking this question, and I appreciate the thoughtful responses! Now, I suppose, it is just a matter of weighing my priorities.

7. I agree this comes down to your priorities. I don't know your financial situation, but I think that in the end the cost of the calculator shouldn't be the determining factor. You could work part time or live meekly for a while to earn the \$100-something it costs. Remember your time spent learning is an investment and tools like this can pay out many times over in the future.
That isn't to tell you to buy it no matter what, but just something to think about.

Finally, I think you should think about this. After Calculus III (multi-variable) and Differential Equations I (introductory class), computations are less and less a part of math classes. Higher-level math classes deal with generalities and proofs and stop asking questions where the answer is a number. I think that it would be easily possible that you would stop using any calculator for the last year or so of undergraduate math study.

That's about every possible thing I can think of on this topic, so good luck

8. Originally Posted by sinewave85 [...]
An on-line integrator: Wolfram Mathematica Online Integrator
An on-line derivative calculator: Step-by-Step Derivatives

9. Originally Posted by Jameson [...]
Ok, you've convinced me. I will budget for one over the summer. I have no illusions about growing up to be a mathematician; my goal is just to make it through enough math to keep my degree options open. Thanks for all of the good advice, Jameson!

10. Originally Posted by mr fantastic [...]
Thanks so much for the links. Those will help a lot! I really appreciate all of the input.

11. I happen to own an 89. In my Calculus II class the exams are given in two parts. Anything involving integration of any type is on the non-calculator portion of the test. So I spent all that money on it and don't even get to take advantage of any of its features. I mean, I would just like to be able to use it for things like factoring or polynomial long division, or any other areas where I am likely to make a careless mistake, but no, I am not allowed a calculator at all. Do yourself a favor, save your money.

12. Originally Posted by gammaman [...]
Thanks for the input, gammaman. Not being allowed to use the calculator on tests is on the cons list for me.
I wonder, however, if you find the calculator helpful or enlightening in daily work. Does it make it easier to work through the material efficiently or to understand the concepts presented? Or do you not use it at all, given its exclusion from tests?
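Since the thread keeps coming back to free tools for checking work, here is a minimal sketch of how the implicit-differentiation example from post #6 could be checked with the free SymPy library (an illustration added here; none of this code comes from the thread):

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')(x)  # treat y as an unknown function of x

# the relation from post #6:  asin(x*y) + pi/2 = acos(y)
relation = sp.asin(x * y) + sp.pi / 2 - sp.acos(y)

# differentiate both sides with respect to x, then solve for dy/dx
dydx = sp.solve(sp.Eq(sp.diff(relation, x), 0), sp.Derivative(y, x))[0]
print(sp.simplify(dydx))
```

Running this prints the implicit derivative $dy/dx$, which can then be compared against a hand computation, much as the thread suggests doing with the online integrator and derivative calculators.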
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9703741073608398, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=34251
Physics Forums

## A rolling problem

This is a rather neat mechanics problem (in my opinion at least): A sphere with radius r and mass m sits on top of a cylinder with radius R and mass M that can rotate without friction about its axis (normal to the direction of gravity). The sphere's moment of inertia with respect to an axis through its center (which is also its C.M.) is given as $$I_{s}=mk_{s}r^{2}$$. The cylinder's moment of inertia with respect to its axis (on which its C.M. lies) is given by $$I_{c}=Mk_{c}R^{2}$$.

The problem: The sphere is displaced from equilibrium and begins rolling down the cylinder. Find the angular velocity of the cylinder when the sphere loses contact with the cylinder.

This is a type of problem I have solved before; the trick is simply to concentrate on the energy gained through falling and the energy lost to rotation. The same kind of answer can be applied to shapes other than cylinders. A much greater and more informative problem is to attempt to simulate this with a numerical integration simulator using finite time steps. It illustrates that the concept of smooth rolling is problematic -- under simulation the ball or top cylinder always bounces under the initial conditions without which it will not start. The bounces become more pronounced with time, and the cylinder leaves the surface at discrete points determined by the last bounce, usually with a trajectory which closely follows the substrate. The simulation usually fails to be accurate due to computer rounding, or you lose patience with long-running simulations. However, it indicates this may be a real situation -- the substrate has to deform in order to create a resisting force, and the ball has to be off center to start; it's a messy situation ignored by the simple energy maths. Ray.

Re: "A much greater and informative problem is to attempt to Simulate this by a numerical integration simulator.." And whatever has that to do with "Brain Teasers"? (I always thought this was in the domain of proper research/applied maths, but perhaps I'm wrong..)

Can't seem to find the problem with the constraint $$r_1 \omega_1 = r_2 \omega_2$$ But this results in the sphere never falling off! Where am I screwing up?

You mean that you wish to simplify the problem so that you can solve it -- well, that's okay, but not much of a teaser. Sorry, what I meant to say was that sometimes puzzles have hidden difficulties which, unless pointed out, may invalidate the puzzle. Here the hidden question is whether the top cylinder is always in contact as it rolls; if not, then it is not continuously accelerated and also bounces. To start the ball you have to assume a surface depression; this accelerates the ball upwards and sideways, and therefore it will leave the surface as the depression reaches maximum rebound. The second point is that rolling assumes a coefficient of friction which, unless stated, does not tell you whether slippage occurs on contact. You see, I am not attacking here just for the sake of it; this puzzle has a wealth of hidden complexity. If you are not interested, ignore it, but others may wish to think about it.
rayjohn01: You seem to think that the theory of rolling does not provide accurate answers. It does, down to some length scale that must be/have been determined by experiment. Experiment, mind you, not numerical simulation! When numerical simulations enable us to gain levels of accuracy in our predictions hitherto unreached (i.e., are accurate down to some smaller length scale, or provide us with "acceptably" accurate answers to analytically intractable problems), that's just great (it means science is progressing). I am not at all uninterested in the bouncing behaviour you described; in fact, I found it very fascinating on its own. However, this, in my view, belongs to serious, evolving science, and I really don't see the appropriateness of such matters in a light-hearted "Brain teaser" forum. Now, I doubt the "problem" tickles your brain much, but have you given thought that it may tickle others'? The problem is rather instructive, and has quite a few pitfalls buried in it. (Of course, as long as one is cool-headed, it's quite easy.) Your being overly dismissive of a problem in the first place caused my rather waspish comment. BTW, while it is of course true that normal forces are developed by deformations of materials, it does not follow that you need to assume such deformations and rebound in order to get the system moving. Just letting the initial angle to the vertical be a tiny, non-zero angle is enough.

Quote by Gokul43201 [...]
You'll need to use 3 angular velocities in order to solve the problem.

I was hoping to solve it in the frame where the cylinder has a pure rotation. 3 angular velocities? Yes. My w1 is for the cylinder and w2 is the angular velocity of the sphere about its own axis (passing through the CoM of the sphere). w3 is not an independent variable, if there's no slipping. So far, there's only one thing that I believe I've figured correctly: I'm not cool-headed. But I shall use the excuse that I haven't really thought about this seriously yet. Doesn't strike me as a brain-teaser type problem... but that only means I haven't thought of the best way to solve it.

To Arildno: you're probably right; I am used to a less formal forum where they mix topics in a very higgledy-piggledy way. A reason for my reply was that I remember a puzzle in Scientific American on a simple pool shot (last year or so) which at first sight appeared harmless but on further scrutiny turned out to be a horrendous problem involving infinite recursions; they had assumed a zero-point ball without stating the pocket size. As regards your puzzle, this is what worries me -- surfaces are never totally regular even at the atomic level; if the ball ever leaves the surface it appears to get chaotic, with the bounces growing and getting longer, and as a result it never appears to leave the surface at the mathematically predicted point. If you try to do an experiment, the closeness of the trajectory prevents a simple view of what happens (ignoring other physical properties). I realise this is not light-hearted stuff, but the truth rarely is. Yours, Ray.

Quote by Gokul43201 [...]
There is to be no slipping. The 3 angular velocities are related to each other by various equations. The first is that of the cylinder; the second is the angular velocity by which the C.M. of the sphere changes its position relative to the fixed vertical. (Clearly, these are not the same, since that would imply that the material points on the cylinder on the line connecting the cylinder axis and the C.M. of the sphere remain the same points through time.) The third angular velocity is the angular velocity of the sphere.

Quote by rayjohn01 If you try to do an experiment..
This is an example of what 17th-19th century experimental physicists did with extreme care and in painstaking detail. It is well known that, in the vast majority of cases, classical mechanics approximations (including the smooth rolling theory) predicted the measured quantities with great success (up, of course, to some error margin). Once again, I am delighted by the discovery of complexity, not frightened by it.

Quote by arildno [...]
Okay, your w1, w2, w3 are my w1, w3, w2 respectively. Not going to look at this till tomorrow morn.

It's time I disclosed my reason for posting this problem. The major reason why I did so is that the problem is one of the simplest that highlights the intricacies involved in the standard definition of "rolling". The basic solution technique is, as rayjohn pointed out, that of using energy conservation. This can be regarded as the "trivial" part of the problem. The "hard" part is to state the rolling condition correctly! (I call this "hard", since most textbooks I've seen either skip it entirely, or formulate it in an ambiguous way that might easily lead to false conclusions, because the examples typically used may, in a subtle manner, lead to false generalizations. More about this later on.)

1. The basic contact point condition in energy conservation:

Let's first "forget" the rolling aspect, and instead focus on answering the question: What must the contact point condition be, in order for the system cylinder + sphere to be energy-conservative? Clearly, it must be: The relative velocity between the contact point on the cylinder and the contact point on the sphere must be zero! (Otherwise, the internal force acting between them would do work on the entire system.) Or, differently stated, the contact point velocities must be equal.

Let us assume that the cylinder (radius R) rotates with angular velocity $$\omega_{c}$$. Since the C.M.
of the sphere (radius r) moves in a circular orbit about the cylinder axis, we assign to it an angular velocity $$\omega_{C.M}$$. The sphere itself has an angular velocity attached to it, which we write as $$\omega_{s}=\omega_{C.M.}+\omega_{rel}$$ Hence we have, for equal contact point velocities, $$R\omega_{c}=(R+r)\omega_{C.M}-r\omega_{s}$$ or: $$R\omega_{c}=R\omega_{C.M}-r\omega_{rel} \qquad (1)$$ This is the correct contact point condition for energy conservation. I believe eq. (1) raises some eyebrows, since the sphere's (total) angular velocity is not an explicit parameter!! I will now state the standard "rolling" condition, and show that this, correctly interpreted, is in fact equivalent to (1).

2. The rolling condition:

The classical rolling condition is that, given objects 1 and 2, they are said to roll on each other if the "contact-point arclength" (denoted either as "CPA" or "s") as traced out on object 1 equals the CPA as traced out on object 2. Or, in terms of velocities, we have: $$\dot{s}_{1}=\dot{s}_{2} \qquad (2)$$ Equation (2) is the classical rolling condition.

Let us rewrite eq. (1) in the form: $$R(\omega_{C.M}-\omega_{c})=r\omega_{rel} \qquad (3)$$ Now, the left-hand side of (3) is easy to interpret: $$R\omega_{C.M}$$ is the velocity by which the contact point slides down the cylinder surface, because the contact point always lies at distance R on the line connecting the cylinder axis and the sphere's C.M. (and that line, clearly, rotates with the C.M.'s angular velocity). But, since the cylinder itself rotates, the actual CPA-velocity is not given by $$R\omega_{C.M}$$! Because a material point on the cylinder moves with velocity $$R\omega_{c}$$, the CPA-velocity on the cylinder must be given by their difference, namely the left-hand side of (3). Therefore, in order that the rolling condition be equivalent to the energy conservation demand, I must show that $$r\omega_{rel}$$ is in fact the correct expression for the CPA-velocity as traced out on the sphere. (I'll do that later..)

In order to get a grip on the CPA-velocity concept, we need to introduce a very specific frame of reference, namely the sliding reference frame (SFR). Consider an object O moving in some manner on a surface S. One reference frame is of particular importance, namely the reference frame with its origin at the instantaneous point of contact, and its axes parallel with the normal vector and the tangents at the contact point (the SFR). (Since, for differentiable surfaces, the tangent planes at the point of contact must coincide, the SFR is unique.)

Now, to the terminology of "sliding". Clearly, if O is at rest relative to the SFR, its material point of contact remains the same throughout the motion. We then say that O is sliding along S, since no CPA is traced out on O (i.e., the CPA on O remains a single point). (Conversely, if the contact point remains on the same material point on S, whereas O rotates, we say that O is "spinning" on S.)

Let us now assume that the contact point on O has a relative (parting) tangential velocity with respect to the SFR frame (we neglect relative normal velocity, since then O would leave S). Then a CPA will be traced out on O in the opposite direction, with the same speed. Hence, we are able to state the following relation for the CPA-velocity: The CPA-velocity of an object is the negative tangential component of the object's contact point velocity, computed relative to the SFR frame.
We may, of course, interchange the roles played by O and S in the above argument, and since the SFR velocity is the same for both objects, the rolling condition is seen to imply that the (absolute) contact point velocity of each object must be the same. But this is the energy conservation demand.

It is a curious fact that the CPA concept must be formulated in a (generally) blatantly non-inertial reference frame that doesn't have the decency to be even a body-fixed reference frame!! Due to this intricacy, I would advocate abolishing all references to the classical rolling condition in modern physics education, and instead using only the simple energy conservation demand. I will post some further arguments for this position later.
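For readers who want a number at the end, here is a small self-contained sketch (my own addition, not posted in the thread) of one consistent way to finish the problem. It combines (i) energy conservation together with the contact-point condition above, (ii) the additional assumption that the sphere's and cylinder's spins stay proportional because both are driven by the same contact force, integrated from rest, and (iii) the detachment condition that the normal force vanishes, $g\cos\theta=(R+r)\dot\theta^{2}$. Treat it as a sanity check under those stated assumptions rather than as a definitive solution of the posted problem.

```python
import numpy as np

def detachment(m, M, r, R, ks, kc, g=9.81):
    """Angle (degrees) and cylinder angular velocity (rad/s) at the
    moment the sphere leaves the cylinder, starting from rest on top.

    Assumed relations (see the text above; not taken from the thread):
      rolling:   R*w_cyl = (R + r)*theta_dot - r*w_sph
      coupling:  m*ks*r*w_sph = M*kc*R*w_cyl
      energy:    m*g*(R+r)*(1 - cos t) = (1/2)*m*(R+r)**2*theta_dot**2
                   + (1/2)*m*ks*r**2*w_sph**2 + (1/2)*M*kc*R**2*w_cyl**2
    """
    mu = M * kc / (m * ks)               # spin-coupling ratio
    K = 1.0 + ks * mu / (1.0 + mu)       # effective inertia factor
    cos_t = 2.0 / (2.0 + K)              # from N = 0 at detachment
    theta_dot = np.sqrt(2.0 * g * (1.0 - cos_t) / (K * (R + r)))
    w_cyl = (R + r) * theta_dot / (R * (1.0 + mu))
    return np.degrees(np.arccos(cos_t)), w_cyl

# Sanity check: a solid sphere (ks = 2/5) on an effectively fixed
# cylinder (huge M) should leave at cos(theta) = 10/17, about 54 degrees.
print(detachment(m=1.0, M=1e9, r=0.1, R=1.0, ks=0.4, kc=0.5))
```

In the fixed-cylinder limit this reproduces the classical textbook answer, which is some evidence that the bookkeeping of the three angular velocities above is consistent.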
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9375287890434265, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/112748/what-manifolds-can-have-a-non-piecewise-linear-structure
## What manifolds can have a (non-piecewise) linear structure?

By the definition I'm using, all manifolds are Hausdorff and second countable. For every non-negative integer $n$, I define $B_n$ to be $\big\{\mathbf{v} \in \mathbf{R}^n : \|\mathbf{v}\|<1\big\}$. For what manifolds does there exist an atlas of charts $c : U\to B_n$ such that the transition maps are all locally affine? (Replacing "locally affine" with "piecewise affine" would make the answer, by definition, those manifolds for which there exists a piecewise linear structure.)

-
Are you looking for sufficient conditions or a list of examples? – Misha Nov 18 at 13:43

## 1 Answer

If I am right, such manifolds are called affine manifolds. They are smooth manifolds together with a flat, torsion-free connection. Maybe it is worth recalling Chern's conjecture that the Euler characteristic of an affine manifold should vanish. Kostant B. and Sullivan D., in "The Euler characteristic of an affine space form is zero", Bull. Amer. Math. Soc. 81 (1975), no. 5, 937-938, proved this conjecture in the case of the quotient of the ordinary space ${\mathbb R}^n$ by a discrete group of affine transformations.

-
2 David -- a small remark: the Chern conjecture is for compact affine manifolds; otherwise it is obviously false (take the Euclidean 3-space minus a point). – algori Nov 18 at 14:23
algori -- thank you for your remark. – David C Nov 18 at 14:44
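A standard example to keep in mind (an addition here, not part of the original answer): the torus $T^n = \mathbb{R}^n/\mathbb{Z}^n$ carries such an atlas, since local inverses of the quotient map give charts whose transition maps are translations $\mathbf{v} \mapsto \mathbf{v} + \lambda$ with $\lambda \in \mathbb{Z}^n$, hence affine. Consistently with Chern's conjecture, $\chi(T^n) = 0$.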
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8914503455162048, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/2051/why-do-we-think-there-are-only-three-generations-of-fundamental-particles/17500
# Why do we think there are only three generations of fundamental particles?

In the standard model of particle physics, there are three generations of quarks (up/down, strange/charm, and top/bottom), along with three generations of leptons (electron, muon, and tau). All of these particles have been observed experimentally, and we don't seem to have seen anything new along these lines. A priori, this doesn't eliminate the possibility of a fourth generation, but the physicists I've spoken to do not think additional generations are likely.

Question: What sort of theoretical or experimental reasons do we have for this limitation?

One reason I heard from my officemate is that we haven't seen new neutrinos. Neutrinos seem to be light enough that if another generation's neutrino is too heavy to be detected, then the corresponding quarks would be massive enough that new physics might interfere with their existence. This suggests the question: is there a general rule relating neutrino masses to quark masses, or would an exceptionally heavy neutrino just look bizarre but otherwise be okay with our current state of knowledge?

Another reason I've heard involves the Yukawa coupling between quarks and the Higgs field. Apparently, if quark masses get much beyond the top quark mass, the coupling gets strong enough that QCD fails to accurately describe the resulting theory. My wild guess is that this really means perturbative expansions in Feynman diagrams don't even pretend to converge, but that it may not necessarily eliminate alternative techniques like lattice QCD (about which I know nothing).

Additional reasons would be greatly appreciated, and any words or references (the more mathy the better) that would help to illuminate the previous paragraphs would be nice.

-
2 Excellent question! I remember reading something about this that I'll see if I can look up if nobody else gets there first. – David Zaslavsky♦ Dec 18 '10 at 22:38
I agree, good question. And I also remember reading an answer to this question some time ago. – Noldorin Dec 19 '10 at 2:04

## 5 Answers

There are very good experimental limits on light neutrinos that have the same electroweak couplings as the neutrinos in the first 3 generations, coming from the measured width of the $Z$ boson. Here light means $m_\nu < m_Z/2$. Note this does not involve direct detection of neutrinos; it is an indirect measurement based on the calculation of the $Z$ width given the number of light neutrinos. Here's the PDG citation: http://pdg.lbl.gov/2010/listings/rpp2010-list-number-neutrino-types.pdf

There is also a cosmological bound on the number of neutrino generations coming from the production of helium during big-bang nucleosynthesis. This is discussed in "The Early Universe" by Kolb and Turner, although I am sure there are now more up-to-date reviews. This bound is around 3 or 4.

There is no direct relationship between quark and neutrino masses, although you can derive possible relations by embedding the Standard Model in various GUTs such as those based on $SO(10)$ or $E_6$. The most straightforward explanation in such models of why neutrinos are light is called the see-saw mechanism http://en.wikipedia.org/wiki/Seesaw_mechanism and leads to neutrino masses $m_\nu \sim m_q^2/M$ where $M$ is some large mass scale on the order of $10^{11}~\mathrm{GeV}$ associated with the vacuum expectation value of some Higgs field that plays a role in breaking the GUT symmetry down to $SU(3) \times SU(2) \times U(1)$.
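(As a rough order-of-magnitude check of this estimate, added here as an aside: taking $m_q \sim m_t \approx 170\ \mathrm{GeV}$ gives
$$m_\nu \sim \frac{m_q^2}{M} \approx \frac{(170\ \mathrm{GeV})^2}{10^{11}\ \mathrm{GeV}} \approx 3\times 10^{-7}\ \mathrm{GeV} \approx 0.3\ \mathrm{eV},$$
which is indeed the right ballpark for the observed neutrino mass scale.)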
If the same mechanism is at play for additional generations, one would expect the neutrinos to be lighter than $M_Z$ even if the quarks are quite heavy. Also, as you mentioned, if you try to make fourth or higher generations very heavy, you have to increase the Yukawa coupling to the point that you are outside the range of perturbation theory. These are rough theoretical explanations, and the full story is much more complicated, but the combination of the excellent experimental limits, cosmological bounds, and theoretical expectations makes most people skeptical of further generations.

-
Thank you; this was very informative (no need to apologize). – Scott Carnahan Dec 19 '10 at 9:08

One part of the answer to this question is that the neutrinos are Majorana particles (or Weyl--- the two are the same in 4d), which can only acquire mass from nonrenormalizable corrections. The neutrinos do not have a right-handed partner in an accessible energy range. If there is such a partner, it is very, very heavy. So this means that they have to be exactly massless if the standard model is exactly renormalizable.

The interactions that give neutrinos mass are two-Higgs two-lepton scattering events in the standard model Lagrangian, where the term is $HHLL$ with the SU(2) indices of each H contracted with an L. This term gives neutrino masses, but is dimension 5, so it is suppressed by the natural energy scale, which is $10^{16}$ GeV, the GUT scale. This gives the measured neutrino masses. This term also rules out a low-energy Planck scale. If you have another generation, the next neutrino would have to be light, just because of this suppression. There is no way to couple the Higgs to the next neutrino much more strongly than the other three. There are only 3 light neutrinos, as revealed by the Z width and BBN, as others said.

-
2 It is unclear whether or not neutrinos are Majorana particles or Dirac particles. It is more elegant if they are the former, but it is an open question in particle physics. – Columbia Sep 10 '11 at 22:14
@Columbia: Neutrinos are Majorana in the standard model, and they are certainly Majorana in real life, although I agree that experimentally it is an open question. Sterile neutrinos can have any mass you like; they are not stabilized to be zero mass by a gauge charge, so they require a ridiculous fine tuning to be TeV mass, let alone eV mass. Barring any evidence that they are there, this possibility should be excluded a priori. – Ron Maimon Sep 11 '11 at 1:52
@Ron Maimon: excluding anything that is possible a priori is ridiculous, and has historically led to all sorts of insanity, like the energists rejecting Boltzmann, or Einstein's war on Quantum Mechanics. Be open to and consider all possible options. Especially since large-mass sterile neutrinos make at least a plausible dark matter candidate. – Jerry Schirmer Sep 14 '11 at 20:52
@Jerry Schirmer: then why don't you consider that the Higgs Lagrangian might break rotational invariance a little bit? – Ron Maimon Sep 15 '11 at 9:15
1 Um, I don't think many people besides Lubos would argue that string theory is accepted true science. I know that the string theory professors at my graduate department certainly didn't. – Jerry Schirmer Sep 17 '11 at 18:31

My research involves a geometric model of spin-1/2 particles, though the discussion of the three generations is beyond the scope of my thesis.
However, if I can figure out how to mention this speculation in the Future Work section at the end of my thesis, I will probably do so. I can't help but marvel at the coincidence of the number three for generations as well as for dimensions of space (where the inertial reference frame fixes the time dimension related to the spatial dimensions). If spin were treated as an oscillation (not just an "intrinsic angular momentum"), then higher-generational particles could have more complicated modes of oscillation: second- and third-generation particles could have two- and three-dimensional spin modes, respectively. If spin were somehow related to mass (which the magnetic dipole moment seems to say it is), then the greater masses of the higher-generational particles could be explained by these higher-dimensional oscillations. Somehow. :)

I am only putting this idea out because I don't suspect I will have the chance to investigate it myself in a more thorough manner. But who knows, maybe I will, and maybe your comments on the idea will help me hone it. Or maybe someone else will take it and run with it, which is fine with me as long as I am mentioned in the credits somewhere. ;)

-
I am upvoting because I think this is an idea that we have all enjoyed speculating about when we were youngsters, digging into quaternions and Clifford algebras and alternative vector products... It is bold of you to state it explicitly. – arivero Sep 17 '11 at 1:09
This idea is not original--- the idea that the muon is an oscillation excitation is as old as the muon. It is not the best model available today, because the quarks and leptons are fundamental. – Ron Maimon Nov 27 '11 at 7:33
+1 for relating mass to higher oscillation modes in three dimensions. – Stefan Bischof Mar 18 at 23:21

If instead of "why do we...", you had asked "why do I...", speculative answers could be considered too. In 25 years I have thought of a few; perhaps some people would like to add more, here as community wiki (doesn't generate rep) or in the comments if rep is less than 100.

• Lock between colour and flavour.
• The mass matrix would need to be 3x3 for some reason.
• The mass matrix is 3x3 as a minimum to violate CP (but it could then be more).
• The mass matrix is 3x3 in order to use the involved lengths in some discretization of the calculus of derivatives up to second order. Related to the ambiguity of choosing an ordering when quantising terms such as $xp$.
• Three generations come from the relationship between bosonic strings, with 24 transversal directions, and superstrings, with 8. Related to the Leech lattice, heterotic strings, etc.
• Three generations without the neutrinos are 84 helicities, or three generations with right and left neutrinos but excluding the top quark are also 84 helicities. This is also the number of components of the source of the 11D membrane, of M-theory fame.
• Three generations is the only solution for my (@arivero) petty theory, the sBootstrap, to work with leptons besides quarks. And even only with quarks, any other solution is uglier.

-
Theoretical reasons for three generations.

Traditional: A. Anything less than 3 generations could not introduce CP violation into heavy quark decay. This actually led to the prediction of the bottom and top quarks.

GUT/string theory: B. The biggest special Lie group is E8; this happens to split nicely into three copies of E6, producing 3 generations.

-
-1: This is nonsense.
Generations are not copies of the gauge group; E8 does not split into more than one copy of E6, certainly not in standard string compactifications. You don't add up dimensions to figure out how Lie groups split, because some generators disappear completely during the breaking. The generation number is determined by fermionic zero modes on the compactification manifold, not by the Lie group. – Ron Maimon Nov 27 '11 at 7:25
The 248-dimensional adjoint representation of E8 transforms under SU(3)×E6 as: $(8,1) + (1,78) + (3,27) + (\overline{3},\overline{27})$. – Dr BDO Adams Nov 28 '11 at 1:00
What you wrote is the standard embedding of SU(3)×E6 in E8 to reproduce an E6 GUT. Notice that there is only one copy of E6, not three. – Ron Maimon Nov 28 '11 at 2:34
One copy of the forces in the 78 adjoint piece, but three copies of the fermions in the 27-multiplet. Having the forces and fermions in the same group means it has to be a supersymmetric theory, with E8_bosons * E8_fermions. – Dr BDO Adams Dec 30 '11 at 8:54
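(A quick dimension check of the branching quoted in these comments, added as an aside: $8 + 78 + 3\cdot 27 + 3\cdot 27 = 248$, as it must be for the adjoint of E8. Note also that the multiplicity three of the fermionic $27$ comes from the triplet index in the $(3,27)$ factor, i.e. from the $SU(3)$, rather than from three copies of $E_6$ itself.)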
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940630316734314, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/34094/interpolation-on-real-riemann-surfaces/34155
## Interpolation on real Riemann surfaces

Background: Generalizing the notion of upper half plane to compact Riemann surfaces: Suppose $p(x,y) \in \mathbb{R}[x,y]$ is a polynomial in 2 variables with real coefficients, defining a smooth complex plane algebraic curve $C_0 = \{(x,y) \in \mathbb{C}^2 : p(x,y)=0\}$. Let $C$ be the projective closure of $C_0$ in $P^2\mathbb{C}$, and assume that $C$ is also smooth. Since $C$ is defined over the real numbers, it comes equipped with an involution $\sigma:C\rightarrow C$, $\sigma(x,y) = (\overline{x},\overline{y})$. Denote by $X$ the compact Riemann surface associated to $C$, and let $X_\mathbb{R}$ be the set of fixed points of $\sigma$. If the space $X - X_\mathbb{R}$ has exactly two connected components, then $X$ is called a real compact Riemann surface of dividing type, and the two connected components are denoted by $X_+$ and $X_{-}$ (the choice of which is "the positive half plane" and which "the negative half plane" being arbitrary).

And finally, to the question: I am given a real compact Riemann surface of dividing type $X$, and I am interested in interpolation problems for meromorphic functions with conditions such as "all the poles of $f$ lie in the upper half plane". Does anybody know of any previous work in the area? Any known techniques to relate these topological and algebraic constructions?

-
1 Liran, your question seems to be a bit vague... Could you add a bit more details? – Dmitri Aug 1 2010 at 21:19
For example, I would like to know when, for a given set of points $\{a_1,\dots,a_n\}$, there is a meromorphic function whose zeros are exactly $\{a_1,\dots,a_n\}$, and all of whose poles belong to $X_{+}$. Does this clarify? – Liran Shaul Aug 1 2010 at 21:22
Great, this is much more concrete! – Dmitri Aug 1 2010 at 21:48

## 1 Answer

Let me give a version of the question in the comment: Let $X$ be a curve of genus $g$ with a real separating involution, and consider the map $Sym^n(X_+)\to Jac^n(X)$. For which $n$ is this map surjective? Or, in other words, what is the minimal number of poles of a meromorphic function with poles in $X_+$ that guarantees that the zeros can be any collection of points?

This sounds like a very nice question. In the case $g=1$ you can always take $n=2$. Also, for any $g$ you should take $n>g$, because $Sym^g(X)$ maps to $Jac^g(X)$ with degree $1$.

Added. The notation $Sym^n(X)$ means the symmetric power of $X$. Let me explain also why the above is a reformulation of the original question. Indeed, a divisor $\sum_i x_i-\sum_i y_i$ on $X$ is a divisor of a meromorphic function iff it represents zero in $Jac^0(X)$. So if we want to choose the zeros $x_i$ of a meromorphic function $f$ arbitrarily, keeping the poles $y_i$ in $X_+$, it is enough to know that $\sum_i y_i$ can take any value in $Jac^n(X)$ (to cancel the point $\sum_i x_i$). This is exactly the condition that $Sym^n(X_+)\to Jac^n(X)$ is surjective.

-
I am sorry, but could you explain your notation "Sym"? and why is this formulation equivalent? thanks! – Liran Shaul Aug 1 2010 at 22:26
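To illustrate the "represents zero in $Jac^0(X)$" condition in the simplest case (an aside added here; this is standard Abel-Jacobi material, not part of the original answer): for $g=1$, write $X = \mathbb{C}/\Lambda$. Abel's theorem says that the divisor $\sum_i x_i - \sum_i y_i$ is the divisor of a meromorphic (elliptic) function iff the two sums have the same number of terms and
$$\sum_i x_i \equiv \sum_i y_i \pmod{\Lambda}.$$
So prescribing the zeros $x_1,\dots,x_n$ imposes only the single condition that the poles sum to $\sum_i x_i$ in the group $\mathbb{C}/\Lambda$, which makes the surjectivity question for $Sym^n(X_+) \to Jac^n(X)$ concrete.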
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228991270065308, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/symmetry?page=4&sort=active&pagesize=15
# Tagged Questions

### What is the symmetry which is responsible for conservation of mass?
6 answers, 1k views. According to Noether's theorem, all conservation laws originate from invariance of a system to shifts in a certain space. For example conservation of energy stems from invariance to time translation. ...

### Conserved quantum observables from symmetries *with density matrix*
1 answer, 195 views. I've read Ballentine, where he derives the conserved observable operators (momentum, energy, ...) from symmetries of space-time. Can I read up on such a derivation in more detail somewhere else, or even ...

### Weinberg's way of deriving Lie algebra related to a Lie group
1 answer, 367 views. I was reading the second chapter of the first volume of Weinberg's books on QFT. I am quite confused by the way he derives the Lie algebra of a connected Lie group. He starts with a connected Lie ...

### QM and Renormalization (layman)
4 answers, 725 views. I was reading Michio Kaku's Beyond Einstein. In it, I think, he explains that when physicists treat a particle as a geometric point they end up with infinity when calculating the strength of the ...

### Correlation Functions, Symmetries and Measurements
2 answers, 210 views. Is there a book that goes deep into correlation functions? What I'm interested in is a book/article that explains in detail the relation of correlation functions to symmetries and how one can ...

### What's the importance of Noether's theorem in Physics
2 answers, 364 views. The Noether's theorem that I want to mention is the following: Noether's theorem. I know the importance of Noether's contribution to modern algebra. Can anyone write about Noether's theorem in ...

### Why are conformal transformations so prevalent in physics?
2 answers, 376 views. What is it about conformal transformations that makes them so widely applicable in physics? These preserve angles, in other words directions (locally), and I can understand that might be useful. Also, ...

### What are the limitations of the FLRW metric?
2 answers, 157 views. I was wondering, given how in any other area of life making an explosion spherically symmetric is more or less impossible, is there any reason to expect that the universe is? I appreciate that the FLRW ...

### Similar masses and lifetimes of the $\Delta$ baryons
1 answer, 129 views. Why do the four spin-3/2 $\Delta$ baryons have nearly identical masses and lifetimes despite their very different $u$ and $d$ quark compositions?

### Symmetries of separable potential
0 answers, 115 views. For a separable potential, say $x^4+y^4$, its symmetries are degenerate. Is that the generic case for every separable potential? I will explain my question: the potential $x^4+y^4$ has $A_1, B_1, A_2, B_2$, ...

### How to perform a scale (invariance) transformation?
1 answer, 173 views. According to this wikipedia article in the $\phi^4$ section, the equation $$\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\phi(x,t)-\sum_i\frac{\partial^2}{\partial x_i^2}\phi(x,t)+g\,\phi(x,t)^3=0$$ in 4 dimensions is invariant ...

### Is there a 1-1 correspondence between symmetry and group theory?
2 answers, 252 views. The professor in my mathematical physics class introduced the definition of groups and said that group theory is the mathematics of symmetry. He also gave some examples of groups, such as the set ...

### Groups acting on physics - a clarification on electrons and spin
2 answers, 322 views. My first question is fairly basic, but I would like to clarify my understanding. The second question is to turn this into something worth answering. Consider a relativistic electron, described by a ...

### Which symmetry is associated with conservation of flux?
2 answers, 204 views. Which symmetry is associated with conservation of flux (e.g., in electromagnetism)? For example, when working with Gauss's law in electromagnetism, net flux through an arbitrary volume element ...

### Gauge redundancies and global symmetries
0 answers, 313 views. It is often said that a local (gauge) transformation is only a redundancy in the description of spin-one massless particles, to bring the number of degrees of freedom from three down to two. It is often said that ...

### Understanding P-, CP-, CPT-violation etc. in field theory and in relation to the principle of relativity
1 answer, 257 views. I can never get my head around $P$-, $CP$-, and $CPT$-violations and their friends. Since the single term "symmetry" is so overused in physics and one has for example to watch out and ...

### The Energy-Momentum Tensor and the Ward Identity
2 answers, 551 views. I have a question regarding a homework problem for my quantum field theory assignment. For the purposes of the question, we can just assume the Lagrangian is that of a real scalar field: ...

### The Ozma Problem
2 answers, 375 views. The "Ozma problem" was coined by Martin Gardner in his book "The Ambidextrous Universe", based on Project Ozma. Gardner claims that the problem of explaining the human left-right convention would ...

### Particle masses determined by SO(D-2) vs SO(D-1)
2 answers, 179 views. I've recently come across the statement that massless particles arise from $SO(D-2)$ symmetry and massive particles from $SO(D-1)$. I would have guessed that it would be the exact opposite way, but ...

### Symmetric potential and the commutator of parity and Hamiltonian
1 answer, 965 views. In one dimension, how can one prove that the Hamiltonian and the parity operator commute in the case where the potential is symmetric (an even function)? I.e. that $[H, P] = 0$ for $V(x)=V(-x)$.

### Poincaré group vs Galilean group
2 answers, 876 views. One can define the Poincaré group as the group of isometries of Minkowski space. Is its Lie algebra given either by the equations 2.4.12 to 2.4.14 (..as also given in this page - ...

### About symmetry, and about electron density in crystals in particular
3 answers, 265 views. The book Introduction to Solid State Physics by Kittel says: "We have seen that a crystal is invariant under any translation of the form T [...]. Any local physical property of the crystal, such as ...

### Noether's theorem and "translations" of the Hamiltonian function
3 answers, 379 views. In a nutshell, Noether's theorem states that for every continuous symmetry a corresponding conserved quantity exists. Now, the Hamiltonian equations of motion (let's talk about a classical system ...

### Relation between total orbital angular momentum and symmetry of the wavefunction
2 answers, 1k views. My question essentially revolves around multi-electron atoms and spectroscopic terms. I understand the idea that the total wavefunction for fermions should be antisymmetric. Consider as an example, ...

### If all conserved quantities of a system are known, can they be explained by symmetries?
4 answers, 939 views. If a system has $N$ degrees of freedom (DOF) and therefore $N$ independent conserved quantities (integrals of motion), can continuous symmetries with a total of $N$ parameters be found that deliver ...

### SU(N) symmetry and its representations
1 answer, 363 views. If a Lagrangian containing an N-multiplet of fields is invariant under global $\mathbf{SU}(N)$ transformations, does that necessarily imply it is invariant under $\mathbf{SU}(N-1)$, ...

### What does "soft" in "soft symmetry breaking" mean?
2 answers, 644 views. For example, it is stated that if supersymmetry breaking is soft then the stability of the gauge hierarchy can still be maintained.

### Noether theorem with semigroup of symmetry instead of group
3 answers, 405 views. Suppose you have a semigroup instead of the typical group construction in Noether's theorem. Is this interesting? In fact there is no time-reversal symmetry in nature, right? At least not in the same ...

### Expansion in spherical harmonics in cubic symmetry
2 answers, 501 views. Suppose I have an electrostatic potential which I expand in spherical harmonics via $$\sum_{l,m} A^l_m r^n P_l^{|m|}(\cos \theta) e^{im\varphi}$$ and I know that the field has cubic symmetry. Is ...

### How do you derive Noether's theorem when the action combines chiral, antichiral, and full superspace?
2 answers, 197 views. How do you derive Noether's theorem when the action combines chiral, antichiral, and full superspace?

### Why must the deuteron wavefunction be antisymmetric?
1 answer, 687 views. The Wikipedia article on deuterium says this: the deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, ...

### Symmetry breaking
2 answers, 317 views. What is a good place to learn the details of symmetry breaking? What I am looking for is a more serious exposition than the wiki-article, which explains the details, especially the mathematical part, ...

### Is "real" antimatter (odd under C, P, T) unphysical?
4 answers, 383 views. A positron is odd under charge conjugation and parity reversal but nevertheless even with respect to time reversal. Is a theoretical positron which would be odd under all three symmetries (C, P, T) ...

### What sort of experiment would directly test time reversal invariance?
3 answers, 383 views. I guess the title says it all: how could/would you experimentally test whether our universe is truly time reversal invariant, without relying on the CPT theorem? What experiments have been proposed to ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9186093807220459, "perplexity_flag": "middle"}
http://cms.math.ca/10.4153/CJM-2012-060-7
Canadian Mathematical Society (www.cms.math.ca)

Generalized Frobenius Algebras and Hopf Algebras

Read article [PDF: 497KB]
Published: 2013-02-06

• Miodrag Cristian Iovanov, University of Southern California, Department of Mathematics, 3620 South Vermont Ave. KAP 108, Los Angeles, California 90089-2532

Abstract

"Co-Frobenius" coalgebras were introduced as dualizations of Frobenius algebras. We previously showed that they admit left-right symmetric characterizations analogous to those of Frobenius algebras. We consider the more general quasi-co-Frobenius (QcF) coalgebras; the first main result in this paper is that these also admit symmetric characterizations: a coalgebra is QcF if it is weakly isomorphic to its (left, or right) rational dual $Rat(C^*)$, in the sense that certain coproduct or product powers of these objects are isomorphic. Fundamental results of Hopf algebras, such as the equivalent characterizations of Hopf algebras with nonzero integrals as left (or right) co-Frobenius, QcF, semiperfect or with nonzero rational dual, as well as the uniqueness of integrals and a short proof of the bijectivity of the antipode for such Hopf algebras, all follow as a consequence of these results. This gives a purely representation-theoretic approach to many of the basic fundamental results in the theory of Hopf algebras. Furthermore, we introduce a general concept of Frobenius algebra, which makes sense for infinite dimensional and for topological algebras, and specializes to the classical notion in the finite case. This will be a topological algebra $A$ that is isomorphic to its complete topological dual $A^\vee$. We show that $A$ is a (quasi)Frobenius algebra if and only if $A$ is the dual $C^*$ of a (quasi)co-Frobenius coalgebra $C$. We give many examples of co-Frobenius coalgebras and Hopf algebras connected to category theory, homological algebra and the newer q-homological algebra, topology or graph theory, showing the importance of the concept.

Keywords: coalgebra, Hopf algebra, integral, Frobenius, QcF, co-Frobenius

MSC Classifications:
16T15 - Coalgebras and comodules; corings
18G35 - Chain complexes [See also 18E30, 55U15]
16T05 - Hopf algebras and their applications [See also 16S40, 57T05]
20N99 - None of the above, but in this section
18D10 - Monoidal categories (= multiplicative categories), symmetric monoidal categories, braided categories [See also 19D23]
05E10 - Combinatorial aspects of representation theory [See also 20C30]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8524261116981506, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/30504/semi-simple-lie-groups-and-their-fundamental-representations
## Semi-simple Lie groups and their fundamental representations

I've got a really basic question on the representation theory of semi-simple Lie groups. I know that a rank-R semi-simple Lie group possesses R fundamental representations. But is the relation between semi-simple Lie groups and their fundamental representations injective? That is, can two distinct semi-simple Lie groups possess the same set of fundamental representations? I have always just assumed that the answer was no, but I don't know how to prove it (in my defense, I'm an undergrad physicist, not a mathematician, by trade!). Any straightening out of this matter would be most appreciated.

Comments:

- I don't understand this question. A (finite-dimensional) representation of a Lie group $G$ is a homomorphism $G \to \mathrm{GL}(V)$ for some vector space $V$. In other words, it's not just the vector space $V$, but includes the map. Hence if you have two non-isomorphic Lie groups $G$ and $G'$, what does it mean to say that they have the same set of (fundamental) representations? – José Figueroa-O'Farrill Jul 4 2010 at 12:35
- OK, I see what you mean: in physics we often (I suppose lazily) refer to the vector space itself as the representation. Thus it is often said that the up, down and strange quarks constitute a fundamental 'representation' of the group SU3, even though we're just talking about the particles (i.e. the 3 states in that vector space). So I suppose what I really want to know is whether, if you have two non-isomorphic semi-simple groups (like SU2 and SO3), it is possible for the weight diagrams corresponding to the fundamental representations to be the same in each case? – fourthinternational Jul 4 2010 at 13:07
- Are you only interested in compact (connected) Lie groups? For example, how about the compact and split real forms of a common complex (connected) semisimple Lie group (such as ${\rm{SU}}_n$ and ${\rm{SL}}_n(\mathbb{R})$)? – Boyarsky Jul 4 2010 at 13:30
- The terminology has to be a little more precise here: (1) You seem to be talking only about compact semisimple Lie groups, whose irreducible representations are all finite dimensional and correspond naturally to those of a corresponding complex group or its Lie algebra. (2) To get all of the fundamental representations you have to work with a simply connected group, which can cover various proper homomorphic images having the same rank but sharing only some of the irreducible representations. Note too that a weight diagram lives in a space whose dimension equals the rank of the given group. – Jim Humphreys Jul 4 2010 at 13:31
- Thanks for the clarifications. Yes, I'm only interested in connected semi-simple Lie groups. As for compactness, I'm pretty sure a restriction to compact groups would be fine in this context (since the sort of particle physics reps I'm directly interested in are finite-dimensional and unitary). – fourthinternational Jul 4 2010 at 13:44

## 1 Answer

It seems to me that you need to think more about what it is you really want to know.

First, taking your question at face value: it sounds like a version of 20 Questions that starts with "I'm thinking of a semisimple Lie algebra". It is not clear what questions I am allowed to ask.
My overall impression is that we could have an involved discussion about the rules and arrive at the point where I have a set of questions you find acceptable and which allow me to determine the Lie algebra. For example, I find it plausible that the rank and the list of dimensions of fundamental representations determine the Lie algebra. I don't feel inclined to attempt a proof. Any such proof would rely heavily on the classification of simple Lie algebras and tricks involving the lists of dimensions of fundamental representations.

I would also question whether this is the right question from the point of view of the physics. First, we don't know how many fermions there are. All we can say is that we have done these scattering experiments at these energies and this is the list of particles we have seen. Second, if we think we have found all particles, then the gauge bosons form the adjoint representation, so we know the dimension of the adjoint representation. Is this information you would disclose in the game of 20 Questions? Thirdly, I don't know of any reason why the fermions should form a fundamental representation. As I understand it, supersymmetry does impose strong conditions on the representations. My understanding is that there are good physical arguments for restricting the spin to be at most 2 (or maybe less?). I would be interested in seeing these various physical conditions listed, and it would then be a challenging problem to classify the solutions. This must be known in the physics community.

Comments:

- Well, what I really want to know is: if you knew you had a complete set of fundamental representations of a (compact, connected) semi-simple Lie group, but didn't know what the group was, could you figure out what the group was from the information that these were its fundamental representations? (I heard that groups are classified entirely by their 'weight lattices' and 'root lattices', and since these are determined by the fundamental weights, a complete set of fundamental weights should be sufficient to work backwards to identifying the group. But I'd like a reference for this.) – fourthinternational Jul 4 2010 at 22:07
- You also need to say simply-connected (you would have saved some trouble if you had put "Lie algebra" instead of "Lie group"). – Bruce Westbury Jul 5 2010 at 0:13
- The real issue is: what precisely do you mean by "having a complete set of fundamental representations"? – Bruce Westbury Jul 5 2010 at 0:15
- Your comments on weight and root lattices sound confused. Have you attended a course or read a book on the classification of complex simple Lie algebras? – Bruce Westbury Jul 5 2010 at 0:21
- It sounds as though you need a tutorial. There are numerous books on this subject (look in QA252.3 in your library). This site works best for focused questions. – Bruce Westbury Jul 5 2010 at 7:57
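To give the "plausible" claim above some texture, here are the fundamental-representation dimensions for a few low-rank simple Lie algebras (standard values quoted for illustration, not taken from the thread; the rank-2 coincidence reflects an actual isomorphism rather than a failure of the heuristic):

$$\begin{aligned}
A_2=\mathfrak{su}(3)&: & 3,\ \bar{3} \\
B_2=\mathfrak{so}(5)&: & 5,\ 4 \\
C_2=\mathfrak{sp}(4)&: & 4,\ 5 \quad (\text{same multiset: } B_2\cong C_2)\\
G_2&: & 7,\ 14 \\
B_3=\mathfrak{so}(7)&: & 7,\ 21,\ 8 \\
C_3=\mathfrak{sp}(6)&: & 6,\ 14,\ 14'
\end{aligned}$$

At rank 3 the lists $\{7,21,8\}$ and $\{6,14,14\}$ already separate $B_3$ from $C_3$, consistent with the answer's guess that the rank together with the dimension list is very close to a complete invariant.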
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467942118644714, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/247975/logic-functions-and-statements
# Logic: Functions and statements

What is the relationship between the concept of an equation (a statement) and the concept of a function (and the concept of morphisms in category theory)? I'm going to use equations as the most important subclass of statements.

To take a seemingly trivial example, consider the equation `(E1) 2*x = 5`. This equation could be rewritten as `(E2) f(x) => 2*x - 5`, but E1 and E2 are technically not equivalent. With `f(x) =>` I mean: the statement is a function of x. For every x, f(x) returns true or false, depending on the contents of f(x). The equation E1 can be true for values depending on x. The truth value is dependent on the value of x, i.e. E1 is a function which returns a bool, given a value of x. E2 returns a value. I haven't defined domain and co-domain; say it is N. E2 is always true. So if I write `(E3) f(x) = 2*x - 5 and f(x) = 0`, then E3 and E1 become equivalent.

In other words, everything is a function, because equations are simply boolean functions. In (E2), f(x) could be seen as a placeholder for (E1). E2 says: there is an expression, which is an equation, which has the contents 2*x = 5. In a programming language this could all be said much more clearly. In Python, E1 would be `2*x == 5` and E3 would be `def f(x): return 2*x - 5 == 0`.

More generally, as I understand it, the approach of Russell/Wittgenstein was to reduce everything to statements. In category theory, everything is reduced to functions (or morphisms, in its own parlance).

Edit: some further notes to clarify.

Comments:

- You say that the equation $2x=5$ 'could be rewritten as' $f(x)=2x-5$; this is absolutely false. $(E1)$ is a statement, something with a truth value, while $(E2)$ is an incomplete definition $-$ specifically, the definition of the function $f$ $-$ and as such has no truth value; they are completely different species of animal. – Brian M. Scott Nov 30 '12 at 11:25
- I have rewritten the statement. However, I don't see anything wrong with writing f(x) = 2x - 5. This can be interpreted as saying: 2x - 5 depends on x. Rather, the traditional statement f(x) = 2x - 5 should be written as f(x) := 2x - 5, because it is a definition of a function and not an equation. "=" acts as an assignment operator, not as a comparison operator ("=="). This works obviously in most cases, because mathematicians can tell the difference between "=" and "==". This doesn't mean it's completely rigorous. – RParadox Nov 30 '12 at 11:36
- I saw that you'd rewritten it; my objection still holds. I don't agree that '$f(x)=2x-5$' can be interpreted as saying that $2x-5$ depends on $x$. It has two possible interpretations: (1) an incomplete definition of $f$, and (2) an assertion that some previously defined function $f$ is the same as the function $2x-5$. The latter does make it a statement, but one that is unrelated to the statement $2x=5$. – Brian M. Scott Nov 30 '12 at 11:41
- The relationship is that f(x) takes the value of 0 when the equation is true. In fact equations can be rewritten as A=B <=> A-B=0. Define f(x)=A-B and let f(x)=0. There you go. – RParadox Dec 1 '12 at 22:24
- No. Under interpretation (1) of my previous comment it's meaningless to talk about the truth or falsity of $f(x)=2x-5$. Under interpretation (2) the statement '$f(x)=2x-5$ is true' is meaningful but has nothing at all to do with the statement '$2x=5$'; rather, it's equivalent to the statement '$f(x)-(2x-5)=0$'. – Brian M. Scott Dec 1 '12 at 22:33
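A minimal, runnable sketch of the distinction the question is circling (the names `f` and `is_solution` are illustrative, not from any of the posts): an expression like `2*x - 5` defines a number-valued function, an equation like `2*x == 5` defines a Boolean-valued predicate, and (E3)'s observation is that the predicate holds exactly where the function vanishes.

```python
# E2-style object: a function x -> 2*x - 5, returning a number.
def f(x):
    return 2 * x - 5

# E1-style object: a predicate x -> (2*x == 5), returning a bool.
def is_solution(x):
    return 2 * x == 5

# E3 ties them together: the predicate holds exactly when f(x) == 0.
assert all((2 * x == 5) == (f(x) == 0) for x in range(-10, 10))

print(f(3))            # 1     -- a value, not a truth value
print(is_solution(3))  # False -- a truth value, not a value
```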
## 3 Answers

Though not widely used for some reason, I have found the following definition to be useful. Given sets A and B, f is said to be a function mapping A to B iff:

$\forall x(x\in A\rightarrow f(x)\in B)$

Built into this notation is the fact that every element of A has a unique image in B. By a simple substitution, we have:

$(f(a)=b \wedge f(a)=c)\rightarrow b=c$

As for morphisms in category theory, it's best not to think of them as functions (although they can be functions). Category theory is less like set theory (with its elements, sets and functions) and more like graph theory (with its nodes and directed arrows). The difference is that there may be any number of arrows (morphisms) between any pair of nodes (objects). Every node $x$ has an identity arrow with source and target node at $x$. Composition of arrows is defined for any pair of arrows with compatible source and target nodes. And composition of arrows is associative. See: http://dcproof.com/CategoryDefinition.htm

Comments:

- Ok, great. Are equations, or more generally relations, relations of graphs? For instance in computer science we evaluate A+B == C*D by reducing each side to a boolean expression. – RParadox Dec 4 '12 at 8:37
- You can use a 2D graph to show which values satisfy some relation between a pair of variables. Each point (dot or pixel) has an ordered pair of numbers associated with it. The graph of the equation $y=2x+3$ is a straight line. For each point on that line, the $x$ and $y$ co-ordinates are such that $y=2x+3$. The graph of the relation $y<2x+3$ is the region below the same line. For each point in that region, the co-ordinates are such that $y<2x+3$. In your equation here, you have 4 variables. It is true if and only if A+B has the same numerical value as C*D. – Dan Christensen Dec 4 '12 at 13:52

It is not a direct answer to the question, but Brian mostly answered that in the comments. There are many possible ways of using the abstract arrows of categories to express sentences. I describe one of these:

In category theory one originally considers various structures and their morphisms, rather than simply functions. Let us now fix an algebraic first-order language: operation symbols $\mu,\nu,..$ of given arities, and a set of variables $x,y,...$. We can formally build terms out of these, and then equations are the atomic formulas of the form $\tau(\vec x)=\sigma(\vec x)$ where $\tau,\sigma$ are terms containing variables within $\vec x=(x_1,..,x_n)$. Now consider the category of algebraic structures of the given type. Then, for example, to the given equation $\tau(\vec x)=\sigma(\vec x)$ we can assign the canonical homomorphism from the free algebra on $\{x_1,..,x_n\}$ to its quotient by the given equation. Or, another way: for any algebra $A$ and equation as above over the variables $x_1,..,x_n$, we can assign to them the injection $$\{(a_1,..,a_n)\in A^n\mid \tau(\vec a)=\sigma(\vec a)\} \to A^n.$$

Comments:

- Interesting. This assumes it is possible to distinguish between a first-order language and a second-order language, and that many of those loaded words, such as structures, functions, morphisms, free algebras, equation, etc., are always perfectly clear. – RParadox Nov 30 '12 at 12:05

Categories are diagrams of morphisms, a function is a morphism, and an equation $f=g$ is a diagram with the following property. Take the two morphisms $f: A \rightarrow B$, $g: C \rightarrow D$ and the corresponding morphisms $h: A \rightarrow C$ and $i: B \rightarrow D$. Then $f=g \iff$ $h$ and $i$ are the identity. This is very easy to see in vector spaces: two vectors are equal if and only if they map every point A to the same point B.
If any other vector maps C to D, and A is equal to C, then B has to be equal to D. Or, put more simply: any two vectors that are equal map to the same point from 0. Or: if A=A then A-A=0, so the inverse of every vector that is equal to A will retract A to zero (this requires the inverse though). Two triangles are equal, for instance, if when put one above the other there is only one triangle left. To compare two triangles in this way requires an affine transformation.

In other words, in category theory morphisms (functions) are more fundamental than equations, and equations can be explained by the diagram given above. In the category of sets, every relation is a subset of the cartesian product. Functions and equations are relations.

To give another example, in the category of programs every side is evaluated as a tree to true or false or to a simple expression. Both sides are the result of a function, call it eval(), which maps every statement to a simple expression. In my example 2*x=5, 2*x is an expression which gets evaluated depending on x. If x is 3, then 2*x is evaluated as 6 and can be compared to 5.
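The eval() picture in the last paragraph can be made concrete with a small sketch (the tuple encoding and the name `evaluate` are illustrative assumptions, not from the answer): each side of an equation is an expression tree that reduces to a value, and the equation compares the two results.

```python
# Expression trees as nested tuples ("op", left, right), numbers, or variable names.
def evaluate(tree, env):
    """Reduce an expression tree to a number, looking up variables in env."""
    if isinstance(tree, (int, float)):
        return tree
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    a, b = evaluate(left, env), evaluate(right, env)
    return a + b if op == "+" else a * b

lhs = ("*", 2, "x")                     # the tree for 2*x
print(evaluate(lhs, {"x": 3}) == 5)     # False: 6 != 5
print(evaluate(lhs, {"x": 2.5}) == 5)   # True
```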
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421197175979614, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/48198/effective-resistance-of-inductor/48270
# Effective resistance of inductor

In a lab experiment, we connected a simple circuit: an AC voltage source connected (in series) to a variable resistor and an inductor. We measured the current in the circuit and the voltage across the inductor. We calculated the phase difference between the voltages and used it to calculate $V_L$, and used that to calculate $R_L$, the effective resistance of the inductor. We found that $R_L(I)$ rises up to a maximum and then decreases, but we couldn't understand why - as we understand it, $R_L=2\pi fL$, so it should be constant... What did we not understand?

Comments:

- So you're adjusting the variable resistor, and the frequency of the AC source is constant? – Art Brown Jan 3 at 2:25
- Exactly. The frequency stays constant while we change the current through the resistance. – Ofir Jan 3 at 8:16

## 1 Answer

There are a couple of nonlinear magnetic material effects that might be at play here, although this answer must be described as speculation without more detail. Both effects are more pronounced if your inductor is ungapped.

1) At very low current levels (corresponding to very low levels of magnetic field H), the inductance can be lower than nominal. (The B-H characteristic of the magnetic core material has a lower slope right at the origin.) As current increases from these low levels, the calculated inductor impedance $Z_L = 2 \pi f L$ would increase and then stabilize at the nominal value.

2) As current continues to increase, eventually the inductor starts to saturate. (The B-H characteristic flattens at high fields.) Inductance then becomes a decreasing function of current, so the calculated inductor impedance would decrease.

You can see some B-H curves illustrating these effects in the Wikipedia article on "Saturation (magnetic)". Introducing a gap in the magnetic core reduces the component's inductance but stabilizes it against these effects.

Comments:

- Thanks. I'm now interested in understanding why the B-H characteristic is not linear in ferromagnetic materials. Can you recommend a good source for that? – Ofir Jan 3 at 18:49
- @Ofir: You're welcome. You could try Kittel, Introduction to Solid State Physics, but I'm not sure if it's the best reference. I'm no expert on the theory of ferromagnetism. – Art Brown Jan 4 at 3:06
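A quick numerical sketch of the answer's two effects (the specific $L(I)$ curve and all constants below are illustrative assumptions, not measurements): if inductance starts below nominal at low current and rolls off in saturation, the apparent impedance $2\pi f L(I)$ rises to a maximum and then falls, matching the observed $R_L(I)$.

```python
import numpy as np

f = 50.0  # drive frequency in Hz (assumed)

def L_of_I(I, L_nom=0.1, I_sat=1.0, I_low=0.05):
    """Toy inductance-vs-current curve: reduced at very low current
    (initial-permeability region), then rolling off past saturation."""
    low_field = I / (I + I_low)                   # rises from 0 toward 1
    saturation = 1.0 / (1.0 + (I / I_sat) ** 4)   # falls off past I_sat
    return L_nom * low_field * saturation

I = np.linspace(1e-3, 3.0, 200)
Z = 2 * np.pi * f * L_of_I(I)   # apparent inductive impedance vs current
print("peak impedance at I = %.2f A" % I[np.argmax(Z)])
```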
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231684803962708, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38056?sort=newest
## Differential Operators on General Commutative Rings

Let k be an algebraically closed field of characteristic zero, and let R be a commutative k-algebra. Then a (Grothendieck) differential operator on R is a k-linear endomorphism $\delta$ of R, with the property that there is some $n\in \mathbb{N}$ such that for any $r_0,r_1,...,r_n\in R$, the iterated commutator vanishes: $$[...[[\delta,r_0],r_1]...,r_n]=0$$ Let the smallest such $n$ be the order of $\delta$. The set of all differential operators is then a subring of $End_k(R)$, which has an ascending filtration given by the order, and with $D_0(R)=R$.

If $R=k[x_1,...,x_r]$, then $D(R)$ consists of the polynomial differential operators (in the calculus sense) in $r$ variables. More generally, if R is the ring of regular functions on a smooth affine variety, then $D(R)$ is the usual ring of differential operators generated by multiplication operators and directional derivatives. However, if $Spec(R)$ is not smooth, then $D(R)$ does not have an obvious geometric interpretation. For example, if $R=k[x]/x^n$, then all k-linear endomorphisms of R are differential operators, and so $$D(k[x]/x^n)=Mat_n(k)$$

## Idempotents

For both research reasons and curiosity, I am interested in idempotent elements in $D(R)$, for R a general commutative ring. An idempotent is an element $\delta\in D(R)$ such that $\delta^2=\delta$. Idempotents in a commutative ring $R$ correspond to projections onto disconnected components of $Spec(R)$, but $D(R)$ is not commutative.

If the base ring $R$ does have idempotents, then they will also be idempotents under the inclusion $R\subset D(R)$. However, there can be idempotents of higher order. Consider the example from before, of $R=k[x]/x^n$. Here, $D(R)=Mat_n(k)$, and there are many idempotents in $Mat_n(k)$, even though $R$ here has none. As an explicit example, take $k[x]/x^2$, and consider the endomorphism which sends 1 to 0 and x to itself. This can be realized by the differential operator $x\partial_x$ (which has a well-defined action on $k[x]/x^2$), and it squares to itself. In general, I believe that $R$ must have nilpotent elements if $D(R)$ is to have idempotents of positive order (since the symbol needs to square to zero).

My general question is: what is known about general idempotent elements in $D(R)$? Has anyone seriously looked at them? Do they correspond to something geometric? Is there a condition one can put on a subspace decomposition $V\oplus W=R$ such that the projection onto $V$ which kills $W$ is a differential operator for the algebra structure on $R$?

## 1 Answer

Here's a cute partial result, classifying idempotents of order 1. Let $\delta$ be an idempotent differential operator of order 1. Then there is a unique decomposition $R\simeq A\oplus M$, with $A$ a subring and $M$ a square-zero ideal, and an element $m\in M$, such that $$\delta = \epsilon + m - (1-2\epsilon) \pi_M$$ where $\epsilon$ is an idempotent in $R$ and $\pi_M$ is the projection onto $M$ with kernel $A$ (which is a derivation). Note that if $R$ is an integral domain, then $\epsilon$ is $1$ or $0$.

As a consequence, when $R$ is an integral domain, the decomposition of $R$ corresponding to $\delta$ and $1-\delta$ is $R=A'\oplus M$, where $A'$ is a shear translation of $A$ given by $a\rightarrow a+m$ (for some fixed $m$).
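For concreteness, the $k[x]/x^2$ example can be checked by hand (a routine verification spelled out here for convenience; it is not in the original post). Write a general element as $f = a + bx$ and let $\delta = x\partial_x$, so $\delta(a+bx)=bx$. Idempotency is immediate:
$$\delta^2(a+bx)=\delta(bx)=bx=\delta(a+bx).$$
For the order: $[\delta,c]=0$ for constants $c$, and for $r=c+dx$ (using $x^2=0$),
$$[\delta,r](a+bx)=\delta\big(ca+(cb+da)x\big)-(c+dx)\,bx=(cb+da)x-cbx=dax,$$
while a second commutator with any $r'=c'+d'x$ gives
$$[[\delta,r],r'](a+bx)=[\delta,r]\big(c'a+(c'b+d'a)x\big)-(c'+d'x)\,dax=dc'ax-dc'ax=0.$$
So $\delta$ has order exactly $1$: a non-trivial idempotent of $D(k[x]/x^2)$ that is not an idempotent of $R$ itself.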
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9263443946838379, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/3320/does-the-number-pi-have-any-significance-besides-being-the-ratio-of-a-circles-d/17397
# Does the number pi have any significance besides being the ratio of a circle's diameter to its circumference?

Pi appears a LOT in trigonometry, but only because of its 'circle-significance'. Does pi ever matter in things not concerned with circles? Is its only claim to fame the fact that it's irrational and an important ratio?

Comments:

- I'd rather this be a comment instead, so: $\pi$ turns up in the expression for the so-called "probability integral" (a.k.a. the "error function"), among other things. How circles relate to this is a bit of a long-winded explanation though. – J. M. Aug 26 '10 at 0:11
- Also, let's get one thing straight here: circles are eerily important. You will never stop running into circles in mathematics. – Qiaochu Yuan Aug 26 '10 at 0:26
- The fundamental source of $\pi$ is the circle, nothing else. It may be difficult to find it, but it is always there. – Pratik Deoghare Aug 26 '10 at 12:43

## 4 Answers

It is difficult to know if a circle is not lurking somewhere, whenever there is $\pi$, but the values of the Riemann zeta function at the positive even integers have a lot to do with powers of $\pi$: see here for the values. For instance, you can prove that the probability that two "randomly chosen" positive integers are coprime is $\frac{1}{\zeta(2)} = \frac{6}{\pi^2}$.

- You had my upvote at "it is difficult to know if a circle is not lurking somewhere..." – J. M. Aug 26 '10 at 0:22
- Maybe that has something to do with angles and the 2D lattice. – asmeurer Dec 20 '12 at 20:03

$\pi$ appears in Stirling's approximation, which is not obviously related to circles. This means that $\pi$ appears in asymptotics related to binomial coefficients, such as $$\displaystyle {2n \choose n} \approx \frac{4^n}{\sqrt{\pi n}}.$$ In other words, the probability of flipping exactly $n$ heads and $n$ tails after flipping a coin $2n$ times is about $\frac{1}{\sqrt{\pi n}}$. This asymptotic also suggests that on average you should flip between $n + \sqrt{\pi n}$ and $n - \sqrt{\pi n}$ heads.

- This is closely related to J. Mangaldan's comment about the probability integral. Somehow I think it all ties back to the fact that $e^{-x^2}$ is its own Fourier transform. – Qiaochu Yuan Aug 26 '10 at 0:31
- Yes. Yes it does. :) – J. M. Aug 26 '10 at 0:47
- @QiaochuYuan You might be interested in Knuth's "Why Pi?" lecture. He shows how this is related to circles! – Peter Tamaroff Feb 27 '12 at 5:19

Because of the formula $e^{i\pi}+1=0$, you will find $\pi$ appearing in lots of places where it's not clear there is a circle, e.g. in the normal distribution formulae.

Yes, the ratio $\pi$ of a circle's circumference to its diameter shows up in many, many places where one might not expect it! One partial explanation (similar in spirit to "circles lurk everywhere") is that the equation for a circle is a *quadratic* (e.g. $x^2+y^2 = r^2$). After nice linear functions, the next most commonly used functions are quadratic functions, and everywhere one runs into a quadratic function, a trig substitution (e.g. $x = r \cos \theta; y=r\sin \theta$) may be useful, turning the quadratic function into something involving $\pi.$ This explains the antiderivative $\int \frac{1}{1+x^2} dx$ involving $\pi$, the sum of reciprocals of squares $\sum^\infty\frac{1}{k^2}$ involving $\pi$, and the area under the Gaussian distribution involving $\pi$. And so on....
Comments:

- How does it explain the sums of reciprocals of squares involving pi? – George Lowther Jan 13 '11 at 23:02
- @Qiaochu: Most of the proofs I know apply equally well to evaluating $\sum 1/n^d$ (for d even) and even $\sum (-1)^n/(2n+1)^d$ (for d odd), which also involve $\pi$. So, the fact that the terms are squares doesn't seem particularly significant to the appearance of $\pi$. – George Lowther Jan 14 '11 at 1:41
- I'll have a look through the alternative proofs in that link though. – George Lowther Jan 14 '11 at 1:42
- I always thought of these sums involving $\pi$ for similar reasons, and not just the $d=2$ case in isolation. – George Lowther Jan 14 '11 at 1:56
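The coprimality claim in the first answer is easy to test numerically. A minimal sketch (the sampling range and trial count are arbitrary choices):

```python
import math
import random

random.seed(0)
N, trials = 10**6, 200_000

# Count pairs of uniformly sampled positive integers with gcd 1.
hits = sum(
    math.gcd(random.randrange(1, N), random.randrange(1, N)) == 1
    for _ in range(trials)
)

print(hits / trials)   # empirically close to 0.6079...
print(6 / math.pi**2)  # 0.6079271018540267
```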
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932047426700592, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/71725/restricted-three-body-problem/71728
## Restricted Three-Body Problem

The movement of a spacecraft between Earth and the Moon is an example of the infamous Three-Body Problem. It is said that a general analytical solution for the TBP is not known because of the complexity of solving the effect of three bodies which all pull on each other while moving, a total of six interactions. The mathematician Richard Arenstorf, while at NASA, solved a special case of this problem by simplifying the interactions to four, because the effect of the spacecraft's gravity upon the motion of the vastly more massive Earth and Moon is practically non-existent. Arenstorf found a stable orbit for a spacecraft orbiting between the Earth and Moon, shaped like an '8': http://en.wikipedia.org/wiki/Richard_Arenstorf

Arenstorf's technical report is here: http://hdl.handle.net/2060/19630005545

Was Arenstorf's solution purely analytical, or did he use numerical mechanisms? Is the '8' shape an optimal path, meaning the route on which the spacecraft would expend the least amount of energy? If yes, how was this requirement included in the derivation in mathematical form? If anyone has a clean derivation for this problem, that would be great, or any links to books, other papers, etc. Thanks

Comments:

- Crossposted to math.SE: math.stackexchange.com/questions/54735 – Zev Chonoles Jul 31 2011 at 15:41

## 4 Answers

Maybe this 2001 Notices of the AMS article (PDF link) by Richard Montgomery is useful: it describes a figure-8 solution to the 3-body problem as well as some other "choreographies". Some of the latter ones were found numerically by Carles Simó.

EDIT: It is probably not the same figure-8 solution mentioned in the question, since the masses of the three bodies are equal in the case of the Chenciner-Montgomery solution.

- The choreographies, of course, are for the $N$-body problem with $N$ often much larger than 3. – José Figueroa-O'Farrill Jul 31 2011 at 15:23

I can only give you a partial answer to the first question, after which I will add a comment on the TBP.

In a sense, Arenstorf's solution cannot be found by purely analytical methods, since we only have approximate observations of the positions and interactions of celestial bodies. To give an example, the "official" model for the motion of the Moon is based on the two 900-page volumes of Ch. Delaunay from the 1860's. He computed three power series representing the position of the Moon, taking into account the perturbation of major bodies and the best observations of the time. But "computing the series" is not accurate: what he really did was find all the (hundreds of) terms up to the $7^{\rm th}$ degree. With so many intervening variables, this means writing three expressions that use 126 pages. In the end, the most accurate representation available is not a true series but a polynomial expression with approximate coefficients!

It is possible that a stable solution can be found using interval arithmetic. This would amount to giving a formal proof that such an orbit exists for a set of parameters that include those that describe the positions of the Earth and the Moon. I have not looked at Arenstorf's report yet, so I do not know if this is what he did. Now to the comment.
It is technically false to say that "a general analytical solution for the TBP is not known". The full expression of the solution in power series was given by Sundman in 1912 (see Wikipedia), and the solution for the $n$-body problem was given in 1991 by Q. Wang (there is a nice report of this in Math. Intelligencer 18 (1996), 66-70). The reference is "The global solution of the n-body problem", Celestial Mechanics and Dynamical Astronomy 50 (1): 73-88. Unfortunately, these solutions are purely theoretical, as they converge too slowly for practical computation.

The problem of the 'optimal path' for going to the Moon has been studied under the topic of the "circular restricted three-body problem", in particular the "planar circular restricted three-body problem (PCR3BP)". Poincaré made major contributions to our understanding of the complexity of this class of problems (see his "New Methods of Celestial Mechanics"). The major analytical breakthroughs were made in the 60s by C. Conley and R. McGehee (see McGehee's thesis: "Some homoclinic orbits for the restricted three-body problem"), who described the geometry of the PCR3BP near the fixed points L1 and L2. The existence of homoclinic orbits is proved for certain cases of the parameter $\mu$, the ratio of the mass of the smaller of the two bodies to the total mass of the two bodies. In the 90s, the work of Marsden, Lo, Koon and Ross at JPL and Caltech ("Heteroclinic connections between periodic orbits and resonance transitions in celestial mechanics", Chaos, 2000) finally computed these homoclinic and heteroclinic intersections and showed that 'low-energy' travel between the Earth and the Moon can be understood in terms of these manifolds. For an introduction to the mathematics and computation of 'low-energy' orbits in the restricted three-body problem, I would recommend the following (free) e-book: http://www2.esm.vt.edu/~sdross/books/space_book.html

2-body problems also exist which have no specific solution, in the sense that there is a range of solutions for a given physical condition. This means solvability is not based on the number of bodies but on the state and representation of space. The Indian Journal of Science and Technology published a physical proof called "Binary Precession Solutions based on Synchronized Field Couplings": http://www.indjst.org/index.php/indjst/article/view/30008/25962 In this research, a generalized wave function with classical characteristics was isolated within the motion of binary stars. The wave function provided the first tool for cracking the complex motion of DI Herculis and other binary stars that had several measured precession solutions. http://xxx.lanl.gov/pdf/1111.3328v2.pdf In this research, published about a year after the Indian Journal of Science and Technology publication, mathematicians from Imperial College London produced a proof for the physical existence of wave functions. The research was published in Nature Magazine.
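None of the answers spell out how to reproduce the '8'-shaped orbit numerically, so here is a minimal sketch. The rotating-frame equations are the standard planar CR3BP ones; the mass ratio, period, and initial state are the classic Arenstorf test values quoted in the ODE literature (e.g. Hairer, Nørsett and Wanner), so treat those constants as quoted values rather than something derived here.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.012277471          # Moon/(Earth+Moon) mass ratio (quoted value)
nu = 1.0 - mu

def cr3bp(t, s):
    """Planar circular restricted three-body problem, rotating frame."""
    x, y, vx, vy = s
    d1 = ((x + mu)**2 + y**2) ** 1.5   # (distance to Earth)^3
    d2 = ((x - nu)**2 + y**2) ** 1.5   # (distance to Moon)^3
    ax = x + 2*vy - nu*(x + mu)/d1 - mu*(x - nu)/d2
    ay = y - 2*vx - nu*y/d1 - mu*y/d2
    return [vx, vy, ax, ay]

# Classic Arenstorf test orbit: period T and initial state from the literature.
T = 17.0652165601579625588917206249
s0 = [0.994, 0.0, 0.0, -2.00158510637908252240537862224]

sol = solve_ivp(cr3bp, (0.0, T), s0, rtol=1e-10, atol=1e-10)
print("return error:", np.linalg.norm(sol.y[:, -1] - s0))  # small => closed orbit
```

Plotting `sol.y[0]` against `sol.y[1]` traces the figure-8 shape in the rotating Earth-Moon frame.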
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336534142494202, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/phase-transition+statistical-mechanics
# Tagged Questions

### Lattice model completely constrained by boundary data
0 answers, 26 views. I am dealing with a lattice model that has the peculiar property that if I specify all the spins on the boundary, by local conservation laws, the whole lattice configuration (throughout the whole ...

### Lambda transition data points of $\require{mhchem}\ce{^4He}$
0 answers, 39 views. I'm looking to get some data on the lambda transition of $\ce{^4He}$. I need the data points of the specific heat vs. temperature graph, if that makes sense.

### Spin Glass Transitions in Random Bond Ising Model (RBIM)
0 answers, 53 views. In brief, is there a list of spin glass transition properties for the RBIM on different lattices? Are there any known results about the relationships between these probabilities for a graph and its ...

### Any example of lower symmetry in high temperature phase than the low temperature phase?
2 answers, 89 views. All the phase transition cases I have come across so far have this property: the lower temperature phase has lower symmetry than the higher temperature one. But it is nowhere explicitly said that lower ...

### Can a first order phase transition have an order parameter?
2 answers, 276 views. An order parameter is used to describe second order phase transitions. It seems that in some papers it is used in first order phase transitions. Can a first order phase transition have an order ...

### Reasons for violation of universality in statistical mechanics
1 answer, 201 views. Universality in statistical mechanics is nicely explained by renormalization group theory. However, a fair amount of numerical and theoretical studies show that it can be violated in ...

### Parameter determining argon phase
1 answer, 83 views. Currently I am working on a molecular simulation to determine phases of an argon NPT ensemble using the Lennard-Jones potential. Mainly I use the radial distribution function to determine solid, liquid, or ...

### What is a bulk phase transition?
2 answers, 282 views. I have been able to google "bulk phase transition" and get plenty of results that verify that something called a bulk phase transition exists; however, I cannot seem to find a precise definition of ...

### What is the simplest system that has both discontinuous and continuous phase transitions?
4 answers, 120 views. I am looking for the simplest system that has both a discontinuous phase transition and a continuous phase transition between the same phases (you can change one parameter). Discontinuous transition: first ...

### Latent heat vs temperature of phase transitions?
2 answers, 384 views. Is the latent heat associated with phase transitions correlated with the temperature at which they occur? The latent heat is related to the difference in energy between the two phases, and the ...

### Mean-field theory in 1D Ising model
1 answer, 296 views. A mean-field theory approach to the Ising model gives a critical temperature $k_B T_C = q J$, where $q$ is the number of nearest neighbours and $J$ is the interaction in the Ising Hamiltonian. Setting ...

### What kind of phases do nanoparticles have (gas-solid-liquid)?
1 answer, 114 views. If a phase transition requires a number of particles that is in the TD limit, can nanoparticles (~10 atoms) have phase transitions? What kinds of phases and transitions do nanoparticles have?

### How many particles are needed to observe a phase transition?
3 answers, 238 views. This is a question that was raised when we were discussing "what is melting, actually". How many particles do you need to form a liquid or solid? I have some remarks to point out what I want to know. Q: ...

### Renormalization group in d=3
1 answer, 110 views. Do we really understand why the renormalization group in $d=2+\varepsilon$ and $d=4-\varepsilon$, taking $\varepsilon=1$, gives "good" values for critical exponents in $d=3$? Are they exceptions? Is it ...

### What are conditions for the existence of a critical value (for a phase transition)?
1 answer, 268 views. Can there only be a critical temperature if there is some natural unit for an observable in the model, i.e. if there is a natural scale for something? Otherwise I don't see how for a system there ...

### Phase Transition in the Ising Model with Non-Uniform Magnetic Field
1 answer, 72 views. Consider the ferromagnetic Ising model ($J>0$) on the lattice $\mathbb{Z}^2$ with the Hamiltonian with boundary condition $\omega\in\{-1,1\}$ formally given by ...

### Where can I find a good classification for phase transitions?
4 answers, 425 views. I'm having a hard time finding a good (and modern) classification scheme for phase transitions and related universality classes. Can someone recommend a paper/book/site? Detailed mathematical aspects ...

### What happens for the spins around the phase transition
2 answers, 178 views. Suppose we now consider a lattice of spins, say the Ising model, and the phase transition at the critical temperature $T_c$. There are a few scaling laws describing the regime around the critical temperature ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9069532155990601, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-math-topics/214902-real-analysis-sequences-limits.html
# Thread:

1. ## Real analysis; sequences and limits

I think I need to do this by contradiction, but I'm stuck.

Prove: if the sequence $(a_n)$ has the limit $A$, the sequence $(b_n)$ has the limit $B$, and $a_n < b_n$ for all $n$, show $A \leq B$.

2. ## Re: Real analysis; sequences and limits

Nah, no need for contradiction. If $a_n < b_n$, then $b_n = a_n + c_n$, where $c_n > 0$ for all $n$; since $A$ and $B$ are finite, $(c_n)$ must go to some non-negative value $C$ (non-negative rather than positive: a sequence of positive terms can still have limit $0$). Then

$$\lim_{n \to \infty} b_n = \lim_{n \to \infty} \left( a_n + c_n \right) = \lim_{n \to \infty} a_n + \lim_{n \to \infty} c_n = A + C$$

But we also know this limit is $B$, therefore $B = A + C$, and thus $B \geq A$.

3. ## Re: Real analysis; sequences and limits

How do you know $B$ is finite? Also, how do you know the $c_n$ have a limit $C$? In class, we have done this type of proof using the definition of convergence. I can follow this, mostly, but I don't think I'm allowed to do the proof this way. Thanks,

4. ## Re: Real analysis; sequences and limits

If a sequence has a limiting value of $B$, then $B$ is a number and is clearly finite. For all the $b$ terms to be greater than the $a$ terms is the same as adding some value to the $a$ terms. Since the $a$ terms go to some value $A$, in order for the $b$ terms to also go to some value, the remaining $c$ terms also have to go to some value.

5. ## Re: Real analysis; sequences and limits

Hi, in direct answer to your question about proof by contradiction, I offer the following: [attachment not preserved in this copy; a plausible reconstruction appears after the thread]

6. ## Re: Real analysis; sequences and limits

This is helpful, but did you let epsilon $= -L/2$? (Which is positive, because $L$ is less than zero.) But where did you get $-L/2$? I'm guessing it's from some scratch work that should be obvious to me, but isn't. Also, we haven't proved or used that the limit of the difference of two convergent sequences is the difference of the limits. I know it's true, but since we haven't proved it I can't use it.

7. ## Re: Real analysis; sequences and limits

> Originally Posted by amyw
> This is helpful, but did you let epsilon = -L/2? ... since we haven't proved it I can't use it.

This is a well-known problem used to teach a particular way of proof.

Suppose that $b<a$; then $b<\frac{a+b}{2}<a$. So let $\varepsilon = \frac{a - b}{2} > 0$. Now note that $a - \varepsilon = \frac{b + a}{2}$ and $b + \varepsilon = \frac{b + a}{2}$. That means $\left( b - \varepsilon, b + \varepsilon \right) \cap \left( a - \varepsilon, a + \varepsilon \right) = \emptyset$. But because $(b_n)\to b$, almost all of the terms are in $\left( b - \varepsilon, b + \varepsilon \right)$. AND because $(a_n)\to a$, almost all of the terms are in $\left( a - \varepsilon, a + \varepsilon \right)$. Surely you can see a contradiction there?

8. ## Re: Real analysis; sequences and limits

Hi Amy, I hope the following is yet more "insight" into your problem: [attachment not preserved in this copy]

9. ## Re: Real analysis; sequences and limits

Thanks, this makes perfect sense to me now that I've had some sleep.
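The two image attachments above (posts 5 and 8) did not survive extraction. Judging from the follow-up discussion, in particular the question about $\varepsilon = -L/2$, the contradiction argument they contained was presumably along the following lines; this is a reconstruction consistent with the thread, not the original attachment.

Suppose, for contradiction, that $A > B$, and set $L = B - A < 0$ and $\varepsilon = -L/2 = \frac{A-B}{2} > 0$. By the definition of convergence, choose $N_1$ with $|a_n - A| < \varepsilon$ for all $n \ge N_1$, and $N_2$ with $|b_n - B| < \varepsilon$ for all $n \ge N_2$. Then for $n \ge \max(N_1, N_2)$,
$$b_n < B + \varepsilon = \frac{A+B}{2} = A - \varepsilon < a_n,$$
contradicting $a_n < b_n$ for all $n$. Hence $A \le B$. Note that this uses only the definition of a limit, not the theorem about limits of differences.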
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9643737077713013, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/hyperbolic-geometry?sort=votes&pagesize=50
# Tagged Questions

Questions on hyperbolic geometry, the geometry on manifolds with negative curvature.

### Hyperbolic critters studying Euclidean geometry (4 answers, 1k views)

You've spent your whole life in the hyperbolic plane. It's second nature to you that the area of a triangle depends only on its angles, and it seems absurd to suggest that it could ever be otherwise. ...

### What are the interesting applications of hyperbolic geometry? (7 answers, 1k views)

I am aware that, historically, hyperbolic geometry was useful in showing that there can be consistent geometries that satisfy the first 4 axioms of Euclid's elements but not the fifth, the infamous ...

### How to create mazes on the hyperbolic plane? (1 answer, 385 views)

I'm interested in building maze-like structures on the [5, 4] tiling of the hyperbolic plane, where by maze-like I mean something akin to a spanning tree of the underlying lattice: a subgraph of the ...

### What is the connection of the sequence 3, 4, 5/3, 2/3, 1 with deep topics? (3 answers, 444 views)

Quote from Don Zagier (Mathematicians: An Outer View of the Inner World): "I like explicit, hands-on formulas. To me they have a beauty of their own. They can be deep or not. As an example, ..."

### Teichmüller spaces via representations (1 answer, 240 views)

I don't have much expertise in this area but I am confused by a remark I overheard regarding Teichmüller spaces. I was always under the impression that for a surface $S$ (say genus $\geq 2$) ...

### Embedding the Infinite Binary Tree in Regular Tilings (2 answers, 165 views)

Consider the regular tiling $(m,n)$ in which $m$ $n$-gons meet at each vertex. Most of the time these tilings have to "live" in the hyperbolic plane. The edges of its polygons define a graph where two ...

### Area of a triangle $\propto\pi-\alpha-\beta-\gamma$ (4 answers, 914 views)

A hyperbolic geometry is a non-Euclidean geometry with constant negative curvature. It has the property that given a line and a point, many lines can be drawn containing the point that never meet the ...

### Is it possible to deduce a model for hyperbolic geometry from a synthetic set of axioms a la Euclid/Hilbert/Tarski? (1 answer, 198 views)

Motivation: I learned from Emil Artin's book Geometric Algebra that the standard incidence axioms of affine geometry (two points determine a unique line, parallel postulate, no three collinear points ...

### Simulation of Brownian Motion (3 answers, 391 views)

If I want to simulate Brownian motion in Euclidean space I can simulate it by a point that moves a distance $\epsilon$ in an arbitrary direction, then randomly chooses a new direction and ...

### how to generate tesselation cells using the Poincare disk model? (3 answers, 551 views)

I'm a computer programmer, and while I like math, this is an area where my understanding of math falls short of what I need in order to apply it successfully. I've been looking at M.C. Escher's ...

### Is an equilateral triangle the same as an equiangular triangle, in any geometry? (2 answers, 636 views)

I have heard of both equilateral triangles and equiangular triangles. (For example, this sporcle quiz lists both.) Are these always equivalent, regardless of geometry? I know they are the same in ...

### How to identify $SL(2,\mathbb{C})/SU(2)$ and the hyperbolic 3-space? (1 answer, 129 views)

I know that every coset representative $g\in SL(2,\mathbb{C})$ for $SL(2,\mathbb{C})/SU(2)$ can be chosen of the form $g = \begin{pmatrix} \sqrt{t} & z/\sqrt{t} \\ 0 & \cdots \end{pmatrix}$ ...

### When does there exist an isometry that switches two subspaces? (1 answer, 174 views)

Let $V$ be a real vector space of finite dimension and let $\langle \cdot, \cdot \rangle$ be a non-degenerate symmetric bilinear form on $V$. Let $U, W \subseteq V$ be linear subspaces such that ...

### Parabolic elements correspond to punctures (1 answer, 144 views)

In Mapping Class Group by Farb and Margalit, page 22, they say: Let $S$ be a hyperbolic surface. If a non-trivial element of $\pi_1(S)$ is represented by a loop (up to homotopy) around a puncture, ...

### Conformal automorphism of $H^n$ (1 answer, 87 views)

I was looking for the characterization (or a complete list) of the conformal automorphisms of the upper half space $H^n$ in $R^n$. I know that when $n=2$, it is $PSL(2,R)$ and when $n=3$, it is ...

### Expression of the Hyperbolic Distance in the Upper Half Plane (3 answers, 281 views)

While looking for an expression of the hyperbolic distance in the upper half plane $\mathbb{H}=\{z=x+iy \in \mathbb{C} \mid y>0\}$, I came across two different expressions. Both of them in Wikipedia. ...

### Shortest path on hyperboloid (1 answer, 352 views)

On the sphere $S^2$, the shortest path between two points is the great circle path. How about $H^2$, the hyperboloid $x^2+y^2-z^2=-1, z\ge 1$, with the Euclidean distance? Is there a formula for the ...

### Reflections generating isometry group (2 answers, 112 views)

I was reading an article and it states that every isometry of the upper half plane model of the hyperbolic plane is a composition of reflections in hyperbolic lines, but does not seem to explain why ...

### Interpretation of Hyperbolic Metric and Möbius Transforms (1 answer, 337 views)

I was wondering if someone could explain the interpretation of the following results. In hyperbolic geometry, we say that lengths are invariant under the action of Mob($\mathbb{H}$) if given any ...

### Two hyperbolic surfaces corresponding to conjugate Fuchsian groups are isometric (2 answers, 91 views)

I have a basic question: a) Suppose $\gamma$ and $\gamma'$ are conjugate Fuchsian groups acting freely and properly discontinuously on the upper half-plane $H$ to produce two Riemann surfaces ...

### Embedding manifolds of constant curvature in manifolds of other curvatures (1 answer, 53 views)

I know that there is no complete surface embedded in $\mathbb{R}^3$ of constant curvature $-k$ for any $k$. But you can clearly embed the hyperbolic plane (curvature $-1$) into hyperbolic 3-space ...

### Generalized Laws of Cosines and Sines (1 answer, 409 views)

I wonder about the "laws of sines and cosines" in the two cases below and how to derive them (or any related sources): (i) For geodesic triangles on a sphere of radius $R>0$ (so constant curvature ...

### Hyperbolic diameter of Amsler's surface (0 answers, 42 views)

I've recently learned about Amsler's surface, a surface of constant negative Gaussian curvature. If I understand things correctly, there is a whole family of such surfaces, differing in the angle of ...

### Completeness of Upper Half Plane (4 answers, 201 views)

I am trying to prove that the upper half plane, defined as $\mathbb{H} = \{z \in \mathbb{C} : \Im(z)>0 \}$, is complete with respect to the hyperbolic metric. First I note that if I have some ...

### Backslash notation: $\Gamma {\setminus} \mathbb{H}^n$ (1 answer, 97 views)

I encountered this notation in a paper by Carron: When $X = \Gamma{\setminus}\mathbb{H}^n$ is a real hyperbolic manifold, ... $\Gamma$ is a discrete torsion free subgroup of SO$(n,1)$. My ...

### Showing the function $f(x,y)$ is one-to-one (2 answers, 67 views)

Yesterday, while teaching geometry, I was faced with a problem saying that the function below is a distance function: $$d(P,Q)=\Big|\ln\frac{\frac{x_1-c+r}{y_1}}{\frac{x_2-c+r}{y_2}}\Big|$$ where in ...

### Fuchsian groups and surfaces (1 answer, 92 views)

It's a fact that if two Fuchsian groups are conjugate, the corresponding surfaces are isometric. Is the converse true? Take 2 isometric Riemann surfaces $S$ and $S'$ (which are covered by the upper ...

### Hyperbolic triangle and two points in Poincare disk (1 answer, 138 views)

Given a hyperbolic triangle $T$ and two points $p$ and $q$ in the Poincare disk. Note that $p$ and $q$ are outside the triangle. If $p$ has shorter distances to the three vertices of $T$ than $q$, can we ...

### Difference between a hyperbolic line and a geodesic (1 answer, 172 views)

The setting for hyperbolic space in this question will be the upper half plane. Now I know that to measure the distance between two points $p$ and $q$ in the upper half plane, we take $\inf$ ...

### How to analyze triangles in Lobachevsky geometry? (2 answers, 220 views)

I got an assignment to prove certain things about right triangles in Lobachevsky geometry, but so far I don't know where to start. What model is the best for studying these objects? What is the ...

### Hyperbolic geometry. 3 dimensions. What is not well understood? (1 answer, 379 views)

According to Mathworld, hyperbolic geometry is well understood in 2 dimensions but not in 3 dimensions. http://mathworld.wolfram.com/HyperbolicGeometry.html What isn't well understood about ...

### The law of sines in hyperbolic geometry (1 answer, 270 views)

What is the geometrical meaning of the constant $k$ in the law of sines, $\frac{\sin A}{\sinh a} = \frac{\sin B}{\sinh b} = \frac{\sin C}{\sinh c}=k$, in hyperbolic geometry? I know the meaning of the ...

### Study of the Laplacian on the Hyperbolic plane (1 answer, 31 views)

What's a good reference for the simplest case? I'm interested in the spectral theory of the Laplace-Beltrami operator on the upper half plane (domain, self-adjoint extension, etc.). I only need this ...

### Structure of $x^2 + xy + y^2 = z^2$ integer quadratic form (1 answer, 106 views)

The Pythagorean triples $x^2 + y^2 = z^2$ can be solved in integers using rational parameterization of solutions to $x^2 + y^2 = 1$. It goes through $(1,0)$; then consider the line $y = -k (x - 1)$ ...

### Triangle inequality for hyperbolic distance (1 answer, 147 views)

A quick way to define the hyperbolic metric in the Poincare disc is via the cross ratio: given points $a,b$ in the disc, let $p,q$ be the endpoints of the hyperbolic line (half-circle/line perpendicular to ...

### centralizers in hyperbolic manifolds are cyclic (1 answer, 78 views)

I am having trouble seeing why this statement is true: "If $S$ admits a hyperbolic metric, then the centralizer of any non-trivial element of $\pi(S)$ is cyclic. In particular, $\pi(S)$ has trivial ...

### Can different uniformizations of Riemann surfaces be related somehow (1 answer, 47 views)

Let $X$ be a hyperbolic compact connected Riemann surface. Let $U\subset X$ be an open subset. Assume that $U\neq X$. We can uniformize $X$ by $\mathbf{H}$ directly to obtain it as a quotient of ...

### explicit isometry between metric spaces (1 answer, 112 views)

Let $X=\mathbb{H}^2\times\left[0,1\right]$ (just $\left\{ (x,y,z)\big\vert y>0\right\}$ as a set) and consider the following two metrics: $$ds_1=\frac{dx+dy}{y}+dz$$ and ...

### Circle preserving homeomorphisms in the closure of $\mathbb{C}$ and Möbius Transformations (1 answer, 229 views)

I am presently a learner of hyperbolic geometry and am using J. W. Anderson's book Hyperbolic Geometry. Now the author presents a sketch proof of why every circle preserving homeomorphism in ...

### Characterization of linearity in terms of metric (1 answer, 155 views)

At least in Euclidean geometry and the upper half plane model of hyperbolic geometry, the statements '$y$ lies on the line segment determined by $x$ and $z$' and '$d(x,y)+d(y,z)=d(x,z)$' are ...

### Help me to remember $\cosh^{2}(y) -\sinh^{2}(y)=1$, some easy verification and deduction? (6 answers, 199 views)

I can faintly visualize some way of deducing this formula with exponential functions but forgot it. How do you remember it? Suppose you just forget whether it is plus-or-minus there, how do you find ...

### Geodesic Uniqueness in the Hyperbolic Plane (2 answers, 401 views)

I am studying hyperbolic geometry. At this point, I have proved that semicircles and straight lines orthogonal to the real axis are geodesics in the hyperbolic plane. But how do I prove that this ...

### Models of hyperbolic geometry (1 answer, 246 views)

Wikipedia states the following: [The Poincaré half-plane model of hyperbolic geometry] is named after Henri Poincaré, but originated with Eugenio Beltrami, who used it, along with the Klein model ...

### Wikipedia article on Hyperbolic geometry (1 answer, 168 views)

I was reading the Wikipedia article on hyperbolic geometry and have come across the line "geodesic paths are described by intersections with planes through the origin". Why is this necessarily ...

### Does anyone know a good hyperbolic geometry software program? (2 answers, 258 views)

We are currently using this program called NonEuclid but it is a little frustrating to use sometimes and I was wondering if anyone knows another program for hyperbolic geometry.

### Measure on a quotient (2 answers, 45 views)

Can anyone explain the following to me: let $M$ be a hyperbolic manifold and $\Gamma = \Pi_1(M) \subset Iso(\mathbb{H}^n)$. How does the Haar measure on $Iso(\mathbb{H}^n)$ induce a measure on ...

### What is the proof that rectangles do not exist in hyperbolic geometry? (2 answers, 230 views)

I am in need of help figuring this out: if the only straight lines in hyperbolic geometry are those that pass through the center, then isn't there a right angle? (horizontal and vertical) Which ...

### Real tree and hyperbolicity (2 answers, 65 views)

I seek a proof of the following result due to Tits: Theorem: A path-connected $0$-hyperbolic metric space is a real tree. Do you know any proof or reference?

### topic for presenting in hyperbolic geometry (1 answer, 69 views)

For my course work, I have to give a 20-30 min presentation in hyperbolic geometry. Can anyone suggest some topic (or any interesting theorem) in this area? I want to present something ...

### Construction of equilateral triangle in Poincare disc model (2 answers, 223 views)

Points $A$ and $B$ are given in the Poincare disc model. Construct equilateral triangle $ABC$. Any kind of help is welcome.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 80, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9098605513572693, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/3967/exotic-option-pricing
# Exotic option pricing

I'm trying to price an option with payoff $\max\{a\cdot S_t - K,0\}$ where $a$ is a known constant. Ideally I'm looking for a closed form, continuous-time solution. Where should I begin?

This is not exotic at all. Why don't you just bring $a$ out of the max? – Alexey Kalmykov Aug 19 '12 at 8:31

because I was too tired to realize I could, I suppose. thanks, now it looks quite trivial, actually. if you post it as an answer, I'll mark it accepted! – emaster70 Aug 19 '12 at 13:42

## 1 Answer

The payoff $\max\{a\cdot S_t - K,0\}$ can be re-written as $a\cdot\max\{S_t - K/a,0\}$ (for a positive constant $a$, which is what lets it be pulled out of the max). Therefore it can be priced as a regular call option with the strike $K/a$.
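Concretely, assuming Black-Scholes dynamics (the thread itself fixes no model, so the lognormal assumption, the flat rate, and the function names below are illustrative), the reduction can be implemented directly as a minimal sketch:

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes price of a vanilla European call paying max{S_T - K, 0}."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def scaled_call(a, S0, K, r, sigma, T):
    """Price of the payoff max{a*S_T - K, 0}, using the identity
    max{a*S - K, 0} = a * max{S - K/a, 0} for a > 0."""
    assert a > 0
    return a * bs_call(S0, K / a, r, sigma, T)

# Example: with a = 2 the option is worth twice a call struck at K/2.
print(scaled_call(a=2.0, S0=100.0, K=190.0, r=0.01, sigma=0.2, T=1.0))
```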
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9653115272521973, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/3515/is-using-a-predictable-iv-with-cfb-mode-safe-or-not?answertab=active
# Is using a predictable IV with CFB mode safe or not?

While writing this answer, I noted that NIST SP 800-38A says that (emphasis mine):

"For the CBC and CFB modes, the IVs must be unpredictable. In particular, for any given plaintext, it must not be possible to predict the IV that will be associated to the plaintext in advance of the generation of the IV."

For CBC mode, using a predictable IV allows a well known chosen plaintext attack exploiting the way in which the IV is combined with the first block. However, I don't see how a similar attack could apply to CFB mode; indeed, as I observed in an earlier answer, the CFB, OFB and CTR modes are equivalent for the first block. Indeed, in this answer, Thomas Pornin writes that (emphasis again mine):

"CFB and OFB require only uniqueness: for a given key, each IV value shall be used at most once. There is no need for unpredictability or uniformness because the IV is first encrypted "as is" (before any operation with the plaintext) and encryption of a sequence of values with a good block cipher, using a key that the attacker does not know, is a good PRNG."

So, is Thomas wrong, or is NIST mistaken or merely excessively cautious? And if there is an attack enabled by using predictable IVs with CFB mode, how does it work?

## 2 Answers

Thomas is correct; there's no attack on CFB mode if you can predict the IV; NIST is just being cautious.

With CBC, the value of the first encrypted block is $C_0 = E_k( IV \oplus P_0)$, where $IV$ is the IV used for that packet, $P_0$ is the value of the first plaintext block, and $E_k$ is the evaluation of the block cipher. If an attacker can predict the value of the $IV$ in advance, and can influence the value $P_0$, then he can get the value $E_k(Q)$, for any value $Q$. Here is how: he learns the value of $IV$, and then injects a message whose first plaintext value is $P_0 = IV \oplus Q$. The encryptor will generate an encrypted message whose first block is $C_0 = E_k( IV \oplus P_0 ) = E_k( Q )$. Using this encryption oracle, the attacker can verify guesses on the possible decryption of previous messages.

In contrast, with CFB mode, the first encrypted block is $C_0 = E_k(IV) \oplus P_0$. While this looks similar, there is no similar opportunity for an attacker to select the block that is presented to the block cipher. If the attacker knows the IV in advance, all he knows is the value of the first block that will be presented to the block cipher; he cannot influence it by selecting any specific value for the first plaintext block.

Now, if the attacker can control the value of the IV, then yes, he can use that directly to create an encryption oracle; we generally don't allow the attacker to do that. In addition, if we use the same IV repeatedly, then there is also a weakness (beyond the rather obvious leaking of the first block); the second ciphertext block is $C_1 = E_k( E_k(IV) \oplus P_0 ) \oplus P_1$; if $IV$ is constant, so is $E_k(IV)$, and so the attacker can select messages with $P_0 = E_k(IV) \oplus Q$, also creating an encryption oracle.

The bottom line: predictable IVs are safe in CFB mode, as long as they don't repeat, and you don't allow the attacker to pick them.

I found a little more info on Google, so let me provide a partial answer to my own question. In particular, I found a post by David Wagner to sci.crypt in 2004, titled "IND-CPA for CFB mode", which in turn led me to a paper titled "Practical symmetric on-line encryption", published in FSE 2003 by Fouque, Martinet and Poupard.
In this paper, the authors prove that CFB mode (using full-block feedback) is IND-CPA secure as long as no input block to the cipher is reused (and as long as the underlying block cipher is secure), and that this holds with high probability as long as the IVs are chosen at random, and as long as the total number of $n$-bit blocks encrypted with a given key is much less than $2^{n/2}$. (In fact, with these assumptions, they prove indistinguishability under a stronger attack model which they call concurrent blockwise adaptive chosen plaintext attack.)

However, while Fouque et al. indeed assume random IVs, it does seem to me that their proof works just fine even with deterministic IVs, provided that the adversary does not get to choose the IVs and that the method of choosing the IVs does not specifically encourage collisions. (For example, using a counter as the IV should be fine, whereas choosing the next IV by encrypting the previous IV — with the same key as used for message encryption — and optionally XORing it with a predictable constant would definitely be bad.)

However, a bigger issue with the security proof by Fouque et al. is that it only provides a meaningful security margin for CFB mode with full-block feedback (and a sufficiently large cipher block size). Naïvely extending their proof to CFB-$k$ mode (i.e. the variant of CFB mode using $k$-bit truncated output with a shift register) only shows this mode to be secure when the number of blocks encrypted with the same key is much less than $2^{k/2}$. (A more careful analysis could perhaps improve this bound, but that looks like a non-trivial task.) Obviously, for e.g. $k = 1$ or even $k = 8$, this proves absolutely nothing at all!

Indeed, as David Wagner notes in his post, there are weak IVs for CFB-$k$ with small $k$, consisting of bit patterns that repeat with period $k$ or small multiples thereof. (In particular, if the IV and the plaintext consist of all zero bits, the ciphertext will also be all zeros with probability $1/2^k$.) While these weak IVs are rare, and thus unlikely to be chosen at random, some of them — including, in particular, the all-zero IV — may be likely to occur more often than by chance in a naïve counter sequence.

I'm not aware of any practical security proofs for CFB-$k$ mode for small $k$, and David Wagner also writes so in his post. However, it seems clear that, if such encryption modes are secure at all, they can only be so if the IVs are chosen at random — possibly using the NIST-recommended method of encrypting a counter — or at least in some other manner that, with high probability, avoids the weak IVs described by Wagner.

However, for CFB-$k$ mode with large $k$ (say, $k \ge 64$), the Fouque et al. proof does demonstrate security comparable to other classical block cipher modes, provided that the IVs are chosen in a manner that does not allow an attacker to easily generate collisions. Random IVs will certainly work for that, but, as far as I can tell, so should e.g. using a counter as the IV. If in doubt, though, follow the NIST recommendation and encrypt the counter just to be sure.

Addendum: I found a paper by Mark Wooding, "New proofs for old modes", IACR Cryptology ePrint Archive (2008), which gives improved security proofs for a number of classical block cipher operating modes. In particular, he writes: "We show that full-width CFB is secure if the IV is any ‘generalized counter’, and that both full-width and truncated $t$-bit CFB are secure if the IV is an encrypted counter.
We also show that, unlike CBC mode, it is safe to ‘carry over’ the final shift-register value from the previous message as the IV for the next message." (By "generalized counter", if I read the paper correctly, Wooding simply means any fixed enumeration of the IV space, possibly known to the attacker.)
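To make the first answer's formula $C_0 = E_k(IV) \oplus P_0$ concrete, here is a minimal sketch using the Python `cryptography` package (the key, IV, and message values are arbitrary placeholders). It checks that the first full-block CFB ciphertext block is the raw block-cipher encryption of the IV XORed with the first plaintext block, so an attacker who predicts the IV knows which block will be fed to the cipher, but has no way to choose it:

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(range(32))          # placeholder 256-bit AES key
iv = bytes(16)                  # a perfectly predictable IV: all zeros
p0 = b"attack at dawn!!"        # one 16-byte plaintext block

# Full-block CFB encryption of the first block.
enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
c0 = enc.update(p0)

# Manual computation: C0 = E_k(IV) XOR P0, with E_k the raw block cipher.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
keystream = ecb.update(iv)
manual = bytes(k ^ p for k, p in zip(keystream, p0))

assert c0 == manual  # the first CFB block really is E_k(IV) XOR P0
print(c0.hex())
```

Contrast this with the CBC oracle in the answer, which works precisely because there the attacker controls $IV \oplus P_0$, the actual input to $E_k$.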
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470361471176147, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/30607/relation-between-diracs-generalized-hamiltonian-dynamics-method-and-path-integr
Relation between Dirac's generalized Hamiltonian dynamics method and path integral method to deal with constraints

What is the relation between path integral methods for dealing with constraints (constrained Hamiltonian dynamics involving non-singular Lagrangian) and Dirac's method of dealing with such systems (which involves the Dirac bracket)? And what is the advantage/relative significance of each method? (Thanks to Ron Maimon for phrasing suggestion)

1 Answer

Well, when canonically quantizing a system with constraints, you have two methods:

1. Dirac's approach "Quantize, then Constrain";
2. Reduced Phase Space approach "Constrain, then Quantize".

Although these two approaches have analogs with path integral quantization, the path integral approach sweeps a lot of problems under the rug when you pick a particular gauge (a la Faddeev-Popov quantization). That's why the path integral approach is usually taught in quantum field theory courses: it's a straightforward recipe with few subtleties. The canonical approach requires a bit more work.

I am aware, in quantum gravity at least, that you can recover the Dirac quantized constraints by taking the functional derivative of the path integral with respect to the Lagrange multipliers and demanding it vanish. So I suspect there is a way to recover the Dirac quantized version from the path integral approach. This is unique to General Relativity, due to the inclusion of time. A formal derivation may be found in Hartle and Hawking's "Wave Function of the Universe" (Physical Review D 28 12 (1983) pp. 2960–2975 eprint). Halliwell and Hartle's "Wave functions constructed from an invariant sum over histories satisfy constraints" (Phys. Rev. D 43 (1991) pp. 1170–1194 eprint) generalizes this result to parametric systems. Barvinsky shows in "Solution of quantum Dirac constraints via path integral" (arXiv:hep-th/9711164) that the path integral directly solves the quantum constraints, for a generic first-class constrained system, at the level of one-loop ("semiclassical") approximations. Klauder's "Path integrals, and classical and quantum constraints" eprint is a pedagogical review of quantizing constrained systems.

You might want to look at Henneaux and Teitelboim's Quantization of Gauge Systems, specifically Chapter 16.

Addendum: The path integral using the Faddeev-Popov method is completely equivalent to the following: suppose we want to change coordinates from just the position $q$ to the gauge orbit $\Lambda$ plus the physically meaningful position $\bar{q}$. Then the functional integral changes as

$$\begin{align} \int \exp(I[q])\mathcal{D}q &= \int \exp(I[\bar{q}])\Delta_{fp}\,\mathcal{D}\bar{q}\mathcal{D}\Lambda\\ &=\int\mathcal{D}\Lambda\int\exp(I[\bar{q}])\Delta_{fp}\,\mathcal{D}\bar{q} \end{align}$$

The factor $\int\mathcal{D}\Lambda$ is infinite but trivial (it's the volume of the gauge orbit). The $\Delta_{fp}$ is the Faddeev-Popov determinant. This approach is discussed in detail in Emil Mottola's "Functional Integration Over Geometries" (arXiv:hep-th/9502109).

Thank you. So then, is it redundant to study Dirac's method, provided that I'm also going to do path integrals later? Actually I'm studying the former method as a sort of prerequisite for BRST symmetry. Is even BRST superseded by some other more powerful formalism (just as the Dirac constraint method is by the path integral)? I'd appreciate an answer to these since I want to know whether it would be really fruitful to study what I'm studying. Thanks again...
– 1989189198 Jun 24 '12 at 7:39

It depends what you want to do. IMHO, you should study both the Dirac approach and the path integral approach. The Dirac approach can lead to mathematically impossible constraints (e.g., the Wheeler-DeWitt equation in quantum gravity); that's probably why the path integral approach is taught. I'd strongly suggest/urge/demand reading Henneaux and Teitelboim's Quantization of Gauge Systems after whatever you're reading currently (it's a book focusing on BRST quantization, but it discusses quite thoroughly the Dirac and Reduced Phase Space approaches). – Alex Nelson Jun 24 '12 at 15:02
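For reference, a schematic form of the phase-space (Faddeev) path integral for a system with first-class constraints $\phi_a \approx 0$ and gauge conditions $\chi^a = 0$; conventions vary between references, so treat this as a sketch rather than a definitive statement:

$$Z = \int \mathcal{D}q\,\mathcal{D}p\ \prod_a \delta(\chi^a)\,\delta(\phi_a)\,\bigl|\det\{\chi^a,\phi_b\}\bigr|\, \exp\left( i\int dt\, \left( p_i \dot{q}^i - H \right) \right).$$

Writing each $\delta(\phi_a)$ as an integral over a Lagrange multiplier $\lambda^a$ reproduces the familiar $\int \mathcal{D}\lambda\, e^{-i\int dt\, \lambda^a \phi_a}$ factor, which is the form whose functional derivative with respect to $\lambda^a$ recovers the quantum constraints, as described in the answer above.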
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9115805625915527, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/22699/morphisms-between-pure-complexes-of-sheaves/23257
## Morphisms between pure complexes of sheaves

I would like to understand the theory of pure complexes of (etale?) sheaves (of geometric origin?). In particular, I would like to understand which conditions are really necessary in (part 1 of) Theorem 3.1.8 of de Cataldo and Migliorini's survey http://www.ams.org/journals/bull/2009-46-04/S0273-0979-09-01260-9/S0273-0979-09-01260-9.pdf (see page 33-567). Does the splitting of part 1 (only!) really require $\overline{\mathbb{Q}_l}$-coefficients? Which coefficients could be put here? Does the splitting exist over $X_0$?

Below they explain that those extensions of mixed complexes $K_0,L_0$ of appropriate weights that come from $\mathbb{F}_q$ become zero over $\mathbb{F}$. My main question is: does there exist a triangulated category of complexes of sheaves where the corresponding Ext-group is zero from the beginning (i.e. we consider $Ext^1$ in a single triangulated category instead of the image of $Ext^1(K_0,L_0)\to Ext^1(K,L)$)? Is there such a category over a (more or less) general base scheme $S$ (instead of $\mathbb{F}_q$ or $\mathbb{F}$)? Again, which coefficient rings are possible here?

## 1 Answer

Dear Mikhail, I had been hoping someone else would attempt to answer this question, as I have been wondering very similar things lately. (In fact I drove myself crazy for about a month last year trying to work out some solution to what you are asking in the second paragraph.) I can't answer everything but here is a start:

Write $K = \overline{\mathbb{Q}_{\ell}}$. One already sees the problems with what you are asking for $X_0 = Spec\ \mathbb{F}_q$. Then the category of constructible $K$-sheaves on $X_0$ is equivalent to the category of finite dimensional $K$-representations of $Gal(\mathbb{F}/\mathbb{F}_q)$, the absolute Galois group of $\mathbb{F}_q$. (The absolute Galois group is generated topologically by Frobenius, and so this is the same as giving a finite dimensional $K$-vector space together with an endomorphism.) [See BBD 5.1.11 for a statement; I think this is explained in Milne, but I don't have it at the moment.]

Now, a sheaf on $X_0$ is pure of weight $i$ if all the eigenvalues of Frobenius are algebraic integers all of whose complex conjugates over $\mathbb{C}$ have the same absolute value $q^{i/2}$. Note that here we already see that over $X_0$ a pure sheaf does not need to be semi-simple. Indeed, there is no reason why Frobenius should act semi-simply. (This is one example of what de Cataldo and Migliorini are talking about in Remark 3.1.9 after Theorem 3.1.8.) I think it is part of the standard conjectures that Frobenius acts semi-simply on the $\ell$-adic cohomology of smooth projective varieties, which, as I understand it, is still not known.

I don't know what you mean when you ask: Does the splitting of part 1 (only!) really require $\overline{\mathbb{Q}_{\ell}}$-coefficients?

As to your main question, I think that the above example shows that this is too much to hope. Without working over $X_0$ one cannot define what it means to be mixed, and without going to $X_0$ one can't expect the same ext vanishing.

I recently discovered your work on "weight structures" and found it very interesting. I guess you are asking the above because you would like to argue that one gets a weight structure in the setting of $\overline{\mathbb{Q}_{\ell}}$-sheaves. There is one setting where I think that one really does get a weight structure.
This is in the (at least formally) very similar world of "mixed Hodge modules" on complex varieties. There one has the desired ext vanishing from the outset.

Semi-simplicity of Frobenius on l-adic cohomology of smooth projective varieties over finite fields is part of the Tate conjecture, and is open apart from some very special cases (e.g. H^1). – Emerton May 2 2010 at 15:52

Yes, I want to define a weight structure on (some version of) 'mixed sheaves'. Its existence would immediately imply that a pure complex splits as a direct sum of its perverse cohomology (put in the appropriate degrees). That's what I asked about. However, semi-simplicity should also hold, and, as you pointed out, it fails. I wonder why this does not contradict part (ii) of Proposition 5.1.15 of BBD. I will probably ask about this in a new question. – Mikhail Bondarko May 3 2010 at 4:45

Besides, it seems easy to prove the existence of a weight structure on Hodge complexes; yet I don't think that this could have really interesting consequences. Possibly, I will study Hodge modules (or something like this) in future. – Mikhail Bondarko May 3 2010 at 4:53

A comment on my first comment: I read BBD more carefully and saw that for $K_0,L_0$ the orthogonality statement is actually worse by 1 than the one needed in order to obtain a weight structure. So there are no contradictions, and $X_0$-sheaves do not fit my purposes. – Mikhail Bondarko May 3 2010 at 5:21
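To make the non-semisimplicity point in the answer concrete, a minimal illustration under the equivalence with Galois representations described above: take the rank-two sheaf on $X_0 = Spec\ \mathbb{F}_q$ corresponding to the Frobenius action

$$F = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

on $K^2$. The only Frobenius eigenvalue is $1$, of absolute value $q^{0/2}$ under every embedding, so this sheaf is pure of weight $0$; but $F$ is a non-trivial Jordan block, so the representation is a non-split self-extension of the trivial representation and hence not semisimple.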
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9530826210975647, "perplexity_flag": "head"}
http://gowers.wordpress.com/2007/09/15/discovering-a-formula-for-the-cubic/
# Gowers's Weblog Mathematics related discussions ## Discovering a formula for the cubic In this post I want to revisit a topic that I first discussed on my web page here. My aim was to present a way in which one might discover a solution to the cubic without just being told it. However, the solution that arose was not very nice, and at the end I made the comment that I did not know a way of removing the rabbit-out-of-a-hat feeling that the usual much neater formula for the cubic (together with its derivation) left me with. A couple of years ago, I put that situation right by stumbling on a very simple idea about quadratic equations that generalizes easily to cubics. More to the point, the stumble wasn’t completely random, so the entire approach can be justified as the result of standard and easy research strategies. I am no historian, but I would imagine that this idea is pretty similar to the idea (in some equivalent form) that first led to this solution. I shall assume familiarity with solving quadratics—the problem here is to find the right way of generalizing this solution. (If you want to see how one might discover a solution to the quadratic then I cover that in the earlier discussion of cubics.) Given that, then the first step in a natural discovery of a solution of the cubic is the observation, which one can hardly help making, that solutions to quadratics take the form $u\pm\sqrt{v}$. If we now turn things round and just assume that solutions will take this form then we can get a very quick derivation of the quadratic formula, which, for simplicity, I will do just for quadratics of the form $x^2+bx+c$. (Of course, it is very easy to reduce the general case to this case, so this is not a serious loss of generality.) The derivation comes from the well-known fact that the roots of such a quadratic must add up to $-b$ and must multiply to give $c$. The first fact tells us that $u=-b/2$ and the second tells us that $(u+\sqrt{v})(u-\sqrt{v})=c$, which in turn tells us that $u^2-v=c$, so that $v=u^2-c$. By our earlier computation, this is $b^2/4-c$. This gives the usual quadratic formula in the case $a=1$. Was that a fully justified argument? Yes, because once you are looking for roots of the form $u\pm\sqrt{v}$ there is no mystery behind the idea of looking at what you know about the two roots, converting that into some equations for $u$ and $v$ and trying to solve those equations. You can’t tell in advance that the equations will have a nice solution, but it’s very natural to give the approach a try. Now let us ask ourselves the following question: what would be the most blindingly obvious way of generalizing the above approach to cubics? There are two ideas we might have in connection with this. The first is to try to get the cubic into as simple a form as possible, and the second is to make a guess about the general form of the roots. Let us take each of these in turn, beginning with the second. What is the most natural way of generalizing our choice above for the form of the roots? To ask this question another way: we are trying to find XXX, where XXX is to the number 3 as $u+\sqrt{v}$ and $u-\sqrt{v}$ are to the number 2. There is a very obvious guess: we should take $u+r$, $u+s$ and $u+t$, where $r$, $s$ and $t$ are the three cube roots of some number $v$. 
If we write $\omega$ for the cube root of 1 (or, to be more specific, the number $e^{2\pi i/3}$), then we can write this guess as $u+v^{1/3}$, $u+\omega v^{1/3}$ and $u+\omega^2v^{1/3}$ (where $v^{1/3}$ is some cube root of $v$—it doesn't matter which). By analogy with the quadratic case, we are hoping that this will be the general form of a solution to the equation $x^3+bx^2+cx+d=0$. But a moment's thought shows that it cannot be. Let us see this in two different ways.

The first is that if that is the general form of the roots, then we have two degrees of freedom—the choice of $u$ and the choice of $v$. But we are looking at a three-dimensional set of equations (since we are free to choose $b$, $c$ and $d$). It is a good exercise to prove rigorously that our guess is guaranteed to be wrong for this reason, but for now let us be satisfied with the observation that it looks very worrying. Indeed, if life were that simple then it is hardly likely that solving the cubic would have been as hard a problem as it was.

A second way to see that the guess is wrong is to consider what happens if $b=0$. Now we are looking at a cubic of the form $x^3+cx+d$, and if the roots take the form stated then, since their sum is now zero, we find that $u=0$. But then the three roots are just the cube roots of $v$, so they are the roots of the equation $x^3-v=0$. In other words, the guess is wrong unless $c=0$. (This is of course an instance of the fact that we do not have enough degrees of freedom.)

So, with this small extra insight into the problem, let us try to come up with a better guess. How do we generalize a pair such as $u+\sqrt{v}$ and $u-\sqrt{v}$? We want a triple of roots, but we also want each component of the triple to have three degrees of freedom. In other words, we want each root to be made out of a $u$, a $v$ and a $w$. Since we don't quite know how we will build the roots, a helpful idea at this point is to lose some information in the quadratic case. This is a slightly subtle point that I will discuss more in a moment. First let us merely observe that I could have represented the two roots of a quadratic as $u+v$ and $u-v$, and it would still have been very easy to solve for $u$ and $v$. Then the fact that a square root was involved would not have been a guess (however natural) but something that one actually derived, in a very easy and natural way.

Since this slight modification of the quadratic guess will turn out to be very helpful, it is important to establish that it could be justified. That is, I am not drawing a rabbit out of a hat at this point. The justification is as follows. In the cubic case we do not know exactly what form our guess would take. We could just make some wild guesses and hope to hit the right answer. But much better is to make more general guesses and then work out what their more precise forms must be. We can do that in the quadratic case, so it is a very sensible strategy to try to do the same for cubics.

Having established this point, let us see what happens. We are now trying to find the natural analogue for the number 3, built out of three variables $u$, $v$ and $w$, of the pair $(u+v,u-v)$ in the degree-2 case. The pair $(u+v,u-v)$ consists of a couple of linear combinations of $u$ and $v$, so it is natural (though not essential to the discovery of the argument) to think of it as a linear transformation of the pair $(u,v)$.
That draws our attention to the matrix $\begin{pmatrix} 1&1\\ 1&-1\\ \end{pmatrix}$, and it is then very natural to wonder if this matrix has an obvious generalization to a $3\times 3$ matrix. It does! This is the $2\times 2$ case of the well-known circulant matrix, but even if you don't know that, you do know that the numbers 1 and -1 are the two square roots of 1. Moreover, this is not just a coincidence but the reason that they occur in our discussion of quadratics. So it is natural to try to build a $3\times 3$ matrix out of the three cube roots of 1, which are $1$, $\omega$ and $\omega^2$. In the end there is only one sensible choice to make (give or take the odd symmetry). It is the matrix $\begin{pmatrix} 1&1&1\\ 1&\omega&\omega^2\\ 1&\omega^2&\omega\\ \end{pmatrix}$. Thus, our guess for the forms of the three roots is $u+v+w$, $u+\omega v+\omega^2 w$ and $u+\omega^2 v+\omega w$. This seems a very satisfactory guess (even if we don't have a compelling reason to suppose that it will work).

So now we are left with the task of solving for $u$, $v$ and $w$ on the assumption that these are the roots of the cubic $x^3+bx^2+cx+d$. At this point one could just plunge in, but it helps a lot to simplify the cubic first by "completing the cube". This is the familiar idea (described in my other cubics discussion) that by substituting $y=x+b/3$ for $x$ you get a cubic in $y$ where the coefficient of $y^2$ is zero. So let's just assume, as we may, that $b=0$, so that we are looking for roots of $x^3+cx+d$. Since the roots add up to 0 and $1+\omega+\omega^2=0$, this tells us that $u=0$, so the three roots are now of the form $v+w$, $\omega v+\omega^2w$ and $\omega^2v+\omega w$. (We are therefore down to two degrees of freedom, as is the cubic we are trying to solve.)

The information we know about these three roots is that their product is $-d$ and that the sum of all the products of two of them is $c$. So the next task is clear: expand out these expressions and see if we can solve the resulting equations in $v$ and $w$. The details of this are not particularly important: you could stop reading now and just take on trust that we end up needing to solve quadratics and take cube roots, both of which we are allowed to assume that we can do. However, it's nice to see that it really does work.

The product of the three numbers $v+w$, $\omega v+\omega^2 w$ and $\omega^2v+\omega w$ works out to be $v^3+w^3$. (It's instructive to do this calculation for yourself and see how the fact that $1+\omega+\omega^2=0$ makes the other possible terms cancel. Then one can see that the fact that rather simple expressions come out of these calculations is not a coincidence.) As for the sum of the three products of two of them, it comes out to be $3(\omega+\omega^2)vw$, which equals $-3vw$. So we need $-3vw$ and $v^3+w^3$ to take the values $c$ and $-d$, respectively. This tells us that $v^3$ and $w^3$ are the two roots of the equation $x^2+dx-c^3/27=0$, so, as claimed, we can solve for $v$ and $w$ by solving a quadratic and taking cube roots. A small extra point is that one must think a bit about which cube roots to take (they should be paired so that $vw=-c/3$), but that I will gloss over here.

An obvious question: what happens if one tries to generalize this approach to quartics and quintics? The answer is that in both cases it is obvious how to generalize the guess about the form that the roots should take; before looking at those, the short numerical check below confirms the cubic recipe.
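A minimal numerical sketch of the recipe (the test cubic and variable names are chosen for illustration; the cube root is taken arbitrarily and $w$ is then set so that $vw=-c/3$):

```python
import numpy as np

c, d = -3.0, -2.0                          # test cubic x^3 + c x + d = 0, roots 2, -1, -1
v3 = np.roots([1.0, d, -c**3 / 27.0])[0]   # v^3 is a root of t^2 + d t - c^3/27 = 0
v = complex(v3) ** (1.0 / 3.0)             # any cube root of v^3 will do ...
w = -c / (3.0 * v)                         # ... provided w is paired so that v*w = -c/3
omega = np.exp(2j * np.pi / 3.0)
for x in (v + w, omega * v + omega**2 * w, omega**2 * v + omega * w):
    print(x, abs(x**3 + c * x + d))        # residuals should be ~0 up to rounding
```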
In the case of the quartic, when one guesses that they are of the form $u+v+w+t$, $u+iv-w-it$, $u-v+w-t$ and $u-iv-w+it$, everything works out nicely, if you get rid of the $x^3$ term and hence of $u$. You get some equations in $v$, $w$ and $t$ and they aren't too hard to solve. If you try it for the quintic then, not too surprisingly, you end up with some equations that are more complicated than the quintic you started with.

Apologies for the matrices not coming out: I'll repair that as soon as I can work out how to do so. [Now sorted out, with help from comments below: many thanks.]

This entry was posted on September 15, 2007 at 6:01 pm and is filed under Demystifying proofs. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

### 20 Responses to "Discovering a formula for the cubic"

1. Maurizio Says: September 15, 2007 at 7:49 pm | Reply

Dear Timothy, let me praise your effort in "demystifying proofs"; I think that it is one of the most important roles of mathematics, and we (mathematicians or just students like me) should try to spread a wider awareness of its importance. This is a more "human" aspect of mathematics, since it concerns more "who is studying" rather than "what is being studied". I often feel tempted to try to learn proofs and to make proofs just by blindly applying magic tricks, trying to produce a proof that may be checked by a computer, but that would give no insight about what is being proved. I'm now learning to recognize such a tendency as missing trust in what can be understood (while instead I'm blindly relying on my intuitive skills to "set up the mess"), and I'm trying to correct it by all means. When (if) I become a professor, I will be very careful about telling someone to "work hard", to avoid people trying to do so by just studying theorems without sufficient awareness (instead of trying to develop it). I'm just trying to express my thoughts, and I'd be happy to hear the advice of more experienced people. Thanks for your post!

2. echoone Says: September 15, 2007 at 8:03 pm | Reply

Check out this post. It helped me figure out how to get matrices working properly on WordPress.com.

3. Terence Tao Says: September 15, 2007 at 8:41 pm | Reply

Dear Tim, For matrices in wordpress, I find that \begin{pmatrix} ... \\ ... \end{pmatrix} works nicely, e.g. $\begin{pmatrix} a & b \\ -b & a \end{pmatrix}$. (If you "edit" this comment you will be able to steal the latex code.)

It is amusing to recast your above discussion through the lens of Fourier analysis. One can view solving a polynomial as solving a system of symmetrised equations. For instance, by the factor theorem, the task of finding the three roots x,y,z of the cubic $x^3 + bx^2 + cx + d = 0$ is equivalent to solving the system of equations

$x+y+z = -b$

$xy + yz + xz = c$

$xyz = -d$.

Now these equations are invariant under cyclic shift of the x,y,z. One of the key insights of Fourier analysis is that any mathematical problem which enjoys a translation invariance symmetry is likely to be clarified by use of the Fourier transform. This would motivate the Fourier substitution $x = u+v+w$, $y = u + \omega v + \omega^2 w$, $z = u + \omega^2 v + \omega w$, which leads to

$3u = -b$

$3u^2 - 3vw = c$

$u^3 + v^3 + w^3 - 3 uvw = -d$

This lets one solve for u; to solve for v and w one can then observe a residual cyclic symmetry between v and w, prompting another Fourier transform $v = s+t$, $w = s-t$, which soon lets one solve for everything.
Presumably this can all be interpreted nicely in terms of the Galois group $S_3$, and in particular in terms of the solvability of that group, especially given that solvable groups can be "built up" from abelian groups such as ${\Bbb Z}/3{\Bbb Z}$ and ${\Bbb Z}/2{\Bbb Z}$, which are precisely the groups which enjoy nice Fourier transforms. My Galois theory is incredibly rusty, though, so I don't see the connection clearly.

4. gowers Says: September 15, 2007 at 9:07 pm | Reply

Terry: I didn't have the "p" in "\pmatrix" and my slashes were the wrong way round: thanks for the tip. I idly wondered about Fourier analysis when the circulant matrix came up (of course) but didn't get further than that "idly". It is indeed a nice way of looking at it, and gives me some hope of carrying out a further project, possibly with the help of suggestions on this blog, of arriving at a completely demystified proof of the insolubility of the quintic. My Galois theory is also very rusty — in fact, I never really understood it properly as an undergraduate — which I regard as an essential qualification for carrying out such a project, since what I'd really like to do is end up with a proof that doesn't use the language of group theory. Alternatively, I'd like to make enough elementary observations (all by following natural problem-solving techniques) that the idea of looking at automorphisms of number fields emerges of its own accord. The second approach is probably more tempting to anybody who does know their Galois theory well, but it is a challenge to do it if you aren't allowed to draw rabbits out of hats. But your idea of not doing the initial simplification by getting rid of the quadratic term and seeing two symmetries in operation is a big help, since, as you say, it leads to the notion of solvability of a group (not that I have fully worked out the connection either, and one has to try to find the connection in a way that doesn't rely on knowing that it is there to be found).

5. Gaspard Says: September 15, 2007 at 9:53 pm | Reply

About a demystified proof of the insolubility of the quintic: I heard (but haven't checked myself) that a nice book about this is the one by Alekseev, which explains a topological approach due to Arnold that is meant to be understandable by high school students. Here's the amazon entry http://www.amazon.com/Abels-Theorem-Problems-Solutions-International/dp/1402021860 and here is a related note of Arnold about it http://www.institut.math.jussieu.fr/seminaires/singularites/abel.pdf

6. Phil G Says: September 15, 2007 at 10:54 pm | Reply

Thanks for the nice post. Give us more like this please. I think you have a typo where you have written $x^3 - dx - c^3$ instead of $x^2 - dx - c^3$.

7. gowers Says: September 16, 2007 at 12:05 am | Reply

Phil: thanks — typo fixed.

8. gregknese Says: September 16, 2007 at 8:34 pm | Reply

I really enjoyed this post and the discussions afterwards. It seems many people are rusty on Galois theory. Every time I have learned Galois theory (and then promptly forgotten it) I always left feeling that it is an amazing theory but I would never know how to use it to solve a concrete question. For example: if one knows that a given quintic is solvable by radicals, how does one go from there and actually find the roots? Any thoughts?
9. Terence Tao Says: September 17, 2007 at 5:33 am | Reply

By some coincidence, there is another blog post on the solvability of the cubic, this time from the perspective of classical invariant theory, at http://rigtriv.wordpress.com/2007/08/29/invariants-and-solving-polynomials/ Basically, the philosophy here is to only permit yourself to write down expressions (such as the discriminant) which transform nicely under projective changes of coordinates. There are relatively few of these expressions, and so you have a smaller "search space" in which to find the invariants that factorise the original polynomial into linear factors.

10. JOHN SMITH Says: September 18, 2007 at 1:27 pm | Reply

Hi Tim, It was stated on your home page that any reasonable person wouldn't be expected to solve the cubic in a few hours or so. However, how long would you expect, say, a Cambridge undergraduate with no knowledge of solving the cubic to take to discover the solution? And I wondered if it would be a good idea for you to write a similar article on finding the closed form of the sum $1+2^m+3^m+\dots+n^m$ where $n$ and $m$ are positive integers, to offer some insight into how one might discover the formula for that? Thanks, John

11. davidspeyer Says: September 18, 2007 at 11:15 pm | Reply

JOHN SMITH: There is a discussion in section 6.5 of Concrete Mathematics, by Graham, Knuth and Patashnik, of how one might discover and prove the formula for this sum by nothing but sheer obstinacy. Later, in section 7.3, they describe how the problem becomes easier when armed with the tool of generating functions. I'll sketch the latter attack here, because generating functions are a great tool.

Set $S(m,n)=1+2^m+3^m+\dots+n^m$. There are four approaches you might try in a generating function attack: you could define any of the four functions $A_m(w)=\sum_n S(m,n) w^n$, $B_m(w)=\sum_n S(m,n) w^n/n!$, $C_n(z)=\sum_m S(m,n) z^m$, or $D_n(z)=\sum_m S(m,n) z^m/m!$ and try to get this function into a simple form. With A, B and C, we strike out. But, if we come back for a fourth swing, $D_n$ has the nice closed form $e^z(e^{nz}-1)/(e^z-1)$; up to the harmless factor $e^z$, this is $(\sum_{k\ge 1} n^k z^k/k!)(1/z-1/2+z/12-z^3/720+\dots)$, and we get the closed formula immediately. So one approach to this sum comes down to knowing about generating functions, being willing to try again when the first attack fails, and some basic comfort manipulating series.

12. One way of looking at Cauchy's theorem « Gowers's Weblog Says: September 20, 2007 at 9:30 am | Reply

[...] this? The reason is rather similar to the reason that I got rid of the square root round in my post about cubics. In both cases I wanted to generalize something that was a bit too complicated for it to be obvious [...]

13. gowers Says: September 21, 2007 at 3:43 pm | Reply

My answer to JOHN SMITH's first question of September 18th is that I would expect most Cambridge undergraduates not to manage to find a solution to the cubic unaided. However, that is not because they would be incapable of it. My post tries to demonstrate that by showing that you don't have to have flashes of genius to solve the cubic — just the wish to generalize the quadratic solution in as natural a way as you can. Rather, it is because only a smallish proportion of Cambridge undergraduates (or indeed, mathematics undergraduates anywhere) start out with the belief that they could ever solve a mathematics problem that wasn't carefully constructed to be solvable in a fairly routine way. There's a simple way of getting this belief, which is to solve one hard problem.
It may take a few hours, or a week, or a month, or six months, or even longer, but once someone has done it they understand from their own experience that it is possible. Then they start to have positive thoughts like, "What strategy will maximize my chances of solving this problem?" or "I seem to keep having the same difficulty — what could I do differently?" rather than negative ones like, "Maths is hard, and I haven't been taught how to do this, so there's no chance of my managing." And then they find that solving difficult problems is just like many other activities: hard, yes, but not impossible and something that gets easier with practice.

14. JOHN SMITH Says: September 21, 2007 at 4:47 pm | Reply

Thanks for that Tim. What you say is encouraging to me. I have deliberately not read this post on the cubic in its entirety because I wanted to discover it without being told how, which is, I presume, the whole idea of the post. At first I tried 'completing the cube', which seemed a natural thing to try, but I was heading in the wrong direction.

15. Nick Krempel Says: December 5, 2007 at 5:57 pm | Reply

Sorry for this somewhat late post – I just discovered your blog and am playing catch-up! Re: your "arriving at a completely demystified proof of the insolubility of the quintic", I would recommend looking at Galois theory by the route through which it was originally discovered, rather than the more modern abstract point of view. Or more accurately, the various attempts at it (e.g. by Lagrange) which were subsequently refined (Galois), and what ideas led them at each stage. I haven't yet read much about this myself, but as far as I remember the book "Pioneers of Representation Theory" has a nice section on this early on, including the sorts of explicit calculations gregknese was enquiring about: how do I actually "solve" this supposedly solvable equation?

On an unrelated note, when you say "we end up needing to solve quadratics and take cube roots, both of which we are allowed to assume that we can do", I wonder how many people would actually consider cube roots of a complex number something basic they "can calculate", as, unlike in the square root case, you can't find the real and imaginary parts in terms of roots of only real numbers, so presumably would resort to using transcendental functions (in which case you could of course solve the quintic too). Ultimately you have to make a choice of "numbers I can deal with" – something you've talked about before in other places.

16. Ricardo S Says: December 27, 2007 at 7:25 am | Reply

When Mark Kac was young he solved the cubic in a different way: if $a$, $b$, and $c$ are the roots and $w^3 = 1$ where $w$ is not 1, let $s = a + b + c$, $t = a + wb + w^2c$, $u = a + w^2b + wc$, and then express $t^3 + u^3$ and $t^3 u^3$ in terms of the symmetric terms. There is a nice story to that solution; you can find it in Mark Kac's autobiography "Enigmas of Chance". It seems to me that this solution is related to Terence Tao's solution as a sort of inverse transformation.

Since someone cited sums of powers, I remembered finding a nice "geometrical" solution to the sum of squares $1^2 + 2^2 + 3^2 +\dots+ n^2$. Organize one 1, two 2's, three 3's, ..., and $n$ $n$'s into an equilateral triangle in the natural way, then add the corresponding elements of the three versions of this triangle (the original and the ones rotated by 120 and 240 degrees); it is nice to see that the sum is constant and equal to $2n+1$.
Since there are n(n+1)/2 elements in the triangle, the total sum is n(n+1)/2 * (2n+1), and the sum of the elements in the original triangle is n(n+1)(2n+1)/6, which is the sum of the squares. I would like to present this in a motivated way, but I am afraid that would be somewhat longer. I also tried to generalize this idea to get the sum of cubes but failed. Since it has such a nice sum, I suppose that it could also have a nice geometrical method to it. Does anyone have an idea?

17. Heinrich Says: December 30, 2007 at 9:32 pm | Reply

As Nick already pointed out, the historical route to Galois theory is the most illuminating. According to the marvelous read (Jörg Bewersdorff, Galois Theory for Beginners: A Historical Perspective, http://tinyurl.com/2u6k52 ), Lagrange came up with the sum-of-roots-of-unity ansatz while trying to understand what’s going on with the quintic. In fact, I find the usual “modern” presentation of Galois theory utterly incomprehensible. I mean, it’s not so bad and quite simple after knowing the historical approach of permuting the roots. But why, why is the easy way pretty much forgotten? Why struggle months with something abstract-nonsensical and void of motivation when there’s a dead simple and even more illuminating approach to understand it in hours? As Feynman (?) put it: “If you can’t explain it to a six year old, you don’t really understand it.” But I’m preaching to the choir.

18. Heinrich Says: December 30, 2007 at 9:47 pm | Reply

Oh, and concerning the sum $1^k + 2^k + \dots + n^k$, there is a simple way of finding a formula for any given $k$: the simple observation is that in the cases $k=1,2,3$, the formula is a polynomial of degree $k+1$, and the idea is to make exactly this ansatz in the general case. This is further simplified by changing coordinates, i.e. by using a different basis than $n^j$ for the vector space of polynomials. A better basis is $n, n(n-1), n(n-1)(n-2), \dots, n(n-1)\cdots(n-k)$, since these functions have nice sums. In other words, try to find an explicit formula for $\sum_{a=0}^n a(a-1)(a-2)$ and be delighted. (See the machine check of this ansatz at the end of this thread.)

19. John Armstrong Says: December 30, 2007 at 10:16 pm | Reply

Heinrich: or you could use Faulhaber’s Fabulous Formula.

20. Geometry of a Polynomial « Rigorous Trivialities Says: January 11, 2008 at 11:26 pm | Reply

[...] of equations involving (because it contains a cubic, which is not good, see this post of mine and this one of Gowers’s which show how hard a single variable cubic is), we can instead homogenize and solve the system [...]
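As an aside to comments 11 and 18: the polynomial ansatz is easy to try by machine. Here is a minimal Python sketch (not from the thread; the function name is made up) that recovers the closed form of $1^k + 2^k + \dots + n^k$ by exact rational interpolation through $k+2$ sample values. It silently assumes, as comment 18 argues, that the sum really is a polynomial of degree $k+1$ in $n$.

```python
# Illustrative sketch: recover the closed form of 1^k + 2^k + ... + n^k by
# fitting a degree-(k+1) polynomial through k+2 exact sample values,
# assuming (as argued above) that the sum is such a polynomial.
# Uses sympy for exact rational arithmetic.
from sympy import symbols, interpolate, factor

n = symbols('n')

def power_sum_formula(k):
    # k+2 sample points (m, S_k(m)) determine a polynomial of degree k+1.
    points = [(m, sum(i**k for i in range(1, m + 1))) for m in range(k + 2)]
    return factor(interpolate(points, n))

for k in range(1, 5):
    print(k, power_sum_formula(k))
# k = 2 prints n*(n + 1)*(2*n + 1)/6, matching Ricardo's triangle argument.
```

Faulhaber’s formula, mentioned by John Armstrong, produces the same polynomials in closed form via Bernoulli numbers.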
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 129, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.956365704536438, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/175750/the-definition-of-metric-space-topological-space/175787
# The definition of metric space, topological space

I have read some books on analysis; all of them define metric spaces, topological spaces, or vector spaces directly, without any motivation. Therefore, I want to know the background of these definitions and the problems the spaces aim to solve. Is there any reference? Thanks a lot.

- 6 What's so mysterious about the definition of a metric space? It's quite obviously axiomatising a notion of "distance". I think it's also quite clear that a vector space axiomatises a notion of space in which translations and directed line segments make sense. For topological spaces, see here or here. – Zhen Lin Jul 27 '12 at 7:02

## 5 Answers

Originally mathematics was intended to describe the real world. We then continued to develop it using the intuition of how the real world behaves in order to describe how mathematical objects would behave. In the 19th and 20th centuries mathematics had several foundational crises. It turned out that intuition is not a good enough foundation for mathematics. Instead we need to describe certain properties in a formal fashion and use logical rules of inference to deduce properties of mathematical objects.

When this is done, one can easily see that certain properties are enough to discuss certain happenings. For example, if $T$ is a linear transformation of $\mathbb R^2$ then it is surjective if and only if it is injective. However, the fact that we are over $\mathbb R$ did not matter, and this is certainly true for $\mathbb R^3$ as well. It turns out that if $V$ is a finitely generated vector space over any field $\mathbb F$ then this is true.

If so, the notion of a space tells us that it is a mathematical universe which has a certain structure. These are the concrete properties we need in order to generalize the concrete real-world phenomena to a general mathematical context. This abstraction is very useful because it allows us to apply the same tools to seemingly different problems, simply by showing that two different objects can be seen as constructs of a similar kind.

Now we return to the particular question: vector spaces, topological spaces, metric spaces, etc. Those are often generalizations of things naturally arising during the intuition-based era of mathematics. For example, $\mathbb R$ is a metric space. It has a very natural metric, the absolute value, which tells us how close two numbers are. This notion can be used to define things like continuity of a function, or convergence. It turns out that measuring distance can be done in a different way, and the distance function need only satisfy several basic properties; a space in which we can measure distances between points in this way is what we call a metric space.

Similarly, but less clearly, topology is also a generalization of the real numbers and metric spaces. We notice that we can use open intervals in the real numbers: a sequence converges to a point $x$ if every open interval around $x$ contains all but finitely many points of the sequence. Therefore open intervals are a good way to measure convergence. Topological spaces are very much a generalization of this notion: we define sets which we call open, and a sequence converges to a point if in every open set which contains this point almost all of the sequence can be found.

Vector spaces arise naturally when solving a system of linear equations in several variables.
It turned out that not only can we generalize the number of variables and equations, but also that linear functions can be used to approximate less-linear functions (e.g. differentiable functions), and that vector spaces can be used to describe many more objects in mathematics. For example, "all the real-valued continuous functions on the real numbers" has a very natural structure of a vector space over $\mathbb R$. In this vector space integration can be seen as a linear functional, and taking antiderivatives is a linear operator. Both are very natural to calculus.

All these notions, and much much more, are generalized even further in mathematics. What many mathematicians do is the study of properties, asking "what property would guarantee that a certain consequence is true?", and "what property is necessary for this consequence to hold?". That, in my eyes, is the greatest beauty in mathematics: the ability to isolate and generalize properties from the particular case into an extremely abstract case.

-

You are quite right to ask for the context of these definitions. One place to look in this case is http://www-history.mcs.st-and.ac.uk/HistTopics/Topology_in_mathematics.html

There are three reasons for abstraction: 1. To cover many known examples. 2. To simplify proofs by giving the key reasons why something is true. 3. To be available for new examples. Thus the power of abstraction is also to allow for analogies. One should also mention the amazing extension of the notion of metric space by F.W. Lawvere in: Lawvere, F. William, "Metric spaces, generalized logic, and closed categories", with an author commentary: "Enriched categories in the logic of geometry and analysis", Repr. Theory Appl. Categ. No. 1 (2002), 1–37.

Later: You should also realise that one of the driving forces of abstraction is laziness! Thus suppose we are working in the space $\mathbb R^3$ with the usual Euclidean distance, and have two points, say $P=(x,y,z), Q=(u,v,w)$. After a time we might get fed up with writing down the formula for the distance from $P$ to $Q$ and decide to abbreviate it to $d(P,Q)$. Then you start asking yourself what properties of $d(P,Q)$ you are really using, and it may be a surprise to find how few of these you need for the proofs, and how much easier it is to use these properties to write down the proofs and to understand them. Thus these properties of $d$ become the underlying structure for this situation. You find that you really understand why something is true. Then you find that these properties apply to more examples, and you are well away to an "abstract" theory.

Again the pressure might be to apply arguments you have used in one situation in another, but the notion of distance does not immediately apply. Hence the notion of "neighbourhood". After many years it was found that in many situations the notion of "open set" is easier to work with, and has a logically simpler set of rules. So this comes to be thought of as THE definition of a topological space, and the poor students often get presented with this definition with no history, no motivation, no background, but a command to learn it! (Protests not allowed, either!)

One of the reasons for abstraction is also that analogies are not between things but between the relations between things. So knots are quite unlike numbers, but the rules for the addition of knots are analogous to the rules for multiplication of numbers. So one can define a "prime knot", and ask: are there infinitely many prime knots?
This is how mathematics advances, often for lack of a simple idea. As Grothendieck wrote: "Mathematics was held up for thousands of years for lack of the concept of cipher [zero], and nobody was around to take such a childish step."

Grothendieck has also argued in Section 5 of his famous "Esquisse d'un programme" (1984) against the concept of topological space, as being inadequate to express geometry, or at least the geometry he had in mind. So there is nothing sacrosanct about these concepts, and their applicability and disadvantages need to be borne in mind.

In a college debate (years ago!) I was taken to task by a more experienced debater who quoted: "Text without context is merely pretext." I believe that the import of this applies also to mathematics, and relates to my initial remarks.

- Who am I to say, but I find that when people say that mathematics was held up because no one invented zero, it is extremely condescending and lacking the capacity to appreciate the monumentality of such an idea. It is the leap of transcending from existence to non-existence. This is just like suggesting that electricity is child's play; or nuclear fission; even walking or talking is highly nontrivial. To decontextualize one of my favourite quotes: "...two plus two make four. If that is granted, all else follows." – Asaf Karagila Jul 27 '12 at 23:26

1 I think the force of the word "childish" is also "without prejudice". It has been said that a force against zero was that emptiness was associated with chaos, the work of the devil. The context of my quote was on groupoids; as a childish person, I took to the idea, and its higher dimensional ramifications, but both have been described as "nonsense" or "ridiculous" by senior figures in the UK; no book on algebraic topology in English, except mine, has used the notion of the fundamental groupoid on a set of base points, which I introduced in 1967, and AG liked! – Ronnie Brown Jul 28 '12 at 9:10

I don't think that you will find that there were specific problems in mind in the development of these more abstract concepts. In terms of topological spaces, around the time of the development of a rigorous foundation for the calculus, mathematicians had to come to grips with what exactly a real number was, and what properties the real line had. They also discovered that they could apply certain ideas and methods from the familiar Euclidean spaces to "spaces" which were quite unlike these. As these ideas and methods gained more use, it was natural to attempt to find the central core of these arguments and develop a general theory about these sorts of "spaces". The following is taken directly from Engelking's text (General Topology):

The emergence of general topology is a consequence of the rebuilding of the foundations of calculus achieved during the 19th century. Endeavours at making analysis independent of naive geometric intuition and mechanical arguments, to which the inventors of calculus I. Newton (1642-1727) and G. Leibniz (1646-1716) referred, led to the precise definition of limit (J. d'Alembert (1717-1783) and A. L. Cauchy (1784-1857)), to formulation of tests for convergence of infinite series (C. F. Gauss (1777-1855)) and to clarifying the notion of a continuous function (B. Bolzano (1781-1848) and Cauchy). The necessity of resting calculus on a firmer base was generally recognized when various pathological phenomena in convergence of trigonometric series were discovered (N. H. Abel (1802-1829), P. G. Lejeune-Dirichlet (1805-1859), P.
du Bois-Reymond (1831-1889)) and the first examples of nowhere differentiable continuous functions were described (Bolzano, B. Riemann (1826-1866) and K. Weierstrass (1815-1897) in 1830, 1854 and 1861, respectively). The latter examples unsettled common outlooks and led to a revision of the notion of a number and to the rise of rigorous theories of real numbers. The most important ones were: the theory proposed independently by Ch. Méray (1835-1911) and by G. Cantor (1845-1918), where real numbers were defined as equivalence classes of Cauchy sequences of rationals, and the theory due to R. Dedekind (1831-1916), where real numbers were defined as cuts in the set of rationals. Both theories gave a description of the topological structure of the real line.

General topology owes its beginnings to a sequence of papers by Cantor published in 1879-1884. Discussing the uniqueness problems for trigonometric series, Cantor concentrated on the study of sets of "exceptional points", where one could drop some hypotheses of a theorem without damaging the theorem itself. Later he devoted himself to an investigation of sets, originating in this way both set theory and topology. Cantor defined and studied, in the realm of subsets of Euclidean spaces, some of the fundamental notions of topology. Further important notions, also restricted to Euclidean spaces, were introduced in 1893-1905 by C. Jordan (1838-1922), H. Poincaré (1854-1912), E. Borel (1871-1956), R. Baire (1874-1932) and H. Lebesgue (1875-1941).

The decisive step forward was the move from Euclidean spaces to abstract spaces. Here, Riemann was the precursor; in 1854 he introduced and studied the notion of a two dimensional manifold and pointed out the possibility of studying higher dimensional manifolds as well as function spaces. Around 1900, when fundamental topological notions were already introduced, a few papers appeared exhibiting the existence of natural topological structures on some special sets, such as: the set of curves (G. Ascoli (1843-1896)), the set of functions (C. Arzelà (1847-1912), V. Volterra (1860-1940), D. Hilbert (1862-1943) and I. Fredholm (1866-1927)) and the set of lines and planes in three-dimensional space (Borel). In this way the ground was prepared for an axiomatic treatment of the notion of a limit and, more generally, the notion of proximity of a point to a set.

- 2 Congrats on 3,000! :-) – Asaf Karagila Jul 27 '12 at 9:56

1 @Asaf: Thanks. Only about 37,000 more points to catch up to your current total. And likely another 80,000 to catch up to your then-current total once I get there. And then.... well, suffice to say, it's probably divergent. In the meantime, I'll amuse myself by voting to close questions all over the place! >;) – Arthur Fischer Jul 27 '12 at 16:47

Well, for the metric space, it's quite obvious that the metric is just an abstraction of the common concept of distance. So in the real world, there are places. Whenever you have two places (say, New York and the place where you currently are), you can tell the distance (e.g. "I'm twelve miles from New York"). That distance is never negative (if someone says "I'm minus twenty miles from New York" you immediately know he's speaking nonsense, even if you have no idea where he is). Also, if that distance is 0, you obviously must be in New York, and if you are in New York, your distance to it must be 0. Also, if you are 12 miles from New York, New York is 12 miles away from you.
There's also the observation that if you look at the distances of both you and New York to a third place (say, to Washington), the distances add up to something at least as large as your distance to New York. Essentially that encodes the fact that the direct way to New York is the shortest way; it would be a strange distance measure where you could save by making a detour.

Now what I just described above are exactly the axioms of a metric space. The places are called "points", and since there are many of them, there's a set of them. The distance is called a "metric" and is required to have exactly the properties given above: it is defined for each pair of points (for each two places, there's a distance), $d(x,y)\ge 0$ (the distance cannot be negative), $d(x,y)=0$ exactly if $x=y$ (if your distance to New York is 0, you are in New York, and vice versa), $d(x,y)=d(y,x)$ (you're as far from New York as New York is from you) and $d(x,y) \le d(x,z) + d(z,y)$ (the distance between you and New York cannot be larger than the distance between you and Washington plus the distance between Washington and New York).

OK, now on to the topological space. Now think not about just single places, but complete areas (for example, countries on earth). One obvious question you can ask is whether you are inside the country or at the border. Of course you can decide that by just measuring the distance to the border (which is the minimum of the distances to all points on the border) and seeing if it is larger than 0. However, it seems strange that you need to measure distances to do so. After all, you should be able to tell if you are on the border without that. If you are on the border, none of the countries surrounds you. So you need to have a concept of "a set surrounding you" which ultimately doesn't rely on distance.

This concept is given by open sets and neighbourhoods. An open set is just a set which surrounds all its points (which means it doesn't contain its own border). So whenever you are in that set, you are surrounded by it, and definitely not at its border. Of course if you are in that set, you are also surrounded by all those sets which contain that set. Such sets are called neighbourhoods. Now if you are on the border of some set, there's of course no such neighbourhood which is completely in that set. So all you need to distinguish the interior from the border is the concept of open sets (= sets which surround all their points).

Again, such "sets which surround all their points" have some general properties, which make up the definition of topological spaces. For example, the surface of earth (i.e. the set of all points) surrounds all its points. Also, the empty set surrounds all its points (because it has no points, it surrounds all of them). Moreover, if two sets surround all their points, the intersection does, too (because if you are in the intersection, you are surrounded by both sets, and thus by the intersection). And if you take a union of arbitrary "surrounding" (i.e. open) sets, again you get a "surrounding", i.e. open, set. So the concept of "open set" and the topological space are actually an abstraction of the real-world concept "being surrounded by".

- – Ronnie Brown Jul 28 '12 at 16:45

@RonnieBrown: I only get "The requested URL /~mas010/publar.htm was not found on this server." from your link. Maybe you mistyped it? – celtschk Jul 29 '12 at 17:47

– Rahul Narain Jul 30 '12 at 19:26

@Rahul Narain @celtschk Thanks Rahul for the correction!
– Ronnie Brown Aug 6 '12 at 10:17

The problem the spaces aim to solve: they are "helpers". I mean, you have some mathematical structure and you want to study it. Once you find, "Oh, my structure is metrisable (i.e., there exists a metric on it)," you can open any textbook on metric spaces and everything will be valid for your structure. If you find a structure that is closed under multiplication by numbers and addition, you have a vector space, and again the whole theory is valid for your structure. This is the reason why it is important to study such general structures as "spaces".

-
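As an aside: celtschk's travel-distance observations translate directly into a brute-force test on finite examples. A minimal Python sketch (the names and data are illustrative assumptions, not from any answer above) checks each metric axiom over all pairs and triples of points:

```python
import itertools, math

def is_metric(points, d):
    """Brute-force check of the metric axioms on a finite set of points."""
    for x, y in itertools.product(points, repeat=2):
        if d(x, y) < 0:                   # distances are never negative
            return False
        if (d(x, y) == 0) != (x == y):    # distance 0 exactly when the places coincide
            return False
        if d(x, y) != d(y, x):            # symmetry
            return False
    for x, y, z in itertools.product(points, repeat=3):
        if d(x, y) > d(x, z) + d(z, y):   # a detour never beats the direct way
            return False
    return True

places = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]   # three collinear "cities"
euclid = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
print(is_metric(places, euclid))                          # True
print(is_metric(places, lambda p, q: euclid(p, q) ** 2))  # False
```

The last line is the "detour" remark in code: with squared distances, travelling from (0, 0) to (6, 8) directly costs 100, while going via (3, 4) costs only 50, so the triangle inequality fails.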
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564432501792908, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/56582/analogue-of-spherical-coordinates-in-n-dimensions
# Analogue of spherical coordinates in $n$ dimensions

What's the analogue of spherical coordinates in $n$ dimensions? For example, for $n=2$ the analogue is polar coordinates $r,\theta$, which are related to the Cartesian coordinates $x_1,x_2$ by $$x_1=r \cos \theta$$ $$x_2=r \sin \theta$$

For $n=3$, the analogue would be the ordinary spherical coordinates $r,\theta ,\varphi$, related to the Cartesian coordinates $x_1,x_2,x_3$ by $$x_1=r \sin \theta \cos \varphi$$ $$x_2=r \sin \theta \sin \varphi$$ $$x_3=r \cos \theta$$

So these are my questions: Is there an analogue, or several, of spherical coordinates in $n$ dimensions for $n>3$? If there are such analogues, what are they and how are they related to the Cartesian coordinates? Thanks.

- 2 – anon Aug 9 '11 at 19:22

## 1 Answer

These are hyperspherical coordinates. You can see an example of them being put to use in this answer.

- That's really funny, because I happened to read that answer just yesterday. And I was going to reference it for this question too. – mixedmath♦ Aug 9 '11 at 19:29
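To make the pattern concrete, here is a minimal Python sketch of one common convention (conventions differ between sources in the ordering of axes and angles, so treat the exact formulas as an assumption): each Cartesian coordinate is $r$ times a running product of sines capped with a cosine, and the last coordinate is the pure sine product.

```python
import math

def hyperspherical_to_cartesian(r, angles):
    """Map (r, phi_1, ..., phi_{n-1}) to Cartesian (x_1, ..., x_n).

    One common convention: x_k = r sin(phi_1)...sin(phi_{k-1}) cos(phi_k)
    for k < n, and x_n = r sin(phi_1)...sin(phi_{n-1}); here phi_1, ...,
    phi_{n-2} range over [0, pi] and phi_{n-1} over [0, 2*pi).
    """
    coords = []
    sin_prod = 1.0
    for phi in angles:
        coords.append(r * sin_prod * math.cos(phi))
        sin_prod *= math.sin(phi)
    coords.append(r * sin_prod)
    return coords

# n = 2 reduces to polar coordinates (r cos(theta), r sin(theta)):
print(hyperspherical_to_cartesian(1.0, [math.pi / 3]))
# In any dimension the Euclidean norm comes out as r:
x = hyperspherical_to_cartesian(2.0, [0.4, 1.1, 2.0, 5.5])
print(math.hypot(*x))  # ~2.0
```

The sum of squares telescopes to $r^2$, which the final norm check confirms numerically.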
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8934882879257202, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/25299-dividing-surd-help.html
# Thread: dividing a surd help

1. ## Dividing a surd help

Hello, how would I go about dividing $6+\sqrt{40}$ by 2? The answer in the book is $3+\sqrt{10}$, but I don't see how to get that. Can somebody please help? Thank you, Kris.

2. Hello. Try writing $\frac{6 + \sqrt{40}}{2}$ as $\frac{6 + \sqrt{4}\sqrt{10}}{2}$.

3. Originally Posted by Kris4485: Hello, how would I go about dividing $6+\sqrt{40}$ by 2? The answer in the book is $3+\sqrt{10}$, but I don't see how to get that. Can somebody please help? Thank you, Kris.

$\frac{6+\sqrt{40}}{2} = \frac{6+\sqrt{4 \cdot 10}}{2} = \frac{6}{2} + \frac{2\sqrt{10}}{2} = 3 + \sqrt{10}$
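Such simplifications are easy to sanity-check with a computer algebra system; for instance, sympy extracts the square factor from $\sqrt{40}$ automatically (a verification aid only, not part of the original thread):

```python
from sympy import sqrt, simplify

expr = (6 + sqrt(40)) / 2
print(simplify(expr))                    # 3 + sqrt(10)
print(simplify(expr - (3 + sqrt(10))))   # 0, confirming the book's answer
```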
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9683736562728882, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/modal-logic
## Tagged Questions

### Validity in Kripke frames whose points are finite or infinite sequences (2 answers, 129 views)
Suppose $D$ is a non-empty set and $\{ R_i : i \in \mathbb{N} \}$ is a family of binary relations on sequences over $D$ so that $R_i \subseteq D^i \times D^i$. Let $R_\omega \subseteq$ …

### On the Combinatorial Classification of Modal Kripke Frames (0 answers, 51 views)
We have that S5 modal logic is characterized by the modal axioms $K$, $M$ (reflexive), $4$ (transitive), and $B$ (symmetric). That is, an equivalence relation on a set of possible …

### How are Modal Logic and Graph Theory related? (4 answers, 674 views)
I am currently taking a graduate logic course on Modal Logic and I can't help noticing that there is a certain class of graphs characterized by modal axioms such as (4) $\Box p$ …

### Why the preimage rather than image in Stone-type dualities (3 answers, 234 views)
I am seeking a deeper understanding of the representation of set-based objects in terms of Boolean algebras. Let $\wp(A)$ be the set of subsets of a set $A$. A relation $R \subset$ …

### Why are possibility and necessity dual? (3 answers, 228 views)
Hello. Recently, I'm studying modal logic for my master's thesis, and my research background is category theory. So, I naturally have a question: why is it said that necessity …

### Non-trivial consequences of Löb's theorem (1 answer, 283 views)
Informally, Löb's theorem (Wikipedia, PlanetMath) shows that: a mathematical system cannot assert its own soundness without becoming inconsistent [Yudkowsky]. In symbols: …

### A necessary condition for S4-completeness? (2 answers, 396 views)
It is well-known that the modal logic S4 is complete with respect to the class of all finite quasi-trees (where we interpret the $\Box$ modality as topological interior, and topolo…

### Looking for papers and articles on the Tarskian Möglichkeit (1 answer, 316 views)
Some background: Łukasiewicz many-valued logics were intended as modal logics, and Łukasiewicz gave an extensional definition of the modal operator: $\Diamond A =_{def} \neg A \to$ …

### Modal logic - box rules (3 answers, 780 views)
Hi guys, in modal logic, i.e. propositional logic with box and diamond, are there any laws to get a box or a diamond from outside a bracket to inside? I.e. $\Box (x \rightarrow \Box$ …

### Local consequence in modal logic [closed] (0 answers, 179 views)
What does local consequence mean in a modal logic?

### Is it possible to define a closure operator in terms of partial ordering? (1 answer, 296 views)
For boolean algebra, let's take Roman Sikorski's Boolean Algebras as our reference. After giving a set of axioms, he proves (p.9) that the join of A and B is the least element of t…

### How many models are there, for a particular propositional modal logic? (2 answers, 409 views)
Background/motivation: A model for the classical propositional calculus is a boolean function b(S) which assigns 1 or 0 to each (modal-free) sentence S according to the usual rules…

### Modal logic - satisfiability (2 answers, 442 views)
Hi there, assuming X and Y are modal formulae and diamond X is satisfiable and diamond Y is satisfiable, how would one show that X AND Y is satisfiable? I don't think it req…

### How to determine the condition on frames if some axiom schema is given together with the K axiom? (1 answer, 102 views)
In semantics for modal logic, if a new axiom schema is given together with K, how can one find out what conditions the frames for the new system need to satisfy…
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8979475498199463, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/90969/sheafification-via-hypercovers/95607
## Sheafification via hypercovers

The sheafification of a presheaf on a site is often constructed in a two-step process $X^{++}$, where $X^+$ consists of matching families in $X$, is always separated, and is a sheaf if $X$ is separated. But the sheafification can also be constructed in a single step by looking at matching families over hypercovers. However, the only published reference I can find which mentions this latter fact is Higher Topos Theory (section 6.5.3). Is there a reference on "good old" 1-sheaves which discusses sheafification via hypercovers?

- 2 1+ since I didn't know that this can be done with hypercoverings in one step. – Martin Brandenburg Mar 12 2012 at 15:22

I'm no longer too inside these questions, but maybe: Lawrence Breen, "On the Classification of 2-gerbes and 2-stacks" (Astérisque 225), pp. 38-39, seems to indicate how to do sheafification in hypercover language.... – Buschi Sergio Apr 30 2012 at 21:05

## 5 Answers

I don't know any reference where this is proven in elementary terms (although this can be done, of course). This has been part of folklore for years (in spirit, this goes back to Verdier's formula in SGA 4 (exposé V) and to Ken Brown's thesis), but the only explicit reference I know is Proposition 7.9 (for $n=0$) in the paper Dugger, Hollander and Isaksen, "Hypercovers and simplicial presheaves", Math. Proc. Camb. Phil. Soc. 136 (2004), 9-51. (See here for a preprint version.)

- 1 Thanks. At least this is a proof, but I was hoping for something written elementarily for 1-sheaves. – Mike Shulman Mar 12 2012 at 20:49

One way of constructing the associated sheaf in one step is written here: http://cms.dm.uba.ar/academico/carreras/licenciatura/tesis/yuhjtman.pdf (in Spanish), page 19, (3.2). The key idea (due to Eduardo Dubuc) is to consider "locally compatible families" instead of just "compatible families".

- Hi, I humbly think it would be a good idea if you could expand your answer a little to give a sketch of the argument (after all it is your thesis! :) It might also be useful to highlight the similarities/differences with the hypercovers approach. – Yosemite Sam Apr 30 2012 at 20:54

This proof has no surprising ideas other than the definition of "locally compatible families". From there it is like the usual ++ construction. I'm not familiar with the hypercovers approach. – Sergio A. Yuhjtman Apr 30 2012 at 21:10

Sergio just brought this question to my attention. The definition of locally compatible family says exactly that the family is compatible over a hypercover refinement. So the one-step construction in Yuhjtman's thesis is just the one-step hypercover construction. However, the hypercover in question is simply determined by a cover of the 1-simplices $U_i \times_U U_j$ of the cover $U_i \to U$, so it seems unnecessary to mention the hypercover notion. I discovered this one-step construction a long time ago, and at that time I was ignorant of the hypercover notion, which, as we know, is much more complicated than just the particular case determined by a cover of the 1-simplices. Eduardo J. Dubuc

- Welcome, Eduardo! – David Roberts May 1 2012 at 1:00

Oh, and I took the liberty of editing your $x$ to $\times$ for readability. – David Roberts May 1 2012 at 1:01

Thanks!
Of course for dealing with ordinary sheaves, it is not strictly necessary to speak about hypercovers, but when thinking (as I like to do) about higher sheaves as well, I find it helpful to talk about 1-sheaves in language which generalizes easily. – Mike Shulman May 1 2012 at 4:00

You may get lucky with Kashiwara and Schapira's newer book: Categories and Sheaves. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 332. Springer-Verlag, Berlin, 2006.

- 1 Can you give a precise reference in this book? Or is it just a guess? – Martin Brandenburg Mar 12 2012 at 19:21

I looked there, but I couldn't find it. They discuss sheafification mostly in terms of local isomorphisms, which I suppose might be unravellable to say something about hypercovers, but it doesn't seem like it would be much less work than proving it directly. – Mike Shulman Mar 12 2012 at 20:47

Let $F$ be a presheaf on a topological space $X$ (everything generalizes directly to a general site). Then the associated separated presheaf $L(F)$ is defined as follows: $L(F)(U)$ is the colimit of the sets $C^>(R, F)$, where $C$ is the category of opens of $X$, $C^>$ is the category of presheaves on $C$, and $R\subset h_U$ is a covering sieve of $U$; the colimit is taken over all covering sieves of $U$ and their inclusion morphisms.

Then (by the Yoneda lemma, and the natural representation of a presheaf as the colimit of representables over its comma category) we can represent the elements of $L(F)(U)$ as equivalence classes of families $[(U_i, x_i)_{i\in I}]$, where the $U_i$ form an open covering of $U$ and $x_i\in F(U_i)$, identifying two such families $(U_i, x_i)_{i\in I}$ and $(V_j, y_j)_{j\in J}$ if ${x_i}|_{U_i\cap V_j}= {y_j}|_{U_i\cap V_j}$ for all $(i, j)\in I\times J$. As in the usual theorem, it follows that $L(F)$ is a separated presheaf, that it is a sheaf if $F$ is separated, that it is isomorphic to $F$ if $F$ is a sheaf, and that $LL(F)$ is the associated sheaf, with the canonical universal property.

Here is an (ad hoc) example of a presheaf $F$ such that $L(F)$ isn't a sheaf (necessarily, $F$ isn't separated). Take the topology $\tau_X=\{X, U, V, A, B, A\cap B, \emptyset\}$ with $X=U\cup V$ and $U\cap V=A\cup B$. Let $F(X)=\emptyset=F(\emptyset)$, $F(U)=\{a\}$, $F(V)=\{b\}$ with $a|_{U\cap V}\neq b|_{U\cap V}$ but $a|_A=b|_A$ and $a|_B=b|_B$. Then consider $\alpha:=[(U, a)]\in LF(U)$ and $\beta:=[(V, b)]\in LF(V)$; we have $\alpha|_{U\cap V}= \beta|_{U\cap V}=[\{(A, a|_A), (B, b|_B)\}]$, but $\alpha$ and $\beta$ cannot come from a (global) element of $F(X)$.

This example (I hope) explains the difficulty that prevents $L(F)$ from being a sheaf. For if $X=U\cup V$, then in general, given $s\in L(F)(U)$, $s=[(U_i, x_i)_I]$, and $t\in L(F)(V)$, $t=[(V_j, y_j)_J]$, with $s|_{U\cap V}= t|_{U\cap V}$, this last condition may involve a refinement of $(U_i\cap V)_I$ and $(V_j\cap U)_J$: it could be that ${x_i}|_{U_i\cap V_j} \neq {y_j}|_{U_i\cap V_j}$ while these become equal on a finer refinement.

Now if, instead of coverings, we use (3-level, I think) hypercoverings (see for example Definition 2.4 in Lawrence Breen, "On the Classification of 2-gerbes and 2-stacks" (Astérisque 225), pp. 38-39), we have a richer representation: $[(U_i, x_i), (U_{i,j,a})_{a\in A_{ij}}]$, where for $i, j\in I$ we have $U_i\cap U_j=\bigcup_{a\in A_{ij}}U_{i,j,a}$ and ${x_i}|_{U_{i,j,a}}={x_j}|_{U_{i,j,a}}$ for $a\in A_{ij}$, with the natural equivalence relation "agree on a common refinement". In this way the above difficulties are overcome, and one gets a sheaf directly at the first step.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198749661445618, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/101286/amortized-analysis-using-potential-function-exercise-from-introduction-to-algo?answertab=oldest
# Amortized analysis using potential function [Exercise from *Introduction to Algorithms*]

I need some help with the following problem from Introduction to Algorithms by Cormen, Leiserson, Rivest, Stein:

Consider an ordinary binary min-heap data structure with $n$ elements that supports the instructions INSERT and EXTRACT-MIN in $O(\lg n)$ worst-case time. Give a potential function $\Phi$ such that the amortized cost of INSERT is $O(\lg n)$ and the amortized cost of EXTRACT-MIN is $O(1)$, and show that it works.

I'm not sure how to define the potential here, especially since everything looks so "symmetric". I mean, I get that one can only do as many EXTRACT-MIN operations as one does INSERT operations, and so if we have $k$ INSERT operations and $\ell$ EXTRACT-MIN operations in total, then the time this takes is surely not bigger than doing the INSERT operations first and the EXTRACT-MIN operations later. And so I guess the time for all operations is $$\begin{align} t(k+\ell) &\le (\lg 1 + \lg 2 + \dots + \lg k) + (\lg k + \lg (k-1) + \dots + \lg (k-\ell+1)) \\ &= 2\lg k! - \lg (k-\ell)! \\ &= O(k\lg k) \end{align}$$ Hence we can assign $O(\lg k)$ to every INSERT operation and $O(1)$ to every EXTRACT-MIN operation (?). So the result at least makes some sense. But I still don't know how to do it formally using a potential. Thanks for your help! =)

-
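One candidate potential that is often suggested for this exercise is $\Phi(D) = \lg(n!)$ (about $n \lg n$) for a heap holding $n$ elements: INSERT raises $\Phi$ by $\lg(n+1)$, so its amortized cost remains $O(\lg n)$, while EXTRACT-MIN lowers $\Phi$ by $\lg n$, which pays for the sift-down. The Python sketch below (an illustration, not a proof, and not from the original question) checks this bookkeeping numerically under the simplifying assumption that one real operation on an $n$-element heap costs exactly $\lg n$ steps.

```python
# Numeric sanity check of the potential-method bookkeeping, under the
# assumed candidate potential Phi(heap) = lg(n!) ~ n lg n and the
# simplified cost model that one heap operation on n elements costs lg n.
import math

def phi(n):
    return math.lgamma(n + 1) / math.log(2)   # lg(n!)

def cost(n):
    return math.log2(n) if n > 1 else 1.0     # modelled actual cost of one op

n, worst_insert, worst_extract = 0, 0.0, 0.0
ops = ["insert"] * 1000 + ["extract", "insert"] * 500 + ["extract"] * 800
for op in ops:
    if op == "insert":
        a = cost(n + 1) + (phi(n + 1) - phi(n))   # actual cost + rise in Phi
        n += 1
        worst_insert = max(worst_insert, a / math.log2(n + 1))
    else:
        a = cost(n) + (phi(n - 1) - phi(n))       # Phi drops by lg n
        n -= 1
        worst_extract = max(worst_extract, a)

print(worst_insert)   # about 2: INSERT is amortized O(lg n)
print(worst_extract)  # stays near 0: EXTRACT-MIN is amortized O(1)
```

Turning this into the requested formal argument means bounding the true operation costs by $c \lg n$ for some constant $c$ and scaling $\Phi$ by the same $c$.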
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369534850120544, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Virial_theorem
# Virial theorem

In mechanics, the virial theorem provides a general equation that relates the time average of the total kinetic energy, $\left\langle T \right\rangle$, of a stable system consisting of $N$ particles, bound by potential forces, with that of the total potential energy, $\left\langle V_\text{TOT} \right\rangle$, where angle brackets represent the average over time of the enclosed quantity. Mathematically, the theorem states $2 \left\langle T \right\rangle = -\sum_{k=1}^N \left\langle \mathbf{F}_k \cdot \mathbf{r}_k \right\rangle$ where $\mathbf{F}_k$ represents the force on the $k$th particle, which is located at position $\mathbf{r}_k$. The word "virial" derives from vis, the Latin word for "force" or "energy", and was given its technical definition by Rudolf Clausius in 1870.[1]

The significance of the virial theorem is that it allows the average total kinetic energy to be calculated even for very complicated systems that defy an exact solution, such as those considered in statistical mechanics; this average total kinetic energy is related to the temperature of the system by the equipartition theorem. However, the virial theorem does not depend on the notion of temperature and holds even for systems that are not in thermal equilibrium. The virial theorem has been generalized in various ways, most notably to a tensor form.

If the force between any two particles of the system results from a potential energy $V(r) = \alpha r^n$ that is proportional to some power $n$ of the inter-particle distance $r$, the virial theorem takes the simple form $2 \langle T \rangle = n \langle V_\text{TOT} \rangle.$ Thus, twice the average total kinetic energy $\left\langle T \right\rangle$ equals $n$ times the average total potential energy $\left\langle V_\text{TOT} \right\rangle$. Whereas $V(r)$ represents the potential energy between two particles, $V_\text{TOT}$ represents the total potential energy of the system, i.e., the sum of the potential energy $V(r)$ over all pairs of particles in the system. A common example of such a system is a star held together by its own gravity, where $n$ equals −1.

Although the virial theorem depends on averaging the total kinetic and potential energies, the presentation here postpones the averaging to the last step.

## History

In 1870, Rudolf Clausius delivered the lecture "On a Mechanical Theorem Applicable to Heat" to the Association for Natural and Medical Sciences of the Lower Rhine, following a 20-year study of thermodynamics. The lecture stated that the mean vis viva of the system is equal to its virial, or that the average kinetic energy is equal to 1/2 the average potential energy. The virial theorem can be obtained directly from Lagrange's identity as applied in classical gravitational dynamics, the original form of which was included in Lagrange's "Essay on the Problem of Three Bodies" published in 1772. Karl Jacobi's generalization of the identity to $n$ bodies, and the present form of Laplace's identity, closely resemble the classical virial theorem. However, the interpretations leading to the development of the equations were very different, since at the time of development, statistical dynamics had not yet unified the separate studies of thermodynamics and classical dynamics.[2]

The theorem was later utilized, popularized, generalized and further developed by James Clerk Maxwell, Lord Rayleigh, Henri Poincaré, Subrahmanyan Chandrasekhar, Enrico Fermi, Paul Ledoux and Eugene Parker.
Fritz Zwicky was the first to use the virial theorem to deduce the existence of unseen matter, which is now called dark matter. As another example of its many applications, the virial theorem has been used to derive the Chandrasekhar limit for the stability of white dwarf stars.

## Statement and derivation

### Definitions of the virial and its time derivative

For a collection of $N$ point particles, the scalar moment of inertia $I$ about the origin is defined by the equation $I = \sum_{k=1}^{N} m_{k} |\mathbf{r}_{k}|^{2} = \sum_{k=1}^{N} m_{k} r_{k}^{2}$ where $m_k$ and $\mathbf{r}_k$ represent the mass and position of the $k$th particle, and $r_k=|\mathbf{r}_k|$ is the magnitude of the position vector. The scalar virial $G$ is defined by the equation $G = \sum_{k=1}^N \mathbf{p}_k \cdot \mathbf{r}_k$ where $\mathbf{p}_k$ is the momentum vector of the $k$th particle. Assuming that the masses are constant, the virial $G$ is one-half the time derivative of this moment of inertia: $\frac{1}{2} \frac{dI}{dt} = \frac{1}{2} \frac{d}{dt} \sum_{k=1}^N m_{k} \, \mathbf{r}_k \cdot \mathbf{r}_k = \sum_{k=1}^N m_{k} \, \frac{d\mathbf{r}_k}{dt} \cdot \mathbf{r}_k = \sum_{k=1}^N \mathbf{p}_k \cdot \mathbf{r}_k = G\,.$

In turn, the time derivative of the virial $G$ can be written $\begin{align} \frac{dG}{dt} & = \sum_{k=1}^N \mathbf{p}_k \cdot \frac{d\mathbf{r}_k}{dt} + \sum_{k=1}^N \frac{d\mathbf{p}_k}{dt} \cdot \mathbf{r}_k \\ & = \sum_{k=1}^N m_k \frac{d\mathbf{r}_{k}}{dt} \cdot \frac{d\mathbf{r}_k}{dt} + \sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k \\ & = 2 T + \sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k\,, \end{align}$ where $m_k$ is the mass of the $k$-th particle, $\mathbf{F}_k = \frac{d\mathbf{p}_k}{dt}$ is the net force on that particle, and $T$ is the total kinetic energy of the system $T = \frac{1}{2} \sum_{k=1}^N m_k v_k^2 = \frac{1}{2} \sum_{k=1}^N m_k \frac{d\mathbf{r}_k}{dt} \cdot \frac{d\mathbf{r}_k}{dt}.$

### Connection with the potential energy between particles

The total force $\mathbf{F}_k$ on particle $k$ is the sum of all the forces from the other particles $j$ in the system $\mathbf{F}_k = \sum_{j=1}^N \mathbf{F}_{jk}$ where $\mathbf{F}_{jk}$ is the force applied by particle $j$ on particle $k$. Hence, the force term of the virial time derivative can be written $\sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k = \sum_{k=1}^N \sum_{j=1}^N \mathbf{F}_{jk} \cdot \mathbf{r}_k.$ Since no particle acts on itself (i.e., $\mathbf{F}_{jk} = 0$ whenever $j=k$), we have $\sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k = \sum_{k=1}^N \sum_{j<k} \mathbf{F}_{jk} \cdot \mathbf{r}_k + \sum_{k=1}^N \sum_{j>k} \mathbf{F}_{jk} \cdot \mathbf{r}_k = \sum_{k=1}^N \sum_{j<k} \mathbf{F}_{jk} \cdot \left( \mathbf{r}_k - \mathbf{r}_j \right),$[3] where we have assumed that Newton's third law of motion holds, i.e., $\mathbf{F}_{jk} = -\mathbf{F}_{kj}$ (equal and opposite reaction).

It often happens that the forces can be derived from a potential energy $V$ that is a function only of the distance $r_{jk}$ between the point particles $j$ and $k$. Since the force is the negative gradient of the potential energy, we have in this case $\mathbf{F}_{jk} = -\nabla_{\mathbf{r}_k} V = - \frac{dV}{dr} \left( \frac{\mathbf{r}_k - \mathbf{r}_j}{r_{jk}} \right),$ which is clearly equal and opposite to $\mathbf{F}_{kj} = -\nabla_{\mathbf{r}_j} V$, the force applied by particle $k$ on particle $j$, as may be confirmed by explicit calculation.
Hence, the force term of the virial time derivative is $\sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k = \sum_{k=1}^N \sum_{j<k} \mathbf{F}_{jk} \cdot \left( \mathbf{r}_k - \mathbf{r}_j \right) = -\sum_{k=1}^N \sum_{j<k} \frac{dV}{dr} \frac{\left( \mathbf{r}_k - \mathbf{r}_j \right)^2}{r_{jk}} = -\sum_{k=1}^N \sum_{j<k} \frac{dV}{dr} r_{jk}.$ Thus, we have $\frac{dG}{dt} = 2 T + \sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k = 2 T - \sum_{k=1}^N \sum_{j<k} \frac{dV}{dr} r_{jk}.$

### Special case of power-law forces

In a common special case, the potential energy $V$ between two particles is proportional to a power $n$ of their distance $r$: $V(r_{jk}) = \alpha r_{jk}^n,$ where the coefficient $\alpha$ and the exponent $n$ are constants. In such cases, the force term of the virial time derivative is given by the equation $-\sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k = \sum_{k=1}^N \sum_{j<k} \frac{dV}{dr} r_{jk} = \sum_{k=1}^N \sum_{j<k} n V(r_{jk}) = n V_\text{TOT}$ where $V_\text{TOT}$ is the total potential energy of the system $V_\text{TOT} = \sum_{k=1}^N \sum_{j<k} V(r_{jk}).$ Thus, we have $\frac{dG}{dt} = 2 T + \sum_{k=1}^N \mathbf{F}_k \cdot \mathbf{r}_k = 2 T - n V_\text{TOT}.$ For gravitating systems and also for electrostatic systems, the exponent $n$ equals −1, giving Lagrange's identity $\frac{dG}{dt} = \frac{1}{2} \frac{d^2 I}{dt^2} = 2 T + V_\text{TOT}$ which was derived by Lagrange and extended by Jacobi.

### Time averaging

The average of this derivative over a time $\tau$ is defined as $\left\langle \frac{dG}{dt} \right\rangle_\tau = \frac{1}\tau \int_{0}^\tau \frac{dG}{dt}\,dt = \frac{1}{\tau} \int_{G(0)}^{G(\tau)} \, dG = \frac{G(\tau) - G(0)}{\tau},$ from which we obtain the exact equation $\left\langle \frac{dG}{dt} \right\rangle_\tau = 2 \left\langle T \right\rangle_\tau + \sum_{k=1}^N \left\langle \mathbf{F}_k \cdot \mathbf{r}_k \right\rangle_\tau.$ The virial theorem states that, if $\left\langle {dG}/{dt} \right\rangle_\tau = 0$, then $2 \left\langle T \right\rangle_\tau = -\sum_{k=1}^N \left\langle \mathbf{F}_k \cdot \mathbf{r}_k \right\rangle_\tau.$

There are many reasons why the average of the time derivative might vanish, i.e., $\left\langle {dG}/{dt} \right\rangle_{\tau} = 0$. One often-cited reason applies to stably bound systems, i.e., systems that hang together forever and whose parameters are finite. In that case, velocities and coordinates of the particles of the system have upper and lower limits, so that the virial $G^{\mathrm{bound}}$ is bounded between two extremes, $G_\min$ and $G_\max$, and the average goes to zero in the limit of very long times $\tau$: $\lim_{\tau \rightarrow \infty} \left| \left\langle \frac{dG^{\mathrm{bound}}}{dt} \right\rangle_\tau \right| = \lim_{\tau \rightarrow \infty} \left| \frac{G(\tau) - G(0)}{\tau} \right| \le \lim_{\tau \rightarrow \infty} \frac{G_\max - G_\min}{\tau} = 0.$

Even if the average of the time derivative of $G$ is only approximately zero, the virial theorem holds to the same degree of approximation. For power-law forces with an exponent $n$, the general equation holds: $\langle T \rangle_\tau = -\frac{1}{2} \sum_{k=1}^N \langle \mathbf{F}_k \cdot \mathbf{r}_k \rangle_\tau = \frac{n}{2} \langle V_\text{TOT} \rangle_\tau.$ For gravitational attraction, $n$ equals −1 and the average kinetic energy equals half of the average negative potential energy: $\langle T \rangle_\tau = -\frac{1}{2} \langle V_\text{TOT} \rangle_\tau.$ This general result is useful for complex gravitating systems such as solar systems or galaxies.
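As a numerical illustration of the gravitational case $n = -1$ (a sketch under arbitrary parameter choices, not taken from the article): the relation $2\langle T \rangle_\tau = -\langle V_\text{TOT} \rangle_\tau$ can be checked on a bound Kepler orbit with a leapfrog integrator, in units where $G = M = 1$.

```python
import math

# Leapfrog (kick-drift-kick) integration of a bound Kepler orbit in units
# G = M = 1, accumulating time averages of kinetic and potential energy
# to test the virial relation 2<T> = -<V> for the n = -1 power law.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

x, y = 1.0, 0.0
vx, vy = 0.0, 0.8            # speed below escape speed sqrt(2): bound orbit
dt, steps = 4e-4, 500_000    # total time ~200, i.e. many orbital periods
T_sum = V_sum = 0.0

ax, ay = accel(x, y)
for _ in range(steps):
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx;        y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    T_sum += 0.5 * (vx * vx + vy * vy)
    V_sum += -1.0 / math.hypot(x, y)

print(2 * T_sum / steps, -V_sum / steps)   # the two averages nearly coincide
```

For these initial conditions both printed averages come out close to 1.36. With a harmonic potential ($n = 2$) the same loop, with $V \propto r^2$, would instead give $\langle T \rangle \approx \langle V_\text{TOT} \rangle$, matching $2 \langle T \rangle = n \langle V_\text{TOT} \rangle$.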
A simple application of the virial theorem concerns galaxy clusters. If a region of space is unusually full of galaxies, it is safe to assume that they have been together for a long time, and the virial theorem can be applied. Doppler measurements give lower bounds for their relative velocities, and the virial theorem gives a lower bound for the total mass of the cluster, including any dark matter. The averaging need not be taken over time; an ensemble average can also be taken, with equivalent results. Although derived for classical mechanics, the virial theorem also holds for quantum mechanics, as was proved by Fock[4] (the quantum equivalent of the l.h.s. $\left\langle {dG}/{dt} \right\rangle_\tau$ vanishes for energy eigenstates).

## In special relativity

For a single particle in special relativity, it is not the case that $T = \frac 12 \mathbf{p} \cdot \mathbf{v}$. Instead, it is true that $T = (\gamma - 1) mc^2\,$ and $\begin{align} \frac 12 \mathbf{p} \cdot \mathbf{v} & = \frac 12 \vec{\beta} \gamma mc \cdot \vec{\beta} c = \frac 12 \gamma \beta^2 mc^2 = \left( \frac{\gamma \beta^2}{2(\gamma-1)}\right) T \,.\end{align}$ The last expression can be simplified to either $\left(\frac{1 + \sqrt{1-\beta^2}}{2}\right) T$ or $\left(\frac{\gamma + 1}{2 \gamma}\right) T$.

Thus, under the conditions described in earlier sections (including Newton's third law of motion, $\mathbf{F}_{jk} = -\mathbf{F}_{kj}$, despite relativity), the time average for $N$ particles with a power law potential is $\frac n2 \langle V_\mathrm{TOT} \rangle_\tau = \left\langle \sum_{k=1}^N \left(\frac{1 + \sqrt{1-\beta_k^2}}{2}\right) T_k \right\rangle_\tau = \left\langle \sum_{k=1}^N \left(\frac{\gamma_k + 1}{2 \gamma_k}\right) T_k \right\rangle_\tau \,.$ In particular, the ratio of kinetic energy to potential energy is no longer fixed, but necessarily falls into an interval: $\frac{2 \langle T_\mathrm{TOT} \rangle}{n \langle V_\mathrm{TOT} \rangle} \in \left[1, 2\right)\,,$ where the more relativistic systems exhibit the larger ratios.

## Generalizations

Lord Rayleigh published a generalization of the virial theorem in 1903.[5] Henri Poincaré applied a form of the virial theorem in 1911 to the problem of determining cosmological stability.[6] A variational form of the virial theorem was developed in 1945 by Ledoux.[7] A tensor form of the virial theorem was developed by Parker,[8] Chandrasekhar[9] and Fermi.[10] The following generalization of the virial theorem was established by Pollard in 1964 for the case of the inverse square law:[11][12] the statement $2\lim\limits_{\tau\rightarrow+\infty}\langle T\rangle_\tau = \lim\limits_{\tau\rightarrow+\infty}\langle U\rangle_\tau$ is true if and only if $\lim\limits_{\tau\rightarrow+\infty}{\tau}^{-2}I(\tau)=0.$ Otherwise a boundary term must be added, as in Ref. [13].

## Inclusion of electromagnetic fields

The virial theorem can be extended to include electric and magnetic fields. The result is[14] $\frac{1}{2}\frac{d^2I}{dt^2} + \int_Vx_k\frac{\partial G_k}{\partial t} \, d^3r = 2(T+U) + W^E + W^M - \int x_k(p_{ik}+T_{ik}) \, dS_i,$ where $I$ is the moment of inertia, $\mathbf{G}$ is the momentum density of the electromagnetic field, $T$ is the kinetic energy of the "fluid", $U$ is the random "thermal" energy of the particles, and $W^E$ and $W^M$ are the electric and magnetic energy content of the volume considered.
Finally, $p_{ik}$ is the fluid-pressure tensor expressed in the local moving coordinate system $p_{ik} = \Sigma n^\sigma m^\sigma \langle v_iv_k\rangle^\sigma - V_iV_k\Sigma m^\sigma n^\sigma,$ and $T_{ik}$ is the electromagnetic stress tensor, $T_{ik} = \left( \frac{\varepsilon_0E^2}{2} + \frac{B^2}{2\mu_0} \right) \delta_{ik} - \left( \varepsilon_0E_iE_k + \frac{B_iB_k}{\mu_0} \right).$

A plasmoid is a finite configuration of magnetic fields and plasma. With the virial theorem it is easy to see that any such configuration will expand if not contained by external forces. In a finite configuration without pressure-bearing walls or magnetic coils, the surface integral will vanish. Since all the other terms on the right-hand side are positive, the acceleration of the moment of inertia will also be positive. It is also easy to estimate the expansion time $\tau$. If a total mass $M$ is confined within a radius $R$, then the moment of inertia is roughly $MR^2$, and the left-hand side of the virial theorem is $MR^2/\tau^2$. The terms on the right-hand side add up to about $pR^3$, where $p$ is the larger of the plasma pressure or the magnetic pressure. Equating these two terms and solving for $\tau$, we find $\tau\,\sim R/c_s,$ where $c_s$ is the speed of the ion acoustic wave (or the Alfvén wave, if the magnetic pressure is higher than the plasma pressure). Thus the lifetime of a plasmoid is expected to be on the order of the acoustic (or Alfvén) transit time.

## In astrophysics

The virial theorem is frequently applied in astrophysics, especially relating the gravitational potential energy of a system to its kinetic or thermal energy. Some common virial relations are $\frac{3}{5} \frac{GM}{R} = \frac{3}{2} \frac{k_B T}{m_p} = \frac{1}{2} v^2$ for a mass $M$, radius $R$, velocity $v$, and temperature $T$; the constants are Newton's constant $G$, the Boltzmann constant $k_B$, and the proton mass $m_p$. Note that these relations are only approximate, and often the leading numerical factors (e.g. 3/5 or 1/2) are neglected entirely.

### Galaxies and cosmology (virial mass and radius)

In astronomy, the mass and size of a galaxy (or general overdensity) are often defined in terms of the "virial radius" and "virial mass" respectively. Because galaxies and overdensities in continuous fluids can be highly extended (even to infinity in some models—e.g. an isothermal sphere), it can be hard to define specific, finite measures of their mass and size. The virial theorem, and related concepts, provide an often convenient means by which to quantify these properties.

In galaxy dynamics, the mass of a galaxy is often inferred by measuring the rotation velocity of its gas and stars, assuming circular Keplerian orbits. Using the virial theorem, the dispersion velocity $\sigma$ can be used in a similar way. Taking the kinetic energy (per particle) of the system as $T = \tfrac{1}{2} v^2 \sim \tfrac{3}{2} \sigma^2$, and the potential energy (per particle) as $U \sim \tfrac{3}{5} \frac{GM}{R}$, we can write $\frac{GM}{R} \approx \sigma^2$. Here $R$ is the radius at which the velocity dispersion is being measured, and $M$ is the mass within that radius. The virial mass and radius are generally defined for the radius at which the velocity dispersion is a maximum, i.e. $\frac{GM_\text{vir}}{R_\text{vir}} \approx \sigma_\text{max}^2$. As numerous approximations have been made, in addition to the approximate nature of these definitions, order-unity proportionality constants are often omitted (as in the above equations).
As stressed above, these relations are only accurate in an order-of-magnitude sense, or when used self-consistently. An alternate definition of the virial mass and radius is often used in cosmology, where it refers to the radius of a sphere, centered on a galaxy or a galaxy cluster, within which virial equilibrium holds. Since this radius is difficult to determine observationally, it is often approximated as the radius within which the average density is greater, by a specified factor, than the critical density $\rho_\text{crit}=\frac{3H^2}{8\pi G}$, where $H$ is the Hubble parameter and $G$ is the gravitational constant. A common (although mostly arbitrary) choice for the factor is 200, in which case the virial radius is approximated as $r_\text{vir} \approx r_{200}= r(\rho = 200 \cdot \rho_\text{crit})$. The virial mass is then defined relative to this radius as $M_\text{vir} \approx M_{200} = (4/3)\pi r_{200}^3 \cdot 200 \rho_\text{crit}$. ## References 1. Clausius, RJE (1870). "On a Mechanical Theorem Applicable to Heat". Philosophical Magazine, Ser. 4, 40: 122–127. 2. Collins, G. W. (1978). The Virial Theorem in Stellar Astrophysics. Pachart Press. Introduction. 3. Fock, V. (1930). "Bemerkung zum Virialsatz". Zeitschrift für Physik A 63 (11): 855–858. Bibcode:1930ZPhy...63..855F. doi:10.1007/BF01339281. 4. Lord Rayleigh (1903). Unknown. 5. Poincaré, H. Lectures on Cosmological Theories. Paris: Hermann. 6. Ledoux, P. (1945). "On the Radial Pulsation of Gaseous Stars". The Astrophysical Journal 102: 143–153. Bibcode:1945ApJ...102..143L. doi:10.1086/144747. 7. Parker, E.N. (1954). "Tensor Virial Equations". Physical Review 96 (6): 1686–1689. Bibcode:1954PhRv...96.1686P. doi:10.1103/PhysRev.96.1686. 8. Chandrasekhar, S; Lebovitz, NR (1962). "The Potentials and the Superpotentials of Homogeneous Ellipsoids". Ap. J. 136: 1037–1047. Bibcode:1962ApJ...136.1037C. doi:10.1086/147456. 9. Chandrasekhar, S; Fermi, E (1953). "Problems of Gravitational Stability in the Presence of a Magnetic Field". Ap. J. 118: 116. Bibcode:1953ApJ...118..116C. doi:10.1086/145732. 10. Pollard, H. (1964). "A sharp form of the virial theorem". Bull. Amer. Math. Soc. LXX (5): 703–705. doi:10.1090/S0002-9904-1964-11175-7. 11. Pollard, Harry (1966). Mathematical Introduction to Celestial Mechanics. Englewood Cliffs, NJ: Prentice–Hall, Inc. 12. Kolár, M.; O'Shea, S. F. (July 1996). "A high-temperature approximation for the path-integral quantum Monte Carlo method". Journal of Physics A: Mathematical and General 29 (13): 3471–3494. Bibcode:1996JPhA...29.3471K. doi:10.1088/0305-4470/29/13/018. 13. Schmidt, George (1979). Physics of High Temperature Plasmas (Second ed.). Academic Press. p. 72. ## Further reading • Goldstein, H. (1980). Classical Mechanics (2nd ed.). Addison–Wesley. ISBN 0-201-02918-9.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 73, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906520664691925, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/5303/why-do-statements-which-appear-elementary-have-complicated-proofs/5304
# Why do statements which appear elementary have complicated proofs? The motivation for this question is: http://math.stackexchange.com/questions/4066/rationals-of-the-form-fracpq-where-p-q-are-primes-in-a-b and some other problems in mathematics which look as if they are elementary but whose proofs are very sophisticated. I would like to consider two famous questions: first "Fermat's Last Theorem" and next the unproven "Goldbach conjecture". These questions appear elementary in nature, but require a lot of mathematics even to comprehend the solution. Even the problem which I posed in the link is elementary, but I don't see anyone giving a proof without using the prime number theorem. Now the question is: Why is this happening? If I am able to understand the question, then I should be able to comprehend the solution as well. A mathematician once said: Mathematics is the understanding of how nature works. Is nature so complicated that a common person can't understand how it works, or is it that we are making it complicated? At the same time, I appreciate the beauty of mathematics: Paul Erdős's proof of Bertrand's postulate is something which I admire so much because of its elementary nature. But at the same time I have my scepticism about FLT and other theorems. I have stated 2 examples of questions which appear elementary, but whose proofs are intricate. I know some other problems in number theory which are of this type. Are there any other problems of this type which are not number-theoretical? If yes, I would like to see some of them. - 16 I don't agree with "If I am able to understand the question, then I should be able to comprehend the solution as well." I don't think that it is possible to answer the question "Why is this happening?". (Voted to close as subjective and argumentative.) – Rasmus Sep 23 '10 at 13:00 2 Rasmus's point is as valid in the natural sciences as in mathematics; there's a lot of seemingly simple physical phenomena that we are at a loss to explain properly. – J. M. Sep 23 '10 at 13:14 16 "If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is." — John von Neumann – Pandora Sep 23 '10 at 15:54 3 I've changed the title to more closely reflect what the question seems to be asking, but I'm not convinced that it's any 'better'... – Larry Wang Sep 25 '10 at 17:59 7 Why do bad things happen to good people? – Pete L. Clark Nov 28 '10 at 0:08 ## 6 Answers Simple questions often have complex answers, or no answers at all. If you were to ask me why the sky was blue, to give a complete answer I would have to describe the heliocentric model, the earth's atmosphere, and the electromagnetic spectrum. If you asked me how my computer connected to the internet, the answer would take a considerable amount of time to explain. If you asked me whether there was life on other planets, I wouldn't be able to give an answer; we just don't know. Often, it is impossible to give a simple answer to a simple question. This is true in any field you might encounter, ranging from the sciences to the humanities. And it is true in mathematics. Mathematics allows us to formalize our questions within an axiomatic structure. It lets us ask our questions more precisely. But it in no way guarantees that the simplicity of the question will translate into simplicity of the answer.
Some other simple problems which cannot be answered simply: The four color theorem states that any map made up of contiguous regions can be colored with 4 colors such that each region gets 1 color and no two adjacent regions get the same color. The proof is quite complex, requiring the use of a computer. Scheinerman's conjecture is the conjecture that "every planar graph is the intersection graph of a set of line segments in the plane" (sourced from http://en.wikipedia.org/wiki/Scheinerman%27s_conjecture). However, the proof was only completed in 2009, and is fairly difficult. - Very nice response! – Ryan Budney Sep 25 '10 at 19:18 I suppose there is a way to formalize your question into a relatively precise one. Let $P$ be the set of all propositions in the language of Zermelo–Fraenkel set theory for which there exist proofs. Given a natural number $k$ we want to have an upper bound $l(k)$ on the minimal length of a proof of any proposition $p$ from $P$ such that the length of the proposition $p$ is at most $k$. Here I'm using "length" in the sense of the number of ASCII characters it would take to write the proposition (or proof of the proposition, respectively). Presumably $l(k)$ is a non-computable function. For if $l(k)$ were computable, given a proposition $p$ of length $k$ you'd have a "simple" procedure to find a proof of any statement in Zermelo–Fraenkel set theory, provided one exists. The idea would be to iterate through all candidate proofs of length at most $l(k)$ and check whether any of them is a proof of $p$. :) I imagine such a statement is known to logicians. If it is known, I would call it the "rationalization is hard theorem". - 6 Yes, this is well known to logicians. In general, the set of provable statements from an effective first-order theory like ZFC is computably enumerable but not decidable, and thus the function $l(k)$ actually is noncomputable in general. In particular it is uncomputable in the cases of ZFC and Peano arithmetic. There are many examples of other theories in which provability is decidable, however. – Carl Mummert Sep 25 '10 at 23:29 @Chandru1: this should be the accepted answer. It is much more precise than the currently accepted answer and tells you exactly what to expect in the worst case. – Qiaochu Yuan Sep 27 '10 at 3:20 1 @Qiaochu: Users are only notified when addressed by `@` if they have previously commented on the same post. See the faq on meta.so. – Larry Wang Sep 28 '10 at 6:58 1 +1 for "rationalization is hard theorem"! – jericson Oct 14 '10 at 3:19 2 Yes, there is no computable upper bound on the size of the shortest proof of theorems of first-order logic. This and similar questions are studied in proof complexity. Search for finite versions of Gödel's theorem or check the papers by Sam Buss or Pavel Pudlak. Also check their nice paper "How to Lie Without Being (Easily) Convicted and the Length of Proofs in Propositional Calculus". – Kaveh May 27 '11 at 1:41 This is a very deep question that many famous mathematicians have worked on, including Gödel, Hilbert, ... So the question is why simple elementary problems (say problems with only universal quantifiers) do not have short elementary proofs. But before continuing we first need to agree on "what is an elementary proof?". In proof theory, an elementary proof is a proof that does not use concepts (and formulas) that do not appear in the axioms or in the theorem. Hilbert tried to show that reasonable elementary axioms are sufficient for proving all elementary theorems.
His attempt failed because of Gödel's incompleteness theorem. For any reasonable set of axioms, there is an elementary statement that is not provable from them. So we need non-elementary axioms for proving elementary theorems. This is the first point: complicated axioms are needed to prove some simple elementary theorems. Now let's assume that we have a proof of an elementary theorem (from some set of axioms). Then there is an elementary proof of the theorem from the axioms. This is due to Gentzen's cut-elimination theorem, which asserts the existence of cut-free proofs; moreover we have an algorithm that, whenever we give it an arbitrary proof of a theorem, converts it to a cut-free proof of the same theorem from the same axioms. One of the nice properties of a cut-free proof is that any formula in the proof is either a subformula of one of the axioms or a subformula of the theorem. That seems quite good. But why don't people use such proofs? The answer is that cut-free proofs can be a lot (super-exponentially) larger than proofs using cuts. Informally you can think of cuts as proving lemmas and then using them for further results. A 100 page proof can turn into a $2^{50000}/500$ page proof after cut-elimination (I am assuming each page contains 500 symbols). When we use concepts from outside, we usually define them to stand for complicated and hard to understand formulas. This simplifies the understanding of the proof if we are familiar with those concepts. Take for example the following pseudo-statement: $\forall \epsilon \exists \delta \exists m \forall n>m ...$ This is quite a complicated formula if one is not already familiar with similar formulas. The first two quantifiers are the $\epsilon\text{-}\delta$ we know from limits. The second pair just says "for all sufficiently large $n$". Or take for example: $\forall \epsilon \exists \delta \forall y \text{ s.t. } 0<|y-x|<\delta; \ |\frac{y^3+2y^2-x^3-2x^2}{y-x}-3x^2-4x|<\epsilon$ This is much more complicated than $(x^3+2x^2)'=3x^2+4x$. Here we are using newly defined concepts to shorten the formula and make it more human-readable. Using lemmas about these concepts we can give a short proof of this equality, while the elementary proof can be much longer. Definitions of new concepts and lemmas can make a proof much shorter and more understandable. So elementary proofs might be harder to understand than non-elementary proofs using concepts from outside. This is the second point. Now apply what I said above to the several-hundred-page proof of FLT, ignoring the fact that the full proof will need to contain the proofs of all theorems and lemmas that are used in the proof. The resulting proof will probably be practically inexpressible on any medium and completely impossible for a human being to understand. If we have a short elementary proof, then it is good. But there are elementary theorems with short non-elementary proofs which do not have short elementary proofs. This is the third point. In summary: 1. complicated axioms are needed to prove some simple elementary theorems (Gödel's incompleteness theorem), 2. non-elementary proofs might be easier to understand, 3. there are elementary theorems with short non-elementary proofs which do not have short elementary proofs. Items 2 and 3 are related to what we call definitions and lemmas in mathematics and are related to proof-theoretic and proof-complexity concepts called cuts and extension by definitions.
(There are other things that also make a proof more readable, and related concepts in proof complexity, but going into them would diverge from the question of a proof being elementary.) Two related notes: • There is a speed-up theorem by Gödel in logic. • Having a short elementary proof is not sufficient for finding one (assuming widely believed conjectures in computational complexity theory). See this and this. - I love your perspective! – A New Guy Sep 28 '12 at 2:25 One essential piece of intuition from theoretical computer science: since P $\neq$ NP (we assume!), there are many statements which are easy to state but hard to prove. - The question may look almost self-evident, yet when translated into math, it sometimes loses that self-evident feature and so it requires an (often fairly complicated) proof. My favorite example is the Jordan curve theorem. A closed non-self-intersecting curve separates the plane into "inner" and "outer" regions. It looks like there isn't anything to prove here. But translate the notion of "closed non-self-intersecting curve" into a formula, and we no longer have any intuitive feel for the "inner region", the "outer region", and what a region is anyway. The simplest proof requires the machinery of algebraic topology and is still quite long. - And how do you tell which is inner and which is outer? It might seem obvious, and maybe it's a lot easier than proving that there are two regions, but it's still not as easy as it sounds... – SamB Nov 28 '10 at 0:05 1 @SamB: A formal statement of the Jordan Curve Theorem looks something like this: "If $f: S^1 \to \mathbb{R}^2$ is continuous and 1-1, then $\mathbb{R}^2 \setminus f(S^1)$ has exactly two path-connected components, one of which is bounded ('the inside') and one of which is unbounded ('the outside'), both of which have $f(S^1)$ as their boundary." So there is at least a fairly good way to tell which one is the inside. – kahen Nov 28 '10 at 1:16 There are elementary questions that have no elementary answers, other than questions in number theory and situations invoking Gödel incompleteness. I have in mind finding antiderivatives of elementary functions such as $e^{-x^2}$, $(\sin x)/x$, $1/(\log x)$, and many others. It is known that the antiderivatives of these functions cannot be expressed in closed form in terms of powers, exponentials, logarithms, trig functions, etc. It's also known that the solutions of simple equations like $x+e^x=0$, $x=\cos x$, $x\log x=1$ and so on can't be expressed in closed form in terms of the standard elementary functions. - Gerry, I think the word elementary is overloaded (as many other mathematical names). "Elementary functions" are just a class of functions that are named "elementary functions". I think the question is asking for a different notion of elementary. – Kaveh May 27 '11 at 1:39 2 OK, let me put it another way. Why should $\int xe^{-x^2}$ be easy, while $\int e^{-x^2}$ is impossible (in terms of functions of 1st year Calculus)? Why should $e^x+e^{-x}=3$ be easy, while $e^x+x=3$ is impossible? I'm pointing out that problems that look very similar to easy problems may be not just complicated but impossible. – Gerry Myerson May 27 '11 at 6:45
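Gerry Myerson's contrast can be checked directly with a computer algebra system. Here is a minimal SymPy sketch (an editorial addition, not part of the original thread) showing that a small change to the integrand pushes the antiderivative outside the elementary functions:

```python
# A small change to the integrand moves the antiderivative out of the
# elementary functions: SymPy returns erf, which is not elementary.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x * sp.exp(-x**2), x))  # -exp(-x**2)/2       (elementary)
print(sp.integrate(sp.exp(-x**2), x))      # sqrt(pi)*erf(x)/2   (needs erf)
```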
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9551746249198914, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81092?sort=newest
## Enumerating all Hamiltonian Cycles in a Bipartite Vertex-Transitive Graph Hi everyone! This is my first post; apologies if I made any mistakes anywhere. Here goes the question: Consider all length-7 binary sequences. Let $X$ be the set of sequences with Hamming weight 3 and let $Y$ be the set of sequences with Hamming weight 4. For each vertex in $X$, connect an edge to each element in $Y$ if they differ in only 1 bit. For example, 4 edges will be connected from $0000111$ to $1000111, 0100111, 0010111$ and $0001111$. This results in a bipartite vertex-transitive graph. To be precise, the resulting graph will be 4-regular. How can I enumerate all Hamiltonian cycles in this graph? Now I am aware that a straightforward brute force over the 70 vertices will take forever, and the generic dynamic programming approach of TSP is still quite unreachable due to the complexity being in the range of $O(2^{70})$. The closest reference I came across is Finding and Enumerating Hamilton Cycles in 4-Regular Graphs, where the complexity is $O(1.783^{70})$, but still nowhere close to being solvable. I am now wondering if the additional conditions of the graph being vertex-transitive and bipartite might help me bring down the complexity. Also, there is quite a bit of "special knowledge" about the graph that we can deduce. For example, there cannot be a cycle of length 4. Please note that this is related to the Middle Layers Conjecture. I am interested to know all the Hamiltonian cycles for $k=4$ and $k=5$ (9 and 11 bit binary sequences). I played around with the idea of using permutations to classify several paths into one class, so that only one representative needs to be checked: Let the starting vertex be $0000111$ and suppose I found all Hamiltonian cycles of the form: $R-0001111-0000111-0010111-S$. It appears to me that I can extend this result by applying, say, a permutation of bits 1 and 4 to all elements in the cycle to get a new class: $R-1000111-0000111-0010111-S$. Since this is a bijective function between the 2 classes, I would have effectively found all cycles in the new class too. I can use this technique to reduce 6 types of cycles of the forms: $-0001111-0000111-0010111-$ $-0001111-0000111-0100111-$ $-0001111-0000111-1000111-$ $-0010111-0000111-0100111-$ $-0010111-0000111-1000111-$ $-0100111-0000111-1000111-$ into just one case. However, I cannot find a way to generalize this approach much further to compress more paths. I am interested to know how everyone thinks of this question. Does it seem to be solvable in practical time? Finally, it's my pleasure to meet all of you! EDIT: Preliminary results 2 I had written better pruning code and I realized that it takes at most a couple of weeks to finish the run. However, based on extrapolation of the initial few branches I had completed, I will need a whopping 5+ TB of hard disk space just to store the results. Comparing with the measly 24 Hamiltonian cycles for $n=5$, it seems pretty clear that even if I can find all cycles for $n=9$ there is no way I can store all of them. Sad =( - Thanks for the helpful comment. I did not realize that I failed to put down a clear question at the start. – Ng Yong Hao Nov 16 2011 at 18:14 I deleted my comment as soon as I saw you addressed it, figuring no one else need bother to read it. – Barry Cipra Nov 16 2011 at 18:18 $n=7$ seems plausible and $n=9$ seems impossible if you want to actually look at all the cycles.
– Brendan McKay Nov 16 2011 at 22:54 I feel that $n=7$ seems possible too, although if it is possible it looks like it will require some non-trivial effort. – Ng Yong Hao Nov 18 2011 at 2:29 For those interested: It appears that there are > $2^{36}$ cycles and the search can easily be completed within 2 weeks. The main question happens to be shortage of storage space. – Ng Yong Hao Dec 27 2011 at 1:49 ## 2 Answers The case n=7 is pretty similar to counting the Hamiltonian cycles in the 6-cube (which has 64 vertices but more edges). That problem had been open for a long time, but was recently settled by Harri Haanpaa and myself. The approach is general and we just submitted a paper with the title "A Dynamic Programming Approach to Counting Hamiltonian Cycles in Bipartite Graphs". Our approach can be applied to the current problem as well (Brendan is right, as usual: n=7 can be done but n=9 is hopeless). I ran a program that I have which does not take the automorphism group into account, but it became clear that one really has to utilize the symmetries. This would require some instance-specific programming, which I had neither time nor motivation to do. I can send you a preprint of the paper if you are interested. And what is the number of Hamiltonian cycles in the 6-cube? 35 838 213 722 570 883 870 720 - Many thanks for the information! From my experiments, fixing 30/70 vertices reduces the brute force to instantaneous. With a number of symmetries I felt that $n=7$ should be doable. Nothing better than a confirmation though. Like what you mentioned, I found a lot of symmetries, each only applicable to specific sequences. I am planning the pseudocode at the moment; the actual program will probably take a while. Do you think the effort, mainly on the exploration of the symmetries, will be meaningful enough to be put into a paper? (seeing this is a rather specific case) – Ng Yong Hao Nov 25 2011 at 13:46 Also, I noticed that there is another paper on Hamiltonian cycles in the 6-cube (arxiv.org/abs/1003.4391). It appears that their result of 14754666508334433250560 Hamiltonian cycles (directed) is recorded in OEIS A066037 (oeis.org/A066037/internal). It seems to me that this is also the question that your paper is addressing. Perhaps there are some miscalculations in their values? My apologies if I made any mistakes here. – Ng Yong Hao Nov 26 2011 at 8:29 The mentioned arXiv paper (which is actually not a paper but rather an announcement of a couple of numbers) is in error. Actually, it is a nice little problem for students in a combinatorics class to show how one can deduce the incorrectness based on the few tiny bits of information provided in the paper/announcement. I have informed OEIS about our new results. – Patric Ostergard Nov 27 2011 at 18:32 Interesting. I guess I will try to find the error when I have the time then. As for the preprint, I am fine to wait for the actual paper to be available online. After all, I am quite interested to play around with the problem to see how far I can go. Thanks for your assistance again! – Ng Yong Hao Nov 29 2011 at 17:55 This is a list of suggestions, some of which may help. First, try smaller cases. For n=3 there is 1 cycle; for n=5 I don't know, but I suspect it is a small multiple of 120.
The nice thing is that from a given edge, there are only 4 ways to continue towards a Hamiltonian cycle, and each way gives lots of restrictions. (It might even be related to Hamilton's original problem on a dodecahedron; I am doing this all in my head and so any such statements I make are suspect.) Based on this scant bit of information, I expect there to be a not-so-small multiple of 5040 as the number you seek, perhaps as large as 5040^3. Where possible, take advantage of the edges that can't be part of the cycle. For n=7, there are 9 ways to extend an edge to a sequence of three edges as part of a cycle; each such way affects 4 other vertices and limits their choices. If you do a small breadth-first search, you might be able to exploit symmetries as in your post and reduce hundreds of cases to five or six basic cases. If you can, figure out the automorphism group of the graph. I suspect it has order 10080. This should serve as a check on your results. You can try some Monte Carlo simulations to guess at the number: suppose you choose 10 vertices on one side of the bipartite graph, and 20 edges coming from these vertices; how many Hamiltonian cycles can you build using those 20 edges as a base? If the number is 0 or small then you might have a feasible enumeration; if it is large then you might try estimating the logarithm of your number instead.
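For readers who want to experiment, here is a minimal Python sketch (an editorial addition, not from the thread) that builds the n = 7 middle-levels graph described in the question and checks the basic facts quoted above: 70 vertices and 4-regularity.

```python
# Build the middle-levels graph for n = 7: weight-3 vs weight-4 binary
# strings, adjacent iff they differ in exactly one bit.
from itertools import combinations

n = 7
X = [frozenset(c) for c in combinations(range(n), 3)]  # weight-3 strings as 3-subsets
Y = [frozenset(c) for c in combinations(range(n), 4)]  # weight-4 strings as 4-subsets

# Viewed as subsets of {0,...,6}, two such strings differ in one bit
# exactly when the 3-subset is contained in the 4-subset.
adj = {v: set() for v in X + Y}
for x in X:
    for y in Y:
        if x < y:            # proper containment of frozensets
            adj[x].add(y)
            adj[y].add(x)

assert len(adj) == 70                                  # 35 + 35 vertices
assert all(len(nbrs) == 4 for nbrs in adj.values())    # 4-regular, as stated
```

Encoding the strings as subsets makes the "differ in one bit" adjacency a simple containment test, which is convenient when one later wants to apply coordinate permutations (the symmetries discussed above) to whole cycles.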
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516662359237671, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/8891/how-does-centrifugal-force-work?answertab=votes
# How does centrifugal force work? I know what centrifugal force is, but how does it work? Why are things forced to the outside? - 2 There is no such thing as centrifugal force! – Josh Apr 19 '11 at 18:38 1 It is very real indeed: xkcd.com/123. Seriously, though, you have to read Ron Maimon's excellent answer (physics.stackexchange.com/a/13568/852) for this and other hypercorrections. Also very useful is Noldorin's intuitive derivation of the centrifugal force term (physics.stackexchange.com/a/393/852) – recipriversexclusion Apr 9 '12 at 15:34 ## 5 Answers The real force at work is centripetal force, or a force pushing inwards. Imagine you have a bucket on a string, and you swing that around in a circle: As you swing the bucket, it travels in a circle. The red line shows the path the bucket takes. In order to make it swing like this, you have to apply a constant force on the rope -- this is the green arrow in the image. At any given moment in time, the bucket wants to travel straight -- the blue line in the diagram. By applying the centripetal force, the inward force, you change the motion from straight to the circular motion (the red line). Because the contents of the bucket always want to go straight, and the force you apply always makes them change direction, there seems to be an "outward" or "centrifugal" force "pushing" the contents against the side of the bucket. But it's an illusion -- it's really just the momentum of the bucket and its contents. - 6 I don't have time to write a full answer right now, but I will mention that the term "centrifugal force" does refer to a real and precisely defined physical concept. We tell intro-level physics students that it's an illusion just because that's a simpler way to keep them from getting confused than discussing noninertial reference frames. – David Zaslavsky♦ Apr 19 '11 at 20:41 4 The best proof that the centrifugal force is an illusion is the direction the bucket will fly if you let go of it: If there really was an outward-pulling force, it should fly away along the line connecting you and the bucket. In reality, it will fly along a line perpendicular to that. – Lagerbaer Apr 19 '11 at 20:43 2 I dislike the "centrifugal force is not real" concept. It is most certainly real. That it only applies in a particular reference frame could be said of nearly anything. This is like in grade school when they told you it is impossible to subtract a larger number from a smaller. It may have avoided confusion in children, but at this level, if we persist in claiming that it isn't real, we are being misleading, not helpful. – Colin K Apr 19 '11 at 21:05 2 @Josh: the point I meant to make is that if you draw a force diagram in the rotating reference frame, there is a real centrifugal force that does go on a force diagram. With respect to Lagerbaer's example, in the rotating reference frame, the bucket does fly away along the line connecting you and the center. In intro physics classes, we tell people that rotating reference frames "don't count" for simplicity, but in reality they are just as valid as inertial reference frames, and therefore the centrifugal force is just as valid as any other force. – David Zaslavsky♦ Apr 19 '11 at 22:24 2 @David: All models are wrong, but some are useful. (George E. P. Box) – Colin K Apr 19 '11 at 23:43
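Lagerbaer's let-go test in the comments above is easy to verify numerically. The sketch below (an editorial addition; the rotation rate and radius are made-up round numbers) propagates the force-free straight-line path of a released bucket in the inertial frame, then re-expresses it in the co-rotating frame, where the same motion looks like an outward drift:

```python
# A released bucket moves in a straight line (tangent to the circle) in the
# inertial frame; in the co-rotating frame the same motion drifts outward.
import numpy as np

omega = 1.0                       # rotation rate, rad/s (hypothetical)
R = 1.0                           # radius of the swing (hypothetical)
t = np.linspace(0, 1, 5)

# Inertial frame: released at (R, 0) with tangential velocity (0, omega*R).
x, y = R + 0*t, omega * R * t     # straight line, no force acting

# Co-rotating frame: rotate the coordinates back by the angle omega*t.
xr = np.cos(omega*t) * x + np.sin(omega*t) * y
yr = -np.sin(omega*t) * x + np.cos(omega*t) * y

print(np.hypot(xr, yr))           # radial distance grows with t
```

The radial distance grows even though no outward force ever acts: the "centrifugal push" is just straight-line inertia described in rotating coordinates.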
Centrifugal force is a particular example of a fictitious force. It is introduced so that Newton's second law holds in a rotating reference frame. Newton's second law says $$F = ma$$ This means that whenever we find an object accelerating (speeding up, slowing down, turning, or some combination), we can look around and find a physical reason why this happens. For example, a dropped stone accelerates towards the Earth, and this is due to Earth's gravity; if we drop the stone far from Earth, it won't fall. Your car turns a corner. This happens due to friction with the road. If the road were perfectly slick, the car would simply slide. Newton's second law holds in an inertial reference frame. It is simply a fact that such reference frames exist, and that they are all related to each other by moving past each other at constant velocities. (This becomes more complicated in general relativity, but that is not a major concern in everyday situations.) However, suppose you are in a train that begins accelerating forward (from the stationary track's point of view), and you are looking out the window at a ball sitting on the sidewalk nearby. From your reference frame in the train, the ball is accelerating backwards. However, there is no obvious source of a force on the ball that would make it accelerate backwards. This means that in an accelerating frame, Newton's second law doesn't work. Sometimes we would still like to do physics in such an accelerating frame, so we simply invent a new force, called a fictitious force, and say that the ball has a fictitious force of just the right amount needed to give it the acceleration we observe. Since the ball's acceleration is $a_b = -a_t$ with $a_t$ the acceleration of the train in an inertial frame, we need to introduce a fictitious force $$F_{fict} = -m a_t = m a_b$$ That way, Newton's laws still work and we can do physics as normal as long as the train's acceleration stays the same. We could, for example, play billiards in the accelerating train, noticing that the balls have curved trajectories across the table, and these curved trajectories would be perfectly explained by a fictitious force $-ma_t$ acting on each ball (with $m$ changing for balls with different masses). Keep in mind that the fictitious force points in the opposite direction of the train's acceleration. If the train accelerates forward, the ball appears to accelerate backwards, so the fictitious force must point backwards. Another type of accelerating frame is a rotating reference frame, for example a carousel. On the carousel, every part is accelerating towards the center (see Josh's answer). Therefore, to do physics in this frame, we must introduce a fictitious force $$F = -m a_c$$ as before. $a_c$ is the acceleration of the carousel at any point. Because this acceleration points in towards the center of the carousel, the fictitious force points the opposite direction - out towards the edge. This fictitious force is called the centrifugal force. Introducing the centrifugal force lets us do physics from the point of view of the rotating carousel, with the caveat that we can only handle statics this way. If things are actually moving on the carousel, we need to include the Coriolis force, which pushes things sideways. (See derivation here or some discussion of it in my answers here or here.) As for whether the centrifugal force is "real", it depends on what that means.
In an inertial frame, each force can be traced back to some physical interaction like the exchange of a photon. That's not true for the centrifugal force. This is the essential difference people are referencing when they say the centrifugal force and other fictitious forces "are not real". - That is fascinating and a great explanation! I can see why I was not taught this in physics classes I took. – Josh Apr 20 '11 at 12:19 Centrifugal force is not a real force in Newtonian mechanics. Whenever a frame of reference is accelerated (except the cases where the acceleration is due to gravity) w.r.t. an inertial frame of reference, an observer situated in that reference frame experiences a "force" in the opposite direction of the acceleration. For an outside inertial observer this is nothing but the inertia of the objects situated in the accelerated frame. Since in Newtonian mechanics, strictly speaking, only the descriptions of inertial observers are legal, the "force" experienced by the objects in the accelerated frame is not a real force. It is called a pseudo force. A rotating frame of reference is also an accelerated frame, one where the acceleration is directed towards the center; the real force producing this acceleration is called the centripetal force. As described above, a "force" will be experienced by the objects situated in that rotating frame opposite to the centripetal acceleration, i.e. away from the center. This is called the centrifugal "force". Obviously it is not a real force from the point of view of an outside inertial observer, who is the legal observer in Newtonian mechanics. For this observer it is simply the inertia of the objects. Therefore we call this centrifugal force a pseudo force and not a real force. From general relativistic considerations, however, all observers are equivalent no matter how they are moving, and every acceleration is equivalent to a gravitational field on a small enough scale. Therefore for an infinitesimally small rotating frame the centrifugal "force" can be thought of as a gravitational force! This directly follows from the equivalence principle. - +1 for pointing out the pseudo in it... – Vineet Menon Apr 9 '12 at 15:55 Because things want to go in a straight line. Imagine swinging an object around your head on a string. At any moment the object wants to go straight ahead (i.e. on a tangent) but it can't because of the string. It's as if the string was pulling it back in toward you - that's balanced by a force pushing it outward = centrifugal force. For a more technical explanation try physics.stackexchange.com, but expect an argument about whether centrifugal force exists! - LOVE the xkcd! Much better than my nonfreehand circle ;-) – Josh Apr 19 '11 at 19:20 1 note - the answer was written for a how-things-work level site – Martin Beckett Apr 19 '11 at 21:52 Gravity requires an initial force such as a big bang, which contains a substance moving all objects within that substance in the same direction. To cause this force to move requires an entirely unknown substance that carries this mass throughout a vacuum or a substance of lesser density than the substance of the inertia. However, objects of greater density than the substance of the initial inertia tumble and move in opposite directions within the initial force of the big bang and therefore create their own inertia. This movement causes gases and dust to form galaxies, which in themselves cause their own inertia.
This inertia causes the initial substance to move, like water whirling in a bowl, and consequently forces the objects of gas and dust to move together in accordance with their own mass and density. Eventually, in time, this causes the objects within to move towards each other, or to remain in space until they combine with another object. The question remains as to what the substance of space is and what the substance of outer space is, as nothing is nothing and logically cannot exist in our space. - unrelated to question – Arnold Neumaier Nov 15 '12 at 15:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385255575180054, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Constant_of_proportionality
# Proportionality (mathematics) (Redirected from Constant of proportionality) y is directly proportional to x. In mathematics, two variables are proportional if a change in one is always accompanied by a change in the other, and if the changes are always related by a constant multiplier. The constant is called the coefficient of proportionality or proportionality constant; when the constant is negative, the variables are described as negatively proportional. • If one variable is always the product of the other and a constant, the two are said to be directly proportional. x and y are directly proportional if the ratio $\tfrac yx$ is constant. • If the product of the two variables is always equal to a constant, the two are said to be inversely proportional. x and y are inversely proportional if the product $xy$ is constant. If a linear function transforms 0, a and b into 0, c and d, and if the product a b c d is not zero, we say a and b are proportional to c and d. An equality of two ratios such as $\tfrac ac\ =\ \tfrac bd,$ where no term is zero, is called a proportion. ## Geometric illustration The two rectangles with stripes are similar; the ratios of their dimensions are horizontally written within the image. The duplication scale of a striped triangle is obliquely written, in a proportion obtained by inverting two terms of another proportion horizontally written. When the duplication of a given rectangle preserves its shape, the ratio of the large dimension to the small dimension is a constant number in all the copies, and in the original rectangle. The largest rectangle of the drawing is similar to one or the other rectangle with stripes. From their width to their height, the coefficient is $\tfrac dc\ =\ \tfrac ba\ =\ \tfrac{d\,+\,b}{c\,+\,a}.$ A ratio of their dimensions horizontally written within the image, at the top or the bottom, determines the common shape of the three similar rectangles. The common diagonal of the similar rectangles divides each rectangle into two superposable triangles, with two different kinds of stripes. The four striped triangles and the two striped rectangles have a common vertex: the center of a homothetic transformation with a negative ratio −k or $\tfrac {-1}{k}$, that transforms one triangle and its stripes into another triangle with the same stripes, enlarged or reduced. The duplication scale of a striped triangle is the proportionality constant between the corresponding side lengths of the triangles, equal to a positive ratio obliquely written within the image: $\tfrac ca\ =\ k$ or $\tfrac ac \ =\ \tfrac 1k.$ In the proportion $\tfrac ab\ =\ \tfrac cd$, the terms a and d are called the extremes, while b and c are the means, because a and d are the extreme terms of the list (a, b, c, d), while b and c are in the middle of the list. From any proportion, we get another proportion by inverting the extremes or the means. And the product of the extremes equals the product of the means. Within the image, a double arrow indicates two inverted terms of the first proportion. Consider dividing the largest rectangle into two triangles, cutting along the diagonal. If we remove two triangles from either half rectangle, we get one of the plain gray rectangles. Above and below this diagonal, the areas of the two biggest triangles of the drawing are equal, because these triangles are superposable. Above and below, the subtracted areas are equal for the same reason. Therefore, the two plain gray rectangles have the same area: a d = b c.
## Symbol The mathematical symbol ∝ is used to indicate that two values are proportional. For example, A ∝ B means the variable A is directly proportional to the variable B. In Unicode this is the symbol U+221D. ## Direct proportionality Given two variables x and y, y is directly proportional to x (x and y vary directly, or x and y are in direct variation) if there is a non-zero constant k such that $y = kx.$ The relation is often denoted, using the ∝ symbol, as $y \propto x$ and the constant ratio $k = y/x$ is called the proportionality constant or constant of proportionality. ### Examples • If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality. • The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π. • On a map drawn to scale, the distance between any two points on the map is directly proportional to the distance between the two locations that the points represent, with the constant of proportionality being the scale of the map. • The force acting on a certain object due to gravity is directly proportional to the object's mass; the constant of proportionality between the mass and the force is known as gravitational acceleration. ### Properties Since $y = kx$ is equivalent to $x = \left(\frac{1}{k}\right)y,$ it follows that if y is directly proportional to x, with (nonzero) proportionality constant k, then x is also directly proportional to y with proportionality constant 1/k. If y is directly proportional to x, then the graph of y as a function of x will be a straight line passing through the origin, with the slope of the line equal to the constant of proportionality: it corresponds to linear growth. ## Inverse proportionality The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be "inversely proportional" to each other. If all other variables are held constant, the magnitude or absolute value of one inversely proportional variable will decrease if the other variable increases, while their product (the constant of proportionality k) is always the same. Formally, two variables are inversely proportional (also called varying inversely, in inverse variation, in inverse proportion, in reciprocal proportion) if one of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant. It follows that the variable y is inversely proportional to the variable x if there exists a non-zero constant k such that $y = {k \over x}$ The constant can be found by multiplying the original x variable and the original y variable. As an example, the time taken for a journey is inversely proportional to the speed of travel; the time needed to dig a hole is (approximately) inversely proportional to the number of people digging. The graph of two variables varying inversely on the Cartesian coordinate plane is a hyperbola. The product of the x and y values of each point on the curve will equal the constant of proportionality (k). Since neither x nor y can equal zero (if k is non-zero), the graph will never cross either axis.
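As a quick illustration of the two definitions above, here is a small Python sketch (an editorial addition; the sample data and tolerance are made up) that classifies tabulated (x, y) pairs by checking which of y/x or x·y stays constant:

```python
# Direct proportionality: y/x is constant. Inverse: x*y is constant.
def proportionality(xs, ys, tol=1e-9):
    ratios = [y / x for x, y in zip(xs, ys)]
    products = [x * y for x, y in zip(xs, ys)]
    if max(ratios) - min(ratios) < tol:
        return f"direct, k = {ratios[0]}"
    if max(products) - min(products) < tol:
        return f"inverse, k = {products[0]}"
    return "neither"

print(proportionality([1, 2, 4], [3, 6, 12]))   # direct, k = 3.0
print(proportionality([1, 2, 4], [12, 6, 3]))   # inverse, k = 12
```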
## Hyperbolic coordinates Main article: Hyperbolic coordinates The concepts of direct and inverse proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates; the two coordinates correspond to the constant of direct proportionality that locates a point on a ray and the constant of inverse proportionality that locates a point on a hyperbola. ## Exponential and logarithmic proportionality A variable y is exponentially proportional to a variable x if y is directly proportional to the exponential function of x, that is, if there exist non-zero constants k and a such that $y = k a^x.$ Likewise, a variable y is logarithmically proportional to a variable x if y is directly proportional to the logarithm of x, that is, if there exist non-zero constants k and a such that $y = k \log_a (x).$
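Since $y = k a^x$ becomes linear after taking logarithms ($\log y = \log k + x \log a$), two data points already determine k and a. The sketch below (an editorial addition with made-up sample points) recovers both:

```python
# Recover k and a in y = k * a**x from two data points via the log transform.
import math

def fit_exponential(x1, y1, x2, y2):
    log_a = (math.log(y2) - math.log(y1)) / (x2 - x1)  # slope in log space
    a = math.exp(log_a)
    k = y1 / a**x1                                     # back out the prefactor
    return k, a

print(fit_exponential(0, 2.0, 3, 16.0))  # (2.0, 2.0): the curve y = 2 * 2**x
```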
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.801494300365448, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/119770/retrieving-angles-from-a-rotation-matrix?answertab=oldest
# Retrieving angles from a rotation matrix I'm working with rotations in n dimensions. I represent these rotations as a sequence of $(n^2 - n)/2$ angles, one for each pair of axes, in a fixed order. I can easily compute a rotation matrix from this, by generating the Givens matrix for each angle and multiplying these matrices. The problem is doing it the other way around. Given an orthogonal matrix, how can I retrieve the angles in the right order? I'm using an evolutionary algorithm at the moment, which works reasonably well but is very slow. It would be good if there were a fast method for calculating an exact solution. Unfortunately, most things I find are limited to the 3d case. My linear algebra is limited to the very practical. I'm willing to learn, of course, but I need to know where to look first. - ## 1 Answer I assume your orthogonal matrix $R$ has determinant 1, so it is possible to write it as a product of rotations. Let $e_j$, $j=1\ldots n$, be the standard unit vectors in ${\mathbb R}^n$. Here's one possible strategy. Take a rotation $R_{1,n}$ in coordinates $1$ and $n$ such that $(R_{1,n} R^{-1} e_n)_1 = 0$, then $R_{2,n}$ in coordinates $2$ and $n$ such that $(R_{2,n} R_{1,n} R^{-1} e_n)_2 = 0$ (noting that you'll still have $(R_{2,n} R_{1,n} R^{-1} e_n)_1 = 0$), and so on until $(R_{n-1,n} \ldots R_{1,n} R^{-1} e_n)_i = 0$ for $i=1,2,\ldots,n-1$. Then $(R_{n-1,n} \ldots R_{1,n} R^{-1} e_n)_n$ must be $\pm 1$. If it's $-1$, change $R_{n-1,n}$ by angle $\pi$ to make it $+1$. Note that, by orthogonality, $(R_{n-1,n} \ldots R_{1,n} R^{-1} e_j)_n = 0$ for all $j < n$. Proceed in the same way on the $n-1$ component: $R_{n-2,n-1} \ldots R_{1,n-1} R_{n-1,n} \ldots R_{1,n} R^{-1} e_j = e_j$ for $j=n-1$ and $j=n$. Continue iterating down to $j=2$, obtaining finally $R_{1,2} R_{2,3} R_{1,3} \ldots R_{1,n} R^{-1} e_j = e_j$ for $j = 2,3, \ldots, n$. Since, by assumption, the determinant is $1$, this will also be true for $j=1$. Thus $R_{1,2} R_{2,3} R_{1,3} \ldots R_{1,n} R^{-1} = I$, i.e. $R_{1,2} R_{2,3} R_{1,3} \ldots R_{1,n} = R$. - That's great. It seems to do exactly what I want (and yes, the determinant is 1; I thought orthogonality = rotation, but apparently that's not true). Can you comment on how you came up with this? Is this a specific instance of a standard method, or is this sort of stuff just plainly obvious once you reach a certain level of understanding of linear algebra? And in the case of the latter, could you suggest a textbook at about that level? – Peter Mar 14 '12 at 10:52 – Robert Israel Mar 14 '12 at 16:55 – Peter Apr 5 at 14:19
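Here is a NumPy sketch of the answer's elimination strategy (an editorial addition; the function names are mine, and the (i, j) pair ordering below is one concrete convention, the elimination order of the answer, which may need reshuffling to match your own fixed angle order):

```python
# Reduce R^{-1} = R^T to the identity with Givens rotations, recording one
# angle per axis pair; the recorded rotations then multiply back to R.
import numpy as np

def givens(n, i, j, theta):
    """Rotation by theta in the (i, j) coordinate plane of R^n."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def recover_angles(R):
    """Return (i, j, theta) triples with R = G_last @ ... @ G_first."""
    n = R.shape[0]
    M = R.T.copy()                   # R^{-1} for an orthogonal matrix
    angles = []
    for col in range(n - 1, 0, -1):  # work on columns n, n-1, ..., 2
        for row in range(col):       # zero M[row, col] using plane (row, col)
            theta = np.arctan2(M[row, col], M[col, col])
            M = givens(n, row, col, theta) @ M
            angles.append((row, col, theta))
    return angles                    # M is now the identity when det(R) = +1

# Round-trip check on a random 5-dimensional rotation:
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                    # force determinant +1
rebuilt = np.eye(5)
for i, j, theta in recover_angles(Q):
    rebuilt = givens(5, i, j, theta) @ rebuilt
assert np.allclose(rebuilt, Q)
print(len(recover_angles(Q)))        # 10 == (5**2 - 5) / 2 angles
```

Each elimination step uses atan2 to zero one entry while keeping the diagonal entry non-negative, so the final diagonal is +1 automatically when det(R) = 1; the angle count matches the (n² − n)/2 parameters in the question.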
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434067606925964, "perplexity_flag": "head"}
http://gauravtiwari.org/tag/math/
# MY DIGITAL NOTEBOOK A Personal Blog On Mathematical Sciences and Technology # Tag Archives: Math ## Welcome 2012 – The National Mathematical Year in India Wednesday, December 28th, 2011 Srinivasa Ramanujan (Photo credit: Wikipedia) I was very pleased to read the news that the Government of India has decided to celebrate the upcoming year 2012 as the National Mathematical Year. This is the 125th birth anniversary of the math wizard Srinivasa Ramanujan (1887-1920). He is one of the greatest mathematicians India ever produced. Well, this is 'not' the main reason for appointing 2012 as the National Mathematical Year, as that is only a tribute to him. The main reason is the lack of mathematical awareness among Indian students. First of all, only a few students graduate with mathematics, and second, many do not choose mathematics as a primary subject at the early levels. As mathematics is not a very lucrative stream, most students want to go for professional courses such as engineering, medicine, business and management. The remaining graduates who enjoy science opt for either the physical or chemical sciences. The engineering craze has developed the field of computer science, but not so much theoretical computer science, which is one of the most recommended branches of mathematics. Statistics and combinatorics are almost 'dead' in many Indian universities and colleges. No one wants to deal with those brain-cracking math problems: neither students nor professors. Institutes where mathematics is taught are struggling with a lack of talented lecturers. Talented mathematicians don't want to teach here since they aren't getting much money, and ordinary lecturers can't do much more. India is almost 'zero' in mathematics, and some people, including critics, still roar that we discovered 'zero' and 'pi' and that we had Ramanujan. (more…) ## Interesting and Must Read Papers and Articles in Mathematics Tuesday, December 27th, 2011 Mathematics is beautiful, and there is no place for ugly mathematics in this world. Mathematics originates from creativity and develops with research papers. Research papers aren't only very detailed and tough for the general student to understand, but also interesting. Here, I have collected a list of some excellent articles and research papers (belonging mainly to math) which I have read and which are easily available online. The main source of this list is ArXiv.org, and you may find several research papers on ArXiv by visiting http://arxiv.org/. If you know any other paper/article which you find extremely interesting and that is not listed here, then please do comment mentioning the article name and URL. Papers/articles are cited paper title first, then HTTP URL, and last author name. [It is advisable to open these links in a new tab/window for smooth reading.] (more…) ## A Trip to Mathematics: Part-I Logic Tuesday, October 25th, 2011 # About "A Trip To Mathematics": A Trip to Mathematics is an indefinitely long series, aimed at generally interested readers and undergraduate students. This series will deal with basic as well as advanced mathematics in a very interactive manner. Each post in this series is kept short so that readers can grasp the concepts. Criticism and suggestions are invited in the form of comments. # What is Logic? If mathematics is regarded as a language, then logic is its grammar.
In other words, logical precision has the same importance in mathematics as grammatical accuracy in a language. Just as linguistic grammar has sentences and statements, logic has them too. Let us first discuss sentences and statements; then we shall proceed further into logic. # Sentences & Statements A sentence is a collection of words that together have some sense. For example: 1. Math is a tough subject. 2. English is not a tough subject. 3. Math and English both are tough subjects. 4. Either Math or English is a tough subject. 5. If Math is a tough subject, then English is also a tough subject. 6. Math is a tough subject, English is a tough subject. Just have a quick look at the above collections of words. They are sentences, since they have some meaning too. The first sentence is called a prime sentence, i.e., a sentence containing no connectives. The five words • not • and • or • if …. then • if and only if or their combinations are called 'connectives'. The sentences (all but the first) are called composite sentences, i.e., sentences in which one or more connectives appear. Remember that there is no difference between a sentence and a statement in general logic. In this series, sentences and statements will have the same meaning. # Connectives not: A sentence which is modified by the word "not" is called the negation of the original sentence. For example: "English is not a tough subject" is the negation of "English is a tough subject". Also, "3 is not a prime" is the negation of "3 is a prime". Always note that negation doesn't really mean the converse of a sentence. For example, you cannot write "English is a simple subject" as the negation of "English is a tough subject". In mathematical writings, symbols are often used for conciseness. The negation of sentences/statements is expressed by putting a slash (/) over the symbol which incorporates the principal verb in the statement. For example: the statement $x=y$ (read 'x is equal to y') is negated as $x \ne y$ (read 'x is not equal to y'). Similarly, $x \notin A$ (read 'x does not belong to set A') is the negation of $x \in A$ (read 'x belongs to set A'). Statements are sometimes represented by symbols like p, q, r, s etc. With this notation there is a symbol, $\not$ or ¬ (read as 'not'), for negation. For example, if 'p' stands for the statement "Terence Tao is a professor", then $\not p$ [or ¬p] is read as 'not p' and stands for "Terence Tao is not a professor." Sometimes ~p is also used for the negation of p. and: The word "and" is used to join two sentences to form a composite sentence, which is called the conjunction of the two sentences. For example, the sentence "I am writing, and my sister is reading" is the conjunction of the two sentences: "I am writing" and "My sister is reading". In ordinary language (English), words like "but" and "while" are used as approximate synonyms for "and"; however, in math we shall ignore possible differences in shades of meaning which might accompany the use of one in place of the other. This allows us to write "I am writing but my sister is reading" as having the same mathematical meaning as above. The standard notation for conjunction is $\wedge$, read as 'and'. If p and q are statements, then their conjunction is denoted by $p \wedge q$ and is read as 'p and q'. or: A sentence formed by connecting two sentences with the word "or" is called the disjunction of the two sentences. For example, "Justin Bieber is a celebrity, or Sachin Tendulkar is a footballer." is a disjunction of "Justin Bieber is a celebrity" and "Sachin Tendulkar is a footballer".
Sometimes we put the word 'either' before the first statement to make the disjunction sound nicer, but as far as a logician is concerned it is not necessary to do so. The symbolic notation for disjunction is $\vee$, read 'or'. If p and q are two statements, their disjunction is represented by $p \vee q$ and read as 'p or q'.

if….then: From two sentences we may construct one of the form "If . . . then . . ."; this is called a conditional sentence. The sentence immediately following IF is the antecedent, and the sentence immediately following THEN is the consequent. For example, "If 5 < 6 and 6 < 7, then 5 < 7" is a conditional sentence with "5 < 6, 6 < 7" as antecedent and "5 < 7" as consequent. If p and q are the antecedent and consequent sentences respectively, then the conditional sentence can be written as "If p then q". This is represented mathematically as $p \Rightarrow q$, read "p implies q", and the statement is sometimes also called an implication statement. Several other ways are available to paraphrase implication statements, including:
1. If p then q
2. p implies q
3. q follows from p
4. q is a logical consequence of p
5. p (is true) only if q (is true)
6. p is a sufficient condition for q
7. q is a necessary condition for p

If and Only If: The phrase "if and only if" (abbreviated as 'iff') is used to obtain a bi-conditional sentence. For example, "A triangle is called a right-angled triangle if and only if one of its angles is 90°." This sentence can be read in both directions: "A triangle is called a right-angled triangle if one of its angles is 90°" and "One of the angles of a triangle is 90° if the triangle is a right-angled triangle." This means that the first prime sentence implies the second prime sentence and the second implies the first. (This is why 'iff' is sometimes called double implication.) Another example is "A glass is half filled iff that glass is half empty." If p and q are two statements, then we read the biconditional statement as "p if and only if q" or "p iff q" and represent it mathematically by "$p\iff q$". $\iff$ represents double implication and is read as 'if and only if'. In the statement $p \iff q$, the implication $p \Rightarrow q$ is called the direct implication and the implication $q \Rightarrow p$ is called the converse implication of the statement.

# Other terms in logic:
Stronger and Weaker Statements: A statement p is stronger than a statement q (or q is weaker than p) if the implication statement $p \Rightarrow q$ is true.

Strictly Stronger and Strictly Weaker Statements: The word 'stronger' (or 'weaker') does not necessarily mean 'strictly stronger' (or 'strictly weaker'). For example, every statement is stronger than itself, since $p \Rightarrow p$. The apparent paradox here is purely linguistic. If we want to avoid it, we should replace the word 'stronger' by the phrase 'stronger than, or possibly as strong as'. If $p \Rightarrow q$ is true but its converse is false ($q \not \Rightarrow p$), then we say that p is strictly stronger than q (or that q is strictly weaker than p). For example, to say that a given quadrilateral is a rhombus is strictly stronger than to say that it is a parallelogram. Another understandable example: "If a blog is hosted on WordPress.com, it is powered by WordPress software." is true, but "If a blog is powered by WordPress software, it is hosted on WordPress.com" is not true.

# Logical Approach
What exactly is the difference between a mathematician, a physicist and a layman?
Let us suppose they all start measuring the angles of hundreds of triangles of various shapes, find the sum in each case and keep a record. Suppose the layman finds that, with one or two exceptions, the sum in each case comes out to be 180 degrees. He will ignore the exceptions and state 'The sum of the three angles in a triangle is 180 degrees.' A physicist will be more cautious in dealing with the exceptional cases. He will examine them more carefully. If he finds that the sum in those cases lies somewhere between 179 degrees and 181 degrees, say, then he will attribute the deviation to experimental error. He will state a law: 'The sum of the three angles of any triangle is 180 degrees.' He will then watch happily as the rest of the world puts his law to the test and finds that it holds good in thousands of different cases, until somebody comes up with a triangle in which the law fails miserably. The physicist now has to withdraw his law altogether, or else replace it by some other law which holds good in all the cases tried. Even this new law may have to be modified at a later date. And this will continue without end. A mathematician will be the fussiest of all. If there is even a single exception, he will refrain from saying anything. Even when millions of triangles are tried without a single exception, he will not state it as a theorem that the sum of the three angles in 'any' triangle is 180 degrees. The reason is that there are infinitely many different types of triangles. To generalise from a million to infinity is as baseless to a mathematician as to generalise from one to a million. He will at most make a conjecture and say that there is 'strong evidence' suggesting that the conjecture is true. The approach taken by the layman or the physicist is known as the inductive approach, whereas the mathematician's approach is called the deductive approach.

# Inductive Approach
In the inductive approach, we make a few observations and generalise. Exceptions are generally not counted in the inductive approach.

# Deductive Approach
In this approach, we deduce from something which is already proven.

# Axioms or Postulates
Sometimes, when deducing theorems or conclusions from other theorems, we reach a stage where a certain statement cannot be proved from any 'other' proved statement and must be taken for granted to be true; such a statement is called an axiom or a postulate. Each branch of mathematics has its own postulates or axioms. For example, one fundamental axiom of geometry is that infinitely many lines can be drawn passing through a single point. The whole beautiful structure of geometry is based on five or six such axioms, and every theorem in geometry can ultimately be deduced from these axioms.

# Argument, Premises and Conclusion
An argument is, really speaking, nothing more than an implication statement. Its hypothesis consists of the conjunction of several statements, called premises. In giving an argument, its premises are first listed (in any order); then, connecting them all, a conclusion is given. Example of an argument:

Premises:
$p_1$: Every man is mortal.
$p_2$: Ram is a man.
Conclusion:
$q$: Ram is mortal.

Symbolically, let us denote the premises of an argument by $p_1, p_2, \ldots , p_n$ and its conclusion by $q$. Then the argument is the statement $(p_1 \wedge p_2 \wedge \ldots \wedge p_n) \Rightarrow q$. If this implication is true, the argument is valid; otherwise it is invalid. A brute-force check of validity over all truth assignments is sketched in the code below.
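Since validity is a purely mechanical matter of truth tables, it can be checked by brute force. Here is a minimal Python sketch (my own; the function names are illustrative, not from the post) for arguments built from two atomic statements:

```python
from itertools import product

def implies(p, q):
    # Truth table of p => q: false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion, n_atoms):
    # An argument (p1 ^ p2 ^ ... ^ pn) => q is valid iff the implication
    # holds under every assignment of truth values to the atomic statements.
    for values in product([True, False], repeat=n_atoms):
        antecedent = all(p(*values) for p in premises)
        if not implies(antecedent, conclusion(*values)):
            return False
    return True

# Modus ponens: premises p and p => q, conclusion q. Valid.
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q, 2))   # True

# Affirming the consequent: premises q and p => q, conclusion p. Invalid.
print(is_valid([lambda p, q: q, lambda p, q: implies(p, q)],
               lambda p, q: p, 2))   # False
```

Modus ponens comes out valid, while the classic fallacy of affirming the consequent fails at the assignment p = False, q = True.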
To be continued……

###### Suggested Readings:
• Basic logic – connectives – NOT (gowers.wordpress.com)
• Welcome to the Cambridge Mathematical Tripos (gowers.wordpress.com)

## 381654729 : An Interesting Number Happened To Me Today
Saturday, September 17th, 2011

You might be wondering why I am writing about an individual number. Actually, in the previous year's annual exams, my registration number was 381654729, which is just an 'ordinary' nine-digit number. I never cared about it, and forgot it after the exam results were announced. But this morning, when I opened page 8 of the October 2010 issue of "Mathematics Today" magazine, I was brilliantly shocked. 381654729 is a nine-digit number with each of the digits from 1 to 9 appearing once. The whole number is divisible by 9. If you remove the right-most digit, the remaining eight-digit number is divisible by 8. Removing the next right-most digit leaves a seven-digit number that is divisible by 7. Similarly, removing the next right-most digit leaves a six-digit number that is divisible by 6. This property continues all the way down to one digit. Further research turned up a term for such numbers: poly-divisible numbers. (A short script verifying this property appears at the end of this archive page.) I also noticed that a similar problem was asked in the USA Mathematical Talent Search competition; see the first question in the document embedded in the original post. After this beautiful incident, I would like to quote a statement here:

Mathematical wonders happen to mathematicians. Numbers always chase me.

###### Related articles
• Two Interesting Math Problems (wpgaurav.wordpress.com)

## Social Networks for Math Majors
Tuesday, September 13th, 2011

Math, or mathematics, is not as difficult as it is thought to be. Mathematical patterns, structures, geometry and its use in everyday life make it beautiful. The term 'math majors' generally includes math students, math professors, and researchers or mathematicians. The internet has always been a tonic for learners, and the whole internet can be thought of as one big social network, in which one shares and others read, one asks and others answer. There are thousands of social networks (and growing) where you can enjoy your days, share fun, etc. However, there are only a few social (mathematical) networks which are completely focused on math and related sciences, but these are brilliant enough to demonstrate the wisdom of mathematicians. I have tried to list my favorite social networking websites on mathematics. Please have a read and give feedback in the form of comments.

# Math.StackExchange
Mathematics Stack Exchange is a website dedicated to all types of mathematical discussions. You can ask questions, give answers, comment on questions and vote on them. Registration is very easy and takes seconds. Depending on your work, you are given 'reputation', and with some special contributions you are also given privileges. This is a free, community-driven Q&A for people studying math at any level and professionals in related fields. It is part of the Stack Exchange network of Q&A websites, and it was created through the open democratic process defined at Stack Exchange Area 51.
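As promised in the 381654729 post above, here is a short Python sketch (mine, not from the blog) that verifies the poly-divisible property and searches all 9! pandigital candidates:

```python
from itertools import permutations

def is_polydivisible(n):
    # The first k digits are divisible by k for every k; equivalently,
    # repeatedly removing the right-most digit always leaves a number
    # divisible by its own length, as described in the post.
    s = str(n)
    return all(int(s[:k]) % k == 0 for k in range(1, len(s) + 1))

print(is_polydivisible(381654729))   # True

# Search all nine-digit numbers using each digit 1..9 exactly once.
hits = [int(''.join(p)) for p in permutations('123456789')
        if is_polydivisible(int(''.join(p)))]
print(hits)   # [381654729]
```

The search prints [381654729] only, i.e. it is the unique pandigital number with this property, which matches the USAMTS problem referenced in the post.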
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428097605705261, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/frequency?sort=votes
# Tagged Questions

The frequency tag has no wiki summary.

### What determines color — wavelength or frequency? (6 answers, 4k views)
What determines the color of light: is it the wavelength of the light or the frequency? (i.e. If you put light through a medium other than air, in order to keep its color the same, which one would ...

### How does load affect frequency on the power grid? (2 answers, 6k views)
This story about the use of battery/freewheel based Frequency Regulators confused me about how the 60 Hz frequency of the North American power grid was set, saying that it was kept at that frequency by ...

### Can we transport energy over infinite distances through vacuum using light? (3 answers, 625 views)
I know that light (or electromagnetic radiation in general) attenuates in intensity as the square of the distance it travels. Why does it attenuate? Are the photons being scattered by the medium ...

### Why does inverting a song have no influence? (3 answers, 278 views)
I inverted the waveform of a given song and was wondering what would happen. The result is that it sounds exactly the same as before. I used Audacity and double-checked that the wave-form really is ...

### How can a human voice or animal voice have a unique frequency? (2 answers, 605 views)
Well, this is a pretty noobish question and I am not sure how to ask. When we talk, we don't talk at a uniform frequency. Then how can one measure the frequency of one's sound/voice? I am asking this because ...

### How to Make RF Waves Visible (5 answers, 966 views)
I understand RF (Radio Frequency) waves are electromagnetic waves and a mode of communication for wireless technologies, such as cordless phones, radar, ham radio, GPS, and television broadcasts. Most ...

### How many colors exist? (5 answers, 930 views)
How many "colors" exist? Our perception: As far as I know, colors are just different frequencies of light. According to Wikipedia, we can see wavelengths between about 380 nm and 740 nm. This means ...

### How to calculate the quantum expectation of frequency of a particle? (4 answers, 574 views)
I know how to calculate the expectation $\langle \Psi | A | \Psi \rangle$ where the operator A is the eigenfunction of energy, momentum or position, but I'm not sure how to perform this for a pure frequency. ...

### Why does wavelength change as light enters a different medium? (2 answers, 12k views)
When light waves enter a medium of higher refractive index than the previous one, why is it that: its wavelength decreases? Its frequency has to stay the same?

### What is the highest possible frequency for an EM wave? (3 answers, 477 views)
What is the highest possible frequency, shortest wavelength, for an electromagnetic wave in free space, and what limits it? Is the answer different for EM waves in other materials or circumstances? ...

### Strategies against 50 Hz mains hum on detector signals? (2 answers, 933 views)
I'm having problems with a strong 50 Hz mains hum on signals created by photodetectors. I assume that they are due to ground loops and I realize that the best option would be to remove those. What are ...

### The definition of "frequency" in different contexts (2 answers, 431 views)
I have been doing some research on all kinds of sound-related topics lately and have been a bit confused by the different uses of the term "frequency". Of course, the most general meaning of frequency ...

### Sound frequency of dropping bomb (5 answers, 498 views)
Everyone has seen cartoons of bombs being dropped, accompanied by a whistling sound as they drop.
This sound gets lower in frequency as the bomb nears the ground. I've been lucky enough to not be ...

### Why frequency doesn't change during refraction? (3 answers, 203 views)
When light goes from one medium to another, its velocity and wavelength change. Why does the frequency not change in this phenomenon?

### The energy of an electromagnetic wave (2 answers, 944 views)
The intensity of an electromagnetic wave is only related to its amplitude $E^2$ and not its frequency. A photon has the same wavelength as the wave that's carrying it, and its energy is $h f$. So ...

### Doppler cooling limit vs recoil limit (1 answer, 256 views)
I was discussing laser cooling in class today, and I understood that the main principle of the process is to tune a laser to a frequency lower than the absorption frequency of the atom, so that only the ...

### Can 2 beams of ultraviolet light intersect and be visible where they intersect? (2 answers, 676 views)
Is it possible that if you have 2 ultraviolet lasers, which are invisible to the human eye, and you aim their beams to intersect at some point, the place of intersection will show a lower ...

### How to determine frequency components present in distorted signal, with the set of possible components already known? (1 answer, 40 views)
I am trying to choose the best approach to digitally analyse a signal which is a mix of an unknown number (but fewer than 16) of fundamental signals at specific frequencies (e.g., sines). The goal is ...

### What is the specific meaning of "Fourier frequency" (as opposed to simply "frequency")? (2 answers, 470 views)
I've noticed that many journal articles (in optics) use the phrase "Fourier frequency" to describe, well, the frequency of something. Google Scholar search for "Fourier frequency". Example: ...

### Is all light the same speed? (2 answers, 396 views)
Is there any speed difference between blue and red light, or do they travel at the same speed?

### How does power consumption vary with the processor frequency in a typical computer? (4 answers, 1k views)
I am looking for an estimate of the relationship between the rate of increase of power usage as the frequency of the processor is increased. Any references to findings on this would be helpful.

### Phase difference of driving frequency and oscillating frequency (2 answers, 529 views)
If a mass is attached to a spring and is oscillating (SHM), and a driving force is applied, it must be at the same frequency as the mass's oscillation frequency. However, I'm told that the phase ...

### Frequency Modulation (2 answers, 67 views)
If FM radios work by modulating the frequency, how is it that we can tune into a specific channel and hear a song or station? Wouldn't the channel need to be modulated along with the varying ...

### What is the history behind the factors of 3 in the classification of electromagnetic radiation? (2 answers, 105 views)
What is the history behind the factors of 3 in the classification of electromagnetic radiation? See e.g. http://en.wikipedia.org/wiki/Radio_spectrum#By_frequency. Is this (just) inherited from the ...

### How does energy depend on frequency in an alternating current circuit? (3 answers, 303 views)
In what relation is the energy input in an alternating current circuit to its frequency? I'd guess I have to compute something like $$E=\int P(\omega,t)\, dt=\int U(\omega,t) I(\omega,t)\, dt,$$ but ...
### Light refraction and causality (4 answers, 136 views)
One way to look at refraction by a dielectric medium like water or glass is that the (phase) velocity of light decreases because it is the wavelength rather than the frequency of the light which ...

### How much power can be drawn from stray electromagnetism in the atmosphere? (1 answer, 166 views)
I know this probably varies quite a bit from place to place on Earth. But just some rough estimates: if I were to pull power via multiple antennae tuned to a variety of different frequencies, how much ...

### How can a Photon have a "frequency"? (1 answer, 313 views)
I picture a light ray as composed of photons with an energy equal to the frequency of the light ray according to E=hf. Is this a good way to picture it? Although I can solve elementary problems ...

### Limit of human eye flicker perception? (6 answers, 2k views)
I am designing an LED dimmer using software-controlled pulse width modulation, and want to know the minimum PWM frequency that I must reach to make that LED dimming method indistinguishable from ...

### Frequency of a Tuning Fork (3 answers, 132 views)
Question: Which of the following affect the frequency of a tuning fork?
• Tine stiffness
• Tine length
• The force with which it's struck
• Density of the surrounding air
• Temperature of the surrounding air ...

### Radio waves and frequency of photon (1 answer, 184 views)
Is an 89 MHz station emitting photons of 89 MHz frequency? (I mean $\nu$ in $E=h\nu$.)

### Relation of color and frequency for the visible spectrum (1 answer, 28 views)
In this question the OP is looking for a way to see light that is outside of the visible spectrum without using electronic sensors. This got me wondering about the visible spectrum itself. Typically ...

### Is energy always proportional to frequency? (1 answer, 72 views)
Google has no results found for "energy not proportional to frequency" and many results for E=hf. Is there an example of an energy that is not proportional to frequency?

### Do we see color with higher frequency first? (2 answers, 377 views)
Out of the 7 colours of the rainbow, violet has the highest frequency and the smallest wavelength. Does this mean that our eye sees it first? If yes, then why? Does it travel at the same or higher ...

### static flow of water (1 answer, 553 views)
About the title: I don't know whether it's correct or not, but I came across a video on YouTube, http://youtu.be/_PkgQQqpH2M. The author of the video used this title, and hence I used the same. The video ...

### Effects of high frequency lighting on human vision? (1 answer, 502 views)
I have a couple of different LED flashlights. One of them has three different "modes" of brightness, and the way it controls them is via pulse width modulation (PWM). Here is a picture that illustrates ...

### 5MHz RF pulse frequency analysed in software (3 answers, 181 views)
Is there software available that can analyse a 5 MHz RF pulse to give a plot of the frequency spectrum? The signal data is visible on an LCD screen, or a printout could be obtained.

### How could this person have discovered the resonant frequency from this string of magnets? (3 answers, 532 views)
I stumbled onto this page http://mylifeisaverage.com/story/1364811/ and the post states that they were all making strings and shapes with these sets of 216 really small spherical earth ...

### Frequency Response RLC circuit - Current against Frequency graph - Symmetry? (1 answer, 144 views)
I understand that in a Frequency Response experiment dealing with an RLC circuit, the graph of Current against Frequency is supposed to be symmetrical about the resonant frequency theoretically. ...

### Does the Fundamental Frequency in a Vibrating String NOT Necessarily Have the Strongest Amplitude? (3 answers, 301 views)
I am doing some experiments on musical strings (guitar, piano, etc.). After performing a Fourier transform on the sound recorded from those string vibrations, I find that the fundamental frequency is ...

### How to capture electromagnetic radiation/waves? (2 answers, 319 views)
If I wanted to find out what kind of electromagnetic waves "travel" through my room and at which frequencies, what kind of equipment would I need? Suppose I want to view frequencies from 0 Hz to 6 GHz.

### Can electrons change the frequency of light as they bounce off/around? (2 answers, 206 views)
I know that light does not interact with other light, but can interfere with it, at least in amplitude. That said, light's frequency can be changed by bouncing off matter, where matter might absorb ...

### Why are different frequency bands used in different countries? (2 answers, 267 views)
Why are different frequency bands used in different countries despite the ITU's effort for common frequency band use? There's got to be a reason behind this. For instance, U.S.-based Verizon Wireless ...

### Frequency of the sound when blowing in a bottle (2 answers, 866 views)
I'm sure you have tried sometime to make a sound by blowing into an empty bottle. Of course, the tone/frequency of the sound changes if the bottle changes its shape, volume, etc. I am interested in ...

### Resonance and Natural Vibrations in Vacuum (1 answer, 272 views)
In my physics textbook, it says that if two pendulums of the same natural frequency are placed next to each other and one is set into vibration, the other starts resonating, and when the first one ...

### Will changing amplitude change the frequency? (1 answer, 50 views)
Will changing the amplitude change the frequency of a wave, or is it possible for a specific frequency (50 Hz, for example) to be generated from shifting amplitude patterns?

### How do the energy eigenvalues of rotational degrees of freedom in statistical mechanics come about? (1 answer, 115 views)
I want to understand the hierarchy of the different degrees of freedom of a mechanical system. Specifically, I want to understand which subsystems equilibrate faster and why. This question comes up: Why ...

### Angular frequency. Wrong interpretation at Wikipedia? (1 answer, 242 views)
This and this article mention that the angular frequency is: the number of oscillations per unit of time. But this doesn't seem to be correct, since the angular ...

### Why do frequency and tension not change between the two media? (1 answer, 28 views)
I am reading a book about wave mechanics. There are two different cords (one light and one heavy) connected together; one person waves the lighter one, and the wave travels to the right from the ...

### Will a photon emitted from something moving quickly have a shorter wavelength? (1 answer, 111 views)
If a photon is emitted from a light source moving at any speed, the photon will nonetheless always move at c (assuming it is emitted in a vacuum). If the speed of a photon's emitter cannot influence ...
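Several of the questions in this listing turn on two standard relations: c = λf for any wave, and E = hf for a photon. A tiny numerical sketch (my own, not from any of the questions) makes the numbers concrete:

```python
# Relations used throughout the listing above: c = lambda * f and E = h * f.
c = 299_792_458          # speed of light in vacuum, m/s
h = 6.62607015e-34       # Planck constant, J*s

wavelength = 500e-9      # green light, 500 nm
frequency = c / wavelength
energy = h * frequency

print(f"f = {frequency:.3e} Hz")   # ~6.0e14 Hz
print(f"E = {energy:.3e} J")       # ~4.0e-19 J, about 2.5 eV
```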
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9375594854354858, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/associated-graded
## Tagged Questions

### Liftability of a submodule from an associated graded module (1 answer, 156 views)
Let $k$ be a field, $A$ a $k$-algebra (probably noncommutative), and $M$ an $A$-module that's finite-dimensional as a vector space over $k$. Let $Gr(M;k)$ denote the set of all $k$ …

### Generators of associated graded algebra (0 answers, 151 views)
Suppose that $A = \bigcup_{n=0}^{\infty} A_n$ is a filtered algebra over a field $k$. The associated graded algebra is $\mathrm{gr} A = \bigoplus_{n=0}^{\infty} A_n/A_{n-1}$, wher …

### An explicit description of $\operatorname{gr}(k \cdot G)$ for the filtration induced by the augmentation ideal? (2 answers, 375 views)
Let $A$ be any bialgebra (associative, unital, etc.) over a ring $k$. Then among other things it has a counit $\epsilon : A \to k$, and hence an augmentation ideal $I = \ker \epsi$ …

### Associated graded of filtered module-algebra over a Hopf algebra (1 answer, 349 views)
I ran across the following statement in a paper, and it seems fishy to me: Lemma: If $A$ is any Hopf algebra, and if $U$ is an $\mathbb{N}_0$-filtered $A$-module algebra, then $U$ …

### Associated graded and flatness (2 answers, 472 views)
Let $M$ be a filtered module over a filtered algebra $A$, and suppose $\mathrm{gr}(M)$ is flat over $\mathrm{gr}(A)$, where $\mathrm{gr}$ means the associated graded module and algebra, respectively. What c …

### What is the universal property of associated graded? (2 answers, 719 views)
Given a filtered vector space (or module over a ring) $0=V_0\subseteq V_1\subseteq\cdots\subseteq V$, you can construct the associated graded vector space $\mathrm{gr}(V)=\bigoplus_i V_{i+1}/V_i$. Does $\mathrm{gr}(V)$ satisfy a …

### If associated-graded of a filtered bialgebra is Hopf, does it follow that the original bialgebra was Hopf? (2 answers, 292 views)
Warning: older texts use the word "Hopf algebra" for what's now commonly called "bialgebra", whereas now "Hopf" is an extra condition. So as to avoid any confusion, I'll give my d …
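As a concrete illustration of the construction these questions revolve around (a worked toy example of my own, not drawn from any of the questions):

```latex
% Associated graded of a short filtration of V = k^3:
%   0 = V_0 \subseteq V_1 \subseteq V_2 \subseteq V_3 = V,
% with dim V_1 = 1, dim V_2 = 2, dim V_3 = 3.
\[
  \mathrm{gr}(V) \;=\; \bigoplus_{i=1}^{3} V_i / V_{i-1}
  \;\cong\; k \oplus k \oplus k ,
\]
% so dim gr(V) = dim V, and gr(V) remembers the filtration only through the
% dimensions of its graded pieces.  For a filtered algebra the same recipe
% gives gr(A) = \bigoplus_n A_n / A_{n-1} with the induced multiplication;
% the standard example is the PBW theorem, which identifies
% gr(U(\mathfrak g)) for the usual filtration with Sym(\mathfrak g).
```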
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8963247537612915, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/ideal-gas
# Tagged Questions

The ideal-gas tag has no wiki summary.

### Find the work done to increase the temperature of an ideal gas by 30°C if the gas is expanding [closed] (0 answers, 20 views)
Find the work done to increase the temperature of an ideal gas by $30^\circ \mathrm{C}$ if the gas is expanding under the condition $V \propto \dfrac{t^2}{3}$.

### Air pressure in balloon (0 answers, 43 views)
I have to calculate the air pressure inside a hot air balloon. After some searching I found out that I can use the ideal gas law: PV = nRT (from Wikipedia). So to get the pressure in the balloon I ...

### Is it possible to obtain higher order corrections to the ideal gas law when one allows realistic phenomena to make their way into the equations? (2 answers, 68 views)
I had an interesting thought today that caused me to ask whether it'd be possible to make corrections to the ideal gas law by introducing terms derived from more realistic phenomena to make their ...

### What is the ratio of volume to lift for helium? [closed] (0 answers, 25 views)
I've been searching for a usable direct conversion, but most of what I've found used a lot of rounding. What is the exact formula?

### How can I interpret negative values of potential evapotranspiration? (0 answers, 34 views)
If I extract Potential Evaporation (PET, W/m$^2$) from the National Centers for Environmental Prediction (NCEP) climate reanalysis data (downloadable as netCDF files here), there are some negative ...

### Finding rms velocity in isothermal process [closed] (0 answers, 23 views)
I think none of these options is correct; I just need someone to confirm, as no option matches $V_{rms}$.

### How fair is it to say that all chemistry arises from failures of the ideal gas law? (1 answer, 62 views)
I was reading here about how the ideal gas law assumes point masses and non-interaction. Is it fair to say that all chemistry arises from failures of that? Of course, such a sweeping generalization ...

### Volume of gas at which relative fluctuation of gas density occurs (1 answer, 63 views)
I have the following question: In what volume of gas does a 10% relative fluctuation of gas density occur under a pressure of $10^5\text{ Pa}$ and a temperature of $293.15\text{ K}$? I don't understand ...

### Conversion of ideal gas to real gas via $Z$ compression factor (1 answer, 50 views)
The ideal gas equation $PV=nRT$ can be converted into the real gas equation by the compression factor $Z$, i.e. $PV=Z\,nRT$. My question is: what is $Z$ and how does it arise? Is $PV/nRT$ a compression ratio of ...

### What are the units of these virial coefficients? (2 answers, 44 views)
I'm reading some papers on calculating the vapor pressure of alkali metals as a function of temperature, and I've come across some familiar-looking virial expansions, but when I tried to work out the ...

### Calculating Air Density Lapse With Altitude (1 answer, 54 views)
(Specifically, pressures.) This might be a bit more of an engineering question, but I'm calculating air density drop-off with altitude, and I'm having some problems calculating the pressure (I'll run through my method). This ...

### Temperature change inside pressure chamber (2 answers, 76 views)
Let's say there is a pressure chamber with some sort of sample/specimen (e.g. a protein crystal) in it. Now I apply a certain amount of gas pressure, e.g. 10 or 20 atm. Let's say I use xenon as a gas. ...

### In $PdV$, what is the value of $P$? $P_1$ or $P_2$? (1 answer, 55 views)
Say I have an ideal gas with known $P_1$, $P_2$, $T_1$, and $T_2$ undergoing a reversible adiabatic process.
I want to find the work done, so I must use $PV = RT$ to get the change in $V$, so ...

### How fast does transient gas flow change to steady state in a network? (0 answers, 20 views)
I need to do some natural gas flow-in-network calculations. I am using steady-state formulas, but since I am interested in the failure of the pipeline, I started to wonder how fast the entire gas ...

### Canonical partition of a boson gas (1 answer, 147 views)
I have a 1D gas made of $N$ particles placed in a harmonic potential well, so the Hamiltonian is: $$\mathcal H = \sum_{j=1}^N \left ( \frac{p_j^2}{2m} + \frac{1}{2}m\omega^2 x_j^2 \right )$$ The ...

### How to calculate the specific heat capacity of gases (1 answer, 370 views)
Could someone explain why the specific heat capacity (SHC) of dry air is 1.005 kJ/(kg·K) whereas the accepted SHC for ventilation air is quoted as 1.300 kJ/(kg·K)? Assuming the air for the ventilation calculations ...

### Root Mean Square Speed of Gas (1 answer, 100 views)
The RMS speed of particles in a gas is $v_{rms} = \sqrt{\frac{3RT}{M}}$ where $M$ = molar mass, according to this Wiki entry: http://en.wikipedia.org/wiki/Root-mean-square_speed The gas laws ...

### Ideal gas temperature and pressure gradients? (1 answer, 124 views)
Consider an ideal gas in a $d\times d\times L$ box with the $L$ dimension in the $x$-direction. Suppose that the opposite $d\times d$ sides of the box are held at temperatures $T_1$ and $T_2$ with ...

### Ideal gas concentration under temperature gradient (2 answers, 172 views)
I'm trying to calculate the concentration of an ideal gas in an adiabatic container as a function of position, where the top and bottom plates of the container are fixed at temperatures $T_1$ and ...

### With ideal gases, a varying quantity of moles, and a constant volume, how do temperature and pressure behave? (2 answers, 135 views)
I'm trying to build a simulation of gases, so I ended up trying to use the ideal gas law ($PV = nRT$). In my scenario: the volume is constant ($V=1\rm{m}^3$); a known quantity of moles is being added ...

### Experimental data for gas flow through pipelines (0 answers, 40 views)
Maybe some of you could point to a good source of experimental data on gas flow through pipelines? What I need is a very simple flow of some gases through a simple linear pipeline of some diameter. A graph of ...

### Adiabatic process of an ideal gas derivation (2 answers, 586 views)
I am working through the derivation of an adiabatic process of an ideal gas, $pV^{\gamma}$, and I can't see how to go from one step to the next. Here is my derivation so far, which I understand: ...

### Understanding mathematically the free expansion process of an ideal gas (3 answers, 184 views)
I'm trying to understand mathematically why, for the free expansion of an ideal gas, the internal energy $E$ depends only on temperature $T$ and not volume $V$. In the free expansion process the ...

### Working out the mean velocity of particles in a gas (1 answer, 49 views)
I'm trying to answer the following question: Air consists of molecules of oxygen (molecular mass = 32 amu) and nitrogen (molecular mass = 28 amu). Calculate the two mean translational kinetic ...

### Is it possible to add heat to a monoatomic ideal gas without increasing entropy? [closed] (0 answers, 93 views)
The Sackur-Tetrode equation expresses the entropy of a monoatomic ideal gas: [Equation from HyperPhysics]

### Using thermodynamics and kinematics together to solve a parachuter problem? (0 answers, 58 views)
I need to find a parachutist's displacement after a given height (nearly 37000 m) and at a given latitude. I have his mass, area, parachute area, drop height, parachute deployment height, data about ...

### Problem evaluating moles in an isochoric transformation (1 answer, 54 views)
I have a problem with an isochoric transformation. My study group and I performed an experiment to check Gay-Lussac's law. We recorded the equilibrium states and fitted $P = nRT / V$, ...

### Thermodynamic process when nebula is heated (1 answer, 67 views)
The basic thermodynamics problem is stated as follows. The nebula contains a very tenuous gas of a given number density (atoms per volume) that is being heated to a given temperature. What is the ...

### How to deduce E=(3/2)kT? (3 answers, 2k views)
It says in my course notes that a particle has so-called "kinetic energy" $E=\frac{3}{2}kT=\frac{1}{2}mv^2$. Where does this formula come from? What is k?

### Velocity of real gas molecules? (1 answer, 172 views)
It is known that the velocity of ideal gas molecules can be computed using the Maxwell-Boltzmann law of distribution of molecular velocities, with the average velocity given as: ...

### Will ideal gas law apply to plasma? (2 answers, 160 views)
I have read that plasma is a state of matter that resembles gas but consists of ions and electrons coexisting. So my question is: if plasma is just ionized gas, will the ideal gas law apply to it?

### Is it possible to find the number of gas atoms/molecules in a box when the number is small? (2 answers, 79 views)
Given a very low number of particles in a system (e.g. in the hundreds), is there a way to accurately measure the number of particles in the system? Assume temperature, pressure and volume are constant and ...

### Calculating work done on an ideal gas (2 answers, 783 views)
I am trying to calculate the work done on an ideal gas in a piston setup where temperature is kept constant. I am given the volume, pressure and temperature. I know from Boyle's law that volume is ...

### Adiabatic expansion [closed] (1 answer, 359 views)
I'll start off by saying this is homework, but I ask because I don't understand how the math should work (I don't just want an answer, I'd like an explanation if possible). I understand if this is ...

### Performing work on a box of gas by lifting it, and the first law of thermodynamics (1 answer, 49 views)
What happens if we lift a box of ideal gas? Work is done on the box but no heat is getting into it. So does its internal energy increase by the amount of work done? Or is it that lifting is not ...

### Ideal gas and diatomic gas with same temperature (1 answer, 128 views)
If a box of ideal gas and another box of diatomic gas are in thermal equilibrium, does it mean that the average translational energy of an ideal gas particle (A) is the same as that of a diatomic gas ...

### Work Done by an Adiabatic Expansion (1 answer, 1k views)
I am given the information that a parcel of air expands adiabatically (no exchange of heat between the parcel and its surroundings) to five times its original volume, and its initial temperature is 20°C. ...

### Work Done in an Isobaric Process (1 answer, 585 views)
I am given the information that an air parcel undergoes isobaric heating from 0°C to 20°C, and that's all I'm given. I have to determine the work done by the parcel on its surroundings. I know that ...

### Which heated, partially filled bottle will explode first? (1 answer, 88 views)
This is in reference to a pasteurization discussion on a homebrewing forum.
I have four closed bottles which will explode if they contain too much pressure. Two of them are 50% full (A and B), and two ...

### Ideal gas law, pressure increase and temperature (1 answer, 1k views)
If I had a container full of air, and I suddenly decreased the volume of the container, forcing the air into a smaller volume, would it be considered compression, and would it result in an increase in ...

### Work on ideal gas by piston (1 answer, 668 views)
Imagine a thermally insulated cylinder containing an ideal gas closed at one end by a piston. If the piston is moved rapidly, so the gas expands from $V_i$ to $V_f$, the expanding gas will do work ...

### speed of sound and the potential energy of an ideal gas; Goldstein derivation (1 answer, 165 views)
I am looking at the derivation of the speed of sound in Goldstein's Classical Mechanics (sec. 11-3, pp. 356-358, 1st ed). In order to write down the Lagrangian, he needs the kinetic and potential ...

### Difference in vertical stratification of partial pressure due to gravity (1 answer, 168 views)
Say you have a mixture of two ideal gases in the presence of gravity. There is a vertical pressure gradient on the mixture due to the force balance. This condition is required to prevent the entire ...

### Is it Possible to have Adiabatic Processes other than $PV^\gamma$ for the ideal Gas? (2 answers, 243 views)
Is it possible to represent an adiabatic process for an ideal gas by a formula other than $PV^\gamma=\text{const}$? Relevant considerations: we always need to connect a pair of arbitrary points/states ...

### Calculate mass of air in a tyre from pressure (1 answer, 616 views)
How can one calculate the mass of air inside a tyre, given a particular tyre size; a pressure, in $\mathrm{kPa} = \frac{1000\,\mathrm{kg}}{\mathrm{m}\cdot \mathrm{s}^2}$; and assuming room temperature and normal air composition? I can't ...

### May molecules of ideal gases have an inner structure? (2 answers, 101 views)
The following question is probably very elementary: may molecules of ideal gases have optical properties? As far as I understand, when one discusses optical properties, one assumes that molecules ...

### Why are volume and pressure inversely proportional to each other? (4 answers, 6k views)
It makes sense that if you have a balloon and press it down with your hands, the volume will decrease and the pressure will increase. This confirms Boyle's law, $pV=k=nRT$. But what if the ...

### What is the pressure drop in a venturi with a compressible fluid? (1 answer, 280 views)
I would like to know if there is an equation to predict the pressure drop in a venturi device using a compressible fluid as the working medium. In particular, I'd like to use this equation to predict ...

### Centrifugal Compressor Flow Rate (0 answers, 154 views)
For a centrifugal compressor, as found in most turbochargers on internal combustion engines, is there a noticeable change in flow rate versus a naturally aspirated flow rate? In other words, does the ...

### Black body balloon in vacuum [closed] (1 answer, 216 views)
The problem statement, all variables and given/known data: There is a perfectly spherical balloon with its surface painted black. It is placed in a perfect vacuum. It is gently inflated with an ideal ...
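Many of these questions reduce to two formulas: the ideal gas law PV = nRT and the RMS speed v_rms = sqrt(3RT/M). A small sketch (my own illustration with representative numbers, not from any of the questions):

```python
import math

R = 8.314462618          # molar gas constant, J/(mol*K)

# Ideal gas law, PV = nRT: pressure of 1 mol in a 1 m^3 box at 293.15 K.
n, V, T = 1.0, 1.0, 293.15
P = n * R * T / V
print(f"P = {P:.1f} Pa")             # ~2437.8 Pa

# RMS speed, v_rms = sqrt(3RT/M), for N2 (M = 0.028 kg/mol) at 293.15 K.
M = 0.028
v_rms = math.sqrt(3 * R * T / M)
print(f"v_rms = {v_rms:.0f} m/s")    # ~511 m/s
```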
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9281893968582153, "perplexity_flag": "middle"}
http://www.openwetware.org/index.php?title=Dunn:Tools/Wiki_Tutorial&diff=407702&oldid=405570
Dunn:Tools/Wiki Tutorial
From OpenWetWare
Quick Intro
• This page needs a lot of work, but touches on a lot of the tools that might be useful to the people that use OWW. I basically gathered information from a lot of different places on the internet and put it together here. Most of the sources are linked, so if you're looking for something that isn't mentioned here, it might be a good idea to visit that site directly.
• Anyone on OWW is more than free to edit this page, but please leave your signature when you do decide to modify it. If we end up with something that is super helpful in a few months, maybe we'll try to make a polished version for the OWW main page. --Ramalldf

Setting up your User page
• The OpenWetWare (OWW) site has a great page on creating this.
• This other lab has a good list of things to try when setting your page up.

Editing Existing Pages

Adding Headers and Bullets
• For me, the best way to post notes on the wiki is by using bullets. Typing a star (shift+8) at the beginning of a new line will give you a new bullet.
• Typing a double star underneath a line with a bullet will give you indented bullets:
• <----- See?
• You can continue this trend as you please.
• The headings and subheadings within a page are arranged in the same way with = > == > ===
• Notice how the wiki will automatically start placing shortcut links at the top of the page for each header/subheader that you create (if the page requires a lot of scrolling)!

Storing/Accessing Files
• Before going into the details of how to do the following, I would like to suggest that before you hit the upload/submit button you either:
• Rename your file so that others can identify it as being yours (a date might help), or
• Use the comments box to leave this type of information so that it is easily identifiable. For papers, a summary might be good here as well.

Uploading
• Click on the 'upload file' link on the left sidebar (under 'toolbox').
• Follow the instructions onscreen to upload the paper. Read the information here, as it will cover most of the tips that I highlight below (also, before you hit the 'upload' button, notice that there is a little 'summary' box where you can describe the file; this might be handy for searching through files later on). Now keep track of the title of the file after it has been uploaded, ie. image:Koch et al. 2002.pdf, so that you can call the file when you actually want to use it.

Accessing Files
• A great demo of how to place a link that will access your file (or even display the file) is found here.
• Most things that you upload will have the word 'image' in front of them. Only picture files should retain the word 'image' if you want to post them on a page. For any other file (ie. .doc, .xls, .pdf etc...) use the word 'media' in front (type it in manually when you want to retrieve the file on a page); for example 'media:Koch et al. 2002.pdf' will work appropriately.
• Now go back to the page where you wanted to insert the file and edit the page. Use double brackets [[]] to enclose the name of the file that you uploaded. A good example of how to do this is found here (under the 'storing files' heading).
• Notice the use of the | character (shift + the button right above the enter button on the keyboard) whenever you want to add text to the link. It can also do a few other things for images, like adjusting size and position on the page.
• After uploading the paper as you wanted, click 'save' at the bottom of the screen. An uploaded paper will look like this if you didn't link specific text to the file (I uploaded one of Alex's papers to show an example):
• Media:Dunnnsmb.pdf.
• Here is the same file with text linked. Click on 'edit' up top if you'd like to see what I typed to get different results (copying and pasting the text that someone else wrote on a page is a good approach to making the wiki look the way someone else made it look, by the way).

Linking to websites
• Simply copy the address that you want to link and paste it inside single brackets.
• ie. [1], or add a word right after the address but before the second bracket (separated by a space only) to link the address to a specific word:
• ie. see?

Making Comments on a Page
• Identifying a comment by a specific person is as easy as typing four consecutive '~' characters. This will give the name of the user and the date when the comment was made (three of them will give only the name), ie:
• Ramalldf: Just the name
• Ramalldf 23:08, 8 May 2009 (EDT): Name and date.
• I'd strongly encourage using this when editing a page created by someone else, as it makes it very easy to find the changes made to a page by someone else. It is also seen as common etiquette on OWW to place your signature next to your changes.

Adding Mathematical Formulas
• I haven't played around with this much, but it looks very straightforward. Here's a page that covers it in depth. Here's an example: $\int_{1}^{3}\frac{e^{3/x}}{x^2}\, dx$
• To make things easier for you, in the edit box there's a 'math formula' button where the bold and italics buttons are.

Embedding Apps
• You can add things like calendars and documents to a page. It is easiest when an online application gives you the direct code to embed the app. For example, our lab has youtube, scribd, and google accounts, all of which offer code to embed files. The simplest way to embed the file is to simply paste the ENTIRE embed code in between the opening and closing commands. For example:
• By flanking this entire code with <center> and </center> you can center the app on the screen.
• The Dunn lab website uses the embedding feature a lot, so you can look at different pages and see things like pdfs, calendars, and powerpoints embedded as well.
• An additional feature that you can use is the iframe feature, which allows you to embed entire webpages onto yours. I don't use it very much and am not a pro at it, but Anthony Salvagno from the Koch lab has mastered this feature, so you can check out his notebook and get good tips about how to do this (ie. this Friendfeed page).

Pubmed Footnoting
• Ramalldf 01:31, 1 July 2009 (EDT): This is really cool! It will basically place all of the relevant information about the paper you're citing simply by entering the PubMed ID # in the way that I have below [1]. Thus, with the first brackets you're basically placing the footnote #, and with the second set of brackets you tell it where you want the citation to go (in my example, under the references section). I wouldn't play around with trying to arrange the numbering at this point (notice how it may not be in order?); it can get really messy really quickly.
As long as the character or word that you assign the citation to matches the one you want in the 'biblio' brackets, you should get the correct citation.

References
1. Pierobon P, Achouri S, Courty S, Dunn AR, Spudich JA, Dahan M, and Cappello G. pmid:19450497. [Diego]

Creating new pages and templates

New Pages
• You can create a page by editing any existing page and doing the following. It will only become a subpage if you assign it to be one, as described below, but until I find a way to create a new page (ie. like a sandbox?) without having to go and edit a random page, you should do the following:
• So I've found out that you can create pages in the following way:
• Two of these brackets [[ ]] with a word in the middle will create a new page wherever you are. Unfortunately, the wiki has the potential to become HUGE, so in order to keep track of the locations of these pages you should try to create a link back to the page where this one originated from.
• In order to do this you must enter the name of the new page with a "/" mark in between it and the page that brought you to it. For example: Lab News/Editing wiki
• "Editing wiki" is the name of my new page. It will be red until I click on it and begin typing stuff and saving it.
• If I want the link to not look like "Lab News/Editing wiki" but instead want the link to have another word leading to it, then I will put this "|" sign after the name of my new page and type the word after it, ie.: see?
• This type of writing will work on any page that you want, and even on the toolbar if you decide to change it (you must remember the name of the page, of course, because if it is not the same, then the wiki will create a new one).
• You can link/access a page that you want either by copying down the url and inserting it, or just by placing the name of the page in brackets.

Templates
• Templates are preset components of a page that you can immediately insert into a page without having to type out all of the individual commands every time (ie. the toolbar and green frame on our site). You can create a template simply by typing what you want the name of the template to be inside curly brackets {{}}.
• This will appear in red and basically looks like a new page that you can edit any way you want (so it can include text and/or pictures).
• A great example of a template is the main menu template (notice I used the url to retrieve it instead of using the curly brackets, because the curly brackets would've put the actual template on the page, which is not what I want to do).
• Anyway, the following is some advice about editing those. I don't know that much about these, but this is how I learned how to use the toolbar template, which I found on the openwetware public wiki somewhere.

Colors
• For a lot of templates you can adjust the size and colors of the components belonging to that template (ie. the width and color of a box).
• The colors that are available can be found in these links. Typing the name or number of the color as it appears on that page will allow you to change the color to exactly that one (I think it's case-sensitive).

Template Toolbar
• Having a toolbar on every page you designate, like we have above, can be very useful for accessing pages easily. I would be very careful about making sure that the links you create on the toolbar correspond to the pages that you find elsewhere on the site. Take a look at how I made mine by viewing the code for our main page.
Gadgets
• The wikEd gadget on OpenWetWare can be quite useful because it automatically inserts the necessary commands around a highlighted set of characters to accomplish many things (e.g. underlining). I was able to add a whole bunch of tools to my edit toolbar. Here is how to install it: http://en.wikipedia.org/wiki/User:Cacycle/wikEd_installation#On-wiki_installation_code and here is a description of all of the buttons: http://en.wikipedia.org/wiki/User:Cacycle/wikEd_help

Final Notes on Editing the Wiki
• A page can get completely lost if you don't place its link somewhere visible or if you don't make the new page a subpage of an existing one.
• Try playing around with it. Here's a page that describes some cool things to try out, and here's another tutorial.
• Not all pages will have the easy-to-navigate main menu template or mini menu templates (though you can add them manually to your pages), so if you get lost somehow, the best way to get back to where you were is to go back to the main page or to the 'Recent Changes' link (both of which are at the top left of the sidebar).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236778616905212, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/189270-function-defined-why.html
# Thread:

1. ## The function is defined: why?

I am looking at the graph of the function f(x) = (x-1)/(x^2-1) produced by the program "Graph" by Ivan Johansen. Why is this rational function defined at x = 1 as the limiting value 0.5? Is this a convention for it to be defined as the limit? Or am I crazy?

2. ## Re: The function is defined: why?

Well, you are only crazy if you expect a computer to be as good as your brain! What is graphed there is NOT the function you give. What is true is that $\frac{x-1}{x^2-1}= \frac{x-1}{(x-1)(x+1)}$, which is not defined at x = 1, and so its graph should have no point on the vertical line at x = 1. However, for all x except x = 1, $\frac{x- 1}{(x-1)(x+1)}= \frac{1}{x+1}$, which is defined at x = 1. The computer, calculating values at a finite number of values of x, "misses" the problem at x = 1 and just graphs y = 1/(x+1). The graph is wrong.

3. ## Re: The function is defined: why?

Thanks for clarifying.
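You can reproduce the plotting program's blind spot numerically. Below is a minimal sketch (plain NumPy; the function name `f` is just for illustration): sampled values straddle the removable singularity and approach 0.5, while evaluating at exactly x = 1 yields 0/0, i.e. `nan`.

```
import numpy as np

def f(x):
    return (x - 1) / (x**2 - 1)

xs = np.array([0.9, 0.99, 0.999, 1.001, 1.01, 1.1])
print(f(xs))                   # values approach 0.5 from both sides
with np.errstate(invalid="ignore"):
    print(f(np.array([1.0])))  # 0/0 -> nan: the function is undefined at x = 1
```

Unless a sample point lands exactly on x = 1, a plotter that connects sampled points will draw the smooth curve y = 1/(x+1) straight through the hole.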
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945636510848999, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/3471/are-the-asymptotics-of-fourier-coefficients-to-periodic-solutions-of-ode-known
## Are the asymptotics of Fourier coefficients to periodic solutions of ODE known?

The Van der Pol equation, given by $$x'' + x = g x' (1 - x^2),$$ has periodic solutions $x(t)$, with the period $T(g)$ depending on the parameter. Thus, one can expand $x(t)$ as a Fourier series with coefficients $a_n(g)$ also depending on $g$.

Question: Can one find an asymptotic formula for $a_n(g)$, as $g \to\infty$? For example, the asymptotic formula for the period is well known: $$T(g) \sim g [ (3 - \log 4) + O(g^{-4/3}) ].$$

- I believe it should be $x''+x=gx'(1-x^2)$. At least, that is what one usually means by the Van der Pol equation. – fedja Nov 3 2009 at 2:33
- Yes. Thanks for noticing. – Ricardo Nov 9 2009 at 1:37

## 2 Answers

This sounds like a homework problem :) By which means are we allowed to derive the form of $a_n(g)$? What is the asymptotic formula to be used for?

- No, not a homework problem. I'm working (with some colleagues) on improved perturbation theory methods (for large parameters) that use the asymptotics of whatever one wants to calculate. The method works well for calculating the period of Van der Pol solutions, for instance, or the energy levels of anharmonic oscillators (or even wave functions). It works OK for the actual solutions of Van der Pol, but the problem is that the asymptotics are not known (at least to us). To answer your first question: you can use whatever it takes to calculate the asymptotics of $a_n(g)$. The second: I don't know. – Ricardo Nov 11 2009 at 1:32

I am quite sure I found a perturbative solution to this during graduate school; what you are asking for is the Fourier representation of that. Are you familiar with the Method of Dominant Balance?

- Perturbative solutions to the Van der Pol equation are known. However, we are interested in large values of the parameter $g$, where those methods don't work. – Ricardo Nov 11 2009 at 9:00
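Although no closed-form asymptotics are given in the thread, the coefficients are easy to probe numerically for moderate $g$. Here is a hedged sketch (SciPy; the tolerances, initial condition, and value of $g$ are all arbitrary assumptions) that relaxes onto the limit cycle, estimates the period from late zero crossings, and computes one Fourier coefficient by quadrature:

```
import numpy as np
from scipy.integrate import solve_ivp

g = 5.0                              # moderately large; the question concerns g -> infinity

def vdp(t, y):                       # x'' + x = g x'(1 - x^2) as a first-order system
    x, v = y
    return [v, g * v * (1 - x**2) - x]

T0 = g * (3 - np.log(4))             # leading-order period for large g
t = np.linspace(0, 30 * T0, 300001)  # run long enough to settle onto the limit cycle
sol = solve_ivp(vdp, (t[0], t[-1]), [2.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)
x = sol.y[0]

# bracket one full period between the last two upward zero crossings
up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
i0, i1 = up[-2], up[-1]
T = t[i1] - t[i0]

# n-th complex Fourier coefficient a_n over that period (trapezoidal quadrature)
n = 3
seg_t, seg_x = t[i0:i1 + 1], x[i0:i1 + 1]
phase = np.exp(-2j * np.pi * n * (seg_t - seg_t[0]) / T)
a_n = np.trapz(seg_x * phase, seg_t) / T
print(T, T0, abs(a_n))               # T should be close to the leading-order estimate
```

Sweeping this over a range of $g$ values gives empirical data against which a conjectured asymptotic formula for $a_n(g)$ could be checked.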
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321585893630981, "perplexity_flag": "head"}
http://blog.aggregateknowledge.com/tag/sketching/
AK Tech Blog
Everything You Need to Know

## Doubling the Size of an HLL Dynamically – Extra Bits…

April 30, 2013 By

Author's Note: This post is related to a few previous posts on the HyperLogLog algorithm. See Matt's overview of the algorithm, and see this for an overview of "folding" or shrinking HLLs in order to perform set operations. It is also the final post in a series on doubling the number of bins in HLLs. The first post dealt with the recovery time after doubling, and the second dealt with doubling's accuracy when taking unions of two HLLs.

### Introduction

The main draw to the HyperLogLog algorithm is its ability to make accurate cardinality estimates using small, fixed memory. In practice, there are two choices a user makes which determine how much memory the algorithm will use: the number of registers (bins) and the size of each register (how high they can count). As Timon discussed previously, increasing the size of each register will only increase the accuracy if the true cardinality of the stream is HUGE.

Recall that HyperLogLog (and most other streaming algorithms) is designed to work with a fixed number of registers, $m$, which is chosen as a function of the expected cardinality to approximate. We track a great number of different cardinality streams, and in this context it is useful for us not to have one fixed value of $m$, but to let it evolve with the needs of a given estimation.

We are thus confronted with many engineering problems, some of which we have already discussed. In particular, one problem is that the neat feature of sketches, namely that they allow for an estimate of the cardinality of the union of multiple streams at no cost, depends on having sketches of the same size. We've discussed how to get around this by folding HLLs, though with some increase in error. We've also explored a few options on how to effectively perform a doubling procedure. However, we started to wonder if any improvements could be made by using just a small amount of extra memory, say an extra bit for each register. In this post we will discuss one such idea and its use in doubling. Note: we don't talk about quadrupling or more. We limit ourselves to the situation where the two HLL sketches' register counts differ by a factor of exactly 2.

### The Setup

One of the downfalls of doubling is that there is no way to know, after doubling, whether a value belongs in its bin or its partner bin. Recall that a "partner bin" is the register that could have been used had our "prefix" (the portion of the hashed value which is used to decide which register to update) been one bit longer. If the binary representation of the bin index used only two bits of the hashed value, e.g. $01$, then in an HLL that used a three-bit index, the same hashed value could have been placed in the bin whose index is either $101$ or $001$. Since $001$ and $01$ are the same number, we call $101$ the "partner bin". (See the "Key Processing" section in Set Operations On HLLs of Different Sizes.)

Consider an example where we have an HLL with $2^{10}$ bins. The $k^{th}$ bin has the value 7 in it, and after doubling we guess that its partner bin, at index $2^{10} + k$, should have a 5 in it. It is equally likely that the $k^{th}$ bin should have the 5 in it and the $(2^{10}+k)^{th}$ bin should have the 7 in it (since the "missing" prefix bit could have been a 1 or a 0)! Certainly the arrangement doesn't change the basic cardinality estimate, but once we start getting involved with unions, the arrangement can make a very large difference.
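Before the worked example, here is a minimal sketch of the indexing involved (a hypothetical helper in Python; whether the index bits are taken from the top or bottom of the hash is an implementation detail, and this sketch uses the low bits):

```
def bin_and_partner(hashed_value, p):
    # register index in a 2**p-bin HLL, and the "partner" index the same
    # hash would map to in a 2**(p+1)-bin HLL if the extra prefix bit were 1
    index = hashed_value & ((1 << p) - 1)  # low p bits, e.g. 0b01
    partner = index | (1 << p)             # same bits with a 1 in front, e.g. 0b101
    return index, partner

print(bin_and_partner(0b110101, 2))  # (1, 5): binary 01 and 101, as in the text
```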
To see how drastic the consequences can be, let's look at a simple example. Suppose we start with an HLL with 2 bins and get the value 6 in each of its bins. Then we run the doubling procedure and decide that the partner bins should both have 1's in them. With this information, it is equally likely that both of the arrangements below, "A" and "B", could be the "true" larger HLL.

Further suppose we have some other data with which we wish to estimate the union. Below, I've diagrammed what happens when we take the union.

Arrangement A leads to a cardinality estimate (of the union) of about 12 and Arrangement B leads to a cardinality estimate (of the union) of about 122. This is an order of magnitude different! Obviously not all cases are this bad, but this example is instructive. It tells us that knowing the true location of each value is very important. We've attempted to improve our doubling estimate by keeping an extra bit of information, as we describe below.

### The Algorithm

Suppose we have an HLL with $m$ bins. Let's keep another array of data which holds $m$ total bits, one for each bin — we will call these the "Cached Values." For each bin, we keep a 0 if the value truly belongs in the bin in which it was placed (i.e. if, had we run an HLL with $2m$ bins, the value would have been placed in the first $m$ bins of the HLL), and we keep a 1 if the value truly belongs in the partner bin of the one in which it was placed (i.e. if, had we run an HLL with $2m$ bins, the value would have been placed in the last $m$ bins). See the image below for an example.

Here we see two HLLs which have processed the same data. The one on the left is half the size and collects the cached values as it runs on the data. The one on the right is simply the usual HLL algorithm run on the same data. Looking at the first row of the small HLL (with $m$ bins), the $0$ cache value means that the 2 "belongs" in the top half of the large HLL, i.e. if we had processed the stream using a larger HLL the 2 would be in the same register.

Essentially this cached bit lets you know exactly where the largest value in a bin was located in the larger HLL: if the $i^{th}$ bin has value $V$ and cached value $S$, we place the value $V$ in the $(S \cdot 2^{\log_2 m} + i)^{th} = (S\cdot m + i)^{th}$ bin. In practice, when we double, we populate the doubled HLL first with the (now correctly located) bin values from the original HLL, then we fill the remaining bins by using our "Proportion Doubling" algorithm.

Before we begin looking at the algorithm's performance, let's think about how much extra space this requires. In our new algorithm, notice that for each bin, we keep around either a zero or a one as its cached value. Hence, we require only one extra bit per bin to accommodate the cached values. Our implementation of HLL requires 5 bits per bin, since we want to be able to include values up to $2^5 -1= 31$ in our bins. Thus, a standard HLL with $m$ bins requires $5m$ bits. Hence, this algorithm requires $5m + m = 6m$ bits (with the extra $m$ bits representing the cached values). This implies that this sketch requires 20% more space.

### The Data

Recall that in the last post in this series, we explored doubling with two main strategies: Random Estimate (RE) and Proportion Doubling (PD). We did the same here, though using the additional information from this cached bit. We want to know a few things:

• Does doubling using a cache bit work? i.e.
is it better to fold the bigger one or double the smaller one when comparing HLLs of different sizes?
• Does adding in a cache bit change which doubling strategy is preferred (RE or PD)?
• Does the error in the union estimate depend on intersection size, as we have seen in the past?

Is it better to double or fold?

For each experiment we took 2 sets of data (each generated from 200k random keys) and estimated the intersection size between them using varying methods.

• "Folded": estimate by filling up an HLL with $\log_{2}(m) = 10$ and comparing it to a folded HLL, starting from $\log_{2}(m) = 11$ and folded down to $\log_{2}(m) = 10$.
• "Large": estimate by using two HLLs of the larger size, $\log_{2}(m) = 11$. This is effectively a lower bound for our doubling approaches.
• "Doubled – PD": estimate by taking an HLL of $\log_{2}(m) = 10$ and doubling it up to $\log_{2}(m) = 11$ using the Proportion Doubling strategy. Once this larger HLL is approximated, we estimate the intersection with another HLL of native size $\log_{2}(m) = 11$.
• "Doubled – RE": estimate by taking an HLL of $\log_{2}(m) = 10$ and doubling up to $\log_{2}(m) = 11$ using the Random Estimate strategy.

We performed the experiment 300 times at varying intersection sizes, from 0 up to 200k (100%) overlapping elements between the sets (in steps of 10k). The plots below show our results (and extrapolate between points).

The graph of the mean error looks pretty bad for Random Estimate doubling. Again we see that the error depends heavily on the intersection size and becomes more biased as the sets overlap more. On the other hand, Proportion Doubling was much more successful (recall that this strategy forces the proportion of bins in the to-be-doubled HLL and the HLL with which we will union it to be equal before and after doubling). It's possible there is some error bias with small intersections, but we would need to run more trials to know for sure. As expected, the "Folded" and the "Large" are centered around zero.

But what about the spread of the error? The Proportion Doubling strategy looks great! In my last post on this subject, we found that this doubling strategy (without the cached part) really only worked well in the large intersection size regime, but here, with the extra cache bits, we seem to avoid that. Certainly the large intersection regime is where the standard deviation is lowest, but for every intersection size it is significantly lower than that of the smaller HLL. This suggests that one of our largest sources of error when we use doubling in conjunction with unions is related to our lack of knowledge of the arrangement of the bins (i.e. when doubling, we do not know which of the two partner bins gets the larger, observed value).

So it appears that the strategy of keeping cache bits around does indeed work, provided you use a decent doubling scheme. Interestingly, it is always much better to double a smaller cached HLL than to fold a larger HLL when comparing sketches of different sizes. This is represented above by the doubled HLL's lower error relative to the small HLL's. The error bounds do seem to depend on the size of the intersection between the two sets, but this will require more work to really understand, especially in the case of Proportion Doubling.

Notes: In this work we focus solely on doubling an HLL sketch and then immediately using this new structure to compute set operations.
It would be interesting to see if set operation accuracy changes as a doubled HLL goes through its "recovery" period under varying doubling methods. It is our assumption that nothing out of the ordinary would come of this, but we definitely could be wrong. We will leave this as an exercise for the reader.

### Summary

We've found an interesting way of trading space for accuracy with this cached bit method, but there are certainly other ways of using an extra bit or two (per bucket). For instance, we could keep more information about the distribution of each bin by keeping a bit indicating whether or not the bin's value minus one has been seen. (If the value is $k$, keep track of whether $k-1$ has shown up.) We should be able to use any extra piece of information about the distribution or position of the data to help us obtain a more accurate estimate. Certainly, there is a myriad of other ways of storing a bit or two of extra information per bin in order to gain a little leverage — it's just a matter of figuring out what works best. We'll be messing around more with this in the coming weeks, so if you have any ideas about what would work best, let us know in the comments!

(P.S. A lot of our recent work has been inspired by Flajolet et al.'s paper on PCSA – check out our post on this here!)

Thanks to Jeremie Lumbroso for his kind input on this post. We are much indebted to him and hopefully you will see more from our collaboration.

Filed Under: Data Science, General

## Sketch of the Day: Probabilistic Counting with Stochastic Averaging (PCSA)

April 2, 2013 By

Before there was LogLog, SuperLogLog or HyperLogLog there was Probabilistic Counting with Stochastic Averaging (PCSA) from the seminal work "Probabilistic Counting Algorithms for Data Base Applications" (also known as the "FM Sketches" due to its two authors, Flajolet and Martin). The basis of PCSA matches that of the other Flajolet distinct value (DV) counters: hash values from a collection into binary strings, use patterns in those strings as indicators for the number of distinct values in that collection (bit-pattern observables), then use stochastic averaging to combine $m$ trials into a better estimate. Our HyperLogLog post has more details on these estimators as well as stochastic averaging.

### Observables

The choice of observable pattern in PCSA comes from the knowledge that in a collection of randomly generated binary strings, the following probabilities occur:

$\begin{aligned} P( \ldots 1) &= 2^{-1} \\ P( \ldots 10) &= 2^{-2} \\ P( \ldots 100) &= 2^{-3} \\ &\ \ \vdots \\ P( \ldots 1 0^{k-1}) &= 2^{-k} \end{aligned}$

For each value added to the DV counter, a suitable hash is created and the position of the least-significant (right-most) 1 is determined. The corresponding position in a bitmap is updated and stored. I've created the simulation below so that you can get a feel for how this plays out.

Click above to run the bit-pattern simulation

(All bit representations in this post are numbered from 0 (the least-significant bit) on the right. This is the opposite of the direction in which they're represented in the paper.)

Run the simulation a few times and notice how the bitmap is filled in. In particular, notice that it doesn't necessarily fill in from the right side to the left — there are gaps that exist for a time that eventually get filled in. As the cardinality increases there will be a block of 1s on the right (the high probability slots), a block of 0s on the left (the low probability slots) and a "fringe" (as Flajolet et al.
called it) of 1s and 0s in the middle. I added a small pointer below the bitmap in the simulation to show how the cardinality corresponds to the expected bit position (based on the above probabilities).

Notice what Flajolet et al. saw when they ran this same experiment: the least-significant (right-most) 0 is a pretty good estimator for the cardinality! In fact, when you run multiple trials you see that this least-significant 0 for a given cardinality has a narrow distribution. When you combine the results with stochastic averaging, it leads to a small relative error of $0.78 / \sqrt{m}$ and estimates the cardinality of the set quite well. You might also have observed that the most-significant (left-most) 1 can be used as an estimator for the cardinality, but it isn't as clear-cut. That value is exactly the observable used in LogLog, SuperLogLog and HyperLogLog, and it does in fact lead to the larger relative error of $1.04 / \sqrt{m}$ (in the case of HLL).

### Algorithm

The PCSA algorithm is elegant in its simplicity (the helpers `hashed`, `get_bitmap_index`, `run_of_zeros` and `least_sig_zero` are left abstract, as in the original pseudocode):

```
m = 2**b                                   # with b in [4...16]
bitmaps = [[0] * 32 for _ in range(m)]     # initialize m 32-bit-wide bitmaps to 0s

# Construct the PCSA bitmaps
for h in hashed(data):
    bitmap_index = get_bitmap_index(h, b)  # bitmap address: the rightmost b bits
    run_length = run_of_zeros(h, b)        # length of the run of zeroes starting at bit b+1
    bitmaps[bitmap_index][run_length] = 1  # set the bitmap bit for the observed run length

# Determine the cardinality
phi = 0.77351
R_bar = sum(least_sig_zero(bm) for bm in bitmaps) / m  # mean position of the least-significant 0
DV = m / phi * 2**R_bar                    # the DV estimate
```

Stochastic averaging is accomplished via the arithmetic mean.

You can see PCSA in action by clicking on the image below.

Click above to run the PCSA simulation

There is one point to note about PCSA and small cardinalities: Flajolet et al. mention that there are "initial nonlinearities" in the algorithm which result in poor estimation at small cardinalities ($n/m \approx 10 \, \text{to} \, 20$) which can be dealt with by introducing corrections, but they leave it as an exercise for the reader to determine what those corrections are. Scheuermann et al. did the leg work in "Near-Optimal Compression of Probabilistic Counting Sketches for Networking Applications" and came up with a small correction term (see equation 6). Another approach is to simply use the linear (ball-bin) counting introduced in the HLL paper.

### Set Operations

Just like HLL and KMV, unions are trivial to compute and lossless. The PCSA sketch is essentially a "marker" for runs of zeroes, so to perform a union you merely bit-wise OR the two sets of bitmaps together. Folding a PCSA down to a smaller $m$ works the same way as in HLL, but instead of HLL's max you bit-wise OR the bitmaps together. Unfortunately, for intersections you have the same issue as HLL: you must perform them using the inclusion/exclusion principle. We haven't done the plots on intersection errors for PCSA, but you can imagine they are similar to HLL (and have the benefit of the better relative error $0.78 / \sqrt{m}$).

### PCSA vs. HLL

The fact that PCSA has a better relative error than HyperLogLog with the same number of registers ($1.04 / 0.78 \approx 1.33$) is slightly deceiving, in that the $m$'s (the numbers of stored observations) are different sizes. A better way to look at it is to fix the accuracy of the sketches and see how they compare.
If we would like to have the same relative error from both sketches we can see that the relationship between registers is:

$\text{PCSA}_{RE} = \text{HLL}_{RE}$

$\dfrac{0.78}{\sqrt{m_{\scriptscriptstyle PCSA}}} = \dfrac{1.04}{\sqrt{m_{\scriptscriptstyle HLL}}}$

$m_{\scriptscriptstyle PCSA} = \left( \dfrac{0.78}{1.04} \right)^2 m_{\scriptscriptstyle HLL} \approx 0.563 \ m_{\scriptscriptstyle HLL}$

Interestingly, PCSA only needs a little more than half the registers of an HLL to reach the same relative error. But this is also deceiving. What we should be asking is what is the size of each sketch if they provide the same relative error? HLL commonly uses a register width of 5 bits to count to billions whereas PCSA requires 32 bits. That means a PCSA sketch with the same accuracy as an HLL would be:

$\begin{aligned} \text{Size of PCSA} &= 32 \text{bits} \ m_{\scriptscriptstyle PCSA} = 32 \text{bits} \, ( 0.563 \ m_{\scriptscriptstyle HLL} ) \\ \\ \text{Size of HLL} &= 5 \text{bits} \ m_{\scriptscriptstyle HLL} \end{aligned}$

Therefore,

$\dfrac{\text{Size of PCSA}} {\text{Size of HLL}} = \dfrac{32 \text{bits} \, ( 0.563 \ m_{\scriptscriptstyle HLL} )}{5 \text{bits} \ m_{\scriptscriptstyle HLL}} \approx 3.6$

A PCSA sketch with the same accuracy is 3.6 times larger than HLL!

### Optimizations

But what if you could make PCSA smaller by reducing the size of the bitmaps? Near the end of the paper, in the Scrolling section, Flajolet et al. bring up the point that you can make the bitmaps take up less space. From the simulation you can observe that with a high probability there is a block of consecutive 1s on the right side of the bitmap and a block of consecutive 0s on the left side of the bitmap, with a fringe in between. If one found the "global fringe" — that is, the region defined by the left-most 1 and right-most 0 across all bitmaps — then only those bits need to be stored (along with an offset value). The authors theorized that a fringe width of 8 bits would be sufficient (though they fail to mention if there are any dependencies on the number of distinct values counted). We put this to the test.

In our simulations it appears that a fringe width of 12 bits is necessary to provide an unbiased estimator comparable to full-fringe PCSA (32-bit) for the range of distinct values we analyzed. (Notice the consistent bias of smaller fringe sizes.) There are many interesting reasons that this "fringe" concept can fail. Look at the notes to this post for more.

If we take the above math and update 32 to 12 bits per register (and include the 32-bit offset value) we get:

$\begin{aligned} \dfrac{\text{Size of PCSA}} {\text{Size of HLL}} &= \dfrac{12 \text{bits} \, ( 0.563 \ m_{\scriptscriptstyle HLL} ) + 32\text{bits}} {5 \text{bits} \cdot m_{\scriptscriptstyle HLL}} \\ \\ &= \dfrac{12 \text{bits} \, ( 0.563 \ m_{\scriptscriptstyle HLL} )} {5 \text{bits} \cdot m_{\scriptscriptstyle HLL}} + \dfrac{32\text{bits}} {5 \text{bits} \cdot m_{\scriptscriptstyle HLL}} \\ \\ &\approx 1.35 \text{ (for }m\gg64 \text{)} \end{aligned}$

This is getting much closer to HLL! The combination of tighter bounds on the estimate and the fact that the fringe isn't really that wide in practice result in PCSA being very close to the size of the much-lauded HLL. This got us thinking about further compression techniques for PCSA. After all, we only need to get the sketch about 1/3 smaller to be comparable in size to HLL.
In a future post we will talk about what happens if you Huffman code the PCSA bitmaps and the tradeoffs you make when you do this.

### Summary

PCSA provides for all of the goodness of HLL: very fast updates making it suitable for real-time use, a small footprint compared to the information that it provides, tunable accuracy, and unions. The fact that it has a much better relative error per register than HLL indicates that it should get more credit than it does. Unfortunately, each bitmap in PCSA requires more space than HLL and you still get less accuracy per bit. Look for a future post on how it is possible to use compression (e.g. Huffman encoding) to reduce the number of bits per bitmap, thus reducing the error per bit to match that of HLL, resulting in an approach that matches HLL in size but exceeds its precision!

### Notes on the Fringe

While we were putting this post together we discovered many interesting things to look at with respect to fringe optimization. One of the questions we wanted to answer was "How often does the limited size of the fringe muck up a bitmap?"

Below is a plot that shows how often any given sketch had a truncation event (that affected the DV estimate) in the fringe of any one of its bitmaps for a given fringe width (i.e. some value could not be stored in the space available). Note that this is an upper bound on the error that could be generated by truncation. If you compare the number of runs that had a truncation event (almost all of the runs) with the error plot in the post, it is quite shocking that the errors are as small as they are.

Since we might not get around to all of the interesting research here, we are calling out to the community to help! Some ideas:

1. There are likely a few ways to improve the fringe truncation. Since PCSA is so sensitive to the least-significant 1 in each bitmap, it would be very interesting to see how different approaches affect the algorithm. For example, in our algorithm we "left" truncated, meaning that all bitmaps had to have a one in the least-significant position of the bitmap in order to move up the offset. It would be interesting to look at "right" truncation. If one bitmap is causing many of the others to not record incoming values, perhaps it should be bumped up. Is there some math to back up this intuition?
2. It is interesting to us that the fringe width truncation events are DV dependent. We struggled with the math on this for a bit before we just stopped. Essentially we want to know: what is the width of the theoretical fringe? It obviously appears to be DV dependent and some sort of coupon collector problem with unequal probabilities. Someone with better math skills than us needs to help here.

### Closing thoughts

We uncovered PCSA again as a way to go back to first principles and see if there are lessons to be learned that could be applied to HLL to make it even better. For instance, can all of this work on the fringe be applied to HLL to reduce the number of bits per register while still maintaining the same precision? HLL effectively records the "strandline" (what we call the left-most 1s). More research into how this strandline behaves, and into whether it is possible to improve its storage through truncation, could reduce the standard HLL register width from 5 bits to 4 — a huge savings! Obviously, we uncovered a lot of open questions with this research and we feel there are algorithmic improvements to HLL right around the corner. We have done some preliminary tests and the results so far are intriguing. Stay tuned!
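Postscript: as a concrete illustration of the Set Operations section above, here is a minimal sketch of how trivial PCSA union and folding are, given the bitmap-of-bits representation from the algorithm listing (plain Python lists; this mirrors the description in the text, not our production code, and it assumes bitmap addresses are the low bits of the hash as in the listing):

```
def pcsa_union(bitmaps_a, bitmaps_b):
    # union of two same-sized PCSA sketches: bit-wise OR of corresponding bitmaps
    return [[x | y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(bitmaps_a, bitmaps_b)]

def pcsa_fold(bitmaps, factor=2):
    # fold a PCSA sketch down to m/factor bitmaps by OR-ing together the
    # bitmaps whose (low-bit) addresses collapse onto the same shorter address
    m = len(bitmaps)
    folded = [[0] * len(bitmaps[0]) for _ in range(m // factor)]
    for i, row in enumerate(bitmaps):
        j = i % (m // factor)  # addresses collapse modulo the smaller m
        folded[j] = [x | y for x, y in zip(folded[j], row)]
    return folded
```

Because OR is idempotent and commutative, the union is lossless: the union sketch is exactly what you would have gotten by running PCSA over the concatenated streams.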
Filed Under: Data Science, General

## Doubling the Size of an HLL Dynamically – Unions

February 19, 2013 By

Author's Note: This post is related to a few previous posts dealing with the HyperLogLog algorithm. See Matt's overview of the algorithm, and see this post for an overview of "folding" or shrinking HLLs in order to perform set operations. It is also the second in a series of three posts on doubling the number of bins of HLLs. The first post dealt with the recovery time after doubling, and the next post will deal with ways to utilize an extra bit or two per bin.

### Overview

Let's say we have two streams of data which we're monitoring with the HLL algorithm, and we'd like to get an estimate of the cardinality of these two streams combined, i.e. thought of as one large stream. In this case, we have to take advantage of the algorithm's built-in "union" feature. Done naively, the accuracy of the estimate will depend entirely on the number of bins, $m$, of the smaller of the two HLLs. In this case, to make our estimate more accurate, we would need to increase the $m$ of one (or both) of our HLLs. This post will investigate the feasibility of doing this; we will apply our idea of "doubling" to see if we can gain any accuracy. We will not focus on intersections, since the only support the HyperLogLog algorithm has for intersections is via the inclusion/exclusion principle, and hence the error can be kind of funky — for a better overview of this, check out Timon's post here. For this reason, we focus only on how the union works with doubling.

### The Strategy: A Quick Reminder

In my last post we discussed the benefits and drawbacks of many different doubling strategies in the context of the recovery time of the HLL after doubling. Eventually we saw that two of our doubling strategies worked significantly better than the others. In this post, instead of testing many different strategies, we'll focus on one strategy, "proportion doubling" (PD), and how to manipulate it to work best in the context of unions.

The idea behind PD is to guess the approximate intersection cardinality of the two datasets and to force that estimate to remain after doubling. To be more specific, suppose we have an HLL $A$ and an HLL $B$ with $n$ bins and $2n$ bins, respectively. Then we check what proportion of bins in $A$, call it $p$, agree with the bins in $B$. When we double $A$, we fill in the bins by randomly selecting $p\cdot n$ bins and filling them in with the values of the corresponding bins in $B$. To fill in the rest of the bins, we fill them in randomly according to the distribution.

### The Naive Approach

To get some idea of how well this would work, I put the most naive strategy to the test. The idea was to run 100 trials where I took two HLLs (one of size $2^5 = 32$ and one of size $2^6 = 64$), ran 200K keys through them, doubled the smaller one (according to Random Estimate), and took a union. I had a hunch that the accuracy of our estimate after doubling would depend on how large the true intersection cardinality of the two datasets was, so I ran this experiment for overlaps of size 0, 10K, 20K, etc. The graphs below are organized by the true intersection cardinality, and each graph shows the boxplot of the error for the trials.

This graph is a little overwhelming and a bit of a strange way to display the data, but it is useful for getting a feel for how the three estimates work in the different regimes.
The graph below is from the same data and just compares the "Small" and "Doubled" HLLs. The shaded region represents the middle 50% of the data, and the blue dots represent the data points.

The first thing to notice about these graphs is the accuracy of the estimate in the small intersection regime. However, outside of this, the estimates are not very accurate – it is clearly a better choice to just use the estimate from the smaller HLL.

Let's try a second approach. Above we noticed that the algorithm's accuracy depended on the cardinality of the intersection. Let's try to take that into consideration. Let's use the "Proportion Doubling" (PD) strategy we discussed in our first post. That post goes more in depth into the algorithm, but the takeaway is that this doubling strategy preserves the proportion of bins in the two HLLs which agree. I ran some trials like I did above to get some data on this. The graphs below represent this.

Here, again, we show the data in a second graph comparing just the "Doubled" and "Small" HLL estimates. Notice how much tighter the middle 50% region is on the top graph (for the "Doubled" HLL). Hence in the large intersection regime, we get very accurate estimates.

One thing to notice about the second set of graphs is how narrow the error bars are. Even when the estimate is biased, it still has much smaller error. Also, notice that this works well in the large intersection regime but horribly in the small intersection regime. This suggests that we may be able to interpolate between our strategies.

The next set of graphs is for an attempt at this. The algorithm gets an estimate of the intersection cardinality, then decides to either double using PD, double using RE, or not double, depending on whether the intersection is large, small, or medium. Here, the algorithm works well in the large intersection regime and doesn't totally crap out outside of this regime (like the second algorithm), but it doesn't sustain the accuracy of the first algorithm in the small intersection regime. This is most likely because the algorithm cannot "know" which regime it is in and thus must make a guess. Eventually, it will guess wrong and severely underestimate the union cardinality. This will introduce a lot of error, and hence our boxplot looks silly in this regime. The graph below shows the inefficacy of this new strategy. Notice that there are virtually no gains in accuracy in the top graph.

### Conclusion

With some trickery, it is indeed possible to gain some accuracy when estimating the cardinality of the union of two HLLs by doubling one. However, in order for this to be feasible, we need to apply the correct algorithm in the correct regime. This isn't a major disappointment, since for many practical cases it would be easy to guess which regime the HLLs should fall under, and we could build in the necessary safeguards in case we guess incorrectly. In any case, our gains were modest but certainly encouraging!

Filed Under: Data Science, General

## HLL Intersections

December 17, 2012 By

### Why?

The intersection of two streams (of user ids) is a particularly important business need in the advertising industry. For instance, if you want to reach suburban moms but the cost of targeting those women on a particular inventory provider is too high, you might want to know about a cheaper inventory provider whose audience overlaps with the first provider's. You may also want to know how heavily your car-purchaser audience overlaps with a certain metro area or a particular income range.
These types of operations are critical for understanding where and how well you're spending your advertising budget.

As we've seen before, HyperLogLog provides a time- and memory-efficient algorithm for estimating the number of distinct values in a stream. For the past two years, we've been using HLL at AK to do just that: count the number of unique users in a stream of ad impressions. Conveniently, HLL also supports the union operator ($\cup$), allowing us to trivially estimate the distinct value count of any composition of streams without sacrificing precision or accuracy. This piqued our interest, because if we can "losslessly" compute the union of two streams and produce low-error cardinality estimates, then there's a chance we can use that estimate along with the inclusion-exclusion principle to produce "directionally correct" cardinality estimates of the intersection of two streams. (To be clear, when I say "directionally correct" my criterion is "can an advertiser make a decision off of this number?", or "can it point them in the right direction for further research?". This often means that we can tolerate relative errors of up to 50%.)

The goals were:

1. Get a grasp on the theoretical error bounds of intersections done with HLLs, and
2. Come up with heuristic bounds around $m$, $overlap$, and the set cardinalities that could inform our usage of HLL intersections in the AK product.

Quick terminology review:

• If I have a set of integers $A$, I'm going to call the HLL representing it $H_{A}$.
• If I have HLLs $H_{A}, H_{B}$ and their union $H_{A \cup B}$, then I'm going to call the intersection cardinality estimate produced $|H_{A \cap B}|$.
• Define the $overlap$ between two sets as $overlap(A, B) := \frac{|A \cap B|}{min(|A|, |B|)}$.
• Define the cardinality ratio $\frac{max(|A|, |B|)}{min(|A|, |B|)}$ as a shorthand for the relative cardinality of the two sets.
• We'll represent the absolute error of an observation $|H_{A}|$ as $\Delta |H_{A}|$.

That should be enough for those following our recent posts, but for those just jumping in, check out Appendices A and B at the bottom of the post for a more thorough review of the set terminology and error terminology.

### Experiment

We fixed 16 $overlap$ values (0.0001, 0.001, 0.01, 0.02, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0) and 12 set cardinalities (100M, 50M, 10M, 5M, 1M, 500K, 100K, 50K, 10K, 5K, 1K, 300) and did 100 runs of each permutation of $(overlap, |A|, |B|)$. A random stream of 64-bit integers hashed with Murmur3 was used to create the two sets such that they shared exactly $min(|A|,|B|) \cdot overlap = |A \cap B|$ elements. We then built the corresponding HLLs $H_{A}$ and $H_{B}$ for those sets, calculated the intersection cardinality estimate $|H_{A} \cap H_{B}|$, and computed its relative error.

Given that we could only generate and insert about 2M elements/second per core, doing runs with set cardinalities greater than 100M was quickly ruled out for this blog post. However, I can assure you the results hold for much larger sets (up to the multiple-billion-element range) as long as you heed the advice below.

### Results

This first group of plots has a lot going on, so I'll preface it by saying that it's just here to give you a general feeling for what's going on. First note that within each group of boxplots $overlap$ increases from left to right (orange to green to pink), and within each plot cardinality ratio increases from left to right.
Also note that the y-axis (the relative error of the observation) is log-base-10 scale. You can clearly see that as the set sizes diverge, the error skyrockets for all but the most similar (in both cardinality and composition) sets. I've drawn in a horizontal line at 50% relative error to make it easier to see what falls under the "directionally correct" criterion. You can (and should) click for a full-sized image.

Note: the error bars narrow as we progress further to the right because there are fewer observations with very large cardinality ratios. This is an artifact of the experimental design.

A few things jump out immediately:

• For cardinality ratio > 500, the majority of observations have many thousands of percent error.
• When cardinality ratio is smaller than that and $overlap > 0.4$, register count has little effect on error, which stays very low.
• When $overlap \le 0.01$, register count has little effect on error, which stays very high.

Just eyeballing this, the lesson I get is that computing intersection cardinality with small error (relative to the true value) is difficult and only works within certain constraints. Specifically,

1. $\frac{|A|}{|B|} < 100$, and
2. $overlap(A, B) = \frac{|A \cap B|}{min(|A|, |B|)} \ge 0.05$.

The intuition behind this is very simple: if the error of any one of the terms in your calculation is roughly as large as the true value of the result, then you're not going to estimate that result well. Let's look back at the intersection cardinality formula. The left-hand side (that we are trying to estimate) is usually a "small" value. The three terms on the right tend to be "large" (or at least "larger") values. If any of the "large" terms has error as large as the left-hand side, we're out of luck.

So, let's say you can compute the cardinality of an HLL with a relative error of a few percent. If $|H_{A}|$ is two orders of magnitude smaller than $|H_{B}|$, then the error alone of $|H_{B}|$ is roughly as large as $|A|$. $|A \cap B| \le |A|$ by definition, so $|A \cap B| \le |A| \approx |H_{A}| \approx \Delta |H_{B}|$. In the best scenario, where $A \cap B = A$, the errors of $|H_{B}|$ and $|H_{A \cup B}| \approx |H_{B}|$ are both roughly the same size as what you're trying to measure. Furthermore, even if $|A| \approx |B|$ but the overlap is very small, then $|A \cap B|$ will be roughly as large as the error of all three right-hand terms.

### On the bubble

Let's throw out the permutations whose error bounds clearly don't support "directionally correct" answers ($overlap < 0.01$ and $\frac{|A|}{|B|} > 500$) and those that trivially do ($overlap > 0.4$) so we can focus more closely on the observations that are "on the bubble". Sadly, these plots exhibit a good deal of variance inherent in their smaller sample size. Ideally we'd have tens of thousands of runs of each combination, but for now this rough sketch will hopefully be useful. (Apologies for the inconsistent colors between the two plots. It's a real bear to coordinate these things in R.)

Again, please click through for a larger, clearer image.

By doubling the number of registers, the variance of the relative error falls by about a quarter and the median relative error moves down (closer to zero) by 10–20 points.

In general, we've seen that the following cutoffs perform pretty well, practically. Note that some of these aren't reflected too clearly in the plots because of the smaller sample sizes.
| Register Count | Data Structure Size | Overlap Cutoff | Cardinality Ratio Cutoff |
|---|---|---|---|
| 8192 | 5kB | 0.05 | 10 |
| 16384 | 10kB | 0.05 | 20 |
| 32768 | 20kB | 0.05 | 30 |
| 65536 | 41kB | 0.05 | 100 |

### Error Estimation

To get a theoretical formulation of the error envelope for intersection, in terms of the two set sizes and their overlap, I tried the first and simplest error propagation technique I learned. For variables $Y, Z, ...$, and $X$ a linear combination of those (independent) variables, we have

$\Delta X = \sqrt{ (\Delta Y)^2 + (\Delta Z)^2 + ...}$

Applied to the inclusion-exclusion formula:

$\begin{array}{ll} \displaystyle \Delta |H_{A \cap B}| &= \sqrt{ (\Delta |H_{A}|)^2 + (\Delta |H_{B}|)^2 + (\Delta |H_{A \cup B}|)^2} \\ &= \sqrt{ (\sigma\cdot |A|)^2 + (\sigma\cdot |B|)^2 + (\sigma\cdot |A \cup B|)^2} \end{array}$

where $\sigma = \frac{1.04}{\sqrt{m}}$ as in section 4 ("Discussion") of the HLL paper.

Aside: Clearly $|H_{A \cup B}|$ is not independent of $|H_{A}| + |H_{B}|$, though $|H_{A}|$ is likely independent of $|H_{B}|$. However, I do not know how to a priori calculate the covariance in order to use an error propagation model for dependent variables. If you do, please pipe up in the comments!

I've plotted this error envelope against the relative error of the observations from HLLs with 8192 registers (approximately a 5kB data structure). Despite the crudeness of the method, it provided a 95% error envelope for the data without significant differences across cardinality ratio or $overlap$. Specifically, at least 95% of observations satisfied

$(|H_{A \cap B}| - |A \cap B|) < \Delta |H_{A \cap B}|$

However, it's really only useful in the ranges shown in the table above. Past those cutoffs the bound becomes too loose and isn't very useful.

This is particularly convenient because you can tune the number of registers you need to allocate based on the most common intersection sizes/overlaps you see in your application. Obviously, I'd recommend everyone run these tests and do this analysis on their business data, and not on some contrived setup for a blog post. We've definitely seen that we can get away with far less memory usage than expected to successfully power our features, simply because we tuned and experimented with our most common use cases.

### Next Steps

We hope to improve the intersection cardinality result by finding alternatives to the inclusion-exclusion formula. We've tried a few different approaches, mostly centered around the idea of treating the register collections themselves as sets, and in my next post we'll dive into those and other failed attempts!

### Appendix A: A Review Of Sets

Let's say we have two streams of user ids, $S_{A}$ and $S_{B}$. Take a snapshot of the unique elements in those streams as sets and call them $A$ and $B$. In the standard notation, we'll represent the cardinality, or number of elements, of each set as $|A|$ and $|B|$.

Example: If $A = \{1,2,10\}$ then $|A| = 3$.

If I wanted to represent the unique elements in both of those sets combined as another set I would be performing the union, which is represented by $A \cup B$.

Example: If $A = \{1,2,3\}, B=\{2,3,4\}$ then $A \cup B = \{1,2,3,4\}$.

If I wanted to represent the unique elements that appear in both $A$ and $B$ I would be performing the intersection, which is represented by $A \cap B$.

Example: With $A, B$ as above, $A \cap B = \{2,3\}$.

The relationship between the union's cardinality and the intersection's cardinality is given by the inclusion-exclusion principle.
(We'll only be looking at the two-set version in this post.) For reference, the two-way inclusion-exclusion formula is $|A \cap B| = |A| + |B| - |A \cup B|$.

Example: With $A, B$ as above, we see that $|A \cap B| = 2$ and $|A| + |B| - |A \cup B| = 3 + 3 - 4 = 2$.

For convenience we'll define the $overlap$ between two sets as $overlap(A, B) := \frac{|A \cap B|}{min(|A|, |B|)}$.

Example: With $A, B$ as above, $overlap(A,B) = \frac{|A \cap B|}{min(|A|, |B|)} = \frac{2}{min(3,3)} = \frac{2}{3}$.

Similarly, for convenience, we'll define the cardinality ratio $\frac{max(|A|, |B|)}{min(|A|, |B|)}$ as a shorthand for the relative cardinality of the two sets.

The examples and operators shown above are all relevant for exact, true values. However, HLLs do not provide exact answers to the set cardinality question. They offer estimates of the cardinality along with certain error guarantees about those estimates. In order to differentiate between the two, we introduce HLL-specific operators.

Consider a set $A$. Call the HLL constructed from this set's elements $H_{A}$. The cardinality estimate given by the HLL algorithm for $H_{A}$ is $|H_{A}|$.

Define the union of two HLLs $H_{A} \cup H_{B} := H_{A \cup B}$, which is also the same as the HLL created by taking the pairwise max of $H_{A}$'s and $H_{B}$'s registers.

Finally, define the intersection cardinality of two HLLs in the obvious way: $|H_{A} \cap H_{B}| := |H_{A}| + |H_{B}| - |H_{A \cup B}|$. (This is simply the inclusion-exclusion formula for two sets with the cardinality estimates instead of the true values.)

### Appendix B: A (Very Brief) Review of Error

The simplest way of understanding the error of an estimate is simply "how far is it from the truth?". That is, what is the difference in value between the estimate and the exact value, also known as the absolute error.

However, that's only useful if you're measuring a single thing over and over again. The primary criterion for judging the utility of HLL intersections is relative error, because we are trying to measure intersections of many different sizes. In order to get an apples-to-apples comparison of the efficacy of our method, we normalize the absolute error by the true size of the intersection. So, for some observation $\hat{x}$ whose exact value is non-zero $x$, we say that the relative error of the observation is $\frac{x-\hat{x}}{x}$. That is, "by what percentage is the observation off from the true value?"

Example: If $|A| = 100, |H_{A}| = 90$ then the relative error is $\frac{100 - 90}{100} = \frac{10}{100} = 10\%$.

## HLLs and Polluted Registers

December 3, 2012 By

### Introduction

It's worth thinking about how things can go wrong, and what the implications of such occurrences might be. In this post, I'll be taking a look at the HyperLogLog (HLL) algorithm for cardinality estimation, which we've discussed before.

### The Setup

HLLs have the property that their register values increase monotonically as they run. The basic update rule is:

```
for item in stream:
    index, proposed_value = process_hashed_item(hash(item))
    # registers only ever grow: keep the max of the old and proposed values
    hll.registers[index] = max(hll.registers[index], proposed_value)
```

There's an obvious vulnerability here: what happens to your counts if you get pathological data that blows up a register value to some really large number? These values are never allowed to decrease according to the vanilla algorithm. How much of a beating can these sketches take from such pathological data before their estimates are wholly unreliable?
### Experiment The First

To get some sense of this, I took a 1024-bucket HLL, ran a stream through it, and then computed the error in the estimate. I then proceeded to randomly choose a register, max it out, and compute the error again. I repeated this process until I had maxed out 10% of the registers. In pseudo-python:

```
print("n_registers_touched,relative_error")
print(0, relative_error(hll.cardinality(), stream_size), sep=",")
for index, reg in enumerate(random.sample(range(1024), num_to_edit)):
    hll.registers[reg] = 32
    print(index + 1, relative_error(hll.cardinality(), stream_size), sep=",")
```

In practice, HLL registers are fixed to be a certain bit width. In our case, registers are 5 bits wide, as this allows us to count runs of 0s up to length 32. This allows us to count astronomically high in a 1024-register HLL.

Repeating this for many trials, and stream sizes of 100k, 1M, and 10M, we have the following picture. The green line is the best-fit line.

What we see is actually pretty reassuring. Roughly speaking, totally poisoning x% of registers results in about an x% error in your cardinality estimate. For example, here are the error means and variances across all the trials for the 1M-element stream:

| Number of Registers Modified | Percentage of Registers Modified | Error Mean | Error Variance |
|---|---|---|---|
| 0 | 0 | -0.0005806094 | 0.001271119 |
| 10 | 0.97% | 0.0094607324 | 0.001300780 |
| 20 | 1.9% | 0.0194860481 | 0.001356282 |
| 30 | 2.9% | 0.0297495396 | 0.001381753 |
| 40 | 3.9% | 0.0395013208 | 0.001436418 |
| 50 | 4.9% | 0.0494727527 | 0.001460182 |
| 60 | 5.9% | 0.0600436774 | 0.001525749 |
| 70 | 6.8% | 0.0706375356 | 0.001522320 |
| 80 | 7.8% | 0.0826034639 | 0.001599104 |
| 90 | 8.8% | 0.0937465662 | 0.001587156 |
| 100 | 9.8% | 0.1060810958 | 0.001600348 |

### Initial Reactions

I was actually not too surprised to see that the induced error was modest when only a small fraction of the registers were poisoned. Along with some other machinery, the HLL algorithm uses the harmonic mean of the individual register estimates when computing its guess for the number of distinct values in the data stream. The harmonic mean does a very nice job of downweighting values that are significantly larger than others in the set under consideration:

```
In [1]: from scipy.stats import hmean

In [2]: from numpy import mean

In [3]: f = [1] * 100000 + [1000000000]

In [4]: mean(f)
Out[4]: 10000.899991000089

In [5]: hmean(f)
Out[5]: 1.0000099999999899
```

It is this property that provides protection against totally wrecking the sketch's estimate when we blow up a fairly small fraction of the registers.

### Experiment The Second

Of course, the algorithm can only hold out so long. While I was not surprised by the modesty of the error, I was very surprised by how linear the error growth was in the first figure. I ran the same experiment, but instead of stopping at 10% of the registers, I went all the way to the end. This time, I have plotted the results with a log-scaled y-axis:

Note that some experiments appear to start after others. This is due to missing data from taking the logarithm of negative errors.

Without getting overly formal in our analysis, there are roughly three phases in the error growth here. At first, it's sublinear on the log-scale, then linear, then superlinear. This roughly corresponds to "slow", "exponential", and "really, really fast". As our mathemagician-in-residence points out, the error will grow roughly as p/(1-p), where p is the fraction of polluted registers. The derivation of this isn't too hard to work out, if you want to give it a shot!
The implication of this little formula matches exactly what we see above. When p is small, the denominator does not change much, and the error grows roughly linearly. As p approaches 1, the error begins to grow super-exponentially. Isn't it nice when experiment matches theory?

### Final Thoughts

It's certainly nice to see that the estimates produced by HLLs are not overly vulnerable to a few errant register hits. As is often the case with this sort of analysis, the academic point must be put in balance with the practical. The chance of maxing out even a single register under normal operation is vanishingly small, assuming you chose a sane hash function for your keys. If I were running an HLL in the wild and saw that 10% of my registers were pegged, my first thought would be "What is going wrong with my system!?" and not "Oh, well, at least I know my estimate to within 10%!" I would be disinclined to trust the whole data set until I got a better sense of what caused the blowups, and why I should give any credence at all to the supposedly unpolluted registers.

Filed Under: Data Science, General

## Sketch of the Day: HyperLogLog — Cornerstone of a Big Data Infrastructure

October 25, 2012 By

### Intro

In the Zipfian world of AK, the HyperLogLog distinct value (DV) sketch reigns supreme. This DV sketch is the workhorse behind the majority of our DV counters (and we're not alone) and enables us to have a real-time, in-memory data store with incredibly high throughput. HLL was conceived of by Flajolet et al. in the phenomenal paper HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm. This sketch extends upon the earlier Loglog Counting of Large Cardinalities (Durand et al.), which in turn is based on the seminal FM-85, Flajolet and Martin's original work on probabilistic counting. (Many thanks to Jérémie Lumbroso for the correction of the history here. I am very much looking forward to his upcoming introduction to probabilistic counting in Flajolet's complete works.) UPDATE – Rob has recently published a blog post about PCSA, a direct precursor to LogLog counting, which is filled with interesting thoughts.

There have been a few posts on HLL recently, so I thought I would dive into the intuition behind the sketch and into some of the details. Just like all the other DV sketches, HyperLogLog looks for interesting things in the hashed values of your incoming data. However, unlike other DV sketches, HLL is based on bit-pattern observables, as opposed to KMV (and others) which are based on order statistics of a stream. As Flajolet himself states:

Bit-pattern observables: these are based on certain patterns of bits occurring at the beginning of the (binary) S-values. For instance, observing in the stream S at the beginning of a string a bit-pattern $0^{\rho-1}1$ is more or less a likely indication that the cardinality n of S is at least $2^\rho$.

Order statistics observables: these are based on order statistics, like the smallest (real) values, that appear in S. For instance, if X = min(S), we may legitimately hope that n is roughly of the order of 1/X…

In my mind HyperLogLog is really composed of two insights: lots of crappy things are sometimes better than one really good thing; and bit-pattern observables tell you a lot about a stream. We're going to look at each component in turn.

### Bad Estimator

Even though the literature refers to the HyperLogLog sketch as a different family of estimator than KMV, I think they are very similar.
It's useful to understand the approach of HLL by reviewing the KMV sketch. Recall that KMV stores the smallest $k$ values that you have seen in a stream. From these $k$ values you get an estimate of the number of distinct elements you have seen so far. HLL also stores something similar to the smallest values ever seen. To see how this works, it's useful to ask "How could we make the KMV sketch smaller?" KMV stores the actual value of the incoming numbers. So you have to store $k$ 64 bit values, which is tiny, but not that tiny. What if we just stored the "rank" of the numbers? Let's say the number 94103 comes through (I'll use base 10 here to make things easier). That number is basically $9*10^4$ plus some stuff. So, let's just store the exponent, i.e., 4. In this way I get an approximation of the size of numbers I have seen so far. That turns the original KMV algorithm into only having to store the numbers 1-19 (since $2^{64} \approx 10^{19}$) which is a whole lot less than $2^{64}$ numbers. Of course, this estimate will be much worse than storing the actual values.

### Bit Pattern Observables

In actuality HLL, just like all the other DV sketches, uses hashes of the incoming data in base 2. And instead of storing the "rank" of the incoming numbers HLL uses a nice trick of looking for runs of zeroes in the hash values. These runs of zeroes are an example of "bit pattern observables". This concept is similar to recording the longest run of heads in a series of coin flips and using that to guess the number of times the coin was flipped. For instance, if you told me that you spent some time this afternoon flipping a coin and the longest run of heads you saw was 2, I could guess you didn't flip the coin very many times. However, if you told me you saw a run of 100 heads in a row, I would gather you were flipping the coin for quite a while. This "bit pattern observable", the run of heads, gives me information about the stream of data it was pulled from. An interesting thing to note is just how probable long runs of heads are. As Mark Schilling points out, you can almost always tell the difference between a human generated set of coin flips and an actual one, due to humans not generating long runs. (The world of coin flipping seems to be a deep and crazy pit.) Disclaimer: The only thing I am trying to motivate here is that by keeping a very small piece of information (the longest run of heads) I can get some understanding of what has happened in a stream. Of course, you could probably guess that even though we have now reduced the storage of our sketch the DV estimate is pretty crummy. But what if we kept more than one of them?

### Stochastic Averaging

In order to improve the estimate, the HLL algorithm stores many estimators instead of one and averages the results. However, in order to do this you would have to hash the incoming data through a bunch of independent hash functions. This approach isn't a very good idea since hashing each value a bunch of times is expensive and finding good independent hash families is quite difficult in practice. The workaround for this is to just use one hash function and "split up" the input into $m$ buckets while maintaining the observable (longest run of zeroes) for each bucket. This procedure is called stochastic averaging. You could do this split in KMV as well, and it's easier to visualize; for an $m$ of 3, it would look something like the toy sketch below.
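Here's a toy illustration of that split (my sketch, not from the original post, using real-valued "hashes" in $[0,1)$ instead of bits to keep things short):

```
import random

m = 3
minima = [1.0] * m  # each bucket keeps a KMV-style observable: its minimum

random.seed(7)
for _ in range(100000):
    h = random.random()          # stand-in for a uniform hash of the datum
    bucket = int(h * m)          # the first "digits" of the hash pick a bucket
    leftover = h * m - bucket    # the rest of the hash is the observable
    minima[bucket] = min(minima[bucket], leftover)

# Each bucket's minimum yields an estimate of the keys it saw (~ 1/min - 1);
# summing the per-bucket estimates recovers the total distinct count. A single
# minimum per bucket is very noisy -- which is exactly why KMV keeps k minima
# and HLL keeps many buckets.
print(sum(1.0 / mn - 1 for mn in minima))
```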
To break the input into the $m$ buckets, Durand suggests using the first few ($k$) bits of the hash value as an index into a bucket and computing the longest run of zeroes ($R$) on what is left over. For example, if your incoming datum looks like 010100000110 and $k = 3$ you could use the 3 rightmost bits, 110, to tell you which register to update ($110_2 = 6$) and from the remaining bits, 010100000, you could take the longest run of zeroes (up to some max), which in this case is 5. In order to compute the number of distinct values in the stream you would just take the average of all of the $m$ buckets:

$\displaystyle DV_{LL} = \text{constant} \cdot m \cdot 2^{\overline{R}}$

Here $\overline{R}$ is the average of the values $R$ in all the buckets. The formula above is actually the estimator for the LogLog algorithm, not HyperLogLog. To get HLL, you need one more piece…

### Harmonic Mean

A fundamental insight that took Flajolet from LogLog to HyperLogLog was noticing that the distribution of the values in the $m$ registers is skewed to the right, and there can be some dramatic outliers that really mess up the average used in LogLog (see Fig. 1 below). He and Durand knew this when they wrote LogLog and did a bunch of hand-wavey stuff (like cut off the top 30% of the register values) to create what he called the "SuperLogLog", but in retrospect this seems kind of dumb. He fixed this in HLL by tossing out the odd rules in SuperLogLog and deciding to take the harmonic mean of the DV estimates. The harmonic mean tends to throw out extreme values and behave well in this type of environment. This seems like an obvious thing to do. I'm a bit surprised they didn't try this in the LogLog paper, but perhaps the math is harder to deal with when using the harmonic mean vs the geometric mean.

Fig. 1: The theoretical distribution of register values after $v$ distinct values have been run through an HLL.

Throw all these pieces together and you get the HyperLogLog DV estimator:

$\displaystyle DV_{HLL} = \text{constant} \cdot m^2 \cdot \left(\sum_{j=1}^m 2^{-R_j} \right)^{-1}$

Here $R_j$ is the longest run of zeroes in the $j^{th}$ bucket.

### Putting it All Together

Even with the harmonic mean Flajolet still has to introduce a few "corrections" to the algorithm. When the HLL begins counting, most of the registers are empty and it takes a while to fill them up. In this range he introduces a "small range correction". The other correction is when the HLL gets full. If a lot of distinct values have been run through an HLL, the odds of collisions in your hash space increase. To correct for hash collisions Flajolet introduces the "large range correction".
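To make the bucketing concrete, here's a minimal sketch (mine, not AK's implementation) of the register update from the worked example above — the rightmost $b$ bits pick the register, and the run of zeroes in what's left is the observable:

```
def update(registers, h, b, max_run=32):
    index = h & ((1 << b) - 1)   # rightmost b bits -> register index
    rest = h >> b                # the remaining bits hold the observable
    run = 0
    while run < max_run and rest & 1 == 0:  # count the run of zeroes (capped)
        run += 1
        rest >>= 1
    registers[index] = max(registers[index], run)

regs = [0] * 8
update(regs, 0b010100000110, 3)
print(regs[6])  # 5, matching the worked example above
```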
The final algorithm looks like this (it might be easier for some of you to just look at the source in the JavaScript HLL simulation):

```
from math import log

m = 2**b  # with b in [4...16]

if m == 16:
    alpha = 0.673
elif m == 32:
    alpha = 0.697
elif m == 64:
    alpha = 0.709
else:
    alpha = 0.7213 / (1 + 1.079 / m)

registers = [0] * m  # initialize m registers to 0

# Construct the HLL structure
for h in hashed(data):
    register_index = get_register_index(h, b)  # binary address of the rightmost b bits
    run_length = run_of_zeros(h, b)            # length of the run of zeroes starting at bit b+1
    registers[register_index] = max(registers[register_index], run_length)

# Determine the cardinality
DV_est = alpha * m**2 / sum(2**-register for register in registers)  # the raw DV estimate

if DV_est < 5.0 / 2.0 * m:                    # small range correction
    V = count_of_zero_registers(registers)    # the number of registers equal to zero
    if V == 0:                                # if none of the registers are empty, use the raw estimate
        DV = DV_est
    else:
        DV = m * log(m / V)                   # i.e., balls and bins correction
elif DV_est <= (1.0 / 30.0) * 2**32:          # intermediate range, no correction
    DV = DV_est
else:                                         # large range correction
    DV = -2**32 * log(1 - DV_est / 2**32)
```

Rob wrote up an awesome HLL simulation for this post. You can get a real sense of how this thing works by playing around with different values and just watching how it grows over time.

### Unions

Unions are very straightforward to compute in HLL and, like KMV, are lossless. All you need to do to combine the register values of the 2 (or $n$) HLL sketches is take the max of the 2 (or $n$) register values and assign that to the union HLL. With a little thought you should realize that this is the same thing as if you had fed in the union stream to begin with. A nice side effect of lossless unions is that HLL sketches are trivially parallelizable. This is great if, like us, you are trying to digest a firehose of data and need multiple boxes to do summarization. So, you have:

```
for i in range(0, len(R_1)):
    R_new[i] = max(R_1[i], R_2[i])
```

To combine HLL sketches that have differing sizes read Chris's blog post about it.

### Wrapping Up

In our research, and as the literature says, the HyperLogLog algorithm gives you the biggest bang for the buck for DV counting. It has the best accuracy per storage of all the DV counters to date. The biggest drawbacks we have seen are around intersections. Unlike KMV, there is no explicit intersection logic; you have to use the inclusion/exclusion principle, and this gets really annoying for anything more than 3 sets (a sketch of the two-set case appears below). Aside from that, we've been tickled pink using HLL for our production reporting. We have even written a PostgreSQL HLL data type that supports cardinality, union, and intersection. This has enabled all kinds of efficiencies for our analytics teams as the round trips to Hadoop are less and most of the analysis can be done in SQL. We have seen a massive increase in the types of analytics that go on at AK since we have adopted a sketching infrastructure and I don't think I'm crazy saying that many big data platforms will be built this way in the future.
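Concretely, for two sketches the inclusion/exclusion dance looks like this (a minimal sketch of mine — `cardinality` stands in for the HLL estimator above, and is an assumed helper, not a real API):

```
def hll_union(R_1, R_2):
    # the lossless union from above: element-wise max of the registers
    return [max(r1, r2) for r1, r2 in zip(R_1, R_2)]

def intersection_estimate(R_1, R_2, cardinality):
    # |A intersect B| = |A| + |B| - |A union B|;
    # the error compounds quickly as the number of sets grows
    return cardinality(R_1) + cardinality(R_2) - cardinality(hll_union(R_1, R_2))
```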
P.S. Sadly, Philippe Flajolet passed away in March 2011. It was a very sad day for us at Aggregate Knowledge because we were so deep in our HLL research at the time and would have loved to reach out to him; he seems like he would have been happy to see his theory put to practice. Based on all I've read about him, I'm very sorry to have not met him in person. I'm sure his work will live on, but we have definitely lost a great mind in both industry and academia. Keep counting, Philippe!

Photo courtesy of http://www.ae-info.org/

## Doubling the Size of an HLL Dynamically

October 12, 2012

### Introduction

In my last post, I explained how to halve the number of bins used in an HLL as a way to allow set operations between that HLL and smaller HLLs. Unfortunately, the accuracy of an HLL is tied to the number of bins used, so one major drawback with this "folding" method is that each time you halve the number of bins, you reduce that accuracy by a factor of $\sqrt{2}$.

In this series of posts I'll focus on the opposite line of thinking: given an HLL, can one double the number of bins, assigning the new bins values according to some strategy, and recover some of the accuracy that a larger HLL would have had? Certainly, one shouldn't be able to do this (short of creating a new algorithm for counting distinct values) since once we use the HLL on a dataset the extra information that a larger HLL would have gleaned is gone. We can't recover it and so we can't expect to magically pull a better estimate out of thin air (assuming Flajolet et al. have done their homework properly and the algorithm makes the best possible guess with the given information – which is a pretty good bet!). Instead, in this series of posts, I'll focus on how doubling plays with recovery time and set operations. By this, I mean the following: Suppose we have an HLL of size $2^n$ and, while it's running, we double it to be an HLL of size $2^{n+1}$. Initially, this may have huge error, but if we allow it to continue running, how long will it take for its error to be relatively small? I'll also discuss some ways of modifying the algorithm to carry slightly more information.

### The Candidates

Before we begin, a quick piece of terminology. Suppose we have an HLL of size $2^n$ and we double it to be an HLL of size $2^{n+1}$. We consider two bins to be partners if their bin numbers differ by $2^n$. To see why this is important – check the post on HLL folding.

Colin and I did some thinking and came up with a few naive strategies to fill in the newly created bins after the doubling. I've provided a basic outline of the strategies below; a code sketch of the three simplest fills follows the list.

• Zeroes – Fill in with zeroes.
• Concatenate – Fill in each bin with the value of its partner.
• MinusTwo – Fill in each bin with the value of its partner minus two. Two may seem like an arbitrary amount, but a quick look at the formulas involved in the algorithm shows that this leaves the cardinality estimate approximately unchanged.
• RandomEstimate (RE) – Fill in each bin according to its probability distribution. I'll describe more about this later.
• ProportionDouble (PD) – This strategy is only for use with set operations. We estimate the number of bins in the two HLLs which should have the same value, filling in the second half so that that proportion holds and the rest are filled in according to RE.
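Here's a minimal sketch (mine — the function and strategy names are not from the post) of the three simple fills; `regs` is the original register list of size $2^n$, and bin $i$ in the new half is the partner of bin $i$ in the old half:

```
def double_hll(regs, strategy):
    if strategy == "zeroes":
        new_half = [0] * len(regs)
    elif strategy == "concatenate":
        new_half = list(regs)                     # copy each partner's value
    elif strategy == "minus_two":
        new_half = [max(r - 2, 0) for r in regs]  # partner minus two, floored at 0
    return regs + new_half                        # bins i and i + 2**n are partners
```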
#### Nitty Gritty of RE

The first three strategies given above are pretty self-explanatory, but the last two are a bit more complicated. To understand these, one needs to understand the distribution of values in a given bin. In the original paper, Flajolet et al. calculate the probability that a given bin takes the value $k$ to be given by $(1 - 0.5^k)^v - (1 - 0.5^{k-1})^v$ where $v$ is the number of keys that the bin has seen so far. Of course, we don't know this value ($v$) exactly, but we can easily estimate it by dividing the cardinality estimate by the number of bins. However, we have even more information than this. When choosing a value for our doubled HLL, we know that that value cannot exceed its partner's value. To understand why this is so, look back at my post on folding, and notice how the value in the partner bins in a larger HLL corresponds to the value in the related bin in the smaller HLL. Hence, to get the distribution for the value in a given bin, we take the original distribution, chop it off at the relevant value, and rescale it to have total area 1.

This may seem kind of hokey, but let's quickly look at a toy example. Suppose you ask me to pick a number between 1 and 10, and you will try to guess which number I picked. At this moment, assuming I'm a reasonable random number generator, there is a $1/10$ chance that I chose the number one, a $1/10$ chance that I chose the number two, etc. However, if I tell you that my number is no larger than two, you can now say there is a $1/2$ chance that my number is a one, a $1/2$ chance that my number is a two, and there is no chance that my number is larger. So what happened here? We took the original probability distribution, used our knowledge to cut off and ignore the values above the maximum possible value, and then rescaled the rest so that the sum of the remaining probabilities is equal to one.

RE consists simply of finding this distribution, picking a value according to it, and placing that value in the relevant bin.

#### Nitty Gritty of PD

Recall that we only use PD for set operations. One thing we found was that the accuracy of doubling with set operations according to RE is highly dependent on the intersection size of the two HLLs. To account for this, we examine the fraction of bins in the two HLLs which contain the same value, and then we force the doubled HLL to preserve this fraction.

So how do we do this? Let's say we have two HLLs: $H$ and $G$. We wish to double $H$ before taking its union with $G$. To estimate the proportion of their intersection, make a copy of $G$ and fold it to be the same size as $H$. Then count the number of bins where $G$ and $H$ agree, call this number $a$. Then if $m$ is the number of bins in $H$, we can estimate that $H$ and $G$ should overlap in about $a/m$ bins. Then for each bin, with probability $a/m$ we fill in the bin with the minimum of the relevant bin from $G$ and that bin's partner in $G$. With probability $1 - a/m$ we fill in the bin according to the rules of RE.
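Both RE and PD bottom out in sampling from this truncated, rescaled distribution. Here's a minimal sketch of that sampling step (my code, with names of my choosing):

```
import random

def prob_bin_equals(k, v):
    # P(bin value = k) after the bin has seen v keys, per Flajolet et al.
    return (1 - 0.5**k)**v - (1 - 0.5**(k - 1))**v

def sample_re(partner_value, v):
    # Truncate the distribution at the partner's value and rescale;
    # random.choices renormalizes the weights for us.
    if partner_value == 0:
        return 0
    ks = list(range(1, partner_value + 1))
    weights = [prob_bin_equals(k, v) for k in ks]
    return random.choices(ks, weights=weights)[0]
```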
### The Posts

• Recovery – after doubling, how long does it take for the error to decrease to an acceptable level?
• Unions – how well does doubling play with unions?
• Extra Bits – what are some other strategies to squeeze some extra accuracy out of the HLLs?

(Links will be added as the posts are published. Keep checking back for updates!)

## Set Operations On HLLs of Different Sizes

September 12, 2012

### Introduction

Here at AK, we're in the business of storing huge amounts of information in the form of 64 bit keys. As shown in other blog posts and in the HLL post by Matt, one efficient way of getting an estimate of the size of the set of these keys is by using the HyperLogLog (HLL) algorithm. There are two important decisions one has to make when implementing this algorithm. The first is how many bins one will use and the second is the maximum value one allows in each bin. As a result, the amount of space this will take up is going to be the number of bins times the log of the maximum value you allow in each bin. For this post we'll ignore this second consideration and focus instead on the number of bins one uses. The accuracy of an estimate is given approximately by $1.04/\sqrt{b}$, where $b$ is the number of bins. Hence there is a tradeoff between the accuracy of the estimate and the amount of space you wish to dedicate to this estimate. Certainly, projects will have various requirements that call for different choices of number of bins.

The HLL algorithm natively supports the union operation. However, one requirement for this operation is that the HLLs involved are of the same size, i.e., have the same number of bins. In practice, there's no guarantee that HLLs will satisfy this requirement. In this post, I'll outline the method by which we transform an HLL with a certain number of bins to one with a fewer number of bins, allowing us to perform set operations on any two HLLs, regardless of size.

### Key Processing

As discussed in the HyperLogLog paper, to get a cardinality estimate with an HLL with $2^n$ bins on a data set we pass over each key, using the placement of the rightmost "1" to determine the value of the key and the next $n$ digits to the left to determine in which bin to place that value. In each bin, we only store the maximum value that that bin has "seen."

Below I've shown how two HLLs (one of size $2^3$ and one of size $2^4$) process two different keys. Here, the keys have the same value, because the purpose of this example is to illustrate how the location in which we place the key changes when the HLL has twice the number of bins.

Above, the keys which are attributed to the fifth and thirteenth bins in the larger HLL would both have been attributed to the fifth bin in the smaller HLL. Hence, unraveling the algorithm a bit, we see that the values which are seen by the fifth and thirteenth bins in the larger HLL would have been seen by the fifth bin in the smaller HLL had they run on the same dataset. Because of this, in the case where the two algorithms estimate the same dataset, the value stored in the fifth bin in the smaller HLL is the maximum of the values stored in the fifth and thirteenth bins in the larger HLL.

### Folding HLLs

What happened above is not an isolated phenomenon. In general, if one uses the HLL algorithm twice on a dataset, once with $2^{n+1}$ bins and once with $2^n$ bins, the value in the $k$th bin in the smaller HLL will be the maximum of the values in the $k$th and $(k + 2^n)$th bins of the larger HLL. As a result, if given an HLL of size $2^{n+1}$ that one wishes to transform to an HLL of size $2^n$, one can simply fold the HLL by letting the value of the $k$th bin in the folded HLL be given by the maximum of the values in the $k$th and $(k + 2^n)$th bins of the original HLL.

In fact, we can fold any HLL an arbitrary number of times. Repeating this process, we can take an HLL of size $2^n$ to an HLL of size $2^m$ for any $m$ which is less than or equal to $n$.
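In code, folding is a one-liner over the bin array. A minimal sketch (mine, with hypothetical names):

```
def fold_once(bins):
    # Pair bin k with bin k + 2**n (the second half) and keep the max.
    half = len(bins) // 2
    return [max(bins[k], bins[k + half]) for k in range(half)]

def fold_to(bins, target_size):
    # Fold repeatedly until the HLL has target_size bins.
    while len(bins) > target_size:
        bins = fold_once(bins)
    return bins
```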
Hence if we wish to perform a set operation on two HLLs of different sizes, we can simply fold the larger HLL repeatedly until it is the same size as the smaller HLL. After this, we can take unions and intersections as we wish.

### Folding – An Example

Below, we show a simple example of how folding works. Here we have an HLL with $2^3$ bins which we fold to be an HLL with $2^2$ bins. In the diagram, I've torn an HLL of size $2^3$ in half and placed the strips side by side to emphasize how we line up bins and take maximums in the folding process. Notice that the values in the bins of the folded HLL are the maximum of the relevant bins in the larger HLL.

### Advantages and Drawbacks

This technique gives us the flexibility to be able to perform set operations on any two HLLs regardless of the number of bins used in the algorithms. Its usefulness in this regard is a bit offset by the fact that the accuracy of the estimate on these is limited by the accuracy of the least accurate HLL. For example, an HLL of size $2^{10}$ will have accuracy roughly 23 times better than an HLL of size 2 (to see where I'm getting these numbers from, you'll have to read the paper!). Unfortunately, if we combine these with a set operation, our resulting estimate will have the same accuracy as the smaller HLL.

### Summary

The HyperLogLog algorithm supports set operations in a nice way only if the number of bins used is fixed. Using folding, one can correct for this by reducing the size of the larger HLL to the size of the smaller. The cost of this convenience is in the accuracy of the estimates after the folding process. In my next post, I'll explore some methods of performing the set operations without this loss of accuracy.

## No BS Data Salon #3

May 21, 2012

On Saturday Aggregate Knowledge hosted the third No BS Data Salon on databases and data infrastructure. A handful of people showed up to hear Scott Andreas of Boundary talk about distributed, streaming service architecture, and I also gave a talk about AK's use of probabilistic data structures.

The smaller group made for some fantastic, honest conversation about the different approaches to streaming architectures, the perils of distributing analytics workloads in a streaming setting, and the challenges of pushing scientific and engineering breakthroughs all the way through to product innovation.

We're all looking forward to the next event, which will be in San Francisco, in a month or two. If you have topics you'd like to see covered, we'd love to hear from you in the comments below!

As promised, I've assembled something of a "References" section to my talk, which you can find below.

### (Hyper)LogLog

• Original LL paper by Durand and Flajolet
• Original HLL paper by Flajolet et al.
• Java implementation by ClearSpring
• Python implementation
• A paper on near-optimal compression of HLLs by Scheuermann and Mauve
• A post on LogLog and other similar probabilistic techniques like Count-min Sketch
• A post by our friends at Metamarkets about HLL where they propose a map-based technique for saving on memory

### Random

• Sean Gourley's talk on human-scale analytics and decision-making
• Muthu Muthukrishnan's home page, where research on streaming in general abounds
• A collection of C and Java implementations of different probabilistic sketches

## Sketching the last year

May 13, 2012

Sketching is an area of big-data science that has been getting a lot of attention lately. I personally am very excited about this. Sketching analytics has been a primary focus of our platform and one of my personal interests for quite a while now. Sketching as an area of big-data science has been slow to unfold (thanks, Strata, for declining our last two proposals on sketching talks!), but clearly the tide is turning. In fact, our summarizer technology, which relies heavily on our implementation of Distinct Value (DV) sketches, has been in the wild for almost a year now (and, obviously, we were working on it for many months before that).

#### Fast, But Fickle

The R&D of the summarizer was fun but, as with most technical implementations, it's never as easy as reading the papers and writing some code. The majority of the work we have done to make our DV sketches perform in production has nothing to do with the actual implementation. We spend a lot of time focused on how we tune them, how we feed them, and how we make them play well with the rest of our stack. Likewise, setting proper bounds on our sketches is an ongoing area of work for us and has led down some very interesting paths. We have gained insights that are not just high level business problems, but very low level watchmaker type stuff. Hash function behaviors and stream entropy alongside the skewness of data-sets themselves are areas we are constantly looking into to improve our implementations. This work has helped us refine and find optimizations around storage that aren't limited to sketches themselves, but the architecture of the system as a whole.

#### Human Time Analytics

Leveraging DV sketches as more than just counters has proven unbelievably useful for us. The DV sketches we use provide arbitrary set operations. This comes in amazingly handy when our customers ask "How many users did we see on Facebook and on AOL this month that purchased something?" You can imagine how far these types of questions go in a real analytics platform. We have found that DV counts alongside set operation queries satisfy a large portion of our analytics platform's needs.

Using sketches for internal analytics has been a blast as well. Writing implementations and libraries in scripting languages enables our data-science team to perform very cool ad-hoc analyses faster and in "human-time". Integrating DV sketches as custom data-types into existing databases has proven to be a boon for analysts and engineers alike.

#### Reap The Rewards

Over the course of the year that we've been using DV sketches to power analytics, the key takeaways we've found are: be VERY careful when choosing and implementing sketches; and leverage as many of their properties as possible. When you get the formula right, these are powerful little structures.
Enabling in-memory DV counting and set operations is pretty amazing when you think of the amount of data and analysis we support. Sketching as an area of big-data science seems to have (finally!) arrived and I, for one, welcome our new sketching overlords.
http://www.nag.com/numeric/CL/nagdoc_cl23/html/D02/d02gbc.html
# NAG Library Function Document: nag_ode_bvp_fd_lin_gen (d02gbc)

## 1 Purpose

nag_ode_bvp_fd_lin_gen (d02gbc) solves a general linear two-point boundary value problem for a system of ordinary differential equations using a deferred correction technique.

## 2 Specification

#include <nag.h>
#include <nagd02.h>

void nag_ode_bvp_fd_lin_gen (Integer neq, void (*fcnf)(Integer neq, double x, double f[], Nag_User *comm), void (*fcng)(Integer neq, double x, double g[], Nag_User *comm), double a, double b, double c[], double d[], double gam[], Integer mnp, Integer *np, double x[], double y[], double tol, Nag_User *comm, NagError *fail)

## 3 Description

nag_ode_bvp_fd_lin_gen (d02gbc) solves the linear two-point boundary value problem for a system of neq ordinary differential equations in the interval $\left[a,b\right]$. The system is written in the form

$y^{\prime} = F\left(x\right)y + g\left(x\right)$   (1)

and the boundary conditions are written in the form

$Cy\left(a\right) + Dy\left(b\right) = \gamma$   (2)

Here $F\left(x\right)$, $C$ and $D$ are neq by neq matrices, and $g\left(x\right)$ and $\gamma$ are neq component vectors. The approximate solution to (1) and (2) is found using a finite difference method with deferred correction. The algorithm is a specialisation of that used in the function nag_ode_bvp_fd_nonlin_gen (d02rac) which solves a nonlinear version of (1) and (2). The nonlinear version of the algorithm is described fully in Pereyra (1979).

You need to supply an absolute error tolerance and may also supply an initial mesh for the construction of the finite difference equations (alternatively a default mesh is used). The algorithm constructs a solution on a mesh defined by adding points to the initial mesh. This solution is chosen so that the error is everywhere less than your tolerance and so that the error is approximately equidistributed on the final mesh. The solution is returned on this final mesh. If the solution is required at a few specific points then these should be included in the initial mesh. If, on the other hand, the solution is required at several specific points, then you should use the interpolation functions provided in Chapter e01 if these points do not themselves form a convenient mesh.

## 4 References

Pereyra V (1979) PASVA3: An adaptive finite-difference Fortran program for first order nonlinear, ordinary boundary problems. Codes for Boundary Value Problems in Ordinary Differential Equations. Lecture Notes in Computer Science (eds B Childs, M Scott, J W Daniel, E Denman and P Nelson) 76 Springer–Verlag

## 5 Arguments

1: neq – Integer (Input)

On entry: the number of equations; that is, neq is the order of system (1).

Constraint: ${\mathbf{neq}}\ge 2$.

2: fcnf – function, supplied by the user (External Function)

fcnf must evaluate the matrix $F\left(x\right)$ in (1) at a general point $x$.

The specification of fcnf is:

void fcnf (Integer neq, double x, double f[], Nag_User *comm)

1: neq – Integer (Input)

On entry: the number of differential equations.

2: x – double (Input)

On entry: the value of the independent variable $x$.

3: f[${\mathbf{neq}}×{\mathbf{neq}}$] – double (Output)

On exit: the $\left(i,j\right)$th element of the matrix $F\left(x\right)$, for $i,j=1,2,\dots,{\mathbf{neq}}$, where ${F}_{ij}$ is set by ${\mathbf{f}}\left[\left(i-1\right)×{\mathbf{neq}}+\left(j-1\right)\right]$. (See Section 9 for an example.)
4: comm – Nag_User * (Pointer)

Pointer to a structure of type Nag_User with the following member:

p – Pointer

On entry/exit: the pointer comm->p should be cast to the required type, e.g., struct user *s = (struct user *)comm->p, to obtain the original object's address with appropriate type. (See the argument comm below.)

3: fcng – function, supplied by the user (External Function)

fcng must evaluate the vector $g\left(x\right)$ in (1) at a general point $x$.

The specification of fcng is:

void fcng (Integer neq, double x, double g[], Nag_User *comm)

1: neq – Integer (Input)

On entry: the number of differential equations.

2: x – double (Input)

On entry: the value of the independent variable $x$.

3: g[neq] – double (Output)

On exit: the $i$th element of the vector $g\left(x\right)$, for $i=1,2,\dots,{\mathbf{neq}}$. (See Section 9 for an example.)

4: comm – Nag_User * (Pointer)

Pointer to a structure of type Nag_User with the following member:

p – Pointer

On entry/exit: the pointer comm->p should be cast to the required type, e.g., struct user *s = (struct user *)comm->p, to obtain the original object's address with appropriate type. (See the argument comm below.)

If you do not wish to supply fcng, the actual argument fcng must be the NAG defined null function pointer NULLFN.

4: a – double (Input)

On entry: the left-hand boundary point, $a$.

5: b – double (Input)

On entry: the right-hand boundary point, $b$.

Constraint: ${\mathbf{b}}>{\mathbf{a}}$.

6: c[${\mathbf{neq}}×{\mathbf{neq}}$] – double (Input/Output)

7: d[${\mathbf{neq}}×{\mathbf{neq}}$] – double (Input/Output)

8: gam[neq] – double (Input/Output)

On entry: the arrays c and d must be set to the matrices $C$ and $D$ in (2). gam must be set to the vector $\gamma$ in (2).

On exit: the rows of c and d and the components of gam are re-ordered so that the boundary conditions are in the order: (i) conditions on $y\left(a\right)$ only; (ii) conditions involving $y\left(a\right)$ and $y\left(b\right)$; and (iii) conditions on $y\left(b\right)$ only. The function will be slightly more efficient if the arrays c, d and gam are ordered in this way before entry, and in this event they will be unchanged on exit.

Note that the boundary conditions must be of boundary value type, that is, neither $C$ nor $D$ may be identically zero. Note also that the rank of the matrix $\left[C,D\right]$ must be neq for the problem to be properly posed. Any violation of these conditions will lead to an error exit.

9: mnp – Integer (Input)

On entry: the maximum permitted number of mesh points.

Constraint: ${\mathbf{mnp}}\ge 32$.

10: np – Integer * (Input/Output)

On entry: determines whether a default or user-supplied initial mesh is used.

${\mathbf{np}}=0$: np is set to a default value of 4 and a corresponding equispaced mesh ${\mathbf{x}}\left[0\right],{\mathbf{x}}\left[1\right],\dots,{\mathbf{x}}\left[{\mathbf{np}}-1\right]$ is used.

${\mathbf{np}}\ge 4$: You must define an initial mesh using the array x as described.

Constraint: ${\mathbf{np}}=0$ or $4\le {\mathbf{np}}\le {\mathbf{mnp}}$.

On exit: the number of points in the final (returned) mesh.

11: x[mnp] – double (Input/Output)

On entry: if ${\mathbf{np}}\ge 4$ (see np above), the first np elements must define an initial mesh. Otherwise the elements of x need not be set.
Constraint:

$a = {\mathbf{x}}\left[0\right] < {\mathbf{x}}\left[1\right] < \cdots < {\mathbf{x}}\left[{\mathbf{np}}-1\right] = b$,   (3)

for ${\mathbf{np}}\ge 4$.

On exit: ${\mathbf{x}}\left[0\right],{\mathbf{x}}\left[1\right],\dots,{\mathbf{x}}\left[{\mathbf{np}}-1\right]$ define the final mesh (with the returned value of np) satisfying the relation (3).

12: y[${\mathbf{neq}}×{\mathbf{mnp}}$] – double (Output)

On exit: the approximate solution ${z}_{j}\left({x}_{i}\right)$ satisfying (4), on the final mesh, that is

${\mathbf{y}}\left[\left(j-1\right)×{\mathbf{mnp}}+\left(i-1\right)\right] = {z}_{j}\left({x}_{i}\right)$, for $i=1,2,\dots,{\mathbf{np}}$; $j=1,2,\dots,{\mathbf{neq}}$,

where np is the number of points in the final mesh. The remaining columns of y are not used.

13: tol – double (Input)

On entry: a positive absolute error tolerance. If

$a = {x}_{1} < {x}_{2} < \cdots < {x}_{{\mathbf{np}}} = b$   (4)

is the final mesh, ${z}_{j}\left({x}_{i}\right)$ is the $j$th component of the approximate solution at ${x}_{i}$, and ${y}_{j}\left({x}_{i}\right)$ is the $j$th component of the true solution of equation (1) (see Section 3) and the boundary conditions, then, except in extreme cases, it is expected that

$\left|{z}_{j}\left({x}_{i}\right)-{y}_{j}\left({x}_{i}\right)\right|\le {\mathbf{tol}}$, for $i=1,2,\dots,{\mathbf{np}}$; $j=1,2,\dots,{\mathbf{neq}}$.   (5)

Constraint: ${\mathbf{tol}}>0.0$.

14: comm – Nag_User * (Pointer)

Pointer to a structure of type Nag_User with the following member:

p – Pointer

On entry/exit: the pointer comm->p, of type Pointer, allows you to communicate information to and from fcnf and fcng. An object of the required type should be declared, e.g., a structure, and its address assigned to the pointer comm->p by means of a cast to Pointer in the calling program, e.g., comm.p = (Pointer)&s. The type Pointer will be void * with a C compiler that defines void *, and char * otherwise.

15: fail – NagError * (Input/Output)

The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6 Error Indicators and Warnings

NE_2_REAL_ARG_LE

On entry, ${\mathbf{b}}=⟨\mathit{value}⟩$ while ${\mathbf{a}}=⟨\mathit{value}⟩$. These arguments must satisfy ${\mathbf{b}}>{\mathbf{a}}$.

NE_ALLOC_FAIL

Dynamic memory allocation failed.

NE_BOUND_COND_COL

More than neq columns of the neq by $2×{\mathbf{neq}}$ matrix $\left[C,D\right]$ are identically zero, i.e., the boundary conditions are rank deficient. The number of non-identically zero columns is $⟨\mathit{value}⟩$.

NE_BOUND_COND_LC

At least one row of the neq by $2×{\mathbf{neq}}$ matrix $\left[C,D\right]$ is a linear combination of the other rows, i.e., the boundary conditions are rank deficient. The index of the first such row is $⟨\mathit{value}⟩$.

NE_BOUND_COND_MAT

One of the matrices $C$ or $D$ is identically zero, i.e., the problem is of initial value and not of boundary value type.

NE_BOUND_COND_NLC

At least one row of the neq by $2×{\mathbf{neq}}$ matrix $\left[C,D\right]$ is a linear combination of the other rows determined up to a numerical tolerance, i.e., the boundary conditions are rank deficient. The index of the first such row is $⟨\mathit{value}⟩$. There is some doubt as to the rank deficiency of the boundary conditions. However, even if the boundary conditions are not rank deficient, they are not posed in a suitable form for use with this function.

For example, if

$C = \begin{pmatrix} 1 & 0 \\ 1 & \epsilon \end{pmatrix}, \quad D = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \quad \gamma = \begin{pmatrix} {\gamma}_{1} \\ {\gamma}_{2} \end{pmatrix}$

and $\epsilon$ is small enough, this error exit is likely to be taken.
A better form for the boundary conditions in this case would be

$C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad \gamma = \begin{pmatrix} {\gamma}_{1} \\ {\epsilon}^{-1}\left({\gamma}_{2}-{\gamma}_{1}\right) \end{pmatrix}$

NE_BOUND_COND_ROW

Row $⟨\mathit{value}⟩$ of the array c and the corresponding row of array d are identically zero, i.e., the boundary conditions are rank deficient.

NE_CONV_MESH

A finer mesh is required for the accuracy requested; that is, mnp is not large enough.

NE_CONV_MESH_INIT

The Newton iteration failed to converge on the initial mesh. This may be due to the initial mesh having too few points or the initial approximate solution being too inaccurate. Try using nag_ode_bvp_fd_nonlin_gen (d02rac).

NE_CONV_ROUNDOFF

Solution cannot be improved due to roundoff error. Too much accuracy might have been requested.

NE_INT_ARG_LT

On entry, ${\mathbf{mnp}}=⟨\mathit{value}⟩$. Constraint: ${\mathbf{mnp}}\ge 32$.

On entry, ${\mathbf{neq}}=⟨\mathit{value}⟩$. Constraint: ${\mathbf{neq}}\ge 2$.

NE_INT_RANGE_CONS_2

On entry, ${\mathbf{np}}=⟨\mathit{value}⟩$ and ${\mathbf{mnp}}=⟨\mathit{value}⟩$. The argument np must satisfy either $4\le {\mathbf{np}}\le {\mathbf{mnp}}$ or ${\mathbf{np}}=0$.

NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

NE_LF_B_MESH

On entry, the left boundary value a has not been set to ${\mathbf{x}}\left[0\right]$: ${\mathbf{a}}=⟨\mathit{value}⟩$, ${\mathbf{x}}\left[0\right]=⟨\mathit{value}⟩$.

NE_NOT_STRICTLY_INCREASING

The sequence x is not strictly increasing: ${\mathbf{x}}\left[⟨\mathit{value}⟩\right]=⟨\mathit{value}⟩$, ${\mathbf{x}}\left[⟨\mathit{value}⟩\right]=⟨\mathit{value}⟩$.

NE_REAL_ARG_LE

On entry, tol must not be less than or equal to 0.0: ${\mathbf{tol}}=⟨\mathit{value}⟩$.

NE_RT_B_MESH

On entry, the right boundary value b has not been set to ${\mathbf{x}}\left[{\mathbf{np}}-1\right]$: ${\mathbf{b}}=⟨\mathit{value}⟩$, ${\mathbf{x}}\left[{\mathbf{np}}-1\right]=⟨\mathit{value}⟩$.

## 7 Accuracy

The solution returned by the function will be accurate to your tolerance as defined by the relation (4) except in extreme circumstances. If too many points are specified in the initial mesh, the solution may be more accurate than requested and the error may not be approximately equidistributed.

## 8 Further Comments

The time taken by the function depends on the difficulty of the problem, the number of mesh points (and meshes) used and the number of deferred corrections. In the case where you wish to solve a sequence of similar problems, the use of the final mesh from one case is strongly recommended as the initial mesh for the next.

## 9 Example

We solve the problem (written as a first order system)

$\epsilon y^{\prime\prime} + y^{\prime} = 0$

with boundary conditions

$y\left(0\right) = 0, \quad y\left(1\right) = 1$

for the cases $\epsilon ={10}^{-1}$ and $\epsilon ={10}^{-2}$, using the default initial mesh in the first case, and the final mesh of the first case as initial mesh for the second (more difficult) case. We give the solution and the error at each mesh point to illustrate the accuracy of the method given the accuracy request ${\mathbf{tol}}=\text{1.0e−3}$.

### 9.1 Program Text

Program Text (d02gbce.c)

### 9.2 Program Data

None.

### 9.3 Program Results

Program Results (d02gbce.r)
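For readers who want to reconstruct the example by hand, here is one standard way to put it in the form (1) and (2) — my sketch, not taken from the distributed example program d02gbce.c. Setting ${y}_{1}=y$ and ${y}_{2}={y}^{\prime}$ gives the first order system

${y}_{1}^{\prime} = {y}_{2}, \quad {y}_{2}^{\prime} = -{\epsilon}^{-1}{y}_{2}, \quad\text{so}\quad F\left(x\right) = \begin{pmatrix} 0 & 1 \\ 0 & -{\epsilon}^{-1} \end{pmatrix}, \quad g\left(x\right) = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$

and the boundary conditions ${y}_{1}\left(0\right)=0$, ${y}_{1}\left(1\right)=1$ correspond to

$C = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad \gamma = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$

which is already in the ordering described in Section 5 (a condition on $y\left(a\right)$ only, followed by a condition on $y\left(b\right)$ only).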