http://physics.stackexchange.com/questions/2596/a-pedestrian-explanation-of-renormalization-groups-from-qed-to-classical-field?answertab=oldest
# A pedestrian explanation of Renormalization Groups - from QED to classical field theories

Shortly after the invention of quantum electrodynamics, it was discovered that the theory had some very bad properties. It took twenty years to discover that certain infinities could be overcome by a process called renormalization. The physical reason behind this may be stated as follows: we are only aware of effective theories, which are reliable on certain scales set by more or less fundamental constants. Renormalization tells us how to deal with this situation and consider only effects of a specific range. The technique used to perform the calculations is called the renormalization group. It is a powerful tool, and it is no wonder it is under heavy development, since nothing can be calculated without it in quantum field theories. Per se, this procedure is not limited to its roots, and one might ask the question:

## How can we use the renormalization group to find effective theories for classical field theories?

I suppose an example where this has been done very recently can be found in Renormalization Group Analysis of Turbulent Hydrodynamics. I would be thankful for any insight, examples, etc.

Sincerely,

Robert

PS: The question is naturally linked to How to calculate the properties of Photon-Quasiparticles and, in a loose line, with A pedestrian explanation of conformal blocks. Since I am no expert in the field, please advise me if something is not clear or simply wrong.

- 4 I love how astoundingly formal this question is. – Noldorin Jan 8 '11 at 18:03

## 3 Answers

I'm not an expert in this topic either, but I'm trying to wrap my head around it. Right now I'm trying to build an adequate hierarchy of concepts related to renormalization. Let me list them and explain how they are related:

1. Fields, Lagrangian (Hamiltonian) and coupling constants.
2. Perturbative calculations.
3. Different scales.
4. Self-similarity.
5. Quantum fields.
6. Ultraviolet divergences.
7. Renormalization.
8.
Renormalization group and running couplings. (Let me stress that "renormalization" and "renormalization group" are different concepts.)

Of course the concept of a field and the way to describe it (1) is the starting point. Now, it seems to me (though I may be wrong) that every time we talk about renormalization we are dealing with some perturbative approach (2). There is always something we want to neglect. And if there is a way to make calculations without any approximations, then one needn't use techniques related to renormalization.

One of the simplest examples is hydrodynamics -- you don't want to "get down" to the level of molecules to describe a stream of water. You would like to work with some "integral" quantities, like viscosity. And viscosity can be used to describe processes at many different scales (3): a bloodstream, a butterfly, a submarine, the internals of a star, etc.

Hydrodynamics works at different scales because of self-similarity (4): by going several orders of magnitude larger you are still able to describe your system with the same Lagrangian, but, perhaps, with some parameters changed. When one makes the transition from one scale to another, one always neglects some peculiarities (2) that occur at the smaller scale. This is the essence of renormalization group (8) techniques. The changing parameters are also called running couplings. I recommend you read about the Kadanoff transformation to get more insight into this.

Note that I haven't mentioned divergences so far, because that is a slightly different topic, and one can use the renormalization group even if there are no infinities. UV divergences appear due to our ignorance about smaller scales. When we talk about hydrodynamics we know that there is a "fundamental scale" -- the aforementioned molecules. But when we talk about quantum fields (6) (like the electromagnetic field or some fermion field) we don't know what the "fundamental" scale is. We don't even know if it exists at all.
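The block-spin picture behind the Kadanoff transformation mentioned above can be made concrete with the textbook example of decimation for the one-dimensional Ising model: summing out every other spin gives the exact recursion tanh(K') = tanh(K)^2 for the dimensionless coupling K. A minimal sketch (the function name is mine, not from any library):

```python
import math

def decimate(K):
    """One RG decimation step for the 1D Ising chain: summing out every
    other spin maps the coupling K to K' with tanh(K') = tanh(K)**2
    (the standard exact result)."""
    return math.atanh(math.tanh(K) ** 2)

# Follow the flow: the coupling is a "running" parameter.
K = 1.5
flow = [K]
for _ in range(6):
    K = decimate(K)
    flow.append(K)

# The coupling flows monotonically to the trivial fixed point K = 0,
# reflecting the absence of a finite-temperature phase transition in 1D.
print([round(k, 4) for k in flow])
```

Iterating the map shows exactly the "running coupling" behavior described above: each coarse-graining step changes the parameter in the same Lagrangian.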
Different methods of dealing with the divergences are called renormalization (7) methods. They are also based on changes of the parameters of the Lagrangian, but now these changes are "infinite", because one has to "compensate the infinities" appearing from small scales. After cancelling the infinities this way, one is still left with the arbitrariness of choosing finite values for the parameters. You can fix the parameters by taking them from experiment at a certain scale (3) and use the renormalization group (8) to go from one scale to another.

- +1, nice summing up of the main points of renormalization. – Marek Jan 7 '11 at 19:16 1 – Robert Filter Jan 8 '11 at 17:56

First, the full notion of the renormalization group, as studied in QFT, is definitely not needed in classical theory. This is because QFT actually doesn't make sense without a renormalization scheme, and for any theory one always has to investigate the flow of couplings towards some fixed points (corresponding to conformal field theories) to check whether the given theory is renormalizable in the first place. So renormalization is an integral part of QFT, in great contrast with classical theories.

Another place where the notion of renormalization group flow is important is condensed matter theory. This is because these flows have fixed points that (when they are non-trivial) correspond to critical points (this is again connected with the conformal symmetry mentioned above). The renormalization group is then used to see how the flow behaves around this point, and this gives valuable information about the behavior of macroscopic quantities (like specific heat) at the critical point.

But the notion by itself is not terribly important if all you care about is integrating out UV degrees of freedom. I don't think you need any flow in classical theory. All you have to do is integrate some energetic interaction with an ambient field to obtain an effective mass (for one concrete example).
While the renormalization group gives a useful framework for a general understanding of scales and effective theories, it's not really needed most of the time. - Dear Marek, thank you for your answer! The point is that you can also define Green's functions for differential equations and thus introduce the path integral formalism in classical field theories. In this sense, it should be possible, at least in principle, to use the renormalization group for further insight. I might add this to the question. Furthermore, may I ask you to specify your idea of flow and effective mass for classical theories, e.g. by an example? I am not sure I can follow your idea :) – Robert Filter Jan 7 '11 at 15:49 @Robert: yes, something like a path integral definitely needs to be formulated in order to make sense of partition sums. You can then obtain the effective theory by summing over just the states that represent microscopic behavior (in QFT you introduce an energy scale, and in a lattice model there is a natural lattice spacing present). But I am actually not quite sure how this is done in classical theory. If you could point me to a reference for that path integral approach, maybe I could help you more. – Marek Jan 7 '11 at 17:14 – Robert Filter Jan 8 '11 at 17:19 @Robert: well, Green functions in classical theory are used completely differently than correlators or propagators (which are often also called Green functions) in QFT and condensed matter theories. But it's certainly possible there is some connection, just that I don't know about it. – Marek Jan 8 '11 at 17:25 – Robert Filter Jan 8 '11 at 18:06 show 1 more comment

Marek wrote: "First, the full notion of renormalization group, as studied in QFT, is definitely not needed in the classical theory...." Marek, the renormalization of mass first appeared in classical electromagnetism, didn't it? Take the "definition": $m_{physical} = m_{bare} + \delta m$.
This is one constraint for two terms, so there is a one-parameter "invariance" group even in CED. As soon as the mass renormalization (discarding $\delta m$ in favour of the originally physical $m$) is done exactly, only once, it is not really interesting, to say the least. In QED this "liberty" is extended to the charge and is exercised perturbatively, but the main sense remains: the renorm-group is a "liberty" in choosing the two terms to satisfy one constraint: $e_{physical} = e_{bare}(\Lambda) + \delta e(\Lambda)$, where $\Lambda$ is a cutoff. If we return to the original sense of renormalization as discarding unnecessary perturbative corrections to originally right, physical, fundamental constant values, no group appears, no stupid relationships between "bare" and "physical" constants are derived (no Landau pole), and everything is simple: one discards infinite or finite contributions of self-action. Self-action is an erroneous concept: it does not lead to any change (no action, by definition), only to wrong terms to be discarded in the end. - I'll agree with you up to the point that problems occur also in classical theory. But to cure them (and I mean really go to the root of the problem) one needs QFT and renormalization. In pure classical physics renormalization is not needed. Well, that's my opinion anyway, and I've yet to be convinced of the contrary. – Marek Jan 17 '11 at 23:04 – Vladimir Kalitvianski Jan 17 '11 at 23:22 1 Please give an explanation every time you give a negative score. – Vladimir Kalitvianski Jan 19 '11 at 22:52 Dear downvoters, tell me where I am wrong, please. I would like to learn. – Vladimir Kalitvianski Jan 31 '11 at 16:26
http://mathematica.stackexchange.com/questions/14989/problem-creating-meanpredictionbands/14999
# Problem creating MeanPredictionBands

I have a problem creating `MeanPredictionBands` for my function, which I fitted using `NonlinearModelFit`. I have a complex-valued function and I am fitting the real part of the function. Mathematica 8 gives me back a fitted function and with it the `BestFitParameters`. But as soon as I try to feed it the `"MeanPredictionBands"` command, it gives me back the following line:

Experimental`NumericalFunction::nnum: The function [...] is not a number at {fitparameters}.

I have also tried using the `ComplexExpand` function on the real part of my function, which still doesn't help with my situation. Has anybody encountered this particular problem before? All my variables are real, as are the fit parameters. Thank you all for your help!

From the file linked in a comment, a reduced code sample that exhibits the same problem:

````
ClearAll[msymm2fit];
msymm2fit = Re[0.145` mV^6 Log[-(0.5929`/mV^2)] + 12 b0V \[Nu]^2 + 8 bDV \[Nu]^2]
MyData2 = {{0.332, 0.807^2}, {0.386, 0.821^2}, {0.447, 0.855^2}};
MyError2 = {0.027, 0.011, 0.029};
MyFit2 = NonlinearModelFit[MyData2, msymm2fit, {mV, b0V, bDV}, \[Nu],
  VarianceEstimatorFunction -> (1 &), Weights -> 1/(MyError2)^2,
  Method -> {NMinimize, Method -> "DifferentialEvolution"}]
MyFit2["BestFitParameters"]
MyFit2["MeanPredictionBands"]
````

The inability to create the bands apparently comes simply from taking the real part of the function. I tried this out on another, really easy function. If you take

````
ComplexExpand[Re[]]
````

of a real-valued function, this problem with creating the error bands goes away. In my case, taking the real part analytically (i.e. by hand) is not that easy, since it is created while I am fitting the data.

TL;DR: Simply taking the real part of any function will hinder Mathematica 8 from creating error bands from the fitted function.

- 5 Helping you would be difficult if you won't mention either the function or the data... – J.
M.♦ Nov 21 '12 at 14:44 – Ludwig Nov 21 '12 at 15:11 @Ludwig It would be better if you could just paste the code in your question. – VLC Nov 21 '12 at 15:25 How does your comment about `ComplexExpand` cast light on your problem? Seems irrelevant to me. – m_goldberg Nov 21 '12 at 15:42 I added a reduced code sample from the file you linked. – Sjoerd C. de Vries Nov 21 '12 at 17:06 show 1 more comment

## 1 Answer

Please report this issue to `[email protected]`. Meanwhile observe that changing `msymm2fit` to get rid of `Re` makes the issue go away:

````
msymm2fit = 0.145` mV^6 Log[(0.5929`/mV^2)] + 12 b0V \[Nu]^2 + 8 bDV \[Nu]^2
````

- I removed about five lines of code from the OP's original function definition to get a usable, small example. I assume, but haven't checked, that the `Re` is necessary in the original version and cannot be so easily removed there. BTW: did you just throw away a minus sign in the Log, or did I miss something? – Sjoerd C. de Vries Nov 21 '12 at 17:40 Yes, I just removed the minus sign using $\Re\left(\log(z)\right) = \log\left(|z|\right)$. – Sasha Nov 21 '12 at 17:47 This still does not solve my problem, because I cannot take the real part by hand. So far, nothing I tried works, but I sent an email to Wolfram support; let's see what they come up with. – Ludwig Nov 22 '12 at 9:15
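Sasha's simplification rests on the identity $\Re\left(\log(z)\right) = \log\left(|z|\right)$, which holds for any nonzero $z$ with the principal branch of the logarithm, since $\log(z) = \log|z| + i\arg(z)$. A quick numerical check of that identity (in Python rather than Mathematica, purely for illustration):

```python
import cmath
import math

# For the principal branch, log(z) = log|z| + i*arg(z), so the real
# part of log(z) equals log|z| even when z is negative or complex.
for z in (-2.0, -0.5929, 3 + 4j, -1 - 1j):
    assert math.isclose(cmath.log(z).real, math.log(abs(z)))

print("Re(log z) == log|z| holds for all test points")
```

This is why dropping the minus sign inside the `Log` leaves the real part of the model unchanged.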
http://math.stackexchange.com/questions/213549/about-proving-a-arguments-is-valid
# About proving an argument is valid

I have a question about proving whether an argument is valid. Again, I cannot really understand the solution. The question is like this:

Determine if the following arguments are valid.

1. It is not the case that IBM or Xerox will take over the copier market. If RCA returns to the computer market, then IBM will take over the copier market. Hence, RCA will not return to the computer market.

The solution is like this. Let a denote "IBM will take over the copier market", x "Xerox will take over the copier market", r "RCA returns to the computer market". Then we have the following argument:

$$\begin{array}{l}\lnot(a\lor x)\\ r \rightarrow a\\ \hline \therefore\ \lnot r\end{array}$$

1. $\lnot (a \lor x)$ $\quad$ premise
2. $\lnot a \land \lnot x$ $\quad$ from 1
3. $\lnot a$ $\quad$ from 2
4. $r \rightarrow a$ $\quad$ premise
5. $\lnot a \rightarrow \lnot r$ $\quad$ from 4
6. $\lnot r$ $\quad$ from 3 and 5

and the statement is valid.

Why can we go from step 2 to step 3? Obviously, "It is not the case that IBM or Xerox will take over the copier market" is not equal to "It is not the case that IBM will take over the copier market".

- You copied (2) incorrectly, or there was a typo in your source: it should be $\lnot a\land\lnot x$, derived from (1) by de Morgan's law. And from that you clearly can get $\lnot a$. – Brian M. Scott Oct 14 '12 at 10:04 Step 2 should read $\neg a \wedge \neg x$ by De Morgan's laws – Shaktal Oct 14 '12 at 10:04 Corrected, but how did step 2 go to step 3? – Samuel Oct 14 '12 at 10:09 $p\land q\to p$, as you can check from the truth table, so if you have $p\land q$, you can infer $p$. Here $p$ is $\lnot a$, and $q$ is $\lnot x$. – Brian M. Scott Oct 14 '12 at 10:12 Every step means having a $\rightarrow$ in the middle; I thought it was the "=" sign. Oh, I see. Thanks. – Samuel Oct 14 '12 at 10:20

## 1 Answer

The move from line 2 to line 3 is called conjunction elimination.
It says 'if I know that (A and B) is true, then I know that A is true', and likewise 'if I know that (A and B) is true, then I know that B is true', where A and B are well-formed formulas. -
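More generally, the validity of a short propositional argument like this one can be checked mechanically: enumerate all truth assignments and confirm the conclusion holds whenever every premise does. A small sketch for the argument in the question:

```python
from itertools import product

# a: "IBM takes over the copier market", x: "Xerox takes over",
# r: "RCA returns to the computer market"
def implies(p, q):
    return (not p) or q

# The argument is valid iff the conclusion (not r) is true in every
# row of the truth table where both premises hold.
valid = all(
    (not r)
    for a, x, r in product([True, False], repeat=3)
    if not (a or x) and implies(r, a)
)
print(valid)  # prints True
```

Only one assignment satisfies both premises (a, x, r all false), and there the conclusion holds, so the argument is valid, in agreement with the six-line derivation.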
http://unapologetic.wordpress.com/2009/09/29/uniqueness-of-the-differential/?like=1&source=post_flair&_wpnonce=5b12459335
# The Unapologetic Mathematician ## Uniqueness of the Differential Okay, for the moment let’s pick an orthonormal basis $\left\{e_i\right\}_{i=1}^n$ for our vector space $\mathbb{R}^n$. This gives us coordinates on the Euclidean space of points. It also gives us the dual basis $\left\{\eta^i\right\}_{i=1}^n$ of the dual space $\left(\mathbb{R}^n\right)^*$. This lets us write any linear functional $\lambda:\mathbb{R}^n\rightarrow\mathbb{R}$ as a unique linear combination $\lambda=\lambda_i\eta^i$. The component $\lambda_i$ measures how much weight we give to the distance a vector extends in the $e_i$ direction. Now if we look at a particular point $x$ we can put it into our differential and leave the second (vector) slot blank: $df(x;\hphantom{\underline{x}})$. We will also write this simply as $df(x)$, and apply it to a vector by setting the vector just to its right: $df(x;t)=df(x)t$. Now $df(x)$ is a linear functional, and we can regard $df$ as a function from our space of points to the dual of the space of displacements. We can thus write it out uniquely in components $df(x)=\lambda_i(x)\eta^i$, where each $\lambda_i$ is a function of the point $x$, but not of the displacement $t$. We want to analyze these components. I assert that these are just the partial derivatives in terms of the orthonormal basis we’ve chosen: $\lambda_i(x)=\left[D_{e_i}f\right](x)$. In particular, I’m asserting that if the differential exists, then the partial derivatives exist as well. By the definition of the differential, for every $\epsilon>0$ there is a $\delta>0$ so that if $\delta>\lVert t\rVert>0$, then $\displaystyle\lvert\left[f(x+t)-f(x)\right]-df(x;t)\rvert<\epsilon\lVert t\rVert$ Now we can write $df(x;t)$ out in components $\displaystyle df(x;t)=\lambda_i(x)t^i$ Next for a specific index $k$ we can pick $t=\tau e_k$ for some value $\delta>\lvert\tau\rvert>0$. Then $\lVert t\rVert=\lvert\tau\rvert$, $t^k=\tau$, and $t^i=0$ for all the other indices $i\neq k$. 
Putting all these and the component representation of $df(x;t)$ into the definition of the differential, we find

$\displaystyle\lvert\left[f(x+\tau e_k)-f(x)\right]-\lambda_k(x)\tau\rvert<\epsilon\lvert\tau\rvert$

Dividing through by $\lvert\tau\rvert$ we find

$\displaystyle\left\lvert\frac{f(x+\tau e_k)-f(x)}{\tau}-\lambda_k(x)\right\rvert<\epsilon$

And this is exactly what we need to find that $\left[D_{e_k}f\right](x)$ exists and equals $\lambda_k(x)$. Therefore if the function $f$ has a differential $df(x)$ at the point $x$, then it has all partial derivatives there, and these uniquely determine the differential at that point.

Posted by John Armstrong | Analysis, Calculus

## 10 Comments »

1. [...] of showing that the differential of a function at a point — if it exists at all — is unique (and thus we can say "the" differential), we showed that given an orthonormal basis we [...] Pingback by | September 30, 2009 | Reply

2. [...] haven't yet seen any conditions that tell us that any such function exists. We know from the uniqueness proof that if it does exist, then given an orthonormal basis we have all partial derivatives, and [...] Pingback by | October 1, 2009 | Reply

3. minor typo: epsilon-eta confusion at beginning. I always wonder if this sort of thing is worth pointing out. Comment by Avery Andrews | October 1, 2009 | Reply

4. You may as well, since it's easy enough to correct. Usually that sort of thing happens when I go back to review my earlier pieces, use old notation, then later in the process of writing decide to change it. The editor for WordPress isn't the best, but I have to use it if I'm going to get any sort of preview. Comment by | October 1, 2009 | Reply

5. [...] showed that these partial derivatives are the components of the differential (when it exists), and so there should be some connection between the two [...] Pingback by | October 5, 2009 | Reply

6. [...]
Chain Rule Since the components of the differential are given by partial derivatives, and partial derivatives (like all single-variable derivatives) [...] Pingback by | October 7, 2009 | Reply 7. [...] and is given by the product of these matrices, and the entries of the resulting matrix must (by uniqueness) be the partial derivatives of the composite [...] Pingback by | October 8, 2009 | Reply 8. [...] by uniqueness we can read off the partial derivatives of in terms of and [...] Pingback by | October 12, 2009 | Reply 9. [...] First let's look at the second-order differential of a real-valued function of variables . We'll use the as a basis for the space of differentials, which allows us to write out the components of the differential: [...] Pingback by | October 16, 2009 | Reply 10. [...] the partial derivative either does not exist or is equal to zero at . And because the differential subsumes the partial derivatives, if any of them fail to exist the differential must fail to exist as well. On the other hand, if [...] Pingback by | November 23, 2009 | Reply
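The post's result (the components of $df(x)$ are exactly the partial derivatives) is easy to sanity-check numerically. A sketch using a hypothetical example function $f(x,y) = x^2 y$, whose partial derivatives are $2xy$ and $x^2$: the difference $f(x+t)-f(x)-df(x;t)$ should be much smaller than $\lVert t\rVert$ for small displacements $t$.

```python
def f(x, y):
    return x * x * y

def df(x, y, t):
    """The differential at (x, y) applied to a displacement t = (t1, t2),
    with the analytic partial derivatives 2xy and x^2 as components."""
    return 2 * x * y * t[0] + x * x * t[1]

p = (1.0, 2.0)
t = (1e-4, -2e-4)

# f(p + t) - f(p) should agree with df(p; t) up to o(|t|):
# the error is quadratic in t, hence far smaller than |t| ~ 1e-4.
actual = f(p[0] + t[0], p[1] + t[1]) - f(*p)
linear = df(*p, t)
print(abs(actual - linear))
```

The error here is of order $10^{-8}$, four orders of magnitude below $\lVert t\rVert$, which is precisely the $\epsilon\lVert t\rVert$ bound in the definition.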
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_6&oldid=31243
User:Michiexile/MATH198/Lecture 6

From HaskellWiki Revision as of 02:34, 28 October 2009 by Michiexile (Talk | contribs)

IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.

1 Useful limits and colimits

With the tools of limits and colimits at hand, we can start using these to introduce more category-theoretical constructions - and some of these turn out to correspond to things we've seen in other areas. Possibly the most important among them are the equalizers and coequalizers (with kernels (nullspaces) and images as special cases), and the pullbacks and pushouts (with which we can make explicit the idea of inverse images of functions).

One useful theorem to know about is:

Theorem The following are equivalent for a category C:
• C has all finite limits.
• C has all finite products and all equalizers.
• C has all pullbacks and a terminal object.

Also, the following dual statements are equivalent:
• C has all finite colimits.
• C has all finite coproducts and all coequalizers.
• C has all pushouts and an initial object.

For this theorem, we can replace finite with any other cardinality in every place it occurs, and we will still get a valid theorem.

====Equalizer, coequalizer====

Consider the equalizer diagram: A limit over this diagram is an object C and arrows to all diagram objects. The commutativity conditions for the arrows defined force $f\circ p_A = p_B = g\circ p_A$, and thus, keeping this enforced equation in mind, we can summarize the cone diagram as: Now, the limit condition tells us that this is the least restrictive way we can map into A with some map p such that $f\circ p = g\circ p$, in that every other way we could map in that way will factor through this one.
As usual, it is helpful to consider the situation in Set to make sense of any categorical definition, and the situation there is helped by the generalized-element viewpoint: the limit object C is one representative of a subobject of A that, for the case of Set, contains all $x\in A: f(x) = g(x)$. Hence the word we use for this construction: the limit of the diagram above is the equalizer of f,g. It captures the idea of a maximal subset unable to distinguish two given functions, and it introduces a categorical way to define things by equations we require them to respect.

One important special case of the equalizer is the kernel: in a category with a null object, we have a distinguished, unique member 0 of any homset, given by the compositions of the unique arrows to and from the null object. We define the kernel Ker(f) of an arrow f to be the equalizer of f,0. Keeping in mind the arrow-centric view on categories, we tend to denote the arrow from Ker(f) to the source of f by ker(f).

In the category of vector spaces and linear maps, the map 0 really is the constant map taking the value 0 everywhere. And the kernel of a linear map $f:U\to V$ is the equalizer of f,0. Thus it is some vector space W with a map $i:W\to U$ such that $fi = 0i = 0$, and any other map that fulfills this condition factors through W. Certainly the vector space $\{u\in U: f(u)=0\}$ fulfills the requisite condition; nothing larger will do, since then the map composition wouldn't be 0, and nothing smaller will do, since then the maps factoring this space through the smaller candidate would not be unique. Hence, $Ker(f) = \{u\in U: f(u) = 0\}$, just as we might expect.

Dually, we get the coequalizer as the colimit of the equalizer diagram. A coequalizer has to fulfill $i_B\circ f = i_A = i_B\circ g$. Thus, writing $q = i_B$, we get an object with an arrow (actually, an epimorphism out of B) that identifies f and g.
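Both constructions can be spelled out directly for finite sets. A sketch (the helper names are mine; this is only the Set case, not a general categorical implementation):

```python
def equalizer(f, g, A):
    """The equalizer of f, g : A -> B in Set: the largest subset of A
    on which f and g agree."""
    return {x for x in A if f(x) == g(x)}

def coequalizer(f, g, A, B):
    """The coequalizer of f, g : A -> B in Set: the quotient of B by
    the equivalence relation generated by f(a) ~ g(a)."""
    classes = {b: {b} for b in B}   # start with singleton classes
    for a in A:
        u, v = classes[f(a)], classes[g(a)]
        if u is not v:              # merge the two classes
            u |= v
            for b in v:
                classes[b] = u
    return {frozenset(c) for c in classes.values()}

A = range(-3, 4)
print(equalizer(lambda x: x * x, lambda x: x, A))          # {0, 1}
print(coequalizer(lambda x: x, lambda x: -x, A, set(A)))   # classes {n, -n}
```

The coequalizer output is exactly the set of equivalence classes, matching the quotient description below.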
Hence, we can think of $i_B:B\to Q$ as capturing the notion of inducing equivalence classes from the functions. This becomes clear if we pick out one specific example: let $R\subseteq X\times X$ be an equivalence relation, and consider the diagram where $r_1$ and $r_2$ are given by the projection of the inclusion of the relation into the product onto either factor. Then, the coequalizer of this setup is an object X / R such that whenever $x \sim_R y$, then $q(x) = q(y)$.

1.1 Pullbacks

The preimage $f^{-1}(T)$ of a subset $T\subseteq S$ along a function $f:U\to S$ is a maximal subset $V\subseteq U$ such that for every $v\in V: f(v)\in T$. We recall that subsets are given by (equivalence classes of) monics, and thus we end up being able to frame this in purely categorical terms. Given a diagram like this: where i is a monomorphism representing the subobject, we need to find an object V with a monomorphism injecting it into U such that f composed with this injection factors through T. Thus we're looking for dotted maps making the diagram commute, in a universal manner. The maximality of the subobject means that any other subobject of U that can be factored through T should factor through V.

Suppose U,V are subsets of some set W. Their intersection $U\cap V$ is a subset of U, a subset of V and a subset of W, maximal with this property. Translating into categorical language, where we pick representatives for all the subobjects in the definition, we get a diagram with all monomorphisms: where we require that the inclusion of $U\cap V$ into W via U is the same as the inclusion via V.

Definition A pullback of two maps $A \rightarrow^f C \leftarrow^g B$ is the limit of these two maps, thus: By the definition of a limit, this means that the pullback is an object D with maps $\bar f: D\to B$, $\bar g: D\to A$ and $f\bar g = g\bar f : D\to C$, such that any other such object factors through this.
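In Set, the pullback has the concrete description of the fiber product $A \times_C B = \{(a,b) : f(a) = g(b)\}$, with the two projections as $\bar f$ and $\bar g$. A sketch (the helper is hypothetical, not from a library):

```python
def pullback(f, g, A, B):
    """The pullback (fiber product) of f : A -> C and g : B -> C in Set,
    together with its two projection maps."""
    D = {(a, b) for a in A for b in B if f(a) == g(b)}
    proj_A = lambda pair: pair[0]
    proj_B = lambda pair: pair[1]
    return D, proj_A, proj_B

# Preimage as a pullback: pull f back along the inclusion of T into S.
A = range(-4, 5)
T = {0, 1, 4}                      # a subset of the codomain S = Z
D, pA, _ = pullback(lambda a: a * a, lambda t: t, A, T)
preimage = {pA(pair) for pair in D}
print(preimage)  # {-2, -1, 0, 1, 2}: the a with a^2 in T
```

Projecting the fiber product onto its first factor recovers exactly $f^{-1}(T)$, matching the preimage discussion above.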
For the diagram $U\rightarrow^f S \leftarrow^i T$, with $i:T\to S$ one representative monomorphism for the subobject, we get precisely the definition above for the inverse image. For the diagram $U\rightarrow W \leftarrow V$ with both maps monomorphisms representing their subobjects, the pullback is the intersection.

1.2 Pushouts

Often, especially in geometry and algebra, we construct new structures by gluing together old structures along substructures. Possibly the best-known example is the Möbius band: we take a strip of paper, twist it once and glue the ends together. Similarly, in algebraic contexts, we can form amalgamated products that do roughly the same. All these are instances of the dual to the pullback:

Definition A pushout of two maps $A\leftarrow^f C\rightarrow^g B$ is the colimit of these two maps, thus: Hence, the pushout is an object D such that C maps to the same place both ways, and such that, contingent on this, it behaves much like a coproduct.

2 Free and forgetful functors

Recall how we defined a free monoid as all strings over some alphabet, with concatenation of strings as the monoidal operation. And recall how we defined the free category on a graph as the category of paths in the graph, with path concatenation as the operation. The reason we chose the word free to denote both these cases is far from a coincidence: by this point nobody will be surprised to hear that we can unify the idea of generating the most general object of a particular algebraic structure into a single categorical idea.

The idea of the free constructions, classically, is to introduce as few additional relations as possible while still generating a valid object of the appropriate type, given a set of generators we view as placeholders, as symbols. Having a minimal amount of relations allows us to introduce further relations later, by imposing new equalities by mapping with surjections to other structures.
One of the first observations we can make in each of these cases is that such a map ends up being completely determined by where the generators go - the symbols we use to generate. And since the free structure is made to fulfill the axioms of whatever structure we're working with, these generators combine, even after mapping to some other structure, in a way compatible with all the structure.

To make solid categorical sense of this, however, we need to couple the construction of a free algebraic structure from a set (or a graph, or ...) with another construction: we can define the forgetful functor from monoids to sets by just picking out the elements of the monoid as a set, and from categories to graphs by just picking the underlying graph, forgetting about the composition of arrows.

Now we have what we need to pinpoint just what kind of functor the "free widget generated by" construction gives. It's a functor $F: C\to D$, coupled with a forgetful functor $U: D\to C$, such that any map $S\to U(N)$ in C induces one unique mapping $F(S)\to N$ in D. For the case of monoids and sets, this means that if we take our generating set and map it into the set of elements of another monoid, this generates a unique mapping of the corresponding monoids.

This is all captured by the same kind of diagrams-and-uniquely-existing-maps argument as the previous object and morphism properties were defined with. We'll show the definition for the example of monoids.

Definition A free monoid on a generating set X is a monoid F(X) such that there is an inclusion $i_X: X\to UF(X)$ and for every function $f: X\to U(M)$ for some other monoid M, there is a unique homomorphism $g:F(X)\to M$ such that $f = U(g)\circ i_X$, or in other words such that this diagram commutes: We can construct a map $\phi:Hom_{Mon}(F(X),M) \to Hom_{Set}(X,U(M))$ by $\phi: g\mapsto U(g)\circ i_X$. The above definition says that this map is an isomorphism.
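The universal property is easy to exhibit concretely for the free monoid on a finite alphabet, with words as lists and concatenation as the operation. A sketch (the names are mine, chosen for illustration):

```python
from functools import reduce

def extend(f, op, unit):
    """Given f : X -> U(M) into a monoid (M, op, unit), return the unique
    monoid homomorphism g : F(X) -> M with g([x]) == f(x), obtained by
    folding op over the images of the letters of a word."""
    return lambda word: reduce(op, (f(x) for x in word), unit)

# Map the generators {'a', 'b'} into the monoid (int, +, 0) ...
f = {'a': 1, 'b': 2}.get
g = extend(f, lambda m, n: m + n, 0)

# ... and check the homomorphism property on concatenation:
u, v = list('aba'), list('bb')
print(g(u + v), g(u) + g(v))  # both are 8
```

Since the generators determine where every word goes, `extend` really is the unique homomorphism the definition demands; this fold is exactly the map $\phi^{-1}$ inverse to $g\mapsto U(g)\circ i_X$.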
3 Adjunctions

Modeling on the way we construct free and forgetful functors, we can form a powerful categorical concept, which ends up generalizing much of what we've already seen, and which also leads us on towards monads. We draw on the definition above of free monoids to give a preliminary definition. This will be replaced later by an equivalent definition that gives more insight.

Definition A pair of functors is called an adjoint pair or an adjunction, with F called the left adjoint and U called the right adjoint, if there is a natural transformation $\eta: 1\to UF$ and, for every $f:A\to U(B)$, there is a unique $g: F(A)\to B$ such that the diagram below commutes. The natural transformation η is called the unit of the adjunction.

This definition, however, has a significant amount of asymmetry: we can start with some $f:A\to U(B)$ and generate a $g: F(A)\to B$, while there are no immediate guarantees for the other direction. However, there is a proposition we can prove leading us to a more symmetric statement:

Proposition For categories and functors the following conditions are equivalent:
1. F is left adjoint to U.
2. For any $c\in C_0$, $d\in D_0$, there is an isomorphism $\phi: Hom_D(Fc, d) \to Hom_C(c,Ud)$, natural in both c and d.
Moreover, the two conditions are related by the formulas
• $\phi(g) = U(g) \circ \eta_c$
• $\eta_c = \phi(1_{Fc})$

Proof sketch For (1 implies 2), the isomorphism is given by the end of the statement, and it is an isomorphism exactly because of the unit property, viz. that every $f:A\to U(B)$ generates a unique $g: F(A)\to B$. Naturality follows by building the naturality diagrams Image:NaturalityDiagramsAdjointProposition.png and chasing through with a $f: Fc\to d$. For (2 implies 1), we start out with a natural isomorphism φ. We find the necessary natural transformation $\eta_c$ by considering $\phi: Hom(Fc,Fc) \to Hom(c, UFc)$. QED.
By dualizing the proof, we get the following statement:

Proposition For categories and functors the following conditions are equivalent:
1. For any $c\in C_0$, $d\in D_0$, there is an isomorphism $\phi: Hom_D(Fc, d) \to Hom_C(c,Ud)$, natural in both c and d.
2. There is a natural transformation $\epsilon: FU \to 1_D$ with the property that for any $g: F(c) \to d$ there is a unique $f: c\to U(d)$ such that $g = \epsilon_d\circ F(f)$, as in the diagram Image:NaturalityDiagramsDualAdjointProposition.png
Moreover, the two conditions are related by the formulas
• $\psi(f) = \epsilon_d\circ F(f)$
• $\epsilon_d = \psi(1_{Ud})$
where $\psi = \phi^{-1}$.

Hence, we have an equivalent definition with higher generality, more symmetry and more horsepower, as it were:

Definition An adjunction consists of functors and a natural isomorphism. The unit η and the counit ε of the adjunction are the natural transformations given by:
• $\eta: 1_C\to UF: \eta_c = \phi(1_{Fc})$
• $\epsilon: FU\to 1_D: \epsilon_d = \psi(1_{Ud})$.

Some of the examples we have had difficulties fitting into the limits framework show up as adjunctions: the free and forgetful functors are adjoints; and indeed, a more natural definition of what it means to be free is that it is a left adjoint to some forgetful functor. Curry and uncurry, in the definition of an exponential object, are an adjoint pair. The functor $-\times A: X\mapsto X\times A$ has right adjoint $-^A: Y\mapsto Y^A$.

3.1 Notational aid

One way to write the adjoint is as a bidirectional rewrite rule: $\frac{F(X) \to Y}{X\to G(Y)}$, where the statement is that the hom sets indicated by the upper and lower arrow, respectively, are transformed into each other by the unit and counit respectively. The left adjoint is the one that has the functor application on the left hand side of this diagram, and the right adjoint is the one with the functor application to the right.

4 Homework

A complete homework is 6 out of 10 exercises.
1. Prove that an equalizer is a monomorphism.
2.
Prove that a coequalizer is an epimorphism.
3. Prove that given any relation $R\subseteq X\times X$, its completion to an equivalence relation is the kernel of the coequalizer of the component maps of the relation.
4. Prove that if the right arrow in a pullback square is a mono, then so is the left arrow. Thus the intersection as a pullback really is a subobject.
5. Prove that if both the arrows in the pullback 'corner' are mono, then the arrows of the pullback cone are all mono.
6. What is the pullback in the category of posets?
7. What is the pushout in the category of posets?
8. Prove that the exponential and the product functors above are adjoints. What are the unit and counit?
9. (worth 4pt) Consider the unique functor $!:C\to 1$ to the terminal category.
   1. Does it have a left adjoint? What is it?
   2. Does it have a right adjoint? What is it?
10. Suppose is an adjoint pair. Find a natural transformation $FUF\to F$. Conclude that there is a natural transformation $\mu: UFUF\to UF$. Prove that this is associative, in other words that the diagram commutes.
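Exercise 8 concerns the product and exponential functors as adjoints. As a concrete warm-up (a sketch of my own, not part of the homework solutions), `curry` and `uncurry` implement the two directions of the hom-set isomorphism $Hom(X\times A, Y) \cong Hom(X, Y^A)$:

```python
# The natural isomorphism Hom(X x A, Y) ~ Hom(X, Y^A) and its inverse.
def curry(g):
    """Send g : X x A -> Y to curry(g) : X -> (A -> Y)."""
    return lambda x: lambda a: g((x, a))

def uncurry(f):
    """Send f : X -> (A -> Y) back to uncurry(f) : X x A -> Y."""
    return lambda pair: f(pair[0])(pair[1])

# A sample map g : int x int -> int to round-trip through the isomorphism.
g = lambda pair: pair[0] * 10 + pair[1]
```

Round-tripping any `g` through `curry` and `uncurry` returns a function that agrees with `g` pointwise, which is what witnessing the bijection amounts to in this setting.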
http://mathhelpforum.com/discrete-math/161603-set.html
# Thread:

1. ## Set

Hi everybody,

How can one show that there exists a set $\mathbb{C}=\{a+ib : a,b \in \mathbb{R}\}$ such that $i^2=-1$?

2. Starting from what basis? If you are given a "symbol", i, such that $i^2= -1$, then the rest is easy. If not, then the usual construction of the complex numbers is:
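The reply breaks off here; the usual construction it refers to is commonly taken to be $\mathbb{R}^2$ with componentwise addition and the product $(a,b)(c,d) = (ac-bd,\ ad+bc)$, after which $i = (0,1)$ satisfies $i^2 = (-1,0)$. A minimal sketch of that multiplication (my own illustration, not from the thread):

```python
# Complex numbers as ordered pairs of reals, with the standard product
# (a, b)(c, d) = (ac - bd, ad + bc).
def cmul(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

i = (0, 1)    # the "imaginary unit"
one = (1, 0)  # the multiplicative identity
```

Here `cmul(i, i)` gives `(-1, 0)`, i.e. $i^2 = -1$ once each pair $(a, 0)$ is identified with the real number $a$.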
http://www.physicsforums.com/showpost.php?p=1768085&postcount=16
Recall the basic computational formula for the coordinate representation [T] of a linear transformation T: [T(v)] = [v] [T] (Incidentally, this is the main reason I prefer the convention where vectors are column vectors; so that this identity isn't 'backwards') Or, with indices, if w = T(v), then $w^i = T^i_j v^j$ If you want to try working it out again, then don't read below this point. Okay, the calculations work out to: $$[v]_{B'} [G]_{B'} = [G(v)]_{B'} = [G(v)]_B \Lambda = [v]_B [G]_B \Lambda = [v]_{B'} \Lambda^{-1} [G]_B \Lambda$$ and so $$[v]_{B'} \left( [G]_{B'} - \Lambda^{-1} [G]_B \Lambda \right) = 0$$ which implies, because this is true for all coordinate tuples $[v]_{B'}$, $$[G]_{B'} = \Lambda^{-1} [G]_B \Lambda$$ Your observation was correct -- this is different than the constraint we needed on G in order for our naive attempt at defining an inner product to work! The notion of an inner product clearly makes sense -- but it is not so clear that inner products bear a nice relation to linear transformations of tangent vectors. Now that we've uncovered a contradiction, how would you proceed in developing differential geometry?
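The derived transformation law can be sanity-checked numerically. The sketch below is my own, using the post's row-vector convention, so coordinates transform as $[x]_{B'} = [x]_B \Lambda$; the variable names are assumptions:

```python
import numpy as np

# Row-vector convention: [w] = [v][T].  Coordinates transform as
# [x]_{B'} = [x]_B @ L, so the derived law reads [G]_{B'} = inv(L) @ [G]_B @ L.
rng = np.random.default_rng(0)
L = rng.normal(size=(2, 2))        # change-of-basis matrix, assumed invertible
G_B = rng.normal(size=(2, 2))      # [G]_B in the old basis
G_Bp = np.linalg.inv(L) @ G_B @ L  # [G]_{B'} from the derived formula

v_B = rng.normal(size=(1, 2))      # a coordinate row vector in basis B
v_Bp = v_B @ L                     # the same vector's coordinates in B'

# Both routes to [G(v)]_{B'} must agree:
assert np.allclose(v_Bp @ G_Bp, (v_B @ G_B) @ L)
```

The check succeeds for any invertible L, since `v_Bp @ G_Bp` expands algebraically to `v_B @ L @ inv(L) @ G_B @ L`.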
http://mathoverflow.net/questions/103705/global-definition-of-the-dolbeault-complex-of-a-vector-bundle/103714
Global Definition of the Dolbeault Complex of a Vector Bundle

For a $2n$-dimensional complex manifold $M$, and a smooth vector bundle $E$ over $M$, it is well-known (see Voisin, Huybrechts) that there exists an operator $\overline{\partial}$, built locally from the usual anti-holomorphic derivative, that acts on $\Gamma^{\infty}(E) \otimes_{C^{\infty}} \Omega^{(0,\bullet)}(M)$ so as to give a complex $$\Gamma^{\infty}(E) \overset{\overline{\partial}}{\to} \Gamma^{\infty}(E) \otimes_{C^{\infty}} \Omega^{(0,1)}(M) \overset{\overline{\partial}}{\to} \cdots \overset{\overline{\partial}}{\to} \Gamma^{\infty}(E) \otimes_{C^{\infty}} \Omega^{(0,n)}(M)\overset{\overline{\partial}}{\to} 0.$$ As I prefer global constructions, I began to wonder how one would construct this globally. To apply id $\otimes \overline{\partial}$ to $\Gamma^{\infty}(E) \otimes_{C^{\infty}} \Omega^{(0,\bullet)}(M)$ is of course not well-defined since we are tensoring over ${C^{\infty}(M)}$. So what can one do . . .? P.S. Does such a complex exist in the purely real case, and if not, then why not? - Locally $\overline{\partial}_E$ "looks like" $\overline{\partial}$, but this is because you have chosen a holomorphic trivialisation of $E$. As a global operator $\overline{\partial}_E$ is not something that you can get from $\overline{\partial}$, i.e., from the complex structure on $M$. It is determined by (and determines) the holomorphic structure on $E$. There may be many such (or none) on the same complex vector bundle. – Peter Dalakov Aug 1 at 19:36 2 Answers To get the Dolbeault complex, you need a choice of holomorphic structure on $E$, not just a smooth one.
If $\mathcal{E}$ is the locally free sheaf of $\mathcal{O}_M$-modules corresponding to $E$, then $\mathcal{E}\subset \mathcal{A}^0(\mathcal{E})=\mathcal{C}^\infty_M\otimes_{\mathcal{O}_M}\mathcal{E}$ and $\overline{\partial}_E: \mathcal{A}^0(\mathcal{E})\to \mathcal{A}^{0,1}(\mathcal{E})$ is the unique morphism (of sheaves of $\mathbb{C}$ vector spaces) satisfying $$\overline{\partial}_E(f\sigma) = \overline{\partial}f\otimes \sigma + f\overline{\partial}_E(\sigma),$$ for any smooth function $f$ and $\sigma$ a smooth section of $E$, such that $\left. \overline{\partial}_E\right| _{\mathcal{E}}=0$. The first term in the Leibniz formula involves $\overline{\partial}=d^{0,1}$. You can then extend the Dolbeault operator to $\overline{\partial}_E: \mathcal{A}^{0,p}(\mathcal{E})\to \mathcal{A}^{0,p+1}(\mathcal{E})$, $\overline{\partial}_E^2=0$, by imposing the Leibniz rule with the usual sign. This gives you the Dolbeault resolution $$0\to \mathcal{E}\to \mathcal{A}^0(\mathcal{E})\to \mathcal{A}^{0,1}(\mathcal{E})\to\ldots$$ The complex you write is obtained by passing to global sections of $\mathcal{A}^{0,\bullet}(\mathcal{E})$. You cannot do any of this without the holomorphic structure. Differently put, you need the total space of $E$ to be a complex manifold and the projection $E\to M$ to be holomorphic. You cannot play the same game in the real case, but if you are willing to assume that $E$ carries a flat connection, then you can look at the de Rham resolution of the corresponding local system, as David explains. ADDENDUM The requirement that a (smooth) complex vector bundle $V$ admits a holomorphic structure $\overline{\partial}_E$ is non-trivial. It can be phrased as follows: $V$ admits a holomorphic structure if and only if it admits a connection, $D$, such that $D^{0,1}\circ D^{0,1}=0$, i.e., a connection for which the $(0,2)$ component of the curvature vanishes. 
- So in the Kähler case all holomorphic structures on $E$ induce the same set of holomorphic sections? – Jean Delinez Aug 1 at 21:00 There can be many inequivalent holomorphic structures even in the Kaehler case. For example, if $M$ is a Riemann surface of genus $g>1$, then the trivial line bundle admits a whole $g$-dimensional torus of non-isomorphic holomorphic structures, $\textrm{Pic}^0(M)$. – Peter Dalakov Aug 1 at 21:30 There is an analogous construction in the purely real case. The analogue of the sheaf of holomorphic functions is the sheaf $LC$ of locally constant functions. A locally free $LC$-module is equivalent to a local system, and hence to a vector bundle $V$ with integrable connection $\nabla: V \to V \otimes \Omega^1$. One then has the exact sequence of sheaves: $$LC(V) \to C^{\infty}(V) \to C^{\infty}(V) \otimes \Omega^1 \to C^{\infty}(V) \otimes \Omega^2 \to \cdots \to C^{\infty}(V) \otimes \Omega^n.$$ Here $LC(V)$ is the sheaf of locally constant sections of the local system $V$, and $C^{\infty}(V)$ is the sheaf of smooth sections. Just as in the complex case, one can then show that the sheaf cohomology $H^q(X, V)$ is isomorphic to the de Rham cohomology $$\mathrm{Ker}(C^{\infty}(V) \otimes \Omega^q \to C^{\infty}(V) \otimes \Omega^{q+1}) / \mathrm{Im} (C^{\infty}(V) \otimes \Omega^{q-1} \to C^{\infty}(V) \otimes \Omega^{q}).$$ In particular, if $V$ is the trivial local system, this is the isomorphism between sheaf cohomology and de Rham cohomology. Regarding your question about constructing $\bar{\partial}$ in a coordinate-free way: Surprisingly, I don't know how to do this. But I can give you a coordinate-free axiomatization of $\bar{\partial}$. Let $\mathcal{O}$ be the sheaf of holomorphic functions and $C^{\infty}$ the sheaf of smooth functions.
Let $M$ be a locally free $\mathcal{O}$ sheaf and let $C^{\infty}(M)$ be $M \otimes_{\mathcal{O}} C^{\infty}$. $\bar{\partial}: C^{\infty}(M) \to C^{\infty}(M) \otimes \Omega^{0,1}$ is the unique $\bar{\partial}$ connection annihilating $M$. Once you have used coordinates to check that this connection exists and is unique, you can probably prove anything you want about it faster from this description than from the coordinate formulae. -
http://physics.stackexchange.com/questions/2439/solving-straight-line-motion-question-for-time
# Solving straight-line motion question for time I apologise in advance if this question doesn't appeal to the advanced questions being asked in this Physics forum, but I'm a great fan of the Stack Exchange software and would trust the answers provided here to be more correct than that of Yahoo! Answers etc. A car is travelling with a constant speed of 80km/h and passes a stationary motorcycle policeman. The policeman sets off in pursuit, accelerating to 80km/h in 10 seconds reaching a constant speed of 100 km/h after a further 5 seconds. At what time will the policeman catch up with the car? The answer in the back of the book is 32.5 seconds. The steps/logic I completed/used to solve the equation were: - If you let x equal each other, the displacement will be the same, and the time can be solved algebraically. Therefore: $$x=vt$$ As the car is moving at 80km/h, we want to convert to m/s. 80/3.6 = 22.22m/s $$x=22.22t$$ As for the policeman, he reaches 22.22m/s in 10 seconds. $$\begin{aligned} x &= \frac12 (u+v) t \\ &= \frac12 \times 22.22 \times 10 \\ &= 111.11 \mathrm m \end{aligned}$$ The policeman progresses to travel a further 5 seconds and increases his speed to 100km/h. 100km/h -> m/s = 100 / 3.6 = 27.78m/s. $$\begin{aligned} x &= \frac12 (u+v) t \\ x &= \frac12 \times (22.22 + 27.78) \times 5 \\ x &= \frac12 \times 50 \times 5 \\ x &= 250 / 2 \\ x &= 125 \mathrm m \end{aligned}$$ By adding these two distances together we get 236.1m. So the equation I have is: $$22.22t = 27.78t - 236.1$$ Which solves to let t = 42.47s which is really wrong. - – Justin L. Jan 2 '11 at 9:42 I don't believe this solution helps as the police-car does not follow a consistent linear relationship. This would involve creating a linear regression which would be inaccurate and impractical for me. 
– RodgerB Jan 2 '11 at 15:37 ## 1 Answer Your mistake is in the equation $$22.22t = 27.78t - 236.1$$ Everything up to there made good sense, but if the police officer has already traveled 236 meters, you should add that to his distance traveled, not subtract it. You'll also need to account for the way the police officer only began traveling at full speed 15 seconds into the chase. Anyway, it is much easier to do the problem by thinking about the relative speeds. During the first ten seconds, the car is going 80kph and the police officer is going 40kph on average. So the police officer loses ground at an average of 40kph for 10 seconds. We can think of this as 10 seconds' worth of loss, and ask how many seconds' worth of loss the police officer gains as he speeds up further. In the next segment, the police officer gains ground at an average of 10kph for 5 seconds. He's gaining ground 1/4 as fast as he lost it earlier and does it for 5 seconds, so this makes up for 5/4 of a second's worth of loss, leaving 8 3/4 seconds' lost ground remaining. Finally he gains ground at 20kph until he catches up. He's gaining here at half the rate he was originally losing ground, so it takes him double the remaining seconds' worth time, or 17.5 seconds, to finish the pursuit. This method is much simpler to calculate, eliminating many opportunities for errors. - Thanks for the in-depth reply, the relative speeds has really helped me understand the logical process involved. Although this solves the problem, I'm still determined to find the algebraic way to solve it, to get the end answer of 32.5 (the relative speeds answer is 33.75 in which I assume precision has been lost). – RodgerB Jan 2 '11 at 12:50 So I've tried to find a logical method of accounting for the velocity of the full chase. I have solved the equation for 32.5m and the correct velocity is supposedly 14.9551. LHS = 22.22 * 32.5 (722.15 = v*32.5 + 236.11) v = 14.9551 We need to find the average velocity. 
At first I tried finding the mean, this left 16.62 which is wrong, because 2/3 of the time it was increasing speed 0-22.22, and 1/3 of the time it was increasing speed 22.22-27.78, and then constantly travelling at 27.78. So my working is: ((2/3)*22.22 + (1/3)*(27.78-22.22) + 27.78) / 3 Is this thinking logically correct? – RodgerB Jan 2 '11 at 14:36 @RodgerB The relative speeds answer is 32.5. I'm not sure where the 33.75 came from, but there is no loss of precision. My post even explicitly identified the length of the last segment as 17.5 seconds. If you add the 15 seconds before that you get 32.5. I can't understand your second comment. There are too many numbers popping up haphazardly. The correct version of the equation I earlier identified as your problem is $22.22t = 27.78(t-15) + 236.1$ – Mark Eichenlaub Jan 2 '11 at 15:23 Thank you very much, sorry for misinterpreting your answer (tis' close to 3am). Not even my TI-89 has the capacity to remember what it was that made it 33.75, but I was adding the numbers as your example progressed and didn't quite understand. I'll try reading over it once again after getting some sleep, it's been a long day. – RodgerB Jan 2 '11 at 15:53 @RodgerB Okay, hope it works out for you. – Mark Eichenlaub Jan 2 '11 at 16:05
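The corrected equation from the answer, $22.22t = 27.78(t-15) + 236.1$, can be checked in a few lines. This is a sketch of my own, using exact SI conversions instead of the rounded 22.22 and 27.78:

```python
# Pursuit problem: car at a constant 80 km/h; police reach 80 km/h in 10 s,
# then 100 km/h after 5 more seconds, and hold that speed.
v_car = 80 / 3.6    # m/s
v_max = 100 / 3.6   # m/s

# Police distance covered in the first 15 s (two average-speed segments):
d_15 = 0.5 * v_car * 10 + 0.5 * (v_car + v_max) * 5

# For t > 15 s the positions match when  v_car * t = d_15 + v_max * (t - 15):
t_catch = (d_15 - 15 * v_max) / (v_car - v_max)
```

`t_catch` comes out as 32.5 s (up to floating-point rounding), matching both the book's answer and the relative-speeds argument above.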
http://physics.stackexchange.com/questions/tagged/wavelength+electromagnetic-radiation
# Tagged Questions

### Is the de Broglie wavelength of a photon equal to the EM wavelength of the radiation?
Is the de Broglie (matter) wavelength $\lambda=\frac{h}{p}$ of a photon equal to the electromagnetic wavelength of the radiation? I guess yes, but how come that photons have both a matter wave and an ...

### Holograms? Sci Fi or future fact?
Based on how light behaves when it passes through mediums, i.e. the wavelength of light changes when it passes through mediums of different refractive indexes, wouldn't it be possible to convert ...

### If photons move linearly, what's actually stopping them from passing through a microwave oven mesh?
So, my understanding is that the wavelength of a photon is the distance traveled in the time it takes it's magnetic field to oscillate. And it's inversely proportional to it's energy and it's ...

### Why aren't the graphs for black body radiation straight lines?
We know that a wave which has greater frequency will have low wavelength and high energy. So, by decreasing the wavelength, the frequency and consequently energy (intensity) of that wave will increase ...

### Why is Near Field Communication (NFC) range limited to about 20cm?
Near Field Communication (NFC) operates at 13.56 MHz. Near Field is the region situated at a distance r << λ, λ = c/f ...

### What if $\gamma$-rays in Electron microscope?
I was referring Electron microscopes and read that the electrons have wavelength way less than that of visible light. But, the question I can't find an answer was that, If gamma radiation has the ...

### Is all kind of light same speed?
Is there any speed different between blue or red color? Is there speed different? or there are same speed?

### In electromagnetic radiation, how do electrons actually "move"?
I've always pictured EM radiation as a wave, in common drawings of radiation you would see it as a wave beam and that had clouded my understanding recently. Illustration on the simplest level: ...

### Why does wavelength change as light enters a different medium?
When light waves enter a medium of higher refractive index than the previous, why is it that: Its wavelength decreases? The frequency of it has to stay the same?

### Some questions about car radio and cellphone antennas
1-Why the antenna of the radio of cars is located outside the car and not inside? 2-If the answer to 1 is because that cars are like Faraday cages then how come my cell phone can receive signal ...

### Why is it necessary for an object to have a bigger size than the wavelength of light in order for us to see it?
I keep hearing this rule that an object must have a bigger size than the wavelength of light in order for us to see it, and though I don't have any professional relationship with physics, I want to ...

### What is the minimum wavelength of electromagnetic radiation?
As a first approximation, I don't see how a wavelength of less than 2 Planck distances could exist. The question is: are there any other limits that would come into play before that? For example: ...

### Is the number of wavelengths of light spanning a distance invariant with respect to spacetime distortion?
I was recently asked by a friend how the expansion of spacetime effects photons. I gave him what I feel is a satisfactory general response, but it got me wondering how, exactly to calculate this ...

### Merge different wavelength rays
Let's say an array of rays of light is given. Each ray has a specific wavelength (in the range of visible light). Example: ...

### Light emitted by an object according to its temperature
According to this picture the light emitted by an object depends on its temperature. That makes perfect sense when we heat a metal. As its temperature raises we see it red at first, then orange, ...

### Magnetron limits
What are the practical limits on generated wavelength in a Magnetron? We know that Magnetrons could be used efficiently for generating microwaves for water heating, or for radar applications, but ...

### What causes polarised materials to change colour under stress?
Our physics teacher showed the class a really interesting demonstration. He used two polarised filters in opposite orientations, then he took some antistatic tape and stretched it under the two ...
http://nrich.maths.org/6241
### Forgotten Number

I have forgotten the number of the combination of the lock on my briefcase. I did have a method for remembering it...

### Factoring Factorials

Find the highest power of 11 that will divide into 1000! exactly.

### Powerful Factorial

6! = 6 x 5 x 4 x 3 x 2 x 1. The highest power of 2 that divides exactly into 6! is 4 since (6!) / (2^4) = 45. What is the highest power of two that divides exactly into 100!?

# Weekly Problem 13 - 2009

##### Stage: 3 Short Challenge Level:

The symbol $50!$ represents the product of all the whole numbers from 1 to 50 inclusive; that is, $50!=1 \times 2 \times 3 \times \dots \times 49 \times 50$. If I were to calculate the actual value, how many zeros would the answer have at the end?

This problem is taken from the UKMT Mathematical Challenges.
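Counting the trailing zeros amounts to counting factors of 5 in $50!$ (factors of 2 are plentiful, so each 5 pairs with a 2 to give a 10), i.e. $\lfloor 50/5\rfloor + \lfloor 50/25\rfloor$. A small sketch of that count (my own, not part of the problem page):

```python
# Trailing zeros of n!: count factors of 5 in 1..n.
# floor(n/5) multiples of 5, floor(n/25) extra fives from multiples of 25, etc.
def trailing_zeros_factorial(n):
    count = 0
    power = 5
    while power <= n:
        count += n // power
        power *= 5
    return count
```

For n = 50 this gives 10 + 2 = 12 trailing zeros; the same idea, with 2 in place of 5, handles the "Powerful Factorial" problem above.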
http://www.speedylook.com/Speed.html
# Speed

see also: Etymology of Speed

## Definition

• In physics, speed (or rate) is a quantity that measures the ratio of some change to the time over which it occurs. Examples: sedimentation rate, rate of a chemical reaction, etc.
• In kinematics, velocity is a vector quantity that measures, for a motion, the ratio of the distance covered to the elapsed time.

## Speed in kinematics

One distinguishes:

• the curvilinear speed, which is the distance $d$ traversed along a curve per unit of time $t$. It is a scalar quantity: $v = \frac{d}{t}$
• the velocity vector, or velocity in space, which is the vector $\vec{v} = \frac{\mathrm{d}\vec{r}}{\mathrm{d}t}$ whose norm equals the speed and whose direction and sense are those of the motion of the object considered. Formally, the velocity vector is the derivative of the object's position with respect to time. When no confusion is possible, one simply calls the velocity vector the "velocity". It is a vector quantity.

The SI unit of speed is the metre per second (m·s⁻¹). For motor vehicles one also frequently uses the kilometre per hour (km/h); the Anglo-Saxon system uses the mile per hour (mph). At sea one uses the knot, which is one nautical mile per hour, that is, 0.5144 m·s⁻¹. In aviation one sometimes uses the Mach number, Mach 1 being the speed of sound (which varies with temperature and pressure).

## History of the concept of speed

A formal definition of the concept of speed was lacking for a long time, because mathematicians avoided taking the quotient of two non-homogeneous quantities. Dividing a distance by a time thus appeared as wrong to them as the sum of those two quantities would seem to us today. So, to decide whether one body moved faster than another, Galileo (1564-1642) compared the ratio of the distances covered by the bodies with the ratio of the corresponding times.
He used the following equivalence: $\frac{s_1}{s_2} \le \frac{t_1}{t_2} \Leftrightarrow \frac{s_1}{t_1} \le \frac{s_2}{t_2}$

The concept of instantaneous speed was formally defined for the first time by Pierre Varignon (1654-1722) on July 5th, 1698, as the ratio of an infinitely small length $\mathrm{d}x$ to the infinitely small time $\mathrm{d}t$ taken to traverse that length. He used for this the formalism of the differential calculus developed fourteen years earlier by Leibniz (1646-1716).

## The concept of speed

It is necessary to distinguish two types of speed:

• the mean velocity, which answers the elementary definition exactly. It is computed by dividing the distance covered by the travel time; it is meaningful over a given period;
• the instantaneous speed, obtained as a limit of the definition of mean speed. It is defined at one precise moment via the concept of the derivative: $v = \frac{\mathrm{d}r}{\mathrm{d}t}$.

For example, in kinematics calculations, the velocity is a vector obtained by differentiating the Cartesian coordinates of the position with respect to time: $\vec{v} = \frac{\mathrm{d}\vec{r}}{\mathrm{d}t} = \begin{pmatrix} \frac{\mathrm{d}x}{\mathrm{d}t} \\ \frac{\mathrm{d}y}{\mathrm{d}t} \\ \frac{\mathrm{d}z}{\mathrm{d}t} \end{pmatrix}$

## The velocity vector

The instantaneous velocity vector $\vec{v}$ of an object whose position at time $t$ is given by $\vec{x}(t)$ is computed as the derivative $\vec{v} = \frac{\mathrm{d}\vec{x}}{\mathrm{d}t}$

Acceleration is the derivative of velocity, and velocity is the derivative of position with respect to time.
Acceleration is the rate of change of an object's speed over time. The average acceleration $a$ of an object whose speed changes from $v_i$ to $v_f$ over a period $t$ is given by: $a = \frac{v_f - v_i}{t}$

The instantaneous acceleration vector $\vec a$ of an object whose position at time $t$ is given by $\vec x(t)$ is $\vec a = \frac{\mathrm{d} \vec v}{\mathrm{d}t} = \frac{\mathrm{d}^2 \vec x}{\mathrm{d}t^2}$

The final speed $v_f$ of an object starting with speed $v_i$ and then accelerating at a constant rate $a$ during a time $t$ is: $v_f = v_i + a t \,$

The mean velocity of an object undergoing constant acceleration is ${\scriptstyle \frac12} (v_i + v_f)$. To find the displacement $d$ of such an object accelerating over the period $t$, substitute this expression into the first formula to obtain: $d = t \times \frac{v_i + v_f}{2}$

When only the initial speed of the object is known, the expression $d = v_i t + \frac{a t^2}{2}$ can be used. These basic equations for the final speed and the displacement can be combined to form an equation that is independent of time: $v_f^2 = v_i^2 + 2 a d$

The equations above are valid in classical mechanics but not in special relativity. In particular, in classical mechanics all observers agree on the value of $t$, and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true in special relativity.

The kinetic energy of a moving object is proportional to its mass and to the square of its speed: $E_c = \tfrac1 2 mv^2$

Kinetic energy is a scalar quantity.

### Polar coordinates

In polar coordinates, the velocity in the plane can be decomposed into the radial speed, $\frac{\mathrm{d}r}{\mathrm{d}t}$, moving away from or towards the origin, and the orthoradial speed, in the perpendicular direction (which should not be confused with the tangential component), equal to $r \tfrac{\mathrm{d}\theta}{\mathrm{d}t}$ (see angular velocity). The angular momentum in the plane is $\vec L = m \, \vec r \wedge \vec v = m \; r^2 \; \frac{\mathrm{d}\theta}{\mathrm{d}t} \vec k$. One recognizes in $\frac{1}{2} r^2 \frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{\mathrm{d}A(t)}{\mathrm{d}t}$ the areal velocity. If the force is central (see motion under a central force), then the areal velocity is constant (Kepler's second law).

## See also

• Acceleration
• Average acceleration
• Speed of light
• Phase velocity
• Group velocity
• Relative velocity
• Speeds (aerodynamics)
• Kinematic torsor
• Beats per minute (BPM): a measure of the "speed" of a piece of music

## External link

• Unit conversion site (including speeds)
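The constant-acceleration formulas given in this article can be checked against one another numerically; a quick Python sketch with arbitrary sample values:

```python
# Arbitrary initial conditions for a uniformly accelerated motion.
v_i, a, t = 4.0, 2.5, 3.0

v_f = v_i + a * t                  # final speed
d = t * (v_i + v_f) / 2.0          # displacement from the mean velocity
d_alt = v_i * t + a * t ** 2 / 2   # displacement from the initial speed only

# The time-independent relation v_f^2 = v_i^2 + 2 a d should also hold.
lhs = v_f ** 2
rhs = v_i ** 2 + 2 * a * d
```

Both displacement formulas agree, and the time-independent relation is satisfied, as the derivation above requires.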
http://www.mathplanet.com/education/algebra-1/exploring-real-numbers/the-distributive-property
# The Distributive property We begin with an example. Adam has two rectangles with the same height, and he wants to know the total area of these two rectangles. He can calculate this in two different ways. Adam can either calculate the areas separately and then add the two areas: $\\ Rectangle\: 1:\: 3\cdot 5=15 \\$ $\\ Rectangle\: 2:\: 3\cdot 7=21 \\$ $\\ 15+21=36 \\$ Or, as the height is the same in both rectangles, we can multiply the height by the sum of the bases: $\\ 3\left ( 5+7 \right )=3\cdot 12=36 \\$ As we can see, $\\ 3\cdot 5+3\cdot 7=3\left ( 5+7 \right )=36 \\$ This is an example of the distributive property, which can be used to find the product of a number and a sum or a difference. $\\ a\left ( b+c \right )=\left ( b+c \right )a=ab+ac \\$ $\\ a\left ( b-c \right )=ab-ac=ba-ca=\left ( b-c \right )a \\$ The parts of an expression are called terms. A term can be a number, a variable, or a product. If a term contains both a number and a variable, as in 2x, the number part of the term, in this case 2, is called the coefficient. A term that contains only a number and no variable part is called a constant term. If we look at the expression $\\ 5+3x-2+7x \\$ it has 4 terms, two of which are the constant terms 5 and -2. The two other terms have the coefficients 3 and 7. Terms like 3x and 7x that have the same variable part are called like terms. The constant terms are like terms as well. Like terms can be combined, as stated in the distributive property: $\\ 3x+7x=\left ( 3+7 \right )x=10x \\$ Expressions like 3x+7x and 10x are equivalent expressions, since they denote the same number. An expression is written in its simplest form when it contains no like terms and no parentheses. Example: Simplify the expression $\\ 2\left ( 3p+5 \right )-\left ( p+2 \right ) \\$ Notice that the second parenthesis is multiplied by -1.
We can instead write the expression as $\\ 2\left ( 3p+5 \right )+\left ( -1 \right )\left ( p+2 \right ) \\$ Using the distributive property, we can rewrite the expression as $\\ 6p+10-p-2 \\$ and by combining the like terms we get $\\ 5p+8 \\$ which in this case is the simplest form. Video lesson: Simplify the expression $\\x (4 - 2) + 3(2x + 1)-(x - 1) \\$ Next class: Exploring real numbers, Square roots
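The simplification above can be spot-checked by evaluating both expressions at many values of p; a quick Python sketch (not part of the lesson):

```python
def original(p):
    # The expression before simplification.
    return 2 * (3 * p + 5) - (p + 2)

def simplified(p):
    # The simplest form obtained by the distributive property.
    return 5 * p + 8

# Equivalent expressions denote the same number for every value of p.
all_equal = all(original(p) == simplified(p) for p in range(-10, 11))
```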
http://math.stackexchange.com/questions/263366/outliers-of-a-stem-leaf-plot?answertab=oldest
# Outliers Of A Stem-Leaf Plot I have another question concerning one of previous posts: Stem-Leaf Display In the textbook, in a later paragraph, they remark about how "...there are no observations that are unusually far from the bulk of the data (no outliers), as would be the case if one of the 26% values had instead been 86%." Isn't the $4\%$ value particularly distant from the bulk of the data? And how would having an additional $86\%$ value in place of one of the $26\%$ values affect anything? Also, is the representative value always found in the place where the bulk is, where most of the data is concentrated? Another remark the author makes: "The most surprising feature of this data is that, at most colleges in the sample, at least one-quarter of the students are binge drinkers. The problem of heavy drinking on campuses is much more pervasive than many had suspected." Wouldn't it actually be nearly $50\%$ of students at universities be binge drinkers, because that is where the values are concentrated? - ## 1 Answer I’ll start with the last question. The statement is about the number of colleges with at least $25$ binge drinkers, not about the number of students who are binge drinkers. Remember that each entry gives the percentage of binge drinkers at one college. Thus, every entry that is $25$ or more is for a school where at least a quarter of the students are binge drinkers. If I counted correctly, only $16$ of the $150$ entries are below $25$: the one on the $0$ line, the $10$ on the $1$ line, and the first $5$ on the $2$ line. That means that at $134$ of the $150$ colleges in the survey, at least a quarter of the students were binge drinkers. I think that $134$ out of $150$ qualifies as ‘most’! Most representative value really isn’t a technical term with a precise meaning. In some distributions there really isn’t any value that could be described as ‘most representative’ in any useful sense. 
The $4$% value is fairly far from the large number of points in the $30$s and $40$s, but it’s not separated from the rest of the data by an inordinately large gap: it’s just $7$ percentage points from the next datum, and we expect some spreading in the tails of the distribution. An $86$% value, on the other hand, would be separated from the rest of the data by $18$ percentage points, a much bigger gap. The larger size of the gap would show up visually, too: the empty $70$ line between $68$ and $86$ would stand out quite clearly. Changing one of the $26$% values to $76$%, on the other hand, would make the upper end of the distribution look a bit more like the lower end: I would not call that an outlier. -
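The gaps being compared in this answer are just differences between neighbouring observations; tabulating them makes the contrast explicit (a sketch; the neighbour of the 4% value follows from "7 percentage points from the next datum"):

```python
# Gaps, in percentage points, between the values discussed in the answer.
# 68 is the largest observation; 86 and 76 are the two hypothetical
# replacements for one of the 26% values.
low_tail_gap = 11 - 4   # lower tail: ordinary spreading, not an outlier
outlier_gap = 86 - 68   # a clear visual gap -> 86 would be an outlier
modest_gap = 76 - 68    # comparable to the lower tail -> not an outlier
```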
http://math.stackexchange.com/questions/60176/move-two-wheeled-robot-from-one-point-to-another/124191
# Move two wheeled robot from one point to another The inner circle represents the potential path of the left wheel, and the outer the potential path of the right. The circle in between represents the "midpoint" between these two circles. Given $A_L, A_R, A_N$ and $B_N$, I need to determine $d_L$ and $d_R$. Here's what I've got so far: $$\frac{d_L}{2\pi \cdot \overline {OA_L}} = \frac{d_R}{2\pi \cdot \overline {OA_R}}$$ Where $\overline {OA_L}$ and $\overline {OA_R}$ are the radii of the inner and outer circles respectively. - 2 I feel like the title of your question was not chosen well; the question itself is simply a geometry question. – Greg Martin Sep 27 '11 at 0:16 ## 3 Answers Assuming all you have are the positions of $A_L$, $A_R$, $A_N$, and $B_N$, that is, you don't even know where $O$ is in advance, this is a cute little geometry problem. As $A_N$ and $B_N$ are at the same distance from $O$, the latter lies on the perpendicular bisector of the line segment $A_NB_N$. Also, it's clear from the picture that $O$ must lie on the line $A_LA_R$. This fixes the position of $O$. Then you can use Ilmari's answer to find the lengths of the arcs $d_L$ and $d_R$. For an analytical solution, let $\vec u = \vec A_L - \vec A_R$ and $\vec v = \vec B_N - \vec A_N$. As $O$ lies on the line through $A_N$ parallel to $\vec u$, we can write $\vec O = \vec A_N + c \vec u$ for some scalar $c$. Also, as $O$ is equidistant from $A_N$ and $B_N$, we have $\lVert \vec O - \vec A_N \rVert^2 = \lVert \vec O - \vec B_N \rVert^2$. 
Substituting $\vec O = \vec A_N + c\vec u$ and using $\lVert \vec x \rVert^2 = \vec x \cdot \vec x$ for any $\vec x$, we get $$(c \vec u) \cdot (c \vec u) = (c \vec u - \vec v)\cdot(c \vec u - \vec v),$$ so $$c = \frac12 \frac{\vec v \cdot \vec v}{\vec u \cdot \vec v}$$ Finally, as $OA_NB_N$ is an isosceles triangle with sides $\lVert c \vec u \rVert$, $\lVert c \vec u \rVert$, and $\lVert \vec v \rVert$, the angle $\theta$ at $O$ satisfies $$2 \sin \frac\theta2 = \frac{\lVert \vec v \rVert}{\lVert c \vec u \rVert},$$ so $$\theta = 2 \sin^{-1} \frac{\lVert \vec v \rVert}{2\lVert c \vec u \rVert} = 2 \sin^{-1} \frac{\vec u \cdot \vec v}{\lVert \vec u \rVert \lVert \vec v \rVert}$$ and that should be everything you need to compute the quantities in Ilmari's answer. - Let $\theta = \angle AOB$ in radians. Then $d_L = \theta r_L$ and $d_R = \theta r_R$, where $r_L$ and $r_R$ are the radii of the inner and outer circles respectively. - How do I determine what $\theta$ is? Or $r_L$, for that matter. I've only got $A_L, A_R, A_N$ and $B_N$. – muntoo Aug 27 '11 at 21:14 If $\alpha$ is the angle $(\overrightarrow{OA_L},\overrightarrow{OB_L})$ and $R_L$ the radius of the inner circle, then the length of the chord $A_L B_L$ is $$2 R_L \sin{\alpha \over 2}$$ The sine can be obtained from the cosine: $$\sin{\alpha \over 2} = \pm \sqrt{\frac{1-\cos \alpha}{2}}$$ And the cosine from the dot product: $$\cos \alpha = \frac{\overrightarrow{OA_L} \cdot \overrightarrow{OB_L}}{||\overrightarrow{OA_L}|| \, ||\overrightarrow{OB_L}||} = \frac{\overrightarrow{OA_L} \cdot \overrightarrow{OB_L}}{R_L^2}$$ The same can be done for the outer circle. Edit: If you do not know $O$, you can compute it as the intersection of the two lines $(A_L B_L)$ and $(A_R B_R)$. That will also give you $R_L$ as the distance between $O$ and $A_L$. Also, if $\alpha$ is small, you can use approximations as in the post by Ilmari Karonen. 
- – muntoo Aug 27 '11 at 21:25 I have added the definition of $R_L$, thank you. The two bars are the norm of a vector. – alex_reader Aug 27 '11 at 21:30 1 I read the question as asking for the lengths of the arcs from $A_L$ to $B_L$ and $A_R$ to $B_R$, not of the chords. For that, my answer is exact. As you note, though, for small angles the chord and arc lengths are approximately the same. – Ilmari Karonen Dec 26 '11 at 6:29
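Combining the two accepted approaches above (locating $O$ and $\theta$ from the four points, then taking $d = \theta r$), a Python sketch is possible; the function name and the sample points are my own, and a positive turn of less than a half-circle is assumed:

```python
import math

def arc_lengths(A_L, A_R, A_N, B_N):
    """Return (d_L, d_R): arc lengths of the inner and outer wheel paths."""
    ux, uy = A_L[0] - A_R[0], A_L[1] - A_R[1]   # u = A_L - A_R
    vx, vy = B_N[0] - A_N[0], B_N[1] - A_N[1]   # v = B_N - A_N
    u_dot_v = ux * vx + uy * vy                  # assumed nonzero (a real turn)
    # O = A_N + c*u with c = (v.v) / (2 u.v), from the dot-product derivation.
    c = 0.5 * (vx * vx + vy * vy) / u_dot_v
    Ox, Oy = A_N[0] + c * ux, A_N[1] + c * uy
    r_L = math.hypot(A_L[0] - Ox, A_L[1] - Oy)   # inner radius
    r_R = math.hypot(A_R[0] - Ox, A_R[1] - Oy)   # outer radius
    # Turn angle at O; then d = theta * r for each wheel.
    theta = 2.0 * math.asin(u_dot_v / (math.hypot(ux, uy) * math.hypot(vx, vy)))
    return theta * r_L, theta * r_R

# Quarter turn about the origin: r_L = 1, r_R = 2, so d_L = pi/2 and d_R = pi.
d_L, d_R = arc_lengths((1, 0), (2, 0), (1.5, 0), (0, 1.5))
```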
http://en.wikipedia.org/wiki/Exponential_family
# Exponential family Not to be confused with the exponential distribution. "Natural parameter" links here. For the usage of this term in differential geometry, see differential geometry of curves. In probability and statistics, an exponential family is an important class of probability distributions sharing a certain form, specified below. This special form is chosen for mathematical convenience, on account of some useful algebraic properties, as well as for generality, as exponential families are in a sense very natural distributions to consider. The concept of exponential families is credited to[1] E. J. G. Pitman,[2] G. Darmois,[3] and B. O. Koopman[4] in 1935–36. The term exponential class is sometimes used in place of "exponential family".[5] The exponential families include many of the most common distributions, including the normal, exponential, gamma, chi-squared, beta, Dirichlet, Bernoulli, categorical, Poisson, Wishart, Inverse Wishart and many others. A number of common distributions are exponential families only when certain parameters are considered fixed and known, e.g. binomial (with fixed number of trials), multinomial (with fixed number of trials), and negative binomial (with fixed number of failures). Examples of common distributions that are not exponential families are Student's t, most mixture distributions, and even the family of uniform distributions with unknown bounds. See the section below on examples for more discussion. Consideration of exponential-family distributions provides a general framework for selecting a possible alternative parameterisation of the distribution, in terms of natural parameters, and for defining useful sample statistics, called the natural sufficient statistics of the family. See below for more information. ## Definition The following is a sequence of increasingly general definitions of an exponential family. 
A casual reader may wish to restrict attention to the first and simplest definition, which corresponds to a single-parameter family of discrete or continuous probability distributions. ### Scalar parameter A single-parameter exponential family is a set of probability distributions whose probability density function (or probability mass function, for the case of a discrete distribution) can be expressed in the form $f_X(x|\theta) = h(x)\ \exp[\ \eta(\theta) \cdot T(x)\ -\ A(\theta)\ ]$ where T(x), h(x), η(θ), and A(θ) are known functions. An alternative, equivalent form often given is $f_X(x|\theta) = h(x)\ g(\theta) \exp[\ \eta(\theta) \cdot T(x)\ ]\,$ or equivalently $f_X(x|\theta) = \exp[\ \eta(\theta) \cdot T(x)\ -\ A(\theta) + B(x)\ ]$ The value θ is called the parameter of the family. Note that x is often a vector of measurements, in which case T(x) is a function from the space of possible values of x to the real numbers. If η(θ) = θ, then the exponential family is said to be in canonical form. By defining a transformed parameter η = η(θ), it is always possible to convert an exponential family to canonical form. The canonical form is non-unique, since η(θ) can be multiplied by any nonzero constant, provided that T(x) is multiplied by that constant's reciprocal. Even when x is a scalar, and there is only a single parameter, the functions η(θ) and T(x) can still be vectors, as described below. Note also that the function A(θ) or equivalently g(θ) is automatically determined once the other functions have been chosen, and assumes a form that causes the distribution to be normalized (sum or integrate to one over the entire domain). Furthermore, both of these functions can always be written as functions of η, even when η(θ) is not a one-to-one function, i.e. two or more different values of θ map to the same value of η(θ), and hence η(θ) cannot be inverted. In such a case, all values of θ mapping to the same η(θ) will also have the same value for A(θ) and g(θ). 
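As a concrete instance of the scalar-parameter form, the Poisson distribution with rate $\lambda$ fits it with $h(x) = 1/x!$, $T(x) = x$, $\eta(\lambda) = \ln\lambda$ and $A(\lambda) = \lambda$. A quick numerical check of that identification (a sketch, not worked in the article at this point):

```python
import math

def poisson_pmf(x, lam):
    # Direct definition of the Poisson mass function.
    return lam ** x * math.exp(-lam) / math.factorial(x)

def poisson_exp_family(x, lam):
    # h(x) * exp(eta * T(x) - A), with h(x) = 1/x!, T(x) = x,
    # eta = ln(lam), A = lam.
    h = 1.0 / math.factorial(x)
    eta = math.log(lam)
    return h * math.exp(eta * x - lam)

# The two expressions agree on the whole (truncated) support.
match = all(
    abs(poisson_pmf(x, 3.5) - poisson_exp_family(x, 3.5)) < 1e-12
    for x in range(20)
)
```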
Further down the page is the example of a normal distribution with unknown mean and known variance. ### Factorization of the variables involved What is important to note, and what characterizes all exponential family variants, is that the parameter(s) and the observation variable(s) must factorize (can be separated into products each of which involves only one type of variable), either directly or within either part (the base or exponent) of an exponentiation operation. Generally, this means that all of the factors constituting the density or mass function must be of one of the following forms: $f(x)$, $g(\theta)$, $c^{f(x)}$, $c^{g(\theta)}$, ${[f(x)]}^c$, ${[g(\theta)]}^c$, ${[f(x)]}^{g(\theta)}$, ${[g(\theta)]}^{f(x)}$, ${[f(x)]}^{h(x)g(\theta)}$, or ${[g(\theta)]}^{h(x)j(\theta)}$, where $f(x)$ and $h(x)$ are arbitrary functions of $x$; $g(\theta)$ and $j(\theta)$ are arbitrary functions of $\theta$; and $c$ is an arbitrary "constant" expression (i.e. an expression not involving $x$ or $\theta$). There are further restrictions on how many such factors can occur. For example, an expression of the sort ${[f(x) g(\theta)]}^{h(x)j(\theta)}$ is the same as ${[f(x)]}^{h(x)j(\theta)} [g(\theta)]^{h(x)j(\theta)}$, i.e. a product of two "allowed" factors. However, when rewritten into the factorized form, ${[f(x) g(\theta)]}^{h(x)j(\theta)} = {[f(x)]}^{h(x)j(\theta)} [g(\theta)]^{h(x)j(\theta)} = e^{[h(x) \ln f(x)] j(\theta) + h(x) [j(\theta) \ln g(\theta)]}\, ,$ it can be seen that it cannot be expressed in the required form. (However, a form of this sort is a member of a curved exponential family, which allows multiple factorized terms in the exponent.[citation needed]) To see why an expression of the form ${[f(x)]}^{g(\theta)}$ qualifies, note that ${[f(x)]}^{g(\theta)} = e^{g(\theta) \ln f(x)}\,$ and hence factorizes inside of the exponent. 
Similarly, ${[f(x)]}^{h(x)g(\theta)} = e^{h(x)g(\theta)\ln f(x)} = e^{[h(x) \ln f(x)] g(\theta)}\,$ and again factorizes inside of the exponent. Note also that a factor consisting of a sum where both types of variables are involved (e.g. a factor of the form $1+f(x)g(\theta)$) cannot be factorized in this fashion (except in some cases where occurring directly in an exponent); this is why, for example, the Cauchy distribution and Student's t distribution are not exponential families. ### Vector parameter The definition in terms of one real-number parameter can be extended to one real-vector parameter ${\boldsymbol \theta} = (\theta_1, \theta_2, \ldots, \theta_d)^T$. A family of distributions is said to belong to a vector exponential family if the probability density function (or probability mass function, for discrete distributions) can be written as $f_X(x|\boldsymbol \theta) = h(x) \exp\left(\sum_{i=1}^s \eta_i({\boldsymbol \theta}) T_i(x) - A({\boldsymbol \theta}) \right) \,\!$ Or in a more compact form, $f_X(x|\boldsymbol \theta) = h(x) \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(x) - A({\boldsymbol \theta})\ \Big) \,\!$ This form writes the sum as a dot product of vector-valued functions $\boldsymbol\eta({\boldsymbol \theta})$ and $\mathbf{T}(x)$. An alternative, equivalent form often seen is $f_X(x|\boldsymbol \theta) = h(x) g(\boldsymbol \theta) \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(x)\ \Big) \,\!$ As in the scalar valued case, the exponential family is said to be in canonical form if $\eta_i({\boldsymbol \theta}) = \theta_i$, for all $i$. A vector exponential family is said to be curved if the dimension of ${\boldsymbol \theta} = (\theta_1, \theta_2, \ldots, \theta_d)^T$ is less than the dimension of the vector ${\boldsymbol \eta}(\boldsymbol \theta) = (\eta_1(\boldsymbol \theta), \eta_2(\boldsymbol \theta), \ldots, \eta_s(\boldsymbol \theta))^T$. 
That is, if the dimension of the parameter vector is less than the number of functions of the parameter vector in the above representation of the probability density function. Note that most common distributions in the exponential family are not curved, and many algorithms designed to work with any member of the exponential family implicitly or explicitly assume that the distribution is not curved. Note that, as in the above case of a scalar-valued parameter, the function $A(\boldsymbol \theta)$ or equivalently $g(\boldsymbol \theta)$ is automatically determined once the other functions have been chosen, so that the entire distribution is normalized. In addition, as above, both of these functions can always be written as functions of $\boldsymbol\eta$, regardless of the form of the transformation that generates $\boldsymbol\eta$ from $\boldsymbol\theta$. Hence an exponential family in its "natural form" (parametrized by its natural parameter) looks like $f_X(x|\boldsymbol \eta) = h(x) \exp\Big(\ \boldsymbol\eta \cdot \mathbf{T}(x) - A({\boldsymbol \eta})\ \Big) \,\!$ or equivalently $f_X(x|\boldsymbol \eta) = h(x) g(\boldsymbol \eta) \exp\Big(\ \boldsymbol\eta \cdot \mathbf{T}(x)\ \Big) \,\!$ Note that the above forms may sometimes be seen with $\boldsymbol\eta^T \mathbf{T}(x)\,$ in place of $\boldsymbol\eta \cdot \mathbf{T}(x)\,$. These are exactly equivalent formulations, merely using different notation for the dot product. Further down the page is the example of a normal distribution with unknown mean and variance. ### Vector parameter, vector variable The vector-parameter form over a single scalar-valued random variable can be trivially expanded to cover a joint distribution over a vector of random variables. The resulting distribution is simply the same as the above distribution for a scalar-valued random variable with each occurrence of the scalar $x$ replaced by the vector $\mathbf{x} = (x_1, x_2, \ldots, x_k)$. 
Note that the dimension $k$ of the random variable need not match the dimension $d$ of the parameter vector, nor (in the case of a curved exponential function) the dimension $s$ of the natural parameter $\boldsymbol\eta$ and sufficient statistic $T(\mathbf{x})$. The distribution in this case is written as $f_X(\mathbf{x}|\boldsymbol \theta) = h(\mathbf{x})\ \exp\left(\sum_{i=1}^s \eta_i({\boldsymbol \theta}) T_i(\mathbf{x}) - A({\boldsymbol \theta}) \right) \,\!$ Or more compactly as $f_X(\mathbf{x}|\boldsymbol \theta) = h(\mathbf{x})\ \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(\mathbf{x}) - A({\boldsymbol \theta})\ \Big) \,\!$ Or alternatively as $f_X(\mathbf{x}|\boldsymbol \theta) = h(\mathbf{x})\ g(\boldsymbol \theta)\ \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(\mathbf{x})\ \Big) \,\!$ ### Measure-theoretic formulation We use cumulative distribution functions (cdf) in order to encompass both discrete and continuous distributions. Suppose H is a non-decreasing function of a real variable. Then Lebesgue–Stieltjes integrals with respect to dH(x) are integrals with respect to the "reference measure" of the exponential family generated by H. Any member of that exponential family has cumulative distribution function $dF(\mathbf{x}|\boldsymbol\eta) = e^{\boldsymbol\eta^{\rm T} \mathbf{T}(\mathbf{x}) - A(\boldsymbol\eta)}\, dH(\mathbf{x}).$ If F is a continuous distribution with a density, one can write dF(x) = f(x) dx. H(x) is a Lebesgue–Stieltjes integrator for the reference measure. When the reference measure is finite, it can be normalized and H is actually the cumulative distribution function of a probability distribution. If F is absolutely continuous with a density, then so is H, which can then be written dH(x) = h(x) dx. If F is discrete, then H is a step function (with steps on the support of F). 
## The meaning of "exponential family" It is critical, when considering the above definitions, to use proper terminology and to keep in mind exactly what is being spoken of when the term "exponential family" is used. Properly speaking, there is no such thing as "the" exponential family, but rather an exponential family, and properly speaking, it is not a "distribution" but a family of distributions that either is or is not an exponential family. The problem lies in the fact that we often say, e.g., "the normal distribution" when properly we mean something like "the family of normal distributions with unknown mean and variance". A family of distributions is defined by a set of parameters that can be varied, and what makes a family be an exponential family is a particular relationship between the domain of a family of distributions (the variable over which each distribution in the family is defined) and the parameters. As an example, what is often said to be "the binomial distribution" is in fact a family of related distributions characterized by a parameter n of Bernoulli trials, each of which is drawn using a parameter p (a probability of success). A particular setting of n and p characterizes a particular probability distribution over a discrete random variable, with possible outcomes (the support of the distribution) ranging between 0 and n. Consider the following cases: 1. If both n and p are given particular settings (e.g. n=20, p=0.1), a single binomial distribution arises. 2. If n is given a particular setting (e.g. n=20), but p is allowed to vary, a family of binomial distributions arises, characterized by the parameter p. 3. If both n and p are allowed to vary, a different (and larger) family of binomial distributions arises, characterized by the parameters n and p. All of the above cases can be referred to using the term "binomial distribution", but not all of them are exponential families. 
In fact, only the second one is an exponential family: • The first case (with fixed n and p) is not a family of distributions at all, but a single distribution, and hence cannot logically be an exponential family. • The third case happens not to be an exponential family. In general, exponential families cannot have a support that varies according to a parameter; rather, the support must remain the same across all distributions in the family. Even more confusing is the case of the uniform distribution. It is common to say something like "draw a number from a uniform distribution" to mean specifically to draw a number from a continuous uniform distribution that ranges between 0 and 1. Similarly, it is sometimes said that "the uniform distribution is a special case of the beta distribution", again referring to a continuous uniform distribution ranging between 0 and 1. Since the beta distribution is an exponential family, it is tempting to conclude that the uniform distribution is also an exponential family. In fact, however, both examples above refer to a specific uniform distribution, not a family. The family of uniform distributions is defined by either an unknown upper bound, unknown lower bound, or unknown upper and lower bounds — and none of these families are exponential families. (This can be seen by considering what was said above — the support of an exponential family cannot vary depending on a particular parameter.) Hence, it is often said that the "uniform distribution" is not an exponential family, which is correct but imprecise. ## Interpretation In the definitions above, the functions $T(x),$ $\eta(\theta),$ and $A(\eta)$ were apparently arbitrarily defined. However, these functions play a significant role in the resulting probability distribution. • $T(x)$ is a sufficient statistic of the distribution. For exponential families, the sufficient statistic is a function of the data that fully summarizes the data $x$ within the density function. 
Other data sets $y$ may be quite different from $x$ (i.e. $d(x,y)>0$), but if $T(x)=T(y)$ then the density value is the same. The dimension of $T(x)$ equals the number of parameters of $\theta$ and encompasses all of the information regarding the data related to the parameter $\theta$. The sufficient statistic of a set of independent identically distributed data observations is simply the sum of individual sufficient statistics, and encapsulates all the information needed to describe the posterior distribution of the parameters, given the data (and hence to derive any desired estimate of the parameters). This important property is further discussed below. • $\eta$ is called the natural parameter. The set of values of $\eta$ for which the function $f_X(x;\theta)$ is finite is called the natural parameter space. It can be shown that the natural parameter space is always convex. • $A(\eta)$ is called the log-partition function because it is the logarithm of a normalization factor, without which $f_X(x;\theta)$ would not be a probability distribution ("partition function" is often used in statistics as a synonym of "normalization factor"): $A(\eta) = \ln\left\{ \int_x h(x)\ \exp[\ \eta(\theta) \cdot T(x)\ ] \operatorname{d}\!x\right\}$ The function A is important in its own right, because the mean, variance and other moments of the sufficient statistic $T(x)$ can be derived simply by differentiating $A(\eta)$. For example, because $\ln x$ is one of the components of the sufficient statistic of the gamma distribution, $\mathbb{E}[\ln x]$ can be easily determined for this distribution using $A(\eta)$. (Technically, this is true because $K(u|\eta) = A(\eta+u) - A(\eta)$ is the cumulant generating function of the sufficient statistic.) ## Properties Exponential families have a large number of properties that make them extremely useful for statistical analysis. In many cases, it can be shown that only exponential families have these properties. 
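The moment property of the log-partition function is easy to verify numerically for the Poisson family, where $T(x) = x$, $\eta = \ln\lambda$ and $A(\eta) = e^\eta$: the first two derivatives of $A$ should both equal $\lambda$, the mean and variance of a Poisson variable (a sketch using finite differences):

```python
import math

def A(eta):
    # Log-partition function of the Poisson family, natural parametrization.
    return math.exp(eta)

def deriv(f, eta, h=1e-5):
    # Central finite-difference first derivative.
    return (f(eta + h) - f(eta - h)) / (2 * h)

lam = 2.0
eta = math.log(lam)
mean = deriv(A, eta)                          # E[T(x)] = A'(eta) = lam
variance = deriv(lambda e: deriv(A, e), eta)  # Var[T(x)] = A''(eta) = lam
```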
Examples: • Exponential families have sufficient statistics that can summarize arbitrary amounts of independent identically distributed data using a fixed number of values. • Exponential families have conjugate priors, an important property in Bayesian statistics. • The posterior predictive distribution of an exponential-family random variable with a conjugate prior can always be written in closed form (provided that the normalizing factor of the exponential-family distribution can itself be written in closed form). Note that these distributions are often not themselves exponential families. Common examples of non-exponential families arising from exponential ones are the Student's t-distribution, beta-binomial distribution and Dirichlet-multinomial distribution. • In the mean-field approximation in variational Bayes (used for approximating the posterior distribution in large Bayesian networks), the best approximating posterior distribution of an exponential-family node with a conjugate prior is in the same family as the node.[citation needed] ## Examples It is critical, when considering the examples in this section, to remember the discussion above about what it means to say that a "distribution" is an exponential family, and in particular to keep in mind that the set of parameters that are allowed to vary is critical in determining whether a "distribution" is or is not an exponential family. The normal, exponential, log-normal, gamma, chi-squared, beta, Dirichlet, Bernoulli, categorical, Poisson, geometric, inverse Gaussian, von Mises and von Mises-Fisher distributions are all exponential families. Some distributions are exponential families only if some of their parameters are held fixed. The family of Pareto distributions with a fixed minimum bound xm form an exponential family. The families of binomial and multinomial distributions with fixed number of trials n but unknown probability parameter(s) are exponential families. 
The family of negative binomial distributions with fixed number of failures (a.k.a. stopping-time parameter) r is an exponential family. However, when any of the above-mentioned fixed parameters are allowed to vary, the resulting family is not an exponential family. As mentioned above, as a general rule, the support of an exponential family must remain the same across all parameter settings in the family. This is why the above cases (e.g. binomial with varying number of trials, Pareto with varying minimum bound) are not exponential families — in all of the cases, the parameter in question affects the support (particularly, changing the minimum or maximum possible value). For similar reasons, neither the discrete uniform distribution nor continuous uniform distribution are exponential families regardless of whether one of the bounds is held fixed. (If both bounds are held fixed, the result is a single distribution, not a family at all.) The Weibull distribution with fixed shape parameter k is an exponential family. Unlike in the previous examples, the shape parameter does not affect the support; the fact that allowing it to vary makes the Weibull non-exponential is due rather to the particular form of the Weibull's probability density function (k appears in the exponent of an exponent). In general, distributions that result from a finite or infinite mixture of other distributions, e.g. mixture model densities and compound probability distributions, are not exponential families. Examples are typical Gaussian mixture models as well as many heavy-tailed distributions that result from compounding (i.e. infinitely mixing) a distribution with a prior distribution over one of its parameters, e.g. the Student's t-distribution (compounding a normal distribution over a gamma-distributed precision prior), and the beta-binomial and Dirichlet-multinomial distributions. 
Other examples of distributions that are not exponential families are the F-distribution, Cauchy distribution, hypergeometric distribution and logistic distribution. Following are some detailed examples of the representation of some useful distributions as exponential families. ### Normal distribution: Unknown mean, known variance As a first example, consider a random variable distributed normally with unknown mean $\mu$ and known variance $\sigma^2$. The probability density function is then $f_\sigma(x;\mu) = \frac{1}{\sqrt{2 \pi}|\sigma|} e^{-(x-\mu)^2/(2\sigma^2)}.$ This is a single-parameter exponential family, as can be seen by setting $h_\sigma(x) = \frac{1}{\sqrt{2\pi}\,|\sigma|} e^{-x^2/(2\sigma^2)}$ $T_\sigma(x) = x/\sigma$ $A_\sigma(\mu) = \mu^2/2\sigma^2$ $\eta_\sigma(\mu) = \mu/\sigma.$ If σ = 1 this is in canonical form, as then η(μ) = μ. ### Normal distribution: Unknown mean and unknown variance Next, consider the case of a normal distribution with unknown mean and unknown variance. The probability density function is then $f(x;\mu,\sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(x-\mu)^2/(2 \sigma^2)}.$ This is an exponential family which can be written in canonical form by defining $\boldsymbol {\eta} = \left({\mu \over \sigma^2},{-1 \over 2\sigma^2} \right)^{\rm T}$ $h(x) = {1 \over \sqrt{2 \pi}}$ $T(x) = \left( x, x^2 \right)^{\rm T}$ $A({\boldsymbol \eta}) = { \mu^2 \over 2 \sigma^2} + \ln |\sigma| = -\eta_1^2/4\eta_2 + 1/2\ln|1/2\eta_2|$ ### Binomial distribution As an example of a discrete exponential family, consider the binomial distribution with known number of trials n.
The probability mass function for this distribution is $f(x)={n \choose x}p^x (1-p)^{n-x}, \quad x \in \{0, 1, 2, \ldots, n\}.$ This can equivalently be written as $f(x)={n \choose x}\exp\left(x \log\left({p \over 1-p}\right) + n \log\left(1-p\right)\right),$ which shows that the binomial distribution is an exponential family, whose natural parameter is $\eta = \log{p \over 1-p}.$ This function of p is known as logit. ## Table of distributions The following table shows how to rewrite a number of common distributions as exponential-family distributions with natural parameters. For a scalar variable and scalar parameter, the form is as follows: $f_X(\mathbf{x}|\boldsymbol \theta) = h(\mathbf{x})\ \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(\mathbf{x}) - A({\boldsymbol \eta})\ \Big) \,\!$ For a scalar variable and vector parameter: $f_X(x|\boldsymbol \theta) = h(x) \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(x) - A({\boldsymbol \theta})\ \Big) \,\!$ $f_X(x|\boldsymbol \theta) = h(x) g(\boldsymbol \theta) \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(x)\ \Big) \,\!$ For a vector variable and vector parameter: $f_X(\mathbf{x}|\boldsymbol \theta) = h(\mathbf{x})\ \exp\Big(\ \boldsymbol\eta({\boldsymbol \theta}) \cdot \mathbf{T}(\mathbf{x}) - A({\boldsymbol \eta})\ \Big) \,\!$ The above formulas choose the functional form of the exponential-family with a log-partition function $A({\boldsymbol \eta})$. The reason for this is so that the moments of the sufficient statistics can be calculated easily, simply by differentiating this function. Alternative forms involve either parameterizing this function in terms of the normal parameter $\boldsymbol\theta$ instead of the natural parameter, and/or using a factor $g(\boldsymbol\eta)$ outside of the exponential. 
The relation between the latter and the former is: $A(\boldsymbol\eta) = -\ln g(\boldsymbol\eta)$ $g(\boldsymbol\eta) = e^{-A(\boldsymbol\eta)}$ To convert between the representations involving the two types of parameter, use the formulas below for writing one type of parameter in terms of the other. Distribution Parameter(s) Natural parameter(s) Inverse parameter mapping Base measure $h(x)$ Sufficient statistic $T(x)$ Log-partition $A(\boldsymbol\eta)$ Log-partition $A(\boldsymbol\theta)$ Bernoulli distribution p $\ln\frac{p}{1-p}$ • This is the logit function. $\frac{1}{1+e^{-\eta}} = \frac{e^\eta}{1+e^{\eta}}$ • This is the logistic function. $1$ $x$ $\ln (1+e^{\eta})$ $-\ln (1-p)$ binomial distribution with known number of trials n p $\ln\frac{p}{1-p}$ $\frac{1}{1+e^{-\eta}} = \frac{e^\eta}{1+e^{\eta}}$ ${n \choose x}$ $x$ $n \ln (1+e^{\eta})$ $-n \ln (1-p)$ Poisson distribution λ $\ln\lambda$ $e^\eta$ $\frac{1}{x!}$ $x$ $e^{\eta}$ $\lambda$ negative binomial distribution with known number of failures r p $\ln p$ $e^\eta$ ${x+r-1 \choose x}$ $x$ $-r \ln (1-e^{\eta})$ $-r \ln (1-p)$ exponential distribution λ $-\lambda$ $-\eta$ $1$ $x$ $-\ln(-\eta)$ $-\ln\lambda$ Pareto distribution with known minimum value xm α $-\alpha-1$ $-1-\eta$ $1$ $\ln x$ $-\ln (-1-\eta) + (1+\eta) \ln x_{\mathrm m}$ $-\ln \alpha - \alpha \ln x_{\mathrm m}$ Weibull distribution with known shape k λ $-\dfrac{1}{\lambda^k}$ $(-\eta)^{-1/k}$ $x^{k-1}$ $x^k$ $-\ln(-\eta) -\ln k$ $k\ln\lambda -\ln k$ Laplace distribution with known mean μ b $-\frac{1}{b}$ $-\frac{1}{\eta}$ $1$ $|x-\mu|$ $\ln\left(-\frac{2}{\eta}\right)$ $\ln 2b$ chi-squared distribution ν $\frac{\nu}{2}-1$ $2(\eta+1)$ $e^{-x/2}$ $\ln x$ $\ln \Gamma(\eta+1)+(\eta+1)\ln 2$ $\ln \Gamma\left(\frac{\nu}{2}\right)+\frac{\nu}{2}\ln 2$ normal distribution known variance μ $\frac{\mu}{\sigma}$ $\sigma\eta$ $\frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}}$ $\frac{x}{\sigma}$ $\frac{\eta^2}{2}$ $\frac{\mu^2}{2\sigma^2}$ normal distribution μ,σ2
$\begin{bmatrix} \dfrac{\mu}{\sigma^2} \\[10pt] -\dfrac{1}{2\sigma^2} \end{bmatrix}$ $\begin{bmatrix} -\dfrac{\eta_1}{2\eta_2} \\[15pt] -\dfrac{1}{2\eta_2} \end{bmatrix}$ $\frac{1}{\sqrt{2\pi}}$ $\begin{bmatrix} x \\ x^2 \end{bmatrix}$ $-\frac{\eta_1^2}{4\eta_2} - \frac12\ln(-2\eta_2)$ $\frac{\mu^2}{2\sigma^2} + \ln \sigma$ lognormal distribution μ,σ2 $\begin{bmatrix} \dfrac{\mu}{\sigma^2} \\[10pt] -\dfrac{1}{2\sigma^2} \end{bmatrix}$ $\begin{bmatrix} -\dfrac{\eta_1}{2\eta_2} \\[15pt] -\dfrac{1}{2\eta_2} \end{bmatrix}$ $\frac{1}{\sqrt{2\pi}x}$ $\begin{bmatrix} \ln x \\ (\ln x)^2 \end{bmatrix}$ $-\frac{\eta_1^2}{4\eta_2} - \frac12\ln(-2\eta_2)$ $\frac{\mu^2}{2\sigma^2} + \ln \sigma$ inverse Gaussian distribution μ,λ $\begin{bmatrix} -\dfrac{\lambda}{2\mu^2} \\[15pt] -\dfrac{\lambda}{2} \end{bmatrix}$ $\begin{bmatrix} \sqrt{\dfrac{\eta_2}{\eta_1}} \\[15pt] -2\eta_2 \end{bmatrix}$ $\frac{1}{\sqrt{2\pi}x^{3/2}}$ $\begin{bmatrix} x \\[5pt] \dfrac{1}{x} \end{bmatrix}$ $-2\sqrt{\eta_1\eta_2} -\frac12\ln(-2\eta_2)$ $-\frac{\lambda}{\mu} -\frac12\ln\lambda$ gamma distribution α,β $\begin{bmatrix} \alpha-1 \\ -\beta \end{bmatrix}$ $\begin{bmatrix} \eta_1+1 \\ -\eta_2 \end{bmatrix}$ $1$ $\begin{bmatrix} \ln x \\ x \end{bmatrix}$ $\ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2)$ $\ln \Gamma(\alpha)-\alpha\ln\beta$ k,θ $\begin{bmatrix} k-1 \\[5pt] -\dfrac{1}{\theta} \end{bmatrix}$ $\begin{bmatrix} \eta_1+1 \\[5pt] -\dfrac{1}{\eta_2} \end{bmatrix}$ $\ln \Gamma(k)+k\ln\theta$ inverse gamma distribution α,β $\begin{bmatrix} -\alpha-1 \\ -\beta \end{bmatrix}$ $\begin{bmatrix} -\eta_1-1 \\ -\eta_2 \end{bmatrix}$ $1$ $\begin{bmatrix} \ln x \\ 1/x \end{bmatrix}$ $\ln \Gamma(-\eta_1-1)-(-\eta_1-1)\ln(-\eta_2)$ $\ln \Gamma(\alpha)-\alpha\ln\beta$ scaled inverse chi-squared distribution ν,σ2 $\begin{bmatrix} -\dfrac{\nu}{2}-1 \\[10pt] -\dfrac{\nu\sigma^2}{2} \end{bmatrix}$ $\begin{bmatrix} -2(\eta_1+1) \\[10pt] \dfrac{\eta_2}{\eta_1+1} \end{bmatrix}$ $1$ $\begin{bmatrix} \ln x \\ 1/x 
\end{bmatrix}$ $\ln \Gamma(-\eta_1-1)-(-\eta_1-1)\ln(-\eta_2)$ $\ln \Gamma\left(\frac{\nu}{2}\right)-\frac{\nu}{2}\ln\frac{\nu\sigma^2}{2}$ beta distribution α,β $\begin{bmatrix} \alpha \\ \beta \end{bmatrix}$ $\begin{bmatrix} \eta_1 \\ \eta_2 \end{bmatrix}$ $\frac{1}{x(1-x)}$ $\begin{bmatrix} \ln x \\ \ln (1-x) \end{bmatrix}$ $\ln \Gamma(\eta_1) + \ln \Gamma(\eta_2) - \ln \Gamma(\eta_1+\eta_2)$ $\ln \Gamma(\alpha) + \ln \Gamma(\beta) - \ln \Gamma(\alpha+\beta)$ multivariate normal distribution μ,Σ $\begin{bmatrix} \boldsymbol\Sigma^{-1}\boldsymbol\mu \\[5pt] -\frac12\boldsymbol\Sigma^{-1} \end{bmatrix}$ $\begin{bmatrix} -\frac12\boldsymbol\eta_2^{-1}\boldsymbol\eta_1 \\[5pt] -\frac12\boldsymbol\eta_2^{-1} \end{bmatrix}$ $(2\pi)^{-k/2}$ $\begin{bmatrix} \mathbf{x} \\[5pt] \mathbf{x}\mathbf{x}^\mathrm{T} \end{bmatrix}$ $-\frac{1}{4}\boldsymbol\eta_1^{\rm T}\boldsymbol\eta_2^{-1}\boldsymbol\eta_1 - \frac12\ln\left|-2\boldsymbol\eta_2\right|$ $\frac12\boldsymbol\mu^{\rm T}\boldsymbol\Sigma^{-1}\boldsymbol\mu + \frac12 \ln |\boldsymbol\Sigma|$ categorical distribution (variant 1) p1,...,pk where $\textstyle\sum_{i=1}^k p_i=1$ $\begin{bmatrix} \ln p_1 \\ \vdots \\ \ln p_k \end{bmatrix}$ $\begin{bmatrix} e^{\eta_1} \\ \vdots \\ e^{\eta_k} \end{bmatrix}$ where $\textstyle\sum_{i=1}^k e^{\eta_i}=1$ $1$ $\begin{bmatrix} [x=1] \\ \vdots \\ {[x=k]} \end{bmatrix}$ • $[x=i]$ is the Iverson bracket (1 if $x=i$, 0 otherwise). $0$ $0$ categorical distribution (variant 2) p1,...,pk where $\textstyle\sum_{i=1}^k p_i=1$ $\begin{bmatrix} \ln p_1+C \\ \vdots \\ \ln p_k+C \end{bmatrix}$ $\begin{bmatrix} \dfrac{1}{C}e^{\eta_1} \\ \vdots \\ \dfrac{1}{C}e^{\eta_k} \end{bmatrix} =$ $\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix}$ where $\textstyle\sum_{i=1}^k e^{\eta_i}=C$ $1$ $\begin{bmatrix} [x=1] \\ \vdots \\ {[x=k]} \end{bmatrix}$ • $[x=i]$ is the Iverson bracket (1 if $x=i$, 0 otherwise). 
$0$ $0$ categorical distribution (variant 3) p1,...,pk where $p_k = 1 - \textstyle\sum_{i=1}^{k-1} p_i$ $\begin{bmatrix} \ln \dfrac{p_1}{p_k} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{p_k} \\[15pt] 0 \end{bmatrix} =$ $\begin{bmatrix} \ln \dfrac{p_1}{1-\sum_{i=1}^{k-1}p_i} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{1-\sum_{i=1}^{k-1}p_i} \\[15pt] 0 \end{bmatrix}$ • This is the inverse softmax function, a generalization of the logit function. $\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix} =$ $\begin{bmatrix} \dfrac{e^{\eta_1}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_{k-1}}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[15pt] \dfrac{1}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \end{bmatrix}$ • This is the softmax function, a generalization of the logistic function. $1$ $\begin{bmatrix} [x=1] \\ \vdots \\ {[x=k]} \end{bmatrix}$ • $[x=i]$ is the Iverson bracket (1 if $x=i$, 0 otherwise). $\ln \left(\sum_{i=1}^{k} e^{\eta_i}\right) = \ln \left(1+\sum_{i=1}^{k-1} e^{\eta_i}\right)$ $-\ln p_k = -\ln \left(1 - \sum_{i=1}^{k-1} p_i\right)$ multinomial distribution (variant 1) with known number of trials n p1,...,pk where $\textstyle\sum_{i=1}^k p_i=1$ $\begin{bmatrix} \ln p_1 \\ \vdots \\ \ln p_k \end{bmatrix}$ $\begin{bmatrix} e^{\eta_1} \\ \vdots \\ e^{\eta_k} \end{bmatrix}$ where $\textstyle\sum_{i=1}^k e^{\eta_i}=1$ $\frac{n!}{\prod_{i=1}^{k} x_i!}$ $\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}$ $0$ $0$ multinomial distribution (variant 2) with known number of trials n p1,...,pk where $\textstyle\sum_{i=1}^k p_i=1$ $\begin{bmatrix} \ln p_1+C \\ \vdots \\ \ln p_k+C \end{bmatrix}$ $\begin{bmatrix} \dfrac{1}{C}e^{\eta_1} \\ \vdots \\ \dfrac{1}{C}e^{\eta_k} \end{bmatrix} =$ $\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix}$ where $\textstyle\sum_{i=1}^k e^{\eta_i}=C$ 
$\frac{n!}{\prod_{i=1}^{k} x_i!}$ $\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}$ $0$ $0$ multinomial distribution (variant 3) with known number of trials n p1,...,pk where $p_k = 1 - \textstyle\sum_{i=1}^{k-1} p_i$ $\begin{bmatrix} \ln \dfrac{p_1}{p_k} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{p_k} \\[15pt] 0 \end{bmatrix} =$ $\begin{bmatrix} \ln \dfrac{p_1}{1-\sum_{i=1}^{k-1}p_i} \\[10pt] \vdots \\[5pt] \ln \dfrac{p_{k-1}}{1-\sum_{i=1}^{k-1}p_i} \\[15pt] 0 \end{bmatrix}$ $\begin{bmatrix} \dfrac{e^{\eta_1}}{\sum_{i=1}^{k}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_k}}{\sum_{i=1}^{k}e^{\eta_i}} \end{bmatrix} =$ $\begin{bmatrix} \dfrac{e^{\eta_1}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[10pt] \vdots \\[5pt] \dfrac{e^{\eta_{k-1}}}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \\[15pt] \dfrac{1}{1+\sum_{i=1}^{k-1}e^{\eta_i}} \end{bmatrix}$ $\frac{n!}{\prod_{i=1}^{k} x_i!}$ $\begin{bmatrix} x_1 \\ \vdots \\ x_k \end{bmatrix}$ $n\ln \left(\sum_{i=1}^{k} e^{\eta_i}\right) = n\ln \left(1+\sum_{i=1}^{k-1} e^{\eta_i}\right)$ $-n\ln p_k = -n\ln \left(1 - \sum_{i=1}^{k-1} p_i\right)$ Dirichlet distribution α1,...,αk $\begin{bmatrix} \alpha_1-1 \\ \vdots \\ \alpha_k-1 \end{bmatrix}$ $\begin{bmatrix} \eta_1+1 \\ \vdots \\ \eta_k+1 \end{bmatrix}$ $1$ $\begin{bmatrix} \ln x_1 \\ \vdots \\ \ln x_k \end{bmatrix}$ $\sum_{i=1}^k \ln \Gamma(\eta_i+1) - \ln \Gamma\left(\sum_{i=1}^k\Big(\eta_i+1\Big)\right)$ $\sum_{i=1}^k \ln \Gamma(\alpha_i) - \ln \Gamma\left(\sum_{i=1}^k\alpha_i\right)$ Wishart distribution V,n $\begin{bmatrix} -\frac12\mathbf{V}^{-1} \\[5pt] \dfrac{n-p-1}{2} \end{bmatrix}$ $\begin{bmatrix} -\frac12{\boldsymbol\eta_1}^{-1} \\[5pt] 2\eta_2+p+1 \end{bmatrix}$ $1$ $\begin{bmatrix} \mathbf{X} \\ \ln|\mathbf{X}| \end{bmatrix}$ $-\left(\eta_2+\frac{p+1}{2}\right)\ln|-\boldsymbol\eta_1|$ $+ \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right) =$ $-\frac{n}{2}\ln|-\boldsymbol\eta_1| + \ln\Gamma_p\left(\frac{n}{2}\right) =$ $\left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln|\mathbf{V}|)$ $+
\ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right)$ • Three variants with different parameterizations are given, to facilitate computing moments of the sufficient statistics. $\frac{n}{2}(p\ln 2 + \ln|\mathbf{V}|) + \ln\Gamma_p\left(\frac{n}{2}\right)$ NOTE: Uses the fact that ${\rm tr}(\mathbf{A}^{\rm T}\mathbf{B}) = \operatorname{vec}(\mathbf{A}) \cdot \operatorname{vec}(\mathbf{B}),$ i.e. the trace of a matrix product is much like a dot product. The matrix parameters are assumed to be vectorized (laid out in a vector) when inserted into the exponential form. Also, V and X are symmetric, so e.g. $\mathbf{V}^{\rm T} = \mathbf{V}.$ inverse Wishart distribution Ψ,m $\begin{bmatrix} -\frac12\boldsymbol\Psi \\[5pt] -\dfrac{m+p+1}{2} \end{bmatrix}$ $\begin{bmatrix} -2\boldsymbol\eta_1 \\[5pt] -(2\eta_2+p+1) \end{bmatrix}$ $1$ $\begin{bmatrix} \mathbf{X}^{-1} \\ \ln|\mathbf{X}| \end{bmatrix}$ $\left(\eta_2 + \frac{p + 1}{2}\right)\ln|-\boldsymbol\eta_1|$ $+ \ln\Gamma_p\left(-\Big(\eta_2 + \frac{p + 1}{2}\Big)\right) =$ $-\frac{m}{2}\ln|-\boldsymbol\eta_1| + \ln\Gamma_p\left(\frac{m}{2}\right) =$ $-\left(\eta_2 + \frac{p + 1}{2}\right)(p\ln 2 - \ln|\boldsymbol\Psi|)$ $+ \ln\Gamma_p\left(-\Big(\eta_2 + \frac{p + 1}{2}\Big)\right)$ $\frac{m}{2}(p\ln 2 - \ln|\boldsymbol\Psi|) + \ln\Gamma_p\left(\frac{m}{2}\right)$ normal-gamma distribution α,β,μ,λ $\begin{bmatrix} \alpha-\frac12 \\ -\beta-\dfrac{\lambda\mu^2}{2} \\ \lambda\mu \\ -\dfrac{\lambda}{2}\end{bmatrix}$ $\begin{bmatrix} \eta_1+\frac12 \\ -\eta_2 + \dfrac{\eta_3^2}{4\eta_4} \\ -\dfrac{\eta_3}{2\eta_4} \\ -2\eta_4 \end{bmatrix}$ $\dfrac{1}{\sqrt{2\pi}}$ $\begin{bmatrix} \ln \tau \\ \tau \\ \tau x \\ \tau x^2 \end{bmatrix}$ $\ln \Gamma\left(\eta_1+\frac12\right) - \frac12\ln\left(-2\eta_4\right) - \left(\eta_1+\frac12\right)\ln\left(-\eta_2 + \dfrac{\eta_3^2}{4\eta_4}\right)$ $\ln \Gamma\left(\alpha\right)-\alpha\ln\beta-\frac12\ln\lambda$ The three variants of the categorical distribution and multinomial distribution
are due to the fact that the parameters $p_i$ are constrained, such that $\sum_{i=1}^{k} p_i = 1 .$ Thus, there are only $k-1$ independent parameters. • Variant 1 uses $k$ natural parameters with a simple relation between the standard and natural parameters; however, only $k-1$ of the natural parameters are independent, and the set of $k$ natural parameters is nonidentifiable. The constraint on the usual parameters translates to a similar constraint on the natural parameters. • Variant 2 demonstrates the fact that the entire set of natural parameters is nonidentifiable: Adding any constant value to the natural parameters has no effect on the resulting distribution. However, by using the constraint on the natural parameters, the formula for the normal parameters in terms of the natural parameters can be written in a way that is independent of the constant that is added. • Variant 3 shows how to make the parameters identifiable in a convenient way by setting $C = -\ln p_k .$ This effectively "pivots" around $p_k$ and causes the last natural parameter to have the constant value of 0. All the remaining formulas are written in a way that does not access $p_k,$ so that effectively the model has only $k-1$ parameters, both of the usual and natural kind. Note also that variants 1 and 2 are not actually standard exponential families at all. Rather they are curved exponential families, i.e. there are $k-1$ independent parameters embedded in a $k$-dimensional parameter space. Many of the standard results for exponential families do not apply to curved exponential families. An example is the log-partition function $A(\eta)$, which has the value of 0 in the curved cases. In standard exponential families, the derivatives of this function correspond to the moments (more technically, the cumulants) of the sufficient statistics, e.g. the mean and variance.
However, a value of 0 suggests that the mean and variance of all the sufficient statistics are uniformly 0, whereas in fact the mean of the ith sufficient statistic should be $p_i.$ (This does emerge correctly when using the form of $A(\eta)$ in variant 3.) ## Moments and cumulants of the sufficient statistic ### Normalization of the distribution We start with the normalization of the probability distribution. In general, an arbitrary function $f(x)$ that serves as the kernel of a probability distribution (the part encoding all dependence on x) can be made into a proper distribution by normalizing: i.e. $p(x) = \frac{1}{Z} f(x)$ where $Z = \int_x f(x) dx.$ The factor Z is sometimes termed the normalizer or partition function, based on an analogy to statistical physics. In the case of an exponential family where $p(x; \boldsymbol\eta) = g(\boldsymbol\eta) h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)},$ the kernel is $K(x) = h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)}$ and the partition function is $Z = \int_x h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)} dx.$ Since the distribution must be normalized, we have $1 = \int_x g(\boldsymbol\eta) h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)} dx = g(\boldsymbol\eta) \int_x h(x) e^{\boldsymbol\eta \cdot \mathbf{T}(x)} dx = g(\boldsymbol\eta) Z.$ In other words, $g(\boldsymbol\eta) = \frac{1}{Z}$ or equivalently $A(\boldsymbol\eta) = - \ln g(\boldsymbol\eta) = \ln Z.$ This justifies calling A the log-normalizer or log-partition function. ### Moment generating function of the sufficient statistic Now, the moment generating function of T(x) is $M_T(u) \equiv E[e^{u^{\rm T} T(x)}|\eta] = \int_x h(x) e^{(\eta+u)^{\rm T} T(x)-A(\eta)} dx = e^{A(\eta + u)-A(\eta)}$ proving the earlier statement that $K(u|\eta) = A(\eta+u) - A(\eta)$ is the cumulant generating function for T. An important subclass of the exponential family, the natural exponential family, has a similar form for the moment generating function for the distribution of x.
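The identity $M_T(u) = e^{A(\eta + u)-A(\eta)}$ can be checked numerically. The following sketch (assuming NumPy is available; the parameter values are arbitrary) does so for the Poisson family, whose log-partition function is $A(\eta) = e^\eta$ with $\eta = \ln\lambda$:

```python
import numpy as np

# Check M_T(u) = exp(A(eta + u) - A(eta)) for the Poisson family,
# where T(x) = x and A(eta) = exp(eta) with eta = ln(lambda).
lam = 3.0
eta = np.log(lam)
A = np.exp                                # Poisson log-partition function

rng = np.random.default_rng(0)
x = rng.poisson(lam, size=1_000_000)      # Monte Carlo sample

u = 0.2
mgf_mc = np.exp(u * x).mean()             # Monte Carlo estimate of E[exp(u T(X))]
mgf_cgf = np.exp(A(eta + u) - A(eta))     # same quantity via the log-partition function

# Likewise, differentiating A recovers the mean of the sufficient statistic:
mean_from_A = (A(eta + 1e-6) - A(eta - 1e-6)) / 2e-6   # central difference, ≈ lam
```

The two MGF values agree to Monte Carlo accuracy, and the finite-difference derivative of $A$ reproduces $\mathbb{E}[T(x)] = \lambda$, anticipating the differential identities below.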
#### Differential identities for cumulants In particular, using the properties of the cumulant generating function, $E(T_{j}) = \frac{ \partial A(\eta) }{ \partial \eta_{j} }$ and $\mathrm{cov}(T_{i},T_{j}) = \frac{ \partial^{2} A(\eta) }{ \partial \eta_{i} \, \partial \eta_{j} }.$ The first two raw moments and all mixed second moments can be recovered from these two identities. Higher order moments and cumulants are obtained by higher derivatives. This technique is often useful when T is a complicated function of the data, whose moments are difficult to calculate by integration. Another way to see this that does not rely on the theory of cumulants is to begin from the fact that the distribution of an exponential family must be normalized, and differentiate. We illustrate using the simple case of a one-dimensional parameter, but an analogous derivation holds more generally. In the one-dimensional case, we have $p(x) = g(\eta) h(x) e^{\eta T(x)} .$ This must be normalized, so $1 = \int_x p(x) dx = \int_x g(\eta) h(x) e^{\eta T(x)} dx = g(\eta) \int_x h(x) e^{\eta T(x)} dx .$ Take the derivative of both sides with respect to η: $\begin{align} 0 &= g(\eta) \frac{d}{d\eta} \int_x h(x) e^{\eta T(x)} dx + g'(\eta)\int_x h(x) e^{\eta T(x)} dx \\ &= g(\eta) \int_x h(x) \left(\frac{d}{d\eta} e^{\eta T(x)}\right) dx + g'(\eta)\int_x h(x) e^{\eta T(x)} dx \\ &= g(\eta) \int_x h(x) e^{\eta T(x)} T(x) dx + g'(\eta)\int_x h(x) e^{\eta T(x)} dx \\ &= \int_x T(x) g(\eta) h(x) e^{\eta T(x)} dx + \frac{g'(\eta)}{g(\eta)}\int_x g(\eta) h(x) e^{\eta T(x)} dx \\ &= \int_x T(x) p(x) dx + \frac{g'(\eta)}{g(\eta)}\int_x p(x) dx \\ &= \mathbb{E}[T(x)] + \frac{g'(\eta)}{g(\eta)} \\ &= \mathbb{E}[T(x)] + \frac{d}{d\eta} \ln g(\eta) \end{align}$ Therefore, $\mathbb{E}[T(x)] = - \frac{d}{d\eta} \ln g(\eta) = \frac{d}{d\eta} A(\eta).$ #### Example 1 As an introductory example, consider the gamma distribution, whose distribution is defined by $p(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} 
x^{\alpha-1}e^{-\beta x}.$ Referring to the above table, we can see that the natural parameter is given by $\eta_1 = \alpha-1,$ $\eta_2 = -\beta,$ the reverse substitutions are $\alpha = \eta_1+1,$ $\beta = -\eta_2,$ the sufficient statistics are $(\ln x, x),$ and the log-partition function is $A(\eta_1,\eta_2) = \ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2).$ We can find the mean of the sufficient statistics as follows. First, for η1: $\begin{align} \mathbb{E}[\ln x] &= \frac{ \partial A(\eta_1,\eta_2) }{ \partial \eta_1 } = \frac{ \partial }{ \partial \eta_1 } \left(\ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2)\right) \\ &= \psi(\eta_1+1) - \ln(-\eta_2) \\ &= \psi(\alpha) - \ln \beta, \end{align}$ where $\psi(x)$ is the digamma function (derivative of log gamma), and we used the reverse substitutions in the last step. Now, for η2: $\begin{align} \mathbb{E}[x] &= \frac{ \partial A(\eta_1,\eta_2) }{ \partial \eta_2 } = \frac{ \partial }{ \partial \eta_2 } \left(\ln \Gamma(\eta_1+1)-(\eta_1+1)\ln(-\eta_2)\right) \\ &= -(\eta_1+1)\frac{1}{-\eta_2}(-1) = \frac{\eta_1+1}{-\eta_2} \\ &= \frac{\alpha}{\beta}, \end{align}$ again making the reverse substitution in the last step. To compute the variance of x, we just differentiate again: $\begin{align} \operatorname{Var}(x) &= \frac{ \partial^2 A(\eta_1,\eta_2) }{ \partial \eta_2^2 } = \frac{ \partial }{ \partial \eta_2 } \frac{\eta_1+1}{-\eta_2} \\ &= \frac{\eta_1+1}{\eta_2^2} \\ &= \frac{\alpha}{\beta^2}. \end{align}$ All of these calculations can be done using integration, making use of various properties of the gamma function, but this requires significantly more work. #### Example 2 As another example, consider a real-valued random variable $\scriptstyle X$ with density $p_\theta (x) = \frac{ \theta e^{-x} }{(1 + e^{-x})^{\theta + 1} }$ indexed by shape parameter $\theta \in (0,\infty)$ (this is called the skew-logistic distribution).
The density can be rewritten as $\frac{ e^{-x} } { 1 + e^{-x} } \exp( -\theta \log(1 + e^{-x}) + \log(\theta))$ Notice this is an exponential family with natural parameter $\eta = -\theta, \,$ sufficient statistic $T = \log(1 + e^{-x}), \,$ and log-partition function $A(\eta) = -\log(\theta) = -\log(-\eta) \,$ So using the first identity, $E(\log(1 + e^{-X})) = E(T) = \frac{ \partial A(\eta) }{ \partial \eta } = \frac{ \partial }{ \partial \eta } [-\log(-\eta)] = \frac{1}{-\eta} = \frac{1}{\theta},$ and using the second identity $\mathrm{var}(\log(1 + e^{-X})) = \frac{ \partial^2 A(\eta) }{ \partial \eta^2 } = \frac{ \partial }{ \partial \eta } \left[\frac{1}{-\eta}\right] = \frac{1}{(-\eta)^2} = \frac{1}{\theta^2}.$ This example illustrates a case where using this method is very simple, but the direct calculation would be nearly impossible. #### Example 3 The final example is one where integration would be extremely difficult. This is the case of the Wishart distribution, which is defined over matrices. Even taking derivatives is a bit tricky, as it involves matrix calculus, but the respective identities are listed in that article. From the above table, we can see that the natural parameter is given by $\boldsymbol\eta_1 = -\frac12\mathbf{V}^{-1},$ $\eta_2 = \frac{n-p-1}{2},$ the reverse substitutions are $\mathbf{V} = -\frac12{\boldsymbol\eta_1}^{-1},$ $n = 2\eta_2+p+1,$ and the sufficient statistics are $(\mathbf{X}, \ln|\mathbf{X}|).$ The log-partition function is written in various forms in the table, to facilitate differentiation and back-substitution. 
We use the following forms: $A(\boldsymbol\eta_1, n) = -\frac{n}{2}\ln|-\boldsymbol\eta_1| + \ln\Gamma_p\left(\frac{n}{2}\right),$ $A(\mathbf{V},\eta_2) = \left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln|\mathbf{V}|) + \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right).$ Expectation of X (associated with η1) To differentiate with respect to η1, we need the following matrix calculus identity: $\frac{\partial \ln |a\mathbf{X}|}{\partial \mathbf{X}} =(\mathbf{X}^{-1})^{\rm T}$ Then: $\begin{align} \mathbb{E}[\mathbf{X}] &= \frac{ \partial A(\boldsymbol\eta_1,\ldots) }{ \partial \boldsymbol\eta_1 } = \frac{ \partial }{ \partial \boldsymbol\eta_1 } \left[-\frac{n}{2}\ln|-\boldsymbol\eta_1| + \ln\Gamma_p\left(\frac{n}{2}\right) \right] \\ &= -\frac{n}{2}(\boldsymbol\eta_1^{-1})^{\rm T} = \frac{n}{2}(-\boldsymbol\eta_1^{-1})^{\rm T} \\ &= n(\mathbf{V})^{\rm T} \\ &= n\mathbf{V} \end{align}$ The last line uses the fact that V is symmetric, and therefore it is the same when transposed. Expectation of ln |X| (associated with η2) Now, for η2, we first need to expand the part of the log-partition function that involves the multivariate gamma function: $\ln \Gamma_p(a)= \ln \left(\pi^{p(p-1)/4}\prod_{j=1}^p \Gamma\left[ a+(1-j)/2\right]\right) = p(p-1)/4 \ln \pi + \sum_{j=1}^p \ln \Gamma\left[ a+(1-j)/2\right]$ We also need the digamma function $\psi(x) = \frac{d}{dx} \ln \Gamma(x) .$ Then: $\begin{align} \mathbb{E}[\ln |\mathbf{X}|] &= \frac{ \partial A(\ldots,\eta_2) }{ \partial \eta_2 } = \frac{ \partial }{ \partial \eta_2 } \left[ \left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln|\mathbf{V}|) + \ln\Gamma_p\left(\eta_2+\frac{p+1}{2}\right) \right] \\ &= \frac{ \partial }{ \partial \eta_2 } \left[ \left(\eta_2+\frac{p+1}{2}\right)(p\ln 2 + \ln|\mathbf{V}|) + p(p-1)/4 \ln \pi + \sum_{j=1}^p \ln \Gamma\left[\eta_2+\frac{p+1}{2}+(1-j)/2\right] \right] \\ &= p\ln 2 + \ln|\mathbf{V}| + \sum_{j=1}^p \psi\left[\eta_2+\frac{p+1}{2}+(1-j)/2\right] \\ &= p\ln 2 + \ln|\mathbf{V}| + \sum_{j=1}^p
\psi\left[\frac{n-p-1}{2}+\frac{p+1}{2}+(1-j)/2\right] \\ &= p\ln 2 + \ln|\mathbf{V}| + \sum_{j=1}^p \psi\left[\frac{n}{2}+(1-j)/2\right] \\ &= p\ln 2 + \ln|\mathbf{V}| + \sum_{j=1}^p \psi\left(\frac{n+1-j}{2}\right) \end{align}$ This latter formula is listed in the Wishart distribution article. Both of these expectations are needed when deriving the variational Bayes update equations in a Bayes network involving a Wishart distribution (which is the conjugate prior of the multivariate normal distribution). Computing these formulas using integration would be much more difficult. The first one, for example, would require matrix integration. ## Maximum entropy derivation The exponential family arises naturally as the answer to the following question: what is the maximum-entropy distribution consistent with given constraints on expected values? The information entropy of a probability distribution dF(x) can only be computed with respect to some other probability distribution (or, more generally, a positive measure), and both measures must be mutually absolutely continuous. Accordingly, we need to pick a reference measure dH(x) with the same support as dF(x). The entropy of dF(x) relative to dH(x) is $S[dF|dH]=-\int {dF\over dH}\ln{dF\over dH}\,dH$ or $S[dF|dH]=\int\ln{dH\over dF}\,dF$ where dF/dH and dH/dF are Radon–Nikodym derivatives. Note that the ordinary definition of entropy for a discrete distribution supported on a set I, namely $S=-\sum_{i\in I} p_i\ln p_i$ assumes, though this is seldom pointed out, that dH is chosen to be the counting measure on I. Consider now a collection of observable quantities (random variables) Ti. The probability distribution dF whose entropy with respect to dH is greatest, subject to the conditions that the expected value of Ti be equal to ti, is a member of the exponential family with dH as reference measure and (T1, ..., Tn) as sufficient statistic. The derivation is a simple variational calculation using Lagrange multipliers. 
Normalization is imposed by letting T0 = 1 be one of the constraints. The natural parameters of the distribution are the Lagrange multipliers, and the normalization factor is the Lagrange multiplier associated to T0. For examples of such derivations, see Maximum entropy probability distribution. ## Role in statistics ### Classical estimation: sufficiency According to the Pitman–Koopman–Darmois theorem, among families of probability distributions whose domain does not vary with the parameter being estimated, only in exponential families is there a sufficient statistic whose dimension remains bounded as sample size increases. Less tersely, suppose Xk, (where k = 1, 2, 3, ... n) are independent, identically distributed random variables. Only if their distribution is one of the exponential family of distributions is there a sufficient statistic T(X1, ..., Xn) whose number of scalar components does not increase as the sample size n increases; the statistic T may be a vector or a single scalar number, but whatever it is, its size will neither grow nor shrink when more data are obtained. ### Bayesian estimation: conjugate distributions Exponential families are also important in Bayesian statistics. In Bayesian statistics a prior distribution is multiplied by a likelihood function and then normalised to produce a posterior distribution. In the case of a likelihood which belongs to the exponential family there exists a conjugate prior, which is often also in the exponential family. 
A conjugate prior $\pi$ for the parameter $\boldsymbol\eta$ of an exponential family is given by $p_\pi(\boldsymbol\eta|\boldsymbol\chi,\nu) = f(\boldsymbol\chi,\nu) \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi - \nu\, A(\boldsymbol\eta)),$ or equivalently $p_\pi(\boldsymbol\eta|\boldsymbol\chi,\nu) = f(\boldsymbol\chi,\nu) g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi),$ where $\boldsymbol\chi \in \mathbb{R}^s$ (where $s$ is the dimension of $\boldsymbol\eta$) and $\nu>0$ are hyperparameters (parameters controlling parameters). $\nu$ corresponds to the effective number of observations that the prior distribution contributes, and $\boldsymbol\chi$ corresponds to the total amount that these pseudo-observations contribute to the sufficient statistic over all observations and pseudo-observations. $f(\boldsymbol\chi,\nu)$ is a normalization constant that is automatically determined by the remaining functions and serves to ensure that the given function is a probability density function (i.e. it is normalized). $A(\boldsymbol\eta)$ and equivalently $g(\boldsymbol\eta)$ are the same functions as in the definition of the distribution over which $\pi$ is the conjugate prior. A conjugate prior is one which, when combined with the likelihood and normalised, produces a posterior distribution which is of the same type as the prior. For example, if one is estimating the success probability of a binomial distribution, then if one chooses to use a beta distribution as one's prior, the posterior is another beta distribution. This makes the computation of the posterior particularly simple. Similarly, if one is estimating the parameter of a Poisson distribution the use of a gamma prior will lead to another gamma posterior. Conjugate priors are often very flexible and can be very convenient. 
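The beta example can be checked mechanically: multiplying a beta prior by a binomial likelihood must give, up to a constant factor, the density of another beta distribution. A minimal sketch (the prior and data values below are illustrative):

```python
import math

def beta_logpdf(x, a, b):
    # log density of Beta(a, b) at x in (0, 1)
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

a, b, s, f = 2.0, 2.0, 7, 3       # Beta(2, 2) prior; 7 successes, 3 failures

def log_unnorm_posterior(x):
    # log prior + log likelihood (dropping the x-independent binomial coefficient)
    return beta_logpdf(x, a, b) + s * math.log(x) + f * math.log(1 - x)

# the ratio to Beta(a+s, b+f) must be the same constant at every point
ratios = [log_unnorm_posterior(x) - beta_logpdf(x, a + s, b + f)
          for x in (0.2, 0.5, 0.8)]
assert max(ratios) - min(ratios) < 1e-12
```

Since the log-ratio is constant in x, normalising the product necessarily yields exactly Beta(a + s, b + f).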
However, if one's belief about the likely value of the theta parameter of a binomial is represented by (say) a bimodal (two-humped) prior distribution, then this cannot be represented by a beta distribution. It can however be represented by using a mixture density as the prior, here a combination of two beta distributions; this is a form of hyperprior. An arbitrary likelihood will not belong to the exponential family, and thus in general no conjugate prior exists. The posterior will then have to be computed by numerical methods. To show that the above prior distribution is a conjugate prior, we can derive the posterior. First, assume that the probability of a single observation follows an exponential family, parameterized using its natural parameter: $p_F(x|\boldsymbol \eta) = h(x) g(\boldsymbol\eta) \exp\Big(\ \boldsymbol\eta^{\rm T} \mathbf{T}(x)\ \Big) \,\!$ Then, for data $\mathbf{X} = (x_1,\ldots,x_n)$, the likelihood is computed as follows: $p(\mathbf{X}|\boldsymbol\eta) = \left( \prod_{i=1}^n h(x_i) \right) g(\boldsymbol\eta)^n \exp\left(\ \boldsymbol\eta^{\rm T} \Big(\sum_{i=1}^n \mathbf{T}(x_i)\Big) \ \right)$ Then, for the above conjugate prior: $\begin{align} p_\pi(\boldsymbol\eta|\boldsymbol\chi,\nu) &= f(\boldsymbol\chi,\nu) g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) &\propto g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) \end{align}$ We can then compute the posterior as follows: $\begin{align} p(\boldsymbol\eta|\mathbf{X},\boldsymbol\chi,\nu)& \propto p(\mathbf{X}|\boldsymbol\eta) p_\pi(\boldsymbol\eta|\boldsymbol\chi,\nu) \\ & = \left( \prod_{i=1}^n h(x_i) \right) g(\boldsymbol\eta)^n \exp\left(\ \boldsymbol\eta^{\rm T} \Big(\sum_{i=1}^n \mathbf{T}(x_i)\Big) \ \right) f(\boldsymbol\chi,\nu) g(\boldsymbol\eta)^\nu \exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) \\ & \propto g(\boldsymbol\eta)^n \exp\left(\ \boldsymbol\eta^{\rm T} \Big(\sum_{i=1}^n \mathbf{T}(x_i)\Big) \ \right) g(\boldsymbol\eta)^\nu 
\exp(\boldsymbol\eta^{\rm T} \boldsymbol\chi) \\ & \propto g(\boldsymbol\eta)^{\nu + n} \exp\left(\ \boldsymbol\eta^{\rm T} \Big(\boldsymbol\chi + \sum_{i=1}^n \mathbf{T}(x_i)\Big) \ \right) \end{align}$ The last line is the kernel of the prior distribution, i.e. $p(\boldsymbol\eta|\mathbf{X},\boldsymbol\chi,\nu) = p_\pi\Big(\boldsymbol\eta|\boldsymbol\chi + \sum_{i=1}^n \mathbf{T}(x_i), \nu + n\Big)$ This shows that the posterior has the same form as the prior. Note in particular that the data $\mathbf{X}$ enters into this equation only in the expression $\mathbf{T}(\mathbf{X}) = \sum_{i=1}^n \mathbf{T}(x_i),$ which is termed the sufficient statistic of the data. That is, the value of the sufficient statistic is sufficient to completely determine the posterior distribution. The actual data points themselves are not needed, and all sets of data points with the same sufficient statistic will have the same distribution. This is important because the dimension of the sufficient statistic does not grow with the data size — it has only as many components as the components of $\boldsymbol\eta$ (equivalently, the number of parameters of the distribution of a single data point). The update equations are as follows: $\begin{align} \boldsymbol\chi' &= \boldsymbol\chi + \mathbf{T}(\mathbf{X}) = \boldsymbol\chi + \sum_{i=1}^n \mathbf{T}(x_i) \\ \nu' &= \nu + n \end{align}$ This shows that the update equations can be written simply in terms of the number of data points and the sufficient statistic of the data. This can be seen clearly in the various examples of update equations shown in the conjugate prior page. Note also that because of the way that the sufficient statistic is computed, it necessarily involves sums of components of the data (in some cases disguised as products or other forms — a product can be written in terms of a sum of logarithms). 
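As a concrete instance of these update equations (the gamma–Poisson pair mentioned earlier, worked in natural parameters; the variable names are mine): for a Poisson likelihood, η = ln λ and T(x) = x, and a change of variables shows the conjugate prior p(η|χ, ν) is a Gamma(shape = χ, rate = ν) density in λ, so the update χ' = χ + ΣT(xᵢ), ν' = ν + n must reproduce the textbook Gamma(α + Σxᵢ, β + n) posterior:

```python
def natural_update(chi, nu, data):
    # chi' = chi + sum of sufficient statistics, nu' = nu + number of observations
    return chi + sum(data), nu + len(data)

alpha, beta = 2.0, 1.0            # Gamma(shape, rate) prior on lambda
chi, nu = alpha, beta             # the same prior in (chi, nu) coordinates
data = [3, 1, 4, 1, 5]

chi2, nu2 = natural_update(chi, nu, data)
# matches the textbook conjugate update for the gamma-Poisson pair
assert (chi2, nu2) == (alpha + sum(data), beta + len(data))
```

The update itself never looks at the individual data points, only at n and the sufficient statistic, exactly as described above.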
The cases where the update equations for particular distributions don't exactly match the above forms are cases where the conjugate prior has been expressed using a different parameterization than the one that produces a conjugate prior of the above form — often specifically because the above form is defined over the natural parameter $\boldsymbol\eta$ while conjugate priors are usually defined over the actual parameter $\boldsymbol\theta$. ### Hypothesis testing: Uniformly most powerful tests Further information: Uniformly most powerful test The one-parameter exponential family has a monotone non-decreasing likelihood ratio in the sufficient statistic T(x), provided that η(θ) is non-decreasing. As a consequence, there exists a uniformly most powerful test for testing the hypothesis H0: θ ≥ θ0 vs. H1: θ < θ0. ### Generalized linear models The exponential family forms the basis for the distribution function used in generalized linear models, a class of models that encompasses many of the commonly used regression models in statistics. ## References 1. Andersen, Erling (September 1970). "Sufficiency and Exponential Families for Discrete Sample Spaces". Journal of the American Statistical Association 65 (331): 1248–1255. doi:10.2307/2284291. JSTOR 2284291. MR 268992. 2. Pitman, E.; Wishart, J. (1936). "Sufficient statistics and intrinsic accuracy". Mathematical Proceedings of the Cambridge Philosophical Society 32 (4): 567–579. doi:10.1017/S0305004100019307. 3. Darmois, G. (1935). "Sur les lois de probabilités à estimation exhaustive". C.R. Acad. Sci. Paris (in French) 200: 1265–1266. 4. Koopman, B. (1936). "On distributions admitting a sufficient statistic". Transactions of the American Mathematical Society 39 (3): 399–409. doi:10.2307/1989758. JSTOR 1989758. MR 1501854. 5. Kupperman, M. (1958). "Probabilities of Hypotheses and Information-Statistics in Sampling from Exponential-Class Populations". Annals of Mathematical Statistics, 9 (2), 571–575. JSTOR 2237349. ## Further reading • Lehmann, E.
L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. sec. 1.5. • Keener, Robert W. (2006). Statistical Theory: Notes for a Course in Theoretical Statistics. Springer. pp. 27–28, 32–33. • Fahrmeier, Ludwig; Tutz, G. (1994). Multivariate statistical modelling based on generalized linear models. Springer. pp. 18–22, 345–349.
http://mathoverflow.net/questions/91337/uniformizing-the-surcomplex-unit-circle/91370
## Uniformizing the surcomplex unit circle Is the multiplicative group of surcomplex numbers of modulus 1 isomorphic to the additive group of the surreal numbers modulo the subgroup of surreal integers? And, do Norman Alling's surreal extensions of sine and cosine (defined in section 7.5 of his book "Foundations of analysis over surreal number fields") accomplish the isomorphism? - ## 3 Answers Let me say at least this: the usual series for sine and cosine "converge" for the finite surreals, and provide an isomorphism from (the finite surreals modulo the standard integers) onto (the surcomplex unit circle). An alternative way to define the sine on the finite surreals: write $x = a+z$ where $a$ is a standard real and $z$ is infinitesimal, then use the addition formulas for $\sin(a+z)$ and $\cos(a+z)$. added March 18 Extension to all surreals depends on the choice for the complementary subgroup of the finite surreals. What (beyond the usual $\mathbb Z$) should be called an "integer"? Conway has such a choice in his formulation, called $\mathbf{Oz}$. On surjectivity: Conway emphasizes more the algebraic and combinatorial side, less the analytic side. But, in fact, this same thing will work in all the usual canonical ways of constructing nonarchimedean extensions of the reals. In nonstandard analysis, $\sin$ and $\cos$ have corresponding nonstandard versions, and surjectivity is a first-order property, so it transfers. In transseries, there are many possibilities: series expansion for $\arcsin$; an integral; a solution of a differential equation; ... In the surreals, Ehrlich [LINK] showed $\mathbf{No}$ can be realized as a space of Hahn series, and after that it will be the same as for transseries. It does seem less convenient in Conway's original formulation, admittedly. added March 19 Here is how we do it when using Hahn series.
Once you reach a certain point in Conway's book ONAG, you can do this also for surreals, using his Theorem 23 with his "normal forms". Hahn series look like $\sum_{i \in I} c_i g_i$, where the coefficients $c_i$ are real, and the "monomials" $g_i$ are reverse well-ordered. One possible monomial is $1$; monomials larger than that are "infinite", those smaller are "infinitesimal". The set of possible monomials is an ordered abelian group under multiplication. Given a general element $A$ of our field of Hahn series, we write it as $A = L + t + S$, where every monomial in $L$ is infinite, $t \in \mathbb R$, and every monomial in $S$ is infinitesimal. Define $$\begin{align} \sin A &= \sin t \cos S + \cos t \sin S, \cr \cos A &= \cos t \cos S - \sin t \sin S \end{align}$$ and for infinitesimal $S$, $$\begin{align*} \sin S &= S - \frac{1}{6} S^3 + \frac{1}{5!} S^5 + \dots, \cr \cos S &= 1 - \frac{1}{2} S^2 + \frac{1}{4!}S^4 + \dots, \end{align*}$$ with convergence in the most trivial sense: each monomial occurs in only finitely many terms of the expansion, so you just collect terms. Then observe that there is an inverse series: $$\arcsin T = T + \frac{1}{6} T^3 + \frac{3}{40} T^5 + \dots$$ with convergence in the same sense. Actually, for the surjectivity in this problem, it may be more convenient to use one series $\arctan T$ rather than two series $\arcsin$ and $\arccos$. So: given $X,Y$ with $X^2+Y^2=1$ we claim there is $A$ with $\sin A = X$, $\cos A = Y$. We should take either $A = \arctan(X/Y)$ or that plus $\pi$, depending on the signs of $X$ and $Y$. This is getting to be too long for an answer... - How do we know the map is surjective? – James Propp Mar 18 2012 at 2:47 Thanks for the remark about the choice of "complementary subgroup"; it clarified things for me. But I still don't see the surjectivity of the map from the surreals to the surcomplex unit circle.
I agree that in non-standard analysis there's a transfer principle, but surreal analysis is different from NSA (I'll start a new thread on this: mathoverflow.net/questions/91646/…), and in any case it's not clear to me that bracket-based definitions of sin and cos are first-order. – James Propp Mar 19 2012 at 18:11 I don't even know what bracket-based definitions of sin and cos are. In ONAG Conway only remarks that the obvious ones don't work. – Gerald Edgar Mar 19 2012 at 19:56 The answer to the first question is yes and the answer to the second question is no. As Ovidiu Costin confirmed in an email to me, the desired isomorphism can be constructed using an idea I learned from him regarding how to define sin/cos on all the surreals. The idea in Ovidiu's words follows, where N ranges over the omnific integers (finite and infinite). With sin/cos the idea is not mine but Martin's (or it even goes back to Conway). What it gives is the following prescription: sin(2 pi N+delta)=sin(delta), if delta\in [0,2\pi). This can be taken as a definition as well. Similarly with cos. Clearly sin/cos are well defined on all surreals. Any isomorphism should now be straightforward. Regards, Philip Ehrlich - I don't understand; say N=1. Then the prescription sin(2 pi N delta) = sin(delta) doesn't hold. Maybe N can only be infinite? In any case, I'd like to see more details. – James Propp Mar 16 2012 at 18:30 Presumably: sin(2 pi N + delta)=sin(delta) – Gerald Edgar Mar 16 2012 at 21:10 Oops, it should have read: sin(2 pi N+delta)=sin(delta) not "sin(2 pi N delta)". Sorry! I have now corrected the typo in the original answer. Philip Ehrlich – Philip Ehrlich Mar 16 2012 at 21:30 Does one prove surjectivity using power series expansions of arcsine and arccosine? If not that way, then how?
– James Propp Mar 18 2012 at 2:50 The following two questions were asked: 1: Is the multiplicative group of surcomplex numbers of modulus 1 isomorphic to the additive group of the surreal numbers modulo the subgroup of surreal integers? 2: Do Norman Alling’s surreal extensions of sin and cos (defined in his book) accomplish the isomorphism? In my earlier posting I said the answer to 1 is yes and the answer to 2 is no. In response to the request for further details, first note that, as Alling himself observes, his extensions of the definitions of sin and cos via series only apply to infinitesimals. Accordingly, we need to know that sin and cos are well defined throughout the surreals. This is the import of Ovidiu Costin’s observation (taught to him by Martin Kruskal) that one can define sin(2 pi N+delta)=sin(delta), if delta is in [0, 2pi) (and analogously for cos), where N ranges over all the omnific integers (finite and infinite). Hence, my answer to 2. As to the isomorphism itself, note that since the properties of sin, cos are the same for real as for surreal numbers, one would simply write that (x+iy) with x, y in [-1,1] and x^2+y^2=1 is mapped to theta where there is a unique theta such that cos(theta)=x, sin(theta)=y. For theta in [0, 2pi], both sin(theta) and cos(theta) can be defined (following Kruskal) in terms of a surreal loop bracket { | } containing upper and lower truncates of the usual Taylor series (using the ideas found on pp. 145-146 of Gonshor’s book on surreal numbers). Alternatively, one can skip the use of surreal loop brackets and proceed as follows: for all surreal x, write x = 2pi N+r+delta, where N is an omnific integer, r is a real and delta is an infinitesimal, and define sin(x)=sin(r)cos(delta)+cos(r)sin(delta) and cos(x)=cos(r)cos(delta)-sin(r)sin(delta), where sin(r) and cos(r) are the usual sin and cos, and sin(delta) and cos(delta) are defined in terms of Taylor series. - I'm afraid I still don't get it.
Philip writes: "since the properties of sin, cos are the same for real as for surreal numbers, one would simply write that (x+iy) with x, y in [-1,1] and x^2+y^2=1 is mapped to theta where there is a unique theta such that cos(theta)=x, sin(theta)=y". But why are the properties of sin and cos the same for real as for surreal numbers? If there is a transfer principle at work here, I'd like to see a clear statement of it. – James Propp Mar 19 2012 at 17:34
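The prescription discussed in this thread (write x = 2πN + r + δ, use the addition formula, and expand sin δ and cos δ as Taylor series in the infinitesimal part) can be imitated numerically by representing δ as a truncated formal power series in a symbol ε. The truncation order and all names below are my own choices; the "convergence" is exactly the trivial kind described in the Hahn-series answer, since each power of ε receives only finitely many contributions:

```python
import math

ORDER = 8  # truncation order for powers of the infinitesimal eps (my choice)

def pmul(a, b):
    # multiply two truncated polynomials in eps
    out = [0.0] * ORDER
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < ORDER:
                out[i + j] += ai * bj
    return out

def apply_series(coeff, delta):
    # sum_k coeff(k) * delta**k; since delta has no constant term, each
    # power of eps receives only finitely many contributions
    out, power = [0.0] * ORDER, [1.0] + [0.0] * (ORDER - 1)
    for k in range(ORDER):
        c = coeff(k)
        out = [o + c * p for o, p in zip(out, power)]
        power = pmul(power, delta)
    return out

def sin_series(d):
    return apply_series(lambda k: 0.0 if k % 2 == 0
                        else (-1.0) ** (k // 2) / math.factorial(k), d)

def cos_series(d):
    return apply_series(lambda k: 0.0 if k % 2 == 1
                        else (-1.0) ** (k // 2) / math.factorial(k), d)

def surreal_sin(r, delta):
    # sin(2*pi*N + r + delta) = sin(r) cos(delta) + cos(r) sin(delta)
    return [math.sin(r) * c + math.cos(r) * s
            for c, s in zip(cos_series(delta), sin_series(delta))]

def ev(poly, e):
    # evaluate the truncated expansion at a small numeric stand-in for eps
    return sum(c * e ** i for i, c in enumerate(poly))

delta = [0.0, 1.0] + [0.0] * (ORDER - 2)  # delta = eps itself
```

Evaluated at a small numeric ε, the truncated expansion of sin(r + δ) agrees with the classical sine to within the truncation error, which is the finite-surreal part of the construction; the 2πN part contributes nothing by the prescription above.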
http://crypto.stackexchange.com/questions/2036/does-a-public-key-absolutely-need-to-be-used-to-initiate-an-encrypted-session/2092
# Does a public key absolutely need to be used to initiate an encrypted session? I am a software developer working on an application with the .NET platform. This application needs to provide a secure connection and encrypt all data between the client and the server. It is a standalone app that does not use a web browser and will be using a symmetric key for the vast majority of the messages for best performance. Sessions can be expected to last from a few minutes to more than 24 hours. I am somewhat of a beginner when it comes to cryptography, but I have done some research. From the research I have done, it appears that the current thinking is to use a public/private pair of keys to initiate the session. The first message sent by the client is encrypted with the public key, which is then decrypted by the server with a private key. In the first message to the server, a random symmetric session key is also transmitted which is then used for the rest of the session and then discarded after use. All this seems to be quite reasonable, but I do have some questions and concerns: 1. If public/private keys are to be used for the initial message, then should the same key pair (client-side public key, server-side private key) be used for all clients? Or, should each user have a separate public / private key? The application client would not need to maintain a private key for decrypting public key messages from the server since all further messages would be using symmetric encryption. My concern is that if only one key pair is used and someone does somehow manage to figure out that private 2048-bit key, then all the users of the application would be vulnerable... 2. From research I have done, it looks like some are recommending that nothing less than 2048-bit RSA keys should be used. If this application takes off, it could easily have 500,000 users within 5 years or so.
Generating public/private 2048-bit key pairs takes some time (if you know how much time it takes - please let me know along with the hardware) and with 500,000 users, could cause some serious time delays in the future. Imagine a scenario where all of the 500,000 key pairs need to be regenerated. How long would that take? So, I'm wondering if there is another way of initiating the session that is just as good and circumvents the need to use public / private key pairs to initiate the session... 3. The application will have a website associated with it that will allow users to register with a user id and password. What I am thinking about doing is to use the password to generate the initial symmetric key that would be used to encrypt the first message to the server. The random session key would still be transmitted and used after the first message. To identify the user, a unique user token would be supplied to the client installation upon registration that would be appended to the first encrypted message and the user token would not be encrypted. Once the server receives the first message, it looks up the token and obtains the password. The symmetric key for that password is then generated on the server exactly the same way as was done on the client. The .NET platform provides a way to generate keys from a password with the PasswordDeriveBytes class. So, this approach seems like it will work, but I am not an expert on this topic and would appreciate hearing from others who are more experienced and are in a position to point out anything that I might be missing... 4. I have also researched symmetric ciphers. Since performance is an issue as well as security, I am thinking of going with CAST-128 instead of Rijndael which is now being used by just about everyone as the AES. The reason for this is because according to some benchmarks, CAST-128 is faster on Windows systems.
Also, from a security point of view, I would much prefer to use a much less popular cipher than one that is being used by the US, other governments, and most companies. There is much more interest in breaking Rijndael than CAST-128, which is only being used by the Canadian govt and PGP. - – Antony Vennard Mar 8 '12 at 11:01 On #4, the popularity and analysis of AES should be a positive, not a negative. The more people looking at AES, who still haven't broken it, the better. More eyes/brains unsuccessfully trying to break it should make you feel more secure and thus more likely to deploy it. On the performance, most CPUs these days have specific AES instructions. I'm guessing if your machine has the AES instructions, it will be much faster. – mikeazo♦ Mar 8 '12 at 12:16 ## 8 Answers We used it in PGP because that was one of the better choices in 1997. It is no longer 1997. There is nothing wrong with it, but you can do a lot better these days. CAST was even at the time a bit controversial, but we liked it. It was actually designed from a framework, and was one of the first attempts to develop a cipher framework. But at that time there were relatively few other choices. 3DES was (and is) slow, and there was lots of residual ire about DES. IDEA was patented and had inconsistent licensing. Blowfish was perceived to be slow, as well, but as it turns out, Blowfish is damned fast if you're using it on anything of size (i.e. when the slow key schedule is amortized). These days, you should use AES, unless there's a good reason to use something else. The reason you state (the US Government) is actually the wrong reason. Don't make crypto decisions based on politics. AES should be faster than CAST, on any decent implementation. But more importantly, AES has a 16-byte block. All of the 8-byte-block ciphers of that era should be avoided because they have birthday attacks that matter at data lengths of relatively few gigabytes.
(It's a ~50% chance at $2^{32}$ cipher blocks, and that's 32 GB.) I hear you say, "but I'm not going to encrypt anything that large." You will, trust me. And even if you won't, that's again the wrong reason not to use AES. You want the 128-bit block. If you really don't want to use AES, you should use Twofish or Serpent. They are IP-free finalists for AES and completely reasonable functions. If you still want to use something that's not AES, I recommend Threefish. It runs at twice the speed of software AES and has the advantage of a very large block size. You can find implementations of it via the Skein / Threefish web site. I'm using Werner Dittmann's implementation myself. - Thank you for explaining why Cast should not be used today. Because of this, I am choosing your answer as the best answer. – Bob Bryan Mar 15 '12 at 19:43 Considering the SHA-3 competition isn't finished yet, how likely are further tweaks to threefish/skein? Since we are already in the final phase, am I right to assume that there won't be any more changes? – CodesInChaos Mar 17 '12 at 17:33 @CodeInChaos well, it's up to NIST, but you can rest assured that if the authors make any changes now, they will be at the bottom of the list. NIST may make changes though, but I would be very very very surprised if they do. Time for tweaking is long past. – owlstead Mar 21 '12 at 0:31 First recommendation: Don't invent your own protocol, but use an existing one. Use SSL/TLS, in the newest version possible if you don't have to provide backward compatibility to existing clients. This will take care of most problems here: you simply put in a pair of plaintext data streams, and get a pair of encrypted streams. There are TLS implementations for most programming languages available, I'm quite sure that .NET has one, too. TLS is quite flexible about the key exchange, authentication and encryption algorithms (bundled together as "cipher suite").
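For illustration, here is what the "pair of encrypted streams" setup looks like with Python's ssl module (a stand-in for .NET's SslStream; the host name is a placeholder and the connection itself is commented out):

```python
import socket, ssl

# Client-side TLS: certificate verification and hostname checking are
# enabled by default with create_default_context()
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse old protocol versions

def connect(host: str, port: int = 443) -> None:
    # wrap_socket turns the plaintext stream into an encrypted one;
    # the library performs the whole handshake (key exchange, auth, cipher
    # negotiation) for you
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("negotiated:", tls.version(), tls.cipher())

# connect("example.com")  # requires network access, so not run here
```

The point is that none of the key-exchange questions from the original post need to be answered by hand; the cipher suite is negotiated inside the library.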
You'll have a public key for the server (usually with a certificate, whose signing key can already be embedded in the client application), and then use (for example) Diffie-Hellman key exchange with public-key authentication. There are also password-based key exchange algorithms, in which case you don't even need a public key (but client and server need some other way to first get the common password). 1. SSL/TLS works fine with a constant public key, not a different one per-client. Of course, if the private key gets leaked, all future connections are broken. If you did use the simple "encrypt session key by public key" model (RSA key exchange), all past connections can be decrypted as well. Diffie-Hellman provides forward-secrecy, as the key is generated from random input of both sides, and the public key is only used for authentication. If you create a new key pair per-client, you'll need some way for the client to tell its name (or key ID or similar) before the actual connection. 2. I have no data about how long key generation nowadays takes, sorry. You also have to take care of having enough entropy for your keys. But as said before, there is no need to generate new keys regularly. 3. Instead of doing this yourself, have a look at the Secure Remote Password protocol for generating a shared session key authenticated by a password. This is also available (standardized) as a key exchange method for TLS, though I'm not sure if it is supported by .NET's implementation. 4. You can use whatever cipher you want, but as said by others, AES is actually more likely to be secure than other ciphers. Also, modern processors actually have AES instructions built in, which could make the performance difference quite small or even negative (i.e. AES-128 could be faster than CAST-128), if you use a library which actually can make use of these. - I actually typed most of this already yesterday, but somehow forgot to actually post it as an answer.
– Paŭlo Ebermann♦ Mar 9 '12 at 12:16 Also, from a security point of view, I would much prefer to use a much less popular cipher than one that is being used by the US, other governments, and most companies. There is much more interest in breaking Rijndael than CAST-128, which is only being used by the Canadian govt and PGP. Since there is much more interest in breaking Rijndael, any flaws in it are much more likely to have already been discovered. In addition, any new breakthroughs are likely to be publicly disclosed with their relevance to Rijndael mentioned first. Thus you can have more confidence that Rijndael doesn't have hidden flaws than CAST-128 right now. And you can have more confidence that if cryptographic breakthroughs compromise the algorithm, you're more likely to find out the good way (by reading academic papers) than the bad way (by your application being compromised). Also, say Rijndael is compromised. You followed the industry recommendation, and everyone's stuff will break. Customers will understand it's not your issue alone. If you pick CAST-128, the opposite situation is likely. Also, if some bad person does compromise Rijndael, you're very unlikely to be the first thing they go after. With CAST-128, your odds of being the first target are much higher. That said, CAST-128 is not a bad choice. There's nothing wrong with it, and it's sufficiently popular that any significant cryptographic breakthroughs would be published. Just understand that if you choose it, you will have to monitor the literature. It won't be front page news the way an AES issue would be. - As mentioned, the block size would be one thing that is wrong with it.... – owlstead Mar 21 '12 at 0:34 Regarding your sub-question 4: Your argument is unfortunately flawed. A broken cipher algorithm becomes a security issue if both of the following conditions are true: 1. Your adversary knows how to break the algorithm, 2. You don't know that your adversary knows how to break the algorithm.
Suppose you are choosing between the two algorithms A and B. Both algorithms have the same alleged security strength, but not necessarily the same actual security strength (i.e. security strength after all known and unknown attacks have been accounted for). You don't know the actual security strength of either algorithm. However, algorithm A is more popular than algorithm B and receives more attention from researchers. Does the relative popularity of algorithm A compared to algorithm B mean that the probability of 1 & 2 is higher for algorithm A? The common sense opinion is that it is actually the other way around. The probability that both 1 and 2 hold is lower for a popular algorithm. - 1) If your private key is compromised you are screwed, but this is pretty much always the case. I'd suggest one long-term public key and securing that box to the highest degree. However, if you're worried, the key distribution to clients will be interesting. You could do TLS, and create a certificate authority that's stored on a USB key (I have a similar setup: it's a live encrypted OS on the key; boot it and I can be fairly sure it's clean). Each time you need a new key, boot it up. If you believe that you've been compromised, set a revocation for the current certificate, and then sign a new certificate. This is arduous, but it means that someone needs to compromise the USB key in order to get the certificate. Otherwise, I suggest doing short-term keys with the aforementioned solution if you don't want full TLS. I'm going to give the standard "please don't roll your own cryptographic protocol", but it appears to be too late. Have some signing authority sign a key and have them only be valid for a month or two at a time; each month boot up your USB key and then get a new signed certificate. This minimizes compromise damage, but does not come even close to negating it. 1A) On individual keys: This is dependent on your security model.
Do you want mutual authentication, maybe a PKI system? If so, you probably want each client to have their own certificate you give them during registration. If you just want to make sure the server is the server, then you only need the server to have a public key. 2) 2048-bit keys are most definitely necessary. However, RSA is not, but some form of key agreement is. Based on my completely unscientific tests of running rsagen on my computer, it seems to take around a second or so, and we'll use this number for our calculations. You most likely do not need to calculate 500k keys, just one and have them connect to you. If you want to have each key be calculated, what you'd do is have the client calculate the key, and submit it to the server for signing. Secondly, if you use DH this number becomes a lot smaller, since generating a DH key is just choosing a random number in a range, unlike RSA where the numbers must be prime, so only some numbers work and you must run primality tests each time. Hence, Diffie-Hellman key generation is faster. All you probably really need is key exchange. 3) This is a horrible idea. Please abandon it, it's going to go wrong in ways you haven't even thought about yet, and they are going to be too subtle for us to notice. If your server is ever compromised, the person can now imitate clients forever, since they can derive the first request, have the authentication for it, and then pretty much are able to log in forever. You're essentially storing your encryption keys as passwords in plaintext. Just have the server use a private key, hardcode a CA inside your application, or some public key and use that to set up an authenticated chat. Most likely there's also some way to manipulate it into a MITM attack because you're doing something you didn't realize. Or maybe, just flipping a few bits and causing the encrypted message to be a new one that I already have the key for, or maybe replay between client and server or ...
4) Use AES. David Schwartz sums it up really well, and I'd say the same thing. 5) Though you didn't ask anything about this, I'm going to say it anyway. MAC everything. Have the key you set up create two subkeys, one to encrypt and one to MAC, since most likely a bit flip to you is a really bad thing. Don't repeat IV's. Use CBC or CTR mode. I suggest the encrypt-then-MAC model. If you're set on doing this, please at least read this. - Does a public key absolutely need to be used to initiate an encrypted session? No. It is sufficient that the client and the server have "a shared secret". The server can encrypt a random (rotating session) key with the shared secret, and send it to the client. The client can then decrypt the key and send encrypted data to the server. You used a shared secret as an alternative to a public key. - 1. This of course depends on how many logins you're getting. If logins are rare, it might even be no slower to generate a new public key for each session. (It would certainly be better. Also see *) 2. http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange, although here too I would recommend using at least a 2048-bit prime. (also see *) 3. No, no, no. That idea is only for when the server cannot send and cannot have sent anything to the client. Since the server will have previously been able to make an authenticated communication to the client (preferably when providing the application, otherwise right before it receives the password) and will be able to interact with the client, the server signs the transcript of the initiation phase once that phase is otherwise complete, which the client verifies before proceeding. (In particular, the client's verification occurs before anything that depends on the password.) 4. (I have no clue here.) * The server needs to sign as described in my answer to 3. Your number 3 also leaves me worried that, if nothing else, you are planning on using PasswordDeriveBytes.
The server should store an scrypt value of the password, and not the password itself, a PasswordDeriveBytes value, or any other weak derivation of the password. - Thanks to everyone for their suggestions. I am convinced that going with CAST is not a good idea. I will use AES-128 for time-limited transactions (typically < 1 day, where the info has no value after disconnection) where performance is more important, and AES-256 for financial transactions where longer-term security is more important. In order to see how fast a public/private key pair can be created (twice - one for the server and one for the client), the public key exported and then imported into the client, the data encrypted with the public key and then decrypted with the private key, I have written a little benchmark test program in C# to see how long this will take for both 1024 bit and 2048 bit keys:

````
static void Main()
{
    // This is a console project.
    try
    {
        // Create a UnicodeEncoder to convert between byte array and string.
        UnicodeEncoding ByteConverter = new UnicodeEncoding();
        DateTime StartTime, EndTime;
        TimeSpan DeltaTime;

        StartTime = DateTime.Now;
        RSASpeedTest(100, 1024);
        EndTime = DateTime.Now;
        DeltaTime = EndTime - StartTime;
        // Display the time it took to run the test.
        Console.WriteLine(" Time for 1024 bit key = " + DeltaTime.ToString());
        Console.ReadKey();

        StartTime = DateTime.Now;
        RSASpeedTest(100, 2048);
        EndTime = DateTime.Now;
        DeltaTime = EndTime - StartTime;
        // Display the time it took to run the test.
        Console.WriteLine(" Time for 2048 bit key = " + DeltaTime.ToString());
        Console.ReadKey();
    }
    catch (Exception E)
    {
        // If an exception occurred, then display it to the user.
        Console.WriteLine("Main - exception thrown. Msg = " + E.Message);
        Console.ReadKey();
    }
}

static public void RSASpeedTest(int TestCount, int KeySize)
{
    // This method tests how quickly the following can be done:
    // 1. Create a public / private key pair for the client and server.
    // 2. Export and import a public key.
    // 3. Encrypt the data using the public key.
    // 4. Decrypt the data using the private key.
    // The above steps are tested TestCount number of times.
    int Loop;
    try
    {
        UnicodeEncoding ByteConverter = new UnicodeEncoding();
        byte[] dataToEncrypt = ByteConverter.GetBytes("Data to Encrypt");
        byte[] encryptedData;
        byte[] decryptedData;
        for (Loop = 0; Loop < TestCount; Loop++)
        {
            RSACryptoServiceProvider RSA = new RSACryptoServiceProvider(KeySize);
            RSACryptoServiceProvider RSA2 = new RSACryptoServiceProvider(KeySize);
            // The next line of code shows how to export a public key from the public/private key pair.
            byte[] CspBlob = RSA.ExportCspBlob(false);
            // The next line of code shows how to import the public key into the object.
            RSA2.ImportCspBlob(CspBlob);
            // The next line of code shows how to move the public and private keys into a string.
            // String S = RSA.ToXmlString(true);
            // The next line of code shows how to move just the public key into a string.
            // String S1 = RSA2.ToXmlString(false);
            // The next line of code shows a different way of getting the public key data.
            // RSAParameters RSAParams = RSA.ExportParameters(false);
            // The next line of code encrypts the data with the public key.
            encryptedData = RSA2.Encrypt(dataToEncrypt, false);
            // The next line of code decrypts the data with the private key.
            decryptedData = RSA.Decrypt(encryptedData, false);
        }
    }
    catch (Exception E)
    {
        // If an exception occurred, then display it to the user.
        Console.WriteLine("RSASpeedTest - exception thrown. Msg = " + E.Message);
        Console.ReadKey();
    }
}
````

On my computer, an Intel i7 950 running at around 3.5 GHz with Win 7 and 2000 MHz memory, the 1024 bit key test took just over 10 seconds. So, since the test is run 100 times, it takes roughly 1/10 of a second on average to complete just once. The 2048 bit test is just over 2.6 times slower. The test makes use of the .NET Framework 3.5 and is single threaded.
So, what I am thinking of doing is not storing any keys at all. Instead, all keys will be generated on the fly since the tests show that creating the keys can be done fairly quickly. For 99%+ of most transactions for my application, a 1024 bit key can be used since there is no value to the data after the user terminates the connection. The connection will not usually last more than 24 hours and certainly not more than 72 hours. For financial transactions, then a 2048 bit key will be used to deliver the 256 bit AES symmetric key. - Note that at least your client needs some way of knowing that they speak actually to the server, and not to some man-in-the-middle. Often a good way to do this is that the server has a private key, and the client knows the corresponding public key (or some other public key which signed a certificate for the server's key). If you generate everything fresh, you are open to an interception attack. – Paŭlo Ebermann♦ May 12 '12 at 22:40
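On the "MAC everything, derive two subkeys" advice earlier in the thread, here is a minimal Python sketch (my own, standard library only, and only a sketch: the subkey derivation via labeled HMAC is a simplified stand-in for a proper KDF such as HKDF, and the AES layer is omitted, so the ciphertext below is just an opaque placeholder):

```python
import hmac
import hashlib

def derive_subkeys(master_key: bytes):
    # Separate subkeys for encryption and authentication, derived from
    # one master key with distinct labels (HKDF would be the real choice).
    k_enc = hmac.new(master_key, b"enc", hashlib.sha256).digest()
    k_mac = hmac.new(master_key, b"mac", hashlib.sha256).digest()
    return k_enc, k_mac

def seal(k_mac: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: the tag covers the IV and the ciphertext.
    return hmac.new(k_mac, iv + ciphertext, hashlib.sha256).digest()

def verify(k_mac: bytes, iv: bytes, ciphertext: bytes, tag: bytes) -> bool:
    expected = hmac.new(k_mac, iv + ciphertext, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

k_enc, k_mac = derive_subkeys(b"negotiated master secret")
iv, ct = b"\x00" * 16, b"<opaque AES ciphertext>"
tag = seal(k_mac, iv, ct)
assert verify(k_mac, iv, ct, tag)
assert not verify(k_mac, iv, ct + b"!", tag)  # a flipped bit is rejected
```

The point of the last assertion is exactly the "a bit flip to you is a really bad thing" warning: the receiver rejects tampered ciphertext before ever decrypting it.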
http://mathhelpforum.com/calculus/105772-help-basic-integral-problem-print.html
# help with basic integral problem? • October 2nd 2009, 06:09 PM yoman360 help with basic integral problem? $\int_{-5}^5 \frac{2}{x^3} dx$ the answer i got is 0. the answer in the back of the book says: Does not exist what did i do incorrectly? This is what I did: $\int_{-5}^5 \frac{2}{x^3} dx$ = $\frac{-1}{x^2}|_{-5}^5$ = $\frac{-1}{(5)^2}-(\frac{-1}{(-5)^2})$ = $\frac{-1}{25}+\frac{1}{25}$ =0 • October 2nd 2009, 06:14 PM mr fantastic Quote: Originally Posted by yoman360 $\int_{-5}^5 \frac{2}{x^3} dx$ the answer i got is 0. the answer in the back of the book says: Does not exist what did i do incorrectly? Note* this post will be updated soon so i can show you my work The update will be unnecessary since you probably neglected to recognise it as an improper integral (the integrand is undefined at x = 0 which lies in the interval of integration). The first step is therefore to write: $\lim_{\alpha \rightarrow 0} \int_{-5}^{\alpha}\frac{2}{x^3} \, dx + \lim_{\beta \rightarrow 0} \int_{\beta}^{5}\frac{2}{x^3} \, dx$. (A pre-emptive) by the way, statements like $\infty - \infty$ make no sense and certainly do not equal zero .... • October 2nd 2009, 06:20 PM yoman360 Quote: Originally Posted by mr fantastic The update will be unnecessary since you probably neglected to recognise it as an improper integral (the integrand is undefined at x = 0 which lies in the interval of integration). The first step is therefore to write: $\lim_{\alpha \rightarrow 0} \int_{-5}^{\alpha}\frac{2}{x^3} \, dx + \lim_{\beta \rightarrow 0} \int_{\beta}^{5}\frac{2}{x^3} \, dx$. (A pre-emptive) by the way, statements like $\infty - \infty$ make no sense and certainly do not equal zero ....
ah i see since f is not continuous on [a,b] I can't use the fundamental theorem of calculus: $\int_a^b f(x)dx=F(b)-F(a)$ where F is any antiderivative of f, that is, a function such that F'=f Thank you so much for pointing that out I just forgot to check if its continuous on [-5,5] since its discontinuous at x=0 the integral does not exist
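To spell out mr fantastic's point (my own addition to the thread), each one-sided piece already diverges:

```latex
\lim_{\beta \to 0^{+}} \int_{\beta}^{5} \frac{2}{x^{3}}\,dx
  = \lim_{\beta \to 0^{+}} \left[-\frac{1}{x^{2}}\right]_{\beta}^{5}
  = \lim_{\beta \to 0^{+}} \left(-\frac{1}{25} + \frac{1}{\beta^{2}}\right)
  = +\infty
```

and the piece over $[-5,\alpha]$ similarly tends to $-\infty$ as $\alpha \rightarrow 0^-$. Since at least one piece diverges, the improper integral does not exist; in particular one may not cancel the two pieces as $\infty - \infty = 0$.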
http://mathoverflow.net/revisions/110679/list
## Return to Question 2 added 243 characters in body; added 19 characters in body Is there any exact formula or at least exact inequalities for the following integral $$\int_2^x\frac{dt}{[\frac{\log x}{\log t}]\log t}$$ where [x] is the greatest integer less than or equal to x. added: When I use $$x-1<[x]\le x$$ I get $$\frac{x-2}{\log x}=\int_2^x\frac{dt}{\log x}\leq \int_2^x\frac{dt}{[\frac{\log x}{\log t}]\log t}\le \int_2^x\frac{dt}{\log x-\log t}$$ but they are not exact enough. I need closer bounds. 1 # integrate of functions involving floor Is there any exact formula or at least exact inequalities for the following integral $$\int_2^x\frac{dt}{[\frac{\log x}{\log t}]\log t}$$ where [x] is the greatest integer less than or equal to x
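As a quick numeric sanity check of the lower bound (my own sketch, not from the question): since $[y]\le y$, the integrand is at least $1/\log x$ pointwise, so a midpoint-rule approximation can never fall below $(x-2)/\log x$.

```python
import math

def integrand(t: float, x: float) -> float:
    # 1 / ([log x / log t] * log t), with [.] the floor function
    return 1.0 / (math.floor(math.log(x) / math.log(t)) * math.log(t))

def midpoint_integral(x: float, n: int = 100_000) -> float:
    # midpoint rule on [2, x]; midpoints avoid the endpoints, where
    # the ratio log x / log t hits the integer boundary values
    h = (x - 2.0) / n
    return h * sum(integrand(2.0 + (i + 0.5) * h, x) for i in range(n))

x = 100.0
approx = midpoint_integral(x)
lower = (x - 2.0) / math.log(x)
assert lower <= approx  # the crude bound [y] <= y, applied pointwise
print(lower, approx)
```

This only confirms the easy direction; the interesting question above is how much the gap between the two bounds can be tightened.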
http://unapologetic.wordpress.com/2012/08/22/engels-theorem/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician ## Engel’s Theorem When we say that a Lie algebra $L$ is nilpotent, another way of putting it is that for any sufficiently long sequence $\{x_i\}$ of elements of $L$ the nested adjoint $\mathrm{ad}(x_n)\left[\dots\mathrm{ad}(x_2)\left[\mathrm{ad}(x_1)[y]\right]\right]$ is zero for all $y\in L$. In particular, applying $\mathrm{ad}(x)$ enough times will eventually kill any element of $L$. That is, each $x\in L$ is ad-nilpotent. It turns out that the converse is also true, which is the content of Engel’s theorem. But first we prove this lemma: if $L\subseteq\mathfrak{gl}(V)$ is a linear Lie algebra on a finite-dimensional, nonzero vector space $V$ that consists of nilpotent endomorphisms, then there is some nonzero $v\in V$ for which $l(v)=0$ for all $l\in L$. If $\dim(L)=1$ then $L$ is spanned by a single nilpotent endomorphism, which has only the eigenvalue zero, and must have an eigenvector $v$, proving the lemma in this case. If $K$ is any nontrivial subalgebra of $L$ then $\mathrm{ad}(k)\in\mathrm{End}(L)$ is nilpotent for all $k\in K$. We also get an everywhere-nilpotent action on the quotient vector space $L/K$. But since $\dim(K)<\dim(L)$, the induction hypothesis gives us a nonzero vector $x+K\in L/K$ that gets killed by every $k\in K$. But this means that $[k,x]\in K$ for all $k\in K$, while $x\notin K$. That is, $K$ is strictly contained in the normalizer $N_L(K)$. Now instead of just taking any subalgebra, let $K$ be a maximal proper subalgebra in $L$. Since $K$ is properly contained in $N_L(K)$, we must have $N_L(K)=L$, and thus $K$ is actually an ideal of $L$. If $\dim(L/K)>1$ then we could find an even larger subalgebra of $L$ containing $K$, in contradiction to our assumption, so as vector spaces we can write $L\cong K+\mathbb{F}z$ for any $z\in L\setminus K$. Finally, let $W\subseteq V$ consist of those vectors killed by all $k\in K$, which the inductive hypothesis tells us is a nonempty collection.
Since $K$ is an ideal, $L$ sends $W$ back into itself: $k(l(w))=l(k(w))-[l,k](w)=0$. Picking a $z\in L\setminus K$ as above, its action on $W$ is nilpotent, so it must have an eigenvector $w$ with $z(w)=0$. Thus $l(w)=0$ for all $l\in L=K+\mathbb{F}z$. So, now, to Engel’s theorem. We take a Lie algebra $L$ consisting of ad-nilpotent elements. Thus the algebra $\mathrm{ad}(L)\subseteq\mathfrak{gl}(L)$ consists of nilpotent endomorphisms on the vector space $L$, and there is thus some nonzero $z\in L$ for which $[L,z]=0$. That is, $L$ has a nontrivial center — $z\in Z(L)$. The quotient $L/Z(L)$ thus has a lower dimension than $L$, and it also consists of ad-nilpotent elements. By induction on the dimension of $L$ we assume that $L/Z(L)$ is actually nilpotent, which proves that $L$ itself is nilpotent. Posted by John Armstrong | Algebra, Lie Algebras ## 3 Comments » 1. [...] lemma leading to Engel’s theorem boils down to the assertion that there is some common eigenvector for all the endomorphisms in a [...] Pingback by | August 25, 2012 | Reply 2. [...] like to have matrix-oriented versions of Engel’s theorem and Lie’s theorem, and to do that we’ll need flags. I’ve actually referred to [...] Pingback by | August 25, 2012 | Reply 3. [...] obvious that if is nilpotent then will be solvable. And Engel’s theorem tells us that if each is ad-nilpotent, then is itself nilpotent. We can now combine this with our [...] Pingback by | September 1, 2012 | Reply
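As a concrete illustration of the lemma (my own toy example, not part of the post): strictly upper-triangular matrices give nilpotent endomorphisms of $\mathbb{F}^3$, their brackets stay strictly upper-triangular, and all of them kill the first basis vector, which plays the role of the common vector $v$.

```python
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    n = len(a)
    return [[ab[i][j] - ba[i][j] for j in range(n)] for i in range(n)]

# two strictly upper-triangular (hence nilpotent) endomorphisms of F^3
n1 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
n2 = [[0, 0, 1], [0, 0, 1], [0, 0, 0]]

zero3 = [[0, 0, 0]] * 3
assert matmul(n1, matmul(n1, n1)) == zero3  # nilpotent: cube is zero
assert matmul(n2, matmul(n2, n2)) == zero3

# closed under the bracket, and the common kernel vector is e1
e1 = [1, 0, 0]
for m in (n1, n2, bracket(n1, n2)):
    assert matvec(m, e1) == [0, 0, 0]
```

Of course the content of the lemma is that such a common kernel vector exists for any linear Lie algebra of nilpotent endomorphisms, not just an upper-triangular one; by the flag version of the theorem, every such algebra looks like this in a suitable basis.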
http://mathhelpforum.com/pre-calculus/111299-how-find-inverse-f-x-x-sqrt-x.html
# Thread: 1. ## How to find the inverse of f(x)=x+sqrt(x)? $f(x)=x+\sqrt{x}$ Is it even possible to find the inverse of this? *edit* the question is find $f^{-1} (6)$ but to do that I must 1st find $f^{-1} (x)$ 2. it is possible, but rather pointless for this exercise, you just say $x=y+\sqrt{y}$ 3. Originally Posted by artvandalay11 when you say is it possible i believe you're talking about elementary operations and as far as i know, you can't solve for that inverse algebraically in terms of y, you just say $x=y+\sqrt{y}$ *edit* the question is find $f^{-1} (6)$ but to do that I must 1st find $f^{-1} (x)$ 4. no, you just need to find what value of x gives 6 as an answer, when they say find $f^{-1}(6)$ they are asking what value of x gives 6 as an output for f(x), and you do not need a formula for $f^{-1}$ Just by looking I can see the answer is 4 $6=x+\sqrt{x}$ 5. Yes, x= 4 works. And that is as good a method as any!
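To back up the "no formula needed" point numerically (my own sketch, not from the thread): $f(x)=x+\sqrt{x}$ is strictly increasing on $[0,\infty)$, so $f^{-1}(6)$ can be recovered by bisection even without a closed form.

```python
def f(x: float) -> float:
    return x + x ** 0.5

def f_inverse(y: float, lo: float = 0.0, hi: float = 1e6,
              tol: float = 1e-9) -> float:
    # f is strictly increasing on [0, inf), so bisection converges
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f_inverse(6.0))  # ~= 4, and indeed f(4) = 4 + 2 = 6
```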
http://math.stackexchange.com/questions/21274/lukasiewicz-logic-tautology/21423
# Lukasiewicz Logic Tautology Suppose a statement form $\varphi$ always has value T or U. Show $\varphi$ is a classical tautology. - This question is quite badly formulated. What is T? What is U? Do you mean first-order or just propositional? – boumol Feb 10 '11 at 0:34 – JDH Feb 10 '11 at 0:38 ## 1 Answer Suppose that a statement $\varphi$ of propositional logic (using only $\wedge$, $\vee$ and $\to$) is not a propositional tautology in classical logic. It follows that some row of the truth table for $\varphi$ has value $F$. Now, consider the truth table of $\varphi$ in Łukasiewicz three-valued logic, which is called Kleene logic on the Wikipedia page. The key observation is that Łukasiewicz three-valued logic agrees with classical logic on classical logic input. In other words, if all the propositional variables of $\varphi$ are given classical T/F values, then the truth value of $\varphi$ will agree with the classical logic value. Thus, since $\varphi$ had a classical row with value $F$, the very same row will have value F in the Łukasiewicz truth table of $\varphi$, contrary to the assumption that $\varphi$ had only values T or U. -
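The key observation can be checked mechanically. Below is a toy Python encoding (my own, with F, U, T as 0, 1/2, 1, min/max for the lattice connectives, and the Łukasiewicz implication): restricted to the classical inputs {F, T}, every connective agrees with its classical counterpart, which is exactly why a classical F row survives into the three-valued table.

```python
F, U, T = 0.0, 0.5, 1.0

def neg(x):     return 1 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)
def impl(x, y): return min(1, 1 - x + y)  # Lukasiewicz implication

to_bool = {F: False, T: True}

# the three-valued connectives agree with classical logic on {F, T}
for x in (F, T):
    assert to_bool[neg(x)] == (not to_bool[x])
    for y in (F, T):
        assert to_bool[conj(x, y)] == (to_bool[x] and to_bool[y])
        assert to_bool[disj(x, y)] == (to_bool[x] or to_bool[y])
        assert to_bool[impl(x, y)] == ((not to_bool[x]) or to_bool[y])
```

(Kleene's strong implication, max(1-x, y), also passes the same check; the two logics differ only on inputs involving U.)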
http://mathoverflow.net/questions/76188/is-there-any-fibration-mathbbrn-to-mathbbsn/76251
## is there any fibration $\mathbb{R}^n\to \mathbb{S}^n$? It is probably a trivial question. But I don't see the answer. Is there any Hurewicz fibration $\mathbb{R}^n\to \mathbb{S}^n$ ? Is there any fibration $X\to \mathbb{S}^n$, when $X\subset \mathbb{R}^n$? I appreciate any help. Thank you very much! - 2 Well, certainly there is when $n=1$... – Daniel Litt Sep 23 2011 at 7:38 10 My approach would be similar to Dylan's: If such a fibration existed for $n>1$, the fibre would be a closed subset $F\subset \mathbb{R}^n$ with the weak homotopy type of $\Omega S^n$, and would therefore have nonzero (co)homology in arbitrarily high degrees. In some sense you have an infinite dimensional space embedded in a finite dimensional Euclidean space, which feels wrong. I can't right now see how to rule it out though. – Mark Grant Sep 23 2011 at 9:18 6 Along this line, it's worth mentioning that Barratt-Milnor constructed examples (higher analogues of the Hawaiian earrings) that embed in $\mathbb{R}^n$ but have cohomology in arbitrarily high degrees. User BS discussed them here: mathoverflow.net/questions/4478/… – Tyler Lawson Sep 23 2011 at 12:39 1 However, $\Omega S^n$ has the homotopy type of a CW complex, and therefore (iirc?) its singular homology is the same as its \check{C}ech homology! – some guy on the street Sep 23 2011 at 14:36 1 A small note, the answer to the first question is clearly no for $n=2$ by the classification of non-compact $2$-manifolds + Mark's comment. I imagine there's a reasonable proof the answer is always no for $n>1$ but off the top of my head I'm not seeing it. – Ryan Budney Sep 23 2011 at 21:51 ## 2 Answers Edit: The following simplifies the original answer (which unnecessarily used singular cohomology).
If $f:\Bbb R^n\to S^n$ is a fibration, then as Mark noted, a fiber $F$ of $f$ is weak homotopy equivalent to $\Omega S^n$ (using the 5-lemma, see Prop. 4.66 in Hatcher). I claim that $F$ is in fact homotopy equivalent to $\Omega S^n$. Indeed, $F$ is homotopy equivalent to the corresponding homotopy fiber of $f$ (Prop. 4.65 in Hatcher). The homotopy fiber consists of pairs $(x,p)$ where $x\in\Bbb R^n$ and $p$ is a path in $S^n$ connecting $f(x)$ and the basepoint. It is homotopy equivalent to the space $X$ of maps $[0,1]\to MC(f)$ (the mapping cylinder) sending $0$ into $\Bbb R^n$ and $1$ into the basepoint of $S^n$. Milnor showed that $X$ and $\Omega S^n$ are homotopy equivalent to CW-complexes. Hence, being weak homotopy equivalent to each other, by Whitehead's theorem they are homotopy equivalent to each other. Now $F$ is finite-dimensional, so its Cech cohomology is eventually zero. Cech cohomology is a homotopy invariant, so we get that the cohomology of $\Omega S^n$ is eventually zero, contradicting Mark's comment. (It does not matter which cohomology of $\Omega S^n$, they are all isomorphic since $\Omega S^n$ is homotopically a CW-complex.) - Hi Sergey, when you say that $F$ is finite-dimensional, are you talking about its covering dimension (I see no reason why $F$ should be something nice like a manifold or complex)? Do closed subsets of $\mathbb{R}^n$ necessarily have covering dimension less than or equal to $n$? – Mark Grant Sep 24 2011 at 7:55 3 Mark, yes, $F$ is of covering dimension $\le n$, that is, every open cover of $F$ can be refined by a cover whose nerve is of dimension $\le n$. Indeed, by the definition of subspace topology, every open cover of $F$ is the intersection with $F$ of an open cover of an open neighborhood of $F$; hence also of an open cover $C$ of $\Bbb R^n$. 
This $C$ has a refinement $D$ with a nerve $N$ of dimension $\le n$ since $\Bbb R^n$ is $n$-dimensional (see nice pictures at en.wikipedia.org/wiki/Lebesgue_covering_dimension). Finally, the nerve of $D$ intersected with $F$ is a subcomplex of $N$. – Sergey Melikhov Sep 24 2011 at 16:14 2 I should add that perhaps some confusion arises from Tyler Lawson's remark that the Barratt-Milnor examples of a subset of $\Bbb R^3$ "have cohomology in arbitrarily high degrees". This is true only of singular cohomology, which is not a geometrically interesting cohomology theory, in particular because it is not Brown representable (see mathoverflow.net/questions/47544). – Sergey Melikhov Sep 24 2011 at 16:49 1 Assume that $p: E\to B$ is a fibration and $E$ is contractible. We have that the homotopy fiber is homotopy equivalent to the fiber. But the homotopy fiber is the pullback of $p: E\to B$ along $v_0: PB\to B$. Since $v_0$ is a fibration, the homotopy fiber is, indeed, the homotopy pullback. Hence, the pullback of $\ast\to B$ along $v_0$ is homotopy equivalent to the homotopy fiber (since $E$ is contractible). This new pullback is $\Omega B$. So, in general, if $p: E\to B$ is a fibration with contractible total space, the fiber is homotopy equivalent to $\Omega B$. – Fernando Lucatelli Feb 25 at 17:19 This makes it much simpler, thanks! – Sergey Melikhov Mar 1 at 0:40 I think there's a pretty simple answer, patching together everything that's been said in the comments. By Mark's answer, the fibers have $H_{n-1}$ isomorphic to $\mathbb Z$. And if $n>1$ these fibers are connected. Alexander duality tells you the Cech cohomology of the fiber in dimension $n-1$ is isomorphic to the reduced $0$-dimensional homology of the complement of the fiber in $\mathbb R^n$.
And some guy on the street's comment tells you that Cech cohomology is regular cohomology. So this is saying that the fibers separate $\mathbb R^n$, but since the base is $S^n$ with $n \geq 2$, that's impossible. - 1 Ryan, I'm not getting this. Mark has a weak homotopy equivalence. Cech cohomology is not an invariant of weak equivalence. – Sergey Melikhov Sep 24 2011 at 0:43 Since $\Omega S^n$ is a CW-space and so is the fiber, then the weak equivalence is an actual equivalence. – Dylan Wilson Sep 24 2011 at 0:50 1 Dylan, the fiber is not a CW complex: the original fibration is not assumed PL or smooth. – Sergey Melikhov Sep 24 2011 at 1:05 2 @Sergey: as you've pointed out in your answer, the weak equivalence can be made into a homotopy equivalence. Sorry I abandoned the question -- my calculus class had an exam and I had to run out of the room. Your answer is the more complete so I'd like to suggest Fernando accept yours. – Ryan Budney Sep 24 2011 at 3:48 1 Ryan, thanks. I'm still a bit puzzled by your comment on the $n=2$ case, can you do this case without using Milnor's theorem on functional spaces? In this connection the following example is worth mentioning. The pseudo-arc is a compact subset of $\Bbb R^2$ which does not have any nontrivial paths in it. So it is weakly equivalent to a discrete set of cardinality continuum (which I guess can be seen as CW-complex), but is not homotopy equivalent to any CW-complex. – Sergey Melikhov Sep 24 2011 at 17:32
http://mathoverflow.net/questions/86909?sort=votes
Penner’s formula for volume of the Moduli Space In his paper "Weil-Petersson Volumes" Penner gives the following formula for the integral of a top-dimensional cohomology class $\omega$ on the moduli space $\mathcal M_g^s$ of $s$-punctured Riemann surfaces of genus $g$: $$\int_{\mathcal M_g^s}\omega=\sum_G\frac{1}{|Aut(G)|}\int_{D(G)}\phi^*\omega$$ where the sum is taken over all isomorphism classes of ribbon graphs $G$, $\phi:\widetilde{\mathcal T_g^s}\to\mathcal M_g^s$ is the natural map from the decorated Teichmüller space to the moduli space, and $D(G)$ is more or less the pre-image under $\phi$ of the orbicell in $\mathcal M_g^s$ corresponding to $G$. My question is to what extent this formula can be generalized. Specifically, 1) Is there a similar formula for evaluating forms of arbitrary degree? 2) Is this formula the "shadow" of a general formula for evaluating cohomology classes on (smooth) orbifolds? I myself cannot understand Penner's paper well enough to see where the top-dimensional assumption on $\omega$ enters the picture. References to the literature would be much appreciated as well. - 1 (of course, neither side makes sense if $\omega$ is not top dimensional) – unknown (google) Jan 28 2012 at 18:56 Right. By "similar formula of arbitrary degree" I'm asking for the integral over a cycle of the appropriate dimension. – Steve Jan 29 2012 at 2:27 2 Isn't this formula essentially tautological, once you believe that this triangulation exists? i.e. a similar formula exists for any triangulated manifold (with no need for the 1/|Aut| factor for an honest triangulation, of course).
– Tom Church Jan 29 2012 at 6:06 1 Answer The major breakthrough in this was made by Maryam Mirzakhani in her thesis (see this preprint). Her work is relevant to general intersection theory on moduli space (see references to Witten's and Weitsman's papers in her bibliography), which is one way to interpret your question about "not top-dimensional classes". Her work has received a lot of attention (Google Scholar shows 83 citations) -- I am sure those 83 papers have a lot more information. - The link appears to be broken. Do you know the title of the work it is supposed to link to? – Steve Jan 28 2012 at 21:51 Presumably this math.sunysb.edu/~mlyubich/Archive/Geometry/… – Aaron Bergman Jan 28 2012 at 22:11 Yes, sorry. Damn PC. – Igor Rivin Jan 28 2012 at 22:27 Should be fixed now. – Igor Rivin Jan 28 2012 at 22:30
http://mathhelpforum.com/calculus/153822-triangle-under-1-z-complex-field.html
# Thread: 1. ## Triangle under 1/z in the complex field... Hey guys. I'm trying to find the "picture" (sorry if it's not the word) of the triangle under 1/z in the complex field. I think that [0,2i] -> [-i/2, inf], [0,2] -> [-1/2, inf] but I'm not sure about the 3rd side of the triangle. I tried putting in some number, but all it did is to confuse me even more. Can I please have some help? Thanks a lot. 2. Originally Posted by asi123 Hey guys. I'm trying to find the "picture" (sorry if it's not the word) of the triangle under 1/z in the complex field. I think that [0,2i] -> [-i/2, inf], [0,2] -> [-1/2, inf] but I'm not sure about the 3rd side of the triangle. If I understand the question correctly, you have a triangle in the complex plane, with vertices at 0, 2 and 2i, and you want to find its image under the map $z\mapsto 1/z$. The image of the side [0,2] is the segment $[\frac12,\infty)$ on the real axis. The image of the side [0,2i] is the segment $[-\frac12i,-\infty i)$ on the imaginary axis. The remaining side is a segment of the line $x+y=2$, where $z = x+iy$. Let $w=1/z$ and write $w = u+iv$. Then $x+iy = \dfrac1{u+iv} = \dfrac{u-iv}{u^2+v^2}$. Take real and imaginary parts to see that $x = u/(u^2+v^2)$ and $y = -v/(u^2+v^2)$. Then the equation $x+y=2$ becomes $u-v = 2(u^2+v^2)$. Rewrite this as $\bigl(u-\frac14\bigr)^2 + \bigl(v+\frac14\bigr)^2 = \frac18$. This represents a circle centred at $\frac14-\frac14i$, with radius $1/\sqrt8$. The image of the third side of the triangle is therefore part of that circle, namely the part lying in the fourth quadrant. The image of the triangle and its interior is the whole of the fourth quadrant apart from the interior of that circle.
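The circle computation in the reply above is easy to sanity-check numerically. A small sketch (my addition, not part of the thread): sample points on the third side, the segment from 2 to 2i lying on the line $x+y=2$, and verify that their images under $w=1/z$ land on the predicted circle in the fourth quadrant.

```python
# Sample the segment from 2 to 2i (the line x + y = 2) and check that each
# image under w = 1/z satisfies (u - 1/4)^2 + (v + 1/4)^2 = 1/8 with u > 0, v < 0.
for k in range(1, 100):
    t = k / 100
    z = 2 + t * (2j - 2)          # interior points of the third side
    w = 1 / z
    u, v = w.real, w.imag
    assert abs((u - 0.25) ** 2 + (v + 0.25) ** 2 - 0.125) < 1e-12
    assert u > 0 and v < 0        # image lies in the fourth quadrant
print("all sample points lie on the predicted circle")
```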
http://crypto.stackexchange.com/questions/2845/transparent-cipher?answertab=oldest
# Transparent cipher I'm trying to implement this protocol: 1. Alice has her permanent key $K_1$. She computes $E(K_1, P)$. 2. She wants to share $P$ with Bob, Carol and Dave. For each of them she: 2.1. Generates $K_2$. 2.2. Computes $E_k(K_2, K_1)$ as $C_k$ and sends it via a secure channel. 2.3. Computes $E(K_2, E(K_1, P))$ as $C$ and sends it via a secure channel. 3. Everyone decrypts $P$ using $D(C_k, C)$ Looks like I need something like this property: $$D(K_1 + K_2, E(K_2, E(K_1, P))) = P,$$ with $E_k: (K_2, K_1)\mapsto E_k(K_2, K_1) = K_1+K_2$, where $+$ doesn't necessarily mean 'concatenation'. I don't know the proper term for this, although I'm pretty sure it exists. Edit: "secure channel" in 2.2 and 2.3; $K_2$ in 2.3. The original problem has Frank as a relay host. He has $E(K_1, P)$ and he executes 2.1, 2.2 and 2.3. I don't want anyone to have to know $K_1$ to decrypt the original message, that's why I'm trying to come up with something complicated. I can't alter step 1 - if I could, an asymmetric cipher would solve my problem. - Are you sure it is a good idea to send the key for the next encryption in plain? (I.e. your two 2.2 steps seem not good fitting together.) – Paŭlo Ebermann♦ Jun 10 '12 at 12:10 I agree with Paŭlo; it sounds like everyone who gets both messages will be able to recover the plaintext. What's the motivation behind this protocol? What security properties is it supposed to have? If it is privacy, how is Bob supposed to decrypt, but Eve cannot? What distinguishes Bob from Eve? – poncho Jun 10 '12 at 16:44 Welcome to Cryptography Stack Exchange. Your question was migrated here because of being not directly related to software development (the topic of Stack Overflow), and being fully on-topic here. Please register your account here, too, to be able to comment and accept an answer.
– Paŭlo Ebermann♦ Jun 10 '12 at 19:10 There was a mistake in 2.3, fixed it and added some details – Kirill Morarenko Jun 11 '12 at 21:51 Ah, that's now consistent. By secure channel you likely mean it is safe from alteration. Is D in (3) bound to be the reverse of E in (1)? Is E in (2.3) bound to be exactly the same as E in (1)? If E is unchangeable, what can we assume about it? – fgrieu Jun 12 '12 at 5:58 ## 1 Answer The one-time pad has this property. Specifically, letting $\oplus$ denote the bitwise XOR operation, the binary OTP is defined as: $$E(K,M) = D(K,M) = K \oplus M.$$ From the commutativity and cancellation properties of $\oplus$, it then follows that $$\begin{aligned} D(K_1 \oplus K_2, E(K_1, E(K_2, M))) &= (K_1 \oplus K_2) \oplus (K_1 \oplus (K_2 \oplus M)) \\ &= (K_1 \oplus K_2) \oplus (K_1 \oplus K_2) \oplus M \\ &= M \end{aligned}$$ More generally, any synchronous stream cipher also has essentially the same property. Letting $S(K)$ be the keystream generated by the key $K$, a binary additive stream cipher is defined as: $$E(K,M) = D(K,M) = S(K) \oplus M.$$ Thus, if we define $S(K_1 + K_2)$ as $S(K_1) \oplus S(K_2)$ (where $K_1 + K_2$ denotes any encoded value from which we can unambiguously decode $K_1$ and $K_2$), then the stream cipher has the property you seek. Even more generally, various other commutative encryption schemes, such as (textbook) RSA, could be used to achieve a similar property. However, as others have noted in the comments, it's not at all clear that this actually accomplishes any useful security goal. I'd suggest that you may want to rethink your protocol, and preferably provide a more explicit description of what you want to accomplish with it. 
I do have a hunch that what you may be looking for might be something like the three-pass protocol, which allows one party to send a secret message to another using commutative encryption (specifically, exponentiation in a finite field) even if the two parties don't possess any shared keys. On a general level, the three-pass protocol works like this: 1. Alice → Bob: $C_A = E(K_A, M)$ 2. Bob → Alice: $C_{AB} = E(K_B, C_A) = E(K_B, E(K_A, M))$ 3. Alice → Bob: $C_B = D(K_A, C_{AB}) = D(K_A, E(K_B, E(K_A, M))) = E(K_B, M)$ Here, $K_A$ and $K_B$ are random keys chosen by Alice and Bob respectively. Bob then computes $M = D(K_B, C_B)$. Step 3 above works because we're using commutative encryption — specifically, $E(K, M) = M^K$ and $D(K, M) = M^{K^{-1}}$, where the exponentiation is done in a finite field ($GF(p)$ for Shamir's three-pass protocol, $GF(2^n)$ for the Massey–Omura version). See the Wikipedia article for more details. - Yes the OTP has the property $D(K_1\oplus K_2, E(K_2, E(K_1, P))) = P$, but not the property $E(C_k, E(K_1, P))=C$ required by (2.3) when one defines $C_k=K_1\oplus K_2$ which is how we read (2.2) and the introduction of $+$ in the question. The question appears to have a bug. – fgrieu Jun 11 '12 at 19:09 fixed that bug, thanks! – Kirill Morarenko Jun 11 '12 at 21:52
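To make the OTP property from the answer above concrete, here is a minimal Python sketch (the helper name `otp` and the sample message are mine): XORing the two keys together yields a combined key that undoes the double encryption in one step.

```python
import os

def otp(key: bytes, msg: bytes) -> bytes:
    """One-time pad: encryption and decryption are the same XOR operation."""
    return bytes(k ^ m for k, m in zip(key, msg))

msg = b"attack at dawn"
k1, k2 = os.urandom(len(msg)), os.urandom(len(msg))

combined = otp(k1, k2)            # the combined key "K1 + K2"
c = otp(k2, otp(k1, msg))         # C = E(K2, E(K1, P))
assert otp(combined, c) == msg    # D(K1 xor K2, C) recovers P
```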
http://www.physicsforums.com/showthread.php?t=616308
Physics Forums ## Rigid Body - Derivation of Momentum Equations 1. The problem statement, all variables and given/known data Hello, I'm having some issues deriving the momentum equation for rigid body motion in a non-inertial frame. Consider a rigid body moving in a two-dimensional (inertial XY) plane. A body-fixed frame (origin denoted as B) is located off the center of gravity. The body has arbitrary motion in the XY plane and arbitrary rotation about the Bz axis (denoted as $\omega\hat{k}$). There are two common methods I've seen to derive solutions to this problem. The first should always be true and I've included it for completeness (i.e. just so people can double check I haven't gone completely insane), but the second analysis is what is giving me trouble. 2. Relevant equations None 3. The attempt at a solution 3i. Analysis in a Stationary Inertial Frame ($V_{x}\hat{I} = V_{y}\hat{J} = 0$) First consider an arbitrary point on the rigid body (denoted as A). There are three significant position vectors to consider: 1.) the vector from the inertial frame to the body-fixed frame ($r_{OB}$), 2.) the vector from the body-fixed frame to the arbitrary point ($r_{BA}$), and 3.) the vector from the inertial frame to the arbitrary point ($r_{OA}$). The rate of change of $r_{OA}$ is simply $\frac{d}{dt}r_{OA} = \frac{d}{dt}r_{OB} + \frac{D}{Dt}r_{BA}$, where $\frac{D}{Dt}$ represents the rate of change of the vector $r_{BA}$ w.r.t. the inertial frame. $\frac{D}{Dt}r_{BA} = \frac{d}{dt}r_{BA} + \omega\times r_{BA}$ $\Rightarrow\frac{d}{dt}r_{OA} = \frac{d}{dt}r_{OB} + \frac{d}{dt}r_{BA} + \omega\times r_{BA}$ and $\frac{d^{2}}{dt^{2}}r_{OA} = \frac{d^{2}}{dt^{2}}r_{OB} + \frac{d^{2}}{dt^{2}}r_{BA} + 2\omega\times\frac{d}{dt}r_{BA} + \dot{\omega}\times r_{BA} + \omega\times(\omega\times r_{BA})$ With the assumption that the body is rigid, $\frac{d}{dt}r_{BA}$ and $\frac{d^{2}}{dt^{2}}r_{BA}$ equal 0.
If point A represents an arbitrary point mass, the rate of change of its momentum (viewed from the stationary inertial frame) is $\dot{p}_A = m_A a_A$, where $a_A=\frac{d^{2}}{dt^{2}}r_{OA}$ is the acceleration of point A viewed in the inertial frame. Summing over all points of the rigid body (using $\sum_i m_i r_{BA_i} = m\,r_{B\text{-}cg}$) gives $\dot{p}= m\left(\frac{d^{2}}{dt^{2}}r_{OB}+\dot{\omega}\times r_{B\text{-}cg}+\omega\times(\omega\times r_{B\text{-}cg})\right)$ where $r_{B\text{-}cg}$ is the position vector from the body frame to the body center of gravity. Note that if B is located at the object cg, $r_{B\text{-}cg}=0$ and the rate of change of momentum equals the mass times the acceleration of the body center of gravity. Aside Question 1: We can also note that we have described the rate of change (viewed in an inertial frame) of an arbitrary vector defined in a non-inertial reference frame. We can note that momentum is actually a vector defined in any frame (although the rate of change of momentum only equals the sum of the external forces in an inertial frame). I would have thought the rate of change of momentum (defined in the body-fixed frame) w.r.t. a zero-velocity inertial frame would be given by $\frac{d}{dt} p_{inertial}= \frac{d}{dt}p_{frame\text{-}motion}+\frac{d}{dt}p_{body}+\omega\times p_{body}$ but most of the references I've seen just have the last two terms on the RHS (a possible reason is given in 3ii). Additionally, I would think the momentum defined in the body-fixed frame would be equal to zero. I think this because the body-fixed frame should be translating with the same velocity as the body, and therefore the velocity vector in the body-fixed frame is a zero-magnitude vector. If someone could shed some light on this I'd appreciate it. 3ii. Analysis in a Moving Inertial Frame ($V_{x}\hat{I}$, $V_{y}\hat{J}$ = constant ≠ 0) We can note that all inertial frames differ at most by a constant velocity.
If, at each instant in time, an inertial frame with the same velocity as the rigid body is considered, the rate of change of an arbitrary vector (defined in the body-fixed frame) w.r.t. the inertial frame is given by $\frac{d}{dt}r_{OA} = \frac{d}{dt}r_{OB} + \frac{d}{dt}r_{BA} + \omega\times r_{BA}$ where $\frac{d}{dt}r_{OB}$ is zero because the inertial frame has the same velocity at the instant considered. This would give that the rate of change of momentum is $\frac{d}{dt} p_{inertial}=\frac{d}{dt}p_{body}+\omega\times p_{body}$ but I still don't know why the momentum in the body-fixed frame isn't equal to zero. This is the formulation that also leads to the Euler Equations (what I'm also trying to derive). I'm not really sure how this formulation is helpful, but it is very prevalent in the mechanical systems modeling references I've found. If anyone has thoughts on why this formulation is preferred, or good references, they'd be appreciated. Thanks, Brad Recognitions: Gold Member Homework Help I think part of the difficulty is giving the correct interpretation to the equation $\left(\frac{dA}{dt}\right)_{I}=\left(\frac{dA}{dt}\right)_{B} + \omega\times A$, which says that the rate of change of a vector A as viewed in the inertial frame (I) equals the rate of change of the same vector as viewed in the body frame (B) plus $\omega\times A$. Now let the vector A be the momentum of the body relative to the inertial frame, $P_{I}$. Then we would have $\left(\frac{dP_{I}}{dt}\right)_{I}=\left(\frac{dP_{I}}{dt}\right)_{B} + \omega\times P_{I}$. Note that the same vector $P_{I}$ appears throughout. In particular, the first term on the right side of the equation is not $\left(\frac{dP_{B}}{dt}\right)_{B}$ where $P_{B}$ is the momentum of the body with respect to the body (which would be zero). It can be very confusing. Ah that makes sense.
Let me work with this for a little while, and if I have any issues I'll re-post. Thanks!
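As a numerical sanity check of the transport theorem discussed above (my sketch, not part of the thread): for a vector with constant body-frame components, $(dA/dt)_B = 0$, so the inertial-frame rate of change should reduce to $\omega\times A$.

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])       # rotation about the z (Bz) axis, rad/s
A_body = np.array([1.0, 0.5, 0.0])      # constant components in the body frame

def R(t):
    """Rotation matrix of the body frame at time t (angle = omega_z * t)."""
    th = omega[2] * t
    return np.array([[np.cos(th), -np.sin(th), 0.0],
                     [np.sin(th),  np.cos(th), 0.0],
                     [0.0, 0.0, 1.0]])

t, h = 0.7, 1e-6
A_inertial = R(t) @ A_body
# central-difference estimate of (dA/dt)_I ...
dA_numeric = (R(t + h) @ A_body - R(t - h) @ A_body) / (2 * h)
# ... agrees with (dA/dt)_B + omega x A, where (dA/dt)_B = 0 here
assert np.allclose(dA_numeric, np.cross(omega, A_inertial), atol=1e-6)
```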
http://mathhelpforum.com/calculus/104627-multivariable-calc-derivatives.html
# Thread: 1. ## multivariable calc, derivatives calculate the jacobian matrix for the given function at the indicated point. write a formula for the total derivative. H(r, theta)= (rcostheta, rsintheta) a= (square root of 2, 3pi/2) so i changed it to H(x,y) and got the matrix 1 0 0 1 but the answer in the back of the book is 0 squ.rt 2 -1 0 i feel like it has to do with the point a but i don't know where i'm supposed to plug this point in? or how to write the formula for the total derivative? sorry this is my first time doing derivatives in this class so i'm a bit confused 2. Originally Posted by holly123 calculate the jacobian matrix for the given function at the indicated point. write a formula for the total derivative. H(r, theta)= (rcostheta, rsintheta) a= (square root of 2, 3pi/2) so i changed it to H(x,y) and got the matrix 1 0 0 1 but the answer in the back of the book is 0 squ.rt 2 -1 0 i feel like it has to do with the point a but i don't know where i'm supposed to plug this point in? or how to write the formula for the total derivative? sorry this is my first time doing derivatives in this class so i'm a bit confused The Jacobian matrix in this case is $\mathcal{J}\!\left(r,\theta\right)=\begin{bmatrix} \frac{\partial}{\partial r}r\cos\theta & \frac{\partial}{\partial \theta}r\cos \theta\\ \frac{\partial}{\partial r}r\sin \theta & \frac{\partial}{\partial \theta}r\sin \theta\end{bmatrix}=\begin{bmatrix}\cos\theta &-r\sin\theta\\ \sin\theta & r\cos\theta\end{bmatrix}$ Now what is $\mathcal{J}\!\left(\sqrt{2},\tfrac{3\pi}{2}\right)$? 3. ohh that makes more sense. i got the answer in the back now. but how do i write a formula for the total derivative? is there a general formula for this i can't seem to find it in the book 4. By total derivative are you talking about the total differential? By total derivative we can also mean the derivative of a function of several variables with respect to one of the input variables. 
Both formulas are in the attachment. If you are talking about the total derivative of a transformation, if I am not mistaken that is the Jacobian matrix. I would like some clarification on that myself. My best guess, based on what I know of differentials for single-valued functions, is detJ * drd(theta) = rdrd(theta) (areal element for polar coords) 5. thanks, nevermind i was just supposed to put the jacobian matrix into a function. kind of like in calc one, taking the derivative then finding the equation of the tangent line
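For reference, the computation in reply #2 can be reproduced with SymPy (my addition, not part of the thread), confirming the book's answer at the point $a=(\sqrt 2, 3\pi/2)$:

```python
import sympy as sp

r, th = sp.symbols('r theta')
H = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])
J = H.jacobian([r, th])            # [[cos th, -r sin th], [sin th, r cos th]]
Ja = J.subs({r: sp.sqrt(2), th: 3 * sp.pi / 2})
print(Ja)                          # the answer in the back of the book
assert Ja == sp.Matrix([[0, sp.sqrt(2)], [-1, 0]])
```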
http://en.wikipedia.org/wiki/Electric_fields
# Electric field Electric field lines emanating from a point positive electric charge suspended over a negatively charged infinite sheet An electric field surrounds electrically charged particles and time-varying magnetic fields. The electric field depicts the surrounding force of an electrically charged particle exerted on other electrically charged objects. The concept of an electric field was introduced by Michael Faraday. ## Qualitative description The electric field is a vector field with SI units of newtons per coulomb (N C⁻¹) or, equivalently, volts per metre (V m⁻¹). The SI base units of the electric field are kg⋅m⋅s⁻³⋅A⁻¹. The strength or magnitude of the field at a given point is defined as the force that would be exerted on a positive test charge of 1 coulomb placed at that point; the direction of the field is given by the direction of that force. Electric fields contain electrical energy with energy density proportional to the square of the field amplitude. The electric field is to charge as gravitational acceleration is to mass and force density is to volume. An electric field that changes with time, such as due to the motion of charged particles in the field, influences the local magnetic field. That is, the electric and magnetic fields are not completely separate phenomena; what one observer perceives as an electric field, another observer in a different frame of reference perceives as a mixture of electric and magnetic fields. For this reason, one speaks of "electromagnetism" or "electromagnetic fields". In quantum electrodynamics, disturbances in the electromagnetic fields are called photons, and the energy of photons is quantized. ## Quantitative definition Electric field from a positive Q Electric field from a negative Q Consider a point charge q with position (x,y,z). Now suppose the charge is subject to a force $\mathbf{F}_\text{on q}$ due to other charges.
Since this force varies with the position of the charge and by Coulomb's law it is defined at all points in space, $\mathbf{F}_\text{on q}$ is a continuous function of the charge's position (x,y,z). This suggests that there is some property of the space that causes the force which is exerted on the charge q. This property is called the electric field and it is defined by $\mathbf{E}(x,y,z)=\frac{\mathbf{F}_\text{on q}(x,y,z)}{q}$ Notice that the magnitude of the electric field has units of Force/Charge. Mathematically, the E field can be thought of as a function that associates a vector with every point in space. Each such vector's magnitude is proportional to how much force a charge at that point would "feel" if it were present and this force would have the same direction as the electric field vector at that point. It is also important to note that the electric field defined above is caused by a configuration of other electric charges. This means that the charge q in the equation above is not the charge that is creating the electric field, but rather, being acted upon by it. This definition does not give a means of computing the electric field caused by a group of charges. From the definition, the direction of the electric field is the same as the direction of the force it would exert on a positively charged particle, and opposite the direction of the force on a negatively charged particle. Since like charges repel and opposites attract, the electric field is directed away from positive charges and towards negative charges. ## Superposition ### Array of discrete point charges Electric fields satisfy the superposition principle. If more than one charge is present, the total electric field at any point is equal to the vector sum of the separate electric fields that each point charge would create in the absence of the others.
$\mathbf{E} = \sum_i \mathbf{E}_i = \mathbf{E}_1 + \mathbf{E}_2 + \mathbf{E}_3 \cdots \,\!$ The total E-field due to N point charges is simply the superposition of the E-fields due to each point charge: $\mathbf{E} = \sum_{i=1}^N \mathbf{E}_i = \frac{1}{4\pi\varepsilon_0} \sum_{i=1}^N \frac{Q_i}{r_i^2} \mathbf{\hat{r}}_i.$ where $r_i$ is the distance from charge $Q_i$ to the evaluation point, and $\mathbf{\hat{r}}_i$ the corresponding unit vector. ### Continuum of charges The superposition principle holds for an infinite number of infinitesimally small elements of charges – i.e. a continuous distribution of charge. The limit of the above sum is the integral: $\mathbf{E} = \int_V d\mathbf{E} = \frac{1}{4\pi\varepsilon_0} \int_V\frac{\rho}{r^2} \mathbf{\hat{r}}\,\mathrm{d}V = \frac{1}{4\pi\varepsilon_0} \int_V\frac{\rho}{r^3} \mathbf{r}\,\mathrm{d}V \,\!$ where ρ is the charge density (the amount of charge per unit volume), and dV is the differential volume element. This integral is a volume integral over the region of the charge distribution. The electric field at a point is equal to the negative gradient of the electric potential there, $\mathbf{E} = -\nabla \Phi$ Coulomb's law is actually a special case of Gauss's Law, a more fundamental description of the relationship between the distribution of electric charge in space and the resulting electric field. While Coulomb's law (as given above) is only true for stationary point charges, Gauss's law is true for all charges either in static or in motion. Gauss's law is one of Maxwell's equations governing electromagnetism. Gauss's law allows the E-field to be calculated in terms of a continuous distribution of charge density $\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon _0}.$ where ∇⋅ is the divergence operator, ρ is the total charge density, including free and bound charge, in other words all the charge present in the system (per unit volume).
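The discrete superposition sum above translates directly into code. A small sketch (mine, not part of the article; the helper name, charges and positions are arbitrary test values):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def e_field(point, charges):
    """E at `point` by superposing Coulomb fields: sum of q r_hat / (4 pi eps0 r^2)."""
    E = np.zeros(3)
    for q, pos in charges:
        r = np.asarray(point, dtype=float) - np.asarray(pos, dtype=float)
        E += q * r / (4.0 * np.pi * EPS0 * np.linalg.norm(r) ** 3)
    return E

# Two equal and opposite charges on the x axis: at the midpoint the two
# contributions add, pointing from the positive toward the negative charge.
dipole = [(1e-9, (-0.01, 0.0, 0.0)), (-1e-9, (0.01, 0.0, 0.0))]
E_mid = e_field((0.0, 0.0, 0.0), dipole)
assert E_mid[0] > 0 and abs(E_mid[1]) < 1e-12 and abs(E_mid[2]) < 1e-12
```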
## Electrostatic fields Main article: Electrostatics Electrostatic fields are E-fields which do not change with time, which happens when the charges are stationary. Illustration of the electric field surrounding a positive (red) and a negative (blue) charge in one dimension if the right charge is changing from positive to negative Illustration of the electric field surrounding a positive (red) and a negative (blue) charge. The electric field at a point E(r) is equal to the negative gradient of the electric potential $\scriptstyle \mathbf{\Phi}(\mathbf{r})$, a scalar field at the same point: $\mathbf{E} = -\nabla \Phi$ where ∇ is the gradient. This is equivalent to the force definition above, since electric potential Φ is defined by the electric potential energy U per unit (test) positive charge: $\Phi = \frac{U}{q}$ and force is the negative of potential energy gradient: $\mathbf{F} = - \nabla U$ If several spatially distributed charges generate such an electric potential, e.g. in a solid, an electric field gradient may also be defined. ### Uniform fields A uniform field is one in which the electric field is constant at every point. It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage (potential difference) between them; it is only an approximation because of edge effects. Ignoring such effects, the equation for the magnitude of the electric field E is: $E = - \frac{\Delta\phi}{d}$ where Δϕ is the potential difference between the plates and d is the distance separating the plates. The negative sign arises as positive charges repel, so a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases. In micro- and nanoapplications, for instance in relation to semiconductors, a typical magnitude of an electric field is in the order of 1 volt/µm achieved by applying a voltage of the order of 1 volt between conductors spaced 1 µm apart. 
### Parallels between electrostatic and gravitational fields Coulomb's law, which describes the interaction of electric charges: $\mathbf{F}=q\left(\frac{Q}{4\pi\varepsilon_0}\frac{\mathbf{\hat{r}}}{|\mathbf{r}|^2}\right)=q\mathbf{E}$ is similar to Newton's law of universal gravitation: $\mathbf{F}=m\left(-GM\frac{\mathbf{\hat{r}}}{|\mathbf{r}|^2}\right)=m\mathbf{g}$. This suggests similarities between the electric field E and the gravitational field g, so sometimes mass is called "gravitational charge". Similarities between electrostatic and gravitational forces: 1. Both act in a vacuum. 2. Both are central and conservative. 3. Both obey an inverse-square law (both are inversely proportional to square of r). Differences between electrostatic and gravitational forces: 1. Electrostatic forces are much greater than gravitational forces for natural values of charge and mass. For instance, the ratio of the electrostatic force to the gravitational force between two electrons is about $10^{42}$. 2. Gravitational forces are attractive for like charges, whereas electrostatic forces are repulsive for like charges. 3. There are no negative gravitational charges (no negative mass) while there are both positive and negative electric charges. This difference, combined with the previous two, implies that gravitational forces are always attractive, while electrostatic forces may be either attractive or repulsive.
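The order-of-magnitude figure for the electron force ratio quoted above is easy to reproduce with rough values of the constants (my sketch, not from the article):

```python
# Ratio of electrostatic to gravitational force between two electrons,
# using approximate CODATA values for the constants.
e, m_e = 1.602e-19, 9.109e-31      # electron charge (C) and mass (kg)
k_e, G = 8.988e9, 6.674e-11        # Coulomb constant and gravitational constant
ratio = k_e * e**2 / (G * m_e**2)  # distance cancels: both forces go as 1/r^2
print(f"{ratio:.2e}")              # on the order of 10**42
assert 1e42 < ratio < 1e43
```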
The electric field is then given by: $\mathbf{E} = - \nabla \phi - \frac { \partial \mathbf{A} } { \partial t }$ in which B satisfies $\mathbf{B} = \nabla \times \mathbf{A}$ and ∇× denotes the curl. The vector field B is the magnetic flux density and the vector A is the magnetic vector potential. Taking the curl of the electric field equation we obtain, $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}} {\partial t}$ which is Faraday's law of induction, another one of Maxwell's equations.[1] ## Energy in the electric field Main article: Electric potential energy The electrostatic field stores energy. The energy density u (energy per unit volume) is given by[2] $u = \frac{1}{2} \varepsilon |\mathbf{E}|^2 \, ,$ where ε is the permittivity of the medium in which the field exists, and E is the electric field vector (in newtons per coulomb). The total energy U stored in the electric field in a given volume V is therefore $U = \frac{1}{2} \varepsilon \int_{V} |\mathbf{E}|^2 \, \mathrm{d}V \, ,$ ## Further extensions ### Definitive equation of vector fields In the presence of matter, it is helpful in electromagnetism to extend the notion of the electric field into three vector fields, rather than just one:[3] $\mathbf{D}=\varepsilon_0\mathbf{E}+\mathbf{P}\!$ where P is the electric polarization – the volume density of electric dipole moments, and D is the electric displacement field. Since E and P are defined separately, this equation can be used to define D. The physical interpretation of D is not as clear as E (effectively the field applied to the material) or P (induced field due to the dipoles in the material), but still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents. 
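A minimal numeric illustration (my addition; the susceptibility value is made up) of how the defining relation D = ε₀E + P works out for a linear dielectric:

```python
import numpy as np

eps0 = 8.8541878128e-12                 # vacuum permittivity, F/m
chi_e = 3.0                             # hypothetical electric susceptibility
E = np.array([100.0, 0.0, 0.0])         # applied field, V/m

P = eps0 * chi_e * E                    # induced polarization of a linear medium
D = eps0 * E + P                        # the defining relation above
# For a linear medium this collapses to D = eps0 * (1 + chi_e) * E = eps * E:
assert np.allclose(D, eps0 * (1 + chi_e) * E)
```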
### Constitutive relation Main article: Constitutive equation The E and D fields are related by the permittivity of the material, ε.[4][5] For linear, homogeneous, isotropic materials E and D are proportional and constant throughout the region; there is no position dependence: $\mathbf{D}=\varepsilon\mathbf{E}$ For inhomogeneous materials, there is a position dependence throughout the material: $\mathbf{D}(\mathbf{r})=\varepsilon(\mathbf{r})\mathbf{E}(\mathbf{r})$ For anisotropic materials the E and D fields are not parallel, and so E and D are related by the permittivity tensor (a 2nd order tensor field), in component form: $D_i=\varepsilon_{ij}E_j$ For non-linear media, E and D are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy. ## See also • Classical electromagnetism • Magnetism • Teltron Tube • Teledeltos, a conductive paper that may be used as a simple analog computer for modelling fields. ## References 1. Huray, Paul G. (2009), Maxwell's Equations, Wiley-IEEE, ISBN 0-470-54276-4, Chapter 7, p. 205
http://mathoverflow.net/questions/84443?sort=votes
## Complexity of computing expansion of a newform level 18 weight 3 and character [3] - OEIS A116418

I am not familiar with newforms, so this may not make any sense. OEIS sequence A116418 is

Expansion of a newform level 18 weight 3 and character [3]

Numerical evidence suggests that up to $10^5$ $$\text{A116418}[n] \equiv \sigma(3n+1) \pmod 3$$

What is the complexity of computing A116418[n], possibly assuming $n$ is factored? (For modular form coefficients, once $n$ is factored the coefficient is efficiently computable.)

Gjergji Zaimi proved a similar congruence involving eta, and A116418 is the expansion of an eta formula.

Added My main interest is computing $\sigma(3n+1) \mod 3$, and a comment by Dror Speiser suggests the coefficient of the newform is computable in polynomial time assuming $n$ is factored. The factorization of $n$ is not related to the factorization of $3n+1$, and for numbers of the form $3 \cdot 2^n + 1$ the factorization is trivial.

Is A116418 really the expansion of the newform, or is it a typo in OEIS? Is the congruence $\text{A116418}[n] \equiv \sigma(3n+1) \pmod 3$ an identity, or just a coincidence for the first $10^5$ terms?

- Computing the nth coefficient, assuming n is factored, is polynomial in the level, weight, and $\log{n}$. This is proven (for level one, but claimed to work for all levels) in Computational Aspects of Modular Forms and Galois Representations, Edixhoven et al. – Dror Speiser Dec 28 2011 at 14:18

Thank you Dror. I am interested in computing sigma(3n+1) mod 3. The factorization of $n$ is not related to the factorization of $3n+1$, so one can factor $n$ by, say, ECM, and for numbers of the form $3 \cdot 2^n + 1$ the factorization is trivial. Is A116418 indeed the expansion of the newform in question, or is it a typo in OEIS?
– joro Dec 28 2011 at 14:39

2 @joro: The sequence's nth number is the $3n+1$-th coefficient of the modular form - so $\sigma(3n+1)=a_{3n+1}(f)$, and not $\sigma(3n+1)=a_n(f)$. So you would still need to factorise $3n+1$. In any case, your observation is not a coincidence. Indeed, the modular form is equal mod 3 to an Eisenstein series. You can read up on this by searching keywords such as "congruences between modular forms". – Dror Speiser Dec 28 2011 at 16:49

It seems this newform has CM by $K=\mathbf{Q}(\sqrt{-3})$, in which case it comes from a Grössencharakter $\psi$ of $K$ (by a theorem of Ribet). Its Fourier expansion then has the form $f=\sum_{I} \psi(I) q^{N(I)}$ where the sum is taken over ideals (I don't know if this helps for computation). About CM newforms a good reference is Matthias Schütt's dissertation iag.uni-hannover.de/~schuett/publik_en.html – François Brunault Dec 28 2011 at 17:34

1 If this form were CM then it would indeed help computation: the Edixhoven program would not be needed, and computing the $q^n$ coefficient would be almost(?) equivalent to factoring $n$. But alas it's not CM, see my answer below. – Noam D. Elkies Dec 31 2011 at 5:07

## 1 Answer

[More a comment than an answer, but too long for the comment space] Call this form $$\varphi := \frac{\eta(q^3)^2 \eta(q^6)^3 \eta(q^9)^2}{\eta(q^{18})} = q - 2q^4 - 4q^7 + 6q^{10} + 8q^{13} \cdots.$$ The listing of coefficients in the OEIS is correct as far as it goes (checked with copy-and-paste to gp). The form is not CM: the coefficients are supported on $q^n$ with $n \equiv 1 \bmod 3$ but do not vanish even for $n$ such as $10$ and $22$ that are $1 \bmod 3$ but not norms from ${\bf Q}(\sqrt{-3})$. In particular the coefficients aren't multiplicative, so $\varphi$ isn't quite an eigenform. It seems that the relevant eigenforms are obtained as follows.
Apply $w_{18}$ to get (within a multiplicative factor) $$\phi := \frac{\eta(q^6)^2 \eta(q^3)^3 \eta(q^2)^2}{\eta(q)} = q + q^2 - 2q^4 - 3q^5 - 4q^7 - 2q^8 + 6q^{10} + 12q^{11} + 8q^{13} - 4q^{14} \cdots,$$ whose $q^n$ coefficient is 0 if $n \equiv 0 \bmod 3$, and coincides with the $q^n$ coefficient of $\varphi$ also when $n \equiv 1 \bmod 3$, but need not vanish for $n \equiv 2 \bmod 3$. Then "experimentally" if $m,n$ are relatively prime then the $q^{mn}$ coefficient of $\phi$ equals the product of the $q^m$ and $q^n$ coefficients, unless both $m$ and $n$ are $2 \bmod 3$, when the $q^{mn}$ coefficient is $-2$ times that product. Hence we obtain an eigenform by choosing a square root of $-2$ and multiplying the $q^n$ coefficient of $\phi$ by that square root for each $n \equiv 2 \bmod 3$. As Dror Speiser notes, the Edixhoven program promises to compute the $q^n$ coefficient of such a form in time $\log^{O(1)}n$ for $n$ prime, and thus for all $n$ given the factorization of $n$; but I don't think this has been implemented yet to the point that one could actually carry out the computation this way. For specific forms there can be shortcuts that make a $\log^{O(1)}n$ computation practical (still assuming $n$ is factored), but here I've tried a few things and not yet(?) found such a shortcut. [added later] Curiously the images of $\phi$ under the other two $w$ operators are in the linear span of $\varphi$ and $\phi$: if we write $$\psi = \frac{\eta(q^3)^3 \eta(q^6)^2 \eta(q^{18})^2}{\eta(q^9)} = q^2 - 3q^5 - 2q^8 + 12q^{11} - 4q^{14} \cdots$$ for (a multiple of) the $w_2$ image, then $\phi = \varphi + \psi$, while $\varphi - 2 \psi$ is the multiple $$\frac{\eta(q)^2 \eta(q^3)^2 \eta(q^6)^3}{\eta(q^2)} = q - 2q^2 - 2q^4 + 6q^5 - 4q^7 + 4q^8 + 6q^{10} - 24q^{11} + 8q^{13} + 8q^{14} \ldots$$ of the $w_9$ image. - "Hence... and multiplying each coefficient..." do you mean "multiplying the $n \equiv 2 mod 3$ coefficients"? 
– David Hansen Dec 29 2011 at 4:08 @David Hansen: Yes, thanks; fixed now. – Noam D. Elkies Dec 29 2011 at 4:35 Using Magma I found out that $\varphi= \frac12 (f+\overline{f})$ where $f$ is a weight 3 newform of level 18 and coefficients in $\mathbf{Z}(\sqrt{-2})$. The character $\varepsilon$ of $f$ is the nontrivial character of conductor $3$. This is consistent with what you found because $\overline{f} = f \otimes \overline{\varepsilon}$ so that if we write $f = \sum a_n q^n$, we have $\varphi = \frac12 \sum a_n (1+\varepsilon(n)) q^n$ (this also explains the vanishing of the $n \equiv 2 \mod{3}$ coefficients of $\varphi$). – François Brunault Dec 31 2011 at 19:47
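The eta product and the observed congruence are easy to check numerically with truncated power-series arithmetic. The sketch below is plain Python written for this discussion (it is not gp, Magma, or the Edixhoven program); it builds the q-expansion of $\varphi = \eta(q^3)^2\eta(q^6)^3\eta(q^9)^2/\eta(q^{18})$ and verifies both the coefficients quoted above and the congruence $[q^{3n+1}]\varphi \equiv \sigma(3n+1) \pmod 3$ for small $n$.

```python
N = 40  # truncation order: track coefficients of q^0 .. q^N

def eta_factor(m, N):
    """Coefficients of prod_{k>=1} (1 - q^{m*k}), truncated at q^N (no q-power prefactor)."""
    f = [0] * (N + 1)
    f[0] = 1
    n = m
    while n <= N:
        for i in range(N, n - 1, -1):   # multiply in place by (1 - q^n)
            f[i] -= f[i - n]
        n += m
    return f

def mul(f, g, N):
    """Truncated product of two coefficient lists."""
    h = [0] * (N + 1)
    for i, c in enumerate(f):
        if c:
            for j in range(N + 1 - i):
                h[i + j] += c * g[j]
    return h

def inv(f, N):
    """Multiplicative inverse of a power series with f[0] == 1."""
    g = [0] * (N + 1)
    g[0] = 1
    for n in range(1, N + 1):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1))
    return g

f = [1] + [0] * N
for m, e in [(3, 2), (6, 3), (9, 2)]:
    for _ in range(e):
        f = mul(f, eta_factor(m, N), N)
f = mul(f, inv(eta_factor(18, N), N), N)

# The eta prefactors contribute q^{(2*3 + 3*6 + 2*9 - 18)/24} = q^1, shifting by one:
phi = [0] + f                       # phi[n] = coefficient of q^n in the form

# Matches the expansion quoted in the answer:
assert [phi[k] for k in (1, 4, 7, 10, 13)] == [1, -2, -4, 6, 8]

def sigma(n):                       # sum-of-divisors function
    return sum(d for d in range(1, n + 1) if n % d == 0)

# A116418[n] is the q^{3n+1} coefficient; the congruence holds for these n:
assert all((phi[3 * n + 1] - sigma(3 * n + 1)) % 3 == 0 for n in range(13))
```

This only re-verifies the congruence term by term, of course; it says nothing about the complexity question, for which the Edixhoven et al. approach cited in the comments is the relevant reference.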
http://meta.cstheory.stackexchange.com/questions/225/official-faq-for-theoretical-computer-science/231
# Official FAQ for Theoretical Computer Science This is the official site FAQ for Theoretical Computer Science. This question will contain a list of questions, with each question linking to a single answer. If there is prior discussion on an answer, or you wish to discuss a particular answer, please create a separate meta question and link to it (so that this FAQ remains uncluttered). The scope of this site: How to use this site: How to help the site: This site is run by YOU, and your help is needed to maintain it as a valuable resource for the community. Here are some ways in which you can help (once you acquire enough reputation points): Best practices: Please use this thread if you have suggestions for improving this FAQ. - ## How do I write formulas? This site uses MathJax. You can simply type (almost) any LaTeX equations between a pair of \$-signs, and the system will display it properly typeset. This should work in questions, answers, and comments (with some exceptions). ## Why don't I see any formulas? Why do I get a [Math Processing Error]? Try this page first and see if MathJax works in your system at all. If the demo page works, but formulas on this site don't work, try to clear your browser's cache. Related discussion: [1] [2] [3] [4] - 2 I think MathJax behavior has changed. You no longer need the "double-backslash trick." For instance, `$A = \{a,b,c\}$` works as $A = \{a,b,c\}$, which is perfect. – Sadeq Dousti Dec 15 '10 at 13:43 ## When should I downvote/upvote? First of all, try to vote often. You have a budget of 30 votes per day, and it is perfectly fine to use up your daily quota. Here are some guidelines that you can use to decide whether you should upvote, downvote, or not vote at all: • Upvote only questions and answers you understand but do so whenever they are of high quality. To evaluate quality for a question think of whether it is interesting, whether it is at the appropriate level, whether it is well formulated. 
To evaluate quality for an answer think whether it addresses the question and whether you learned something from it. Above all, use your judgement. • Downvote a question only if it is clearly not at the appropriate level or it is clearly not well formulated. Downvote an answer only if it is technically wrong, or does not address the question, or is so badly formulated that it is impossible to understand. You should leave a comment explaining why you downvoted whenever you do so, unless that would duplicate an already existing comment. • You should ignore a question/answer's current score when you decide on your vote. You can't upvote or downvote your own question/answer. Related discussion: [1] [2] [3] - ## Do I need to use my real name? You do not have to, but you are strongly encouraged to use your real name. We hope that this site will become a valuable resource for the research community, and that reputation earned here will translate into reputation outside as well. Using your real name will help make that connection. We also hope that research discussions on this site will occasionally lead to actual citations in papers, or even new research ! From that perspective, using your real name will help immensely. Related discussion: [1] [2] - ## What kind of questions are too basic? Our aim is to ask and answer research-level questions in theoretical computer science. "Research-level" means, roughly, questions that might be discussed between two professors, or between graduate students working on Ph.D.'s, but not usually between a professor and the typical undergraduate student. It does not include questions at the level of difficulty of undergraduate homework. Here are some examples of questions that are off-topic because they are not research-level or because they are too easy: • Questions that can be easily answered by reading a Wikipedia article on the topic. • Questions that can be easily solved by Googling. 
• Questions that are solved by browsing one of these web sites. • Typical homework problems in textbooks. If you do not think that your question is research-level but you cannot find an answer in above options, you may want to try http://math.stackexchange.com/. Related discussion: [1] [2] [3] - – becko Jun 14 '12 at 3:48 ## Useful resources Complexity classes: • Complexity Zoo. In questions that are related to complexity classes, it's a good idea to first check the Zoo and see if your question is already answered there – or if it's a well-known open problem. The Zoo also defines the standard naming convention for complexity classes on this site. Hardness and approximability: Problems related to graph families: Related discussion: [1] - proposals: wikipedia.org scholar.google.com oeis.org – Radu GRIGore Sep 7 '10 at 11:56 – Kaveh♦ Aug 2 '11 at 2:56 ## My (wonderful!) question was closed! What do I do now? There are many reasons a particular question might be closed. Don't be discouraged if this happens to you! Many such questions have a lot of merit but are closed for reasons that may not have occurred to you. You should consider re-asking your question after reading the comments others leave and adjusting your presentation. If, after reading through the rest of this FAQ, you have additional concerns about how to effectively formulate your question, post a new question on meta.cstheory.stackexchange.com and link that question to the original source. Related discussion: here - ## My answer turned out to be incorrect. What should I do? There is no perfect rule telling you what to do in such cases. Here are some of the options: • Edit it to a correct answer. In this case, leaving a comment to the answer explaining the edit might be also a good idea. • Leave it as it is after adding a clear notice that it is incorrect and an explanation of why at the beginning of the answer. • Delete it. 
The consensus seems to be that leaving only a comment about incorrectness is not sufficient because readers will not notice it until they read the comments. An incorrect answer clearly marked as such is often informative because it shows an incorrect approach which other people may think is correct. Therefore, some people consider that we should not delete an answer just because it is incorrect. Related discussions: [1] [2] - ## When to use "community wiki"? The "community wiki" (CW) flag should be reserved for questions where either (i) you do not believe that there exists one (or even a few) good answers, and where any individual response contributes a small enough piece of the overall answer that no reputation increase is warranted, or (ii) you want to encourage other users to edit your answer or question, perhaps if you realise there are gaps in your answers that you would like other users to fill in. CW is irreversible. When in doubt, do not mark a question CW – this can always be changed later. A crucial technical point: any answers made to a CW question will themselves be CW. Often, a question that seems like it should be CW can be modified to be more direct and focused, thus avoiding the need for the tag. For example, the seemingly CW question "What are the key papers in topic X" can often be reworded as the more focused "What I should start reading when studying topic X". The latter has one or a few definite answers and need not be made CW. If a question or answer is CW, there is something of an invitation to edit it: the reasons for editing are broader for CW posts than non-CW posts. Related discussion: [1] [2] ## When to mark an answer as "community wiki"? Anyone can edit a CW (community wiki) answer and no one earns reputation for it. One good use of the CW flag is to aggregate multiple answers into a single one with uniform formatting. Another possible use of the CW flag is to invite others to fill in an incomplete answer. 
(Perhaps you only have time to sketch the idea, or perhaps you do not know the complete answer but have a strong hunch that your idea points in the right direction.) This answer is flagged CW for good reason: There's room for improvement. - 2 A quick note: Part of this no longer applies because, as of today, the option to declare questions as community wiki became a privileged action which only moderators can use (meta.stackoverflow.com/questions/67581/…). Answers can still be declared as community wiki as before. – Tsuyoshi Ito Oct 14 '10 at 15:06 @Tsuyoshi, I edited taking into account your comment. Don't forget this is a CW answer too. :) – Radu GRIGore Feb 9 '11 at 14:21 ## Questions about Theorem X in paper Y? It is perfectly fine to ask very specific questions like this: How does one reach eq 3.14 from eq 3.13 in paper X, page Y? I tried the following ... but it didn't work. However, if you suspect that there is a mistake in a paper, it might be polite to first contact the authors by email before posting a question here. Related discussion: [1] [2] - ## What is the policy on crossposting to/from MathOverflow? Crossposting from MathOverflow is perfectly fine, as long as they aren't done in parallel. That is, if you post a question on one site, you should only post to the other site after you have not received a satisfactory answer for some time, and you should provide a link in each post (or in the comments on it) to the other one. As a courtesy, if you post your question here after trying MathOverflow, please try to integrate in your question the answers you received on MathOverflow (even if they did not answer everything). Related discussion: [1] [2] - ## Why can't I comment on my own question? Ideally, you should always be able to comment on your own question. If you cannot find a link labeled “add comment” below your own question, you may be using a different account from the one you used to post the question. 
Note that using the same user name and the same icon does not mean that you are using the same account. If this happens, ask the moderators to merge the two accounts by flagging the question and state the situation briefly in the flagging comment. Alternatively, you can open a thread on meta to explain the situation. To prevent this from happening in the future, we strongly encourage you to register your account. Related discussion: [1] -

1 I realized that a user needs 15 rep points to flag a post and 5 rep points to post on meta. This means that a new user (including a user who lost a browser cookie for his/her own unregistered account) cannot use either of the suggestions in this answer. We need a better way to handle this. – Tsuyoshi Ito Jun 12 '12 at 2:46

I remember a discussion about using answers for commenting when a user doesn't have enough reputation. – Kaveh♦ Jun 12 '12 at 4:16

## How do I delete my own question/answer?

The conditions to delete a post are explained in detail on meta.stackoverflow.com. If you can delete your post, there is a link labeled “delete” below your post (on the same line as “link” and “flag”). If you have not registered, you cannot delete a post. To delete your post in this case, register first and then try to delete it. If you still cannot, you may have an unregistered account which is not merged to your registered account. In this case, ask moderators to merge your unregistered accounts into the registered one. -

## How do I copy-and-paste a formula?

Right-click on it, and select "Show Source". You can try it here: $\sum_x 2f(v_x)^2$. Note that the MathJax menu (which contains the item “Show Source”) may be behind the context menu of your browser. Related discussion: [1] -

## There are too many questions on boring topic X that I don't care about. What should I do?

Tagging provides a mechanism to filter out topics that you aren't interested in, and retain topics that you are.
On the right of the front page you will see two lists: interesting tags and ignored tags. If you enter tags relating to subjects you care about (and the system will auto-suggest based on what you type), then questions tagged with those tags will be highlighted in the main list. Conversely, if you add tags for topics you don't care about, any question containing those tags will be hidden from view. If a question contains both interesting and ignored tags, it will be displayed with a faded color. It's easy to delete tags from either list just by clicking on the little cross. -

## What kinds of tags should I use?

Each question should have at least one "ArXiv tag", and any number of additional tags (there is a limitation of at most 5 tags per question and at most 24 characters per tag). Try to re-use existing tags whenever possible, but you are also welcome to propose new tags. See this page for our current list of tag synonyms.

### ArXiv tags

[FIXME: add answer]

Related discussion:

### [soft-question]

A soft-question is subjective and argumentative, without a precise answer, and can be a question about theoretical computer science rather than a question in it. Examples:

• How would you teach X to undergrads?
• Why do computer scientists do X when Y happens?
• Is theoretical computer science part of pure math?
• How long will it take to settle P vs NP, in your opinion?

More discussion.

### [big-picture]

A big-picture question is a higher-level question; although it can be subjective and argumentative, it does not need to be, and it is a question in theoretical computer science. Examples:

• What is the main idea behind Razborov-Rudich natural proofs?
• What are the obstacles to proving a super-polynomial lower bound for SAT?
• Where did the idea of Domain theory come from?
• What are the main objects and constructions used in crypto?
• Is there a program to settle the Unique Games Conjecture?
• Is polynomial-time Turing computability the right definition of feasible computation?
• What evidence do we have for the Church-Turing thesis?

More discussion. -

## What if I see a question that I think is inappropriate/off-topic/offensive?

(this may need to be factored into multiple answers)

One of the most important ways in which you can help this site flourish is by making sure that the questions listed on it reflect the true interests of the community. A general procedure to follow if you have issues with a question is:

1. If it's a matter of scope, check this FAQ entry.
2. If the question is incoherent or unclear, suggest in comments ways in which the original poster (OP) might fix it.
3. If there's a disagreement, initiate a discussion on meta. Cross-link the meta discussion and the original post.
4. If you have enough reputation, you may choose to vote to close the question. In that case, please post your reason as a comment and link to a meta discussion thread.
5. If the question is offensive, or spam, you can flag it and a moderator will deal with it.
6. If you wish to bring this question to moderator attention for any other reason, flag it.

### What is meta? Why do people keep telling me to go there? I just want my question answered!

CSTheory is a Q&A site, so it's geared towards asking and answering questions, rather than general discussions. Often, a discussion breaks out regarding the merits of a question or its suitability on the site. Also, you might have a general question about using the site that isn't covered in this FAQ. For this purpose, there's a "meta" site where you can post such questions. As a general rule, if you have any questions/discussions about a particular question or set of questions, post a new question on meta.cstheory.stackexchange.com and link that question to the original source. -

## Are questions about [this area of computer science] on-topic here?

As our FAQ explains, we interpret "Theoretical Computer Science" broadly.
Roughly speaking, the subareas of computer science that put emphasis on mathematical technique and rigor are on-topic. Here are some further clarifications regarding possible borderline cases:

• Pure math: if the problem has some applications in theoretical computer science, it is welcome here – more discussion.
• Artificial intelligence: theoretical aspects of artificial intelligence are on-topic; philosophical questions are probably off-topic – more discussion.

Further discussion: [1] [2] -

## I did a search for a tag, and a mislabelled question came up. What do I do?

If you have enough reputation points, you can retag mislabelled questions. This is important to make sure tags reflect the true content of the question, which makes searching and filtering easier. Please use the tagging guide to decide what the right tags should be. -

## Can I discuss questions and answers on the site?

In addition to questions and answers, cstheory.sx supports, like other Stack Exchange sites, comments on questions and answers. This supports discussions, and is important for improving posts, challenging possible mistakes, and bringing attention to relevant resources. Of course, discussions only work if the participants know of the comments. Besides comments appearing after the question or answer they comment on, the author of the question or answer will see that you have commented, as will anyone who you have notified by @name, where the site tries to figure out who @name might be by looking at the user names of other users who have participated in the thread. You need at least the first three characters of the user name for the site to figure out that the @nam might be about them. For more details on comment replies, see “How do comment replies work?” on meta.stackoverflow.com. -

## My question is not a research-level question in TCS, where can I ask it?
For questions other than research-level questions in TCS, you may want to consider the following places to ask: • General Computer Science — Computer Science - Stack Exchange • General Mathematics (including theoretical CS) — Mathematics - Stack Exchange • Research-level Mathematics — Math Overflow • Social and Professional Academic Issues — Academia - Stack Exchange • Programming — Stack Overflow • General Artificial Intelligence — Meta Optimize • Statistics and Data Mining — Cross Validated • Applied Cryptography and Security — Crypto - Stack Exchange • Computation used in Science and Engineering — Computational Science - Stack Exchange
http://www.physicsforums.com/showthread.php?t=90831
Physics Forums

## lagrange multipliers

Find the shortest and longest distance from the origin to the curve $$x^2 + xy + y^2=16$$ and give a geometric interpretation. The hint given is to find the maximum of $$x^2+y^2$$. I am not sure what to do for this problem; thanks.

Quote by thenewbosco: Find the shortest and longest distance from the origin to the curve $$x^2 + xy + y^2=16$$ and give a geometric interpretation. The hint given is to find the maximum of $$x^2+y^2$$.

Are you sure you need Lagrange multipliers for this?

It says in the hint to use the method of Lagrange multipliers to find the maximum of $$x^2 + y^2$$, but I am not sure how to do it using any method, so any help is appreciated.

## lagrange multipliers

Quote by thenewbosco: It says in the hint to use the method of Lagrange multipliers to find the maximum of $$x^2 + y^2$$, but I am not sure how to do it using any method, so any help is appreciated.

Solve for y and use the rate of change with respect to the distance; that is the "Calc 1 method". The path equation is the constraint, I think. Apply Lagrange multipliers to the distance formula.

Solve for y in what, though? In the question it says $$x^2+y^2$$, but this isn't even an equation. I'm sorry, I still don't get it.

Quote by thenewbosco: Solve for y in what, though? In the question it says $$x^2+y^2$$, but this isn't even an equation.
I'm sorry, I still don't get it.

You can solve for y in terms of x and then use the distance formula D = (y^2+x^2)^0.5: substitute the y equation into the distance formula, take the first derivative, find its zeros, and test them. Done. That is the "Calc 1 method", and it requires a lot of work.

$$x^2+y^2$$ looks really similar to the distance formula: $$D^2 = x^2 + y^2$$. You can set $$D = f(x)$$ or $$D^2 = f(x)$$ and find the extremum of it; since squaring doesn't change where the extremum occurs, the text tells you to find the max of $$x^2+y^2$$.
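For the record, the hint works out cleanly: writing the constraint as $v^T A v = 16$ with $A = \begin{pmatrix}1 & 1/2\\ 1/2 & 1\end{pmatrix}$, the Lagrange condition $\nabla(x^2+y^2) = \lambda\,\nabla(x^2+xy+y^2)$ says the extremal point $v$ is an eigenvector of $A$, and if $Av = \mu v$ then $16 = v^TAv = \mu|v|^2$, so the squared distance is $16/\mu$. A quick numerical sketch (plain Python, not from the thread):

```python
import math

# Constraint x^2 + x*y + y^2 = 16 as v^T A v = 16, A = [[1, 1/2], [1/2, 1]].
# Lagrange condition: v is an eigenvector of A; squared distance = 16/mu.
a11, a12, a22 = 1.0, 0.5, 1.0
tr = a11 + a22
det = a11 * a22 - a12 * a12
disc = math.sqrt(tr * tr - 4.0 * det)
mu_max, mu_min = (tr + disc) / 2.0, (tr - disc) / 2.0   # eigenvalues 3/2 and 1/2

d_short = math.sqrt(16.0 / mu_max)   # shortest distance, sqrt(32/3) ~ 3.266
d_long = math.sqrt(16.0 / mu_min)    # longest distance,  sqrt(32)   ~ 5.657

# Geometric interpretation: the curve is an ellipse with minor axis along y = x
# (closest points) and major axis along y = -x; e.g. (4, -4) lies on the curve
# (16 - 16 + 16 = 16) at distance sqrt(32) from the origin.
assert abs(4**2 + 4 * (-4) + (-4)**2 - 16) < 1e-12
print(d_short, d_long)
```

The geometric picture answering the exercise: the ellipse is a circle stretched along the $y=-x$ direction and compressed along $y=x$, and the extremal distances are its semi-axes.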
http://www.impan.pl/cgi-bin/dict?conceivably
## conceivably

[see also: possibly] Conceivably, $S$ may also contain other sets.
http://stats.stackexchange.com/questions/31066/what-is-the-influence-of-c-in-svms-with-linear-kernel
# What is the influence of C in SVMs with linear kernel? I'm currently using an SVM with a linear kernel to classify my data. There is no error on the training set. I tried several values for the parameter C (10^-5, ..., 10^2). This did not change the error on the test set. Now I wonder: is this an error caused by the ruby bindings for libsvm I am using (https://github.com/febeling/rb-libsvm) or is this theoretically explainable? Should the parameter C always change the performance of the classifier? - Just a comment, not an answer: Any program that minimizes a sum of two terms, such as $|w|^2 + C \sum{ \xi_i },$ should (imho) tell you what the two terms are at the end, so that you can see how they balance. (For help on computing the two SVM terms yourself, try asking a separate question. Have you looked at a few of the worst-classified points ? Could you post a problem similar to yours ?) – Denis Jul 13 '12 at 20:02 ## 2 Answers The C parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly. Conversely, a very small value of C will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. For very tiny values of C, you should get misclassified examples, often even if your training data is linearly separable. - – alfa Jun 24 '12 at 12:31 1 I would suggest trying a wider range of C values, maybe 10^[-5,...,5], or more if the optimization is fast on your dataset, to see if you get something that looks more reasonable. Both the training error and the value of the minimum cost should change as C is varied. Also, is the scale of your data extreme? 
In general, an optimal C parameter should be larger when you scale down your data, and vice versa, so if you have very small values for features, make sure to include very large values for the possible C values. If none of the above helps, I'd guess the problem is in the ruby bindings – Marc Shivers Jun 24 '12 at 19:59 What I said is partially wrong. Actually, the value of C has an influence but it is marginal. I am calculating the balanced accuracy ((tp/(tp+fn)+tn/(tn+fp))/2) on my test set. If the complexity is 10^-5 or 10^-4, the balanced accuracy will be 0.5. When I set C to 10^-3 it is 0.79, for C=10^-2 it is 0.8, for C=10^-1 it is 0.85 and for C=10^0,...,10^7 it is 0.86, which seems to be the best possible value here. The data is normalized such that the standard deviation is 1 and the mean is 0. – alfa Jun 25 '12 at 9:43 2 changing the balanced accuracy from 0.5 (just guessing) to 0.86 doesn't sound like a marginal influence to me. It would be a good idea to investigate a finer grid of values for C as Marc suggests, but the results you have given seem to be fairly normal behaviour. One might expect the error to go back up again as C tends to infinity due to over-fitting, but that doesn't seem to be much of a problem in this case. Note that if you are really interested in balanced error and your training set doesn't have a 50:50 split, then you may be able to get better results...
It is analogous to the ridge parameter in ridge regression (in fact in practice there is little difference in performance or theory between linear SVMs and ridge regression, so I generally use the latter - or kernel ridge regression if there are more attributes than observations). Tuning C correctly is a vital step in best practice in the use of SVMs, as structural risk minimisation (the key principle behind the basic approach) is partly implemented via the tuning of C. The parameter C enforces an upper bound on the norm of the weights, which means that there is a nested set of hypothesis classes indexed by C. As we increase C, we increase the complexity of the hypothesis class (if we increase C slightly, we can still form all of the linear models that we could before and also some that we couldn't before we increased the upper bound on the allowable norm of the weights). So as well as implementing SRM via maximum margin classification, it is also implemented by limiting the complexity of the hypothesis class via controlling C. Sadly the theory for determining how to set C is not very well developed at the moment, so most people tend to use cross-validation (if they do anything). - OK, I think I understand the meaning of C now. :) – alfa Jun 25 '12 at 19:20
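To make the trade-off concrete, here is a minimal sketch of a linear SVM (bias omitted) trained by subgradient descent on the objective $\frac{1}{2}\|w\|^2 + C \sum_i \max(0,\, 1 - y_i \langle w, x_i \rangle)$. This is an illustration only, not libsvm or the poster's setup; the toy data, learning rate, and step count are made up. The point is just that a small C keeps the weight norm small (a wide margin), while a large C lets it grow to push every point past the margin.

```python
def train_svm(xs, ys, C, steps=2000, lr=0.01):
    """Subgradient descent on 0.5*||w||^2 + C * sum of hinge losses."""
    w = [0.0] * len(xs[0])
    for _ in range(steps):
        grad = list(w)                      # gradient of the 0.5*||w||^2 term
        for x, y in zip(xs, ys):
            if y * sum(wi * xi for wi, xi in zip(w, x)) < 1:
                for j in range(len(w)):     # hinge term is active here
                    grad[j] -= C * y * x[j]
        for j in range(len(w)):
            w[j] -= lr * grad[j]
    return w

# Linearly separable toy data in the plane.
xs = [(2.0, 1.0), (3.0, 2.0), (-2.0, -1.0), (-3.0, -2.0)]
ys = [1, 1, -1, -1]
norm = lambda v: sum(vi * vi for vi in v) ** 0.5

w_small = train_svm(xs, ys, C=0.01)
w_large = train_svm(xs, ys, C=10.0)
print(norm(w_small) < norm(w_large))   # True: small C => smaller weights
```

With C=0.01 the weights settle near the origin (all points sit inside the margin, which is the "larger-margin, more misclassification allowed" regime); with C=10 they grow until the closest points clear the margin.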
http://math.stackexchange.com/questions/107243/cohomology-of-finite-cyclic-groups
# Cohomology of finite cyclic groups I got stuck on the following: Let $G$ be a finite cyclic group. Then it is a well-known fact, that one can compute its Tate-cohomology groups from the complex $$\cdots\xrightarrow{\;\tau \;-\;\text{Id} \;}M \xrightarrow{\;\;Tr_{G}\;\;} M \xrightarrow{\;\tau \;-\;\text{Id} \;} M \xrightarrow{\;\;Tr_G\;\;} M \xrightarrow{\;\tau \;-\;\text{Id} \;}\cdots$$ where $\tau$ is multiplication by some generator of $G$ and $Tr_G(x)= \sum_{g \in G} gx$. Can you tell me why this defines an acyclic resolution of our module? I do not see why this complex necessarily has to compute the cohomology groups. - ## 1 Answer It doesn't define an acyclic resolution (at least, unless $M$ is acyclic). But one can show that the cohomology groups that arise from that complex are isomorphic to the usual ones as follows: The following is from Chapter VIII, $\S$4 of Serre's Local Fields (I changed the notation a bit to fit yours though): Define a cochain complex $K$ as follows: $K^i=\mathbb{Z}[G]$ for all $i$, $d:K^i\to K^{i+1}$ is $(\tau - \text{Id})$ if $i$ is even and $Tr_G$ if $i$ is odd. For each $G$-module $A$, put $K(A)=K\otimes_{\mathbb{Z}[G]}A$. Then $K^i(A)=A$ for all $i$, with the induced maps being the same. An exact sequence $0\to A\to B\to C\to 0$ gives rise to an exact sequence of complexes $0\to K(A)\to K(B)\to K(C)\to 0$ whence to an exact cohomology sequence, and, in particular, to a coboundary operator $\delta$. Proposition 6. The cohomological functor $\{H^q(K(-)),\delta\}$ is isomorphic to the functor $\{H^q(G,-),\delta\}$. First of all it is clear that $\widehat{H}{}^0(G,A)=H^0(K(A))$, $\widehat{H}{}^{-1}(G,A)=H^{-1}(K(A))$, and that the coboundary operator $\delta$ relating $H^0$ to $H^{-1}$ is the same. Hence $$H^q(K(A))=0$$ for $q=0,-1$ when $A$ is relatively projective, thence for all $q$ (as the $H^q(K(A))$ depend only on the parity of $q$). That suffices to give the isomorphism. 
The last bit uses the fact that a $G$-module is relatively projective if and only if it is relatively injective when $G$ is finite, together with an abstract characterization of derived functors (of which group cohomology is an example) - see the last sentence of the section here. -
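As a toy sanity check of this two-periodicity (my own addition, not from Serre): take $G=\mathbb{Z}/n$ acting trivially on $M=\mathbb{Z}/m$. Then $\tau - \text{Id}$ is the zero map and $Tr_G$ is multiplication by $n$, so the complex in the question gives groups of order $\gcd(n,m)$ in every degree. A few lines of Python confirm the orders:

```python
from math import gcd

def tate_orders(n, m):
    """Orders of H^even and H^odd of the 2-periodic complex for
    G = Z/n acting trivially on M = Z/m (tau - Id = 0, Tr = mult by n)."""
    M = list(range(m))
    ker_tau = M                                   # kernel of the zero map
    im_tr = {(n * x) % m for x in M}              # image of multiplication by n
    ker_tr = [x for x in M if (n * x) % m == 0]
    im_tau = {0}                                  # image of the zero map
    return len(ker_tau) // len(im_tr), len(ker_tr) // len(im_tau)

for n, m in [(2, 4), (3, 6), (4, 10)]:
    print(tate_orders(n, m), gcd(n, m))   # both orders agree with gcd(n, m)
```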
http://math.stackexchange.com/questions/76037/curvature-of-a-connection-of-vector-bundle
# Curvature of a connection on a vector bundle Let $X$ be a scheme or manifold and $\nabla: V \rightarrow \Omega^1 \otimes V$ be a connection on a vector bundle $V$ on $X$. Let $R:=\nabla^2$ denote the curvature homomorphism. Does it hold that $\nabla R =0$? How does one show this? - 2 If your $V$ is $TX$ with $X$ a Riemannian manifold, and the connection is the Levi-Cività connection, then there is a well known Bianchi identity which would be silly if $\nabla R$ were zero :) Indeed, in this situation, the condition $\nabla R=0$ is the definition of locally symmetric spaces, and not all Riemannian manifolds are locally symmetric. – Mariano Suárez-Alvarez♦ Oct 26 '11 at 12:21
http://math.stackexchange.com/questions/150015/determine-x-for-which-the-function-has-the-maximum-value
# Determine $x$ for which the function has the maximum value? We have $f:R\rightarrow R$ $f(x)=\frac{3x-1}{3x^2+1}$ Determine $x$ for which the function has the maximum value. How can I determine the maximum of this function? Thank you very much in advance! - I have been teaching calculus for years and I have no idea what you are referring to with $a$ or what you mean by "peak/delta formula". It's got me a little curious! – rschwieb May 26 '12 at 12:29 By a>0 I meant the concavity or convexity of the function. We can determine this for a grade 2 function by only looking at $x^2$ coefficient. By peak/delta I meant $V(\frac{-b}{2a};\frac{-\delta}{4a})$ which would work for a parabola. Didn't exactly know how to translate it into English from my language. Sorry for the misunderstandings! – Grozav Alex Ioan May 26 '12 at 12:33 1 No problem, I was speculating you meant this. However this is not a quadratic (grade 2) function at all, so it would definitely be inappropriate to apply the method. It's good though that you sought to use a simple method first! Some students just blindly use the sledgehammer... – rschwieb May 26 '12 at 12:36 ## 2 Answers You can first observe that it has a horizontal asymptote at y=0, so it is not going to go up or down forever on the ends. In this case, it's probably best to go straight for the derivative and find the critical points $x=1$ and $x=-1/3$. Doing the first or second derivative test tells you the max occurs at $x=1$, and so that maximum value of $f$ is $1/2$. Fill in the gaps! - Thank you for your help! Makes sense :).. The fractions were confusing me, that's why I didn't try the extreme points from the start. 
– Grozav Alex Ioan May 26 '12 at 12:41 I don't know what you mean by "$a>0$", but you can solve this by just taking the derivative and setting it equal to zero: $$f'(x) = \frac{(3x^2 + 1) 3 - (3x-1)(6x)}{(3x^2 +1)^2} = \frac{-9x^2 + 6x +3}{(3x^2+1)^2}$$ $$= -3 \frac{3x^2 - 2x - 1}{(3x^2 + 1)^2}$$ So just find the roots of $3x^2 - 2x - 1=0$, assuming my algebra doesn't have any mistakes. Take whichever one has a larger value when you plug into $f$, and then show that it is, in fact, a maximum. -
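As a numerical cross-check of both answers (my own addition): scanning $f$ on a grid confirms that the maximum value $1/2$ is attained at $x=1$, matching the critical points $x=1$ and $x=-1/3$ of $f'$.

```python
def f(x):
    return (3 * x - 1) / (3 * x * x + 1)

xs = [i / 1000 for i in range(-5000, 5001)]   # grid on [-5, 5]
best = max(xs, key=f)
print(best, f(best))   # -> 1.0 0.5
```

The horizontal asymptote at $y=0$ guarantees the global maximum is an interior critical point, so a scan over a modest interval suffices.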
http://chemistry.stackexchange.com/questions/2563/computation-of-ph-when-an-acid-and-base-are-mixed-in-solution
# Computation of pH when an acid and base are mixed in solution I'm doing a basic chemistry course, and we are currently learning how to compute $\text{pH}$ from the acid dissociation constant (using $\left[\text{H}^{+}_{(\text{aq})}\right]=\sqrt{K_{a}\left[\text{HA}_{(\text{aq})}\right]}$) along with computing the $\text{pH}$ of strong bases by assuming full dissociation into $\text{OH}^{-}$ ions, and then using the ionic product of water to calculate the concentration of protons. The question I have been given is: Calculate the pH of the solution obtained when $14.9\text{ cm}^{3}$ of $0.100\text{ mol dm}^{-3}$ sodium hydroxide solution has been added to a $25.0\text{ cm}^{3}$ solution of methanoic acid of concentration $0.100\text{ mol dm}^{-3}$ ($K_{a}=1.60\times 10^{-4}\text{ mol dm}^{-3}$) I'm not sure how I should go about solving this problem. I can calculate concentrations of hydrogen and hydroxide ions for each of the solutions but I'm unsure how to combine them (as presumably some of the $\text{OH}^{-}$ ions will react with the $\text{H}^{+}$ ions to form $\text{H}_{2}\text{O}$?). Thanks in advance! - ## 1 Answer You are headed in the right direction. Pretend that the $\ce{H+}$ ions and the $\ce{OH-}$ ions neutralize each other 1 for 1. That will leave you with only one of those ions. The water ionization won't affect your problem as that ionization is suppressed by the excess ion present in the solution. Now you should be able to solve the problem. -
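Following the answer's hint, here is a sketch of the arithmetic (my own worked example, so verify it against your course's method): the added $\text{OH}^{-}$ converts an equal number of moles of acid to methanoate, leaving a buffer, and since both species share the same final volume, only the mole ratio matters.

```python
from math import log10

Ka = 1.60e-4                    # mol dm^-3
n_base = 0.0149 * 0.100         # mol of NaOH added (14.9 cm^3 of 0.100 M)
n_acid = 0.0250 * 0.100         # mol of HCOOH initially (25.0 cm^3 of 0.100 M)

n_conj = n_base                 # HCOO- produced by the 1-for-1 neutralization
n_HA = n_acid - n_base          # HCOOH left over

h = Ka * n_HA / n_conj          # [H+] = Ka * [HA]/[A-]; the volume cancels
print(round(-log10(h), 2))      # -> 3.96
```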
http://mathoverflow.net/questions/5479/milnors-cartography-problem
## Milnor's cartography problem Let $\Omega$ be a round disc of radius $\alpha<\pi/2$ on the standard sphere. It is easy to construct a $(1,\tfrac{\alpha}{\sin\alpha})$-bi-Lipschitz map from $\Omega$ to the plane. Is it true that any convex domain $\Omega'$ on $S^2$ with the same area as $\Omega$ also admits a $(1,\tfrac{\alpha}{\sin\alpha})$-bi-Lipschitz map to the plane? Comments: • This problem appears in Milnor, A problem in cartography, Amer. Math. Monthly 76 (1969), 1101–1112. • I spent quite a bit of time trying to solve it, but without success. I only noticed that if one exchanges "area" above for "perimeter", then the answer is YES. - 2 In the non-convex case: Are you defining distance as spherical distance, or distance along paths that stay in your domain? (For spherical distance and a global Lipschitz condition, I think there are quick counterexamples.) – Martin M. W. Nov 14 2009 at 15:33 Right, the condition is local. – Anton Petrunin Nov 14 2009 at 15:58 Could it be that for the nonconvex problem you still need that the set be simply-connected or something? Otherwise you can take a small-area neighborhood of a "triangulation by tiny triangles"'s 1-skeleton and I think it makes a counterexample, since if none of the grid's 1-cycles expands a huge amount, then the inverse map has a huge Lipschitz constant. – Mircea Mar 22 2012 at 12:39 1 @Mircea, well, let's do convex first. You are definitely right if the Lipschitz condition is global; if it is only local then I am not sure. – Anton Petrunin Mar 22 2012 at 19:14 @Anton Petrunin, I agree, the maps could still "crumple" the skeleton while keeping the local condition. Can I ask you how it all works in the case of perimeter?
I thought that if one takes the (globally) 1-Lipschitz map defined just on the boundary whose image encloses maximum area, then that should extend to the wanted map (as the projection did for the disk), but I got stuck proving that. – Mircea Mar 24 2012 at 11:55
http://mathhelpforum.com/algebra/194394-how-solve-x-2-2x-8-0-algebraically.html
# Thread: 1. ## How to solve x^2 + 2x - 8 > 0 algebraically $x^2 + 2x - 8 > 0$ $(x + 1)^2 - 9 > 0$ $(x + 1)^2 > 9$ $x + 1 > 3$ But what to do about the -3. Eg Do I say: $x + 1 > -3$ or $x + 1 < -3$ It seems that it has to be x + 1 < -3 - works if look at curve graphically. But then what if problem is $x^2 + 2x - 8 < 0$ $(x + 1)^2 < 9$ $x + 1 < 3$ and $x + 1 < -3$ which is wrong. Should be $x + 1 < 3$ and $x + 1 > -3$. -4 < x < 2 - can see graphically that is under x axis and so between -4 and 2. So what is the rule for bit when get square root of either side??? 2. ## Re: How to solve x^2 + 2x - 8 > 0 algebraically Originally Posted by angypangy $x^2 + 2x - 8 > 0$ $(x + 1)^2 - 9 > 0$ $(x + 1)^2 > 9$ $x + 1 > 3$ But what to do about the -3. Eg Do I say: $x + 1 > -3$ or $x + 1 < -3$ It seems that it has to be x + 1 < -3 - works if look at curve graphically. But then what if problem is $x^2 + 2x - 8 < 0$ $(x + 1)^2 < 9$ $x + 1 < 3$ and $x + 1 < -3$ which is wrong. Should be $x + 1 < 3$ and $x + 1 > -3$. -4 < x < 2 - can see graphically that is under x axis and so between -4 and 2. So what is the rule for bit when get square root of either side??? You need to say that $\pm(x+1) > 3$ Then either $x+1 > 3 \text{ or } -(x+1) > 3 \Leftrightarrow x+1 < -3$ 3. ## Re: How to solve x^2 + 2x - 8 > 0 algebraically Hello, angypangy! $x^2 + 2x - 8 \,>\, 0$ $(x + 1)^2 - 9 \,>\, 0$ $(x + 1)^2 \,>\, 9$ $x + 1 \,>\, 3$ . . . . Not quite When we take the square root, we get: . $|x+1| \:>\:3$ This means: . $\begin{Bmatrix}x+1 \:>\:3 & \Rightarrow & x \:>\:2 \\ & \text{or} \\ x+1 \:<\:\text{-}3 & \Rightarrow & x \:<\:\text{-}4 \end{Bmatrix}$ The solution is: . $(\text{-}\infty,\,-4)\,\cup\,(2,\,\infty)$ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ Graphically, we have a parabola: . $y \:=\:x^2+2x-8$ The question is: when is the graph above the x-axis? Since the parabola opens upward, . . it is positive outside of its x-intercepts. The x-intercepts are: .
$(x+4)(x-2) \:=\:0 \quad\Rightarrow\quad x \:=\:\text{-}4,\,2$ And we can "see" the solution . . . Code: ``` | ♥ | ♥ | | ♥ | ♥ | ♥ | ♥ - - * - - - + - * - - - -4 * | * 2 * | |``` If the problem were: . $x^2 + 2x - 8 \;\;{\color{red}<}\;\;0$ . . the solution is between the x-intercepts. . . . . . $\text{-}4 \;<\;x\,<\;2$
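The casework above is easy to check numerically (my own addition): on a grid of sample points, $x^2 + 2x - 8 > 0$ holds exactly when $x < -4$ or $x > 2$.

```python
# Check the equivalence x^2 + 2x - 8 > 0  <=>  x < -4 or x > 2 on a grid.
xs = [i / 10 for i in range(-100, 101)]
ok = all((x * x + 2 * x - 8 > 0) == (x < -4 or x > 2) for x in xs)
print(ok)   # -> True
```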
http://mathhelpforum.com/number-theory/186293-number-positive-divisors-n.html
# Thread: 1. ## Number of positive divisors of n Is it true that for all positive integers n, the inequality $d(n)\leq 1 + \log_2 n$ holds? 2. ## Re: Number of positive divisors of n Originally Posted by alexmahone Is it true that for all positive integers n, the inequality $d(n)\leq 1 + \log_2 n$ holds? I assume from the title that $d(n)$ is the number of divisors of $n$. Does that hold for $n=144~?$ 3. ## Re: Number of positive divisors of n Originally Posted by Plato I assume from the title that $d(n)$ is the number of divisors of $n$. Does that hold for $n=144~?$ 144 = 12^2 d(144) = 2 + 1 = 3 $1 + \log_2 144 \approx 1 + 7.17 = 8.17$ It does hold for n = 144. 4. ## Re: Number of positive divisors of n Uh... no! 1 2 3 4 6 8 9 12 16 ... all divide 144, so the left hand side is larger than the right hand side 5. ## Re: Number of positive divisors of n Originally Posted by TheChaz Uh... no! 1 2 3 4 6 8 9 12 16 ... all divide 144, so the left hand side is larger than the right hand side Oops ... I should have done: 144 = 2^4 * 3^2 d(n) = (4 + 1)(2 + 1) = 15 6. ## Re: Number of positive divisors of n Originally Posted by alexmahone 144 = 12^2 d(144) = 2 + 1 = 3 $1 + \log_2 144 \approx 1 + 7.17 = 8.17$ It does hold for n = 144. Because $144=2^4\cdot 3^2$ then $d(144)=(4+1)(2+1)=15$. 7. ## Re: Number of positive divisors of n Perhaps there is a mistake in the book: Is $d(n)\geq 1 + \log_2 n$ true? 8. ## Re: Number of positive divisors of n Originally Posted by alexmahone Perhaps there is a mistake in the book: Is $d(n)\geq 1 + \log_2 n$ true? The divisor bound « What’s new
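A brute-force check (my own addition, in the spirit of the linked divisor-bound post): $d(n)\leq 1+\log_2 n$ already fails at $n=144$, and the reversed inequality $d(n)\geq 1+\log_2 n$ fails too, e.g. at $n=3$, so neither direction holds for all $n$ as written.

```python
from math import log2

def d(n):
    """Number of positive divisors of n, by trial division."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

print(d(144), 1 + log2(144))   # 15 vs about 8.17: d(n) <= 1 + log2(n) fails
print(d(3), 1 + log2(3))       # 2 vs about 2.58: d(n) >= 1 + log2(n) fails
```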
http://mathoverflow.net/questions/58696/why-study-lie-algebras/58779
## why study Lie algebras? I don't mean to be rude asking this question, I know that the theory of Lie groups and Lie algebras is a very deep one, very aesthetic and that has broad applications in various areas of mathematics and physics. I attended a course on Lie groups, and an elementary one on Lie algebras. But I don't fully understand how those theories are being applied. I actually don't even understand the importance of Lie groups in differential geometry. I know, among others, of the following facts: $1)$ If $G$ and $H$ are two Lie groups, with $G$ simply connected, and $\mathfrak{g,h}$ are their respective Lie algebras, then there is a one-to-one correspondence between Lie algebra homomorphisms $\mathfrak{g}\rightarrow\mathfrak{h}$ and group homomorphisms $G\rightarrow H$. $2)$ The same remains true if we replace $H$ with any manifold $M$: any Lie algebra homomorphism from $\mathfrak{g}$ to the Lie algebra $\Gamma(TM)$ of smooth vector fields on $M$ gives rise to a local action of $G$ on $M$. $3)$ Under some conditions like (I think) compactness, the cohomology of $\mathfrak{g}$ is isomorphic to the real cohomology of the group $G$. I know that calculating the cohomology of $\mathfrak{g}$ is tractable in some cases. $4)$ There is a whole lot to be said about the representation theory of Lie algebras. $5)$ Compact connected centerless Lie groups $\leftrightarrow$ complex semisimple Lie algebras How do people use Lie groups and Lie algebras? What questions do they ask for which Lie groups or algebras will be of any help? And if a geometer reads this, how (if at all) do you use Lie theory? How is the representation theory of Lie algebras useful in differential geometry? Thank you for your time - 3 I am not sure that the question in your title is the same as the ones you ask in your penultimate paragraph.
(I think "why should I study X" questions are somewhat unsatisfactory; also, in your main questions, who do you mean by "people"? Number theorists? Fluid dynamics researchers? PDE people? Actuaries?) – Yemon Choi Mar 17 2011 at 0:31 6 There is a school of thought which suggests that, just occasionally, one studies things to understand them better, not just to find applications of them to Better Banks and Better Bombs... – Yemon Choi Mar 17 2011 at 0:34 2 well, I was hoping to understand. Thus far I have no clue why Lie groups are of such importance in differential geometry. Many of the questions asked in differential geometry I find natural, like trying to write a Riemannian metric as the usual metric in a chart, and finding the obstruction to the existence of such local coordinates. Or given a distribution, asking whether it is tangent to a submanifold, and finding the obstruction. Trying to identify vector bundles, and telling them apart. These are all questions that resonate with me. I don't yet understand the motivation behind Lie theory – Olivier Bégassat Mar 17 2011 at 0:50 21 I wasn't talking about "Better Banks and Better Bombs", and the uses of Lie theory which I enumerate certainly don't suggest that. It's an honest question, do you think it is a ridiculous one? Also by "people" I mean any mathematician. Lie theory is very much a mystery to me, even though I know the basics. I'm interested in how it is being used by various branches of mathematics, and why it plays such a prominent role. You know, $\emph{understand}$ its role? – Olivier Bégassat Mar 17 2011 at 1:08 8 There are four votes to close right now. I want to register a vote for this question to stay open. Thus if you want to vote to close this question, don't click on the "vote to close"; rather, leave a comment canceling my vote to keep open (as per the agreed-upon policy).
– Andy Putman Mar 18 2011 at 0:26 ## 10 Answers Here is a brief answer: Lie groups provide a way to express the concept of a continuous family of symmetries for geometric objects. Most, if not all, of differential geometry centers around this. By differentiating the Lie group action, you get a Lie algebra action, which is a linearization of the group action. As a linear object, a Lie algebra is often a lot easier to work with than working directly with the corresponding Lie group. Whenever you do different kinds of differential geometry (Riemannian, Kähler, symplectic, etc.), there is always a Lie group and algebra lurking around either explicitly or implicitly. It is possible to learn each particular specific geometry and work with the specific Lie group and algebra without learning anything about the general theory. However, it can be extremely useful to know the general theory and find common techniques that apply to different types of geometric structures. Moreover, the general theory of Lie groups and algebras leads to a rich assortment of important explicit examples of geometric objects. I consider Lie groups and algebras to be near or at the center of the mathematical universe and among the most important and useful mathematical objects I know. As far as I can tell, they play central roles in most other fields of mathematics and not just differential geometry. ADDED: I have to say that I understand why this question needed to be asked. I don't think we introduce Lie groups and algebras properly to our students. They are missing from most if not all of the basic courses. Except for the orthogonal and possibly the unitary group, they are not mentioned much in differential geometry courses. They are too often introduced to students in a separate Lie group and algebra course, where everything is discussed too abstractly and too isolated from other subjects for my taste.
- Do you have any recommendations aimed at beginning graduate students regarding books or articles that discuss Lie groups and algebras not too abstractly and too isolated from other subjects? – bavajee Apr 30 2011 at 22:18 1 @bavajee: John Lee's "Introduction to Smooth Manifolds", and Spivak's "comprehensive introduction to differential geometry" are such sources. – Igor Belegradek May 1 2011 at 2:31 Here is a very fundamental way to create interesting Riemannian manifolds: Let $G$ be a semi-simple Lie group, let $K$ be its maximal compact subgroup, let $\Gamma$ be a discrete subgroup of $G$, and form $G / K.$ This quotient is called the symmetric space attached to $G$. The Riemannian structure comes from an invariant metric on $G$, and so $G$ acts as isometries on $G/K$ by left translation. If you consider the case $G = SL_2(\mathbb R)$, you get $SL_2(\mathbb R)/SO(2)$, which is naturally identified with the complex upper half-plane (on which $SL_2(\mathbb R)$ acts via Möbius transformations; note that the point $i$ is stabilized precisely by $SO(2)$), which is also the hyperbolic plane. Other groups give higher dimensional hyperbolic spaces (e.g. $SL_2(\mathbb C)$ gives hyperbolic $3$-space), the Siegel upper half-spaces (from symplectic groups), complex balls, and many other well-known spaces. If you now take a discrete subgroup $\Gamma$ of $G$, you can form the double quotient $\Gamma \backslash G /K$. These are some of the most celebrated Riemannian manifolds in mathematics. In the case of $SL_2(\mathbb R)$, we know via uniformization that all genus $\geq 2$ Riemann surfaces can be described in this way. In the case of $SL_2(\mathbb C)$ we get hyperbolic $3$-manifolds, from symplectic groups we get moduli spaces of abelian varieties, ... .
Now (as the preceding discussion hopefully makes clear), lots of these spaces are known by other names that don't involve Lie theory, and can be studied in a non-Lie-theoretic way. But the Lie-theoretic perspective provides a unifying, and frequently clarifying, point of view. For example, cohomological or function-theoretic invariants of these spaces can often be described and computed via Lie theoretic tools (e.g. via Lie algebra cohomology of certain unitary representations of the group $G$). As a concluding remark, let me note that a general principle is that when certain symmetries are implict in a given context (e.g. $SL_2(R)$ being the group of hyperbolic isometries of the upper half-plane), it is good to explicitly bring them to the fore and take them into account. In geometry, the symmetry groups that appear (of a space, or perhaps of its universal cover) are very often Lie groups. And so a little knowledge of Lie theory can turn into a powerful tool for investigating a given geometric situation. P.S. I should also note that the study of spaces $\Gamma \backslash G/K$ for certain $\Gamma$ (so-called congruence subgroups) is one of the basic topics of the Langlands program, and the function theory and cohomology of these spaces (especially their representation-theoretic structure) is conjectured to govern a vast amount of number theory. Trying to understand and work on these conjectures was my own motivation for learning Lie theory. - 4 Minor note for non-experts: the fact that you get hyperbolic manifolds from double quotients of $SL_2(\mathbb{R})$ and $SL_2(\mathbb{C})$ can be explained by exceptional isomorphisms from those two groups to orthogonal groups of signature $(2,1)$ and $(3,1)$, respectively. These isomorphisms let the groups act by norm-preserving transformations on Minkowski space $\mathbb{R}^{n,1}$, so they act by Riemannian transformations on the two-sheeted hyperbolas of norm $-1$ vectors. – S. 
Carnahan♦ Mar 18 2011 at 4:35

Lie's motivation for studying Lie groups and Lie algebras was the solution of differential equations. Lie algebras arise as the infinitesimal symmetries of differential equations, and in analogy with Galois' work on polynomial equations, understanding such symmetries can help understand the solutions of the equations. I found a nice discussion of some of these ideas in Applications of Lie groups to differential equations by Peter J. Olver, in Springer-Verlag's GTM series.

- This is not why most people study Lie algebras these days. – Amritanshu Prasad Nov 12 2011 at 13:05

I like Deane's answer, and I doubt that I can improve upon it, but here is an attempt. One understanding of fundamental particles is that they are representations of classical Lie groups. I think that is reason enough to study them. But more down to earth, the circle is one of the easiest examples of a Lie group to study. Its Lie algebra is the real line. The exponential map is, well, the exponential map $e^{i \theta}$. Circles and lines are important. The next simplest example is the 3-sphere ($SU(2)$), with its Lie algebra 3-space and the Lie bracket giving $i,j,k$. These are really cool examples. The general theory might also be really cool.

- 2 Scott, indeed. The applications to physics are alone enough reason to study Lie groups and algebras. – Deane Yang Mar 17 2011 at 2:42

First of all, your point 3) can be extended to a (sub)class of homogeneous spaces. A very nice example of a use of representation theory is the Hodge theory for Kähler manifolds, as is done e.g. in Wells's book Differential analysis on complex manifolds. On a complex manifold you have a very natural notion of $(p,q)$-forms and of $\partial$ and $\overline{\partial}$ operators. One can view this as a decomposition of exterior forms and the de Rham differential under a subgroup which preserves the geometric structure.
But the story doesn't end here: the crucial notion of primitive cohomology is really best thought of in terms of the representation theory of $\mathfrak{sl}(2,\mathbb{C})$, whose action on exterior forms commutes with the action of the structure group. In this example representation theory helps to organize things and calculations, and there are many similar ones in spirit. E.g. the orthonormal basis of harmonic functions on the sphere consisting of spherical harmonics is also an exercise in representation theory, the advantage of such a basis being the symmetry properties of its functions.

ADDED Let me also try to expand Deane Yang's answer and explain the importance of Lie groups in differential geometry. Bernhard Riemann solved the equivalence problem (i.e. the question whether a sphere is locally isometric to the plane) by developing Riemannian geometry and introducing the crucial invariant, the Riemann curvature. Élie Cartan developed a general method for solving such equivalence problems (see Cartan's equivalence method or Method of moving frames on wikipedia). The notion of Lie group is already explicit there, as it represents the symmetries of the geometrical structure one is interested in. This approach was later developed into what is now called Cartan geometry. Informally, these geometries are curved versions of Klein geometries. The story can be told like this:

1) classical synthetic geometry (Euclidean, projective, Lie sphere geometry, etc.)

2a) Riemann's generalization of Euclidean geometry, introduction of manifolds

2b) Klein's Erlangen program, which postulates that every kind of geometry is determined by a homogeneous space $G/H$

3) Cartan's generalization of these homogeneous spaces in terms of $H$-principal bundles, which subsumes the previous two generalizations

(For details see the book by Sharpe.)
Given a geometrical structure, it is often a hard theorem that the category of manifolds with this structure is isomorphic to (a certain subcategory of) the category of appropriate Cartan geometries. Nevertheless, Cartan's approach gives you a very general and conceptual view of geometries like Riemannian, conformal, projective, Kähler, quaternionic Kähler, hyperKähler, contact-projective, CR, ... Lie algebras and representation theory also appear, because the tangent bundle of $G/H$ can be identified with the homogeneous vector bundle associated to the $H$-representation $\mathfrak{g}/\mathfrak{h}$ (this is one of the linearizations people keep talking about). One can regard the curvature tensor as an element of the tensor product of these, and decomposition into irreducible subrepresentations then gives generalizations of the Weyl and Ricci curvatures from Riemannian geometry. The Dirac operator of mathematical physics can be thought of as a de Rham differential composed with a projection and an intertwining map between certain representations. In fact even such fancy gadgets as Lie algebra cohomology play their role (the keyword being "harmonic curvature"). In the end, you see that in order to understand the appearance of Lie groups in geometry, one has to read Klein's program. The rest is just ingenious technology to allow for nonflat things. ;-)

- As has been said, Lie groups are our best theory encoding continuous symmetry. Lie algebra theory, which is the infinitesimal counterpart, is a theory good enough that numerous problems can be solved by look-up, rather than arguing from first principles. You can look at the history, particularly with Cartan and Weyl; you can look at the examples coming from "commutation relations" people want to study; you can look at representation theory or root systems or the theory of universal enveloping algebras; you can look at string theory or the Langlands philosophy.
It has been found very natural to look at the Lie algebra as a linearised object behind the Lie group, and something easier to study. - Although the title is about Lie algebras, the question body mentions Lie groups, and my answer will deal more with these. As mentioned in other answers, Lie groups show up frequently in geometry as groups of symmetries of geometric objects. For example, given a manifold $M$ we can sometimes find a Lie group $G$ that acts on $M$ in some interesting fashion, and it is then not unreasonable to hope that this action might yield information about both $G$ and $M$. Let's look at something a bit more specific. Suppose we have a compact connected Lie group $G$ acting 'in some nice fashion' on a manifold $M$. Typically what one does in this case is break up $M$ into $G$-orbits, and then study each piece individually. Each orbit will be a homogeneous space $G/H$ of $G$, where $H$ is the stabilizer of some point in the orbit. The space $G/H$ is very symmetric-looking, and one might try to exploit the symmetry to gain some structural information. What we have done -- roughly speaking -- is cast aside the manifold and are now working primarily with the group. Of course an interesting special case is when the action of $G$ on $M$ is transitive, i.e. when there is only one $G$-orbit in $M$ so that $M=G/H$ is itself a homogeneous space. There is so much to say about manifolds of the form $G/H$ that I will restrict myself only to two things. `1)` The computation of the (real) cohomology of $G/H$ becomes a problem involving the Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$ of $G$ and $H$, which are linear algebraic objects! In particular, if $H$ is closed and connected in the compact and connected Lie group $G$ then the cohomology ring $H^\ast(G/H;\mathbb{R})$ is isomorphic to the relative Lie algebra cohomology ring $H^\ast(\mathfrak{g}, \mathfrak{h};\mathbb{R})$. 
For instance, if $H$ is the trivial subgroup, we obtain the isomorphism $H^\ast(G;\mathbb{R}) \cong H^\ast(\mathfrak{g};\mathbb{R})$ mentioned in the OP; and indeed, computing $H^\ast(\mathfrak{g})$ is a much more tractable problem. Another interesting special case is when $H$ is a maximal torus in $G$, but I will not say more about this here...

`2)` Vector bundles over $G/H$ are related to the representation theory of $G$. Strictly speaking, this is only true of equivariant vector bundles, i.e. vector bundles $\pi \colon E \to G/H$ where $G$ acts on the total space $E$ in a way that respects its action on the base $G/H$: that is, we ask that $\pi(ge) = g\pi(e)$ for all $g \in G$ and $e\in E$ and that translation between fibers $E_x \to E_{gx}$ be linear. The fiber lying over the trivial coset in $G/H$ is then seen to carry a representation of $H$. Is there an action of $G$ lurking around? Yes: $G$ acts on the sheaf cohomology $H^\ast(G/H, V)$! Thus we can relate the cohomology $H^\ast(G/H,V)$ to the representation theory of $G$.

A very important special case is when $H$ is a maximal torus $T$ and $V$ is an equivariant (holomorphic) line bundle $L \to G/T$ (let's not fret about the "holomorphic" bit). (There is a miraculous fact that if $G$ is simply connected then every holomorphic line bundle over $G/T$ is automatically equivariant. In particular, this means that even if $G$ isn't simply connected, we always get an action of the Lie algebra $\mathfrak{g}$ of $G$ on $H^\ast(G/H,L)$, even if there is no corresponding action of $G$. In other words, we can use the representation theory of $\mathfrak{g}$ to study $H^\ast(G/H,L)$.)

There is a very explicit description of $H^\ast(G/T, L)$ in terms of the representation theory of $G$: it turns out that either $H^\ast$ vanishes completely, or else it is nonzero in a single degree $q_L$, in which case $H^{q_L}(G/T,L)$ is an irreducible representation of $G$.
(This can be made much more precise; in particular, there is an explicit description of $q_L$ and of the resulting irreducible representation in terms of weights. The key phrase here is "Borel--Weil--Bott theorem.") Here is a concrete example. If $G = \operatorname{SU}(2)$ and $T$ is its diagonal subgroup, then $G/T = \mathbb{C}P^1$, and one can use the Borel--Weil--Bott theorem to describe the cohomology groups $H^\ast(\mathbb{C}P^1, \mathcal{O}(n))$. For instance, the fact that $H^0(\mathbb{C}P^1, \mathcal{O}(n)) = \text{Sym}^n(\mathbb{C}^2)$ (for $n \geq 0$) comes from the fact that $\text{Sym}^n(\mathbb{C}^2)$ is the irreducible representation of $\operatorname{SU}(2)$ of highest weight $n$.

There is another obvious reason why Lie groups are important in geometry: they are themselves geometric objects (namely, manifolds)! So you cannot expect to say something about general manifolds that cannot be said about them. Since Lie groups are a relatively well-behaved class of manifolds, one can use them as a test case of, or a launch pad to, more general results. The same can be said about homogeneous spaces $G/H$. For example, general results like the Atiyah--Bott fixed point formula and the Atiyah--Singer index formulas, when applied to $G/T$ (where $G$ is a compact and connected Lie group and $T$ is a maximal torus), are closely related to the Weyl character formula for $G$.

- In addition to A. Prasad's excellent recommendation (Olver's book), I would suggest you take a look at Helgason's notes; in particular, it is a good idea to check out the bottom of that page. The last three papers in the additional readings section give a non-technical account of the origins of Lie groups.

- Large subfields of modern differential geometry hardly ever use Lie group theory, e.g. they are never mentioned (as far as I can see) in Schoen-Yau's "Lectures on Differential Geometry", and their role in comparison geometry is quite modest.
Major uses of Lie groups in Riemannian geometry are:

1. Holonomy groups.

2. Principal bundles and Chern-Weil theory.

3. Homogeneous and symmetric spaces, as a source of fundamental examples of Riemannian manifolds.

4. Collapsing theory with two-sided curvature bounds (where the local models are nilpotent Lie groups).

Kobayashi-Nomizu's two-volume "Foundations of Differential Geometry" discusses 1, 2, 3 extensively.

- @Amritanshu Is this related to differential Galois theory? I think differential Galois theory answers why some linear differential equations are solvable in known functions... do Lie groups, as used to solve differential equations, have anything to say about this?

- 1 The way they are formulated, not directly. Differential Galois theory is defined rather algebraically, mimicking the Galois theory of fields and algebras; the basic object is a differential field, or more generally a differential ring, which is a ring equipped with a derivation. Unfortunately, a higher-dimensional version with many derivations (fitting into the D-module approach) does not seem to have been developed, just the one-derivation case -- Picard-Vessiot theory, for which there is also a Tannakian approach (see Deligne's article in Grothendieck's Festschrift). – Zoran Škoda May 1 2011 at 9:19

1 (continuing) For the Lie approach the emphasis is on infinitesimal symmetries and higher prolongations. Naturally, the object of study deserves looking for a connection. For an attempt see W. R. Oudshorn, M. van der Put, ams.org/mcom/2002-71-237/S0025-5718-01-01397-7/… – Zoran Škoda May 1 2011 at 9:19

@Zoran Thank you! I got a complete answer for my question through that paper. – Dinesh May 1 2011 at 11:55
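A recurring theme in the answers above is that the Lie algebra is the linearised object behind the Lie group, recovered via the exponential map. In the simplest case, $\mathfrak{so}(2) \to SO(2)$, this can be checked by direct computation; here is a minimal sketch in Python using sympy (purely illustrative, not from the thread):

```python
import math
import sympy as sp

theta = sp.symbols('theta', real=True)

# A Lie algebra element of so(2): an "infinitesimal rotation" by theta.
X = sp.Matrix([[0, -theta],
               [theta, 0]])

# The exponential map carries it into the group SO(2);
# exp(X) should be the rotation matrix [[cos t, -sin t], [sin t, cos t]].
R = X.exp()

# Check numerically at theta = 3/10:
R_num = R.subs(theta, sp.Rational(3, 10)).evalf()
assert abs(R_num[0, 0] - math.cos(0.3)) < 1e-9
assert abs(R_num[1, 0] - math.sin(0.3)) < 1e-9
```

The same pattern (exponentiate a matrix in the Lie algebra to land in the group) is what the $e^{i\theta}$ and $SU(2)$ examples in the thread are describing.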
http://www.physicsforums.com/showthread.php?p=2313775
Physics Forums

## better understanding of causality

An important part of SR is explaining which events may cause other events (e.g., a signal cannot get from A to B when they are space-like separated). Moreover, I vaguely remember that at least some minor arguments in relativity actually rely on the assumption that causality must not be violated (I might be wrong on this, though). Can someone explain to me why we are so worried about preservation of causality?

Let me clarify my question, referring to either the classical relativity theory or the quantum relativity with only unitary equations (it's just too confusing for me to discuss wave function collapse). In either case, the world is completely deterministic: what will happen in the future is fixed and unchangeable just as much as what happened in the past. In this situation, it makes no sense to talk about an information signal from A to B, since what happens at both events is known in advance. An information signal would only be meaningful if an observer at A could change something there, and then notify an observer at B about that change.

It is as if the whole world's past and future is written down in a special book. Would it be meaningful to ask whether a signal from page 100 of the book can reach page 200? No, because the book's pages aren't going to change, and so the signal can carry no information. Thanks!

Assume my superluminal punch. If I punch you, you will be hurt. If this event is spacelike for any one of us (or any other observer), he sees you getting hurt even before I punch. Thus I do not need to punch for you to get hurt. Since this is against conservation -- pt. (1), you could use that energy to punch me even without claiming to have hit me.
Due to point 1, and since it is not good for both of us, causality has to be preserved

Mentor

The problem with causality violations is simply that it is a pretty easy source of paradoxes. Killing your own father, sending a signal which stops you from sending the signal, etc.

Quote by vin300: Assume my superluminal punch. If I punch you, you will be hurt. If this event is spacelike for any one of us (or any other observer), he sees you getting hurt even before I punch. Thus I do not need to punch for you to get hurt. Since this is against conservation -- pt. (1), you could use that energy to punch me even without claiming to have hit me. Due to point 1, and since it is not good for both of us, causality has to be preserved

It would seem that, in the terms of the question, both your FTL punch and the consequent reaction would already be fait accompli. You could no more not spend the energy to enact the intention than the recipient could avoid the consequence. No loss of conservation. Not that I have anything in particular against causality, I am all in favor. But I also don't see any particular logical reason to assume the universe must operate for the convenience and comfort of its inhabitants.

Quote by DaleSpam: The problem with causality violations is simply that it is a pretty easy source of paradoxes. Killing your own father, sending a signal which stops you from sending the signal, etc.

How's this for a resolution to the grandfather paradox? Fred goes back and kills gramps. Fred pops out of existence. Gramps proceeds merrily along and in due time Fred is born and later proceeds back in time etc etc etc. The end result is Fred has created a closed time loop for himself. Deja vu all over again, and again... The rest of the universe rattles right along, with perhaps a few wondering where Fred got off to, but basically unaware that anything untoward has occurred at all. ?????.
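DaleSpam's "signal which stops you from sending the signal" can be restated as a fixed-point problem: a self-consistent history is one where what the device receives equals what it ends up sending back. A toy sketch in Python (purely illustrative logic, not physics):

```python
def consistent_histories(response):
    """Bits b that give a self-consistent loop: the device receives b
    and, applying its response rule, sends back exactly b again."""
    return [b for b in (0, 1) if response(b) == b]

# Paradox device: retransmits the opposite of whatever it receives.
paradox = lambda received: 1 - received
# Benign device: retransmits exactly what it receives.
benign = lambda received: received

print(consistent_histories(paradox))  # [] -- no self-consistent history
print(consistent_histories(benign))   # [0, 1] -- both histories consistent
```

In Novikov-style terms only fixed points of the loop can occur, and the paradoxical device simply has none: that is the contradiction one gets by adding acausal signalling to otherwise unchanged laws.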
Quote by vin300: Assume my superluminal punch. If I punch you, you will be hurt. If this event is spacelike for any one of us (or any other observer), he sees you getting hurt even before I punch. Thus I do not need to punch for you to get hurt. Since this is against conservation -- pt. (1), you could use that energy to punch me even without claiming to have hit me. Due to point 1, and since it is not good for both of us, causality has to be preserved

Quote by DaleSpam: The problem with causality violations is simply that it is a pretty easy source of paradoxes. Killing your own father, sending a signal which stops you from sending the signal, etc.

I don't think either of these is a problem if the laws are deterministic. The critical point is that deterministic laws forbid any free will. Whether or not you punch / kill / send a signal is not something you can decide on the spur of the moment. These actions are fully predetermined by the full solution to the universe's initial conditions. Of course such a solution by definition satisfies all laws of physics. Hence whatever is going to happen will look perfectly reasonable at each point (energy is conserved, dead people don't give birth, etc.).

Mentor

Quote by mmoroz: Whether or not you punch / kill / send a signal is not something you can decide on the spur of the moment. These actions are fully predetermined by the full solution to the universe's initial conditions. Of course such a solution by definition satisfies all laws of physics. Hence whatever is going to happen will look perfectly reasonable at each point (energy is conserved, dead people don't give birth, etc.).

The point is that the laws of physics as currently understood and formulated don't seem to have this "edit out paradoxes" property. For example, we can build a transmitter that will send a 1-bit signal. We can build a receiver that will receive a 1-bit signal.
We can build a processor that triggers the transmitter to send a 0 if it receives a 1 and a 1 if it receives a 0. Clearly none of the laws of physics forbid any of that. And because of causality we do not have to worry about paradoxes.

So, now assume that we want to propose a new set of laws of physics, but we want to relax the requirement of causality while at the same time changing as little as possible about the laws that we have studied. Now, if we take the current set of laws and try to add only non-causal transmission of information, then we get a paradox. It received a 0 so it transmitted a 1, which means that instead of receiving the 0 it actually received a 1, so it transmitted a 0, so ... Thus if we were to simply take the current laws and do a minimal tweak to allow violations of causality, then we would get paradoxes. It is conceivable that there are some self-consistent laws of physics that would forbid such paradoxes, but they would not look very familiar in the "non-causal limit" and yet they would have to reduce to the familiar laws in the "causal limit". That means one of three things:

1) Either the real laws of physics are causal

2) Or the real laws of physics are very strange

3) Or the universe is fundamentally illogical

Personally, I am fine with either 1) or 2), but 3) would really bother me.

Quote by mmoroz: An important part of SR is explaining which events may cause other events (e.g., a signal cannot get from A to B when they are space-like separated). Moreover, I vaguely remember that at least some minor arguments in relativity actually rely on the assumption that causality must not be violated (I might be wrong on this, though). Can someone explain to me why we are so worried about preservation of causality? Let me clarify my question, referring to either the classical relativity theory or the quantum relativity with only unitary equations (it's just too confusing for me to discuss wave function collapse).
In either case, the world is completely deterministic: what will happen in the future is fixed and unchangeable just as much as what happened in the past. In this situation, it makes no sense to talk about an information signal from A to B, since what happens at both events is known in advance. An information signal would only be meaningful if an observer at A could change something there, and then notify an observer at B about that change. It is as if the whole world's past and future is written down in a special book. Would it be meaningful to ask whether a signal from page 100 of the book can reach page 200? No, because the book's pages aren't going to change, and so the signal can carry no information. Thanks!

I think you have come up with a relevant and interesting perception and question. And you are right about SR using the argument that causality must not be violated: it appears in the reductio ad absurdum proof that faster-than-light travel is impossible because it would result in causality violations. SR + FTL = time travel = loss of causality. Of course the whole argument is totally meaningless without the assumption of block time, exactly what you are talking about. And as you have pointed out, if you assume block time, predeterminism, then the very concept of acausality becomes moot, rendered meaningless. So in a neat circular paradox the reductio ad absurdum argument provides a reductio ad absurdum refutation. I myself don't think that FTL, time travel, acausality or total predeterminism are plausible realities, but that's just MO.

Quote by DaleSpam: That means one of three things: 1) Either the real laws of physics are causal 2) Or the real laws of physics are very strange 3) Or the universe is fundamentally illogical. Personally, I am fine with either 1) or 2), but 3) would really bother me.

Why only one, DaleSpam??? We all basically believe the real laws of physics are causal. And the universe.
I can't believe anyone who has ever studied SR or QM didn't at some point think the real laws of physics are very strange. As for three: doesn't the fundamental law of conservation of matter and energy, and intrinsic human logic, inevitably make the existence of the universe itself an insoluble paradox, and illogical??

1) Conservation and logic make it impossible to conceive of something (the universe) emerging from nothing.

2) At the same time the human mind cannot conceive of infinity, of anything existing without a beginning.

3) Or perhaps the best resolution lies in acausality. The universe is evolving over unfathomable time to a final evolved state which results in its own birth. The end immediately precedes the beginning. I realize that this is not really any more conceivable than the other two, but at least it is not mutually exclusive and has a perfect circular symmetry. 0 = $$\infty$$

Quote by DaleSpam: So, now assume that we want to propose a new set of laws of physics but we want to relax the requirement of causality, but at the same time we want to change as little as possible about the laws that we have studied. Now, if we take the current set of laws and try to add only non-causal transmission of information then we get a paradox. It received a 0 so it transmitted a 1, which means that instead of receiving the 0 it actually received a 1, so it transmitted a 0, so ...

Ahh, that's right. Why didn't I think about it? Somehow, I was sure that time travel only presents a paradox if there's free will. The idea of an automatic device that exploits the time travel to create a paradox is so obvious, in retrospect.

Quote by Dmitry67

Sounds like the Polchinski paradox is similar to that 0-1 transmitter construction, except much more precisely defined. I probably didn't understand the Novikov principle correctly.
Apart from being very weird and restrictive (certain boundary conditions are not allowed, at least inside the chronology-violating region of space-time), I'm not even sure how it solves the paradox.

What if we fill the universe with robots that look for wormholes and throw bombs into them at all possible angles? How would the Novikov principle ensure that no bomb enters a wormhole at a "bad" angle (i.e., at an angle that would lead to the bomb blowing up the robot before it could throw it in)? It's all very good that it's not allowed, but what exactly would stop the robot? (Or worse, suppose the robots can actually calculate the angle that creates the paradox.)

Mentor

Quote by Austin0: Why only one, DaleSpam???

Good point, I should have said "at least one".

Quote by Austin0: We all basically believe the real laws of physics are causal. And the universe.

Not really. There is a lot of disagreement and speculation on this topic.

Quote by Austin0: As for three: doesn't the fundamental law of conservation of matter and energy, and intrinsic human logic, inevitably make the existence of the universe itself an insoluble paradox, and illogical?? 1) Conservation and logic make it impossible to conceive of something (the universe) emerging from nothing.
What if we fill the universe with robots that look for wormholes and throw bombs into them at all possible angles. How would Novikov principle ensure that no bomb enters a wormhole at a "bad" angle (i.e., at an angle that would lead to the bomb blowing up the robot before it could throw it in)? It's all very good that it's not allowed, but what exactly would stop the robot? (Or worse, suppose the robots can actually calculate the angle that creates the paradox.) Yes I was thinking something similar when I read it. One can even make the situation very similar to the original billiard ball one. Just replace the ball with a homing missle. After exiting the wormhole it would change it's path to intercept the incoming one, and any touching would cause an explosion, which would remove any glancing-hit solutions to the problem. Has anyone heard of more "advanced" solutions to this paradox than what is written on the wiki page? Re: better understanding of causality -------------------------------------------------------------------------------- Originally Posted by Austin0 Why only one DaleSpam??? Good point, I should have said "at least one". I 'm not sure I had any point,,,I was just having fun Originally Posted by Austin0 We all basically believe the real laws of physics are causal. ANd the universe. Not really. There is a lot of disagreement and speculation on this topic. I wasn't serious there Of course in "objective" mode I have no problem with time symetric waves as long as they dont wander off the Q Rancho. Or most any other strange phenomena. I was talking about waking up in the morning confident that our coffee wasn't going to get hotter while we drink it. ANd how much sleep have you ever lost worrying about post-emptive tachyon strikes???? Originally Posted by Austin0 As for three; doesn't the fundamental law of conservation of matter and energy, and intrinsic human logic, inevitably make the existence of the universe, itself an insoluble paradox and illogical?? 
1) Conservation and logic make it impossible to conceive of something (the universe) emerging from nothing.

The big bang was a singularity. A singularity is "everything in the same place", which is vastly different from "nothing". Although I think there may be other similar problems in terms of conservation of phase-space volume for the universe, but I don't really know how it would apply.

My "point" exactly. The singularity concept cleverly but deviously avoids this "something from nothing" problem. As do spontaneous fluctuations in an "almost" non-existent quantum potential field (before the singularity) and other such valiant, but doomed, attempts to handle this basic paradox. I myself am firmly convinced that the universe is going to continue expanding at an accelerating rate until everything in it goes superluminal and is transformed into a cloud of tachyons going backward in time and space to converge and create the singularity. No, not really.
http://physics.stackexchange.com/questions/2045/ring-theory-in-physics/2048
# Ring theory in physics

Surely group theory is a very handy tool in problems dealing with symmetry. But is there any application for ring theory in physics? If not, what is it that makes rings not applicable to physics problems?

- 1 In formal deformation quantization one uses formal power series to separate geometrical problems from convergence problems. In this setting states are modeled by $\mathbb{C}[[\lambda]]$-linear functionals $\omega \colon C^\infty(M)[[\lambda]] \to \mathbb{C}[[\lambda]]$. So one might say that one replaces the field $\mathbb{C}$ by the ordered ring $\mathbb{C}[[\lambda]]$. I think this leads people who write more abstract papers on representation theory of star products to consider algebras over rings and not over fields. However, this might be an artefact of using formal power series. – student Dec 18 '10 at 13:56

How about a matrix ring? And lots of other unital algebras (thought of as rings with addition coming from the underlying module). – Marek Dec 18 '10 at 17:52

By the way, I love the answers. Keep 'em coming :-) – Marek Dec 18 '10 at 18:05

## 3 Answers

This really comes down to the question of how broadly you define ring theory. Special types of rings appear all over the place in physics, but often their focused study is given a more specialized name. The term "ring theory" is sometimes used to indicate the specific study of rings as a general class, and under that interpretation, the discipline seems to be closer to logic and set theory than questions of current physical relevance. In any case, rings show up in the following contexts (this list is not comprehensive):

1. The representation theory of a finite or compact group $G$ can be studied from the optic of ring theory, since representations are modules over a (suitably topologized in the infinite case) group ring $\mathbb{C}[G]$.

2. Algebras of operators, such as C* algebras and von Neumann algebras, are rings.
The ring of functions on a manifold is a commutative C* algebra, and geometric quantization is done by deforming the product structure on the ring of functions on a symplectic manifold (e.g., taking the ring of functions on the cotangent bundle of a configuration space to the ring of differential operators on the space).
3. Algebraic varieties such as Calabi-Yau varieties and Riemann surfaces show up in some string theory and conformal field theory papers. They are patched together using commutative rings of functions on open sets.
4. Cohomology and K-theory of a topological space form (graded-)commutative rings. I am told that string theorists sometimes view these rings as places where certain charges live.
5. If you like vertex algebras, their representation theory is captured by the module theory of a current algebra, which is a big ring.
6. I'm told that perturbative renormalization can be placed into Wightman's axiomatic framework in a mathematically rigorous fashion, if the framework is suitably generalized by replacing $\mathbb{C}$ with a formal power series ring in one or more variables, such as $\mathbb{C}[[\lambda]]$, where the variables are the coupling constants, assumed to be infinitesimal. Objects like the Hilbert space of states are replaced with modules over this ring equipped with a sesquilinear form. Borcherds has a recent preprint establishing this. -

– Arnold Neumaier Oct 22 '12 at 19:38

I don't know any important applications of rings in undergraduate level physics, but there are plenty of applications in the study of supersymmetric field theories. For example, take an $N=2$ supersymmetric $\sigma$-model whose target space is a Kähler manifold $X$. One can define a set of twisted supercharges $Q$ (the exact form depends on the twisting) and there is a ring structure on the cohomology of $Q$. Classically this is the same as the cohomology ring of $X$, but quantum mechanically there are corrections from instantons.
The chiral ring of the Q's is often called the quantum cohomology of $X$. The chiral ring structure also played an important role in the discovery of mirror symmetry. The lectures by E. Witten in "Quantum fields and strings: a course for mathematicians, Volume 2" are a good reference on this for the mathematically inclined. If there are some more down to earth examples where rings are important in physics I'd also be interested to hear about them. - In geometry, spaces are often characterized by their ring of continuous (or smooth) R or C valued functions. This is the basic philosophy of algebraic geometry, noncommutative geometry, and deformation theory, which all have applications to physics (the latter two being almost exclusively motivated by quantum mechanics). For example any compact Hausdorff space can be constructed just from its ring of continuous functions so one could say that a space is a ring. This ring is always commutative since we multiply pointwise and R and C are commutative. The viewpoint of noncommutative geometry is to study noncommutative rings as if they were the ring of continuous functions of some "noncommutative" space. This is motivated by quantum mechanics where the algebra of observables is noncommutative. -
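Marek's matrix-ring comment is the quickest way to see the noncommutativity that drives the last paragraph: in a ring of functions on a space, multiplication is pointwise and commutative, while in a matrix ring it is not. A minimal illustration (plain Python; `mat_mul` is just a throwaway helper of mine, not from any of the answers):

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two elements of the matrix ring M_2(Z); their commutator is nonzero,
# so M_2(Z) is a noncommutative ring -- unlike any ring of functions.
A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
AB, BA = mat_mul(A, B), mat_mul(B, A)
```

Here $AB = \mathrm{diag}(1,0)$ while $BA = \mathrm{diag}(0,1)$: the raising/lowering-operator picture of quantum mechanics in miniature.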
http://mathoverflow.net/questions/10577?sort=oldest
## Hamilton cycle decompositions of the complete graph

I'm looking for the number of Hamilton cycle decompositions of the labelled complete graph $K_n$ for small $n$. From such a decomposition, we can construct a special type of Latin square (called a row-Hamiltonian Latin square).

Edit: Clearly, we require $n$ to be odd. To ensure that each Hamilton cycle decomposition is counted once, we only include the $n$-cycle permutations $\alpha$ of $\{1,2,\ldots,n\}$ that have $\alpha(1)<\alpha^{-1}(1)$. We also write the decomposition $\alpha\beta\ldots$ such that $\alpha(1)<\beta(1)<\cdots$.

The count for $n=3$ is $1$, counting $(123)$. The count for $n=5$ is $6$, counting the following: $(12345)(13524)$, $(12354)(13425)$, $(12453)(14325)$, $(12435)(13254)$, $(12543)(14235)$ and $(12534)(13245)$. Assuming my code is correct, the count for $n=7$ is $960$. -

1 Have you tried looking up the first few terms of the sequence in the OEIS? – Qiaochu Yuan Jan 3 2010 at 10:53

I've tried looking up "Hamilton cycle decompositions" and similar terms in OEIS, Google and MathSciNet without luck. I think the counts for n=3 and n=5 are 1 and 24, respectively (since there are 4! 5-cycles), which is not enough. The count for n=7 seems difficult to compute without coding. But this seems like a very natural question to ask - I'd be surprised if nobody has counted these decompositions before. – Douglas S. Stones Jan 3 2010 at 12:13

There's a table at mathworld.wolfram.com/HamiltonianCycle.html – Jason Dyer Jan 3 2010 at 13:51

Jason - that table counts cycles, not decompositions. – Emil Jan 3 2010 at 14:03

1 Douglas, are you labeling the cycles in the decomposition as well? So (1 2 3 4 5) (1 3 5 2 4) is not the same as (1 3 5 2 4) (1 2 3 4 5)?
– Harrison Brown Jan 3 2010 at 17:23

## 4 Answers

In Two-factorizations of complete graphs it is stated that $K_9$ has 122 non-isomorphic Hamiltonian decompositions, and the corresponding number for $K_{11}$ is 3140 (EDIT: the actual figure is much more than this - see comment). I don't think they know any other values. (Sloane's database does not have any sequences with these numbers in.) Now you are interested in the labeled case, which may be easier. However I have not been able to find anything (on Google). -

Interesting, it gives some idea of how large the number of $K_9$ is. However, the number for $K_{11}$ counts only those with a non-trivial automorphism. Afterwards "...he generated more than 45 thousand automorphism-free ones before abandoning the task." Perhaps the combinatorial explosion makes this enumeration problem too difficult. – Douglas S. Stones Jan 3 2010 at 22:06

Gah, I commented but my answer was wrong. I don't have a copy of Mathematica available, but here's (I think) a description of how to compute small cases in Mathematica. There's a package (Combinatorica) with a function called HamiltonianCycle[graph, All] that returns a list of all the directed Hamiltonian cycles beginning and ending at a single node (as lists). Set the graph to be CirculantGraph[n, {2, 3, ..., (n-1)/2}] and compute this list. This is the graph resulting after we remove the first Hamiltonian cycle. Now if you're doing Mathematica, it counts directed cycles, so we only want to consider half the lists. Throw out every cycle where the second element is larger than the second-to-last element (the first and last elements are both 1). (N.B. I originally described this step incorrectly, whence the comments.)
Create this sublist, which we'll call hamcyc, and then compute partitions := Subsets[hamcyc, (n-1)/2]. This is a 3D array. Count the number of elements (2D arrays) in this such that every pair of distinct integers in {1, ..., n} is contained as adjacent elements in exactly one of the lists in this 2D array. (Not sure how to do this, but this is the only thing I don't know how to do.) Multiply this count by n!/(n-1) to get the number of partitions into Hamiltonian cycles. -

I think I get the idea... this would be a fairly memory intensive algorithm. But I should be able to implement it in GAP (I don't have Combinatorica). Also, I'm not sure if you can make the simplification $k \leq (n-1)/2$, for example how would it handle Hamilton cycles like (1723456) when n=7? – Douglas S. Stones Jan 3 2010 at 22:52

Hm, good point about the simplification, I got confused. I'll fix that. Yeah, it's pretty memory intensive, but I suspect it'll be quick enough to calculate a few examples with n > 5. – Harrison Brown Jan 3 2010 at 23:11

Just reporting that I wrote another algorithm for this and found the following values:

````
3 1
5 6
7 960
9 40037760
````

I ran this through the superseeker on Sloane and it came up with nothing (so perhaps nobody has counted these before). Here's my code below (it uses GAP). We generate a (n-1) x n Latin rectangle where each row is an n-cycle and the i-th and (i+(n-1)/2)-th rows are inverses.
````
EnumerateHamiltonDecompositionsBacktrackingAlgorithm:=function(n,L,step)
  local i,j,k,count,A;
  i:=Int((step-1)/n)+1;
  j:=(step-1) mod n+1;
  count:=0;
  if(n mod 2=0 or n<3) then return fail; fi;
  if(j=1) then
    A:=[Minimum(Filtered([2..n],i->ForAll([1..n-1],t->L[t][1]<>i)))];
  else
    A:=Filtered([1..n],s->ForAll([1..n-1],t->L[t][j]<>s) and ForAll([1..n],t->L[i][t]<>s));
  fi;
  for k in A do
    L[i][j]:=k;
    L[i+(n-1)/2][k]:=j;
    if((j=n and CycleLengths(PermList(L[i]),[1..n])=[n]) or j<n) then
      if(i=(n-1)/2 and j=n) then
        count:=count+1;
      else
        count:=count+EnumerateHamiltonDecompositionsBacktrackingAlgorithm(n,L,step+1);
      fi;
    fi;
    L[i][j]:=0;
    L[i+(n-1)/2][k]:=0;
  od;
  return count;
end;;

EnumerateHamiltonDecompositions:=function(n)
  local L;
  if(n mod 2=0 or n<3) then return fail; fi;
  if(n=3) then return 1; fi;
  L:=List([1..n-1],i->List([1..n],j->0));
  L[1]:=List([1..n],i->i mod n+1);
  L[1+(n-1)/2]:=ListPerm(Inverse(PermList(List([1..n],i->i mod n+1))));
  return Factorial(n-2)*EnumerateHamiltonDecompositionsBacktrackingAlgorithm(n,L,n+1);
end;;
````

The extra data point comes from assuming that (12..n) is one of the cycles, then multiplying the result by (n-2)!. This is legitimate since each decomposition contains a unique cycle with the edge 12, and by permuting the remaining n-2 edges, we generate a unique decomposition with the cycle (12..n). There are no automorphisms under this group action, so each orbit has cardinality (n-2)!. -

I've taken the liberty to submit 1,6,960,40037760 as a new sequence to the OEIS. Soon it will be there as sequence A175554. Further comments/additions are welcome. – Max Alekseyev Jun 29 2010 at 3:07

How do I write the number of different Hamiltonian cycles there are in a fully connected graph with n vertices? -
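The small counts above can be cross-checked by brute force, independently of the GAP code: generate every Hamiltonian cycle of the labelled $K_n$ as a set of edges and count the ways to partition the edge set of $K_n$ into $(n-1)/2$ of them. (This also answers the last question: $K_n$ has $(n-1)!/2$ distinct Hamiltonian cycles.) A Python sketch — the function names are mine, and it is only practical for small odd $n$:

```python
from itertools import combinations, permutations

def hamilton_cycles(n):
    """Every Hamiltonian cycle of the labelled K_n, as a frozenset of edges."""
    cycles = set()
    for perm in permutations(range(2, n + 1)):  # fix vertex 1 as the start
        verts = (1,) + perm
        cycles.add(frozenset(frozenset({verts[i], verts[(i + 1) % n]})
                             for i in range(n)))
    return list(cycles)  # (n-1)!/2 cycles: each arises twice, once per direction

def count_decompositions(n):
    """Number of partitions of E(K_n) into (n-1)/2 edge-disjoint Hamilton cycles."""
    edges = frozenset(frozenset(e) for e in combinations(range(1, n + 1), 2))
    k = (n - 1) // 2
    # k cycles carry k*n = |E(K_n)| edges, so covering every edge forces disjointness
    return sum(1 for combo in combinations(hamilton_cycles(n), k)
               if frozenset().union(*combo) == edges)
```

For $n=3$ and $n=5$ this reproduces the counts $1$ and $6$ from the question, and it should give $960$ for $n=7$, though that case is already slow.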
http://mathoverflow.net/questions/23422?sort=newest
## Random projection and finite fields

Suppose we have, say, $n$ $2n$-dimensional linearly independent vectors over $\mathbb{F}_2$. We do a projection on a random $d$-dimensional subspace. We are interested in the probability that images of our vectors will be linearly independent (over $\mathbb{F}_2$) too. The question is as follows: how large $d - n$ should be if we want this probability to be, say, $1 - 1 / \mathrm{poly}(n)$? -

Ilya, is this your homework in probability? Sounds like an advanced problem for serious students at MSU. – Wadim Zudilin May 4 2010 at 13:17

No. This fact would be useful for my research. Our course in probability is not that advanced. – ilyaraz May 4 2010 at 13:22

Then please clarify whether all objects (vectors, random $d$-dimensional subspaces) are with respect to ground field $\mathbb F_2$, or the 2-element field is needed to express the independence property. Even the latter option sounds strange, a better formulation of the problem would be helpful. – Wadim Zudilin May 4 2010 at 13:41

We consider linear dependence over $F_2$ of course. – ilyaraz May 4 2010 at 13:54

This is clear. Are the vectors from $\mathbb F_2^{2n}$? Is the subspace viewed as a subspace of $\mathbb F_2^{2n}$? – Wadim Zudilin May 4 2010 at 14:14

## 1 Answer

Suppose the vectors are $e_1,\dots,e_n$. The kernel of projection onto a random subspace of dimension $n+r$ is a random subspace of dimension $n-r$, so you want the probability that such a subspace has trivial intersection with the span of $e_1,\dots, e_n$. Now just count the number of choices for a basis $v_1,\dots, v_{n-r}$ of such a space: $2^{2n} - 2^n$ for the first vector, then $2^{2n} - 2^{n+1}$ for the second, and so on. This is to be compared with $2^{2n} - 1$ choices for the first vector if one doesn't have any restriction, $2^{2n}-2$ for the second and so on.
So the probability of this happening is the ratio of these two quantities, which you need to find a good approximation for; a very brief back-of-an-envelope calculation suggested it's about $1 - c2^{-r}$, at least if $r$ is largeish. For your specific needs, then, $d - n$ should be about $C\log n$. -
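The two basis-counting products in the answer can be evaluated exactly, which makes it easy to check the $1 - c2^{-r}$ estimate numerically. A sketch (Python; the function name and the parametrization $d = n + r$ are mine):

```python
from fractions import Fraction

def prob_trivial_intersection(n, r):
    """Probability that a uniform random (n-r)-dimensional subspace of
    F_2^{2n} meets a fixed n-dimensional subspace only in 0, as the ratio
    of the two basis-counting products from the answer above."""
    N = 2 ** (2 * n)
    p = Fraction(1)
    for i in range(n - r):
        # restricted choices for the (i+1)-th basis vector / unrestricted ones
        p *= Fraction(N - 2 ** (n + i), N - 2 ** i)
    return p
```

For $n = 10$, $r = 3$ the failure probability $1 - p$ is within a factor of $2$ of $2^{-r}$, consistent with the back-of-the-envelope estimate, so taking $d - n$ of order $\log_2 \mathrm{poly}(n) = O(\log n)$ suffices as stated.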
http://mathoverflow.net/questions/47168/ex-1-x-1-x-2-where-x-i-are-integrable-independent-infinitely-divisib/50554
$E(X_1 | X_1 + X_2)$, where $X_i$ are (integrable) independent infinitely divisible rv’s “of the same type”

The following is inspired by this recent question on math.stackexchange. Two standard exercises in conditional expectation are to find `${\rm E}(X_1|X_1+X_2)$` where:

1) `$X_i$`, `$i=1,2$`, are independent `${\rm N}(0,\sigma_i^2)$` rv's;

2) `$X_i$`, `$i=1,2$`, are independent Poisson(`$\lambda_i$`) rv's.

The solutions are given by `$\frac{{\sigma _1^2 }}{{\sigma _1^2 + \sigma _2^2 }}(X_1 + X_2)$` and `$\frac{{\lambda _1 }}{{\lambda _1 + \lambda _2 }}(X_1 + X_2)$`, respectively. A proof for case 1) is given on math.stackexchange. For case 2) we have `${\rm E}(X_1|X_1 + X_2 = n) = \sum\limits_{k = 0}^n {k{\rm P}(X_1 = k|X_1 + X_2 = n)}.$` A straightforward calculation shows that the right-hand side sum is equal to `$\sum\limits_{k = 0}^n {k{n \choose k}\bigg(\frac{{\lambda _1 }}{{\lambda _1 + \lambda _2 }}\bigg)^k \bigg(\frac{{\lambda _2 }}{{\lambda _1 + \lambda _2 }}\bigg)^{n - k} },$` which is the expectation of the binomial distribution with parameters `$n$` and `$\lambda_1 / (\lambda_1 + \lambda_2)$`, hence given by `$ n \lambda_1 / (\lambda_1 + \lambda_2)$`. The result for case 2) is thus proved.

In this context, what is common to the normal and Poisson distributions is that both are infinitely divisible (ID). More specifically, the characteristic function of `$X_i \sim {\rm N}(0,\sigma_i^2)$` is given by `${\rm E}[{\rm e}^{{\rm i}zX_i} ] = {\rm e}^{\sigma _i^2 ( - z^2 /2)}$`, and that of `$X_i \sim {\rm Poisson}(\lambda_i)$` by `${\rm E}[{\rm e}^{{\rm i}zX_i} ] = {\rm e}^{\lambda _i ({\rm e}^{{\rm i}z} - 1)}$`.
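The case 2) computation is easy to sanity-check in exact arithmetic: the conditional law of `$X_1$` given `$X_1+X_2=n$` is binomial with success probability `$\lambda_1/(\lambda_1+\lambda_2)$`, so summing `$k\,{\rm P}(X_1=k|X_1+X_2=n)$` must return `$n\lambda_1/(\lambda_1+\lambda_2)$` exactly. A sketch (Python, restricted to rational `$\lambda_i$` so that `Fraction` applies; the function name is mine):

```python
from fractions import Fraction
from math import comb

def cond_mean_poisson(lam1, lam2, n):
    """E(X1 | X1 + X2 = n) for independent Poisson(lam1), Poisson(lam2),
    summed via the binomial conditional law derived in the question."""
    p = Fraction(lam1, lam1 + lam2)
    return sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n + 1))
```

For instance, `cond_mean_poisson(2, 3, 10)` returns exactly `Fraction(4, 1)`, i.e. $10 \cdot 2/5$.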
Now, consider integrable, ID, independent rv's `$X_i$, $i=1,2$`, with characteristic functions of the form `${\rm E}[{\rm e}^{{\rm i}zX_i} ] = {\rm e}^{c_i \psi(z)}$`, `$c_i > 0$` (loosely speaking, the characteristic function of an arbitrary ID rv is of that form). In view of the normal and Poisson examples considered above (the former requires somewhat tedious algebra for the solution), and the fact that many important rv's fall into the general category of integrable ID rv's (e.g., gamma rv's), it would be very useful to have the following result: `${\rm E}(X_1 | X_1 + X_2) = \frac{{c _1 }}{{c_1 + c_2 }}(X_1 + X_2)$`. In fact, I have proved it recently. Now to my questions: 1) Have you encountered this result before? 2) Can you provide a rigorous but simple proof of it? 3) Can you provide some intuition? EDIT: 1) Here's another interesting example: if `$X_i \sim {\rm Gamma}(c_i,\lambda)$`, `$i=1,2$`, so that `$X_i$` has density `$f_{X_i } (x) = \lambda ^{c_i } {\rm e}^{ - \lambda x} x^{c_i - 1} /\Gamma (c_i )$`, `$x > 0$`, then `${\rm E}(X_1 | X_1 + X_2) = \frac{{c_1 }}{{c_1 + c_2 }}(X_1 + X_2 )$`. 2) It is very instructive to reformulate the result in terms of L\'evy processes: if `$X = \{ X(t): t \geq 0 \}$` is an integrable L\'evy process, then `${\rm E}[X(s)|X(t)] = \frac{s}{t}X(t)$`, `$0 < s < t$`. EDIT: The "direct" solution for the gamma case considered above is now given here. This shows, once more, the effectiveness of the general formula. EDIT: A complete solution is given in my first (according to date) answer below. EDIT: An important extension is considered in my second answer below. - 1 In the two concrete examples you cite, the $c_i$s are the variances. Do you have other interesting concrete cases where that happens and where it doesn't? – Michael Hardy Nov 24 2010 at 1:09 1 Your observation concerning the variances seems to be related to the following facts. 
If $\mu$ is an infinitely divisible distribution, then there exists a unique L\'evy process $X = \{X_t:t \geq 0\}$ such that $\mu$ is the distribution of $X_1$. If $X_1$ is square-integrable, then $X_t$ is square-integrable for any $t > 0$, and in fact ${\rm Var}(X_t)$ is linear in $t$ ($= \sigma^2 t$ for Brownian motion with variance parameter $\sigma^2$, $= \lambda t$ for Poisson process with parameter $\lambda$). ${\rm E}(X_t)$ is also linear in $t$, but in the centered normal case it is identically $0$ – Shai Covo Nov 24 2010 at 2:17 5 Answers Regarding a "rigorous but simple proof" of the relation the OP is interested in, such a proof is, almost completely, already written in the original post. To see this, consider independent integrable random variables $X$ and $Y$ and assume that their characteristic functions, defined for every real number $t$, are such that $E({\mathrm e}^{\mathrm{i}tX})=\mathrm{e}^{a\psi(t)}$ and $E(\mathrm{e}^{\mathrm{i}tY})=\mathrm{e}^{b\psi(t)}$ for a given function $\psi$ and given real numbers $a$ and $b$. Let $S=X+Y$. Now, to prove that $$(a+b)E(X\vert S)=aS,$$ it suffices to show that, for every real number $t$, $$(a+b)E(X\mathrm{e}^{\mathrm{i}tS})=aE(S\mathrm{e}^{\mathrm{i}tS}).$$ Since both sides of the equality can be explicitly written in terms of $a$, $b$, the function $\psi$ and its derivative $\psi'$, the proof is, in a way and modulo some easy computations, already over. For example, $$E(S\mathrm{e}^{\mathrm{i}tS})=E(X\mathrm{e}^{\mathrm{i}tX})E({\mathrm e}^{\mathrm{i}tY})+E(Y\mathrm{e}^{\mathrm{i}tY})E({\mathrm e}^{\mathrm{i}tX}),$$ because $X$ and $Y$ are independent. Here, both $E({\mathrm e}^{\mathrm{i}tX})$ and $E({\mathrm e}^{\mathrm{i}tY})$ are already known, and both $E(X{\mathrm e}^{\mathrm{i}tX})$ and $E(Y{\mathrm e}^{\mathrm{i}tY})$ are derivatives of the former with respect to $(\mathrm{i}t)$. 
Hence, $$E(S\mathrm{e}^{\mathrm{i}tS})=-\mathrm{i}(a+b)\psi'(t)\mathrm{e}^{(a+b)\psi(t)}.$$ Likewise, $$E(X\mathrm{e}^{\mathrm{i}tS})=E(X\mathrm{e}^{\mathrm{i}tX})E({\mathrm e}^{\mathrm{i}tY})=-\mathrm{i}a\psi'(t)\mathrm{e}^{(a+b)\psi(t)}.$$ Comparing these two formulas, we are done. (If this helps, one can note that the signs of $a$ and $b$ must be the same, in the sense that $ab>0$ or that $X$ or $Y$ must be $0$ with full probability.) -

Thank you. I'll go over your solution today or so. – Shai Covo Nov 28 2010 at 4:45

Can you please elaborate on "Now, to prove that $(a+b)E(X\vert S)=aS$, it suffices to show that $(a+b)E(X\mathrm{e}^{\mathrm{i}tS})=aE(S\mathrm{e}^{\mathrm{i}tS})$ for every real number $t$". It seems that this is based on a property which is not too well-known; can you provide some reference? – Shai Covo Dec 1 2010 at 23:49

Your aim is to prove that $(a+b)E(Xg(S))=aE(Sg(S))$ for every bounded measurable $g$, or for every $0,1$-valued measurable $g$, or for every $g$ in a class of functions large enough to recover the preceding ones. Your hypothesis is that this holds for every $g$ defined by $g(x)=\mathrm{e}^{\mathrm{i}tx}$. Hence this holds for every linear combination of these, hence, by density, for every $g$ in the Schwartz space--et voilà. – Didier Piau Dec 2 2010 at 7:27

(cont'd) Or, you may consider the finite signed measures $\mu$ and $\nu$ defined by $\int g\mathrm{d}\mu=(a+b)E(Xg(S))$ and $\int g\mathrm{d}\nu=aE(Sg(S))$ for every bounded measurable $g$ on the real line. You know the Fourier transforms $\hat\mu$ and $\hat\nu$ coincide, hence $\mu=\nu$. (The proof in the preceding comment is a pedestrian rediscovery of this one.) – Didier Piau Dec 2 2010 at 7:27

Thanks for the clarification. – Shai Covo Dec 6 2010 at 16:49

Suppose $X_1$ and $X_2$ are i.i.d.
integrable rv's. Then $\mathbb{E}(X_1 | X_1+X_2)=(X_1 + X_2)/2$ by symmetry. Similarly, if $X_1,\ldots,X_n$ are i.i.d. and integrable then $\mathbb{E}(X_1 | \sum X_i)=\sum X_i/n$. Hence if $S=\sum_{i=1}^n X_i$ and $T=\sum_{i=n+1}^{n+m} X_i$ then $$\mathbb{E}(S | S+T)=\frac{n}{n+m}(S+T) \ .$$ Any two infinitely divisible rv's with rational ratio of parameters can be decomposed like that and the rest follows by continuity. -

1 Just a small clarification of the spoken symmetry: $E[X_1|X_1+X_2]=E[X_2|X_1+X_2]$, but $E[X_1|X_1+X_2]+E[X_2|X_1+X_2]=X_1+X_2$, whence Ori's statement. So this fact is very simple. Though not innocent at all. For example, from $E[X_1|X_1+\dots+X_n]=\frac1n\sum_{i=1}^n X_i$, the Hewitt-Savage 0-1 law and results on convergence of $\sigma$-algebras one gets the strong law of large numbers. – zhoraster Nov 24 2010 at 7:34

(And, clearly, it is true not only for independent but for any exchangeable sequence.) – zhoraster Nov 24 2010 at 7:36

Thank you Ori and zhoraster. I'll go over the answer/comments later on today. However, I'm not sure how rigorous the proof is. Specifically, "and the rest follows by continuity" needs rigorous justification. – Shai Covo Nov 24 2010 at 8:56

I think the right way to phrase this discussion is as follows. Let $(X_s)_{0 \leq s \leq t}$ be a real stochastic process with cyclically exchangeable increments: for all $u \in [0,t]$, the process `$(X'_s)_{0\leq s \leq t}$` obtained by a cyclic shift by $u$, has the same distribution as the original process. Suppose that $X_0=X_t=0$ with probability one. Then for all $s$, $\mathbb{E}(X_s)=0$. (As in Ori's argument, for this step a continuity argument is needed, which you may not like. On the other hand, this kind of continuity argument is bog-standard -- it is a basic procedure when you study infinitely divisible distributions via their characteristic functions.) Edit: Here is an argument to replace the continuity argument but which requires an additional assumption.
Suppose for simplicity that $t=1$. The additional assumption is that $\sup_{s \in (0,1)} |\mathbb{E}(X_s)| < \infty$. Suppose there is $s$ s.t. $\mathbb{E}(X_s) = z > 0$. Then by cyclic exchangeability, $\mathbb{E}(X_{1-s}) = -z$. Again by cyclic exchangeability, $|\mathbb{E}(X_{|2s-1|})| = 2z$, the sign depending on the sign of $2s-1$. By repeating this argument, it follows that if there is any point $s$ with $\mathbb{E}(X_s) \neq 0$ then there are points $s$ for which $|\mathbb{E}(X_s)|$ is arbitrarily large. In fact, since by cyclic exchangeability, $\mathbb{E}(X_{s/n})=\mathbb{E}(X_s)/n$, it then follows that there are points arbitrarily close to zero for which $|\mathbb{E}(X_s)|$ is arbitrarily large.

Edit: (This is an expansion of the argument I sketched in the comments.) Note that for any stochastic process $(X_t)=(X_t)_{0 \leq t \leq 1}$ if $U$ is a uniform $[0,1]$ random variable, independent of $(X_t)$, then the process $(X_t')$ obtained from $(X_t)$ by cyclically shifting $(X_t)$ by $U$, has cyclically exchangeable increments. Furthermore, if $(X_t)$ itself has cyclically exchangeable increments, then $(X_t)$ and $(X_t')$ have the same distribution.

Now let $(Z_s)=(Z_s)_{0 \leq s \leq 1}$ be a Lévy process. Let $U$ be uniform on $[0,1]$ and independent of $(Z_s)$, and let $(Y_s)=(Y_s)_{0 \leq s \leq 1}$ be the process you get by cyclically shifting $(Z_s)$ by $U$. Then $Y_1=Z_1$, and $(Y_s)$ has the same distribution as $(Z_s)$. Conditional upon $Z_1$ (which equals $Y_1$), we don't automatically know the distribution of $(Z_s)$. However, we know the following facts.

1. Conditional on $Z_1$, $(Y_s)$ is distributed as a uniformly random cyclic shift of the conditioned process $(Z_s)$ (conditioned on $Z_1$), so still has cyclically exchangeable increments.
2. Since $Z_1=Y_1$, $(Y_s)$ conditioned on $Z_1$ is the same as $(Y_s)$ conditioned on $Y_1$.
But $(Y_s)$ and $(Z_s)$ have the same distribution so $(Y_s)$ conditioned on $Y_1$ is distributed as $(Z_s)$ conditioned on $Z_1$. Putting these facts together, we see that conditional on $Z_1$, $(Z_s)$ still has cyclically exchangeable increments, and thus (still conditional on $Z_1$) $(Z_s - sZ_1)$ does as well. But then $(Z_s - sZ_1)$ is a process with c.e. increments and equal to zero at $s=0$, $s=1$. By the first three paragraphs of my answer, it follows that if $\sup_{0 \leq s \leq 1} |\mathbb{E}(Z_s -sZ_1 | Z_1)|$ is almost surely finite, then almost surely $\mathbb{E}(Z_s|Z_1)=sZ_1$. But $\mathbb{E}(Z_s -sZ_1 | Z_1) = \mathbb{E}(Z_s|Z_1)-sZ_1$ so the requirement boils down to $\sup_{0 \leq s \leq 1} |\mathbb{E}(Z_s|Z_1)|$ almost surely finite. Using the tower law, this holds as long as $\mathbb{E}(\sup_{0 \leq s \leq 1} |Z_s|)$ is almost surely finite. I think this is equivalent to requiring that $\mathbb{E}|Z_1| < \infty$ but I still haven't checked. -

Thank you Louigi. I'll go over your answer later today, or tomorrow. General comment: since the result is very useful yet apparently not well-known, a rigorous proof is desirable. – Shai Covo Nov 24 2010 at 17:18

I think what I described is a rigorous proof. Write $\chi_s(z) = \mathbb{E}(e^{izX_s})$. By infinite divisibility of $X_t$, for rational $q \in (0,1)$ and $s=qt$, $\chi_s(z)=(\chi_t(z))^{q}$ so $\mathbb{E}(X_{qt})=0$. By bounded convergence, the same equality follows for real $r \in (0,1)$. It then follows immediately that $\mathbb{E}(X_{rt}) = \lim_{q \to r} \mathbb{E}(X_{qt}) = 0$. This works for either Ori's or my argument. – Louigi Addario-Berry Nov 24 2010 at 18:00

Incidentally, Kai Lai Chung's "Course in probability theory" has a very careful exposition of characteristic functions and infinitely divisible distributions and in particular does all the required complex-analytic details related to choosing a branch of complex log, etcetera.
– Louigi Addario-Berry Nov 24 2010 at 18:03

For my last paragraph I should have said that all that holds conditional upon $Z_t$, or else it doesn't address your question. – Louigi Addario-Berry Nov 24 2010 at 18:19

How can you prove that "all that holds conditional upon $Z_t$"? – Shai Covo Nov 25 2010 at 10:58

First of all, considering the responses from this site, it seems that this result is not well-known (even among specialists), though very useful and relatively easy to derive. So, it was worth posting this here, and it is worth considering this a little further.

I'll begin with Didier's answer, which corresponds to the characteristic functions formulation (original question). The main point, using Didier's notation, is that `$(a+b){\rm E}(X|S) = a S$` (what we want to show) is implied by `$(a + b){\rm E}(X{\rm e}^{{\rm i}tS} ) = a{\rm E}(S{\rm e}^{{\rm i}tS} )$` for every `$t \in \mathbb{R}$`. Indeed, the latter condition implies `$(a + b){\rm E}(X \mathbf{1}_A ) = a{\rm E}(S \mathbf{1}_A )$` for any `$A \in \sigma(S)$`, and thus, from the definition of conditional expectation, `$(a+b){\rm E}(X|S) = a S$`. Now, as Didier described, showing that `$(a + b){\rm E}(X{\rm e}^{{\rm i}tS} ) = a{\rm E}(S{\rm e}^{{\rm i}tS} )$` is very easy, under the assumption `${\rm E}({\rm e}^{{\rm i}tX}) = {\rm e}^{a \psi(t)}$` and `${\rm E}({\rm e}^{{\rm i}tY}) = {\rm e}^{b \psi(t)}$`. For completeness, the following point(s) should be noted here. `$\frac{{\rm d}}{{{\rm d}t}}{\rm E}({\rm e}^{{\rm i}tX} ) = {\rm i}{\rm E}(X{\rm e}^{{\rm i}tX} )$` by virtue of the dominated convergence theorem (since `$X$` is integrable; the same goes with respect to `$Y$`). So, `${\rm e}^{a \psi(t)}$` is differentiable, and from the fact that `$\psi$` is continuous it follows that `$\frac{{\rm d}}{{{\rm d}t}} {\rm e}^{a \psi(t)} = {\rm e}^{a \psi(t)} a \psi'(t)$`, which we needed for the proof.
[Interestingly, this shows that if `$X$` is an integrable ID rv, then the corresponding characteristic exponent, `$\psi$`, is differentiable.] So overall, it seems that Didier indeed provided a rigorous but (relatively) simple proof. Ori's answer, on the other hand, corresponds to the L\'evy process formulation. My original proof of the result completes Ori's answer (the beginning is essentially the same). Here it is. Suppose that $X$ is an integrable L\'evy process, and fix `$0 < s < t$`. Assume first that `$s/t=m/n$`, with `$m,n \in \mathbb{N}$`. From `$\sum\nolimits_{i = 1}^n {{\rm E}[X_{it/n} - X_{(i - 1)t/n} |X_t ]} = X_t$` we deduce that `${\rm E}[X_{t/n}|X_t]=X_t / n$`, and, in turn, `${\rm E}[X_s |X_t ] = (m/n)X_t = (s/t)X_t$`. If `$s/t$` is irrational, let `$(s_j)$` be a sequence such that `$s_j \uparrow s$` with `$s_j/t$` being rational. By an elementary property of L\'evy processes, `$X_{s_j } \stackrel{{\rm a.s.}}{\rightarrow} X_s $. Define $X_s^* = \sup _{u \in [0,s]} |X_u |$; thus $|X_{s_j}|\leq X_s^*$ $\forall j$`. Since, by assumption, `${\rm E}[|X_s|]<\infty$`, we conclude from Theorem 25.18 in the classical book "L\'evy Processes and Infinitely Divisible Distributions" (by Sato) that also `${\rm E}[X_s^*]<\infty$`. Hence, by the dominated convergence theorem for conditional expectations, `${\rm E}[X_{s_j } |X_t ] \stackrel{{\rm a.s.}}{\rightarrow} {\rm E}[X_s |X_t ]$`. Since `$s_j/t$` is rational, `${\rm E}[X_{s_j } |X_t ]=(s_j/t)X_t$`. Thus, `${\rm E}[X_s |X_t ] = (s/t)X_t$`. Finally, Louigi's approach may be useful in a more general setting. In this context, I find it interesting to consider `${\rm E}(X_s | X_t)$` (`$0 < s < t$`) for general processes (cf. its counterpart `${\rm E}(X_t | X_s)$`). Any ideas? - At the risk of running against the tide of this (very interesting) MO page, I must confess being less and less convinced by the infinite divisibility (ID) aspect of the problem. 
(For instance, re a remark in Shai's last post above, the characteristic exponent of EVERY integrable random variable, ID or not, is differentiable.) To wit: the proof I explained does not use ID; it works for every distribution mentioned here (normal, Poisson, Gamma, integrable Lévy); it works also for distributions which are far from ID. (But cyclic exchangeability is a nice tool.) – Didier Piau Dec 8 2010 at 12:46

Thank you for this insightful comment. So, this leaves space for further study. I have some idea, which I'll probably present later. – Shai Covo Dec 8 2010 at 14:19

The following was motivated by Didier's comment given below my first answer. On the one hand, the role of infinite divisibility (ID) might not seem important in our context, in view of the following general example (and, moreover, part of the next paragraph). If $Z$ is any integrable random variable, and if $a/(a+b)$ is rational, say $a/(a+b)=n_1/(n_1+n_2)$ with $n_1,n_2 \in \mathbb{N}$, then letting $X = \sum\nolimits_{i = 1}^{n_1} Z_i$ and $Y = \sum\nolimits_{i = n_1+1}^{n_1+n_2} Z_i$, where the $Z_i$ are independent copies of $Z$, we have ${\rm E}(X|X+Y)=\frac{a}{a+b}(X+Y)$. As a side note, it is worth noting here that for $X \sim {\rm binomial}(n_1,p)$ and $Y \sim {\rm binomial}(n_2,p)$ this gives ${\rm E}(X|X+Y)=\frac{n_1}{n_1+n_2}(X+Y)$, a result which might be quite difficult to obtain directly, that is, by calculating $\sum\nolimits_{k = 0}^{n_1} k\,{\rm P}(X = k|X+Y = n)$ (this can be a challenging exercise for students).

On the other hand, consider the following question. Suppose that $X$ and $Y$ are independent integrable random variables with characteristic functions $\varphi_X$ and $\varphi_Y$, respectively, and $a$ and $b$ are positive real constants. Is it true that ${\rm E}(X|X+Y) = \frac{a}{a+b}(X+Y)$ if and only if $\varphi_Y = \varphi_X^{b/a}$?
If $X$ is ID, then the condition $\varphi_Y = \varphi_X^{b/a}$ implies ${\rm E}(X|X+Y) = \frac{a}{a+b}(X+Y)$. Didier's answer suggests that this is true in general, and moreover that the opposite implication might be true as well, since it gives rise to the differential equation $b\frac{\varphi'_X}{\varphi_X} = a\frac{\varphi'_Y}{\varphi_Y}$, hence to $\varphi_Y = \varphi_X^{b/a}$ (note that $\varphi_X(0) = \varphi_Y(0) = 1$). It might be important to point out here that the characteristic function of an ID random variable has no zero. However, if $X$ is not ID, then $\varphi_X^{b/a}$ might not be a characteristic function. Indeed, if $\varphi_X^c$ is a characteristic function for all $c>0$, then from $\varphi_X = (\varphi_X^{1/n})^n$ for all $n$ it would follow that $X$ is ID. So, it seems that infinite divisibility does play an important role in our context. Finally, do you think that indeed ${\rm E}(X|X+Y) = \frac{a}{a+b}(X+Y)$ if and only if $\varphi_Y = \varphi_X^{b/a}$? It is quite an important result, if it is true...

- About your "On the one hand" paragraph: indeed, to compute conditional distributions is often a bad idea (which I would not call "direct") if one is interested in a conditional expectation. In the case at hand, the exchangeability of the sequence $(Z_i)$ shows that $E(Z_i|X+Y)$ does not depend on $i$, and $E(X|X+Y)$ follows. – Didier Piau Dec 28 2010 at 11:34

At least it leads to interesting identities, which might even be quite difficult to prove (e.g. in the binomial case). – Shai Covo Dec 28 2010 at 11:43

About your "On the other hand" paragraph: indeed, $(a+b)E(X|X+Y)=(a+b)(X+Y)$ if and only if $b\varphi'_X\varphi_Y=a\varphi'_Y\varphi_X$. And this last condition implies that (the principal determinations of) the complex valued functions $\varphi_X^b$ and $\varphi_Y^a$ coincide in a neighborhood of $0$. But, in general, I simply do not know what it is you call $\varphi_X^{b/a}$.
Finally, note that, even assuming that all the functions $\varphi_X^c$ are well defined, the condition that $\varphi_X^c$ be a characteristic function for every $c>0$ is nowhere in the original question. – Didier Piau Dec 28 2010 at 11:46

Note the typo in the first line (coefficient of $(X+Y)$). – Shai Covo Dec 28 2010 at 12:39

In some respect, the problem is particularly suitable for ID variables, since $\varphi_X^{b/a}$ is well-defined for any $a,b>0$ if $X$ is ID. So, the question at the end of my new answer can be answered in the ID setting. – Shai Covo Dec 28 2010 at 12:53
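As an aside, the binomial identity discussed above can in fact be checked by direct computation: conditional on $X+Y=n$, $X$ is hypergeometric, so the conditional mean is a finite sum. A minimal Python sketch (the values $n_1=4$, $n_2=6$ are arbitrary illustrations; no $p$ appears because the conditional law does not depend on it):

```python
from math import comb

def cond_mean(n1, n2, n):
    """E[X | X + Y = n] for independent X ~ Bin(n1, p), Y ~ Bin(n2, p).

    P(X = k | X + Y = n) = C(n1, k) C(n2, n - k) / C(n1 + n2, n),
    a hypergeometric law that does not depend on p.
    """
    total = comb(n1 + n2, n)
    lo, hi = max(0, n - n2), min(n1, n)
    return sum(k * comb(n1, k) * comb(n2, n - k) for k in range(lo, hi + 1)) / total

n1, n2 = 4, 6
for n in range(n1 + n2 + 1):
    # The claimed identity: E(X | X + Y = n) = n * n1 / (n1 + n2)
    assert abs(cond_mean(n1, n2, n) - n * n1 / (n1 + n2)) < 1e-12
```

That the assertion passes for every $n$ is Vandermonde's identity $\sum_k k\binom{n_1}{k}\binom{n_2}{n-k} = n_1\binom{n_1+n_2-1}{n-1}$ in disguise.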
http://www.physicsforums.com/showthread.php?t=147834
Physics Forums

## splitting fields

I'm studying for my abstract algebra exam and I had a question and was wondering if anyone could help me out. If I'm given a polynomial, is there a method to find the splitting field of the polynomial and the degree of that field? My book basically just gives the definition of a splitting field and there are really no examples. Thanks

I think you would first have to find all the zeros of the polynomial. But if the zeros aren't in the field that the polynomial is defined over, is there any way to see these? I've seen an example of this using complex numbers, using De Moivre's theorem I think, but I'm not really sure how that was done or if that is a standard way to find all the zeros. I think after you find all the zeros you can just take the basis of the field with the basis of the zeros and that is the splitting field? Are there exactly n zeros for an n degree polynomial? And then the degree of the splitting field is the dimension of this basis? Thanks

Yes, you have to find all zeros of the polynomial. The field here is the set of rational numbers. Surely, you can determine whether a number is a rational number or not? I don't know what you mean by "the basis of the field with the basis of the zeros". The splitting field is the smallest field containing all rational numbers and all zeros of the polynomial. It is true that such a field can be written as a vector space over the rational numbers; that may be the "basis" you are referring to. And yes, the dimension of that vector space is the degree of the splitting field.
Each zero of the polynomial will be a "basis" element as long as it is "independent" of previously used zeros: as long as it cannot be written as a linear combination of those zeros. For example, the zeros of the polynomial $x^2-2= 0$ are $\sqrt{2}$ and $-\sqrt{2}$. They are not "independent" since one is $-1$ times the other. The smallest field containing all rational numbers and the zeros of $x^2-2$ must contain any rational number $a$ and $\sqrt{2}$. Since a field is closed under multiplication, it must also include numbers of the form $a\sqrt{2}$. Since a field is closed under addition, it must include numbers of the form $a\sqrt{2}+ b$. It's easy to show that sums and products of such things can be written in the same form. Of course, in a field, every non-zero number has a multiplicative inverse: $$\frac{1}{a\sqrt{2}+b}= \frac{a\sqrt{2}}{2a^2- b^2}- \frac{b}{2a^2- b^2}$$ (and $2a^2- b^2$ cannot be 0 because $\sqrt{2}$ is not a rational number). That is, every number in that field can be written in the form $a\sqrt{2}+ b(1)$: a vector space over the rational numbers with basis $\{1, \sqrt{2}\}$, so the degree is 2.

Thanks, the concepts are a little clearer to me now. I am still having trouble with certain problems. For example, how would I find the zeros of $x^6 -1$? I feel like I should know this but I don't know of a way to do it. Also, my teacher ran through this problem really quickly in class. If $p$ is prime, prove that the splitting field over $F$, the rational numbers, of the polynomial $x^p - 1$ is of degree $p-1$. He wrote $x^p -1 =(x-1)(x^{p-1} +\cdots+1)$; since 1 is rational we just need to find the splitting field of the other factor. So write $a_1, \ldots, a_{p-1}$ for the roots of this polynomial; there are $p-1$ of them because the degree of the polynomial is $p-1$? $[F(a_1):F]= p-1$: can we assume this? Then $x^p -1 =0$, so $x^p =1$, so $x = e^{2\pi i k/p}$ where $k = 0,\ldots,p-1$. How and why did you do this? Then $[F(a_1, a_2):F]=[F(a_1,a_2):F(a_1)][F(a_1):F]$; show this equals $1\cdot(p-1)$.
Then, to complete the proof, show that $\{a_1,\ldots,a_{p-1}\}$ is a cyclic group with generator $a_1$. I have no idea how to do that, but I don't need to know group theory. Although, I am curious. Then the proof is done. Why exactly? Thanks for any help

$x^6-1= 0$ means that $x$ is a 6th root of unity. Those can be found by putting the number into "polar form". The roots are 1, $\omega_6$, $\omega_6^2$, $\omega_6^3$, $\omega_6^4$, and $\omega_6^5$. Of course, $\omega_6$ is the "principal 6th root of unity": $$e^{\frac{2\pi i}{6}}= e^{\frac{\pi i}{3}}= \cos\left(\frac{\pi}{3}\right)+ i \sin\left(\frac{\pi}{3}\right)$$
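The sixth roots of unity in the answer above are easy to verify numerically; a quick sketch with Python's `cmath` (replace 6 by any $n$ for the general case):

```python
import cmath
import math

n = 6
# The n-th roots of unity: omega^k = e^(2*pi*i*k/n) for k = 0, ..., n-1
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

for z in roots:
    assert abs(z ** n - 1) < 1e-9   # each root satisfies z^n = 1

# The principal root: omega_6 = e^(i*pi/3) = cos(pi/3) + i*sin(pi/3)
omega = roots[1]
assert abs(omega - complex(0.5, math.sqrt(3) / 2)) < 1e-12
```

Note that the roots come in conjugate pairs and sum to zero, which is one way to see that the degree-five factor of $x^6-1$ has rational coefficients.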
http://mathoverflow.net/revisions/45599/list
Return to Question

2 Added to the question in light of a first response.

Let $G=(V,E)$ be a connected graph, say $V=\{1,\ldots,n\}$. Let $F=(V,E')$ be a uniformly random forest in $G$. (In other words, $E'$ is a subset of the edges $E$ not containing a cycle, and it is uniformly chosen over all such sets.) Associated to the random forest $F$ are marginals $\{p_e : e \in E\}$, where $p_e = \mathbb{P}(e \in E')$. Now let $E^*$ be a random subset of $E$, chosen by independently including each edge $e \in E$ with probability $p_e$. Finally, let $N$ be the (random) smallest number of spanning trees of $G$ whose union contains $E^*$. What is known about the distribution of $N$? How does $\sup_{G} \mathbb{E}(N)$, the largest expected value of $N$ over all $n$-vertex graphs, grow? Is it $O(\log n)$? Is it $O(1)$? Edit: is it $O(\sqrt{\log n})$? Fedor has a nice example showing that it is not $O(1)$. I believe optimizing Fedor's example yields a lower bound of order $(\log n/\log\log n)^{1/2}$. Note: the question also makes sense if $E'$ is the edge set of a uniformly random spanning tree, and Fedor's example applies in either case.

1 Covering a random graph with spanning trees.

Let $G=(V,E)$ be a connected graph, say $V=\{1,\ldots,n\}$. Let $F=(V,E')$ be a uniformly random forest in $G$. (In other words, $E'$ is a subset of the edges $E$ not containing a cycle, and it is uniformly chosen over all such sets.) Associated to the random forest $F$ are marginals $\{p_e : e \in E\}$, where $p_e = \mathbb{P}(e \in E')$. Now let $E^*$ be a random subset of $E$, chosen by independently including each edge $e \in E$ with probability $p_e$. Finally, let $N$ be the (random) smallest number of spanning trees of $G$ whose union contains $E^*$. What is known about the distribution of $N$? How does $\sup_{G} \mathbb{E}(N)$, the largest expected value of $N$ over all $n$-vertex graphs, grow? Is it $O(\log n)$? Is it $O(1)$?
I expect this may be well-understood, but not by me.
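For very small graphs the objects in the question can be computed by brute force, which is a handy sanity check on the definitions. A Python sketch for the triangle $K_3$ (my own illustrative choice, not from the question): $K_3$ has 7 forests, 3 of which contain any given edge, so by symmetry each marginal should be $p_e = 3/7$.

```python
from itertools import combinations

def is_forest(n, edges):
    """True if the edge set is acyclic, via union-find on n vertices."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # adding (u, v) would close a cycle
        parent[ru] = rv
    return True

V = 3
E = [(0, 1), (1, 2), (0, 2)]                # the triangle K_3

forests = [S for r in range(len(E) + 1)
             for S in combinations(E, r) if is_forest(V, S)]
p = {e: sum(e in S for S in forests) / len(forests) for e in E}

assert len(forests) == 7
assert all(abs(pe - 3 / 7) < 1e-12 for pe in p.values())
```

The same enumeration scales only to a handful of edges, but it is enough to test any conjectured formula for the marginals against small cases.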
http://mathhelpforum.com/calculus/31916-function-time-rate-problem.html
# Thread:

1. ## Function of Time Rate Problem

I have a calculus word problem that looks like this: A tank contains 2220 L of pure water. A solution that contains 0.02 kg of sugar per liter enters the tank at the rate 6 L/min. The solution is mixed and drains from the tank at the same rate. Initially there is no sugar in the solution. Find an equation for the amount of sugar in the tank after t minutes (a function of t).

I have calculated so far:

(0.02 kg/L)(6 L/min) - (y/2220 L)(6 L/min)

dy/dt = 0.12 kg/min - (64y/2220)

Then from here I know to take the antiderivatives of both sides (this is where I begin to get confused):

-34.68 ln(0.12 - 64y/2220) = t + C

ln(0.12 - 64y/2220) = -t/34.68 + C

This is where I am not sure how to get any farther... If anyone could help me out I would greatly appreciate it.

2. The rate of change of the amount of sugar in the tank at time t is given by dy/dt = rate in - rate out.

$\text{rate in} = \left(\frac{1}{50}\right)(6)=\frac{3}{25} \;\ \text{kg/min}$

$\text{rate out} = \frac{y}{2220}(6)=\frac{y}{370} \;\ \text{kg/min}$

Therefore, $\frac{dy}{dt}+\frac{y}{370}=\frac{3}{25}$

The integrating factor is $e^{\int\frac{1}{370}dt}=e^{\frac{t}{370}}$

$ye^{\frac{t}{370}}=\frac{222}{5}e^{\frac{t}{370}}+ C$

$y=\frac{222}{5}+Ce^{\frac{-t}{370}}$

Since y(0)=0, then $C=\frac{-222}{5}$

$\boxed{y=\frac{222}{5}-\frac{222}{5}e^{\frac{-t}{370}}}$
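The closed-form answer in the reply can be sanity-checked by integrating $dy/dt = \frac{3}{25} - \frac{y}{370}$ numerically; a minimal Python sketch using plain Euler steps (the step size and time horizon are arbitrary choices for the check, not part of the problem):

```python
import math

def analytic(t):
    """The boxed solution: y(t) = 222/5 * (1 - e^(-t/370))."""
    return 222 / 5 * (1 - math.exp(-t / 370))

# Euler integration of dy/dt = 3/25 - y/370 with y(0) = 0
dt, y, t = 0.01, 0.0, 0.0
while t < 1000:
    y += dt * (3 / 25 - y / 370)
    t += dt

# The numerical and closed-form solutions agree to within the step error
assert abs(y - analytic(t)) < 0.05
```

As a quick plausibility check, the limit $y \to 222/5 = 44.4$ kg is just the inflow concentration times the tank volume, $0.02 \times 2220$.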
http://www.intechopen.com/books/new-developments-in-liquid-crystals/a-new-method-of-generating-atmospheric-turbulence-with-a-liquid-crystal-spatial-light-modulator
InTech - Open Access Publisher

Engineering » Electrical and Electronic Engineering » "New Developments in Liquid Crystals", book edited by Georgiy V Tkachenko, ISBN 978-953-307-015-5, Published: November 1, 2009 under CC BY-NC-SA 3.0 license

# A New Method of Generating Atmospheric Turbulence with a Liquid Crystal Spatial Light Modulator

By Christopher C Wilcox and Sergio R Restaino

DOI: 10.5772/9684

## Overview

Figure 1. Flat wavefront after a long propagation distance from a point source

Figure 2. Propagation of light from a distant source that then passes through the atmosphere

Figure 3. Plots of the Hufnagel-Valley, SLC-Day, SLC-Night, and Greenwood models for $C_n^2$ with respect to altitude.

Figure 4. Illustration of isoplanatic angle

Figure 5. A sample phase screen generated via the Frozen Seeing method

Figure 6. Sample vector with a few random numbers with the zero mean unitary Gaussian distribution.

Figure 7. Sample $X_i(t)$ temporal function generated from a vector of a few random elements.

Figure 8. Temporal and magnitude difference between (a) tip and tilt and (b) higher order aberrations.

Figure 9. Seconds to compute an NxN phase screen of atmosphere.

Figure 10. Simulation of the Frozen Seeing model

Figure 11. (a) Sample wavefront of atmosphere with $D/r_0 = 2.25$ and its (b) SVD of Zernike polynomials

Figure 12. The $a_i$'s progression over time for (a) tip and tilt and (b) higher order Zernike terms in a simulation of atmospheric turbulence generated via the Frozen Seeing method.

Figure 13. The $a_i$'s progression over time for (a) tip and tilt and (b) higher order Zernike terms in a simulation of atmospheric turbulence generated via the Spline Technique.

Figure 14. Graphical user interface for Holoeye Atmospheric Turbulence System

Figure 15. Sample (a) and (c) wavefronts and (b) and (d) their corresponding PSFs due to atmospheric turbulence with a 0.4 meter telescope and an $r_0$ of 1 cm.

Figure 16.
PSF measurements of the two sample wavefronts in the optical system

Figure 17. PSFs of (a) open-loop and (b) closed-loop frames with x and y cross section plots.

# A New Method of Generating Atmospheric Turbulence with a Liquid Crystal Spatial Light Modulator

Christopher C Wilcox [1] and Dr. Sergio R Restaino [1]

[1] Naval Research Laboratory, United States of America

## 1. Introduction

Light traveling from a star, or any point source, will propagate spherically outward. After a long distance, the wavefront, or surface of equal phase, will be flat, as illustrated in Fig. 1.

#### Figure 1. Flat wavefront after a long propagation distance from a point source

When the light begins to propagate through Earth's atmosphere, the varying index of refraction will alter the optical path, as shown in Fig. 2. The Earth's atmosphere can be described as a locally homogeneous medium whose properties vary with temperature, pressure, wind velocities, humidity and many other factors. Also, the Earth's atmosphere changes temporally in a quasi-random fashion. All of these processes are usually simply referred to as "atmospheric turbulence". The Kolmogorov model of energy distribution in a turbulent medium is a useful statistical model to describe the fluctuation in refractive index due mostly to the humidity and pressure changes. This model was first proposed by the Russian mathematician Andreï Kolmogorov in 1941 and describes how, in a fully turbulent medium, the kinetic energy of large-scale motions is transferred to smaller and smaller scale motions (Kolmogorov, 1941). It is supported by a variety of experimental measurements and is quite widely used in simulations of the propagation of electromagnetic waves through a random medium. The first author to fully describe such phenomena was Tatarski in his textbook "Wave Propagation in a Turbulent Medium" (Tatarski, 1961).
The complex and random nature of the effect of Earth's atmospheric turbulence on wave propagation is currently a subject of active research and experimental measurement. Many of the parameters of Earth's atmospheric turbulence can be, at best, described statistically.

#### Figure 2. Propagation of light from a distant source that then passes through the atmosphere

These statistical parameters represent the strength and changeability of the atmospheric turbulence; these conditions are customarily referred to as the "astronomical seeing", as they are widely used for astronomical applications. It is with this statistical information about a certain astronomical site and the specifications of the telescope that an Adaptive Optics (AO) system can be designed to correct the wavefront distortions caused by the atmosphere at that site. As telescopes continue to be manufactured larger and larger, the need for AO is increasing because of the limiting factors caused by atmospheric turbulence. In order to adequately characterize the performance of a particular AO system, an accurate spatial and temporal model of the Earth's atmosphere is required.

AO is the term used for a class of techniques dealing with the correction of wavefront distortions in an optical system in real time. Some wavefront distortions may include those caused by the atmosphere. Astronomical applications of AO particularly include the correction of atmospheric turbulence for a telescope system. Other possible applications include free-space laser communications, high-energy laser applications, and phase correction for deployable space-based telescopes and imaging systems. However, prior to deployment, an AO system requires calibration and full characterization in a laboratory environment. Many techniques are currently being used with AO systems for simulating atmospheric turbulence. Some static components use glass phase screens with holograms etched into them.
In addition, it is also important to simulate the temporal transitions of atmospheric turbulence. Some of these methods include the use of a static aberrator, such as a clear piece of plastic or glass etched phase screen, and rotating it. Rotating filter wheels with etched holographic phase screens can simulate temporal transitions, as well. Also, simply using a hot-plate directly under the beam path in an optical system can simulate temporally the atmospheric turbulence. However, etching holographic phase screens into glass can be quite costly and not very flexible to simulate different atmospheric characteristics. Thus, one would need more than one phase screen. A testbed that simulates atmospheric aberrations far more inexpensively and with greater fidelity and flexibility can be achieved using a Liquid Crystal (LC) Spatial Light Modulator (SLM). This system allows the simulation of atmospheric seeing conditions ranging from very poor to very good and different algorithms may be easily employed on the device for comparison. These simulations can be dynamically generated and modified very quickly and easily. ## 2. Background ### 2.1. Brief history of the study of atmospheric turbulence Ever since Galileo took a first look at the moons of Jupiter through one of the first telescopes, astronomers have strived to understand our universe. Within the last century, telescopes have enabled us to learn about the far reaches of our universe, even the acceleration of the expansion of the universe, itself. The field of building telescopes has been advancing much in recent years. The twin Keck Telescopes on the summit of Hawaii’s dormant Mauna Kea volcano measure 10 meters and are currently the largest optical telescopes in the world. Plans and designs for building 30 and 100 meter optical telescopes are underway. As these telescope apertures continue to grow in diameter, the Earth’s atmosphere degrades the images we try to capture more and more. 
As Isaac Newton said in his book Opticks in 1717, "… the air through which we look upon the stars is in perpetual tremor; as may be seen by the tremulous motion of shadows cast from high towers, and by the twinkling of the fixed stars…. The only remedy is a most serene and quiet air, such as may perhaps be found on the tops of high mountains above grosser clouds." It was at this time that we first realized that the Earth's atmosphere was the major contributor to image quality for ground-based telescopes. The light arriving from a distant object, such as a star, is corrupted by turbulence-induced spatial and temporal fluctuations in the index of refraction of the air. In 1941, Kolmogorov published his treatise on the statistics of the energy transfer in a turbulent flow of a fluid medium. Tatarskii used this model to develop the theory of electromagnetic wave propagation through such a turbulent medium. Then, Fried used Tatarskii's model to introduce measurable parameters that can be used to characterize the strength of the atmospheric turbulence.

The theory of linear systems allows us to understand how a system transforms an input just by defining the characteristic functions of the system itself. Such a characteristic function is represented by a linear operator operating on an impulse function. The characteristic system function is generally called the "impulse response function". Very often, such an operator is the so-called Fourier transform. An imaging system can be approximated by a linear, shift-invariant system over a wide range of applications. The next few sections will explain the use of the Fourier transform in such an optical imaging system and its applications to optical aberrations.

### 2.2. Brief overview of Fourier optics and mathematical definitions

A fantastic tool for the mathematical analysis of many types of phenomena is the Fourier transform.
The 2-dimensional Fourier transform of the function $g(x,y)$ is defined as

$$G(f_x,f_y)=\mathcal{F}\{g(x,y)\}=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x,y)\,e^{-j2\pi(f_x x+f_y y)}\,dx\,dy$$

where, for an imaging system, the $x$-$y$ plane is the entrance pupil and the $f_x$-$f_y$ plane is the imaging plane. A common representation of the Fourier transform of a function uses lower case for the space domain and upper case for the Fourier-transform, or frequency, domain. Similarly, the inverse Fourier transform of the function $G(f_x,f_y)$ is defined as

$$g(x,y)=\mathcal{F}^{-1}\{G(f_x,f_y)\}=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} G(f_x,f_y)\,e^{j2\pi(f_x x+f_y y)}\,df_x\,df_y$$

There exist various properties of the Fourier transform. The linearity property states that the Fourier transform of the sum of two or more functions is the sum of their individual Fourier transforms:

$$\mathcal{F}\{a\,g(x,y)+b\,f(x,y)\}=a\,\mathcal{F}\{g(x,y)\}+b\,\mathcal{F}\{f(x,y)\}$$

where $a$ and $b$ are constants. The scaling property states that stretching or skewing of a function in the $x$-$y$ domain results in skewing or stretching of the Fourier transform, respectively:

$$\mathcal{F}\{g(ax,by)\}=\frac{1}{|ab|}\,G\!\left(\frac{f_x}{a},\frac{f_y}{b}\right)$$

where $a$ and $b$ are constants. The shifting property states that translation of a function in the space domain introduces a linear phase shift in the frequency domain:

$$\mathcal{F}\{g(x-a,y-b)\}=G(f_x,f_y)\,e^{-j2\pi(af_x+bf_y)}$$

where $a$ and $b$ are constants. This property is of particular interest in the mathematical analysis of tip and tilt in an optical system, as it describes horizontal or vertical position in the imaging plane. Parseval's theorem is generally known as a statement of the conservation of energy:

$$\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}|g(x,y)|^2\,dx\,dy=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}|G(f_x,f_y)|^2\,df_x\,df_y$$

The convolution property states that the convolution of two functions in the space domain is exactly equivalent to the multiplication of the two functions' Fourier transforms, which is usually a much simpler operation.
The convolution of two functions is defined as

$$g(x,y)\ast f(x,y)=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(\xi,\eta)\,f(x-\xi,y-\eta)\,d\xi\,d\eta$$

The convolution property is then

$$\mathcal{F}\{g(x,y)\ast f(x,y)\}=G(f_x,f_y)\,F(f_x,f_y)$$

A special case of the convolution property is known as the autocorrelation property:

$$\mathcal{F}\{g(x,y)\star g^{*}(x,y)\}=|G(f_x,f_y)|^{2}$$

where the superscript $*$ denotes the complex conjugate of the function $g(x,y)$ and $\star$ denotes correlation. The autocorrelation property gives the Power Spectral Density (PSD) of a function and is a useful way to interpret a spatial function's frequency content. The square of the magnitude of the $G(f_x,f_y)$ function is also referred to as the Point Spread Function (PSF). The PSF is the imaging equivalent of the impulse response function. It is easy to see that the PSF represents the spreading of energy on the output plane of a point source at infinity. The spatial variation as a function of spatial frequency is described by the Optical Transfer Function (OTF). The OTF is defined as the Fourier transform of the PSF:

$$\mathrm{OTF}=\mathcal{F}\{|G(f_x,f_y)|^{2}\}=\mathcal{F}\{\mathrm{PSF}\}$$

The Modulation Transfer Function (MTF) is the magnitude of the OTF:

$$\mathrm{MTF}=\left|\mathcal{F}\{|G(f_x,f_y)|^{2}\}\right|=\left|\mathcal{F}\{\mathrm{PSF}\}\right|$$

Two common aperture geometries, or pupil functions, that will be discussed are the rectangular and circular apertures. The rectangular aperture is defined as

$$\mathrm{rect}\!\left(\frac{x}{k},\frac{y}{l}\right)=\begin{cases}1 & |x|\leq \frac{k}{2}\ \text{and}\ |y|\leq \frac{l}{2}\\ 0 & \text{otherwise}\end{cases}$$

where $k$ and $l$ are positive constants that refer to the length and width of the aperture, respectively. The circular aperture is defined as

$$\mathrm{circ}\!\left(\frac{\rho}{l}\right)=\begin{cases}1 & \rho\leq l,\ \ \rho=\sqrt{x^2+y^2}\\ 0 & \text{otherwise}\end{cases}$$

where $l$ is a positive constant referring to the radius of the aperture. These pupil functions become of great use when analyzing an imaging system with these apertures. For the purposes of this discussion, a circular aperture will be considered, as it is of particular use with Zernike polynomials and Karhunen-Loève polynomials, which will be discussed later.
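The convolution property is the workhorse of Fourier-optics computation, so it is worth seeing it hold numerically. A discrete one-dimensional sketch (pure Python with a naive DFT and circular convolution; illustration only, since practical codes use FFTs):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence x."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circ_conv(g, f):
    """Circular convolution: (g * f)[n] = sum_m g[m] f[(n - m) mod N]."""
    N = len(g)
    return [sum(g[m] * f[(n - m) % N] for m in range(N)) for n in range(N)]

g = [1.0, 2.0, 0.0, -1.0]
f = [0.5, 0.0, 1.0, 0.0]

lhs = dft(circ_conv(g, f))                     # F{g * f}
rhs = [G * F for G, F in zip(dft(g), dft(f))]  # G(f) F(f)

# Convolution theorem: the two sides agree to floating-point precision
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

The same identity in two dimensions is what lets a PSF be computed as a transform of the pupil function rather than as a double integral.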
In order to include the effects of aberrations, it is useful to introduce the concept of a "generalized pupil function". Such a function is complex in nature, and the argument of the imaginary exponential is a function that represents the optical phase aberrations:

$$\mathbb{P}(x,y)=P(x,y)\,e^{j\frac{2\pi}{\lambda}W(x,y)}$$

where $P(x,y) = \mathrm{circ}(\rho)$, $\lambda$ is the wavelength, and $W(x,y)$ is the effective path-length error, or error in the wavefront. It is through this wavefront error that atmospheric turbulence degrades the image quality of an optical system and induces aberrations. This wavefront error can be induced in an optical system through the use of an LC SLM. The next several sections will describe methods of simulating atmospheric turbulence in an optical system and introduce the new method of simulating atmospheric turbulence developed at the Naval Research Laboratory.

### 2.3. Optical aberrations as Zernike polynomials

The primary goal of AO is to correct an aberrated, or distorted, wavefront. A wavefront with aberrations can be described by the sum of an orthonormal set of polynomials, of which there are many. One specific set is the so-called Zernike polynomials, $Z_i(\rho,\theta)$, given by

$$Z_i(\rho,\theta)=\begin{cases}\sqrt{n+1}\,R_n^m(\rho)\,\sqrt{2}\cos(m\theta) & \text{for}\ m\neq 0\ \text{and}\ i\ \text{even}\\ \sqrt{n+1}\,R_n^m(\rho)\,\sqrt{2}\sin(m\theta) & \text{for}\ m\neq 0\ \text{and}\ i\ \text{odd}\\ \sqrt{n+1}\,R_n^0(\rho) & \text{for}\ m=0\end{cases}$$

where

$$R_n^m(\rho)=\sum_{s=0}^{\frac{n-m}{2}}\frac{(-1)^s\,(n-s)!}{s!\left(\frac{n+m}{2}-s\right)!\left(\frac{n-m}{2}-s\right)!}\,\rho^{n-2s}$$

The azimuthal and radial orders of the Zernike polynomials, $m$ and $n$ respectively, satisfy the conditions $m \leq n$ and $n-m$ even, and $i$ is the Zernike order number (Roggemann & Welsh, 1996). The Zernike polynomials are used because, among other reasons, the first few terms resemble the classical aberrations well known to lens makers. The Zernike order number is related to the azimuthal and radial orders via the numerical pattern in Table 1.
| i | n | m | i | n | m | i | n | m | i | n | m |
|----|----|----|----|----|----|----|----|----|----|----|----|
| 1 | 0 | 0 | 8 | 3 | -1 | 15 | 4 | -4 | 22 | 6 | 0 |
| 2 | 1 | 1 | 9 | 3 | 3 | 16 | 5 | 1 | 23 | 6 | 2 |
| 3 | 1 | -1 | 10 | 3 | -3 | 17 | 5 | -1 | 24 | 6 | -2 |
| 4 | 2 | 0 | 11 | 4 | 0 | 18 | 5 | 3 | 25 | 6 | 4 |
| 5 | 2 | 2 | 12 | 4 | 2 | 19 | 5 | -3 | 26 | 6 | -4 |
| 6 | 2 | -2 | 13 | 4 | -2 | 20 | 5 | 5 | 27 | 6 | 6 |
| 7 | 3 | 1 | 14 | 4 | 4 | 21 | 5 | -5 | 28 | 6 | -6 |

### Table 1. Relationship between Zernike order and azimuthal and radial orders

Zernike polynomials represent aberrations from low to high order as the order number increases. A wavefront can generally be represented by

$$\mathrm{Wavefront}(\rho,\theta)=\sum_{i=1}^{M}a_i\,Z_i(\rho,\theta)$$

where the $a_i$'s are the amplitudes of the aberrations and $M$ is the total number of Zernike orders by which the wavefront is represented. This wavefront can be substituted into Equation (14), as it represents the phase in an imaging system.

### 2.4. Kolmogorov's statistical model of atmospheric turbulence

The Sun's heating of land and water masses heats the surrounding air. The buoyancy of air is a function of temperature, so as the air is heated it expands and begins to rise. As this air rises, the flow becomes turbulent. The index of refraction of air is very sensitive to temperature. Kolmogorov's model provides a solid mathematical foundation for the spatial fluctuations of the index of refraction of the atmosphere. The index of refraction of air is given by

$$n(\vec{r},t)=n_0+n_1(\vec{r},t)$$

where $\vec{r}$ is the 3-dimensional space vector, $t$ is time, $n_0$ is the average index of refraction, and $n_1(\vec{r},t)$ is the spatial variation of the index of refraction. For air, we may say $n_0 = 1$. At optical wavelengths, the dependence of the index of refraction of air upon pressure and temperature is $n_1 = n - 1 = 77.6\times 10^{-6}\,P/T$, where $P$ is in millibars and $T$ is in Kelvin.
The index of refraction for air can now be given as

$n(P,T) = 1 + 77.6\times10^{-6}\, \frac{P}{T}$

Differentiating the index of refraction with respect to temperature gives

$\frac{\partial}{\partial T} n(P,T) = -77.6\times10^{-6}\, \frac{P}{T^2}$

From Equation (20), we can see that the change in index of refraction with respect to temperature cannot be ignored (Roggemann & Welsh, 1996). The atmosphere constantly contains many slight variations of temperature, and these affect the index of refraction enough to degrade the resolution of an imaging system. As light propagates through Earth's atmosphere, the varying index of refraction alters the optical path slightly. To a fairly good approximation, the temperature and pressure can be treated as random variables. Because of the apparently random nature of Earth's atmosphere, it can at best be described statistically. It is with this statistical information about a particular astronomical site, together with the specifications of the telescope, that an adaptive optics system can be designed to correct the wavefront distortions caused by the atmosphere at that site. The quantity $C_n^2$ is called the structure constant of the index of refraction fluctuations, with units of $m^{-2/3}$ (Roggemann & Welsh, 1996); it is a measurable quantity that indicates the strength of turbulence with altitude in the atmosphere. The value of $C_n^2$ can vary from ~$10^{-17}\ m^{-2/3}$ or less in weak conditions to ~$10^{-13}\ m^{-2/3}$ or more in strong conditions (Andrews, 2004). $C_n^2$ can have peak values during midday, near-constant values at night, and minimum values near sunrise and sunset; this daily variation is known as the diurnal cycle.

### Figure 3. Plots of the Hufnagel-Valley, SLC-Day, SLC-Night, and Greenwood models for $C_n^2$ with respect to altitude.

Some commonly accepted models of $C_n^2(h)$ as functions of height are the Hufnagel-Valley, SLC-Day, SLC-Night and Greenwood models.
The Hufnagel-Valley model is written as

$C_n^2(h) = 0.00594\left(\frac{v_w}{27}\right)^2 \left(10^{-5} h\right)^{10} e^{-h/1000} + 2.7\times10^{-16}\, e^{-h/1500} + C_n^2(0)\, e^{-h/100}$

where $v_w$ is the rms wind speed and $C_n^2(0)$ is the ground-level value of the structure constant of the index of refraction. The SLC-Day model is written as

$C_n^2(h) = \begin{cases} 1.7\times10^{-14} & 0 < h < 18.5 \\ 3.13\times10^{-13}\, h^{-1.05} & 18.5 < h < 240 \\ 1.3\times10^{-15} & 240 < h < 880 \\ 8.87\times10^{-7}\, h^{-3} & 880 < h < 7200 \\ 2.0\times10^{-16}\, h^{-1/2} & 7200 < h < 20000 \end{cases}$

The SLC-Night model is written as

$C_n^2(h) = \begin{cases} 8.4\times10^{-15} & 0 < h < 18.5 \\ 2.87\times10^{-12}\, h^{-2} & 18.5 < h < 110 \\ 2.5\times10^{-16} & 110 < h < 1500 \\ 8.87\times10^{-7}\, h^{-3} & 1500 < h < 7200 \\ 2.0\times10^{-16}\, h^{-1/2} & 7200 < h < 20000 \end{cases}$

The Greenwood model is written as

$C_n^2(h) = \left[2.2\times10^{-13}\,(h+10)^{-1.3} + 4.3\times10^{-17}\right] e^{-h/4000}$

In each of these models, h may be replaced by $h\cos(\theta_z)$ if the optical path is not vertical (at zenith), where $\theta_z$ is the angle away from zenith.

### 2.5. Fried and Noll's model of turbulence

The fact that a wavefront can be expressed as a sum of Zernike polynomials is the basis for Noll's analysis of how to express the phase distortions due to the atmosphere in terms of Zernike polynomials. Fried's parameter, also known as the coherence length of the atmosphere and represented by $r_0$, is a statistical description of the level of atmospheric turbulence at a particular site. Fried's parameter, expressed in centimeters, is given by

$r_0 = \left[0.423\, k^2 \sec\zeta \int_{Path} C_n^2(z)\, dz\right]^{-3/5}$

where $k = 2\pi/\lambda$, λ is the wavelength, ζ is the zenith angle, and the path of integration runs from the light source to the telescope's aperture along the z axis. The value of $r_0$ ranges from under 5 cm in poor seeing conditions to more than 25 cm in excellent seeing conditions in the visible light spectrum. The coherence length limits a telescope's resolution such that a large-aperture telescope without AO does not provide any better resolution than a telescope with a diameter of $r_0$ (Andrews, 2004). In conjunction with $r_0$, another important parameter is the isoplanatic angle, $\theta_0$, expressed in milli-arcseconds and given and approximated by

$\theta_0 = \left[2.91\, k^2 \sec^{8/3}\zeta \int_{Path} C_n^2(z)\, z^{5/3}\, dz\right]^{-3/5} \approx 0.4125\, r_0$
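For illustration, the Hufnagel-Valley profile can be integrated numerically to obtain Fried's parameter from the expression above. This is a rough sketch: the default $v_w$ and $C_n^2(0)$ values are the common HV-5/7 choices (my assumption, not values stated in this chapter), and the function names are illustrative.

```python
import math

def cn2_hufnagel_valley(h, v_w=21.0, cn2_ground=1.7e-14):
    """Hufnagel-Valley C_n^2(h) profile, h in meters.

    Defaults (v_w = 21 m/s, C_n^2(0) = 1.7e-14 m^(-2/3)) are the usual
    HV-5/7 parameter set -- an assumption, not from this chapter."""
    return (0.00594 * (v_w / 27.0) ** 2 * (1e-5 * h) ** 10 * math.exp(-h / 1000.0)
            + 2.7e-16 * math.exp(-h / 1500.0)
            + cn2_ground * math.exp(-h / 100.0))

def fried_parameter(wavelength, zenith=0.0, h_max=20000.0, dh=10.0):
    """r_0 = [0.423 k^2 sec(zeta) * integral of C_n^2(h) dh]^(-3/5),
    integrated here with a crude Riemann sum; result in meters."""
    k = 2.0 * math.pi / wavelength
    integral = 0.0
    h = 0.0
    while h < h_max:
        integral += cn2_hufnagel_valley(h) * dh
        h += dh
    return (0.423 * k ** 2 * integral / math.cos(zenith)) ** (-3.0 / 5.0)
```

With these defaults, `fried_parameter(500e-9)` comes out at a few centimeters, consistent with the visible-light $r_0$ range quoted above.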
The isoplanatic angle describes the maximum angular separation between the paths of two objects such that they still traverse essentially the same atmosphere. This is illustrated in Fig. 4. It is also important to remember that the atmosphere is a statistically described random medium with temporal as well as spatial dependence. One common simplification is to assume that the wind causes the majority of the temporal distortions. The length of time over which the atmosphere remains roughly static is represented by $\tau_0$ and is approximated by

$\tau_0 = \left[2.91\, k^2 \sec\zeta \int_{Path} C_n^2(z)\, v_w^{5/3}(z)\, dz\right]^{-3/5} \approx 0.314\, \frac{r_0}{v_w}\left(\frac{D}{r_0}\right)^{1/6}$

where $v_w$ is the average wind speed at ground level, and D is the telescope aperture.

### Figure 4. Illustration of isoplanatic angle

| Zernike Mode | Zernike-Kolmogorov residual error |
|----|----|
| Tip | $\Delta_1 = 1.0299\,(D/r_0)^{5/3}$ |
| Tilt | $\Delta_2 = 0.5820\,(D/r_0)^{5/3}$ |
| Focus | $\Delta_3 = 0.1340\,(D/r_0)^{5/3}$ |
| Astigmatism X | $\Delta_4 = 0.1110\,(D/r_0)^{5/3}$ |
| Astigmatism Y | $\Delta_5 = 0.0880\,(D/r_0)^{5/3}$ |
| Coma X | $\Delta_6 = 0.0648\,(D/r_0)^{5/3}$ |
| Coma Y | $\Delta_7 = 0.0587\,(D/r_0)^{5/3}$ |
| Trefoil X | $\Delta_8 = 0.0525\,(D/r_0)^{5/3}$ |
| Trefoil Y | $\Delta_9 = 0.0463\,(D/r_0)^{5/3}$ |
| Spherical | $\Delta_{10} = 0.0401\,(D/r_0)^{5/3}$ |
| Secondary Astigmatism X | $\Delta_{11} = 0.0377\,(D/r_0)^{5/3}$ |
| Secondary Astigmatism Y | $\Delta_{12} = 0.0352\,(D/r_0)^{5/3}$ |
| Higher orders (J > 12) | $\Delta_J = 0.2944\, J^{-\sqrt{3}/2}\,(D/r_0)^{5/3}$ |

### Table 2. Zernike-Kolmogorov residual errors, $\Delta_J$, and their relation to $D/r_0$

The three parameters $r_0$, $\theta_0$, and $\tau_0$ are required to know the limitations and capabilities of a particular site in terms of imaging objects through the atmosphere. To make a realization of a wavefront distorted by the Earth's atmosphere, Fried derived Zernike-Kolmogorov residual errors (Fried, 1965; Noll, 1976; Hardy, 1998). The $a_i$'s in Equation (17) are calculated from the Zernike-Kolmogorov residual errors, $\Delta_J$, measured through many experimental procedures and calculated by Fried (Fried, 1965) and by Noll (Noll, 1976); they are given in Table 2. Thus, a realization of atmospheric turbulence can be simulated for different severities of turbulence and for different apertures.

### 2.6.
Frozen Seeing model of atmospheric turbulence

The time dependence of atmospheric turbulence is very complex to simulate and even harder to generate in a laboratory environment. One common and widely accepted method of simulating the temporal effects of atmospheric turbulence is the use of Frozen Seeing, also known as the Taylor approximation (Roggemann & Welsh, 1996). This approximation assumes that a realization of a large portion of atmosphere drifts across the aperture of interest with a constant velocity determined by local wind conditions, but without any other change whatsoever (Roddier, 1999). This technique has proved to be a good approximation given the limited capabilities of simulating accurate turbulence conditions in a laboratory environment. For example, a large holographic phase screen can be generated and simply moved across an aperture while measurements are made. A sample realization of atmospheric turbulence with a ratio of $D/r_0 = 2.25$ can be seen in Fig. 5.

### Figure 5. A sample phase screen generated via the Frozen Seeing method

## 3. New method of generating atmospheric turbulence with temporal dependence

In this section, a new method of generating atmospheric turbulence is introduced. This method accounts for the temporal and spatial effects of atmospheric turbulence and is designed to be usable in a laboratory with a LC SLM. Among its advantages are far smaller computational requirements than the Frozen Seeing model in software. In addition, the use of Karhunen-Loeve polynomials is introduced in place of Zernike polynomials, as they form a statistically independent set of orthonormal polynomials.

### 3.1. Karhunen-Loeve polynomials

Karhunen-Loeve polynomials are each a sum of Zernike polynomials; however, they have statistically independent coefficients (Roddier, 1999).
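The Taylor approximation described above amounts to a sliding-window read of a fixed array: successive aperture-sized subsections of one large phase screen stand in for temporal evolution. A toy sketch in pure Python (hypothetical names, horizontal drift only):

```python
def frozen_seeing_frames(phase_screen, aperture, step=1):
    """Frozen Seeing as a sliding window: return successive aperture-sized
    subsections of one fixed N x N phase screen, emulating wind-driven
    drift.  phase_screen is an N x N nested list of phase values."""
    n = len(phase_screen)
    frames = []
    # Slide the window across the screen; each position is one "instant".
    for x0 in range(0, n - aperture + 1, step):
        frames.append([row[x0:x0 + aperture] for row in phase_screen[:aperture]])
    return frames
```

Note that the screen itself never changes — only the window position does — which is exactly why a Frozen Seeing simulation ends once the window reaches the screen's edge.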
This is important due to the nature of atmospheric turbulence as described by the Kolmogorov model, which follows Kolmogorov statistics. The Karhunen-Loeve polynomials are given by

$K_p(\rho,\theta) = \sum_{j=1}^{N} b_{p,j} Z_j(\rho,\theta)$

where the $b_{p,j}$ matrix is calculated and given by Wang and Markey (Wang & Markey, 1978), and N is the number of Zernike orders by which the Karhunen-Loeve order p is represented. Thus, to represent a wavefront, Equation (17) can be rewritten as

$\text{Wavefront}(\rho,\theta) = \sum_{i=1}^{M} a_i K_i(\rho,\theta)$

and the wavefront is now represented as a sum of Karhunen-Loeve polynomials with the Zernike-Kolmogorov residual error weights in the $a_i$'s.

### 3.2. Spline technique

Tatarski's model describes the phase variances as having a Gaussian random distribution (Tatarski, 1961). So, taking Equation (29) and modifying it so that Gaussian random noise is factored in gives

$\text{Wavefront}(\rho,\theta) = \sum_{i=1}^{M} X_i\, a_i K_i(\rho,\theta)$

where $X_i$ is the amount of noise for the i-th mode based on a zero-mean unitary Gaussian random distribution and the $a_i$'s are the amplitudes of the aberrations calculated from the Zernike-Kolmogorov residual errors in Table 2. The $X_i$'s in Equation (30) can be generated by just using randomly generated numbers. But generating a continuous temporal transition for the atmospheric turbulence realization requires another method. The $X_i$'s can be modified from being just random numbers to a continuous function of time for each mode. Thus, Equation (30) can be rewritten as

$\text{Wavefront}(\rho,\theta) = \sum_{i=1}^{M} X_i(t)\, a_i K_i(\rho,\theta)$

where the $X_i(t)$ function here is generated by, first, creating a vector of a few random numbers with the zero-mean unitary Gaussian distribution, as in Fig. 6.

### Figure 6. Sample vector with a few random numbers with the zero mean unitary Gaussian distribution.

Next, a spline curve is fit to this vector of a few random numbers, shown in Fig.
7, and this spline curve is now the $X_i(t)$ temporal function for generating the wavefronts in the atmospheric turbulence simulation. Without this splining technique, the change between phase screens would be discontinuous and would not provide an accurate representation of the atmosphere for testing an adaptive optics system; in reality, the Earth's atmosphere is a continuous medium. With this technique, the temporal transition of the wavefronts in the atmospheric turbulence simulation is continuous and smooth. Also, in conjunction with the use of Karhunen-Loeve polynomials, a statistically independent realization of the atmosphere is preserved. It has been shown in various experiments that the first-order aberrations, i.e. tip and tilt, are larger in magnitude and vary less with respect to time (Born & Wolf, 1997; Wilcox, 2005). To further validate the Spline technique, one can take this into account by using a vector of fewer numbers for tip and tilt than for the higher-order aberrations; the larger magnitude is taken care of by the Zernike-Kolmogorov residual errors used with the $a_i$'s in Equation (31). Fig. 8 illustrates the temporal difference between the transitions of tip and tilt and those of some higher-order aberrations. In the next section, a comparison between this technique and the Frozen Seeing model is analyzed and discussed.

### Figure 7. Sample $X_i(t)$ temporal function generated from a vector of a few random elements.

### Figure 8. Temporal and magnitude difference between (a) tip and tilt and (b) higher order aberrations.

## 4. Comparison of spline technique to the Frozen Seeing model

The method of generating atmospheric turbulence with temporal evolution described in the previous section offers various advantages over the Frozen Seeing model. The computational time to generate an NxN phase screen of atmosphere grows rapidly with N. Fig.
9 illustrates the number of seconds required to generate a phase screen of atmosphere using the Frozen Seeing model.

#### Figure 9. Seconds to compute an NxN phase screen of atmosphere.

#### Figure 10. Simulation of the Frozen Seeing model

Once the phase screen is generated, to simulate the atmospheric turbulence on a SLM with the Frozen Seeing model, a subsection of that image of appropriate size is taken and used as the phase screen representing the atmosphere at a moment in time. That subsection is then drifted across the large phase screen to represent the next moment in time, simulating the behavior of wind. This process is repeated until the edge of the NxN phase screen is reached, as shown in the illustration in Fig. 10. One can clearly see that atmospheric turbulence generated in this fashion will last for only a few seconds. Increasing the size of the large phase screen, N, would allow for a longer simulation, but generating that phase screen would be a computational burden. In addition, the number of elements in an NxN array scales as N^2, which quickly leads to an image size of dozens of megapixels and eventually exhausts the software's memory. Alternatively, rather than drifting the subsection of the large phase screen across in a straight line, one can drift it in a circular motion about the large phase screen, but this leads to a simulation of atmosphere that is very repetitious. Using the Spline technique outlined in the previous section, one can realistically simulate atmospheric turbulence for a longer period of time with far smaller computational requirements. To compare the Frozen Seeing model to the Spline technique, each subsection of the larger phase screen can be analyzed with a singular value decomposition (SVD) of the numerical values to calculate the Zernike coefficients, $a_i$'s, of Equation (17) with M = 24. Fig.
11 illustrates the SVD (b) of a sample wavefront from a realization of atmosphere with $D/r_0 = 2.25$ (a); the values of the SVD are listed in Table 3. Next, this process is repeated as the subsection is drifted across the large phase screen, to show the temporal transition of the $a_i$'s.

#### Figure 11. a) Sample wavefront of atmosphere with $D/r_0 = 2.25$ and its (b) SVD of Zernike polynomials

The SVD representation in Fig. 11 (b) of the wavefront in Fig. 11 (a) has a fitted percent error of less than 2%. The progression of the $a_i$'s over time is expected to change in a quasi-random fashion. It can be seen in Fig. 12 (a) and (b) that the temporal transitions of the $a_i$'s resemble the temporal transitions in the Spline technique, as shown in Fig. 13. Furthermore, the tilt components, $a_2$ and $a_3$, are larger in magnitude than the higher orders, which is consistent with the turbulence model outlined by the Zernike-Kolmogorov residual errors and the Spline technique.

| i | ai | i | ai |
|----|----------|----|----------|
| 1 | -3.74173 | 13 | 0.344473 |
| 2 | 1.3423 | 14 | -0.09171 |
| 3 | 0.384392 | 15 | 0.045092 |
| 4 | -0.47419 | 16 | 0.179839 |
| 5 | -1.51239 | 17 | 0.003466 |
| 6 | 0.398136 | 18 | -0.01032 |
| 7 | -0.2772 | 19 | -0.14574 |
| 8 | -0.4476 | 20 | 0.052195 |
| 9 | -0.21973 | 21 | 0.121335 |
| 10 | 0.249914 | 22 | 0.354472 |
| 11 | -0.06698 | 23 | -0.18282 |
| 12 | 0.267201 | 24 | -0.24076 |

#### Table 3. Zernike polynomial coefficients that make up a sample representation of atmosphere with a $D/r_0 = 2.25$

#### Figure 12. The $a_i$'s progression over time for (a) tip and tilt and (b) higher order Zernike terms in a simulation of atmospheric turbulence generated via the Frozen Seeing method.

#### Figure 13. The $a_i$'s progression over time for (a) tip and tilt and (b) higher order Zernike terms in a simulation of atmospheric turbulence generated via the Spline Technique.
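The Spline technique of Section 3.2 can be mimicked in a few lines: draw a short vector of zero-mean, unit-variance Gaussian knots and interpolate a smooth curve through them to obtain $X_i(t)$. The sketch below uses a Catmull-Rom spline as a stand-in for whatever spline routine the authors used (an assumption on my part); function names are mine.

```python
import random

def catmull_rom(knots, t):
    """Evaluate a Catmull-Rom spline through equally spaced knots at
    0 <= t <= len(knots) - 1; the curve passes through every knot."""
    i = min(int(t), len(knots) - 2)
    u = t - i
    p0 = knots[max(i - 1, 0)]
    p1, p2 = knots[i], knots[i + 1]
    p3 = knots[min(i + 2, len(knots) - 1)]
    return 0.5 * (2 * p1 + (-p0 + p2) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * u ** 3)

def temporal_coefficient(n_knots, rng=random):
    """X_i(t) for one mode: a smooth curve through a short vector of
    zero-mean, unit-variance Gaussian random knots, as in Section 3.2."""
    knots = [rng.gauss(0.0, 1.0) for _ in range(n_knots)]
    return knots, (lambda t: catmull_rom(knots, t))
```

Using fewer knots for tip and tilt than for the higher-order modes reproduces the slower temporal variation of those modes described above.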
By visual inspection, the two methods simulate atmospheric turbulence in a similar way. A statistical measure of the similarity of the two methods is the cross-correlation of their respective $a_i$'s.

| Zernike order | Aberration | Average Cross-Correlation |
|---------------|--------------------|---------------------------|
| a2 | Tip | 0.7004 |
| a3 | Tilt | 0.7471 |
| a4 | Focus | 0.6686 |
| a5 | Astigmatism X | 0.7433 |
| a6 | Astigmatism Y | 0.5937 |
| a7 | Coma X | 0.5981 |
| a8 | Coma Y | 0.6703 |
| a9 | Trefoil X | 0.6909 |
| a10 | Trefoil Y | 0.4878 |
| a11 | Spherical | 0.5910 |
| a12 | Sec. Astigmatism X | 0.5277 |

#### Table 4. Average cross-correlation values for each $a_i$

After generating and analyzing ten realizations of atmosphere from the Frozen Seeing method, the average cross-correlation values are summarized in Table 4. The overall average of these cross-correlation values is 0.6381. This shows a consistency between the generally accepted Frozen Seeing model and the new Spline technique outlined here.

## 5. System performance and results

The Holoeye LC2002 SLM used in this example is a diffractive device that can directly modulate the phase of an incoming wavefront by π radians. In order to obtain the full 2π radians of phase modulation on the impinging wavefront, one can set up a Fourier filter and use either the +1 or −1 diffractive order through the rest of the system. The graphical user interface (GUI) of the software developed for controlling this system, written in Matlab, can be seen in Fig. 14.

#### Figure 14. Graphical user interface for Holoeye Atmospheric Turbulence System

This software is capable of controlling any LC SLM. The Holoeye LC2002 is a device with 800x600 pixels and can accept a beam of 0.82″ in diameter.
The software can set up the alignment of the diffraction orders, and alignment biases may be entered to compensate for misalignments in the optical components of the overall system for maximum performance. Different algorithms for generating turbulence can be used if desired, and the parameters for the simulated telescope diameter and Fried parameter control the severity of the turbulence as well. If desired, an annular obscuration can be included in the simulation to represent a telescope's secondary mirror.

#### Figure 15. Sample (a) and (c) wavefronts and (b) and (d) their corresponding PSFs due to atmospheric turbulence with a 0.4 meter telescope and an $r_0$ of 1 cm.

Using the GUI developed, sample atmospheric conditions have been calculated and put on the SLM, and their PSFs were then measured with an imaging camera. The simulated atmospheric turbulence was calculated for a 0.4 meter telescope with seeing conditions having an $r_0$ of 1 cm. Sample wavefronts from the simulation and their theoretical PSFs can be seen in Fig. 15. The measured PSFs from the wavefronts in Fig. 15 (a) and (c) can be seen in Fig. 16 (a) and (b), and they are similar to the calculated PSFs in Fig. 15 (b) and (d), respectively. The 2-dimensional cross-correlation factors between the two frames and their theoretical counterparts are 0.9589 and 0.8638, respectively, showing that the system performs quite well and that the measured and theoretical values are consistent with each other.

#### Figure 16. PSF measurements of the two sample wavefronts in the optical system

#### Figure 17. PSFs of (a) open-loop and (b) closed loop frame with x and y cross section plots.

At the Naval Research Laboratory, we have developed an AO system for use in astronomical applications (Restaino, S.R., et al., 2008). We have simulated atmospheric turbulence with the system outlined in the previous sections and caused distortions on a laser beam for our AO system to correct.
Simulating fairly reasonable seeing conditions with $D/r_0 = 1.5$ led to a time-averaged Strehl ratio of roughly 0.32. The Strehl ratio is a common way of measuring the effect that aberrations have on an imaging system (Born & Wolf, 1997). The typical definition of the Strehl ratio is the ratio of the peak intensity of the system PSF with aberrations to that of the unaberrated system PSF. Thus, a diffraction-limited system, or a system limited only by the diffraction at the edge of the entrance pupil, will have a Strehl ratio of 1, and any aberration present in the system will cause the Strehl ratio to be less than 1. Fig. 17 (a) and (b) show the PSFs of a frame taken during open-loop and closed-loop operation with their respective x and y cross sections. There is a noticeable increase in the peak intensity of the PSF, and another very important feature is the formation of the first ring of the Airy function in the corrected PSF. After closing the loop and allowing the AO system to begin correction, the time-averaged Strehl ratio for the simulation increased to 0.84.

## 6. Summary

The method of generating atmospheric turbulence via the Spline technique behaves virtually the same as the Frozen Seeing method, with the added benefit of being far less computationally intensive. This advantage can be exploited in the development of a software package that can drive any SLM to simulate atmospheric turbulence at almost any wavelength and for any telescope diameter, so that adaptive optical and laser communication systems can be tested for performance evaluations. At the Naval Research Laboratory, such a system is currently in use, with software written in the programming language Matlab, and various tests are ongoing. Currently, two SLMs, from Holoeye and Boulder Non-Linear Systems, are being investigated, and various wavelengths are being utilized for different applications.
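Given sampled PSFs on a common grid, the Strehl ratio defined above reduces to a peak-over-peak comparison. A minimal sketch, assuming both PSFs are first normalized to unit total energy so the two frames are comparable (my normalization convention, not necessarily the chapter's):

```python
def strehl_ratio(aberrated_psf, ideal_psf):
    """Strehl ratio from sampled PSFs: peak of the aberrated PSF divided
    by the peak of the diffraction-limited PSF, each normalized to unit
    total energy.  PSFs are nested lists of non-negative intensities."""
    def normalized_peak(psf):
        total = sum(sum(row) for row in psf)
        return max(max(row) for row in psf) / total
    return normalized_peak(aberrated_psf) / normalized_peak(ideal_psf)
```

A perfect system returns 1.0, and any aberration lowers the value, matching the definition in the text.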
Future work will include the investigation of other new liquid crystal devices, as liquid crystal technology is a rapidly moving and growing field.
Physics307L F09:People/Mahony/Rough

From OpenWetWare

SJK 13:53, 2 December 2009 (EST): I think this is a very good rough draft. There is a lot of work needed (see all comments below), but I think you've written a really good foundation to make that very do-able. My guess is that the most substantial work will be required in expanding the results and conclusions section. I gave you some ideas for that, and also you'll naturally be able to expand it due to your "extra data" next week. OK, Good luck!

The Speed of Light

SJK 11:19, 30 November 2009 (EST): Author & Contact info is missing. Also, a more descriptive title is needed. As it is now, a reader may assume your article is a vast review of all topics related to the speed of light, whereas a more descriptive title could indicate that you're measuring speed of light in a specific way, etc.

Abstract

SJK 11:31, 30 November 2009 (EST): You are homing in on a short and to-the-point abstract, compared with an abstract that gives more motivation and conclusion. That's OK, if done precisely, which you can probably do. In terms of getting a bit more practice, though, I'd probably prefer that you: (1) expanded the first sentence a bit, if you can add a bit of "why" it's important. (2) Add a second sentence saying that there's a variety of extremely precise methods for measuring the speed of light. (3) then in your next sentence, preface it by saying you're using the time of flight method. (4) your sentence "by positioning the LED..." can be broken up into two sentences -- first that you created fixed distances and measured the change in time delays for each; next that you used linear regression to produce ___ m/s. (5) Finally, add a conclusion sentence--is it consistent w/ accepted value and any other comments.

The speed of light is an important fundamental constant in physics.
In this experiment we use a Time-Amplitude Converter to measure the delay between an LED and a photomultiplier tube. By positioning the LED at different distances from the PMT, and using the conversion ratio of voltage to time, we were able to fit our data to a line, with the slope corresponding to our measured speed of light of 2.941(15)*10^8 m/s.

SJK 11:25, 30 November 2009 (EST): LED is not defined and probably should be. PMT is almost defined, you just need to put (PMT) after the "photomultiplier tube." It's good to define almost all acronyms, although you can use your judgement on some of them, such as DNA, e.g.

Introduction

SJK 11:49, 30 November 2009 (EST): This is an excellent foundation for the introduction, but it's too succinct, I think. It's so concise that it could almost serve as half an abstract! In some cases, such as when page limits are very short, this could be OK. In this case, though, I'd like you to expand each of your ideas. I think you can do this relatively easily based on what you have. Just build sentences around what you already have. For example, what was the community focusing on after 1887? Since you mention it, how was it that special relativity gained wide acceptance? References to key experiments? More important are the "many experiments done to more accurately measure the speed." What were the nature of these experiments? What are the different methods? Can you link to a few specific research papers that reported a value for the speed? Finally, your last sentence is the only sentence about what you're reporting here. You can expand this to say that you're using time of flight, and that you will compare your result with the accepted value, or you could say that the result you produce does compare favorably. And / or that this experiment is a satisfying way to explore methods for measuring the speed of light.
With the refutation of the aether theory by the Michelson-Morley experiment in 1887, light became a subject of focus in the scientific community.[1] Equally controversial was Einstein's theory of special relativity, published in 1905, which proposed that the speed at which light propagated was a fundamental constant, invariant of the speed of the reference frame in which it was observed.[2] This theory became widely accepted, and throughout the rest of the 20th century, many experiments were done to more accurately measure this speed.[3] In 1983 the meter was redefined by the CGPM as the distance traveled by light in 1/299,792,458 seconds, giving the speed of light the exact value of 299,792,458 meters/second.[4] In our experiment, we set out to measure this speed to see if our experimental data matched the accepted value.

Methods

SJK 12:01, 30 November 2009 (EST): Your style for the methods section is great. And the information you have included is very good with some minor additions needed (as noted). Some details are missing: did you average the signals? How much averaging? How did you define "intensity" on the oscilloscope? Also, in your results section, you talk about "in the 1st trial," but the methods does not mention trials, or differences between them (as far as I can see). More important is that data analysis methods are completely missing. You may talk about this sufficiently below (I haven't looked closely yet), but in any case, it belongs in the methods section. Just like you do for the equipment, you'll want to cite the software that you use, and you'll want to spell out any algorithms you developed. It's up to you whether you want to include specific formulas or link to them in an appendix or elsewhere.

Figure 1: Time Walk Effect- Different amplitudes of a signal cause a time shift in a trigger signal.

SJK 11:50, 30 November 2009 (EST): Is there a model number for LED or PMT?
If not, can you describe them somehow? (pulse rate, color, etc.)

We positioned a photomultiplier tube (PMT) powered by a Bertran 313B Power Supply on one end of a cardboard tube. We placed a LED in the other end, powered by a Harrison Laboratories 6207A PSU. We measured the time difference between the LED's pulse and the photomultiplier's response with a Ortec 567 TAC/SCA Module plugged into a Harshaw NQ-75 NIM Bin. We placed a Canberra 2058 Delay Module between the PMT and the TAC to guarantee the response pulse would be received by the TAC after the triggering pulse from the LED.

SJK 11:56, 30 November 2009 (EST): All figures used in a report will be referred to in the text. You can do that here by saying something like "See Figure 2 for experimental setup." Or work it into a sentence as you did for the time walk figure. Probably you'll need to renumber those so that the first one you refer to is Figure 1. Also, figure captions need a bit more detail: For time walk figure: tell reader what the x and y axes represent, and what the typical scales would be (nanoseconds/ volts). For Figure 2, I'm not sure how to expand it, but probably should, even if just saying "with key equipment labeled" instead of "labels." I do love the panorama, BTW.

We measured the TAC's voltage using a Tektronix TDS 1002 Oscilloscope. This voltage corresponded to the time between the LED trigger pulse and the PMT response pulse with the LED at different positions, all 10 cm apart. As the LED got closer to the PMT, the intensity would increase, and this would cause error due to "time walk." The oscilloscope displays a signal by triggering when the signal reaches some threshold. The "time walk" effect is the change in time of this trigger signal due to a change in amplitude of the input signal. In this experiment, a change in intensity of the LED signal causes the oscilloscope and the TAC to trigger at a different time, and the TAC will produce a different voltage.
To minimize the error due to time walk (see Figure 1) we used a set of polarizers placed on the PMT and the LED to keep the intensity of the LED pulses constant. We measured the intensity of the LED when it was at its maximum distance from the PMT, and then we rotated the PMT with the polarizer attached so that the intensity of the LED signal remained constant for every measurement with the LED in a closer position.

Figure 2: Panorama of the setup with labels

Results and Discussion

SJK 13:40, 2 December 2009 (EST): I really like your plot: it clearly conveys a lot of information very clearly. It's definitely a good way of presenting your net result. I think, though, some other figures should be included to make the results section more substantial. To do this, ask yourself what the reader may want to see after looking at the figure you have. In my mind, this would be individual linear regression fits for some or all of the data sets. If you did that as a preceding figure, I think you'd have enough to talk about. As it is now, there is very little text in your results and discussion section, which is not standard. So, I think it'd be good by starting out, even the way you have, "In the first trial..." and then saying, "Trial one and the linear regression fit is displayed in Figure X.A. Subsequent trials (Figure X.B-E) included 11 positions, separated by __ cm. As seen in Figure X, no noticeable trend was seen in terms of uncertainty of individual measurements, or on residuals (you could even plot all the residuals on another graph, which could be interesting"

In the first trial, we measured the voltage of the TAC with the LED in 10 different positions. For every subsequent trial, we measured the voltage with the LED at 11 positions. After the first trial, we used the averaging function on the oscilloscope. This function took a time average of a signal, which reduced the noise, so we could better measure its voltage.
The TAC was set to produce a 10 V signal for a 100 ns delay. We used this ratio of 1 V/10 ns to convert our measured voltages into times. I used the chi-square minimization technique to fit the data with a line. The slope of the line and standard error were used in a weighted average to compute the measured speed of light. This value was: $2.941(15)\cdot 10^{8}\ \mathrm{m/s}$ The exact speed of light is approximately: $2.998\cdot 10^{8}\ \mathrm{m/s}$ The calculated speed of light was 4 sigma away. Assuming only normally distributed random error, the probability of measuring the same value we did is 0.006%. SJK 13:35, 2 December 2009 (EST): A couple comments about this. First, this is definitely on the right track for how to compare your measurement to a precise accepted value. As you've written it, though, your wording should be, "the probability of measuring 4 sigma or more away from the mean (in either direction) is 0.006%." That is, you're reporting 1 minus the error integral from -4 sigma to +4 sigma. Furthermore, you should add a comment, "Thus, it is almost certain that substantial systematic error remained in our measurements." (You can make a goal of eliminating or identifying this systematic error next week.) I would be pretty much OK with this kind of comparison. But it's worth noting that when you're out on those tails of the distribution, the estimate of the standard deviation of the mean is pretty important. In reality, we usually don't care so much about whether it's 0.006% or 0.3%: in either case we'd be pretty certain there's substantial systematic error. But if you are reporting those kinds of numbers and asserting confidence, then I think what you need to do is use "Student's t-test," which would account for how many degrees of freedom were used in estimating the standard error.
It's worth reading, just to find out that the guy who invented it was using the pseudonym "Student" because his employer, Guinness, didn't want him revealing the trade secret that the brewery was using statistics to improve its processes: http://en.wikipedia.org/wiki/T_test#History Figure 3: Trials 1-6, the accepted speed of light, and the calculated value from the measurements, along with their corresponding uncertainties, are shown. SJK 13:42, 2 December 2009 (EST): Usually, the figure legend is not included (probably due to space limitations?). However, it works well for you, so I'd leave it. Nevertheless, I think more description in the figure legend is needed to explain what the various symbols and error bars refer to. E.g., the green triangle represents the mean of all 6 trials and the error bars represent one standard error of the mean. The supplementary data and analysis can be seen here. SJK 13:44, 2 December 2009 (EST): Hyperlinks are usually not included in the text; rather, you'd have a numbered "reference" (endnote) here and the reference would include the link. For example, "Raw data and source code are freely available[7]." Conclusions SJK 13:46, 2 December 2009 (EST): Aside from the revised language regarding probability as I mentioned above, I think this is a pretty good conclusion. I think investigating the systematic error next week would be good, including trying out the DAQ card if you'd like. The probability of measuring the same value we did is 0.006%. Assuming only normally distributed random error, the likelihood of this happening again is quite low. I conclude that the experimental data deviated from the accepted value due to systematic error. I believe the cause of this error was inadequate minimization of the time walk effect, caused by the reliance on human judgment in determining when the intensity of the LED pulse signal matched the original signal.
This error might be reduced by the use of a computer to measure the LED signal, rather than using the screen of an oscilloscope. This method is far more quantitative, and I believe it would yield more accurate results. Acknowledgements SJK 13:48, 2 December 2009 (EST): You can revise this to make it more typical. "I thank Ryan Long for help with the electronic lab notebook, instrumentation setup, data acquisition, and data analysis...I thank A. Barron for his open access lab report which provided guidance on BibTeX references and manuscript style" (this is not something you'd read in a peer-reviewed report, but I really like that you give him props, so I think you should leave it in and just try to make it sound "formal"). Thanks to my lab partner Ryan for his help with running the lab, taking data, and finishing up the lab notebook with me. I'd also like to thank Dr. Koch for his helpful explanations of various parts of the setup. Thanks to A. Barron, who I referred to for help in formatting citations as well as getting a general idea of what I needed to write.[5] References SJK 11:43, 30 November 2009 (EST): These first two are the kind of original peer-reviewed research I'm looking for--good! 1. Michelson, Albert Abraham & Morley, Edward Williams (1887), "On the Relative Motion of the Earth and the Luminiferous Ether", American Journal of Science 34: 333–345 [Michelson] 2. Albert Einstein (1905), "Zur Elektrodynamik bewegter Körper" ("On the Electrodynamics of Moving Bodies"), Annalen der Physik 17: 891 SJK: This article is in German, right? I could only download part of it now. If you can read German, or get useful information out of it somehow, that is OK. However, if you used a translation, then you should cite the translation in addition, or only the translation. More generally, in case you don't know: any citation you put in here means that you read at least part of the actual paper you're citing in order to extract information.
In contrast, it's not good to cite a reference that you didn't read but which you know someone else cited in order to get some information (e.g. if you get the numbers from Wikipedia or another review but want to cite the original source, you need to double-check the original source). Make sense? [SR] 3. Blaney, T. G., Bradley, C. C., Edwards, G. J., Jolliffe, B. W., Knight, D. J. E., Rowley, W. R. C., Shotton, K. C. & Woods, P. T. (1974), "Measurement of the speed of light", Nature 251: 46. doi:10.1038/251046a0. http://www.nature.com/nature/journal/v251/n5470/pdf/251046a0.pdf [Blaney] 4. Base unit definitions: Meter. Nov 15, 2009. http://physics.nist.gov/cuu/Units/meter.html [NIST] 5. A. Barron's Final Report [Barron]
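As an aside on the statistics discussed in the Results section, the quoted 0.006% figure — the two-tailed probability of a normal variable landing 4 sigma or more from its mean — can be checked with a quick script (SciPy assumed available):

```python
from scipy.stats import norm

# Two-tailed probability of falling 4 sigma or more from the mean of a
# normal distribution: 1 minus the error integral from -4 sigma to +4 sigma.
p = 2 * norm.sf(4)       # sf(x) = 1 - cdf(x), the upper tail
print(f"{p:.3%}")        # 0.006%
```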
http://math.stackexchange.com/questions/272047/if-ab-0-then-aat-or-bbt-is-singular
# If $AB=0$, then $A+A^T$ or $B+B^T$ is singular Let $A$ and $B$ be square matrices of dimension $2011$. Prove that if $AB=0$, then at least one of the matrices $A+A^{T}$ or $B+B^{T}$ has rank below $2011$. -- edit -- The rank of a matrix is the number of linearly independent rows. -- edit2 -- dimension instead of rank - 2 I'd have chosen $2013$ instead of $2011$ :). – YACP Jan 7 at 11:21 ## 1 Answer If $A,B\in M_n(K)$, $K$ a field, $n$ odd, and $AB=0$, then $A+A^T$ or $B+B^T$ is singular. If $A+A^T$ and $B+B^T$ are invertible, then their rank is $n$. But $\operatorname{rank}(A+A^T)\le 2\operatorname{rank}(A)$ and similarly $\operatorname{rank}(B+B^T)\le 2\operatorname{rank}(B)$. Set $n=2k+1$. Then $2\operatorname{rank}(A)\ge n=2k+1$ forces $\operatorname{rank}(A)\ge k+1$, and likewise $\operatorname{rank}(B)\ge k+1$, so $\operatorname{rank}(A)+\operatorname{rank}(B)\ge n+1$. From the Sylvester rank inequality we have $\operatorname{rank}(A)+\operatorname{rank}(B)-n\le \operatorname{rank}(AB)=0$, a contradiction. - Please describe why, if $A+A^T$ is invertible, then its rank is $n$. – Steve Jan 7 at 17:16 Probably we are assuming that $rank(A+A^T)=rank(B+B^T)=2011$, and showing that this is a contradiction. – Jonny Jan 7 at 17:28 A square matrix $X$ of size $n$ is invertible iff $\det X\neq 0$ iff rank$(X)=n$. – YACP Jan 7 at 18:45
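Not part of the proof, but the statement is easy to sanity-check numerically for a small odd $n$. In the sketch below (the dimension $n=5$ and rank $k=3$ are arbitrary illustrative choices), $B$ is built from a basis of the null space of $A$ so that $AB=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3                                # n odd, as in the statement
# A of rank k, and B whose columns lie in the null space of A, so AB = 0.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
null_basis = np.linalg.svd(A)[2][k:].T     # right singular vectors for the zero singular values
B = null_basis @ rng.standard_normal((n - k, n))
assert np.allclose(A @ B, 0)
# At least one of A + A^T, B + B^T must be rank-deficient.
rA = np.linalg.matrix_rank(A + A.T)
rB = np.linalg.matrix_rank(B + B.T)
assert rA < n or rB < n
print(rA, rB)
```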
http://math.stackexchange.com/questions/220854/how-can-i-infer-a-result-using-primal-feasibility-dual-feasibility-and-complem/220870
# How can I infer a result using primal feasibility, dual feasibility, and complementary slackness? I am trying to find the minimum of $-x_1$ with restrictions $\bar g\leq\bar 0$ so that $$\bar g=\begin{pmatrix} (x_1+2)^2+(x_2-4)^2-20\\ (x_1+2)^2+x_2^2-20\\ -x_1\end{pmatrix}\leq \begin{pmatrix}0\\0\\0\\\end{pmatrix}=\bar0$$ I used the KKT conditions to solve this puzzle below, so $x_1=\frac{4\mu_1+4\mu_2-1-\mu_3}{2(\mu_1+\mu_2)}$ and $x_2=\frac{4\mu_1}{\mu_1+\mu_2}$ where $\mu_i\in\mathbb R\ \forall i$. I know from the graphical plot that the solution is something like $(1.5,1.5)$, but I cannot see how I can get such a solution from the equations for $x_1$ and $x_2$. I followed this part of Wikipedia here, source here, about necessary conditions, but I am stuck on how to find the minimum now. How do I find it now with the necessary equations for the optimal point $(x_1,x_2)$? My calculations 1. Wok suggested the complementary slackness assumption $\mu_i g_i(x^*)=0, i=1,2,3$ and the dual feasibility assumption $\mu_i\geq 0,i=1,2,3$. I cannot yet see how it helps here. 2. I can solve the intersection point just by solving the equations, proof here, but I cannot see how the KKT way really makes a difference in comparison to solving it the easy way; really puzzled! 3. KKT is some sort of generalization of the Lagrangian, example here; trying to understand what is happening... - ## 2 Answers Try to use primal and dual feasibility and complementary slackness, while assuming $\mu_1+\mu_2\neq 0$ (otherwise your computations of $x_1$ and $x_2$ do not stand). With dual feasibility, at least one of $\mu_1$ and $\mu_2$ is strictly greater than zero. Finding the solution is a matter of enumerating active constraints. First case If $\mu_1=0$ and $\mu_2\neq0$, then $x_2=0$. If $\mu_1\neq0$ and $\mu_2=0$, then $x_2=4$. In both cases, $(x_1+2)^2\leq 4$ (primal feasibility). Since $x_1\geq0$, $x$ lies on the vertical axis (primal feasibility).
Second case If $\mu_1\neq0$ and $\mu_2\neq0$, then $x$ is one of the two points at the intersection of the two circles (complementary slackness). Since $x_1\geq0$, $x$ is the point in the right quadrant (primal feasibility). Conclusion The maximum of $x_1$ is attained in the second case, since $x_1=0$ in the first case. The solution is $x=(2,2)$. - Did you mean $\mu_1+\mu_2+\mu_3\not =0$? There are three constraints $g_1, g_2$ and $g_3$. Could you explain a bit more what you mean... – hhh Oct 25 '12 at 14:57 You assume $\mu_1+\mu_2\neq 0$ in your expression of $x_1$ and $x_2$. – wok Oct 25 '12 at 14:58 Please consider accepting the answer after satisfaction is provided. – wok Oct 25 '12 at 15:30 1 $x_2=\frac{4\mu_1}{\mu_1+\mu_2} = \frac{4 \times 0}{0 + \mu_2} = 0$ – wok Oct 25 '12 at 18:37 1 Use primal feasibility. – wok Oct 25 '12 at 20:29 I have tried to cover each topic in specific threads below. The lecture slides are here - Please consider accepting the answer after satisfaction is provided. – wok Oct 30 '12 at 14:51 @wok I need to arrange a meeting with my teacher before I am qualified enough to accept; taking some time. – hhh Oct 30 '12 at 19:30
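The KKT conclusion above can be cross-checked numerically with an off-the-shelf solver. A minimal sketch with SciPy's SLSQP (the starting point is an arbitrary feasible guess):

```python
from scipy.optimize import minimize

# SciPy's "ineq" convention is fun(x) >= 0, so each constraint g_i(x) <= 0
# is passed as -g_i(x) >= 0.
cons = [
    {"type": "ineq", "fun": lambda x: 20 - (x[0] + 2) ** 2 - (x[1] - 4) ** 2},
    {"type": "ineq", "fun": lambda x: 20 - (x[0] + 2) ** 2 - x[1] ** 2},
    {"type": "ineq", "fun": lambda x: x[0]},
]
# Minimize -x1, i.e. maximize x1, starting from a feasible point.
res = minimize(lambda x: -x[0], x0=[0.0, 1.0], constraints=cons, method="SLSQP")
print(res.x)    # ≈ [2, 2], the intersection point found via KKT
```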
http://math.stackexchange.com/questions/257065/hamiltonian-eulerian-paths-one-vertex-graph
# Hamiltonian & Eulerian paths, one-vertex graph Does the graph with only one vertex have an Eulerian path? And I think it has a Hamiltonian path, right? Cheers. - 1 I don't know if everyone defines it the same way, but Diestel's book, at least, allows a path to consist of a single vertex, which means that the answer is yes. – Andrew Uzzell Dec 12 '12 at 14:48 @Andrew: You could post that as an answer so the question doesn't remain unanswered. – joriki Dec 12 '12 at 17:34 ## 1 Answer I'm not sure if all graph theory books treat degenerate cases the same way, but Diestel's Graph Theory, at least, allows a path to have length $0$, i.e., to consist of a single vertex with no edges. If a graph consists of a single vertex $v$, then the path consisting of $v$ is vacuously Eulerian. It is also a Hamiltonian path, since it contains all of the vertices of the graph. -
http://physics.stackexchange.com/questions/14961/how-would-you-store-heat?answertab=oldest
# How would you store heat? Um... naive question perhaps, but if somebody wanted to store heat, how would they go about it? Can heat be stored? I'm told that decomposing kitchen waste in a closed vessel results in a rise in temperature on the body of the vessel. I'm just wondering whether it could be stored for later use. - 3 The trivial thing is to put the warm stuff into an insulated vessel; were you looking for something esoteric that I didn't catch? – dmckee♦ Sep 22 '11 at 16:57 Er no. It's possible to store electricity in an electric battery/capacitor; I wondered about its heat equivalent. – Everyone Sep 22 '11 at 17:01 1 – Georg Sep 22 '11 at 17:36 @Georg I think I've seen something like that sold as re-usable hand warmers at ski resorts. – dmckee♦ Sep 22 '11 at 17:39 2 The decomposing process heat increase is of course because of the bacteria doing what they do, consuming sugars and stuff. And thermoses are great, but most bacteria stop working at 60°C, and heat is more useful the larger the difference from the surrounding medium. – Captain Giraffe Sep 22 '11 at 18:03 ## 3 Answers Building off of the comments to the question: it might be instructive to think carefully about what is being stored when you store "electricity" in a capacitor or battery. Note that it is not electrons, as I can charge a capacitor either positive or negative relative to a floating ground. The "what" in that case is energy, in the form of electric fields (capacitor) or chemical potential (battery). Heat is also energy, in the form of excitation of microscopic degrees of freedom, and the way you store it is by exciting the microscopic degrees of freedom in some material and then not allowing it to transmit that energy to other forms---which you do by insulating the hot stuff. Or you can convert the heat to some more tractable form (as in Dan's answer or Georg's comment) and store that. - This is closest to what I was looking for.
But I think the insulation at the interface would heat up over a while ... so the storage would be very lossy itself too – Everyone Sep 22 '11 at 18:17 2 The insulation would heat up (unless you are using vacuum insulation) and would eventually reach thermodynamic equilibrium with the environment, so that the amount of heat lost is the same as the amount of heat generated. However, the stable temperature of the composter with insulation is higher than without. This is what I was referring to with my aside "(at least initially)". – AdamRedwine Sep 22 '11 at 18:24 1 Certainly the storage is lossy---even vacuum insulation transmits heat by radiation. Batteries and capacitors are lossy too. The question here is one of timescales, the relative inefficiencies of transforming the energy, and expected losses in other choices of storage. – dmckee♦ Sep 22 '11 at 18:32 If even vacuum radiates heat, then would it be safe to say there is no way to stop radiation? If so, could we control heat radiation by directing it? – Everyone Sep 24 '11 at 18:08 You can reduce radiative losses by reducing the emissivity of the radiating surface (or of course, by reducing the temperature difference between the two surfaces, but that doesn't accomplish your basic goal). This is why the inside surfaces of the vacuum vessel in a Dewar are silvered. – dmckee♦ Sep 24 '11 at 20:18 If you want to store heat in a battery-like device, you could use the heat to power a turbine, generate electrical energy, and store it as chemical energy in a battery. This is extremely inefficient, but I think this is most analogous to what you are asking. You could also find a high-energy chemical reaction in equilibrium. This would store some of the heat as chemical energy, but would have to be kept at the same temperature or the chemical mixture would start producing heat.
Really, though, the best way is probably dmckee's "warm stuff + insulated container". - 4 Phase transition is pretty good, too, like melting some salts. It absorbs lots of energy without raising the temperature too much. Raising the temp is bad, because then the losses increase rapidly. – Florin Andrei Sep 22 '11 at 17:57 ""Phase transition is pretty good,"" Right, but often storage of a big mass in a container is just the cheapest thing including investment. Very common are insulated tanks with water to store warm water produced by sun on rooftop during daytime. – Georg Sep 22 '11 at 18:08 Slightly naive, but not an uninteresting question. The previous answers give some good discussion, but I think they are missing what you are really going for. While the terminology you use is odd, the ability of a substance to "store" heat energy is one of the first things you learn how to calculate in thermodynamics. The value is called the "heat capacity" and is a measure of the amount of heat required to change the temperature of an object. $$C = \frac{Q}{\Delta T}$$ Of course, a larger object takes more heat energy to raise its temperature, so this value is often divided by the mass of the object to obtain the specific heat capacity of the material (assuming homogeneity and all that good stuff). This value also depends on whether you keep the volume or the pressure constant, so there are a number of different forms in which you might run across this concept. The comments made about converting the heat to chemical or electric potential energy are very relevant, but not, practically speaking, relevant to your specific example. The suggestion of insulation is a good one and would reduce the rate of heat dissipation to the environment (at least initially) of the container. One way to get an intuitive feel for the amount of heat generated and stored by your composter would be to put a marble slab (say a cutting block for example) underneath.
The marble has a high heat capacity and would absorb a fair amount of heat from the composter provided that the two objects are in good thermal contact. After a few days, once the system had reached thermal equilibrium, you could remove the marble block and feel the spot where the composter had been; it should be slightly warm. Another interesting experiment is to take the marble slab to bed with you. When you go to sleep, it will feel very cold for a long time. If you sleep on it, when you wake up in the morning it will feel very warm because it has absorbed a lot of your body heat. People used to sleep with large quartz stones to keep cool in the summer. - 1 Thank you (+: that was a lucid explanation – Everyone Sep 22 '11 at 18:24
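To put rough numbers on the heat-capacity relation above, the sketch below estimates the heat a marble slab stores; the material constant and the mass and temperature values are assumptions for illustration only.

```python
# Q = C * dT, with C = c * m for a homogeneous object.
c = 0.88      # J/(g*K) -- approximate specific heat of marble (assumed)
m = 2000.0    # g -- assumed slab mass (~2 kg cutting block)
dT = 10.0     # K -- assumed temperature rise above ambient
Q = c * m * dT
print(f"{Q / 1000:.1f} kJ stored")   # 17.6 kJ stored
```

For comparison, water's specific heat is about 4.18 J/(g·K), which is why insulated water tanks are such a common heat store.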
http://math.stackexchange.com/questions/233414/prove-that-for-every-integer-n-that-is-not-a-multiple-of-3-we-have-3-4n/233428
Prove that for every integer $n$ that is not a multiple of $3$ we have $3 \mid (4n^{12}+3n^6+2)$ Prove that for every integer $n$ that is not a multiple of $3$ we have $3 \mid (4n^{12}+3n^6+2)$. So I know this has something to do with Fermat's/Euler's theorem, which says: for $a^x \equiv y \pmod{n}$, if $\gcd(a,n)=1$ then $a^{\phi{(n)}} \equiv 1 \pmod{n}$. However, I don't see how we are supposed to apply the theorem. - All that matters here is that $12$ is even. Squares are either $0$ or $1 \pmod 3.$ – Will Jagy Nov 9 '12 at 7:25 3 Answers Note that $4n^{12} + 3n^6 + 2 = (3n^{12} + 3n^6) + n^{12} + 2$. Further note that $3 \mid (3n^{12} + 3n^6)$. Hence, it is enough to prove that $$3 \mid (n^{12} + 2)$$ Since $n \equiv \pm 1 \pmod{3}$, we have that $n^{2} \equiv 1 \pmod{3} \implies n^{12} \equiv 1 \pmod{3}$. Hence, $$n^{12} +2 \equiv (1+2) \pmod{3} \equiv 0 \pmod{3}$$ - Sorry, I'm a little slow - how can we see that $3\mid 3n^{12}+3n^6$? And did you mean $4n$ instead of $3n$? – Arvin Nov 9 '12 at 7:35 @Arvin $3n^{12} + 3n^6 = 3(n^{12} + n^6)$. – user17762 Nov 9 '12 at 7:36 Ahh of course. Thanks heaps – Arvin Nov 9 '12 at 7:43 Hint $\rm\,\ mod\ 3\!:\ n\not\equiv0\:\Rightarrow\:n\equiv\pm1\:\Rightarrow\:\color{#C00}{n^2}\!\equiv1\:\Rightarrow\: 4\,(\color{#C00}{n^2})^6\!+3\,(\color{#C00}{n^2})^3\!+2\equiv 4+3+2\equiv 0$ - By Fermat's theorem $n^3 \equiv n \pmod 3$, so $4n^{12} \equiv 4n^4 \pmod 3$ and $3n^6 \equiv 3n^2 \pmod 3$. So $3 \mid 4n^{12} + 3n^6 +2$ $\longleftrightarrow$ $3\mid 4n^4 + 3n^2 + 2$ $\longleftrightarrow$ $3\mid n^4 + 2$ $\longleftrightarrow$ $3\mid n^4 -1$ $\longleftrightarrow$ $3\mid(n^2+1)(n^2-1)$, which is obvious in view of Fermat's theorem $n^2 \equiv 1 \pmod 3$, since $3$ is given to not divide $n$. -
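The divisibility claim is also easy to verify by brute force over a range of integers (a check, not a proof):

```python
# 3 | (4n^12 + 3n^6 + 2) for every n not divisible by 3.
for n in range(-100, 101):
    if n % 3 != 0:
        assert (4 * n**12 + 3 * n**6 + 2) % 3 == 0
print("holds for |n| <= 100")
```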
http://www.chegg.com/homework-help/questions-and-answers/the-figure-below-shows-a-bar-attached-to-a-pivot-p-with-a-calibrated-spring-attaching-the--q2467693
## x and y components and tension of a spring The figure below shows a bar attached to a pivot p with a calibrated spring attaching the bar to the ceiling. The length of the bar is 0.900 m, its mass is 0.700 kg, and its center of mass is 0.400 m from the pivoted end. Please use $\phi = 75.0^\circ$ and $g=9.8\ \mathrm{m/s^2}$, and calculate the horizontal and vertical components of the force the pivot exerts on the bar and the tension in the calibrated spring connecting it to the ceiling. # Answers (0) There are no answers to this question yet.
http://math.stackexchange.com/questions/185718/ellipse-with-non-orthogonal-minor-and-major-axes
# Ellipse with non-orthogonal minor and major axes? If there's an ellipse with non-orthogonal minor and major axes, what do we call it? For example, is the following curve an ellipse? $x = \cos(\theta)$ $y = \sin(\theta) + \cos(\theta)$ i.e. the curve $C=(1,1)\cos(\theta) + (0,1)\sin(\theta)$. The major and minor axes are $(1,1)$ and $(0,1)$. They are not orthogonal. Is it still an ellipse? Suppose I have a point $P(p_1,p_2)$; can I find a point $Q$ on this curve that has the shortest Euclidean distance from $P$? - 1 Is it true that for any point on that curve, the sum of its distances to two fixed points (the proposed ellipse's foci) is a fixed number? Can you locate foci at all, to begin with? – DonAntonio Aug 23 '12 at 5:31 1 You can show that the curve you have is indeed an ellipse, with axes of length $\sqrt{1+\phi}$ and $\sqrt{2-\phi}$, and the major axis inclined at an angle of $\arctan\,\phi$, where $\phi$ is the golden ratio. – J. M. Aug 23 '12 at 5:45 1 The major and minor axes are always orthogonal. This is not obvious, but it is true. – Qiaochu Yuan Aug 23 '12 at 6:02 6 I think what you may not realize is the following: a nonsingular linear transformation (e.g. a shear) always transforms an ellipse to another ellipse, but the axes of the image ellipse are not in general the images of the axes of the original ellipse. – Robert Israel Aug 23 '12 at 6:20 ## 3 Answers More explicitly, we have the decomposition $$\begin{pmatrix}\cos\,t\\\cos\,t+\sin\,t\end{pmatrix}=\begin{pmatrix}\cos\,\lambda&-\sin\,\lambda\\\sin\,\lambda&\cos\,\lambda\end{pmatrix}\cdot\begin{pmatrix}\sqrt{1+\phi}\cos(t+\eta)\\\sqrt{2-\phi}\sin(t+\eta)\end{pmatrix}$$ where $\tan\,\lambda=\phi$, $\tan\,\eta=1-\phi$, and $\phi=\dfrac{1+\sqrt{5}}{2}$ is the golden ratio. You can check that your original parametric equations and the new decomposition both satisfy the Cartesian equation $2x^2-2xy+y^2=1$.
What the decomposition says is that your curve is an ellipse with axes $\sqrt{1+\phi}$ and $\sqrt{2-\phi}$, with the major axis inclined at an angle $\lambda$. If we take the linear algebraic viewpoint, as suggested by Robert in the comments, what the decomposition given above amounts to is the singular value decomposition (SVD) of the shearing matrix; i.e., $$\begin{pmatrix}1&0\\1&1\end{pmatrix}=\begin{pmatrix}\cos\,\lambda&-\sin\,\lambda\\\sin\,\lambda&\cos\,\lambda\end{pmatrix}\cdot\begin{pmatrix}\sqrt{1+\phi}&\\&\sqrt{2-\phi}\end{pmatrix}\cdot\begin{pmatrix}\cos\,\eta&\sin\,\eta\\-\sin\,\eta&\cos\,\eta\end{pmatrix}^\top$$ The SVD is in fact an excellent way to look at how a matrix transformation geometrically affects points: the two orthogonal matrices on the left and right can be thought of as rotation matrices, reflection matrices, or products thereof, and the diagonal matrix containing the singular values amounts to nothing more than a scaling about the axes of your coordinate system. - Thank you very much! – Hi271 Aug 23 '12 at 17:46 Notice that $\sin\theta+\cos\theta \propto \sin(\theta+\pi/4)$, so that the shape you are drawing is just a rotated (and stretched) ellipse (because the "corner" points, $\theta=n\pi/2$, occur with the coordinates shifted from the usual $x$-axis, $y$-axis orientation). Grapher concurs: So, in the sense that you've defined it, the axes of the ellipse don't end up non-orthogonal at all -- hopefully that answers the rest of the questions. In general, a linear combination of sines and cosines can always be written as a single shifted sine or cosine, and thus an ellipse can be rotated (and/or stretched) as such. - Hint: From $y=\sin\theta+\cos\theta$, we get $y-x=\sin\theta$, and therefore $(y-x)^2=\sin^2\theta=1-x^2$. After simplifying and completing the square, can you recognize the curve? The major and minor axes do turn out to be orthogonal. -
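The decomposition in the accepted answer can be verified numerically; the sketch below checks both the Cartesian equation and the claim that the singular values of the shear matrix give the semi-axis lengths:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2                 # golden ratio
t = np.linspace(0, 2 * np.pi, 200)
x, y = np.cos(t), np.sin(t) + np.cos(t)
# Every point of the curve satisfies 2x^2 - 2xy + y^2 = 1.
assert np.allclose(2 * x**2 - 2 * x * y + y**2, 1)
# The curve is the image of the unit circle under the shear [[1,0],[1,1]];
# its singular values are the semi-axis lengths of the ellipse.
M = np.array([[1.0, 0.0], [1.0, 1.0]])
s = np.linalg.svd(M, compute_uv=False)     # returned in decreasing order
assert np.allclose(s, [np.sqrt(1 + phi), np.sqrt(2 - phi)])
print("axes:", s)                          # ≈ [1.618, 0.618]
```

Note that $\sqrt{1+\phi}=\phi$ and $\sqrt{2-\phi}=1/\phi$, so the two axis lengths multiply to $1$, matching $\det M = 1$ (a shear preserves area).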
http://en.wikipedia.org/wiki/Second_countable_space
# Second-countable space In topology, a second-countable space, also called a completely separable space, is a topological space satisfying the second axiom of countability. A space is said to be second-countable if its topology has a countable base. More explicitly, this means that a topological space $T$ is second-countable if there exists some countable collection $\mathcal{U} = \{U_i\}_{i=1}^\infty$ of open subsets of $T$ such that any open subset of $T$ can be written as a union of elements of some subfamily of $\mathcal{U}$. Like other countability axioms, the property of being second-countable restricts the number of open sets that a space can have. Most "well-behaved" spaces in mathematics are second-countable. For example, Euclidean space $\mathbb{R}^n$ with its usual topology is second-countable. Although the usual base of open balls is not countable, one can restrict to the set of all open balls with rational radii and whose centers have rational coordinates. This restricted set is countable and still forms a basis. ## Properties Second-countability is a stronger notion than first-countability. A space is first-countable if each point has a countable local base. Given a base for a topology and a point x, the set of all basis sets containing x forms a local base at x. Thus, if one has a countable base for a topology then one has a countable local base at every point, and hence every second-countable space is also a first-countable space. However, any uncountable discrete space is first-countable but not second-countable. Second-countability implies certain other topological properties. Specifically, every second-countable space is separable (has a countable dense subset) and Lindelöf (every open cover has a countable subcover). The reverse implications do not hold. For example, the lower limit topology on the real line is first-countable, separable, and Lindelöf, but not second-countable. 
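The rational-ball argument for $\mathbb{R}^n$ sketched above can be made precise in a couple of lines (a standard argument, added here for completeness):

```latex
x \in U \ \text{open} \;\Longrightarrow\; \exists\,\varepsilon > 0 :\ B(x,\varepsilon) \subseteq U.
\quad\text{Pick } q \in \mathbb{Q}^n \text{ with } |x-q| < \tfrac{\varepsilon}{3}
\text{ and } r \in \mathbb{Q} \text{ with } \tfrac{\varepsilon}{3} < r < \tfrac{\varepsilon}{2}.
```

Then for any $y \in B(q,r)$ we have $|y-x| \le |y-q| + |q-x| < r + \tfrac{\varepsilon}{3} < \varepsilon$, so $x \in B(q,r) \subseteq B(x,\varepsilon) \subseteq U$. Hence every open set is a union of members of the countable family $\{B(q,r) : q \in \mathbb{Q}^n,\ r \in \mathbb{Q}_{>0}\}$.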
For metric spaces, however, the properties of being second-countable, separable, and Lindelöf are all equivalent. Therefore, the lower limit topology on the real line is not metrizable. In second-countable spaces—as in metric spaces—compactness, sequential compactness, and countable compactness are all equivalent properties. Urysohn's metrization theorem states that every second-countable, regular space is metrizable. It follows that every such space is completely normal as well as paracompact. Second-countability is therefore a rather restrictive property on a topological space, requiring only a separation axiom to imply metrizability. ### Other properties • A continuous, open image of a second-countable space is second-countable. • Every subspace of a second-countable space is second-countable. • Quotients of second-countable spaces need not be second-countable; however, open quotients always are. • Any countable product of second-countable spaces is second-countable, although uncountable products need not be. • The topology of a second-countable space has cardinality less than or equal to c (the cardinality of the continuum). • Any base for a second-countable space has a countable subfamily which is still a base. • Every collection of disjoint open sets in a second-countable space is countable. ## Examples • Consider the disjoint countable union $X = [0,1] \cup [2,3] \cup [4,5] \cup \dotsb \cup [2k, 2k+1] \cup \dotsb$. Define an equivalence relation and a quotient topology by identifying the left ends of the intervals - that is, identify 0 ~ 2 ~ 4 ~ … ~ 2k and so on. X is second-countable, as a countable union of second-countable spaces. However, X/~ is not first-countable at the coset of the identified points and hence also not second-countable. • Note that the above space is not homeomorphic to the same set of equivalence classes endowed with the obvious metric: i.e. 
regular Euclidean distance for two points in the same interval, and the sum of the distances to the left hand point for points not in the same interval. It is a separable metric space (consider the set of rational points), and hence is second-countable. ## References • Stephen Willard, General Topology, (1970) Addison-Wesley Publishing Company, Reading Massachusetts. • John G. Hocking and Gail S. Young (1961). Topology. Corrected reprint, Dover, 1988. ISBN 0-486-65676-4
http://math.stackexchange.com/questions/180333/winning-strategy-for-a-matchstick-game
# Winning strategy for a matchstick game There are $N$ matchsticks on the table. Two players play the game. Rules: (i) A player in his or her turn can pick $a$ or $b$ matchsticks. (ii) The player who picks the last matchstick loses the game. 1. What should be the conditions on $N$ so that a winning strategy can be derived for the first player? 2. What should be the strategy of the first player so that he or she always wins this game, provided $N$ is such that a winning strategy can be derived? I have solved this problem by trial and error for small numbers one or two, but is there a general solution? Edit: Suppose the rules of the game are changed and now a player in his or her turn can pick any number of matchsticks up to $p < N$; then how many sticks should the first player pick to ensure a win? - What does it mean to "pick a or b match stick"? – joriki Aug 8 '12 at 16:11 Suppose there are 10 matchsticks on the table, – Arpit Bajpai Aug 8 '12 at 16:12 You can pick either 1 or 2 in your turn – Arpit Bajpai Aug 8 '12 at 16:13 So $a$ and $b$ are arbitrary fixed integers, and each move consists in removing either $a$ or $b$ matchsticks from the table? – joriki Aug 8 '12 at 16:13 yes, we need the value of N in terms of a and b for a winning strategy, and then the winning strategy – Arpit Bajpai Aug 8 '12 at 16:17 ## 2 Answers With the current edit, the game goes like this: There are $N$ matchsticks on a table, two players (Left and Right) and a pre-ordained number $p$. Play alternates between Left and Right, with Left starting, and on a player's turn that player may remove up to $p$ matchsticks from the table. The person who takes the last matchstick loses. So you ask how Left can guarantee a win. The answer is that he can't always guarantee a win. For example, suppose $p = 1$ and there are 3 matchsticks. He picks up 1, Right picks up 1, and Left picks up the last one, losing. But Left can always win unless $N = kp + (k + 1)$ for $k \geq 0$. Why is this? If $N = 1$, Left loses. 
Ok. If $N = 2$, Left picks one up. If $N = 3$, Left picks 2 up. This works until $N = p + 1$, when Left picks up as many as he can take, $p$. At $p + 2$, no matter how many Left picks up, there are a number of sticks that allow Right to win (Right just has to use the strategy that Left would have used). But at $p + 3$ sticks, all that Left has to do is pick up $1$, landing Right in the losing position $p+2$. This works until $2p+2$, when Left can pick up exactly $p$. At $2p + 3$, Left loses as no matter what move Left does, Right can use the strategy we described above for Left. For example, suppose $N = 14$ and $p = 5$. Then Left wants to move the game to a 'losing position' $kp + (k + 1)$. Here, that means moving to $2(5) + (2 + 1) = 13$, by picking exactly one up. No matter how many Right picks up, Left's next move will be to $1(5) + (1 + 1) = 7$ (i.e. after Left moves, there will only be $7$ sticks on the table). As this is $6$ away from $13$, Right can't move there. Left's next move will be to exactly $1$ stick on the table. So $N = kp + (k+1)$ are the 'losing positions' of the game. - 2 – joriki Aug 9 '12 at 6:00 More specifically it is misère subtraction, since in this case the last player to move loses. – Théophile Aug 10 '12 at 18:06 If the moves consist of taking either $a$ or $b$ sticks, with $a\lt b$ and, as clarified in a comment, $N\lt a$ is a loss, then the first player wins iff $((N-1)\bmod(a+b))\bmod(2a)\lt a$, and a winning strategy is to take $a$ sticks if $(N-1)\bmod(a+b)\gt b$, take $b$ sticks if $(N-1)\bmod(a+b)\lt a$, and take either otherwise.
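The losing positions $N = kp + (k+1) = k(p+1) + 1$, i.e. $N \equiv 1 \pmod{p+1}$, can be checked by brute force. A minimal sketch (Python; the function and variable names are mine, not from the answer):

```python
def first_player_wins(n, p):
    """Misere subtraction game: take 1..p sticks per turn;
    whoever takes the last stick loses."""
    win = [False] * (n + 1)   # win[m]: the player to move at m sticks wins
    # win[1] = False: forced to take the last stick
    for m in range(2, n + 1):
        # a move taking t sticks is good if it leaves the opponent losing;
        # taking all m sticks means taking the last one, so t <= m - 1
        win[m] = any(not win[m - t] for t in range(1, min(p, m - 1) + 1))
    return win[n]

# Losing positions are exactly N = 1 (mod p+1), matching the answer above
for p in range(1, 6):
    for n in range(1, 60):
        assert first_player_wins(n, p) == (n % (p + 1) != 1)
```

The same dynamic program handles the original a-or-b variant by restricting the allowed moves to $\{a, b\}$ instead of $\{1, \dots, p\}$.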
http://mathhelpforum.com/calculus/62161-please-help-area-problem.html
# Thread: 1. ## Please help- area problem A rectangle R of length l and width w is revolved around the line L. Find the volume of the resulting solid of revolution. Attached Thumbnails 2. The theorem of Pappus may be a cool way to go with this one. Find the centroid of the rectangle. Rectangles are easy for that. It is in the center. Find the distance the center of the rectangle is from what it is being revolved around. Its area is merely $lw$. Volume would then be $V=\left(\text{area of rectangle}\right)\cdot\left(\text{distance traveled by centroid}\right)$ $V=2{\pi}\cdot \overline{x}\cdot (\text{area of rectangle})$ 3. Is there any way to do this problem without that theorem? That theorem is awesome, though. 4. You could try to find the equations of the lines making up the sides of the rectangle. But Pappus is a decent way, it would appear. The centroid is located a distance of $d+\frac{\sqrt{l^{2}+w^{2}}}{2}$ from the vertical line L. The area of the region is simply $lw$.
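The attached figure is not reproduced here, so take the simplest configuration as an illustration: the side of length $l$ parallel to the axis $L$, with the nearer side at distance $d$ (my own assumption; the thread's tilted configuration with the $\sqrt{l^2+w^2}/2$ term depends on the missing thumbnail). Pappus's theorem then agrees with the washer method:

```python
import math

# hypothetical dimensions for illustration only
l, w, d = 10.0, 5.0, 3.0   # length, width, distance from axis to nearest side

# Pappus: V = (distance traveled by centroid) * (area)
v_pappus = 2 * math.pi * (d + w / 2) * (l * w)

# Washer method: outer radius d + w, inner radius d, height l
v_washer = math.pi * l * ((d + w) ** 2 - d ** 2)

assert math.isclose(v_pappus, v_washer)
print(v_pappus)  # 550*pi, about 1727.9
```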
http://mathoverflow.net/questions/109622/kodaira-spencer-map-as-a-differential
## Kodaira-Spencer map as a "differential" Using the language of derived categories, the Kodaira-Spencer map $\kappa(x) : Ext^1_X(k(x), k(x)) \rightarrow Ext^1_X(\mathcal F_x, \mathcal F_x)$ can be described as a Fourier-Mukai transformation $\Phi_{\mathcal F}$: $\Phi_{\mathcal F} : Hom_{D^b(X)} (k(x), k(x)[1]) \rightarrow Hom_{D^b(X)}(\mathcal F_x, \mathcal F_x[1])$ Here, $x \in X$ is a point of a smooth projective variety over an algebraically closed field, $\mathcal F$ is a coherent sheaf on $X \times X$ which is flat over the first factor, and $\mathcal F_x$ denotes $i^*_{x\times X} \mathcal F$ (which is equal to $\Phi_{\mathcal F} (k(x))$). Of course, the KS map is not defined as a differential of some morphism, but please consider the following argument: Suppose $\mathcal F_x$ is concentrated at $x$ for all $x \in X$. This means the map $f:x \mapsto \mathcal F_x$ is injective. Hence $\kappa(x) := df(x)$ is injective at the generic point. (See D. Huybrechts, Fourier–Mukai Transforms in Algebraic Geometry, Ch. 7, Prop. 7.1.) Here, the definition of $f$ is nonsense. It is not even clear where the image of $f$ lives. But the argument is quite persuasive. And it is even geometrically intuitive, because it describes the KS map as a differential of some "function". I think this must be the shadow of rigorous mathematical content (probably deformation theory), but I failed to make it so. Could someone explain to me what's going on? Thanks in advance. - 1 This can be made totally rigorous. There are a ton of details to check, but I think it is fairly straightforward when (as you point out) you know what the target of $f$ is supposed to be. Let $P$ be the Hilbert polynomial of $\mathcal{F}_x$. The image is $Hilb^P(X)$ (the scheme representing the functor sending $S$ to the set of flat quotients of $\mathcal{O}_{S\times X}$ with Hilbert polynomial $P$). 
Commutativity of the appropriate diagram and identification of tangent spaces with the Ext groups using deformation theory shows that $df$ is exactly $\kappa (x)$. – Matt Oct 16 at 16:44 ## 1 Answer You can view the sheaf $\mathcal{F}$ on $X \times X$ instead as a map from $X$ (the first factor) to the moduli stack $M$ of sheaves on $X$ (the second factor). This induces a map on the tangent spaces. The tangent space to $M$ at a sheaf $F$ is the space of first order deformations of $F$, which is $\mathrm{Ext}^1(F, F)$. Using the exact sequence $0 \rightarrow I \rightarrow \mathcal{O}_X \rightarrow k(x) \rightarrow 0$ you get an identification between $T_x X = \mathrm{Hom}_{\mathcal{O}_X}(I, k(x)) = \mathrm{Ext}^1(k(x),k(x))$. Therefore the differential gives a map $T_x X = \mathrm{Ext}^1(k(x),k(x)) \rightarrow T_{\mathcal{F}_x} M = \mathrm{Ext}^1(\mathcal{F}_x, \mathcal{F}_x)$. -
http://mathhelpforum.com/calculus/81707-another-application-integral.html
Thread: 1. Another application Integral I was just looking over my notes and I do not understand this question entirely: A cone with height 10 ft and top radius 5 ft is filled with oil which weighs 57 lb/ft^3. Find the work done in pumping out the oil. I know that the interval being sliced is [0,10] because the oil fills the entire tank. I also understand why the distance is (10-y). What I do not understand, according to my notes, is where (y^2) comes from. Or where does the radius come into play? If it is coming from the formula for a cone, $\frac{1}{3}\pi r^2$, then why do my notes say the final answer is $\frac{57{\pi}}{4}\int_{0}^{10}(y^2)(10-y)dy$ I thought it should be $\frac{57{\pi}}{3}$ 2. Originally Posted by gammaman I was just looking over my notes and I do not understand this question entirely: A cone with height 10 ft and top radius 5 ft is filled with oil which weighs 57 lb/ft^3. Find the work done in pumping out the oil. I know that the interval being sliced is [0,10] because the oil fills the entire tank. I also understand why the distance is (10-y). What I do not understand, according to my notes, is where (y^2) comes from. Or where does the radius come into play? If it is coming from the formula for a cone, $\frac{1}{3}\pi r^2$, then why do my notes say the final answer is $\frac{57{\pi}}{4}\int_{0}^{10}(y^2)(10-y)dy$ I thought it should be $\frac{57{\pi}}{3}$ See the attachment for the similar triangles; from this we know that the radius is $r=\frac{1}{2}(10-y)=\frac{10-y}{2}$ so each little volume chunk is a cylinder of height dy with the above radius: $dV=\pi \left( \frac{10-y}{2}\right)^2dy$ Now the force (weight) will be density times volume: $dF=\frac{57\pi}{4}(10-y)^2dy$ Since we will move each dy slice y units, we get $\int_{0}^{10}y \cdot \frac{57\pi}{4}(10-y)^2dy=\frac{57\pi}{4}\int_{0}^{10}y(10-y)^2dy$ 3. Why are you squaring (10-y) and not y? 4. Originally Posted by gammaman Why are you squaring (10-y) and not y? If you look at the diagram you will see why. 
We have the roles of y and 10-y exchanged in our diagrams. This is unimportant; the integrals are equivalent. Basically I didn't notice that you have your coordinate system starting at the bottom of the cone where mine is at the top. If you let $u=10-y \iff y=10-u \implies dy=-du$ $\int_{0}^{10}y(10-y)^2dy=\int_{10}^{0}(10-u)(u)^2(-du)=\int_{0}^{10}u^2(10-u)du$ Remember, the way you set up a lot of these is not unique. 5. OK, so the formulas do not change. The only thing that would change would be the Di. If I am adding fluid it is (y + something). If I am taking fluid out it is (y - something). By the way, I love your avatar, is that from Oblivion? 6. So let me get this straight. If the height were 12 and the radius were 3, I would have $\frac{Yi}{12}=\frac{Ri}{3}$ Then $\frac{3Yi}{12}=\frac{12Ri}{12}$ So then we would have (.25)^2 = 1/8? So then it would just be $\frac{57{\pi}}{8}\int_{0}^{12}(y^2)(12-y)dy$ 7. Originally Posted by gammaman So let me get this straight. If the height were 12 and the radius were 3, I would have $\frac{Yi}{12}=\frac{Ri}{3}$ Then $\frac{3Yi}{12}=\frac{12Ri}{12}$ So then we would have (.25)^2 = 1/8? So then it would just be $\frac{57{\pi}}{8}\int_{0}^{12}(y^2)(12-y)dy$ Your logic is correct but $(.25)^2=\left(\frac{1}{4}\right)^2=\frac{1}{16}$ 8. So then it would be $\frac{57{\pi}}{16}?$ 9. Originally Posted by gammaman So then it would be $\frac{57{\pi}}{16}?$ You got it! 10. One last question, what would the "setup" be for a cylinder? And I am still dying to know where your avatar comes from. 11. Originally Posted by gammaman One last question, what would the "setup" be for a cylinder? And I am still dying to know where your avatar comes from. Sorry, my wife found it for me and it is supposed to be Raistlin Majere from Dragonlance. I can ask her where she got it when she gets home from school. Off topic: I just played my first Oblivion game on PS3 and it was awesome. 
On topic: We are using the height as dy, so on the small scale each slice is approximated by a cylinder. Think of it as cutting little circular disks of height dy.
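For a closed-form value of the integral set up in this thread, here is a quick symbolic check (SymPy; added by me for verification, not part of the original thread):

```python
import sympy as sp

y = sp.symbols('y')

# Work to pump the oil out of the cone, as derived above:
# W = (57*pi/4) * integral of y*(10 - y)^2 dy from 0 to 10
W = sp.Rational(57, 4) * sp.pi * sp.integrate(y * (10 - y) ** 2, (y, 0, 10))

print(W)  # 11875*pi, about 37306 ft-lb
```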
http://cs.stackexchange.com/questions/2351/speech-vs-music-classification
# Speech vs Music classification I want to determine which parts of an audio file contain speech and which contain music. I hope someone has made something like this or can tell me where to start. Can you please suggest some method or tutorial for doing this? - 3 There are a lot of papers/documentation online, just search for "music speech discrimination", or take a look at this: "Speech and Music Classification and Separation: A Review" – Vor Jun 7 '12 at 7:56 @Kaveh: "If you are looking for algorithms then the question might be OK here": Please be careful to make a general statement like that. I think that from OP's viewpoint, he or she is obviously looking for algorithms (which classify audio as speech or music), so according to what you said, "the question might be OK here." But the issue with this question is that the problem to solve is not well-defined from the TCS point of view: "speech" and "music" are just what humans interpret a sound as, and their meanings cannot be defined mathematically without referring to humans. (more) – Tsuyoshi Ito Jun 9 '12 at 11:40 (cont'd) Exploring the characteristic differences of what humans interpret as speech and what humans interpret as music is not in the scope of TCS, and this is the main point of the question. – Tsuyoshi Ito Jun 9 '12 at 11:43 @TsuyoshiIto: IMO the difference between a BIG "$2^n$" and a small "$n^k$" is also a "human interpretation" :-) – Vor Jun 9 '12 at 13:26 2 @Vor: No. The difference between $2^n$ and $n^k$ is well-defined in the mathematical sense. Interpreting this difference as important is a human interpretation, but the interpretation does not affect the definition of, say, P. – Tsuyoshi Ito Jun 9 '12 at 13:28 ## 2 Answers The appropriate technique is machine learning. Some keywords you could search for are "music speech discrimination", and you could look at this survey. (These pointers came from Vor's comment.) 
- If you have a lot of data (a large number of audio clips with human labels), I would suggest you try Deep Convolutional Neural Networks. But I think there should be a much more direct way than that, since I believe the spectrum could be a very good feature for discrimination. - Do you have any evidence why those networks can be expected to work well for this problem? – Raphael♦ Feb 4 at 12:16 – Strin Feb 6 at 8:23
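To illustrate the comment above about the spectrum being a good discriminating feature: even the simple zero-crossing rate separates a tonal, music-like signal from a noisy one. A toy sketch with synthetic signals (NumPy; this is not a real speech/music classifier, just one frame-level feature):

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs with a sign change."""
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

fs = 8000                               # sample rate (Hz)
t = np.arange(fs) / fs                  # one second of audio
tone = np.sin(2 * np.pi * 220 * t)      # steady tone: music-like, low ZCR
noise = np.random.default_rng(0).standard_normal(fs)  # noise: high ZCR

print(zero_crossing_rate(tone))   # about 2*220/8000 = 0.055
print(zero_crossing_rate(noise))  # about 0.5
```

A practical system would combine several such frame-level features (ZCR, spectral centroid, MFCCs) and train a classifier on labeled audio, as the survey mentioned in the comments describes.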
http://mathoverflow.net/questions/66053/dissecting-a-square/66054
## Dissecting a square Edited - some comments may now be out-of-date. I thought I had a complete set of solutions to this: ````Cut a square into identical pieces so that they all touch the center point. ```` It became clear after some discussions that I was very, very wrong. There are infinite families of solutions, and a sporadic. So I have two questions: 1. What do you think is a complete set of solutions? 2. What techniques and approaches can I use to prove that the ones I have are all there are? Hope that's clearer. Thanks. - 2 What exactly is your question? How can we help you to show that your answer is "complete", when we don't know your answer, let alone your proof? – Abel Stolz May 26 2011 at 13:15 1 It seems to me that there are infinitely many solutions. Voting to close. – Steven Gubkin May 26 2011 at 13:17 1 Of course there are infinitely many solutions. The question is, what families of solutions exist, and are there any sporadics. Sorry, I'm new here, but I thought that the question: How many ways are there to dissect a square into congruent pieces such that all of them touch the centre? was pretty clear. If there are infinitely many solutions, characterise them. – Colin D Wright May 26 2011 at 13:20 3 @Colin: that question may be clear, but that is not the question you asked. The question you asked is "did I miss any solutions, and how can I prove that my answer is complete," and neither of these questions is possible to answer without knowing what your answer is. – Qiaochu Yuan May 26 2011 at 13:21 2 Too many or's. Choose something. Meanwhile, I'll make my choice. I do not want to turn it into a topology problem, so I'll not assume the pieces connected. I do not want it to become some amenability question either, so I suggest to assume that each set is a closure of its interior with boundary of zero measure. 
Under these assumptions, look at artofproblemsolving.com/Forum/… Is it one of the solutions you knew? – fedja May 26 2011 at 20:26 ## 3 Answers You can take any of a variety of paths from the center to the edges. There are various ways to draw a path from one vertex of a small square to the opposite one and get two symmetric pieces (although one can't just use horizontal and vertical segments). Then 4 of those little squares yield a dissection into 8 pieces. Later addition: Under more relaxed conditions, here is an example with 32 pieces (and a suggestion that maybe there is no limit). If you look at it the right way you might be able to convince yourself that each piece is path-wise connected. The 4 colored pieces are congruent by rotation around the center of the area shown (which is a quarter of the full square). Using a reflection will give 4 more pieces for a total of eight. These eight pieces (so far) fit together to fill in the square shown and each touches each corner. Put together 4 copies of this (32 pieces total) to get the full square. As it is, each piece in the full square touches one corner and the center. I have not totally convinced myself, but it seems that it should be possible to divide each piece into 4 by quadrisecting each acute angle and then recolor in such a way as to have all 32 pieces each touching all 4 corners. If so, then 4 copies of that figure could be arranged and give a square partitioned into 128 pieces all touching its center. If that is correct then there should be no limit. Comments: 1) If you want respect on this site then stop hinting that you have a perhaps complete classification with three families plus one sporadic (I don't mind that but some people here do). Describe them carefully enough that people can decide if they have others that they can think of. Is the sporadic case the entire square? I gather that you think every solution uses 1, 2, 4, or 8 pieces. 
I imagine that is true but the proof of that alone would be a good start and might not be that easy (see comment 3 below.) 2) Your description would probably make clear what you mean by piece, but the most general definition commonly seen (although I have now used a looser definition above) might be something like : "a closed topological disk in the plane with boundary a simple closed curve." You wish to find a finite set of such pieces, all congruent (reflection allowed), disjoint interiors, union the square and the center on the boundary of each piece (ignoring the one piece case...). Tedious, but worth saying anyway. For reference below: a polyomino is such a tile made of unit squares meeting edge to edge. 3) It is fun and challenging to find a missed example for claims of the sort: "this is all tilings." It can be surprisingly difficult (compared to the "obviousness" of the result) and rather tedious (to my taste) to prove that there are not any exceptions. Here is a 7 page paper discussing when you can split a polygon into 2 congruent shapes: Splitting a Polygon into Two Congruent Pieces Kimmo Eriksson The American Mathematical Monthly Vol. 103, No. 5 (May, 1996), pp. 393-400 It might be relevant for this problem and at least is an example of how to prove such things. I can believe that the only way to partition a rectangle into 3 congruent pieces is if the pieces are themselves rectangles. Here is an 8 page proof: Samuel J. Maltby Trisecting a rectangle Journal of Combinatorial Theory, Series A Volume 66, Issue 1, April 1994, Pages 40-52 It cites the following classic (to those who follow these matters) 6 page note proving a long conjectured result (again the methods in the paper could be relevant): Polyominoes of order 3 do not exist I. N. Stewart and A. Wormstein Journal of Combinatorial Theory, Series A Volume 61, Issue 1, September 1992, Pages 130-136 Abstract: The order of a polyomino is the minimum number of congruent copies that can tile a rectangle. 
It is an open question whether any polyomino can have an odd order greater than one. Klarner has conjectured that no polyomino of order three exists. We prove Klarner's conjecture by showing that if three congruent copies of a polyomino tile a rectangle then the polyomino itself is rectangular. The proof uses simple observations about the topology of a hypothetical tiling, and symmetry arguments play a key role. - 2 To fill out: Take a path from a point on the boundary to the centre. Rotate it by a quarter turn, half turn, three quarters about the centre of the square. A condition is required so that these paths do not cross each other, but if they don't they divide the square into four congruent pieces. For eight piece solutions, divide into four by horizontal and vertical straight lines through the centre. Join the centre to the edge with a path having twofold rotational symmetry around the midpoint of the diagonal from centre to vertex and lying within the quarter. Replicate in the other quarters. – Mark Bennet May 26 2011 at 17:19 2 @Colin D Wright: This is not a discussion site. This is a question and answer site. Say what you know if you want me to spend time thinking about it. – Douglas Zare May 26 2011 at 19:31 1 @Colin I forgive you and I don't disagree! I'm just telling you what I've seen. I'd still like to know what your 3 families and sporadic case are. – Aaron Meyerowitz May 26 2011 at 20:23 1 My infinite families are as follows. Take any non self-intersecting path from the center to the border such that a 180 degree rotation does not intersect. That divides the square into two. Similarly for a 90 degree rotation, which divides the square into 4. Now divide into four squares, and for each small square, divide diagonally with a curve that has 180 degree symmetry. That divides the square into 8. The sporadic is the trivial case - one piece. But fedja has just demonstrated a 16 piece dissection, so now I know I know nothing. 
– Colin D Wright May 26 2011 at 20:56 1 @Aaron: This is a cool answer. I urge you to rephrase the beginning of comment (1), however, even though the OP seems not to have been offended. – Daniel Litt May 29 2011 at 7:44 I think there are very few such solutions. The pieces must be identical, and they must touch the center. Consider the segment joining the center with one of the vertices. Then all small figures (in which you split the square) must contain a segment of this length, and there are only four such segments. Any such segment belongs to at most two small figures, and we find that there are at most $8$ small figures. From here on it is easy to see that the possible splits are: • the square itself • the square cut by a diagonal • the square cut by two diagonals • the square cut by parallel lines through the center • the square cut by parallel lines through the center and by its diagonals • the square cut by a line through the center • the square cut by two orthogonal lines through its center. • the square cut by any smooth curve symmetric about its center. • the square cut by any smooth curve symmetric about its center, and the rotation of this curve by $\pi/2$. There are indeed many solutions. Sorry for my initial remark. I think that essentially the square can be dissected into 2, 4, or 8 parts. The 8-part dissection is unique. The 2-part cutting must be symmetric about the center, and the 4-part cutting must be made such that it is invariant under a $\pi/2$ rotation. - That's what I originally thought, but there are many, many more. Hence asking about how to prove the completeness of the enumeration. – Colin D Wright May 26 2011 at 13:26 Present one of your many more solutions, to see where my answer is wrong. 
– Beni Bogosel May 26 2011 at 13:29 @Beni: Do you mean, literally, "must contain a segment of this length"? – Joseph O'Rourke May 26 2011 at 13:35 Draw a semicircle with one end at the centre and the other touching the side of the square in the middle. Rotate 180 degrees. Now you have a sort of Ying-Yang in a square. – Colin D Wright May 26 2011 at 13:36 You're getting closer to the solutions I have. I have three infinite families and a sporadic. The challenge is to show that these are all, and the question is what techniques people might suggest for approaching this. – Colin D Wright May 26 2011 at 13:47 The number of solutions is the maximal possible, namely $2^{2^{\aleph_0}}$. But we need to be specific about some definitions. To begin, what does it mean to cut into identical pieces? There seems to be agreement on this point that it means to partition the square into finitely many pieces each of which can be rotated into the other. The other definition I will assume is that "touching the center point" means having that point in the closure of each set. We also need to assume that we actually have a partition of the square minus the centre point, because otherwise there is no solution except for the square itself because the centre point can belong to only one member of the partition. Given all of this, let $[(p^0_\xi,p^1_\xi)]_{\xi\in 2^{\aleph_0}}$ enumerate all pairs of points in the square symmetric about the central point. For any function $f:2^{\aleph_0}\to 2$ let $X(f)$ be the set of all $p_\xi^{f(\xi)}$ and $Y(f)$ the complement. This is a partition of the square into two pieces as desired, and there are $2^{2^{\aleph_0}}$ such partitions. - Yes, but it's not all the solutions. Agreed that more solutions won't increase the cardinality, but not including all the solutions seems "sub-optimal." What if you additionally require the "pieces" to be connected (in some sense)? 
– Colin D Wright May 26 2011 at 18:48 If you ask for all partitions into two pieces, the sets $X(f)$ and $Y(f)$ I describe do give you all solutions. If you ask for partitions into $n$ pieces then a similar argument, using $n^\text{th}$-roots of unity, also yields a description of all partitions. If you ask for connected pieces, that is a different question. – Juris Steprans May 26 2011 at 19:51 1 Even connected is not enough though. For example let the square in question be centred at the origin and have width 2. For any subset $X\subseteq (0,1]$ consider the partition consisting of the open upper half of the square together with $X$ and $[-1,0) \setminus -X$. This again yields many connected partitions. So one should probably ask for partitions into open connected sets and define "partition" to mean you partition all but a closed nowhere dense subset of the square (so that you can ignore the boundaries). – Juris Steprans May 26 2011 at 20:06 Or define two partitions to be equivalent if they agree on all but a nowhere dense set, which is probably the same, but I'd need to work hard to think about that clearly enough. Very helpful - thank you. – Colin D Wright May 26 2011 at 20:22
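The "two congruent pieces from a centrally symmetric cut" family discussed in these comments has an easy finite model. The sketch below is our own illustration (names like `rot180` are ours, not from the thread): on an $n\times n$ grid with $n$ even (so no cell sits exactly at the center), picking one cell from each half-turn pair gives a piece whose 180-degree image is the complementary, congruent piece, and there are $2^{n^2/2}$ such choice functions, a finite shadow of the $2^{2^{\aleph_0}}$ count in the answer above.

```python
# Finite model of a two-piece dissection by a centrally symmetric cut:
# choose one cell from each {cell, half-turn image} pair of an n x n grid.
n = 4  # any even n works; evenness avoids a fixed center cell

cells = {(i, j) for i in range(n) for j in range(n)}

def rot180(cell):
    """Half-turn about the center of the grid."""
    i, j = cell
    return (n - 1 - i, n - 1 - j)

# one concrete choice function: keep the lexicographically smaller cell of each pair
piece = {c for c in cells if c < rot180(c)}
partner = {rot180(c) for c in piece}  # congruent to `piece` by construction

# the two congruent pieces partition the square...
assert piece.isdisjoint(partner) and piece | partner == cells
# ...because every half-turn pair is split between them
assert all((c in piece) != (rot180(c) in piece) for c in cells)

print(len(piece), 2 ** (n * n // 2))  # 8 cells per piece, 256 choice functions
```

Replacing the grid cells by all pairs of points symmetric about the center of the real square is exactly the bookkeeping used in the cardinality answer.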
http://stats.stackexchange.com/questions/10920/statistical-method-to-quantify-diversity-variance-inequality
# Statistical method to quantify diversity / variance / inequality I am looking for a statistical method to define the variance / diversity / inequality in a set of observations. For example: If I have the following (n=4) observations using 4 data points, the diversity/variance/inequality here is zero.

````
A-B-C-D
A-B-C-D
A-B-C-D
A-B-C-D
````

In the following (n=4) observations using 7 data points, there is some amount of diversity / variance / inequality in the observations. What method can I use to derive a diversity / variance score?

````
A-B-B-D
Q-D-C-B
B-C-B-A
B-Z-F-A
````

My real data is derived from a database of 10K data points, and observations usually contain from 2 up to 1000s of points. Currently I am looking at Gini impurity as a potential method. Do you think Gini impurity is suitable for this type of data? Do you know of any better method which can derive a score by considering the order of the data? Looking forward to your suggestions. - 2 You should at least mention whether order is a part of your 'similarity' score. – Nick Sabbe May 17 '11 at 22:02 1 @Shameer I was curious about a few things in your problem: Are data points the same as categorical attributes? What does your observational unit represent in the real world? What are the 10K data points actually representing in the real world? Is it humans and human attributes? When you refer to inequality, do you refer to the concern regarding human inequality or simply dispersion? Should each 'data point' be assumed to hold the same weight? Thanks... – B_Dev May 18 '11 at 0:51 @Nick, I need to consider the order as part of my score. Thanks for this; I have now edited the question. – Khader Shameer May 18 '11 at 21:38 @B_Dev, The data is kind of a categorical label. I used A, B, C, D... to simplify the representation. In the real world these are protein domains (see: en.wikipedia.org/wiki/Protein_domain) encoded in a protein sequence sourced from a database of protein domains (for example see: pfam.sanger.ac.uk).
Here, 1 row = 1 protein with different domains. I am looking at dispersion, but am looking for a sequence-alignment-free method here. I can assign weights to data points, but that will be an extension of my analysis. – Khader Shameer May 18 '11 at 21:51 ## 1 Answer I can't give a specific answer - mainly because the question isn't specific enough (happy to edit in due course though, given more information). If the values A, B, C, D, E, etc. have a definite ordering that is meaningful to you (such as $A>C>B>E>Z>\dots$) and you only want to describe the diversity that exists within your observations, then the Gini coefficient makes sense. However, if the values A, B, C, etc. are just arbitrary labels (i.e. they carry no information apart from distinguishing one type from another type), then a Gini coefficient makes less sense - what is the relevant analogy to "rich" and "poor" if being "rich" can't be seen to be greater than being "poor"? For the Gini coefficient to work, you have to be able to order the observations. A further consideration is whether you wish to make inference about something which exists outside your sampled values. For example, suppose the first set of 4 observations (which are the same) are part of a larger group which was not observed and you want to make a statement about the larger group. Such a statement might be: the sample is not diverse, therefore the group that the sample came from is not diverse. The Gini coefficient may or may not be appropriate in this case, depending on what you know about how the larger group is related to the sample, and on how big your sample is compared to the larger group. One way to think about the problem which may help is to consider the following scenarios. If you were told that a set of observations A is diverse, but not given any observations from A, what would you predict them to be? What if you were given the first observation only?
If you were told that the set A is more diverse than another set B, and you were given the observations from set B, what would you predict the observations in set A to be? Thinking about the problem this way will help you to describe the features that your diversity measure should have (and features that it shouldn't have). If your data are categorical, then you could use tests based on a multinomial distribution with the number of trials equal to the number of categories per observation (4 in your examples) and the number of probability parameters equal to the number of data points (4 in your first example, and 7 in your second). So, taking the second example we have: $$x_{i}\sim Multinomial(4,\theta_{1},\theta_{2},\dots,\theta_{7})$$ And a homogeneity score can be created by calculating the probability that all of the $\theta_{j}$ are equal. However, in calculating this probability (which is not difficult) you will also end up calculating the probability of several other hypotheses (which are also useful), such as: $\theta_{1}$ is different and all other $\theta_{j}$ are equal, 2 are different, 3 are different, and so on up to all 7 are different. I can post how you would do this, but I want to make sure this is something that you actually want first! If you also cared about positioning (so that $A-B-C-D$ is considered different to $A-C-B-D$), then this can be incorporated by creating $28$ $\theta$ parameters, and doing hypothesis tests about them. So you would have: $$x_{ik}\sim Multinomial(1,\theta_{1k},\theta_{2k},\dots,\theta_{7k})\;\;\;\;\;\;\;\;k=1,2,3,4$$ Admittedly you will need a reasonably sized data set in order to do this kind of test. And you would have hypotheses about the $k$ index indicating "ordering" diversity, and hypotheses about the $j$ index indicating "composition" diversity. - That's a wonderful answer! Thanks a lot for your thoughts! I like the way you answered my concerns about applying the Gini coefficient to my problem.
I will explore the possibilities of applying multinomial distribution here. – Khader Shameer May 18 '11 at 21:43
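To make both threads of this discussion concrete, here is a small sketch (our own illustration; the function names are ours, not the posters'): a position-wise Gini-impurity score, which is zero for the first example above and positive for the second, and which is order-aware because each position is scored as its own column; plus a Monte-Carlo stand-in for the multinomial homogeneity check (a frequentist chi-square simulation, not the Bayesian model-probability calculation described in the answer).

```python
import random
from collections import Counter

def positional_gini(rows):
    """Mean Gini impurity over positions (columns); 0 iff every column is
    constant across observations. Scoring per position makes it order-aware."""
    score = 0.0
    for col in zip(*rows):
        n = len(col)
        score += 1.0 - sum((c / n) ** 2 for c in Counter(col).values())
    return score / len(rows[0])

def chisq(counts, total, k):
    """Chi-square statistic against a uniform expectation total/k."""
    expected = total / k
    return sum((c - expected) ** 2 / expected for c in counts)

def homogeneity_pvalue(rows, n_sim=2000, seed=0):
    """Monte-Carlo p-value for H0: all symbol probabilities theta_j are equal."""
    symbols = [s for row in rows for s in row]
    k, n = len(set(symbols)), len(symbols)
    observed = chisq(Counter(symbols).values(), n, k)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        sim = Counter(rng.randrange(k) for _ in range(n))
        if chisq([sim.get(i, 0) for i in range(k)], n, k) >= observed:
            hits += 1
    return hits / n_sim

flat = [list("ABCD")] * 4
mixed = [list("ABBD"), list("QDCB"), list("BCBA"), list("BZFA")]
print(positional_gini(flat))   # 0.0
print(positional_gini(mixed))  # 0.65625
print(homogeneity_pvalue(mixed))
```

The positional score ranges over $[0,1)$ per column, so weighting individual positions (as mentioned in the comments) is a one-line extension.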
http://physics.stackexchange.com/questions/49736/4-velocity-and-electromagnetic-fields
# 4-velocity and electromagnetic fields Can anyone see a reason for $$\left(1+{U_\rho U^\rho\over c^2}\right)\left(U_\nu{d^2 U^\nu\over d\tau^2}\right)=0$$? Here $U^\rho$ is the 4-velocity for a particle and $\tau$ the proper time. The context is a particle moving in an electromagnetic field. I believe it may be useful to introduce the antisymmetric tensor $F_{\mu\nu}$ -- the electromagnetic field tensor. - ## 1 Answer The factor in the left parentheses equals zero because $U_{\rho}U^{\rho}=-c^2$; this holds for the 4-velocity, a timelike vector of fixed norm, in the (-1,1,1,1) signature. - Thanks for answering, Frederic. However, I don't understand the answer. Firstly, for the LHS bracket to be $0$, surely we need $U_{\rho}U^{\rho}=-c^2$. Secondly, I don't understand how time-like vectors would guarantee that. Many thanks. – Greta Jan 9 at 16:50 1 Right, Frederic meant or should have meant your value of $U_\rho U^\rho$. It describes the length (proper time) of a time-like 4-vector normalized to de facto unit length. More precisely, the length is $c$. Only the direction (of the velocity, the direction of the world line) is variable; the normalization is fixed by convention. – Luboš Motl Jan 9 at 17:02 Indeed, that was a typo. Thanks. – Frederic Brünner Jan 9 at 17:03 Thank you, Frederic and @LubošMotl. – Greta Jan 9 at 19:42
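A quick numerical sanity check of the normalization invoked in this answer (our own sketch, in units where $c=1$ and with an arbitrary boost speed below $c$):

```python
import math

c = 1.0   # work in units where c = 1
v = 0.6   # any 3-speed with |v| < c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
U = [gamma * c, gamma * v, 0.0, 0.0]   # 4-velocity for motion along x

eta = [-1.0, 1.0, 1.0, 1.0]            # Minkowski metric, (-1,1,1,1) signature
norm = sum(e * u * u for e, u in zip(eta, U))

print(norm)                            # -c**2, up to rounding
assert abs(norm + c ** 2) < 1e-12
```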
http://mathhelpforum.com/discrete-math/8207-help-partition-problems.html
# Thread: 1. ## help with partition problems The context is Discrete math / relations. Hi, I need help with this problem; I have some trouble with partitions: Which of these collections of subsets are partitions of the set of integers? 1- The set of even integers and the set of odd integers. 2- The set of positive integers and the set of negative integers. 3- The set of integers divisible by 3, the set of integers leaving a remainder of 1 when divided by 3, and the set of integers leaving a remainder of 2 when divided by 3. 4- The set of integers less than -100, the set of integers with absolute value not exceeding 100, and the set of integers greater than 100. 5- The set of integers not divisible by 3, the set of even integers and the set of integers that leave a remainder of 3 when divided by 6. 2. Hello, braddy! A partition of a set $A$ divides $A$ into a number of disjoint subsets. We must see if the following is true: . . (a) There is no "overlap" among the subsets. . . (b) All the elements of $A$ are used. You can answer these questions with some Thinking. I'll baby-talk through most of them . . . Which of these collections of subsets are partitions of the set of integers? $I \;= \;\{\hdots\,\text{-}3,\,\text{-}2,\,\text{-}1,\,0,\,1,\,2,\,3,\,\hdots\}$ 1) $A$ = the set of even integers, $B$ = the set of odd integers $A \:=\:\{\hdots,\,\text{-}6,\,\text{-}4,\,\text{-}2,\,0,\,2,\,4,\,6,\,\hdots\}$ $B \:=\:\{\hdots\,\text{-}5,\,\text{-}3,\,\text{-}1,\,1,\,3,\,5,\,\hdots\}$ . . (a) $A \cap B \:=\:\emptyset$ . . . There is no overlap . . (b) $A \cup B \:=\:I$ . . . All of $I$ is used It is a partition. 2) $C$ = the set of positive integers, $D$ = the set of negative integers $C\:= \:\{1,\,2,\,3,\,4,\,\hdots\}$ $D\:= \:\{\text{-}1,\,\text{-}2,\,\text{-}3,\,\text{-}4,\,\hdots\}$ . . (a) $C \cap D\:=\:\emptyset$ . . . They are disjoint . . (b) $C \cup D \:\neq\:I$ . . . The $0$ is not included. It is not a partition.
3) $P$ = the set of integers divisible by 3, . . $Q$ = the set of integers leaving a remainder of 1 when divided by 3, . . $R$ = the set of integers leaving a remainder of 2 when divided by 3 You can think your way through this one. When we divide an integer by 3, only three things can happen: . . [1] the remainder is 0 . . . the integer is in $P.$ . . [2] the remainder is 1 . . . the integer is in $Q.$ . . [3] the remainder is 2 . . . the integer is in $R.$ . . (a) $P,\,Q,\,R$ are disjoint. . . (b) $P \cup Q \cup R \:=\:I$ It is a partition. 4) $A$ = the set of integers less than -100, . . $B$ = the set of integers with absolute value not exceeding 100, . . $C$ = the set of integers greater than 100. This one sounds complicated, but let's take baby steps . . . $A$ is easy: . $A \:=\:\{\hdots\,\text{-}104,\,\text{-}103,\,\text{-}102,\,\text{-}101\}$ $C$ is easy: . $C\:=\:\{101,\,102,\,103,\,104,\,\hdots\}$ $B$ has integers $n$, where $|n| \leq 100$ . . . that is: . $-100 \leq n \leq 100$ . . Hence: . $B \:=\:\{\text{-}100,\,\text{-}99,\,\text{-}98,\,\hdots\,98,\,99,\,100\}$ . . (a) $A,\,B,\,C$ are disjoint. . . (b) $A \cup B \cup C \:=\:I$ . . . all of $I$ is used It is a partition.
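The two checks (a) no overlap and (b) full coverage used throughout this reply are easy to mechanize. Below is a sketch of ours (not from the thread) that tests each of the five collections on a finite window of integers, which is enough to expose the failures; collection 3 is coded as intended (remainders 0, 1, 2 mod 3), and note that Python's `%` returns a remainder in `[0, 3)` even for negative integers, matching the remainder classes above.

```python
def is_partition(subsets, universe):
    """(a) no overlap and (b) full coverage: every x lies in exactly one subset."""
    return all(sum(pred(x) for pred in subsets) == 1 for x in universe)

Z = range(-300, 301)  # finite stand-in for the integers

collections = {
    1: [lambda x: x % 2 == 0, lambda x: x % 2 != 0],
    2: [lambda x: x > 0, lambda x: x < 0],
    3: [lambda x: x % 3 == 0, lambda x: x % 3 == 1, lambda x: x % 3 == 2],
    4: [lambda x: x < -100, lambda x: abs(x) <= 100, lambda x: x > 100],
    5: [lambda x: x % 3 != 0, lambda x: x % 2 == 0, lambda x: x % 6 == 3],
}

for k, subsets in collections.items():
    print(k, is_partition(subsets, Z))
# 1 True, 2 False (0 is uncovered), 3 True, 4 True,
# 5 False (e.g. 2 lies in both "not divisible by 3" and "even")
```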
http://mathoverflow.net/questions/18408/does-a-locally-free-sheaf-over-a-product-pushforward-to-a-locally-free-sheaf/18439
## Does a locally free sheaf over a product pushforward to a locally free sheaf? Suppose $X$ and $Y$ are two (smooth, affine) algebraic varieties. Let $\mathcal{F}$ be a locally free coherent sheaf over $X \times Y$, and let $\mathcal{G}$ be the pushforward of $\mathcal{F}$ to $X$. Is it true that $\mathcal{G}$ is a locally free quasicoherent sheaf? - ## 5 Answers The answer is "yes" (though I can't imagine a situation where one would really need this fact). More generally, if $A$ and $B$ are arbitrary commutative algebras over a field $k$ with $A$ noetherian and if $M$ is an $A \otimes_k B$-module which is locally free as such (perhaps not finitely generated) then $M$ is locally free as an $A$-module. Here, by "locally free" I mean relative to the Zariski topology. Without loss of generality (since $A$ is noetherian), we may and do assume that ${\rm{Spec}}(A)$ is connected. The first thing to observe is that $M$ is projective as an $A \otimes_k B$-module. Indeed, projectivity is a Zariski-local (even fpqc-local) property for modules over commutative rings, by 3.1.3 part II of Raynaud-Gruson (and the fact that faithfully flat ring maps satisfy their condition (C), using 3.1.4 part I of Raynaud-Gruson), so any locally free module over a commutative ring is projective. Thus, $M$ is a direct summand of a free $A \otimes_k B$-module, which in turn is also free as an $A$-module. Hence, $M$ is projective as an $A$-module. If $M$ is module-finite as such then it is certainly locally free (since $A$ is noetherian). But if it is not module-finite then we're again done since $A$ is noetherian with connected spectrum, as then it follows that any projective $A$-module that is not finitely generated is free! This is Bass' theorem "big projective modules are free"; see Corollary 4.5 in his paper with that title.
- 5 It was suggested (in an earlier comment which was later deleted) that there must be something "wrong" with Bass' theorem, since for a non-free finitely generated projective module P, surely an infinite direct sum of copies of P is not free. In case anyone else has doubts, it really is free (and Corollary 4.5 in Bass' paper is not wrong); that (among other things) is the marvelous surprise of Bass' theorem. This phenomenon can also be seen on a smaller scale: if $R$ is Dedekind and $I$ is a non-principal ideal then $I \oplus I \simeq R^2$ is free even though $I$ is not free. – BCnrd Mar 17 2010 at 0:02 A fact similar to the last one you mention is also true for the Weyl algebra A_1 (differential operators on C[x] with polynomial coefficients). If M is a projective (left, say) A_1 module, $M \oplus M$ is free. – Peter Samuelson Mar 17 2010 at 6:15 Small typo: in my preceding comment, for the example at the end I meant to require that $I$ is also 2-torsion in the class group. – BCnrd Mar 17 2010 at 6:28 One can prove this also without Bass's theorem. Let $X= Spec A$ and $Y=Spec B$. The sheaf `$\cal F$` comes from a finitely generated module $M$ over `$C=A\otimes B$`. Our first goal is to show that $M$, as an $A$-module, is a direct sum of finitely generated, locally free modules. Since $A$ is noetherian, this is equivalent to $M$ being a direct sum of finitely generated projective $A$-modules. The fact that $M$ is locally free implies that $M$ is projective over $C$; further, it is finitely generated, so there is a finitely generated $C$-module $N$ such that $M\oplus N=\bigoplus_{i=1}^kC e_i$. Now each $e_i$ of this free basis can be written uniquely as $e_i=m_i+n_i$ with $m_i\in M$ and $n_i\in N$. Let $M_0$ be the $A$-module generated by $m_1,\dots,m_k$.
Let `$(b_j)_{j\in J}$` be a basis of $B$ over the ground field; then $$M=\bigoplus_{j\in J}b_j M_0.$$ Let $N_0$ be the $A$-module generated by $n_1,\dots,n_k$. Then $M_0\oplus N_0= \bigoplus_i Ae_i$ is a free $A$-module, and so is each $b_j(M_0\oplus N_0)=\bigoplus_iAb_je_i$. This means that we have written $M$ as a direct sum of finitely generated projective $A$-modules as claimed. Now, to conclude, remember that $A$ is noetherian; therefore for each point in $X$ there exists an open neighborhood where all summands of $\cal G$ are free. - Hi. I don't know how to make a comment; this is on Torsten's example with an elliptic curve minus the zero point. When you take a line bundle corresponding to a point (for simplicity not a two-torsion point) and add the line bundle corresponding to minus of that point, then this rank-2 vector bundle restricted to the elliptic curve minus zero is (even globally) free. Indeed, working on the elliptic curve, twist with the degree one line bundle L corresponding to the zero point and take the canonical inclusion of the structure sheaf on both factors; the quotient must be isomorphic to L squared. This is just to understand Bass' theorem on this nice example. - No. Take for example $X$ to be an elliptic curve, $Y = Pic^0(X) \cong X$ and $F = L$, the Poincare bundle (the universal bundle on $X \times Pic^0(X)$). Then the pushforward of $L$ onto $Pic^0(X)$ is the structure sheaf of the point corresponding to the trivial line bundle. - 3 An elliptic curve is not affine. – anton Mar 16 2010 at 20:51 Certainly not. I am sorry. – Sasha Mar 17 2010 at 8:47 Maybe I'm totally confused, but a locally free sheaf over an affine variety should be nothing else than a projective module over the ring of global functions.
Push forward along the projection is then just restriction of scalars along the inclusion $$O_X\rightarrow O_X\otimes_k O_Y$$ Now $O_Y=\bigoplus k$ so $$O_X\otimes_k O_Y=O_X\otimes_k \bigoplus k=\bigoplus O_X$$ so free modules stay free and projective modules (= summands of free modules) stay projective. - 3 In the affine setting a locally free sheaf corresponds (under mild finiteness conditions) to a projective module, not to a free module. Also, your equality O_Y=\oplus k is incorrect. – Georges Elencwajg Mar 16 2010 at 21:15 Thanks, I edited my post. O_Y=\oplus k is true since O_Y is a k-algebra and any vector space is free. – Jan Weidner Mar 16 2010 at 21:28 I think the question may be based on the fact that for non-finitely generated modules projective and locally free are not the same thing. Take for instance an elliptic curve (minus the origin to make it affine) and take the sum of all the line bundles (one for each point on the original elliptic curve). Then this sum is projective but there is no non-empty open subset of the spectrum over which it is free. – Torsten Ekedahl Mar 16 2010 at 22:04 By Bass' theorem, a non-finitely generated projective module over any noetherian ring with connected spectrum is free. – BCnrd Mar 17 2010 at 0:04 Mea culpa. My comment was wrong in almost every respect. I leave it both in order to leave the subsequent comment meaningful and as a reminder to myself. – Torsten Ekedahl Mar 17 2010 at 5:25
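A toy instance of the decomposition $M=\bigoplus_j b_j M_0$ used in the Bass-free answer above (our own illustration): take $A=k[x]$, $B=k[y]$ and $M=C=A\otimes_k B=k[x,y]$ itself, with the monomial basis $(y^j)_{j\ge 0}$ of $B$ in the role of $(b_j)$. Then

$$k[x,y] \;=\; \bigoplus_{j\ge 0} y^{j}\,k[x],$$

so restricting scalars along $k[x]\hookrightarrow k[x,y]$ visibly turns the free $C$-module $M$ into a free (infinite-rank) $A$-module, the simplest case of "free modules stay free" from the last answer.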
http://mathoverflow.net/questions/4075?sort=newest
## Questions about analogy between Spec Z and 3-manifolds I'm not sure if the questions make sense: Concerning primes as knots and Spec Z as a 3-manifold: does that fit with the Poincare conjecture? Topologists view 3-manifolds as Kirby-equivalence classes of framed links. How would that be with Spec Z? Then, topologists have things like virtual 3-manifolds; do these have analogues in arithmetic? Edit: New MFO report: "At the moment the topic of most active interaction between topologists and number theorists are quantum invariants of 3-manifolds and their asymptotics. This year’s meeting showed significant progress in the field." Edit: "What is the analogy of quantum invariants in arithmetic topology?", "If a prime number is a knot, what is a crossing?" asks this old report. Another such question: Minhyong Kim stresses the special complexity of number theory: "To our present day understanding, number fields display exactly the kind of order ‘at the edge of chaos’ that arithmeticians find so tantalizing, and which might have repulsed Grothendieck." Probably a feeling of such a special complexity makes one initially interested in NT. Knot theory is another case inducing a similar impression. Could both cases be connected by the analogy above? What could a precise description of such special complexity look like, and would it cover both cases? Taking that analogy, I'm inclined to answer Minhyong's question with the contrast between low-dimensional (= messy) and high-dimensional (= harmonized) geometry. Then I wonder whether "harmonizing by increasing dimensions" analogies exist in number theory or the Langlands program. Minhyong hints in a mail at "the study of moduli spaces of bundles over rings of integers and over three manifolds as possible common ground between the two situations".
A google search produces an old article by Rapoport, "Analogien zwischen den Modulräumen von Vektorbündeln und von Flaggen" (Analogies between moduli spaces of vector bundles and flags) (p. 24 here, MR). There, Rapoport describes the cohomology of such analogous moduli spaces, inspired by a similarity of vector bundles on Riemann surfaces and filtered isocrystals from p-adic cohomologies, "beautiful areas of mathematics connected by entirely mysterious analogies". (book by R., Orlik, Dat) As interesting as that sounds, I wonder if google's hint relates to the initial theme. What do you think about it? (And has the mystery Rapoport describes now been elucidated?) Edit: Lectures by Atiyah discussing the above analogies and induced questions of "quantum Weil conjectures" etc. This interesting essay by Gromov discusses the topic of "interesting structures" in a very general way. According to him, "interesting structures" never exist in isolation, but only as "examples of structurally organized classes of structured objects"; Z is interesting only because of, e.g., the algebraic integers as "surrounding" similar structures. That would fit the guesses above, but not why numbers were perceived as especially fascinating as early as Greek antiquity, when the "surrounding structures" Gromov mentions were unknown. Perhaps Mochizuki has a kind of substitute in mind with his "inter-universal geometry"? Edit: Hidekazu Furusho: "Lots of analogies between algebraic number theory and 3-dimensional topology are suggested in arithmetic topology, however, as far as we know, no direct relationship seems to be known. Our attempt of this and subsequent papers is to give a direct one particularly between Galois groups and knots." - I retagged and retitled to try to attract the people who would know about this. – David Speyer Nov 4 2009 at 13:26 I'm experimenting with tag `arithmetic-topology` (a reasonably standard name for "primes are knots topic"). Comment if you like!
– Ilya Nikokoshev Nov 4 2009 at 16:25 4 Usually I just complain about bad grammar for its own sake, but this time it's positively confusing. -1 – Scott Morrison♦ Nov 4 2009 at 17:19 1 I think this question deserves a rewrite by somebody with better English... Scott? me? – Ilya Nikokoshev Nov 4 2009 at 17:27 Link to " Minhyong's question " is not working now - "londonnumbertheory.wordpress.com is no longer available. The authors have deleted this blog. " – Alexander Chervov Nov 11 at 19:31 ## 8 Answers The analogy doesn't quite give a number theoretic version of the Poincare conjecture. See Sikora, "Analogies between group actions on 3-manifolds and number fields" (arXiv:0107210): the author states the Poincare conjecture as "S3 is the only closed 3-manifold with no unbranched covers." The analogous statement in number theory is that Q is the only number field with no unramified extensions, and indeed he points out that there are a few known counterexamples, such as the imaginary quadratic fields with class number 1. The paper also has a nice but short summary of the so-called "MKR dictionary" relating 3-manifolds to number fields in section 2. Morishita's expository article on the subject, arXiv:0904.3399, has more to say about what knot complements, meridians and longitudes, knot groups, etc. are, but I don't think there's an explanation of what knot surgery would be and so I'm not sure how Kirby calculus fits into the picture. Edit: An article by B. Morin on Sikora's dictionary (and how it relates to Lichtenbaum's cohomology, p. 28): "he has given proofs of his results which are very different in the arithmetic and in the topological case. In this paper, we show how to provide a unified approach to the results in the two cases. For this we introduce an equivariant cohomology which satisfies a localization theorem. 
In particular, we obtain a satisfactory explanation for the coincidences between Sikora's formulas which leads us to clarify and to extend the dictionary of arithmetic topology." - Thomas Koberda wrote about "Class field theory and the MKR dictionary for knots." in: math.harvard.edu/~koberda/minorthesis.pdf – Thomas Riepe Jan 15 2010 at 21:43 It seems the following remarks in M. Kapranov's paper http://arxiv.org/abs/alg-geom/9604018, page 64 bottom, have not been mentioned so far: According to the point of view going back to Y.I. Manin and B. Mazur, one should visualize any 1-dimensional arithmetic scheme X as a kind of 3-manifold and closed points x ∈ X as oriented circles in this 3-manifold. Thus the Frobenius element (which is only a conjugacy class in the fundamental group) is visualized as the monodromy around the circle (which, as an element of the fundamental group, is also defined only up to conjugacy since no base point is chosen on the circle), Legendre symbols as linking numbers and so on. From this point of view, it is natural to think of the operators (algebra elements) $a_{f,x,d}$ for fixed f and varying x, d as forming a free boson field $A_f$ on the “3-manifold” X; more precisely, for $\pm d > 0$, the operator $a^{\pm}_{f,x,d}$ is the $d$th Fourier component of $A_f$ along the “circle” $\mathrm{Spec}(\mathbb{F}_{q(x)})$. The bosons $a^{\pm}_{f,x,d}$ and their sums over x ∈ X (i.e., the Taylor components of $\log \Phi^{\pm}_f(t)$) will be used in a subsequent paper to construct representations of U in the spirit of [FJ]. It might be that the recent paper by Kapranov and coauthors, http://arxiv.org/abs/1202.4073 "The spherical Hall algebra of Spec(Z)", is somehow developing the ideas quoted above. The question which I heard from V.
Golyshev and others many years ago is the following: if Spec(Z) is analogous to a 3-fold, what should the arithmetic analogue of Chern-Simons theory be? - My post is not an answer, but rather a suggestion of work which belongs to this circle of ideas: Sugiyama gives a geometric analog of the Birch and Swinnerton-Dyer conjecture in http://geoquant.mi.ras.ru/sugiyama.pdf - Thanks for giving that interesting link! – Thomas Riepe Dec 8 2011 at 9:36 This has been well-addressed by the answerers before me, but just to chime in -- there are a variety of analogs one could make for the Poincare conjecture for number fields. For one, there are several equivalent statements about the Poincare conjecture for 3-manifolds which are not equivalent when transferred over by analogy to the number field case. As a first easy example, while 3-manifolds enjoy a clean Poincare duality, number fields have extra 2-torsion. In particular, one frequently has $H^1(\mathcal{O}_K,\mathbf{G}_m)$ trivial with $H^2(\mathcal{O}_K,\mathbf{G}_m)$ non-trivial (example: any real quadratic number field with trivial class group). The equivalences (or lack thereof) between being an integral homology 3-sphere, a rational homology 3-sphere, and a homotopy 3-sphere are not the same in the two "categories." So depending on how you phrase your analogous Poincare conjecture, you may get different answers. The cleanest form (found in Niranjan Ramachandran's "A Note on Arithmetic Topology", which deals exclusively with this question) is that there are exactly ten rational homology 3-spheres which are homotopy 3-spheres, namely the 9 quadratic imaginary number fields of class number one and $\mathbb{Q}$ itself. (Or really, $\mathbb{Z}$ itself), and even more homotopy 3-spheres. A second frequently under-emphasized point to make is that no one really knows what the right category for this analogy is on the number theory side.
As mentioned above, if you take your category to be Specs of rings of integers in a number field, you don't get the Poincare conjecture. On the other hand, if you take the point of view of Artin-Verdier theory (or alternatively, Arakelov theory), where you include in your spaces some information about the behavior of the infinite primes (from the point of view of number theory, defining Spec(Z) as the set of prime ideals ignores the obviously important primes at infinity), then you get a different cohomology theory. With these new cohomology groups in place, some things look a little bit cleaner. Again, see Ramachandran. - From reading the Morishita article 0904.3399 (page 24), there is the following analogue of the Poincare conjecture: Suppose that k is a number field whose ring of integers is “cohomologically ”, namely for i ≥ 0. Then must be . - Yay for math support! (Though I had to erase formulas, otherwise the formatting is broken) – Ilya Nikokoshev Nov 4 2009 at 16:45 What's that superscript c? – Kevin Lin Dec 23 2009 at 0:56 "compactly-supported etale cohomology taking the infinite prime into account", it's subscript c in the article but I changed it for some reason (I forgot why) – Ilya Nikokoshev Dec 24 2009 at 8:26 I think it's important to keep track of the fact that the analogy isn't between individual number fields and individual 3-manifolds; it's between the collection of all number fields and the collection of all 3-manifolds. So in my opinion it's slightly awry to ask for an "arithmetic Poincare conjecture" about Spec Z; I don't think Spec Z should be thought of as analogous to S^3 in any meaningful sense. As always, John Baez has useful things to say. I saw Deninger give a beautiful talk about his point of view on this, some of which is recorded in this paper. Part of the idea, somewhat vaguely, is that you should think of a number field not as an unadorned 3-manifold but as a 3-manifold with a flow on it.
And then the finite primes are not just knots, but closed orbits of that flow! That gives a more satisfying answer to "why should a 3-manifold have a distinguished countably infinite family of knots on it," makes the connection with dynamical zeta functions, etc. - I cannot really say anything about relations with the Poincare conjecture, but the obvious references you should look at are M. Morishita, On certain analogies between knots and primes, Journ. für die reine u. angew. Math., 550 (2002), 141–167 (there is a more recent exposition also by Morishita in http://arxiv.org/abs/0904.3399 ) and Manin's "The notion of dimension in geometry and algebra": http://arxiv.org/abs/math/0502016 which contains abundant references inside. - There are some cryptic remarks about this in the first few pages of this talk of Fujiwara: http://www.ms.u-tokyo.ac.jp/~t-saito/conf/rv/Leopoldt.pdf (n.b. I believe - though I'm not 100% sure - that some of the later material in these slides has been retracted. But the relevant part is early on.) - "Remark 0.2. `C = Spec O_F\Σ` should be considered as an analogue of hyperbolic 3-manifold `N`, so Thurston’s theory of the moduli of flat bundles on `N` is the right geometric analogy. " I didn't find anything about `Spec Z` though :) – Ilya Nikokoshev Nov 4 2009 at 16:19 Take F=Q. I'm not quite sure what Sigma is, though, but if you can take it to be empty... – TG Nov 4 2009 at 18:19 2 Well, then how is the 3-sphere a hyperbolic manifold? – Ilya Nikokoshev Nov 4 2009 at 21:00
http://mathoverflow.net/questions/25008/given-a-finite-field-k-what-are-the-possible-degrees-of-a-polynomial-p-in-kx
## Given a finite field $K$, what are the possible degrees of a polynomial $p\in K[x]$ such that $x\longmapsto p(x)$ is one-to-one? Such a polynomial clearly cannot have degree $0$, and it cannot be of degree two except for $x\longmapsto (\alpha(x))^2$ with $\alpha$ an affine bijection of a field of characteristic $2$. Are there many examples of degree $3$ (except for the stupid $x\longmapsto (\alpha(x))^3$ with $\alpha$ an affine bijection of a field of characteristic $3$)? I guess that the degrees of such polynomials (except for affine bijections and their composition with the Frobenius map) are generically fairly high (the interpolation polynomial for a "random" permutation of a finite field with $q$ elements should typically be of degree $q-1$). What can for instance be said about the smallest degree $>1$ of a non-affine polynomial inducing a bijection of $\mathbb Z/p\mathbb Z$? - 1 If $n$ is coprime to $q-1$, then every element of $K$ admits an $n$-th root in $K$, so that the polynomial $x^n$ is one-to-one. – damiano May 17 2010 at 13:44 Nice observation. So let us also remove the class of "stupid" examples obtained by composing (perhaps several times) affine bijections and powers coprime to $p-1$. – Roland Bacher May 17 2010 at 13:56 The group of affine bijections and the group $(\mathbb Z/(q-1)\mathbb Z)^*$ (acting as powers) probably generate the complete symmetric group of all permutations of all elements in the field. This, if true, yields almost an answer. – Roland Bacher May 17 2010 at 14:03 There seems to be some discussion on the subject of permutation polynomials in Ch. 7 of Lidl-Niederreiter (books.google.com/…). [I hope I understood the question correctly. In the title shouldn’t $x\mapsto f(x)$ really be $c\mapsto f(c)$?]
– unknown (google) May 17 2010 at 14:25 2 I would start with the work of Mike Zieve, who's thinking very actively about permutation polynomials. See e.g. front.math.ucdavis.edu/0810.2830 – JSE May 17 2010 at 14:36 ## 3 Answers Such things are referred to as 'permutation polynomials', and if you do a search you'll find a whole menagerie of non-stupid classes which is constantly expanding. One simple result going back to Dickson provides something converse to damiano's observation - there are no (non-linear) permutation polynomials of degree dividing $q-1$. Something backing up your guess that the degrees are 'generically' fairly high is a conjecture of Carlitz: fix an even degree $n$; then the cardinality $q$ (with $q$ odd) of a field having a degree $n$ permutation polynomial is bounded from above. This has been proved in cases for $n$ up to 14 by Dickson, Hayes and then Daqing Wan in "On a conjecture of Carlitz", J. Austral. Math. Soc. Ser. A 43 (1987), no. 3, 375--384, but as far as I can tell, the general case is completely open. Edit: a quick search provides more: S. Cohen, "Permutation polynomials and primitive permutation groups", Arch. Math. (Basel) 57 (1991), no. 5, 417--423, proves the above conjecture for $n\leq 1000$ and for $n$ being 2 times an odd prime. And in fact the whole conjecture is implied by the result proved by Fried, Saxl and Guralnick in "Schur covers and Carlitz's conjecture", Israel J. Math. 82 (1993), no. 1-3, 157--225. - If $q$ is a prime, not dividing $p^2-1$, then the $q$-th Dickson polynomial will permute the field of $p$ elements. - For permutation polynomials, you can also look into the relevant part of "Finite fields", by Rudolf Lidl and Harald Niederreiter, CUP. -
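Both easy constructions mentioned in the thread (monomials $x^n$ with $\gcd(n,q-1)=1$, and Dickson polynomials) can be checked by brute force over a small prime field. Here is a quick sketch in Python; it is my own illustration, not part of the original thread:

```python
from math import gcd

def is_permutation(f, p):
    """Check whether x -> f(x) is a bijection of the field Z/pZ."""
    return len({f(x) % p for x in range(p)}) == p

def dickson(n, a, x, p):
    """Dickson polynomial D_n(x, a) mod p, via D_k = x*D_{k-1} - a*D_{k-2}."""
    d0, d1 = 2 % p, x % p
    if n == 0:
        return d0
    for _ in range(n - 1):
        d0, d1 = d1, (x * d1 - a * d0) % p
    return d1

p = 11

# damiano's observation (and Dickson's converse): the monomial x^n
# permutes F_p exactly when gcd(n, p - 1) = 1
for n in range(1, p):
    assert is_permutation(lambda x: pow(x, n, p), p) == (gcd(n, p - 1) == 1)

# the Dickson polynomial D_q(x, 1) permutes F_p when gcd(q, p^2 - 1) = 1;
# here q = 7 and p^2 - 1 = 120 are coprime
q = 7
assert is_permutation(lambda x: dickson(q, 1, x, p), p)
```

The second answer's criterion ($q$ prime and not dividing $p^2-1$) is the special case of the standard condition $\gcd(n, p^2-1)=1$ for Dickson polynomials with nonzero parameter.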
http://mathoverflow.net/questions/44703/clusters-of-coloured-particles/44711
## clusters of coloured particles Is the following a standard problem in combinatorics? Where can I find a reference for it? Consider $n$ particles in a circle, $k$ white and $n-k$ black, otherwise indistinguishable, so that the number of dispositions is $n!/(n-k)!k!$. Different dispositions will have a different number of white/black/white/black... clusters. How many dispositions $d(n,k,c)$ have $c$ clusters? Example. $n=4, k=2$ I get: $d(4,2,2) = 4$ $d(4,2,4) = 2.$ - I do not get why you say that there are $n!/(n-k)!k!$ different configurations: If the particles are indistinguishable, there are only 2 dispositions with 4 particles and 2 white ones: they are adjacent or they are not, and rotation of your circle gives all of them from 2 such configurations. – Nathann Cohen Nov 3 2010 at 18:34 @Nathann: Are rotations of the circle allowed? I expect not. – Peter Shor Nov 3 2010 at 18:53 ## 1 Answer We will let $k_1=k$ and $k_2=n-k$, and $\tilde{c}=c/2$. I believe the answer is then $\frac{k_1 + k_2}{\tilde{c}} {k_1-1 \choose \tilde{c}-1} {k_2-1 \choose \tilde{c}-1}$. I have no idea whether there is a reference for this anywhere. Here's how it works. First we solve (nearly) the same problem on a line. There are $k_1$ white balls and $k_2$ black balls, and we want to count how many ways there are of arranging the balls so you have $c$ clusters, starting with a white ball on the left and ending with a black ball on the right. What we do is to make $c/2$ clusters of white balls and $c/2$ clusters of black balls, and interleave them. To make $\tilde{c} = c/2$ clusters of white balls, we start with $k_1$ white balls, and put in $\tilde{c}-1$ dividing lines in any of the $k_1-1$ spots between two balls. We can do the same for the black balls. This means that we have ${k_1-1 \choose \tilde{c}-1} {k_2-1 \choose \tilde{c}-1}$
ways of arranging these balls into $c$ clusters on a line (with a white on the left and a black on the right). Now, we join them into a circle. There are $n=k_1+k_2$ positions on the circle where we could place the left endpoint of the line. But once we have a circle with $c$ clusters, there are $\tilde{c}=c/2$ ways we could have gotten to this disposition of balls in the circle; we can go back to a line by cutting the circle between any (black, white) pair of balls, and there are $c/2$ of them. We thus obtain the above answer. - It seems like it works well, great and timely answer indeed! Thank you a lot. I feared that the answer would have to do with partitions of integer numbers, instead it's nice to have a simple solution. – tomate Nov 3 2010 at 23:19 This indeed would be a great homework problem for a combinatorics class. As I said, I don't know whether it's in the literature or not. – Peter Shor Nov 3 2010 at 23:25 Hello !! There is something I still don't get: once you have your line arrangement, you can place the left endpoint in $n$ different ways on the cycle, but this does not mean the $n$ colorings you thus produce are different. For example with all the clusters of size 1, you would only produce 2 different colorings. Of course you then divide by $c/2$ but it sounds too rough for me to solve symmetry problems. If your clusters are: 3 + 1 + 3 + 1 + 3 + 1 + 3 + 1 in one case, and 2 + 1 + 4 + 1 + 2 + 1 + 4 + 1, wouldn't you get a different number of distinct copies in each case? – Nathann Cohen Nov 4 2010 at 8:58 > you can place the left endpoint in n different ways on the cycle, but this does not mean the n colorings you thus produce are different That's true, but I think Shor's argument does not treat symmetry that way (otherwise he would have multiplied by n instead of n/2). In fact, if you take k_1 = k_2 and clusters of size 1, that is c = k_1, the formula gives 2 as you expect.
I don't understand your last question: the two dispositions you give correspond to the same clustering W-B-W-B-W-B-W-B. How many other dispositions of 16 particles, 12 white and 4 black, give this very same clustering? – tomate Nov 4 2010 at 9:39 It might be interesting to generalize to clusters of particles of many colours. This problem arose in the calculation of the Von Neumann entropy of a quantum mixed state. – tomate Nov 4 2010 at 9:41
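Shor's formula is easy to sanity-check by exhaustive enumeration for small $n$ (note that when both colours appear, the clusters alternate around the circle, so $c$ is always even). A short Python sketch of this check, my own verification rather than part of the thread; it also reproduces the question's values $d(4,2,2)=4$ and $d(4,2,4)=2$:

```python
from itertools import product
from math import comb

def clusters_on_circle(colors):
    """Number of maximal single-colour runs around the circle."""
    n = len(colors)
    changes = sum(colors[i] != colors[(i + 1) % n] for i in range(n))
    return changes if changes else 1  # monochromatic circle: one cluster

def d_bruteforce(n, k, c):
    """Count circular 0/1 words of length n with k ones and exactly c clusters."""
    return sum(sum(w) == k and clusters_on_circle(w) == c
               for w in product((0, 1), repeat=n))

def d_formula(n, k, c):
    """Shor's count (n / (c/2)) C(k-1, c/2-1) C(n-k-1, c/2-1), for even c."""
    ct = c // 2
    return n * comb(k - 1, ct - 1) * comb(n - k - 1, ct - 1) // ct

# the question's example, n = 4, k = 2:
assert d_formula(4, 2, 2) == 4 and d_formula(4, 2, 4) == 2

# exhaustive agreement for small n, with 0 < k < n and even c:
for n in range(2, 9):
    for k in range(1, n):
        for c in range(2, n + 1, 2):
            assert d_bruteforce(n, k, c) == d_formula(n, k, c)
```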
http://math.stackexchange.com/questions/183233/joining-sigma-algebras?answertab=votes
# Joining sigma algebras I am having some problems joining sigma algebras. So I have: $$\textit{K} =\left \{ A\cap B: A\in \sigma (D),B\in \sigma (E) \right \}$$ and I need to show $$\sigma(K)= \sigma (D,E)$$ ### What I've done so far I intend to do this by showing $$\sigma(K)\subseteq \sigma (D,E)$$ and $$\sigma(K)\supseteq\sigma (D,E)$$ The first part is easy enough; my problem is $\sigma(K)\supseteq\sigma (D,E)$, as I'm not sure where to begin. I've started by saying by definition $\sigma(D,E)= \sigma(\sigma(D)\cup\sigma(E))$ and $\sigma(\sigma(D)\cup\sigma(E))=\sigma(\sigma(D)\cap\sigma(E))$ by De Morgan and $\sigma(D)\cap\sigma(E)=K$, but I think that is wrong, as $\sigma(D)\cap\sigma(E)$ may be a sigma algebra and thus it's smaller than $\sigma(D,E)$. I've also considered something like $\sigma(K^{c})$ to somehow argue that $\sigma(\sigma(D)\cup\sigma(E))$ is a subset of that, but intuitively it feels like what I did with the De Morgan above. Thanks EDIT: sorry for the confusing notation everyone, everything has been changed. - 3 Confusing notation, $A \in \sigma(A)$ ... perhaps there are two different letters $A$ intended here? – GEdgar Aug 16 '12 at 15:39 Thanks for changing notation. Now explain what the letters mean. There is some large set $X$, and we have sigma-algebras on it? Is $D$ a subset of $X$, so that $\sigma(D)$ is a very small sigma-algebra? Or what? – GEdgar Aug 19 '12 at 16:57 ## 2 Answers Notice that $K$ contains $A$ and $B$. The definition of $\sigma(A,B)$ would be $\sigma(A\cup B)$, I think. Not that it makes for much of a difference, but still... I think you're overthinking it. I believe that the first part is actually harder (if only a little bit). Also, you might want to use different symbols to denote the algebras and their elements (\mathcal might help you). If you're still stuck, see the further hint: If we have a set $G$ and a sigma-algebra $\Sigma$, then if $G\subseteq \Sigma$, then $\sigma(G)\subseteq \Sigma$.
Thus, if for some $G_1,G_2$ we have that $\sigma(G_1)\supseteq G_2$ and $\sigma(G_2)\supseteq G_1$, then $\sigma(G_1)=\sigma(G_2)$. - Thanks, I realised how I was overthinking. Just a misc question: would it be correct to say: $$\sigma(\sigma(A)\cup\sigma(B))=\sigma(\sigma(D)\cap\sigma(E))$$? – Pk.yd Aug 19 '12 at 14:34 @Pk.yd: what are $A,B$? If they are elements of $\sigma(D),\sigma(E)$, then not really. The only sensible way to interpret $\sigma(A)$ I can think of in this case would be as $\sigma(\{ A\})$, which, except for trivial cases, is something much less rich than $\sigma(D)$ – tomasz Aug 19 '12 at 18:26 Your definition of $K$ makes no sense: you should say for instance $K=\sigma(X\cap Y, X \in\sigma(A), Y\in\sigma(B))$ (because $A \in \sigma(A)$ is puzzling). The first inclusion $\sigma(K) \subset \sigma(A,B)$ is easy: it follows from the inclusions $\sigma(A) \subset \sigma(A,B)$ and $\sigma(B) \subset \sigma(A,B)$ and from the stability of a $\sigma$-field under intersection. The second inclusion $\sigma(K) \supset \sigma(A,B)$ is easy too: clearly $\sigma(K)$ is a $\sigma$-algebra containing $\sigma(A)$ and $\sigma(B)$, therefore it contains the smallest $\sigma$-algebra containing $\sigma(A)$ and $\sigma(B)$, which is nothing but $\sigma(A,B)$. -
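On a finite base set, sigma-algebras are just families closed under complements and finite unions, so both sides of the identity can be generated mechanically and compared. A small Python sketch of the equality $\sigma(K)=\sigma(D,E)$ for hypothetical generators $D$, $E$ on a four-point set (my own illustration, not from the thread):

```python
X = frozenset(range(4))  # a four-point base set

def sigma(gens):
    """Smallest family containing gens closed under complement and union;
    on a finite set this is exactly the generated sigma-algebra."""
    family = {frozenset(), X} | {frozenset(g) for g in gens}
    changed = True
    while changed:
        changed = False
        for A in list(family):
            for S in [X - A] + [A | B for B in list(family)]:
                if S not in family:
                    family.add(S)
                    changed = True
    return family

sD = sigma([{0, 1}])   # sigma(D) with generator D = {{0,1}}
sE = sigma([{1, 2}])   # sigma(E) with generator E = {{1,2}}

# K = all intersections A ∩ B with A in sigma(D) and B in sigma(E)
K = {A & B for A in sD for B in sE}

# sigma(K) coincides with sigma(D, E) = sigma(sigma(D) ∪ sigma(E))
assert sigma(K) == sigma(sD | sE)
```

Intersections are reachable through De Morgan (complement of a union of complements), which is why closing under complements and unions suffices; the general case is exactly the minimal-sigma-algebra argument in the answers above.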
http://physics.stackexchange.com/questions/56119/are-quantum-mechanics-and-determinism-actually-irreconcilable
# Are quantum mechanics and determinism actually irreconcilable? [closed] As a preface, I am not a physicist. I'm simply interested in abstract physics and fundamental principles of the universe and such. As such, if you can provide an answer for the layman (as non-academic and unjargonized as possible), it would be very, very appreciated so that I can actually understand it. Everything I ever learned about physics seemed to be built off of an assumption that the universe and everything in it behaved deterministically. So it should always be a (theoretical) possibility that, given perfect knowledge of every particle and force in the universe at a given moment, we can calculate with 100% accuracy what the state of the universe will be in the next moment. This of course assumes omniscience and unlimited computational capacity, which is why I said this is only a theoretical possibility. However, we can define our closed system to be much smaller -- say, a bottle full of nitrogen and helium -- and apply this principle more directly. And it seems like this assumption is absolutely necessary for scientific experiments to even take place or have any validity, since without this kind of determinism, the observations and the results inferred from them can't ever actually be trusted. I don't understand quantum mechanics very well, but it seems like this theory breaks this assumption completely. From what I understand, there is no way to predict what the state of the particle will be at the next moment. The most I can know is that, given that a particle is in state `A`, it will next be in state `B` or state `C`. There is absolutely no way to know for sure, and the only way to find out is to observe it actually change. Furthermore, observations of this kind don't yield any insight into what other particles in state `A` will do. 
So, in classical physics, laws used to look like this:

````
A -> B [A implies B]
````

But with quantum physics, all of this is gone, and our laws can at best look something like this:

````
A -> ((B v C) v D) v E [A implies B, or C, or D, or E, or ...]
````

How does this not break everything physics is built upon? The implications of this are seriously troubling to me, and I feel like it destroys everything I thought I knew. Can anyone explain how this works in slightly lower-level terms, or show how it's still possible for the theories and laws of classical physics to hold any weight? - – elfmotat Mar 6 at 21:08 @elfmotat Thanks for the video. I will watch it when I'm in a more appropriate environment. I think it's less "telling nature how to behave", and more "everything nature ever showed me about how it behaves is a complete lie." It legitimately kind of creeps me out that determinism is false. – Jeff Mar 6 at 21:10 Some people see the many-worlds interpretation of quantum mechanics as preserving determinism, since there is no random waveform collapse as such in that theory; instead, the only 'random' factor is our subjective perception of which universe we are in (which isn't really random, since all other universes can ask the same question). – kbelder Mar 6 at 22:13 Philosophically speaking, physics tries to understand nature's behavior. It doesn't mind whether the theories are deterministic or not, because at the end of the day their validity is subject to experimental results. And answering your question, quantum mechanics is not a deterministic theory, by the fact that it uses probability theory. – Anuar Mar 7 at 0:54 1 This question and the answer seem to be more philosophical than physical, and seem to have completely ignored Bell's inequality and the limits that experimental tests on it imply. – dmckee♦ Mar 7 at 18:30 ## closed as not constructive by dmckee♦ Mar 7 at 18:30 As it currently stands, this question is not a good fit for our Q&A format.
We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance. ## 3 Answers NEWTONIAN MECHANICS: Newtonian mechanics, as expressed through Newton’s laws of motion, gave us huge confidence about our understanding of nature. We were able to calculate the motion of the planets and predict their positions quite accurately many years into the future, and then observe them to check they do as we predicted. Things worked extremely well. Occasionally there was some disagreement between our predictions and observation, like the anomalies in the motion of Uranus for example. But these did not mean the Newtonian laws were wrong; we simply missed something out, and in the case of Uranus it was a planet beyond it that caused the anomalies. The physical properties of that planet were all calculated and predicted, again, using the Newtonian laws. Newtonian mechanics was so triumphant that physicists began to believe they had discovered the holy grail of science. This, however, was all to change when the same laws were applied to study the behaviour of matter at the smallest scales we knew, the atomic scale. QUANTUM MECHANICS: The application of Newtonian mechanics to the study of the atom, after it was discovered by Rutherford that atoms were like minute solar systems, was expected to produce yet more results in agreement with our calculations. Lo and behold! Things went pear-shaped so badly that it became obvious new physics was needed urgently. We could not explain: i) The atomic spectra, ii) The black body radiation spectrum iii) The photoelectric effect iv) The wave properties of matter as displayed in the Davisson-Germer experiment.
v) We could not explain the properties of matter as we were discovering them: superconductivity, superfluidity and many more. All these problems were resolved with just one touch of the strong right hand of quantum mechanics! The new physics was not just a slight modification to Newtonian mechanics, but entirely different and counterintuitive to what we had become accustomed to with Newtonian mechanics. No longer could we talk about objective predictions of how nature was going to behave. We could only calculate probabilities, and could only talk about probabilities, and we could only predict probabilistically the behaviour of atoms, particles, properties of solids and liquids and so on. DO WE COMPROMISE OUR KNOWLEDGE ABOUT NATURE WITH QM? Views vary on this, as it depends on how deeply one is entrenched in the philosophy of Newtonian mechanics. The point is that, calculation after calculation, observation after observation, nature is telling us that in the micro world matter behaves according to the laws of probability. We might dislike it, but let us think about this for a moment: Do we really believe that nature would be able to display the richness of phenomena and variation we observe around us, had the rules she follows been a rigid, black and white type of “philosophy,” as we learned from Newtonian mechanics? We all have our own views on this, so you can form yours. - Quantum mechanics, in its essence, is deterministic. It is only measurements that give rise to problems (in the Copenhagen interpretation). Let's compare three theories: classical mechanics, classical mechanics with random pushing/pulling (stochastical mechanics) and quantum mechanics. In classical mechanics, all laws are deterministic in nature. If $A$ gets you to $B$ and $B$ gets you to $C$, we can easily retrace our steps. We can reverse time and get back to where we started by using the same laws without any problems.
In stochastical mechanics, however, the system we want to describe at random times gets pushed or pulled randomly hard. Let's say you still go from $A$ to $B$ but somewhere between $B$ and $C$ you get a random push in a random direction and you end up in $D$. You can't just retrace your steps. So if we reverse time in stochastical mechanics, we don't necessarily get back to where we started. Determinism is broken. In quantum mechanics, the laws are again all deterministic in nature. The laws actually look like those of classical mechanics, in an abstract sense. That's to say: $A$ gets you to $B$, $B$ gets you to $C$, maybe $C$ gets you back to $A$, deterministic. It's only measurements that cause us a headache. As long as we don't measure anything, we can simply rewind and nature will get back to its original state. Note the very important difference with stochastical mechanics: quantum mechanics is not just like classical mechanics with a random component! So QM is deterministic as long as we don't measure anything. However, when we make a measurement (for which a completely satisfying definition doesn't really exist), we can run into problems. Because in the Copenhagen interpretation of QM, a measurement "collapses" the wavefunction, i.e. the system is forced into an eigenstate corresponding to the measuring device. This discontinuity would break determinism. There isn't a general consensus about this among physicists. Some suggest everything is still fine if we describe the measurement fully, including the wavefunctions of all the entities involved in the measurement. Some subscribe to a different interpretation of QM, like the Many-worlds interpretation, where there is no collapse of the wavefunction and therefore no discontinuity either. Others prefer not to think about it too much.
(the worst way to go as a scientist, in my opinion) - Good answer, but the view that QM is deterministic and "the laws are actually very similar to those of Classical mechanics" is not quite true. – JKL Mar 7 at 0:18 @John Well, the laws themselves are deterministic, aren't they? If not, QM would just be a fancy form of stochastical mechanics and "God really does play dice", to paraphrase Einstein. When we want to measure, we can only make predictions about probabilities, though one possibility is that QM becomes deterministic again if we take into account all the wavefunctions (states) of all the interacting particles (those of the measuring device as well). However, this has been a point of discussion ever since the first Solvay conference. – Wouter Mar 7 at 6:53 @John The sentence you mention might not be the best phrasing, I don't mean to give the impression QM is similar to classical mechanics. I'll change that. – Wouter Mar 7 at 6:56 The laws of quantum mechanics are laws for calculating probabilities and uncertainties. If there is anything deterministic in QM, it is the probabilities we calculate by solving Schrodinger's equation. Predicting probabilities is not equivalent to predicting the future and exactly how things will turn out, like we did in Newtonian mechanics. Even if we had the most accurate Hamiltonian, we would still calculate probabilities, but they would be much more accurate. But I agree, the nature of QM is very subtle indeed. – JKL Mar 7 at 14:23 It is quite simple actually. Quantum mechanics is DETERMINISTIC in the sense of its laws - they are fixed and are not subject to change. Things which are not deterministic are COORDINATES, which simply means you can not measure your parameters well (but it does not mean, for example, that you can not measure well some combinations of them - like the total energy of the system). Back to your question -- there is no uncertainty of implications in QM; there is only uncertainty of position. -
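The split the answers describe (deterministic, reversible unitary evolution; probabilities only at measurement) can be illustrated numerically for a single qubit. A minimal sketch with NumPy, my own illustration rather than part of the thread:

```python
import numpy as np

# a qubit state and a unitary evolution (a Hadamard-like matrix)
psi = np.array([1.0, 0.0], dtype=complex)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# unitary evolution is deterministic and reversible: applying the
# conjugate transpose U† undoes U exactly ("we can simply rewind")
evolved = U @ psi
recovered = U.conj().T @ evolved
assert np.allclose(recovered, psi)

# measurement, by contrast, only yields probabilities (Born rule):
# the chance of each outcome is the squared amplitude
probs = np.abs(evolved) ** 2
assert np.allclose(probs, [0.5, 0.5])  # each outcome with probability 1/2
```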
http://math.stackexchange.com/questions/240491/what-is-a-covector-and-what-is-it-used-for?answertab=active
# What is a covector and what is it used for?

From what I understand, a covector is an object that takes a vector and returns a number. So given a vector $v \in V$ and a covector $\phi \in V^*$, you can act on $v$ with $\phi$ to get a real number $\phi(v)$. Is that "it" or is there more to it? I find this simple realization hard to reconcile with an example given in my textbook: "Geometric Measure Theory" by Frank Morgan, where he explains that given $\mathbb{R}^n$, the dual space $\mathbb{R}^{n*}$ has basis $dx_1, dx_2 \dots$. So let's say we are in $\mathbb{R}^2$: $dx$ is a function on any vector that will return a scalar? I can't imagine what $dx([1,2])$ is. Can someone explain to me this concept of covectors reconciled with the infinitesimal case?

Edit: It has something to do with biorthogonality? I am really stumped on these concepts, like I'm ramming my head against a wall.

- 6 The thing I always found most confusing is the passage between the linear algebra situation and the infinitesimal situation. If we have a basis $\{x_1,\ldots,x_n\}$ for a vector space $V$, then the dual basis $\{dx_1,\ldots,dx_n\}$ of $V^*$ is determined by $dx_i(a_1x_1+\cdots + a_nx_n) = a_i$. Of course, this notation $dx_i$ is pretty unfortunate. Actually, more or less all of my confusion in differential geometry (as far as I got in it, at least) has stemmed from poor notation. – Aaron Mazel-Gee Nov 19 '12 at 13:00

Yes, I think most of my misunderstanding arose from the fact that I was interpreting $dx$ as a "length" as opposed to a function. – Mike Flynn Nov 19 '12 at 20:23

When you say you can't imagine what $dx([1,2])$ is... are you using the notation $[1,2]$ to mean a vector in $\mathbb{R}^2$ with coordinates $(1, 2)$? That would be pretty non-standard, so I ask just to clarify. – Jesse Madnick Dec 10 '12 at 8:25

yes, so $dx([1,2])$ would be the function $dx$ acting on the vector $[1,2] \in R^2$.
– Mike Flynn Dec 11 '12 at 1:09

## 3 Answers

Addition: in a new answer I have briefly continued the explanation below to define general tensors and not just differential forms (covariant tensors).

Many of us got very confused with the notions of tensors in differential geometry not because of their algebraic structure or definition, but because of confusing old notation. The motivation for the notation $dx_i$ is perfectly justified once one is introduced to exterior differentiation of differential forms, which are just antisymmetric multilinear covectors (i.e. they take a number of vectors and give a number, changing sign if we reorder their input vectors). The confusion in your case is higher because you are using $x_i$ for your basis of vectors, while the $x_i$ are actually your local coordinate functions.

CONSTRUCTION & NOTATION: Let $V$ be a finite-dimensional vector space over a field $\mathbb{k}$ with basis $\vec{e}_1,...,\vec{e}_n$, and let $V^*:=\operatorname{Hom}(V, \mathbb{k})$ be its dual vector space, i.e. the linear space formed by linear functionals $\tilde\omega: V\rightarrow\mathbb{k}$ which eat vectors and give scalars. Now the tensor product $\bigotimes_{i=1}^p V^*=:(V^*)^{\otimes p}$ is just the vector space of multilinear functionals $\tilde\omega^{(p)}:V^p\rightarrow\mathbb{k}$, e.g. $\tilde\omega (\vec v_1,...,\vec v_p)$ is a scalar, linear in each of its input vectors. By alternating those spaces, one gets the antisymmetric multilinear functionals mentioned before, which satisfy $\omega (\vec v_1,...,\vec v_k)=\operatorname{sgn}(\pi)\cdot \omega (\vec v_{\pi(1)},...,\vec v_{\pi(k)})$ for any permutation $\pi$ of the $k$ entries.

The easiest example is to think of row vectors and matrices: if your vectors are columns, think of covectors as row vectors which by matrix product give you a scalar (actually the usual scalar product!); they are called one-forms. Similarly, any matrix multiplied by a column vector on the right and by a row vector on the left gives you a scalar.
An anti-symmetric matrix would work similarly but with the alternating property, and it is called a two-form. This generalizes to inputs of any $k$ vectors. Now interestingly, there is only a finite number of those alternating spaces: $$\mathbb{k}\cong V^0, V^*, (V^*)^{\wedge 2}:=V^*\wedge V^*,..., (V^*)^{\wedge \dim V}.$$

If you consider your $V$-basis to be $\vec{e}_1,...,\vec{e}_n$, by construction its dual space of covectors has a basis of the same dimension given by the linear forms $\tilde e^1,...,\tilde e^n$ which satisfy $\tilde e^i (\vec e_k)=\delta^i_k$; that is, they are the functionals that give $1$ only on their dual vectors and $0$ on the rest (covectors are usually indexed by superindices to use the Einstein summation convention). That is why any covector acting on a vector $\vec v=\sum_i v^i\vec e_i$ can be written $$\tilde\omega =\sum_{i=1}^n \omega_i\tilde e^i\Rightarrow \tilde\omega(\vec v) =\sum_{i=1}^n \omega_i\tilde e^i(\vec v)=\sum_{i=1}^n \omega_i\cdot v^i.$$

Finally, when you are working on differentiable manifolds, you endow each point $p$ with a tangent space $T_p M$ of tangent vectors, and so you get a dual space $T_p^* M$ of covectors and its alternating space $\Omega_p^1(M)$ of alternating 1-forms (which in this case coincide). Analogously you define your $k$-forms on tangent vectors, obtaining $\Omega_p^k(M)$. Since these spaces change from point to point, you start talking about fields of tangent vectors and fields of differential $k$-forms, whose components (with respect to your chosen basis field) vary from point to point.

Now the interesting fact is that the constructive intrinsic definition of tangent vectors is based on directional derivatives at every point. Think of your manifold $M$ of dimension $n$ as locally charted around a point $p$ by coordinates $x^1,...,x^n$.
If you have a curve $\gamma:[0,1]\subset\mathbb R\rightarrow M$ on your manifold passing through your point $p$ of interest, you can intrinsically define tangent vectors at $p$ by looking at the directional derivatives at $p$ of any smooth function $f:M\rightarrow\mathbb R$, which are given by the rate of change of $f$ along any curve passing through $p$. That is, in local coordinates your directional derivative at point $p$ along (in the tangent direction of) a curve $\gamma$ passing through it ($\gamma(t_0)=p$) is given by (using the chain rule for differentiation with the local coordinates for $\gamma (t)$): $$\frac{d f(\gamma(t))}{dt}\bigg|_{t_0}= \sum_{i=1}^n\frac{\partial f}{\partial x^i}\bigg|_p\frac{d x^i(\gamma (t))}{d t}\bigg|_{t_0}=\sum_{i=1}^n X^i\vert_p\frac{\partial f}{\partial x^i}\bigg|_p.$$

This is how that equation must be understood: at every point $p$, different curves have different "tangent vectors" $X$ with local components $X^i\vert_p\in\mathbb R$ given by the parametric differentiation of the curve's local equations (so actually you only have to care about equivalence classes of curves having the same direction at each point); the directional derivative at any point of any function in direction $X$ is thus expressible as the operator: $$X=\sum_{i=1}^n X^i\frac{\partial}{\partial x^i},$$ for all possible components $X^i$ that are smooth scalar fields over the manifold. In this way you have attached a tangent space $T_pM\cong\mathbb R^n$ at every point, with canonical basis $\vec e_i=\frac{\partial}{\partial x^i}=:\partial_i$ for every local chart of coordinates $x^i$.
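The chain-rule identity above — the derivative of $f$ along a curve equals $\sum_i X^i\,\partial f/\partial x^i$ with $X^i = dx^i(\gamma(t))/dt$ — can be checked symbolically. A small sketch with sympy, using a sample function and curve chosen purely for illustration:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# A sample smooth function f(x, y) and a sample curve gamma(t) = (cos t, sin t).
f = x**2 * y + sp.sin(y)
gamma = (sp.cos(t), sp.sin(t))

# Left-hand side: d/dt of f(gamma(t)).
lhs = sp.diff(f.subs({x: gamma[0], y: gamma[1]}), t)

# Right-hand side: sum_i X^i * (df/dx^i) evaluated along the curve,
# where X^i = d gamma^i / dt are the tangent-vector components.
X = [sp.diff(c, t) for c in gamma]
rhs = sum(Xi * sp.diff(f, v).subs({x: gamma[0], y: gamma[1]})
          for Xi, v in zip(X, (x, y)))

assert sp.simplify(lhs - rhs) == 0
```

The assertion holds for any smooth $f$ and $\gamma$; only the components $X^i$ depend on the curve, which is exactly why tangent vectors can be identified with the operator $X=\sum_i X^i\,\partial_i$.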
This seems to be far from the visual "arrow"-like notion of vectors and tangent vectors as seen in surfaces and submanifolds of $\mathbb R^n$, but it can be proved to be equivalent to the definition of the geometric tangent space by immersing your manifold in $\mathbb R^n$ (which can always be done by Whitney's embedding theorem) and restricting your "arrow"-space at every point to the "arrow"-subspace of vectors tangent to $M$ as a submanifold, in the same way that you can think of the tangent planes of a surface in space. Besides, this is confirmed by the transformation of components: if two charts $x$ and $y$ contain the point $p$, then their coordinate vectors transform with the Jacobian of the coordinate transformation $x\mapsto y$: $$\frac{\partial}{\partial y^i}=\sum_{j=1}^n\frac{\partial x^j}{\partial y^i}\frac{\partial}{\partial x^j},$$ which gives the old-fashioned transformation rule for vectors (and tensors) as defined in theoretical physics.

Now it is an easy exercise to see that, if a change of basis in $V$ from $\vec e_i$ to $\vec e'_j$ is given by the invertible matrix $\Lambda^i_{j'}$ (which is always the case), then the corresponding dual bases are related by the inverse transformation $\Lambda^{j'}_i:=(\Lambda^i_{j'})^{-1}$: $$\vec e'_j = \sum_{i=1}^n \Lambda^i_{j'}\,\vec e_i\Rightarrow \tilde e'^j = \sum_{i=1}^n (\Lambda^i_{j'})^{-1}\,\tilde e^i=:\sum_{i=1}^n \Lambda^{j'}_i\,\tilde e^i,\;\text{ where }\sum_i \Lambda^{k'}_i\Lambda^i_{j'}=\delta^{k'}_{j'}.$$

Thus, in our manifold, the cotangent space is defined to be $T_p^*M$ with coordinate basis given by the dual functionals $\omega_{x}^i (\partial/\partial x^j)=\delta^i_j$, $\omega_{y}^i (\partial/\partial y^j)=\delta^i_j$.
By the transformation law of tangent vectors and the discussion above on the dual transformation, we must have that tangent covectors transform with the inverse of the previously mentioned Jacobian: $$\omega_y^i=\sum_{j=1}^n\frac{\partial y^i}{\partial x^j}\omega_x^j.$$ But this is precisely the transformation rule for differentials by the chain rule! $$dy^i=\sum_{j=1}^n\frac{\partial y^i}{\partial x^j}dx^j.$$ Therefore it is conventional to use the notation $\partial/\partial x^i$ and $dx^i$ for the tangent vector and covector coordinate bases in a chart $x:M\rightarrow\mathbb R^n$.

Now, from the previous point of view the $dx^i$ are regarded as the classical differentials, just with the new perspective of being the functional duals of the differential operators $\partial/\partial x^i$. To make more sense of this, one has to turn to differential forms, which are our $k$-forms on $M$. A 1-form $\omega\in T_p^*M=\Omega_p^1(M)$ is then just $$\omega = \sum_{i=1}^n \omega_i\, dx^i,$$ with the $\omega_i(x)$ varying with $p$, smooth scalar fields over the manifold. It is standard to consider $\Omega^0(M)$ to be the space of smooth scalar fields. After defining wedge alternating products we get the $k$-form bases $dx^i\wedge dx^j\in\Omega^2(M), dx^i\wedge dx^j\wedge dx^k\in\Omega^3(M),..., dx^1\wedge\cdots\wedge dx^n\in\Omega^n(M)$ (in fact not all combinations of indices for each order are independent because of the antisymmetry, so the bases have fewer elements than the set of products).
All these "cotensor" linear spaces are nicely put together into a ring with that wedge $\wedge$ alternating product, and a nice differentiation operation can be defined for such objects: the exterior derivative $\mathbf d:\Omega^k(M)\rightarrow\Omega^{k+1}(M)$ given by $$\mathbf d\omega^{(k)}:=\sum_{i_1<\cdots<i_k}\sum_{i_0=1}^n\frac{\partial\omega_{i_1,...,i_k}}{\partial x^{i_0}}\,dx^{i_0}\wedge dx^{i_1}\wedge\cdots\wedge dx^{i_k}.$$ This differential operation is a generalization of, and reduces to, the usual vector calculus operators $\operatorname{grad},\,\operatorname{curl},\,\operatorname{div}$ in $\mathbb R^n$.

Thus, we can apply $\mathbf d$ to smooth scalar fields $f\in\Omega^0(M)$ so that $\mathbf df=\sum_i\partial_i f\, dx^i\in\Omega^1(M)$ (note that not all covectors $\omega^{(1)}$ come from some $\mathbf d f$; a necessary condition for this "exactness of forms", which is also sufficient on contractible domains, is that $\partial_{i}\omega_j=\partial_{j}\omega_i$ for any pair of components; this is the beginning of de Rham's cohomology). In particular the coordinate functions $x^i$ of any chart satisfy $$\mathbf d x^i=\sum_{j=1}^n\frac{\partial x^i}{\partial x^j}dx^j=\sum_{j=1}^n\delta^i_jdx^j= dx^i.$$

THIS final equality establishes the correspondence between the infinitesimal differentials $dx^i$ and the exterior derivatives $\mathbf dx^i$. Since we could have written any other symbol for the basis of $\Omega_p^1(M)$, it is clear that the $\mathbf dx^i$ are a basis for the dual tangent space. In practice, notation is reduced to $\mathbf d=d$ as we are working with isomorphic objects at the level of their linear algebra. All this is the reason why $\mathbf dx^i(\partial_j)=\delta^i_j$: once it is proved that the $\mathbf dx^i$ form a basis of $T_p^*M$, this is straightforward without resorting to the component transformation laws.
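The exactness criterion just mentioned, $\partial_i\omega_j=\partial_j\omega_i$ for $\omega=\mathbf d f$, is simply the symmetry of mixed partial derivatives. A quick symbolic check with sympy (the particular $f$ below is an arbitrary illustrative choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * y + x * sp.cos(y)      # any smooth scalar field works here

# Components of the exact 1-form omega = df = f_x dx + f_y dy.
omega = [sp.diff(f, x), sp.diff(f, y)]

# Closedness of an exact form: the mixed partials agree (Schwarz's theorem).
assert sp.simplify(sp.diff(omega[0], y) - sp.diff(omega[1], x)) == 0

# A 1-form that is NOT closed fails the test, e.g. omega' = -y dx + x dy:
assert sp.diff(-y, y) - sp.diff(x, x) != 0   # gives -1 - 1 = -2
```

The second form is the classic example of a covector field that cannot be written as $\mathbf d f$ on any neighbourhood.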
On the other hand, one could start after the definition of tangent spaces as above, with coordinate basis $\partial/\partial x^i$, and dualize to its covector basis $\tilde e^i$ such that $\tilde e^i(\partial_j)=\delta^i_j$; after that, one defines wedge products and exterior derivatives as usual from the cotangent spaces; then it is a theorem that for any tangent vector field $X$ and function $f$ on $M$ $$X(f)=\sum_{i=1}^n X^i\frac{\partial f}{\partial x^i}=\mathbf d f(X),$$ so in particular we get as a corollary that our original dual coordinate basis is in fact the exterior differential of the coordinate functions: $$(\vec e_i)^*=\tilde e^i=\mathbf d x^i\in\Omega^1(M):=T^*(M).$$ (That is true at any point for any coordinate basis from the possible charts, so it is true as covector fields over the manifold.) In particular the evaluation of the covectors $\mathbf d x^i$ on infinitesimal vectors $\Delta x^j\partial_j$ is $\mathbf d x^i(\Delta x^j\partial_j)=\sum_{j=1}^n\delta^i_j\Delta x^j$, so when $\Delta x^i\rightarrow 0$ we can see the infinitesimal differentials as the evaluations of infinitesimal vectors by coordinate covectors.

MEANING, USAGE & APPLICATIONS: Covectors are the essential structure for differential forms in differential topology/geometry, and most of the important developments in those fields are formulated in terms of them or use them in one way or another.
They are central ingredients of major topics such as: linear dependence of vectors, determinants and hyper-volumes, orientation, integration of forms (Stokes' theorem generalizing the fundamental theorem of calculus), singular homology and de Rham cohomology groups (Poincaré's lemma, de Rham's theorem, Euler characteristics and applications as invariants for algebraic topology), the exterior covariant derivative, connection and curvature of vector and principal bundles, characteristic classes, Laplacian operators, harmonic functions & the Hodge decomposition theorem, index theorems (Chern-Gauß-Bonnet, Hirzebruch-Riemann-Roch, Atiyah-Singer...) and close relations to modern topological invariants (Donaldson-Thomas, Gromov-Witten, Seiberg-Witten, soliton equations...).

In particular, from a mathematical physics point of view, differential forms (covectors and tensors in general) are fundamental entities for the formulation of physical theories in geometric terms. For example, Maxwell's equations of electromagnetism are just two equations of differential forms in Minkowski spacetime: $$\mathbf d F=0,\;\;\star\mathbf d\star F=J,$$ where $F$ is a $2$-form (an antisymmetric 4x4 matrix) whose components are the electric and magnetic field components $E_i,\,B_j$, $J$ is a spacetime vector of charge-current densities, and $\star$ is the Hodge star operator (which depends on the metric of the manifold, so it requires additional structure!). In fact the first equation is just the differential-geometric formulation of the classical Gauß law for the magnetic field and Faraday's induction law. The other equation is the dynamic Gauß law for the electric field and Ampère's circuit law. The continuity law of conservation of charge becomes just $\mathbf d\star J=0$.
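The first Maxwell equation $\mathbf d F=0$ can be verified componentwise: writing $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ for a potential $A$ (as in the standard formulation), the cyclic sum $\partial_\lambda F_{\mu\nu}+\partial_\mu F_{\nu\lambda}+\partial_\nu F_{\lambda\mu}$ vanishes purely by symmetry of mixed partials, i.e. $\mathbf d(\mathbf d A)=0$. A sympy sketch with generic (unspecified) potential components:

```python
import itertools
import sympy as sp

# Spacetime coordinates and a generic potential 1-form A = A_mu dx^mu.
X = sp.symbols('t x y z')
A = [sp.Function(f'A{m}')(*X) for m in range(4)]

def F(m, n):
    """Component F_{mn} of F = dA, written out: d_m A_n - d_n A_m."""
    return sp.diff(A[n], X[m]) - sp.diff(A[m], X[n])

# dF = 0 componentwise: the cyclic sum of derivatives vanishes identically.
for l, m, n in itertools.combinations(range(4), 3):
    bianchi = (sp.diff(F(m, n), X[l]) + sp.diff(F(n, l), X[m])
               + sp.diff(F(l, m), X[n]))
    assert sp.simplify(bianchi) == 0
```

No physics enters the computation: the cancellation is the identity $\mathbf{d\circ d}=0$, which is why $\mathbf d F=0$ is automatic once $F=\mathbf d A$.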
Also, by Poincaré's lemma, $F$ can always be solved as $F=\mathbf d A$ with $A$ a covector in spacetime called the electromagnetic potential (in fact, in electrical engineering and telecommunications they always solve the equations by these potentials); since exterior differentiation satisfies $\mathbf{d\circ d}=0$, the potentials are underdetermined by $A'=A+\mathbf d\phi$, which is precisely the simplest "gauge invariance", which pervades the whole field of physics. In theoretical physics the electromagnetic force comes from a $U(1)$ principal bundle over the spacetime; generalizing this to the Lie groups $SU(2)$ and $SU(3)$, one arrives at the mathematical models for the weak and strong nuclear forces (electroweak theory and chromodynamics). The Faraday $2$-form $F$ is actually the local curvature of the gauge connection for those fiber bundles, and similarly for the other forces. This is the framework of gauge theory working on arbitrary manifolds for your spacetime. The only other force remaining is gravity, and again general relativity can be written in the Cartan formalism as the curvature of a Lorentz spin connection over an $SO(1,3)$ gauge bundle, or equivalently as the (Riemannian) curvature of the tangent space bundle. -

Great answer! But I have a few doubts that have been nagging me for many months, regarding the difference between covariance and contravariance. I understand it as follows: covariant components of a tensor transform in the same way as the basis, and contravariant components transform in the inverse way to the basis. So in your notation, your dual basis has a covariant index, so shouldn't it transform by $\Lambda_{ij}$?
How do you get $\vec e'_j = \sum_{i=1}^n \Lambda_{ij}\,\vec e_i\Rightarrow \tilde e'_j = \sum_{i=1}^n (\Lambda_{ij})^{-1}\,\tilde e_i=:\sum_{i=1}^n \Lambda^i_j\,\tilde e_i$ – ramanujan_dirac Dec 10 '12 at 4:16

Also, you say that the tangent co-vectors transform with the inverse of the Jacobian, but in your answer you have taken it with the Jacobian (sorry if I am going hopelessly wrong): $\omega_y^i=\sum_{j=1}^n\frac{\partial y^i}{\partial x^j}\omega_x^j$. To be brief, I am extremely confused by how the components of each of the tangent and the cotangent (or dual), or the contravariant and covariant components, transform, essentially with respect to their respective bases. – ramanujan_dirac Dec 10 '12 at 4:20

One has to be careful because the Jacobians and their inverses appear in the vector and covector transformations in the opposite way than in the contravariant and covariant components. First of all, one can call the "Jacobian" either of the two, since there are two changes of coordinates: $x^j(y)$ and $y^i(x)$. If I call "the Jacobian" the matrix $\partial x^j/\partial y^i$, then $\partial y^i/\partial x^j$ is its inverse, but I could have called "the inverse" the former and "the Jacobian" the latter; that is relative to what you consider your initial coordinates, since both are Jacobians!
– Javier Álvarez Dec 10 '12 at 7:57 By the last equation of the above deduction, the transformation matrix $M$ of the dual vectors (covectors) must be the inverse of the corresponding one for vectors $\Lambda$ which by convention of notation is denoted with the same letter but with different position of sub/super indices or commas. In the case of manifold coordinate basis is less confusing as the matrices are Jacobians so instead of distinguishing $i', i$ we keep track of where are the $y$ and $x$: $\frac{\partial}{\partial y^i}=\sum_j\frac{\partial x^j}{\partial y^i}\frac{\partial}{\partial x_j}$. – Javier Álvarez Dec 10 '12 at 8:07 show 3 more comments It's always been easier for me to think of "covectors" (dual vectors, cotangent vectors, etc.) as different basis of vectors (potentially for a different space than "usual" vectors) because the linear algebra properties are still basically the same. Yeah, a covector is an object that "takes" a vector and returns a number, but you could define a vector as an object that "takes" a covector and returns a number! (And saying that this is all vectors and covectors can do--return numbers through the inner product--seems quite an understatement of what they can be used for.) Furthermore, in a space with a metric, covectors can be constructed easily by using the tangent basis vectors. Let $e_1, e_2, e_3$ be a tangent-space basis for a 3d real vector space. The basis covectors are then $$\begin{align*} e^1 &= \frac{e_2 \wedge e_3}{e_1 \wedge e_2 \wedge e_3} \\ e^2 &= \frac{e_3 \wedge e_1}{e_1 \wedge e_2 \wedge e_3} \\ e^3 &= \frac{e_1 \wedge e_2}{e_1 \wedge e_2 \wedge e_3} \end{align*}$$ (These wedges simply form higher dimensional objects than vectors but that are built from vectors. $e_2 \wedge e_3$ is, for example, an oriented, planar object corresponding to the parallelogram spanned by $e_2, e_3$. 
Thus it is related to the cross product in 3d, but the wedge product is useful in other dimensions, too, while the cross product does not generalize outside of 3 and 7 dimensions.) Basically, the basis covectors are formed by finding the vectors orthogonal to hypersurfaces spanned by all other basis vectors. Covectors are useful in large part because they enter expressions through the vector derivative $\nabla$. We generally define $\nabla = e^i \partial_i$ over all $i$ to span the space. The notation that the basis covectors are $dx_1, dx_2,\ldots$ is one I too find a bit confusing; in geometric calculus, they would either signify scalars or vectors! But I think I understand that to mean that $\nabla x^1$ extracts the $e^1$ basis covector from the vector derivative, and they are just using $d$ to mean something that might otherwise be denoted $\nabla$ (which is not uncommon, as has been pointed out, in differential forms). $$\nabla x^1 = e^1 \frac{\partial}{\partial x^1} x^1 = e^1$$ if that makes it more clear. - There are at least two layers of ideas here. First, as you say, the "dual space" $V^*$ to a real vector space is (by definition) the collection of linear maps/functionals $V\rightarrow \mathbb R$, with or without picking a basis. Nowadays, $V^*$ would more often be called simply the "dual space", rather than "covectors". Next, the notion of "tangent space" to a smooth manifold, such as $\mathbb R^n$ itself, at a point, is (intuitively) the vector space of directional derivative operators (of smooth functions) at that point. So, on $\mathbb R^n$, at $0$ (or at any point, actually), $\{\partial/\partial x_1, \ldots, \partial/\partial x_n\}$ forms a basis for that vector space of directional-derivative operators. One notation (a wacky historical artifact, as you observe!) for the dual space to the tangent space, in this example, is $dx_1,\ldots, dx_n$. That is, literally, $dx_i(\partial/\partial x_j)$ is $1$ for $i=j$ and $0$ otherwise. 
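In 3d the wedge-quotient formulas above reduce to cross products divided by the scalar triple product $e_1\cdot(e_2\times e_3)$, which makes the dual (reciprocal) basis easy to compute numerically. A numpy sketch with an arbitrary non-orthonormal basis (the particular vectors are just an example):

```python
import numpy as np

# An arbitrary basis of R^3; it only needs to be linearly independent.
e1, e2, e3 = np.array([2.0, 0, 0]), np.array([1.0, 1, 0]), np.array([0, 1.0, 3])

vol = np.dot(e1, np.cross(e2, e3))   # scalar triple product, i.e. e1∧e2∧e3

# Reciprocal (dual) basis: each e^i is orthogonal to the other two e_j.
E1 = np.cross(e2, e3) / vol
E2 = np.cross(e3, e1) / vol
E3 = np.cross(e1, e2) / vol

# Biorthogonality: e^i(e_j) = delta^i_j.
G = np.array([[np.dot(Ei, ej) for ej in (e1, e2, e3)] for Ei in (E1, E2, E3)])
assert np.allclose(G, np.eye(3))
```

This is exactly the biorthogonality the question's edit asks about: the dual basis pairs to the identity matrix against the original basis, and it coincides with the original basis only when that basis is orthonormal.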
One way to help avoid some confusions is to not be hasty to identify all $n$-dimensional real vector spaces with $\mathbb R^n$, so that you don't feel an obligation to evaluate $dx_1(a,b)$... even though, if you view $(a,b)$ as $a\cdot \partial/\partial x_1 + b\cdot \partial/\partial x_2$, then $dx_1$ evaluated on it gives $a$. As much as anything, this is just a notational set-up for calculus-on-manifolds that includes various historical artifacts that are somewhat incompatible with other notational conventions, I agree! For example, just when you get used to the idea that in $dy/dx$ the $dy$ and the $dx$ cannot be separated, that seems to have been done in writing a differential form $f(x,y)dx+g(x,y)dy$. And, in fact, the latter notation descends from a tradition that was not fussing about whether there were or weren't infinitesimals and all that, and would perform such symbolic manipulations, for one thing. That is, in my opinion, even on $\mathbb R^n$, the notation is best understood, for what it is meant to suggest, not in a contemporary context, but imagining ourselves in 1890 or so. This is a separate issue from those about proving things, of course, but may be helpful in interpreting the intention of the notation. -
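Concretely, for the question's $dx([1,2])$: representing covectors as row vectors and vectors as columns (as in the first answer), $dx_1$ is the row $(1,0)$ and evaluation is just a matrix product. A minimal numpy illustration:

```python
import numpy as np

# Coordinate covectors on R^2 as row vectors: dx1 picks out the first
# component of a vector, dx2 the second.
dx1 = np.array([1.0, 0.0])
dx2 = np.array([0.0, 1.0])

v = np.array([1.0, 2.0])   # the vector [1, 2] from the question

assert dx1 @ v == 1.0      # dx1([1, 2]) = 1
assert dx2 @ v == 2.0      # dx2([1, 2]) = 2

# A general covector, e.g. omega = 3 dx1 - dx2, acts linearly:
omega = 3 * dx1 - dx2
assert omega @ v == 1.0    # 3*1 - 1*2 = 1
```

So "$dx$ acting on $[1,2]$" is nothing mysterious: it simply reads off the first coordinate.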
http://mathoverflow.net/questions/25229/why-is-continuity-required-for-sheaf-theoretic-definitions-of-a-structure-on-a-sp
## Why is continuity required for sheaf-theoretic definitions of a structure on a space

For example, take differentiability, analyticity, and algebraicity (of a function). All (more or less) imply continuity. So when we define a differentiable function on $\mathbb R^n$, an analytic function on $\mathbb C^n$, or a regular map on an affine space, we do not explicitly require that the functions are continuous. It follows automatically from the stronger condition. But when I look at the definitions in books of a global structure using sheaf theory, for a global definition of a morphism, i.e. on a differentiable manifold or an analytic space, or an abstract algebraic variety, the definition of a morphism requires a priori that the map be continuous, and then one additionally requires a morphism of sheaves of algebras (of the suitable type of structure sheaves, depending on the local model used). Why is this so? Is it something done for fancy's sake, or is there a real need for the extra continuity assumption? I mean, could things go wrong if this assumption is dropped?

- 9 Sheaves are defined on open sets. So in order to have a well-behaved push-forward (which is needed before the morphism of sheaves), you need open sets to pull back to open sets, that is, continuity. – Andrea Ferretti May 19 2010 at 14:01

## 3 Answers

As Andrea hints, if you start with sheaves then you need continuity to even begin talking about morphisms of sheaves. However, if you're interested in just defining, say, a smooth map between manifolds, then you can simply write "$f \colon M \to N$ is smooth if, whenever $c \colon \mathbb{R} \to M$ is a smooth curve, then $f \circ c \colon \mathbb{R} \to N$ is smooth". No assumption about continuity is needed there. Indeed, once one gets to more exotic spaces, continuity becomes a hassle and is best left to one side.
For example, the evaluation map $E \times E^* \to \mathbb{R}$ is smooth for any locally convex topological vector space $E$, but is only continuous for $E$ a normed vector space. -

Let $M,N$ be two manifolds and $f : M \to N$ a (set-theoretic) map. Then there are (at least) two definitions for $f$ to be smooth:

(1) For every ball $B \subseteq N$ the preimage $f^{-1}(B)$ can be covered with balls $C \subseteq M$ such that the induced maps $C \to B$ are smooth.

(2) $f$ is continuous, and for every ball $B \subseteq N$ and every ball $C \subseteq f^{-1}(B)$ the induced map $C \to B$ is smooth.

Note that in (1) it follows automatically that $f$ is continuous. However, the second statement in (2) does not imply continuity, because it is possible that $f^{-1}(B)$ contains no ball at all, or just not enough. The same is true for the other subsheaves of continuous functions mentioned in the question. -

The simple answer is that you are describing your "spaces" (manifolds, varieties, etc.) as locally ringed spaces, in particular objects in the category of topological spaces + more data. The arrows in the category of topological spaces are precisely continuous maps, so this is where continuity comes in. To be more specific, let $C$ be any category, and let $S:C \to Cat$ be any pseudo-functor (weak 2-functor). "Pretend" that $S$ is the assignment to each object $c \in C_0$ of its category of "sheaves of algebraic objects", e.g. if $C$ is topological spaces you could let $S$ be $S:X \mapsto Sh_{rings}(X)$, which associates to a space $X$ the category of sheaves of local rings over $X$. For any such $S$, one can take its Grothendieck construction, which yields a category fibred over $C$, $\int_C{S} \to C$.
The objects of $\int_C{S}$ are pairs $(c,s)$ with $c \in C_0$ and $s \in S(c)_0$, and the maps $(c,s) \to (d,t)$ are pairs $(f,g)$ such that $f:c \to d$ in $C$ and $g:f^*(t) \to s$ in $S(c)$, and the functor $\int_C{S} \to C$ sends $(c,s)$ to $c$ and $(f,g)$ to $f$. If $S$ and $C$ are taken to be $Sh_{rings}$ and $Top$, then you get exactly the category of locally ringed spaces, for example. By construction, the "underlying morphism" of a morphism $(c,s) \to (d,t)$ is a morphism $f:c \to d$. If $C$ is $Top$, then of course this means it is a morphism in $Top$, hence continuous. - $S$ should be contravariant. – David Carchedi May 21 2010 at 14:53
http://physics.stackexchange.com/questions/tagged/batteries
# Tagged Questions

- **What is a "gravitational cell"?** — I am not a physicist, and I don't understand the details of electromagnetism. Anyhow, I was looking for how batteries work in Google. So, I came across this article: "How batteries work: A ...
- **Different batteries connected in parallel** — If we have 2 batteries, one of emf x and the other of emf y, and we connect them in series we get an effective emf of x+y. But what if we connect them in parallel, how do we calculate the emf now?
- **Simple Ohm's law on a battery? Paradox or conceptual error?** — Suppose we have a regular pencil battery which supplies DC voltage $V$. Say we take copper wire and connect the ends of the battery to an $R$ ohms resistance. Then Ohm's law tells us the current in ...
- **Capacitor charging and discharging when connected to the ground** — When we charge a capacitor using a battery and then remove the battery, the plates of the capacitor become charged. One holds positive charge and the other gets an equal negative charge. Now ...
- **How can a circuit function with two negative battery terminals facing each other?** — Here is a drawing of the circuit that is confusing me: I don't quite understand how batteries work in this diagram. If a battery has a negative and positive terminal, there must be a barrier ...
- **Static discharge out of a 3 volt button battery** — I am starting to get interested in static electricity. I have found this, which is very cool, but I do not know how I can control the output voltage. ...
- **Why isn't this capacitor charging?** — Let's say you have a parallel plate capacitor and you connect one plate to the positive terminal of a battery and the other plate to the negative end. So this is like a static situation, you have a ...
- **Can I charge a capacitor using 2 batteries?** — 1 capacitor, 2 separate batteries (Battery A and Battery B). Connect A+ to one side of the capacitor and B- to the other side of the capacitor. A and B are not connected; there is no closed circuit. ...
- **Path of an electron through an electric circuit** — When a potential difference is applied across a conductor, and if an electron moves from the negative terminal of the battery and reaches the positive terminal, then I want to know if the electron ...
- **Can I use AA batteries to charge an iPhone?** — Technically, 4 AA batteries will give out ~5V, which is equivalent to USB. Can you use 4 AA batteries to charge an iPhone, without messing up the Li-ion cells used in the iPhone? So my question really ...
- **How can you calculate (or convert) the Wh of a capacitor whose energy is given in farads?** — When trying to compare the energy in a battery to the energy in a capacitor, the units don't match up. How can one compare a battery whose Ah are 10 and voltage is 3 (for a total of 30 Wh) to a ...
- **What's the difference between capacitors, ultra-capacitors and batteries?** — Capacitors are known to hold and release energy very quickly, unlike the slower release that batteries exhibit. If one were to bunch many (1000's of) capacitors together, could they function as a ...
- **If we charge a capacitor, can we discharge it into a battery?** — I have read that we can charge a capacitor using a battery, but can the vice versa happen? My project needs to show a battery being charged through a fully charged capacitor.
- **Calculate the UPS capacity in amp-hours** — I am trying to find out the UPS capacity in amp-hours for my HP UPS system. I've already done some calculations based on the UPS information from the HP Power Manager software. Below are my ...
- **What is the volume of gases liberated when a battery is charged?** — Apparently hydrogen/oxygen are liberated when a lead-acid battery is charged. If true, how does one calculate the expected volume & rate at which each gas is liberated when a battery is charged? ...
- **How many grams of explosive are equivalent to the energy in a battery?** — In the movie Terminator 3: Rise of the Machines, Arnie's fuel cells were twice shown to be extremely devastating when their stored energy was released. In real life, how many grams of, say, gunpowder ...
- **Does the 'mAh' rating of a battery have something to do with its power?** — I'm curious about the 'mAh' of a battery: how can this impact the power of the battery? I've done some research on the internet, and most of the articles I found explain the 'amount of charge ...
- **Can a battery charger be too powerful for a rechargeable battery?** — I got the impression that a regular iPhone charger can charge the iPhone and the iPhone won't become too hot while charging, and the charging time is standard, but if using the 10W iPad charger to ...
- **Battery fully loaded, pull out the chord?** — I recently bought a new phone (Samsung Galaxy 3) and when the battery is fully loaded, it says something like "Battery fully loaded, pull out the chord". Is this a typo from Samsung, or would there ...
- **Does the mass of a battery change when charged/discharged?** — ... and if so, how much? Is it possible to detect it, or is it beyond any measurement? I'd say there are two possible scenarios (depending on the battery type) and both seem interesting: The battery ...
- **What is the difference between a battery and a charged capacitor?** — What is the difference between a battery and a charged capacitor? I can see a lot of similarities between a capacitor and a battery. In both, charges are separated, and when not connected in a circuit ...
1answer 656 views ### Short-circuiting an alkaline battery I'm not doing anything related to physics, but I'm just curious: what really happens when I short-circuit an alkaline battery? Some articles on the net show that fire/explosion can happen when ... 1answer 80 views ### Does recharging a battery at a lower temperature lower its internal resistance? Does temperature affect the internal resistance of batteries? And does charging a "frozen" battery allow it to charge faster than a warm or room temperature battery? 2answers 89 views ### Does using batteries in a series raise the overall Wh's of the batteries used? Why/why not? If one has 2 batteries of 6 Volts each with 500 mAh each, then separately the batteries should provide 6 Volts * 500 mAh = 3 Wh each, or 6 Wh total. However, would it be the case that if someone were ... 1answer 190 views ### Lifetime of a battery If I directly connect the two terminals of a 3V battery (negative to positive) using copper wire, would it lose all its charge faster compared to another 3V battery that is used to light a 1.5V bulb? 3answers 9k views ### Charging a 12V 150Ah battery I want to charge a 12V battery of 150Ah with a solar panel. The solar panel spec is 12V, 25 Watt. Can anyone please show how to calculate how much time it will take to charge the battery? ... 2answers 130 views ### Mechanical work to required battery power I have a very practical question where I've calculated the mechanical work needed by a simple mechanical system by solving the line integral $W = \int_C \ F \ dx$. However, since I have a black spot ...
http://crypto.stackexchange.com/questions/2109/why-does-rsa-give-better-security-on-longer-messages/2110
# Why does RSA give better security on longer messages? I am trying to understand the notion of RSA security. Choosing a public exponent where $e = 3$ facilitates the calculations, considering that it is secure if the plaintext or message is long. If the message is short, it affects security, but why? Does that relate to the $2^n$ possibilities: if the message is short, is the probability of guessing the message high? - 2 Typical RSA messages consist of a random symmetric key, so this concept doesn't really apply in practice. We also use special padding modes instead of plain RSA, which prevents many attacks (AFAIK including yours). – CodesInChaos Mar 17 '12 at 19:08 ## 1 Answer This only applies to "textbook" RSA, i.e. plain modular exponentiation with the public or secret exponent. If you have an $n$-bit modulus and use a message $x$ shorter than $n/3$ bits, the modular part of modular exponentiation doesn't come into play when calculating the ciphertext as $c = x^3 \bmod n$. The effect is that you can simply calculate the (integer) cube root of the ciphertext $c$ to extract the plaintext, and don't have to deal with the key at all. I think similar attacks are possible if the message is only slightly longer than $n/3$ bits. This is one of the reasons that you normally don't use textbook RSA, but standard RSA, which uses non-zero padding to fill the message up to the size of the modulus, so the modular reduction actually comes into effect. Another reason is that you need some randomness in the padding, so that encrypting the same message twice with the same public key doesn't always give the same result. Also, in practice you are not encrypting the message directly, but encrypting a key for a symmetric algorithm, which is then used to encrypt the actual message, in a hybrid scheme.
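To make the cube-root attack concrete, here is a minimal Python sketch. The modulus, the message, and the `icbrt` helper are all illustrative choices (not from any standard or library); the only thing that matters is that $m^3 < n$, so the attacker never needs the key.

```python
# Toy demonstration of the cube-root attack on textbook RSA with e = 3.
# The modulus below is an arbitrary stand-in, not a real RSA key:
# the attack only needs m**3 < n, so the key itself never matters.

def icbrt(x):
    """Integer cube root of x >= 0 by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = (1 << 2048) - 159                        # stand-in 2048-bit public modulus
m = int.from_bytes(b"short secret", "big")   # ~96-bit message, far below n**(1/3)

c = pow(m, e, n)   # textbook RSA: c = m^3 mod n
assert m ** e < n  # the reduction mod n never kicked in...
recovered = icbrt(c)  # ...so an integer cube root undoes the "encryption"
plaintext = recovered.to_bytes((recovered.bit_length() + 7) // 8, "big")
print(plaintext)   # b'short secret'
```

With proper randomized padding (e.g. OAEP), the padded message fills the modulus, the reduction mod $n$ actually happens, and this shortcut disappears.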
http://math.stackexchange.com/questions/tagged/functional-equations?sort=faq&pagesize=15
# Tagged Questions The name "functional equation" is used for problems where the goal is to find all functions satisfying the given equation (and maybe some other conditions). So in this case, solving the equation means finding all functions fulfilling the equation. (This is different from the more common use of the ... 3answers 1k views ### Is there a name for a function with the exponential property $f(x+y)=f(x) \times f(y)$? I was wondering if there is a name for a function that satisfies the conditions $f:\mathbb{R} \to \mathbb{R}$ and $f(x+y)=f(x) \times f(y)$? Thanks and regards! 1answer 379 views ### continuous functions on $\mathbb R$ such that $g(x+y)=g(x)g(y)$ Let $g$ be a function from $\mathbb R$ to $\mathbb R$ which is not identically zero and which satisfies the equation $g(x+y)=g(x)g(y)$ for $x$, $y$ in $\mathbb R$. $g(0)=1$. If $a=g(1)$, then $a>0$ ... 2answers 2k views ### If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some $t$ Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$. I am not very experienced with proofs. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + \dots$ 3answers 516 views ### Classifying Functions of the form $f(x+y)=f(x)f(y)$ [duplicate] Possible Duplicate: Is there a name for such kind of function? The question is: is there a nice characterization of all nonnegative functions $f:\mathbb{R}\rightarrow \mathbb{R}$ such that ... 3answers 744 views ### Proving that an additive function $f$ is continuous if it is continuous at a single point Suppose that $f$ is continuous at $x_0$ and $f$ satisfies $f(x)+f(y)=f(x+y)$. Then how can we prove that $f$ is continuous at $x$ for all $x$? I seem to have problems doing anything with it. Thanks in ...
2answers 323 views ### On sort-of-linear functions Background A function $f: \mathbb{R}^n \rightarrow \mathbb{R} \$ is linear if it satisfies $$(1)\;\; f(x+y) = f(x) + f(y) \ , \ and$$ $$(2)\;\; f(\alpha x) = \alpha f(x)$$ for all \$ x,y \in ... 1answer 409 views ### Solution(s) to $f(x + y) = f(x) + f(y)$ (and miscellaneous questions…) My lecturer was talking today (in the context of probability, more specifically Kolmogorov's axioms) about the additive property of functions, namely that: $$f(x+y) = f(x) + f(y)$$ I've been trying ... 2answers 417 views ### If $f\colon \mathbb{R} \to \mathbb{R}$ is such that $f (x + y) = f (x) f (y)$ and continuous at $0$, then continuous everywhere Prove that if $f\colon\mathbb{R}\to\mathbb{R}$ is such that $f(x+y)=f(x)f(y)$ for all $x,y$, and $f$ is continuous at $0$, then it is continuous everywhere. If there exists $c \in \mathbb{R}$ ... 3answers 455 views ### How to calculate $f(x)$ in $f(f(x)) = e^x$? How would I calculate the power series of $f(x)$ if $f(f(x)) = e^x$? Is there a faster-converging method than power series for fractional iteration/functional square roots? 2answers 572 views ### $f(x+1)=f(x)+1$ and $f(x^2)=f(x)^2$ A function that satisfies both $f(x+1)=f(x)+1$ and $f(x^2)=f(x)^2$ for all real $x$ is known to be the identity over $\mathbb Q$, but is it also the identity over $\mathbb R$? If not, can you provide ... 2answers 446 views ### Entire functions such that $f(z^{2})=f(z)^{2}$ I'm having trouble solving this one. Could you help me? Characterize the entire functions such that $f(z^{2})=f(z)^{2}$ for all $z\in \mathbb{C}$. Hint: Divide in the cases $f(0)=1$ and $f(0)=0$. ... 5answers 799 views ### Find polynomials such that $(x-16)p(2x)=16(x-1)p(x)$ Find all polynomials $p(x)$ such that for all $x$, we have $$(x-16)p(2x)=16(x-1)p(x)$$ I tried working out with replacing $x$ by $\frac{x}{2},\frac{x}{4},\cdots$, to have $p(2x) \to p(0)$ but then ... 
1answer 221 views ### Iterated polynomial problem Polynomial $P$ satisfies $P(n)>n$ for all positive integers $n$. Every positive integer $m$ is a factor of some number of the form $P(1),P(P(1)),P(P(P(1))),\ldots$. Prove that $P(x)=x+1$. 1answer 436 views ### Solution for exponential function's functional equation by using a definition of derivative let $f(0)=1$ and $f'(0)=1$. and $f(x+y)=f(x)f(y)$ for $x,y\in R$. How can I found $f(x)$ by using a definition of derivative? 2answers 281 views ### Graph of discontinuous linear function is dense $f:\mathbb{R}\rightarrow\mathbb{R}$ is a function such that for all $x,y$ in $\mathbb{R}$, $f(x+y)=f(x)+f(y)$. If $f$ is cont, then of course it has to be linear. But here $f$ is NOT cont. Then show ... 3answers 2k views ### The easy(?) part of IMO 2011 Problem 3 Let $f : \mathbb R \to \mathbb R$ be a real-valued function defined on the set of real numbers that satisfies $$f(x + y) \leq yf(x) + f(f(x))$$ for all real numbers $x$ and $y$. How can I prove that ... 3answers 419 views ### a continuous function satisfying $f(f(f(x)))=-x$ other than $f(x)=-x$ My question is about existence of a non-trivial solution of the functional equation $f(f(f(x)))=-x$ where $f$ is a continuous function defined on $\mathbb{R}$. Also, what about the general one ... 8answers 923 views ### How to find the function $f$ given $f(f(x)) = 2x$? I was wondered how to find the function in this equality: $f(f(x))=2x$. Also $f$ is continuous. I don't need the answer, how to find it is more important. 3answers 267 views ### Inverse function of $y=W(e^{ax+b})-W(e^{cx+d})+zx$ I have a simple question for which I am looking for a closed form expression (If there exits one). In other words, given: $$y=W(e^{ax+b})-W(e^{cx+d})+zx$$ where $W$ is the Lambert $W$ function and ... 
4answers 494 views ### Solving the functional equation $f(x+1) - f(x-1) = g(x)$ Given a function $g(x)$, is it possible to find a function $f(x)$ that satisfies $$f(x+1) - f(x-1) = g(x)$$ 4answers 244 views ### 3rd iterate of a continuous function equals identity function If $f: \mathbb{R} \to \mathbb{R}$ is continuous, and $\forall x \in \mathbb{R} :\;(f \circ f \circ f)(x) = x$, show that $f(x) = x$. The condition that $f$ is continuous on $\mathbb{R}$ is ... 2answers 182 views ### $f(z_1 z_2) = f(z_1) f(z_2)$ for $z_1,z_2\in \mathbb{C}$ then $f(z) = z^k$ for some $k$ Same as my previous question except domain is complex. I tried assuming that the function was analytic, so for $z_1=z_2=z$ , $f(z^2) = f(z)^2$ \sum_{n=0}^\infty a_n z^{2n}=\left(\sum_{n=0}^\infty ... 1answer 142 views ### A question concerning on the axiom of choice and Cauchy functional equation The Cauchy functional equation: $$f(x+y)=f(x)+f(y)$$ has solutions called 'additive functions'. If no conditions are imposed to $f$, there are infinitely many functions that satisfy the equation, ... 2answers 51 views ### Finding Value, Related To Functional Equation $f(x)$ is continuous for $\forall x \in R$ and $f(2x)-f(x)=x^{3}$ (1) $f(x)+f(-x)$ is constant ? (2) $f(0)=0$ ? I don't know how to use the continuity. especially for $f(0)=0$ ? 2answers 53 views ### What's the solution of the functional equation I need help with this: "Find all functions $f$, $g : \mathbb{Z} \rightarrow \mathbb{Z}$, with $g$ injective and such that: $$f(g(x)+y) = g(f(x)+y), \mbox{ for all } x, y \in \mathbb{Z}.$$ 1answer 171 views ### Functions $f$ satisfying $f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R}$. How to prove that the continuous functions $f$ on $\mathbb{R}$ satisfying $$f\circ f(x)=2f(x)-x,\forall x\in\mathbb{R},$$ are given by $$f(x)=x+a,a\in\mathbb{R}.$$ Any hints are welcome. Thanks. 
4answers 584 views ### Solving for the implicit function $f\left(f(x)y+\frac{x}{y}\right)=xyf\left(x^2+y^2\right)$ and $f(1)=1$ How can I find all functions $f:\mathbb{R}\to\mathbb{R}$ such that $f(1)=1$ and $$f\left(f(x)y+\frac{x}{y}\right)=xyf\left(x^2+y^2\right)$$ for all real numbers $x$ and $y$ with $y\neq0$? PS. This is ... 6answers 689 views ### Find all polynomials $P$ such that $P(x^2+1)=P(x)^2+1$ Find all polynomials $P$ such that $P(x^2+1)=P(x)^2+1$ 3answers 611 views ### Continuous function satisfying $f^{k}(x)=f(x^k)$ How does one set out to find all continuous functions $f:\mathbb{R} \to \mathbb{R}$ which satisfy $f^{k}(x)=f(x^k)$ , where $k \in \mathbb{N}$? Motivation: Is $\sin(n^k) ≠ (\sin n)^k$ in general? 3answers 3k views ### Prove that this function is bounded This is an exercise from Problems from the Book by Andreescu and Dospinescu. When it was posted on AoPS a year ago I spent several hours trying to solve it, but to no avail, so I am hoping someone ... 4answers 451 views ### Solving (quadratic) equations of iterated functions, such as $f(f(x))=f(x)+x$ In this thread, the question was to find a $f: \mathbb{R} \to \mathbb{R}$ such that $$f(f(x)) = f(x) + x$$ (which was revealed in the comments to be solved by $f(x) = \varphi x$ where $\varphi$ is ... 1answer 207 views ### Is there a real-valued function $f$ such that $f(f(x)) = -x$? Is there a function $f\colon \mathbb{R} \to\mathbb{R}$ such that $f(f(x)) = -x$ ? 1answer 270 views ### Riemann's thinking on symmetrizing the zeta functional equation In the translated version of Riemann's classic On the Number of Prime Numbers less than a Given Quantity, he quickly derives the zeta functional equation through contour integration essentially as ... 3answers 155 views ### $f(x^2) = 2f(x)$ and $f(x)$ continuous I ran into a problem recently where I obtained the following constraint on a function. $$f(x^2) = 2f(x) \,\,\,\, \forall x \geq 0$$ and the function $f(x)$ is continuous. 
Can we conclude that $f(x)$ ... 3answers 145 views ### Is there a non-constant function $f:\mathbb{R}^2 \to \mathbb{Z}/2\mathbb{Z}$ that sums to 0 on corners of squares? A problem in the 2009 Putnam asks about functions $f:\mathbb{R}^2 \to \mathbb{R}$ such that whenever $A,B,C,D$ are corners of some square we have $f(A)+f(B)+f(C)+f(D)=0$. Without spoiling the problem ... 4answers 402 views ### thoughts about $f(f(x))=e^x$ I was thinking, inspired by mathlinks, precisely from this post, if there exists a continuous real function $f:\mathbb R\to\mathbb R$ such that $$f(f(x))=e^x.$$ However I have not still been able to ... 2answers 318 views ### very elementary proof of Maxwell's theorem Maxwell's theorem (after James Clerk Maxwell) says that if a function $f(x_1,\ldots,x_n)$ of $n$ real variables is a product $f_1(x_1)\cdots f_n(x_n)$ and is rotation-invariant in the sense that the ... 2answers 124 views ### Functional Equation: a little tricky Find all functions $f:\mathbb{R} \rightarrow \mathbb{R}$ such that $f[f(x)^2+f(y)]=xf(x)+y$ for all real numbers $x$ and $y$. Clearly $f(x)=x$ is a solution, check by substitution. I'm at a loss as ... 2answers 248 views ### Evaluating $f(x) f(x/2) f(x/4) f(x/8) \cdots$ Let $f : \mathbb R \to \mathbb R$ be a given function with $\lvert f(x) \rvert \le 1$ and $f(0) = 1$. Is there a nice simplified expression for \begin{align}F(x) &= f(x) f(x/2) f(x/4) f(x/8) ... 1answer 93 views ### If $g(x) := \int_1^2 f(xt)dt \equiv 0$ then $f \equiv 0$ Let $f \colon \mathbb R \to \mathbb R$ be a continuous function. Let's define $$g(x) := \int_1^2 f(xt)dt.$$ Prove that $g \equiv 0 \Rightarrow f \equiv 0$. Well, I show you what I have ... 1answer 113 views ### Recurrence relations on a continuous domain While attempting to read Shannon's paper I came across the following (p. 3): suppose $N\colon \mathbb{R} \to \mathbb{R}$ is a function, which for some fixed (given) set of values \$t_1, t_2, \dots, ... 
3answers 320 views ### Finding an $f(x)$ that satisfies $f(f(x)) = 4 - 3x$ I need to find $f(f(x)) = 4 - 3x$ In other examples, such as $f(2)$, I can see that the result equates to $-2$ or $f(x^2)$ becomes $-3x^2 + 4$. Do I really just substitute $f(x)$ for $x$ and ... 2answers 841 views ### Converting polar equation to cartesian coordinate polar equation and back again? OK, so I have the following polar equation: $r = Θ/20$ And I would like to translate this a little to the right, and down from the polar origin. Now, I figure since I know cartesian coordinate ... 2answers 487 views ### Solving the functional Equation $f(f(x))=f(x)+x$ Let $f$ be continuous on $\mathbb{R}$. Then how to find all continuous functions satisfying $f(f(x))=f(x)+x$ 1answer 368 views ### Which trigonometric identities involve trigonometric functions? Once upon a time, when Wikipedia was only three-and-a-half years old and most people didn't know what it was, the article titled functional equation gave the identity \sin^2\theta+\cos^2\theta = 1 ... 2answers 518 views ### How to prove $f(x)=ax$ if $f(x+y)=f(x)+f(y)$ and $f$ is locally integrable Suppose $f(x)$ is integrable in any bounded interval on $\mathbb R$, and it satisfies the equation $f(x+y)=f(x)+f(y)$ on $\mathbb R$. How to prove $f(x)=ax$? 1answer 167 views ### $f(x+f(y))=f(x)+y^n$ Here is the problem: Fix $n\in\mathbb{N}$. Find all monotonic solutions $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $f(x+f(y))=f(x)+y^n$. I've tried to show that $f(0)=0$ and derive some ... 0answers 143 views ### Vector valued contraction I am familiar with the Banach Fixed Point Theorem and I have used it to prove existence and uniqueness of functional equations in Banach spaces like $C(X)$, the space of bounded continuous function ... 
2answers 300 views ### Implicit function $y = e^{(y-1)/x}$ I'd like to know if the function $y = f(x) : [0,1] \rightarrow [0,1]$ defined implicitly by the transcendental equation $$\displaystyle y = e^{(y-1)/x}$$ is "well known" (name, properties) or is ... 1answer 212 views ### About finding the function such that $f(xy)=f(x)f(y)-f(x+y)+1$ Define a function $f\colon\mathbb{R}\to\mathbb{R}$ which satisfies $$f(xy)=f(x)f(y)-f(x+y)+1$$ for all $x,y\in\mathbb Q$. With a supp condition $f(1)=2$. (I didn't notice that.) How to show that ...
http://cms.math.ca/10.4153/CJM-2012-056-9
Canadian Mathematical Society, Canadian Journal of Mathematics (CJM)

# Inversion of the Radon Transform on the Free Nilpotent Lie Group of Step Two

Published: 2012-12-04

• Jianxun He, School of Mathematics and Information Sciences, Guangzhou University, Guangzhou 510006, China
• Jinsen Xiao, Key Laboratory of Mathematics and Interdisciplinary Sciences of Guangdong Higher Education Institutes, Guangzhou University, Guangzhou 510006, China

## Abstract

Let $F_{2n,2}$ be the free nilpotent Lie group of step two on $2n$ generators, and let $\mathbf P$ denote the affine automorphism group of $F_{2n,2}$. In this article the theory of the continuous wavelet transform on $F_{2n,2}$ associated with $\mathbf P$ is developed, and a type of radial wavelet is constructed. Secondly, the Radon transform on $F_{2n,2}$ is studied and two equivalent characterizations of the range of the Radon transform are given. Several kinds of inversion formulae for the Radon transform are established: one is obtained from the Euclidean Fourier transform, the others from the group Fourier transform. Using the wavelet transform we deduce an inversion formula for the Radon transform that does not require smoothness of the function if the wavelet satisfies a differentiability property. In particular, if $n=1$, then $F_{2,2}$ is the $3$-dimensional Heisenberg group $H^1$, and an inversion formula for the Radon transform associated with the sub-Laplacian on $F_{2,2}$ holds. This result cannot be extended to the case $n\geq 2$.
Keywords: Radon transform, wavelet transform, free nilpotent Lie group, unitary representation, inversion formula, sub-Laplacian MSC Classifications: 43A85 - Analysis on homogeneous spaces 44A12 - Radon transform [See also 92C55] 52A38 - Length, area, volume [See also 26B15, 28A75, 49Q20]
http://mathinsight.org/surfaces_revolution
# Math Insight

### Surfaces of revolution

Surfaces of revolution are graphs of functions $f(x,y)$ that depend only on the distance of the point $(x,y)$ to the origin. One way to discuss such surfaces is in terms of polar coordinates $(r,\theta)$. The coordinate $r$ is the radius from the origin to the point $P$ (or the distance to the origin) and $\theta$ is the angle from the positive $x$-axis. You may recall that we can express the radius $r$ in terms of rectangular (Cartesian) coordinates by $r=\sqrt{x^2+y^2}$. The set of all points where $r$ is constant is a circle of radius $r$ centered around the origin.

Let's say we had a function $f(x,y)$ of a special form so that it depended only on the radius $r$, i.e., depended on $x$ and $y$ only via the expression $\sqrt{x^2+y^2}$. In this case, if we changed $x$ and $y$ in such a way that $r=\sqrt{x^2+y^2}$ didn't change, then the value of the function $f(x,y)$ would not change. Combining this observation with our knowledge about $r$, we conclude that $f(x,y)$ is constant along any circle centered around the origin.

What we've just described is a function of the form $f(x,y)= g\bigl(\sqrt{x^2 + y^2}\bigr)$, where $g(r)$ is some one-variable function. We know, for example, that the function $f(x,y)$ is constant on the circle of radius 2 centered at the origin, because for any point $(x,y)$ where $\sqrt{x^2+y^2}=2$, the value of the function $f(x,y)$ is $g(2)$. This property makes it simple to graph the surface $z=f(x,y)$ because it follows directly from the graph of the curve $z=g(r)$. And you already know how to graph a one-variable function. To graph $f(x,y)$, we take advantage of the fact that it doesn't change as we rotate around the origin in the $xy$-plane.
In three dimensions, the $z$-axis would be pointing out of the screen in the above figure illustrating polar coordinates. Hence, this rotation corresponds to rotation around the $z$-axis. The graph of $f(x,y)$ is the graph of $g(r)$ rotated around the $z$-axis. For this reason, the resulting surface is called a surface of revolution.

To illustrate, we'll show how the plot of \begin{gather*} z=f(x,y) = \frac{\sin \sqrt{x^2+y^2}}{\sqrt{x^2+y^2}+1} \end{gather*} is a surface of revolution. Since $f(x,y)$ depends on $x$ and $y$ only via the combination $r=\sqrt{x^2+y^2}$, we can rewrite $f(x,y)$ as $f(x,y)= g\bigl(\sqrt{x^2 + y^2}\bigr)$, where \begin{gather*} g(r) =\frac{\sin r}{r+1}. \end{gather*} Here's a plot of $g(r)$. To help you visualize the relationship between the plot of $g(r)$ and the surface of revolution $z=f(x,y)$, you can transform between the plot of $g(r)$ and the plot of $f(x,y)$ by changing the rotation angle $\theta$ in the following applet.

Surface of revolution. The graph of the function $$f(x,y)=\frac{\sin \sqrt{x^2+y^2}}{\sqrt{x^2+y^2}+1}$$ is a surface of revolution since $f(x,y)$ is a function of just the radius $r=\sqrt{x^2+y^2}$, i.e., $f(x,y)=g\left(\sqrt{x^2+y^2}\right)$ where $g(r)=\sin r/(r+1)$. You can transform between the plot of $g(r)$ and the plot of $f(x,y)$ by changing the rotation angle $\theta$. When $\theta=0$, the figure shows the plot of $z=g(x)$ in the $xz$-plane. When $\theta=2\pi$, the figure shows the entire surface of $z=f(x,y)$.

Can you recognize that each of these is a surface of revolution?

• $\displaystyle z=\frac{1}{1+x^2+y^2}$
• $z=e^{(x^2+y^2)^3}$
• $z=\sin(\sqrt{x^2+y^2})-x^2-y^2$

#### Cite this as

Nykamp DQ, “Surfaces of revolution.” From Math Insight.
http://mathinsight.org/surfaces_revolution Keywords: surface, visualization
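As a quick sanity check of the defining property (an illustrative sketch, not part of the original page): since $f(x,y)=g\bigl(\sqrt{x^2+y^2}\bigr)$, the value of $f$ should stay fixed as we move around any circle centered at the origin, e.g. the circle of radius 2.

```python
import math

# f(x, y) depends on (x, y) only through r = sqrt(x^2 + y^2),
# so it must be constant on every circle centered at the origin.

def g(r):
    return math.sin(r) / (r + 1)

def f(x, y):
    return g(math.hypot(x, y))

# Sample the circle of radius 2 at several angles theta.
values = [f(2 * math.cos(t), 2 * math.sin(t)) for t in (0.0, 0.7, 1.9, 3.1, 5.5)]

assert all(abs(v - g(2.0)) < 1e-12 for v in values)
print(g(2.0))  # the constant height of the surface above that circle
```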
http://math.stackexchange.com/questions/tagged/markov-chains+intuition
# Tagged Questions 0answers 18 views ### Intuition behind criterion for an irreducible Markov chain to be transient I have been looking over my notes for Markov chains, and I have come across the following: Theorem: An irreducible Markov chain is transient iff for some state $i$ there exists a nonzero vector $y$ ... 0answers 83 views ### Why Markov matrices always have 1 as an eigenvalue Also called a stochastic matrix. Let $A=[a_{ij}]$ be a matrix over $\mathbb{R}$ with $0\le a_{ij} \le 1 \;\forall i,j$ and $\sum_{j}a_{ij}=1 \;\forall i$, i.e., the sum along each row of $A$ is 1. I ... 1answer 59 views ### What is the meaning of detailed balance in Markov chains? I know what it means formally to say that a stochastic matrix and a measure are in detailed balance ($\lambda_iP_{ij} = \lambda_jP_{ji} \; \forall (i,j)$), but I'm not really sure how to interpret it ... 2answers 112 views ### Average run lengths for large numbers of trials: Intuition and proof This article states that the formula for the average run length for large numbers of trials is: $$\frac{1}{1-Pr(event\ in\ one\ trial)}.$$ My questions: What is the intuition behind this formula? Do ...
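For the last question, the formula's content can at least be checked empirically: in a long Bernoulli($p$) sequence, the average length of a run of consecutive events comes out close to $1/(1-p)$, because a run length follows a geometric distribution with parameter $1-p$. A small Monte Carlo sketch (illustrative parameters, not from the article the question cites):

```python
import random

# Monte Carlo check: runs of consecutive "events" in a Bernoulli(p)
# sequence have mean length 1/(1 - p).
random.seed(1)
p = 0.8                      # probability of the event in one trial
n = 200_000
trials = [random.random() < p for _ in range(n)]

runs, current = [], 0
for hit in trials:
    if hit:
        current += 1
    elif current:
        runs.append(current)  # a run of events just ended
        current = 0
if current:                   # count a run still open at the end
    runs.append(current)

mean_run = sum(runs) / len(runs)
print(mean_run, 1 / (1 - p))  # both should be close to 5.0
```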
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238273501396179, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/117004-equivalent-expression-sin-7pi-6-terms-related-acute-angle.html
# Thread: 1. ## Equivalent expression for sin(-7pi/6) in terms of the related acute angle. State an equivalent expression for sin (-7pi/6) in terms of the related acute angle. I don't really know how to solve these kinds of questions; everything I find on Google doesn't explain how to do it. 2. Originally Posted by kmjt State an equivalent expression for sin (-7pi/6) in terms of the related acute angle. I don't really know how to solve these kinds of questions; everything I find on Google doesn't explain how to do it. Hi. Put it in degrees so that it's easier for you to see. $\sin (-\frac{7\pi}{6})=\sin (-210)=-\sin 210=\sin 30=0.5$ Note that $\sin 30=\sin 150=-\sin 210=-\sin 330$ 3. I'm not following =/ I know that sin (-7pi/6) is the same thing as sin (-210 degrees) but I'm not sure where to go from there. I'm not even sure what the question is asking. 4. Originally Posted by kmjt I'm not following =/ I know that sin (-7pi/6) is the same thing as sin (-210 degrees) but I'm not sure where to go from there. I'm not even sure what the question is asking. OK, try to follow here. I will go slow. $\sin (-210)=-\sin 210$ (Step 1) $-\sin 210=-(-\sin 30)$ (Step 2) This is because sin is negative in the 3rd quadrant. $-(-\sin 30)=\sin 30$ (Step 3) I think this is what the question wants, since it asks for acute angles (angles less than 90 degrees). If you still have any problems, just indicate which step you are unsure about... don't say everything or else I don't know where to start. 5. Where is the $-(-\sin 30)$ coming from? 6. Originally Posted by kmjt Where is the $-(-\sin 30)$ coming from? sin 210 = - sin 30 - sin 210 = - ( - sin 30 ) 7. It just clicked in my brain, thanks! 8. I don't see why degrees should be easier than radians, and I think it is good to encourage people to think in terms of radians. $\frac{7\pi}{6}= \pi+ \frac{\pi}{6}$. Since the angle is negative, we swing clockwise from the positive x axis down past the negative y axis to the negative x axis, and then $\pi/6$ above it. 
Now we are in the second quadrant, where sine is still positive: $\sin(-\frac{7\pi}{6})= \sin(\frac{\pi}{6})$.
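The thread's final answer, $\sin(-\frac{7\pi}{6})=\sin(\frac{\pi}{6})=0.5$, is easy to confirm numerically; a quick sketch in Python:

```python
import math

# sin(-7*pi/6) should equal sin(pi/6) = 0.5, since -7*pi/6 is coterminal
# with 5*pi/6 (second quadrant, where sine is positive).
x = math.sin(-7 * math.pi / 6)
print(round(x, 10))  # 0.5
print(math.isclose(x, math.sin(math.pi / 6)))  # True
```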
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470171332359314, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/05/06/vector-spaces/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
# The Unapologetic Mathematician ## Vector spaces I know I usually go light on Sundays, but I want to finish off what I started yesterday. Remember that we’re considering free modules over a ring $R$ with unit. A free module has a basis, but there may be different bases to choose from. I’ll start with an example of how widely bases can vary. Let $M$ be a free $R$-module with one basis element for each natural number: $\{e_1,e_2,...,e_n,...\}$. Then consider the ring $S=\hom_R(M,M)$. I claim that for any natural number $n$ there is a basis of $S$ as a free $S$ module with $n$ elements. That is, $S\cong S\oplus...\oplus S$ as a left $S$ module for any finite number of summands. In fact, here it is: $\{f_0,...,f_{n-1}\}$, where $f_r(e_{qn+r})=e_q$ and $f_r$ sends all other basis elements of $M$ to ${}0$. I’ll leave it to you to check that these elements span $S$ and are linearly independent. This example shows that in general we can’t even say how many elements are in a basis. However, in many cases of interest we can. In particular, if our ring is commutative or a division ring (or both: a field) then any two bases of a free module are in bijection. Actually, when we’re working over a division ring the situation is even better: every module is free! Let’s start by considering a module $V$ over a division ring $D$. I claim that a linearly independent set spans $V$ — and thus is a basis — exactly when it is maximal. That is, if we add any other vector we’ll get a nontrivial linear combination adding up to ${}0$. Indeed if $\{e_i\}_{i\in\mathcal{I}}$ is maximal then when we add any new element $e$ we get a relation $re+\sum\limits_{i\in\mathcal{I}}r_ie_i=0$ Here $r$ has to be nonzero, because otherwise we would already have a linear relation on $\{e_i\}_{i\in\mathcal{I}}$. 
But since $D$ is a division ring we can multiply on the left by $r^{-1}$ to get $e=\sum\limits_{i\in\mathcal{I}}-r^{-1}r_ie_i$ so the maximal linearly independent set $\{e_i\}_{i\in\mathcal{I}}$ spans $V$, and thus is a basis. Now, take any linearly independent subset $X$ of $V$ and consider the collection of all linearly independent subsets of $V$ containing $X$. We can partially-order these subsets by inclusion: if a subset $Y$ is contained in another $Y'$ then $Y\leq Y'$ in the order. Take some list of these subsets $\{C_i\}$ so that the inclusion order restricted to this list is a total order. That is, each $C_i$ either contains or is contained in each other $C_j$. We can verify that the union of all the subsets in the list is actually still linearly independent, and it clearly contains every other element in the list. Thus it is an upper bound on the list which is still in the collection of linearly independent subsets of $V$. Every such list does contain an upper bound in the collection. And here we need a seemingly-bizarre statement that I’ll cover more thoroughly in a later post: Zorn’s Lemma. This says that any nonempty partially-ordered set $P$ in which every chain (totally ordered subset of $P$) has an upper bound in $P$ contains a maximal element. That is, an element $a$ so that $a\leq c$ implies $a=c$. There’s nothing “above” $a$. So here we have just such a partially-ordered set. Zorn’s Lemma tells us that there is some linearly independent subset of $V$ containing $X$ that is contained in no larger linearly independent subset. Thus starting with any linearly independent set we can add some elements to it and get a basis. In particular, we could choose $X$ to be the empty set — no elements means no relations at all means linearly independent — and pull a basis for $V$ out of Zorn’s hat. Weird. Eerie. And another similar argument shows that if we started with a set that spans $V$ we can throw out some elements to get a basis. 
From here there are a couple of really rather technical theorems to get to the fact that any two bases of $V$ have the same cardinality. One handles the infinite case (and applies to all rings with unit) and the other the finite case (and just applies to division rings). The latter is actually not that hard, just dry. Take one basis and replace its elements one-by-one with elements of the other basis, showing at each step that you still have a basis if you choose the replacement right. I might go through these if people really want to see them, but I've never seen what good the proofs are. The upshot is that modules over division rings are exceedingly nice. Every single one of them has a basis, and any two bases of a given module have the same cardinality. We have a few special terms here. We call modules over division rings "vector spaces", and the cardinality of any basis of a vector space we call its "dimension". Vector spaces, particularly over fields (commutative division rings), will be extremely useful to us as we move ahead. One very common use is to use a vector space over a field $\mathbb{F}$ as the substrate for an algebraic structure rather than an abelian group ($\mathbb{Z}$-module). For example, we might want to put an action of some other ring $R$ onto a vector space $V$, commuting with the field action. Our work on modules then tells us that in many ways working over $\mathbb{F}$ is just like working over $\mathbb{Z}$. For instance, we can take tensor products over $\mathbb{F}$ and apply $\hom_{\mathbb{F}}$ and get back vector spaces over $\mathbb{F}$ since $\mathbb{F}$ is commutative and acts on both sides of any vector space. The resulting theory will often be simpler, though, because general vector spaces are so much simpler than general abelian groups, and so they're less likely to "get in the way" of other structures than abelian groups are. Posted by John Armstrong | Ring theory
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 65, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342401027679443, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/forces+particle-physics
# Tagged Questions 1answer 35 views ### Can Joule's First Law of Thermodynamics be Applied to Atomic Charges? James Joule established that all forms of energy were basically the same and interchangeable. My question is whether that law is relevant in particle physics. Can a positive charge and a negative charge be ... 1answer 135 views ### Origin of electric charge Baryons have charges that are the result of a polynomial calculation of their building blocks (quarks)'s fractional charges. But what gives these quarks electric charges? What interactions do they ... 1answer 62 views ### Will a gas keep forever in a “perfect” flask? I've been wondering about the porosity of materials, I know that, for example the air comes out of tires/balloons because (besides having huge gaps on the rim contact area/knot) they are made of a ... 1answer 176 views ### How are forces related to decays? How are decays related to forces? What is meant by "particle X decays through, say, the strong force"? The way I understand forces is by how they change the acceleration of particles with the right ... 1answer 61 views ### Is There Any Difference Between Strong Quarks Force and Strong Nuclear Force? the spectrum of quarkonium and the comparison with positronium potential for the strong force is: $V(r) = - \frac{4}{3} \frac{\alpha_s(r) \hbar c}{r} + kr$ (where the constant $k$ determines the ... 1answer 319 views ### Inverse square law in 2+1 dimensional universe from a Yukawa coupling? There is a nice result that in 3+1 space time, a Yukawa coupling leads to an inverse square law force as the mass of the scalar field goes to zero. I was wondering what the corresponding force in a ... 3answers 514 views ### How does gravity force get transmitted? How does gravity force get transmitted? It is not transmitted by particles I guess. Because if it was, then its propagation speed would be limited by the speed of light. If it is not transmitted by ... 
1answer 282 views ### How are fundamental forces transmitted? How are the fundamental forces transmitted? In particular I wonder, are all "processes" local, i.e. without superluminal distant interactions? But if they are local, then particles would have to ... 2answers 90 views ### Nonabelian gauge theories and range of the corresponding force Do all nonabelian gauge theories produce short range force? 5answers 4k views ### Is there an equation for the strong nuclear force? The equation describing the force due to gravity is $$F = G \frac{m_1 m_2}{r^2}.$$ Similarly the force due to the electrostatic force is $$F = k \frac{q_1 q_2}{r^2}.$$ Is there a similar equation ... 2answers 1k views ### Did the researchers at Fermilab find a fifth force? Please consider the publication Invariant Mass Distribution of Jet Pairs Produced in Association with a W boson in $p\bar{p}$ Collisions at $\sqrt{s} = 1.96$ TeV by the CDF-Collaboration, ... 1answer 170 views ### Paramagnetism what about Paraweakism or Parastrongism? Ok, I was just curious but the electromagnetic force can allow paramagnetism macroscopically in some objects. Can this be done microscopically to the subatomic level? Also, what about other forces ... 2answers 1k views ### Why isn't Higgs coupling considered a fifth fundamental force? When I first learned about the four fundamental forces of nature, I assumed that they were just the only four kind of interactions there were. But after learning a little field theory, there are many ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924339771270752, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/116790-finding-recurrence-relation.html
# Thread: 1. ## Finding a recurrence relation Define an n-letter word to be a string of n letters, each taken from the 26 letters of the alphabet. Find a recurrence relation for $a_n$, $n \ge 0$, where $a_n$ is the number of n-letter words in which no adjacent letters in the word can both be vowels. (For example, bark is such a 4-letter word, but meow is not.) 2. Originally Posted by sbankica Define an n-letter word to be a string of n letters, each taken from the 26 letters of the alphabet. Find a recurrence relation for $a_n$, $n \ge 0$, where $a_n$ is the number of n-letter words in which no adjacent letters in the word can both be vowels. (For example, bark is such a 4-letter word, but meow is not.) Hint: Let's say we want to find $a_{n+1}$. For the sake of brevity, let's say a string is "acceptable" if it does not contain two adjacent vowels. Break the possibilities into two disjoint sets depending on whether the (n+1)-th letter is a vowel. The total number of possibilities is the sum of the sizes of these sets. If the (n+1)-th letter is a vowel, the preceding n letters can be any acceptable string which does not end in a vowel. If the nth letter is not a vowel, then the preceding n-1 letters can be any acceptable string, which can be done in $a_{n-1}$ ways. Then how many choices are there for the nth and (n+1)-th letters? If the (n+1)-th letter is not a vowel, then the preceding n letters can be any acceptable string, which can be done in $a_n$ ways. Then how many choices are there for the (n+1)-th letter?
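With the usual 5 vowels and 21 consonants, the hint leads to the recurrence $a_{n+1} = 21a_n + 105a_{n-1}$ with $a_0 = 1$, $a_1 = 26$ (this closed form is my reading of the hint, not stated explicitly in the thread). A brute-force check for small $n$; a sketch in Python:

```python
from itertools import product

VOWELS = set("aeiou")
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def acceptable(word):
    # No two adjacent letters may both be vowels.
    return not any(a in VOWELS and b in VOWELS for a, b in zip(word, word[1:]))

def brute(n):
    # Count acceptable n-letter words by enumeration (feasible only for small n).
    return sum(acceptable(w) for w in product(ALPHABET, repeat=n))

# Recurrence suggested by the hint: a_{n+1} = 21*a_n + 21*5*a_{n-1}
# (last letter a consonant, or a consonant followed by a vowel).
a = [1, 26]
for n in range(2, 5):
    a.append(21 * a[-1] + 105 * a[-2])

for n in range(2, 4):
    assert a[n] == brute(n), n
print(a)  # [1, 26, 651, 16401, 412776]
```

For instance $a_2 = 21 \cdot 26 + 105 = 651$, which matches $26^2 - 5^2 = 651$ (all two-letter words minus the vowel-vowel ones).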
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8746001124382019, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/137174-raffle-probability.html
# Thread: 1. ## Raffle Probability I'm entered in a raffle with my friends and we're trying to find out our chance of winning. We have six tickets and there are 20 total tickets for the three prizes. The first drawing will take place and the winning ticket will be put aside, then the second drawing will take place with the winning ticket put aside and so on. What are our chances of winning one of the prizes? 2. Hello deathfromabove Welcome to Math Help Forum! Originally Posted by deathfromabove I'm entered in a raffle with my friends and we're trying to find out our chance of winning. We have six tickets and there are 20 total tickets for the three prizes. The first drawing will take place and the winning ticket will be put aside, then the second drawing will take place with the winning ticket put aside and so on. What are our chances of winning one of the prizes? I'm assuming that the question means: what is the probability that we win at least one prize? In which case, we find the probability that we don't win any prizes, and subtract this from $1$. So, when the first ticket is drawn, there are $14$ losing tickets out of $20$. The probability that one of these is chosen is $\frac{14}{20}$ If we don't win on this draw, then there are $13$ losing tickets out of the $19$ remaining. So the probability that one of these is chosen is $\frac{13}{19}$ Similarly the probability that the third one chosen also loses is $\frac{12}{18}$ So the probability that we don't win at all is: $\frac{14}{20}\times\frac{13}{19}\times\frac{12}{18}=\frac{182}{570}$ Therefore the probability that we win at least one prize is $1-\frac{182}{570}=\frac{388}{570}=\frac{194}{285}$ Grandad
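The computation above can be double-checked with exact fractions, and equivalently with binomial coefficients, since the three winning tickets are just a uniformly random 3-subset of the 20; a sketch in Python:

```python
from fractions import Fraction
from math import comb

# Probability of winning no prize: all 3 drawn tickets come from the 14 we don't hold.
p_lose = Fraction(14, 20) * Fraction(13, 19) * Fraction(12, 18)

# Same number by counting: choose the 3 winners among the 14 losing tickets.
assert p_lose == Fraction(comb(14, 3), comb(20, 3))

p_win = 1 - p_lose
print(p_win)                 # 194/285
print(round(float(p_win), 4))  # 0.6807
```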
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9657724499702454, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/9395-russel-paradox.html
# Thread: 1. ## Russell Paradox Can someone answer this: What the Russell Paradox is (a complete proof that at some point you will want to prove that a certain assumption leads to a contradiction). And then explain why it is so bad that a contradiction can be proved somewhere. (Why can't we just ignore the fact that in some remote corner of set theory this contradiction arises?) And finally my 3rd question about the Russell paradox: iii) Assuming a statement B, which happens to be a contradiction, prove that 6=21. 2. Originally Posted by ruprotein Can someone answer this: What the Russell Paradox is (a complete proof that at some point you will want to prove that a certain assumption leads to a contradiction). See this. RonL 3. What if I asked you the last question? I kind of know what the Russell paradox is, it's a simple theorem; I'm trying to answer this question though, or the question: Assuming a statement C, which happens to be a contradiction, prove that 2=7. 4. The article referred to by CaptainBlack is nice. I will however try to discuss it in easier, if perhaps more imprecise terminology. Frege's original conception of a set was essentially as follows: anything you can describe is a set. The set of all natural numbers. The set of all elephants. The set of all tea cups. The set of all things that are green. etc, etc. So, let R denote "the set of all sets which are not members of themselves." There is nothing a priori in Frege's original conception which would disallow such a thing. Now ask the following question: is R a member of itself? However you try to answer this question, you are led to an inevitable contradiction. Suppose R is a member of itself. Then, by definition, R is not a member of itself (since it contains precisely those sets which are not members of themselves). Conversely, if R is not a member of itself, then by definition it is a member of itself. There is the paradox. Now you ask, what is so bad about a paradox in a formal system? This is called inconsistency. 
And the problem is, you can prove (and disprove) absolutely anything once you've encountered one. Suppose you have a statement A, and that you've established the contradiction that both A and ~A (read NOT A) are true. Let P be any other statement. The statement "A or ~A" is a tautology. Thus "~P implies (A or ~A)" is also a tautology. This is equivalent to, by contraposition, "~(A or ~A) implies P". But "~(A or ~A)" is equivalent to "A and ~A". Thus we have "(A and ~A) implies P". But "A and ~A" is a true statement! And since "(A and ~A) implies P", we must conclude that P is true. So we see that a paradox is never self-contained. Once you have both a statement and its negative being true, you can prove absolutely any other statement (and its negative). 5. It reminds me of this..... "In a town of Seville the barber shaves everyone who does not shave himself. Who shaves the barber?" --- BubbleBrain I have a question. In axiomatic set theory, is the Russell paradox settled? 6. ## 2=7 OK, well, no one seemed to answer this, so maybe you can see if my proof is correct or there is some mistake somewhere. PROOF. Step 1. let p(x) be the statement "2=7" step 2. then let S be the set such that S = {x: p(x)} s3. Assume p(x) s4. Then p(x) is such an x such that the set S is true, but we know 2 doesn't equal 7. Then S is not true, which results in a contradiction. This is an example of the Russell Paradox. 7. Brain_103 gave you a very clear discussion of this problem. I will just add a few comments. If you have access to a mathematics library find NAÏVE SET THEORY by Paul Halmos. That book has one of the clearest discussions of the problem. As Halmos shows, with proper axioms one can prove that "nothing contains everything". In other words, the whole problem is one of extensionality just as the Brain said. To answer TPH's question: paradoxes are never settled. We just improve the axioms to avoid the problem. 
In the newer axiom systems, most use ZF or Quine, both avoid the Russell paradox. Having read your question a number of times, I am still unsure about #3. The sentence "If $6 \in \emptyset$ then $6 = 21$" is a true statement. You see, a property of implication is: a false statement implies any statement. Thus, if we allow any paradox to remain in the system then any statement is true. 8. Originally Posted by Plato To answer TPH's question: paradoxes are never settled. We just improve the axioms to avoid the problem. In the newer axiom systems, most use ZF or Quine, both avoid the Russell paradox. I think I understand what you said. I just want to be sure of it. You are saying that the paradoxes are not settled but they do not arise because the ZFC axioms forbid such a set to exist. Thus, we did not settle the paradox; rather, we just ignore it and it never bothers us because of the consistency of the axioms? 9. Originally Posted by ThePerfectHacker the paradoxes are not settled but they do not arise because the ZFC axioms forbid such a set to exist. Yes that is correct. We do not bother because they will not arise. 10. Originally Posted by Plato Yes that is correct. We do not bother because they will not arise. Sweet! I cannot wait for the day I learn set theory. 11. Originally Posted by ThePerfectHacker It reminds me of this..... "In a town of Seville the barber shaves everyone who does not shave himself. Who shaves the barber?". I have a simple answer to this. Barber's wife! Women are not shaving at all so the rule that "barber shaves everyone who does not shave himself" doesn't apply to them... I've got another futuristic answer... Robot!!! 12. Originally Posted by OReilly I have a simple answer to this. Barber's wife! Women are not shaving at all so the rule that "barber shaves everyone who does not shave himself" doesn't apply to them... I've got another futuristic answer... Robot!!! There is a problem with the "woman" answer. 
The original statement of the paradox says: Suppose there is a town with just one male barber; and that every man in the town keeps himself clean-shaven: some by shaving themselves, some by attending the barber. It seems reasonable to imagine that the barber obeys the following rule: He shaves all and only those men who do not shave themselves. Under this scenario, we can ask the following question: Does the barber shave himself? The problem statement clearly specifies that the barber is a man. -Dan 13. Originally Posted by topsquark There is a problem with the "woman" answer. The original statement of the paradox says: The problem statement clearly specifies that the barber is a man. -Dan I think my English was misunderstood. I didn't mean that barber is female. My answer was that barber is shaved by his wife. 14. Originally Posted by OReilly I think my English was misunderstood. I didn't mean that barber is female. My answer was that barber is shaved by his wife. Ah, no. I just misunderstood you. However the barber shaves every man that doesn't shave himself. Thus if he doesn't shave himself, the barber (not the barber's wife) must shave him. -Dan 15. Originally Posted by topsquark Ah, no. I just misunderstood you. However the barber shaves every man that doesn't shave himself. Thus if he doesn't shave himself, the barber (not the barber's wife) must shave him. -Dan I know, I was just joking.
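The "explosion" argument sketched earlier in the thread (from A and ~A, any statement P follows) is short enough to check formally; a minimal sketch in Lean 4:

```lean
-- Principle of explosion: a contradiction proves any proposition.
theorem explosion (A P : Prop) (h : A ∧ ¬A) : P :=
  absurd h.left h.right
```

Here `absurd` is the standard library's combinator taking a proof of a proposition and a proof of its negation and producing a proof of anything.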
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9608754515647888, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/22965?sort=oldest
## Bounded and weakly bounded sets in top. vector spaces

Consider a locally convex topological vector space V over the complex numbers. Is it true that every weakly bounded subset of V is indeed bounded? If not, what additional requirements are needed for this to hold? Perhaps someone has a reference; I was not able to find anything in the literature. Cheers, Ralf -

## 2 Answers

Theorem 3.18 in the excellent book by Rudin, "Functional Analysis", says: In a locally convex space $X$, every weakly bounded set is originally bounded, and vice versa. The proof is based on the Banach-Alaoglu theorem (well, no surprise) and Baire's category theorem. -

What's "originally bounded"? I've never heard of that one! – Andrew Stacey Apr 29 2010 at 14:13
It means "bounded in the original topology" – Johannes Hahn Apr 29 2010 at 15:23
Of course in the case where X is not just a LCTVS but a Banach space, this is the Uniform Boundedness theorem (a.k.a. Banach-Steinhaus, more or less) – Yemon Choi Apr 29 2010 at 16:28
@Yemon: that's what worries me a little about this answer. Banach-Steinhaus is a Big Theorem and I don't think that it holds for all LCTVS (does it even hold for incomplete nvs?). Unfortunately, I'm not in my office so can't check my sources, but I want to say something like "X needs to be barrelled", but that might only be to do with the space of functions on X, not X itself. – Andrew Stacey Apr 29 2010 at 17:15
@Andrew: a subset $E$ of a topological vector space $X$ is (originally) bounded if to every neighborhood $V$ of $0$ in $X$ there corresponds a number $s > 0$ such that $E \subset tV$ for every $t > s$. I'm sorry, I should have stated the definition that I use in my answer, but I think this is the common definition if you just have topological vector spaces.
– Ulrich Pennig Apr 30 2010 at 7:20

This is a direct consequence of the Mackey theorem: given a dual pair (V,V') with V' the dual of the locally convex space V, the bounded sets of V are the same under every dual topology. A dual topology on V is a locally convex topology $\tau$ such that (V,$\tau$)' = V'. Since the original and the weak topology give the same dual, the bounded sets are identical. -
http://mathbabe.org/category/math-education/page/2/
# mathbabe

Exploring and venting about quantitative issues

### Archive

Archive for the 'math education' Category

## Columbia Data Science course, week 7: Hunch.com, recommendation engines, SVD, alternating least squares, convexity, filter bubbles

October 18, 2012

Last night in Rachel Schutt's Columbia Data Science course we had Matt Gattis come and talk to us about recommendation engines. Matt graduated from MIT in CS, worked at SiteAdvisor, and co-founded Hunch as its CTO, which was recently acquired by eBay. Here's what Matt had to say about his company:

Hunch

Hunch is a website that gives you recommendations of any kind. When we started out, it worked like this: we'd ask you a bunch of questions (people seem to love answering questions), and then you could ask the engine questions like, what cell phone should I buy? or, where should I go on a trip? and it would give you advice. We use machine learning to learn and to give you better and better advice.

Later we expanded into more of an API where we crawled the web for data rather than asking people direct questions. We can also be used by third parties to personalize content for a given site, a nice business proposition which led eBay to acquire us. My role there was doing the R&D for the underlying recommendation engine.

Matt has been building code since he was a kid, so he considers software engineering to be his strong suit. Hunch is a cross-domain experience, so he doesn't consider himself a domain expert in any focused way, except for recommendation systems themselves.

The best quote Matt gave us yesterday was this: "Forming a data team is kind of like planning a heist." He meant that you need people with all sorts of skills, and that one person probably can't do everything by herself. Think Ocean's Eleven but sexier.

A real-world recommendation engine

You have users, and you have items to recommend. Each user and each item has a node to represent it. Generally users like certain items.
We represent this as a bipartite graph. The edges are "preferences". They could have weights: they could be positive, negative, or on a continuous scale (or discontinuous but many-valued, like a star system). The implications of this choice can be heavy, but we won't get too into them today.

So you have all this training data in the form of preferences. Now you wanna predict other preferences. You can also have metadata on users (i.e. know they are male or female, etc.) or on items (a product for women).

For example, imagine users came to your website. You may know each user's gender, age, whether they're liberal or conservative, and their preferences for up to 3 items. We represent a given user as a vector of features, sometimes including only their metadata, sometimes including only their preferences (which would lead to a sparse vector since you don't know all their opinions), and sometimes including both, depending on what you're doing with the vector.

Nearest Neighbor Algorithm?

Let's review the nearest neighbor algorithm (discussed here): if we want to predict whether user A likes something, we just look at the user B closest to user A who has an opinion, and we assume A's opinion is the same as B's. To implement this you need a definition of a metric so you can measure distance. One example: Jaccard distance, i.e. the number of preferences two users have in common divided by the total number of things either of them has a preference on. Other examples: cosine similarity or euclidean distance. Note: you might get a different answer depending on which metric you choose.

What are some problems using nearest neighbors?

• There are too many dimensions, so the closest neighbors are too far away from each other. There are tons of features, moreover, that are highly correlated with each other. For example, you might imagine that as you get older you become more conservative. But then counting both age and politics would mean you're double counting a single feature in some sense.
This would lead to bad performance, because you're using redundant information. So we need to build in an understanding of the correlation and project onto a smaller dimensional space.
• Some features are more informative than others. Weighting features may therefore be helpful: maybe your age has nothing to do with your preference for item 1. Again you'd probably use something like covariances to choose your weights.
• If your vector (or matrix, if you put together the vectors) is too sparse, or you have lots of missing data, then most things are unknown and the Jaccard distance means nothing because there's no overlap.
• There's measurement (reporting) error: people may lie.
• There's a calculation cost – computational complexity.
• Euclidean distance also has a scaling problem: age differences outweigh other differences if they're reported as 0 (for don't like) or 1 (for like). Essentially this means that raw euclidean distance, without normalization, lets large-scale features dominate.
• Also, old and young people might think one thing but middle-aged people something else. We seem to be assuming a linear relationship, but it may not exist.
• User preferences may also change over time, which falls outside the model. For example, at eBay, they might be buying a printer, which makes them only want ink for a short time.
• Overfitting is also a problem. The one closest guy could just be noise. How do you adjust for that? One idea is to use k-nearest neighbors, with say k = 5.
• It's also expensive to update the model as you add more data.

Matt says the biggest issues are overfitting and the "too many dimensions" problem. He'll explain how he deals with them.

Going beyond nearest neighbor: machine learning/classification

In its most basic form, we can model each item separately using a linear regression. Denote by $f_{i, j}$ user $i$'s preference for item $j$ (or attribute, if item $j$ is a metadata item).
Say we want to model a given user's preferences for a given item using only the 3 metadata properties of that user, which we assume are numeric. Then we can look for the best choice of $\beta_k$ as follows:

$p_i = \beta_1 f_{1, i} + \beta_2 f_{2, i} + \beta_3 f_{3, i} + \epsilon.$

Remember, this model only works for one item. We need to build as many models as we have items. We know how to solve the above per item by linear algebra. Indeed, one of the drawbacks is that we're not using other items' information at all to create the model for a given item.

This solves the "weighting of the features" problem we discussed above, but overfitting is still a problem, and it comes in the form of having huge coefficients when we don't have enough data (i.e. not enough opinions on given items). We have a bayesian prior that these weights shouldn't be too far out of whack, and we can implement this by adding a penalty term for really large coefficients. This ends up being equivalent to adding a prior matrix to the covariance matrix. How do you choose $\lambda$? Experimentally: use some data as your training set, evaluate how well you did using particular values of $\lambda$, and adjust.

Important technical note: you can't use this penalty term for large coefficients and assume the "weighting of the features" problem is still solved, because in fact you're implicitly penalizing some coefficients more than others. The easiest way to get around this is to normalize your variables before entering them into the model, similar to how we did it in this earlier class.

The dimensionality problem

We still need to deal with this very large problem. We typically use both Principal Component Analysis (PCA) and Singular Value Decomposition (SVD). To understand how this works, let's talk about how we reduce dimensions and create "latent features" internally every day.
For example, we invent concepts like "coolness" – but I can't directly measure how cool someone is, like I could weigh them or something. Different people exhibit patterns of behavior which we internally label with our one dimension of "coolness". We let the machines do the work of figuring out what the important "latent features" are. We expect them to explain the variance in the answers to the various questions. The goal is to build a model which has a representation in a lower dimensional subspace which gathers "taste information" to generate recommendations.

SVD

Given a matrix $X,$ decompose it into three matrices:

$X = U S V^{\tau}.$

Here $X$ is $m \times n,$ $U$ is $m \times k,$ $S$ is $k \times k,$ and $V$ is $n \times k,$ where $m$ is the number of users, $n$ is the number of items, and $k$ is the rank of $X.$ The rows of $U$ correspond to users, whereas $V$ has a row for each item. The square matrix $S$ is diagonal, where each entry is a singular value, which measures the importance of each dimension. If we put them in decreasing order, which we do, then the dimensions are ordered by importance from highest to lowest. Every matrix has such a decomposition.

Important properties:

• The columns of $U$ and $V$ are orthogonal to each other.
• So we can order the columns by singular values.
• We can take a lower rank approximation of $X$ by throwing away part of $S.$ In this way we might have $k$ much smaller than either $n$ or $m,$ and this is what we mean by compression.
• There is an important interpretation to the values in the matrices $U$ and $V.$ For example, we can see, by using SVD, that "the most important latent feature" is often something like seeing if you're a man or a woman.

[Question: did you use domain expertise to choose questions at Hunch? Answer: we tried to make them as fun as possible. Then, of course, we saw things needing to be asked which would be extremely informative, so we added those.
In fact we found that we could ask merely 20 questions and then predict the rest of them with 80% accuracy. They were questions that you might imagine and some that surprised us, like competitive people vs. uncompetitive people, introverted vs. extroverted, thinking vs. perceiving, etc., not unlike MBTI.]

More details on our encoding:

• Most of the time the questions are binary (yes/no).
• We create a separate variable for every variable.
• Comparison questions may be better at granular understanding, and get to revealed preferences, but we don't use them.

Note if we have a rank $k$ matrix $X$ and we use the SVD above, we can take the approximation with only $k-3$ rows of the middle matrix $S,$ so in other words we take the top $k-3$ most important latent features, and the corresponding rows of $U$ and $V,$ and we get back something very close to $X.$

Note that the problem of sparsity or missing data is not fixed by the above SVD approach, nor is the computational complexity problem; SVD is expensive.

PCA

Now we're still looking for $U$ and $V$ as above, but we don't have $S$ anymore, so $X = U \cdot V^{\tau},$ and we have a more general optimization problem. Specifically, we want to minimize:

$\underset{U, V}{argmin} \sum_{(i, j) \in P} (p_{i, j} - u_i \cdot v_j)^2.$

Let me explain. We denote by $u_i$ the row of $U$ corresponding to user $i,$ and similarly we denote by $v_j$ the row of $V$ corresponding to item $j.$ Items can include metadata information (so the age vectors of all the users will be a row in $V$). Then the dot product $u_i \cdot v_j$ is taken to mean the predicted value of user $i$'s preference for item $j,$ and we compare that to the actual preference $p_{i, j}.$ The set $P$ is just the set of all actual known preferences or metadata attribution values.
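As a toy illustration of this setup (not Hunch's actual code), here's a small numpy sketch of the masked squared-error objective: we build a preference matrix whose entries come from low-dimensional latent factors, pretend some entries are unobserved, and measure the error only over the known preferences. The matrix sizes, the mask, and the rank are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, D = 6, 5, 2              # users, items, latent dimensions (toy sizes)
U = rng.normal(size=(m, D))    # one row per user
V = rng.normal(size=(n, D))    # one row per item
P = U @ V.T                    # "true" preferences generated from latent factors

# Pretend we only observe about 60% of the preferences.
mask = rng.random((m, n)) < 0.6

def objective(U, V, P, mask):
    """Sum of squared errors over known preferences only."""
    diff = (P - U @ V.T) * mask
    return float((diff ** 2).sum())

# With the true factors, the masked objective is zero.
print(objective(U, V, P, mask))

# A truncated SVD keeps only the top-D singular values/vectors; since P was
# built to have rank D, the rank-D reconstruction recovers it exactly.
u, s, vt = np.linalg.svd(P, full_matrices=False)
X_approx = u[:, :D] * s[:D] @ vt[:D, :]
print(np.allclose(P, X_approx))  # True
```

In a real system the observed entries come from users rather than from planted factors, so the objective is positive and we minimize it over $U$ and $V$, as described next.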
So, we want to find the best choices of $U$ and $V$ which overall minimize the squared differences between prediction and observation on everything we actually know, and the idea is that if it's really good on stuff we know, it will also be good on stuff we're guessing.

Now we have a parameter, namely the number $D,$ which is how many latent features we want to use. The matrix $U$ will have a row for each user and a column for each latent feature, and the matrix $V$ will have a row for each item and a column for each latent feature. How do we choose $D?$ It's typically about 100, since it's more than 20 (we already know we had a pretty good grasp on someone if we asked them 20 questions) and it's as much as we care to add before it's computationally too much work.

Note the resulting latent features will be uncorrelated, since they are solving an efficiency problem (this is an observation, not a proof). But how do we actually find $U$ and $V?$

Alternating Least Squares

This optimization doesn't have a nice closed formula like ordinary least squares with one set of coefficients. Instead, we use an iterative algorithm like gradient descent. As long as your problem is convex you'll converge ok (i.e. you won't find yourself at a local but not global minimum), and we will force our problem to be convex using regularization.

Algorithm:

• Pick a random $V.$
• Optimize $U$ while $V$ is fixed.
• Optimize $V$ while $U$ is fixed.
• Keep doing the above two steps until you're not changing very much at all.

Example: Fix $V$ and update $U.$ The way we do this optimization is user by user. So for user $i,$ we want to find

$argmin_{u_i} \sum_{j \in P_i} (p_{i, j} - u_i \cdot v_j)^2,$

where $v_j$ is fixed. In other words, we just care about this user for now. But wait a minute, this is the same as linear least squares, and has a closed form solution!
In other words, set:

$u_i = (V_{*, i}^{\tau} V_{*, i})^{-1} V_{*, i}^{\tau} P_{*, i},$

where $V_{*, i}$ is the subset of $V$ for which we have preferences coming from user $i.$ Taking the inverse is easy since it's $D \times D,$ which is small. And there aren't that many preferences per user, so solving this many times is really not that hard. Overall we've got a do-able update for $U.$

When you fix $U$ and optimize $V,$ it's analogous; you only ever have to consider the users that rated that movie, which may be pretty large, but you're only ever inverting a $D \times D$ matrix.

Another cool thing: since each user's update depends only on their own preferences, we can parallelize this update of $U$ or $V.$ We can run it on as many different machines as we want to make it fast. There are lots of different versions of this. Sometimes you need to extend it to make it work in your particular case.

Note: as stated this is not actually convex, but similar to the regularization we did for least squares, we can add a penalty for large entries in $U$ and $V,$ depending on some parameter $\lambda,$ which again translates to the same thing, i.e. adding a diagonal matrix to the covariance matrix when you solve least squares. This makes each alternating subproblem strictly convex if $\lambda$ is big enough.

You can add new users and new data and keep optimizing $U$ and $V.$ You can choose which users you think need more updating. Or if they have enough ratings, you can decide not to update the rest of them.

As with any machine learning model, you should perform cross-validation for this model – leave out a bit and see how you did. This is a way of testing overfitting problems.

Thought experiment – filter bubbles

What are the implications of using error minimization to predict preferences? How does presentation of recommendations affect the feedback collected? For example, can we end up in local maxima with rich-get-richer effects?
In other words, does showing certain items at the beginning "give them an unfair advantage" over other things? And so do certain things just get popular or not based on luck? How do we correct for this?

Categories: data science, math education, statistics

## The investigative mathematical journalist

October 14, 2012

I've been out of academic math a few years now, but I still really enjoy talking to mathematicians. They are generally nice and nerdy and utterly earnest about their field and the questions in their field and why they're interesting. In fact, I enjoy these conversations more now than when I was an academic mathematician myself.

Partly this is because, as a professional, I was embarrassed to ask people stupid questions, because I thought I should already know the answers. I wouldn't have asked someone to explain motives and the Hodge Conjecture in simple language because honestly, I'm pretty sure I'd gone to about 4 lectures as a graduate student explaining all of this, and if I could just remember the answer I would feel smarter.

But nowadays, having left and nearly forgotten that kind of exquisite anxiety that comes out of trying to appear superhuman, I have no problem at all asking someone to clarify something. And if they give me an answer that refers to yet more words I don't know, I'll ask them to either rephrase or explain those words. In other words, I'm becoming something of an investigative mathematical journalist. And I really enjoy it. I think I could do this for a living, or at least as a large project.

What I have in mind is the following: I go around the country (I'll start here in New York) and interview people about their field. I ask them to explain the "big questions" and what awesomeness would come from actually having answers. Why is their field interesting? How does it connect to other fields? What is the end goal? How would achieving it inform other fields? Then I'd write them up like columns.
So one column might be "Hodge Theory" and it would explain the main problem, the partial results, and the connections to other theories and fields, or another column might be "motives" and it would explain the underlying reason for inventing yet another technology and how it makes things easier to think about. Obviously I could write a whole book on a given subject, but I wouldn't. My audience would be, primarily, other mathematicians, but I'd write it to be readable by people who have degrees in other quantitative fields like physics or statistics.

Even more obviously, every time I chose a field and a representative to interview, and every time I chose to stop there, I'd be making in some sense a political choice, which would inevitably piss someone off, because I realize people are very sensitive to this. This is presuming anybody ever read my surveys in the first place, which is a big if.

Even so, I think it would be a contribution to mathematics. I actually think a pretty serious problem with academic math is that people from disparate fields really have no idea what each other is doing. I'm generalizing, of course, and colloquiums do tend to address this, when they are well done and available. But for the most part, let's face it, people are essentially only rewarded for writing stuff that is incredibly "insider" for their field, that only a few other experts can understand. Surveys of topics, when they're written, are generally not considered "research" but more like a public service.

And by the way, this is really different from the history of mathematics, in that I have never really cared about who did what, and I still don't (although I'm not against naming a few people in my columns). The real goal here is to end up with a more or less accurate map of the active research areas in mathematics and how they are related. So an enormous network, with various directed edges of different types.
In fact, writing this down makes me want to build my map as I go, an annotated visualization to pair with the columns. Also, it obviously doesn't have to be me doing all this: I'm happy to make it an open-source project with a few guidelines and version control. But I do want to kick it off because I think it's a neat idea.

A few questions about my mathematical journalism plan:

1. Who's going to pay me to do this?
2. Where should I publish it?

If the answers are "nobody" and "on mathbabe.org" then I'm afraid it won't happen, at least by me. Any ideas?

One more thing. This idea could just as well be done for another field altogether, like physics or biology. Are there models of people doing something like that in those fields that you know about? Or is there someone actually already doing this in math?

Categories: math, math education, musing

## Columbia Data Science course, week 6: Kaggle, crowd-sourcing, decision trees, random forests, social networks, and experimental design

October 11, 2012

Yesterday we had two guest lecturers, who took up approximately half the time each. First we welcomed William Cukierski from Kaggle, a data science competition platform. Will went to Cornell for a B.A. in physics and to Rutgers to get his Ph.D. in biomedical engineering. He focused on cancer research, studying pathology images. While working on writing his dissertation, he got more and more involved in Kaggle competitions, finishing very near the top in multiple competitions, and now works for Kaggle. Here's what Will had to say.

Crowd-sourcing in Kaggle

What is a data scientist? Some say it's someone who is better at stats than an engineer and better at engineering than a statistician. But one could argue it's actually someone who is worse at stats than a statistician. Being a data scientist is when you learn more and more about more and more until you know nothing about everything.

Kaggle uses prizes to induce the public to do stuff.
This is not a new idea:

• The Royal Navy in 1714 couldn't measure longitude, and put out a prize worth \$6 million in today's dollars to get help. John Harrison, an unknown cabinetmaker, figured out how to make a clock to solve the problem.
• In the U.S. in 2002, FOX issued a prize for the next pop solo artist, which resulted in American Idol.
• There's also the X-prize company; \$10 million was offered for the Ansari X-prize and \$100 million was lost in trying to solve it. So it doesn't always mean it's an efficient process (but it's efficient for the people offering the prize if it gets solved!).

There are two kinds of crowdsourcing models. First, we have the distributive crowdsourcing model, like Wikipedia, which asks for relatively easy but large numbers of contributions. Then, there are the singular, focused, difficult problems that Kaggle, DARPA, InnoCentive and other companies specialize in.

Some of the problems with some crowdsourcing projects include:

• They don't always evaluate your submission objectively. Instead they have a subjective measure, so they might just decide your design is bad or something. This leads to a high barrier to entry, since people don't trust the evaluation criterion.
• Also, one doesn't get recognition until after they've won or ranked highly. This leads to high sunk costs for the participants.
• Also, bad competitions often conflate participants with mechanical turks: in other words, they assume you're stupid. This doesn't lead anywhere good.
• Also, the competitions sometimes don't chunk the work into bite-size pieces, which means it's too big to do or too small to be interesting.

A good competition has a do-able, interesting question, with an evaluation metric which is transparent and entirely objective. The problem is given, the data set is given, and the metric of success is given. Moreover, prizes are established up front.
The participants are encouraged to submit their models up to twice a day during the competitions, which last on the order of a few days. This encourages a "leapfrogging" between competitors, where one ekes out a 5% advantage, giving others incentive to work harder. It also establishes a band of accuracy around a problem which you generally don't have; in other words, given no other information, you don't know if your 75% accurate model is the best possible.

The test set y's are hidden, but the x's are given, so you just use your model to get your predicted y's for the test set and upload them into the Kaggle machine to see your evaluation score. This way you don't share your actual code with Kaggle unless you win the prize (and Kaggle doesn't have to worry about which version of python you're running).

Note this leapfrogging effect is good and bad. It encourages people to squeeze out better performing models, but it also tends to make models much more complicated as they get better. One reason you don't want competitions lasting too long is that, after a while, the only way to inch up performance is to make things ridiculously complicated. For example, the original Netflix Prize lasted two years and the final winning model was too complicated for them to actually put into production.

The hole that Kaggle is filling is the following: there's a mismatch between those who need analysis and those with skills. Even though companies desperately need analysis, they tend to hoard data; this is the biggest obstacle for success.

They have had good results so far. Allstate, with a good actuarial team, challenged their data science competitors to improve their actuarial model, which, given attributes of drivers, approximates the probability of a car crash. The 202 competitors improved Allstate's internal model by 271%. There were other examples, including one where the prize was \$1,000 and it benefited the company \$100,000. A student then asked, is that fair?
There are actually two questions embedded in that one. First, is it fair to the data scientists working at the companies that engage with Kaggle? Some of them might lose their jobs, for example. Second, is it fair to get people to basically work for free and ultimately benefit a for-profit company? Does it result in data scientists losing their fair market price? Of course Kaggle charges a fee for hosting competitions, but is it enough?

[Mathbabe interjects her view: personally, I suspect this is a model which seems like an arbitrage opportunity for companies, but only while the data scientists of the world haven't realized their value and have extra time on their hands. As soon as they price their skills better they'll stop working for free, unless it's for a cause they actually believe in.]

Facebook is hiring data scientists, and they hosted a Kaggle competition where the prize was an interview. There were 422 competitors.

[Mathbabe can't help but insert her view: it's a bit too convenient for Facebook to have interviewees for data science positions in such a posture of gratitude for the mere interview. This distracts them from asking hard questions about what the data policies are and the underlying ethics of the company.]

There's a final project for the class, namely an essay grading contest. The students will need to build it, train it, and test it, just like any other Kaggle competition. Group work is encouraged.

Thought Experiment: What are the ethical implications of a robo-grader?

Some of the students' thoughts:

• Actual human graders aren't fair anyway.
• Is this the wrong question? The goal of a test is not to write a good essay but rather to do well in a standardized test. The real profit center for standardized testing is, after all, to sell books that tell you how to take the tests. It's a screening: you follow the instructions, and you get a grade depending on how well you follow instructions.
• There are really two questions: 1) Is it wise to move from the human to the machine version of the same thing, for any given thing? and 2) Are machines making things more structured, and is this inhibiting creativity? One thing is for sure: robo-grading prevents me from being compared to someone more creative.
• People want things to be standardized. It gives us a consistency that we like. People don't want artistic cars, for example.
• Will: We used machine learning to research cancer, where the stakes are much higher. In fact this whole field of data science has to be thinking about these ethical considerations sooner or later, and I think it's sooner. In the case of doctors, you could give the same doctor the same slide two months apart and get different diagnoses. We aren't consistent ourselves, but we think we are. Let's keep that in mind when we talk about the "fairness" of using machine learning algorithms in tricky situations.

Introduction to Feature Selection

"Feature extraction and selection are the most important but underrated steps of machine learning. Better features are better than better algorithms." – Will

"We don't have better algorithms, we just have more data." – Peter Norvig

Will claims that Norvig really wanted to say we have better features.

We are getting bigger and bigger data sets, but that's not always helpful. The danger is if the number of features is larger than the number of samples, or if we have a sparsity problem. We improve our feature selection process to try to improve the performance of predictions. A criticism of feature selection is that it's no better than data dredging: if we just take whatever answer we get that correlates with our target, that's not good.

There's a well-known bias-variance tradeoff: a model is "high bias" if it is too simple (the features aren't encoding enough information). In this case lots more data doesn't improve your model.
On the other hand, if your model is too complicated, then "high variance" leads to overfitting. In this case you want to reduce the number of features you are using.

We will take some material from a famous paper by Isabelle Guyon and André Elisseeff published in 2003, entitled "An Introduction to Variable and Feature Selection". There are three categories of feature selection methods: filters, wrappers, and embedded methods.

Filters order variables (i.e. possible features) with respect to some ranking (e.g. correlation with the target). This is sometimes good on a first pass over the space of features. Filters take account of the predictive power of individual features, estimating correlation or mutual information or what have you. However, the problem with filters is that you get correlated features. In other words, the filter doesn't care about redundancy. This isn't always bad and it isn't always good: on the one hand, two redundant features can be more powerful when they are both used, and on the other hand something that appears useless alone could actually help when combined with another possibly useless-looking feature.

Wrapper feature selection tries to find subsets of features that will do the trick. However, as anyone who has studied the binomial coefficients knows, the number of size-$k$ subsets of $n$ things, namely $n \choose k$, gets big fast, and the total number of subsets, $2^n$, grows exponentially. So there's a nasty opportunity for overfitting by doing this. Most subset methods are capturing some flavor of minimum-redundancy-maximum-relevance. So, for example, we could have a greedy algorithm which starts with the best feature, takes a few more highly ranked ones, removes the worst, and so on. This is a hybrid approach with a filter method. We don't have to retrain models at each step of such an approach, because there are fancy ways to see how the objective function changes as we change the subset of features we are trying out.
These are called "finite differences" and rely essentially on Taylor series expansions of the objective function.

One last word: if you have domain expertise on hand, don't go down the machine learning rabbit hole of feature selection until you've tapped your expert completely!

Decision Trees

We've all used decision trees. They're easy to understand and easy to use. How do we construct one? Choosing a feature to split on at each step is like playing 20 questions: we take whatever the most informative thing is first. For the sake of this discussion, assume we break compound questions into multiple binary questions, so the answer is "+" or "-". To quantify "what is the most informative feature," we first define the entropy of a random variable $X$ to be:

$H(X) = - p(x_+) \log_2(p(x_+)) - p(x_-) \log_2(p(x_-)).$

Note when $p(x_\pm) = 0,$ we define the corresponding term to vanish. This is consistent with the fact that $\lim_{t\to 0} t \log(t) = 0.$ In particular, if either option has probability zero, the entropy is 0. For binary variables, the entropy is maximized (at the value 1) when $p(x_+) = 0.5,$ which we can easily check using the fact that in the binary case $p(x_+) = 1 - p(x_-)$ and a bit of calculus.

Using this definition, we define the information gain for a given feature to be the entropy we lose if we know the value of that feature. To make a decision tree, then, we want to find the feature with maximal information gain and make a split on that. We keep going until all the points at the end are in the same class or we end up with no features left; in that case we take the majority vote. Optionally we prune the tree to avoid overfitting.

This is an example of an embedded feature selection algorithm: we don't need to use a filter here because the "information gain" method is doing our feature selection for us.

How do you handle continuous variables? You need to choose a threshold value so that the variable can be thought of as binary.
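To make the entropy and information-gain definitions concrete, here's a minimal sketch in plain Python (the toy feature and labels at the bottom are made up):

```python
import math

def entropy(labels):
    """Entropy H(X) of a list of binary labels ('+'/'-')."""
    n = len(labels)
    h = 0.0
    for value in set(labels):
        p = labels.count(value) / n
        h -= p * math.log2(p)  # terms with p == 0 never appear, so they "vanish"
    return h

def information_gain(feature, labels):
    """Entropy of the labels minus the weighted entropy after
    splitting on a binary feature."""
    n = len(labels)
    split = {}
    for f, y in zip(feature, labels):
        split.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

labels  = ['+', '+', '-', '-', '+', '-']
feature = [ 1,   1,   0,   0,   1,   0 ]  # this feature perfectly splits the labels
print(information_gain(feature, labels))  # → 1.0, the full entropy of the labels
```

Splitting on the feature with the largest information gain at each node is exactly the greedy tree-building step described above.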
So you could partition a user's spend into "less than $5" and "at least $5" and you'd be back in the binary variable case. It takes some extra work to compute the information gain here, because it depends on the threshold as well as the feature.

Random Forests

Random forests are cool. They incorporate "bagging" (bootstrap aggregating) and trees to make stuff better. Plus they're easy to use: you just need to specify the number of trees you want in your forest, as well as the number of features to randomly select at each node.

A bootstrap sample is a sample with replacement, which we usually take to be 80% of the actual data, but of course can be adjusted depending on how much data we have.

To construct a random forest, we construct a bunch of decision trees (we decide how many). For each tree, we take a bootstrap sample of our data, and for each node we randomly select (a second point of bootstrapping, actually) a few features, say 5 out of the 100 total features. Then we use our entropy-information-gain engine to decide which among those features we will split our tree on, and we keep doing this, choosing a different set of five features for each node of our tree. Note we could decide beforehand how deep the tree should get, but we typically don't prune the trees, since a great feature of random forests is that they incorporate idiosyncratic noise.

Here's what a decision tree looks like for surviving on the Titanic.

David Huffaker, Google: Hybrid Approach to Social Research

David is one of Rachel's collaborators at Google. They had a successful collaboration, starting with complementary skill sets; an explosion of goodness ensued when they were put together to work on Google+ with a bunch of other people, especially engineers. David brings a social scientist perspective to the analysis of social networks. He's strong in quantitative methods for understanding and analyzing online social behavior. He got a Ph.D.
in Media, Technology, and Society from Northwestern.

Google does a good job of putting people together. They blur the lines between research and development. The researchers are embedded on product teams. The work is iterative, and the engineers on the team strive to have near-production code from day 1 of a project. They leverage cloud infrastructure to deploy experiments to their mass user base and to rapidly deploy a prototype at scale. Note that, considering the scale of Google's user base, redesigning as they scale up is not a viable option. They instead do experiments with smaller groups of users.

David suggested that we, as data scientists, consider how to move to an experimental design, so as to make a causal claim between variables rather than merely describe a relationship – in other words, to move from the descriptive to the predictive.

As an example, he talked about the genesis of the "circle of friends" feature of Google+. They know people want to selectively share; they'll send pictures to their family, whereas they'd probably be more likely to send inside jokes to their friends. They came up with the idea of circles, but it wasn't clear if people would use them. How do they answer the question: will people use circles to organize their social network? It's important to know what motivates them when they decide to share.

They took a mixed-method approach, using multiple methods to triangulate on findings and insights. Given a random sample of 100,000 users, they set out to determine the popular names and categories of names given to circles. They identified 168 active users who filled out surveys, and they had longer interviews with 12.
They found that the majority were engaging in selective sharing, that most people used circles, and that the circle names were most often work-related or school-related, with elements of either a strong link ("epic bros") or a weak link ("acquaintances from PTA").

They asked the survey participants why they share content. The answers primarily came in three categories: first, the desire to share about oneself – personal experiences, opinions, etc. Second, discourse: people wanna participate in a conversation. Third, evangelism: people wanna spread information.

Next they asked participants why they choose their audiences. Again, three categories: first, privacy – many people were public or private by default. Second, relevance – they wanted to share only with those who may be interested, and they don't wanna pollute other people's data streams. Third, distribution – some people just want to maximize their potential audience.

The takeaway from this study was this: people do enjoy selectively sharing content, depending on content, context, and audience. So we have to think about designing features for the product around content, context, and audience.

Network Analysis

With large data we can look at connections between actors as a graph. For Google+, the users are the nodes, and the (directed) edges correspond to being "in the same circle". Other examples of networks:

• nodes are users in Second Life, and interactions between users are possible in three different ways, corresponding to three different kinds of edges
• nodes are websites, edges are links
• nodes are theorems, directed edges are dependencies

After you define and draw a network, you can hopefully learn stuff by looking at it or analyzing it. As you may have noticed, "social" is a layer across all of Google. Search now incorporates this layer: if you search for something you might see that your friend "+1"ed it. This is called a social annotation.
It turns out that people care more about an annotation when it comes from someone with domain expertise than when it comes from someone they're very close to. So you might care more about the opinion of a wine expert at work than the opinion of your mom when it comes to purchasing wine. That sounds obvious, but if you started the other way around, asking who you'd trust, you might start with your mom. In other words, "close ties," even if you can determine those, are not the best feature for ranking annotations. But that begs the question: what is? Typically in a situation like this we use click-through rate, or how long it takes to click. In general we need to always keep in mind a quantitative metric of success. This defines success for us, so we have to be careful.

Privacy

Human-facing technology has thorny issues of privacy, which makes stuff hard. We took a survey of how people felt uneasy about content. We asked: how does it affect your engagement? What is the nature of your privacy concerns? Turns out there's a strong correlation between privacy concern and low engagement, which isn't surprising. It's also related to how well you understand what information is being shared, and the question, when you post something, of where it goes and how much control you have over it. When you are confronted with a huge pile of complicated settings, you tend to start feeling passive.

Again, we took a survey and found broad categories of concern as follows:

• identity theft
• financial loss

digital world:
• really private stuff I searched on
• unwanted spam
• provocative photo (oh shit my boss saw that)
• unwanted solicitation
• unwanted ad targeting

physical world:
• offline threats
• harm to my family
• stalkers
• employment risks
• hassle

What is the best way to decrease concern and increase understanding and control?
Possibilities:

• Write and post a manifesto of your data policy (tried that, nobody likes to read manifestos).
• Educate users on our policies a la the Netflix feature "because you liked this, we think you might like this."
• Get rid of all stored data after a year.

Rephrase: how do we design settings to make things easier for people? How do you make it transparent?

• make a picture or graph of where data is going
• give people a privacy switchboard
• categorize the settings you show them by things you don't have a choice about vs. things you do
• make reasonable default settings so people don't have to worry about it

David left us with these words of wisdom: as you move forward and have access to big data, you really should complement them with qualitative approaches. Use mixed methods to come to a better understanding of what's going on. Qualitative surveys can really help.

Categories: data science, math education, statistics

## Columbia Data Science course, week 5: GetGlue, time series, financial modeling, advanced regression, and ethics

October 5, 2012

I was happy to be giving this week's lecture in Rachel Schutt's Columbia Data Science course, where I discussed time series, financial modeling, and ethics. I blogged previous classes here. The first few minutes of class were for a case study with GetGlue, a New York-based start-up that won the Mashable breakthrough start-up of the year in 2011 and is backed by some of the VCs that also fund big names like Tumblr, Etsy, Foursquare, etc. GetGlue is part of the social TV space. Their Lead Scientist, Kyle Teague, came to tell the class a little bit about GetGlue and some of what he worked on there. He also came to announce that GetGlue was giving the class access to a fairly large data set of user check-ins to TV shows and movies. Kyle's background is in electrical engineering; he placed in the 2011 KDD Cup (which we learned about last week from Brian), and he started programming when he was a kid.
GetGlue's goal is to address the problem of content discovery, primarily within the movie and TV space. The usual model for finding out what's on TV is the 1950s TV Guide schedule, and that's still how we're supposed to find things to watch. There are thousands of channels, and it's getting increasingly difficult to find out what's good on. GetGlue wants to change this model by giving people personalized TV recommendations and personalized guides. There are other ways GetGlue uses data science, but for the most part we focused on how the recommendation system works.

Users "check in" to TV shows, which means they can tell people they're watching a show. This creates a time-stamped data point. They can also do other actions such as liking or commenting on the show. So this is a triple: {user, action, object}, where the object is a TV show or movie.

This induces a bipartite graph. A bipartite graph or network contains two types of nodes, here users and TV shows. Edges exist between users and TV shows, but not between users and users or between TV shows and TV shows. So Bob and Mad Men are connected because Bob likes Mad Men, and Sarah is connected to Mad Men and Lost because Sarah liked Mad Men and Lost. But Bob and Sarah aren't connected, nor are Mad Men and Lost.

A lot can be learned from this graph alone. But GetGlue finds ways to create edges between users and between objects (TV shows, or movies). Users can follow each other or be friends on GetGlue, and GetGlue can also learn that two people are similar [do they do this?]. GetGlue also hires human evaluators to make connections, or directional edges, between objects. So True Blood and Buffy the Vampire Slayer might be similar for some reason, and so the humans create an edge in the graph between them. There were nuances around the edge being directional: they may draw an arrow pointing from Buffy to True Blood but not vice versa, for example, so their notion of "similar" or "close" captures both content and popularity.
(That's a made-up example.) Pandora does something like this too.

Another important aspect is time. The user checked in to or liked a show at a specific time, so the triple extends to a 4-tuple with a timestamp: {user, action, object, timestamp}. This is essentially the data set the class has access to, although it's slightly more complicated and messy than that. Their first assignment with this data will be to explore it, try to characterize and understand it, gain intuition around it, and visualize what they find.

Students in the class asked him questions around topics like the value of formal education in becoming a data scientist (do you need one? Kyle's time spent doing signal processing in research labs was valuable, but so was his time spent coding for fun as a kid), what would be messy about a data set, why the data set would be messy (often bugs in the code), how they would know (their QA, and values that don't make sense), what language he uses to prototype algorithms (python), and how he knows his algorithm is good.

Then it was my turn. I started out with my data scientist profile: as you can see, I feel I have the most weakness in CS. Although I can use python pretty proficiently – in particular I can scrape and parse data, prototype models, and use matplotlib to draw pretty pictures – I am no java map-reducer, and I bow down to those people who are. I am also completely untrained in data visualization, but I know enough to get by and give presentations that people understand.

Thought Experiment

I asked the students the following question: what do you lose when you think of your training set as a big pile of data and ignore the timestamps?

They had some pretty insightful comments. One thing they mentioned off the bat is that you won't know cause and effect if you don't have any sense of time.
Of course that's true, but it's not quite what I meant, so I amended the question to allow you to collect relative time differentials – so "time since user last logged in" or "time since last click" or "time since last insulin injection" – but not absolute timestamps.

What I was getting at, and what they came up with, was that when you ignore the passage of time through your data, you ignore trends altogether, as well as seasonality. So for the insulin example, you might note that 15 minutes after your insulin injection your blood sugar consistently goes down, but you might not notice an overall upward trend in your blood sugar over the past few months if your dataset has no absolute timestamps on it. This idea, of keeping track of trends and seasonalities, is very important in financial data, and essential to keep track of if you want to make money, considering how small the signals are.

How to avoid overfitting when you model with time series

After discussing seasonality and trends in the various financial markets, we started talking about how to avoid overfitting your model. Specifically, I started out with having a strict concept of in-sample (IS) and out-of-sample (OOS) data. Note the OOS data is not meant as testing data – all of that happens inside the IS data. The OOS data is meant to be the data you use after finalizing your model, so that you have some idea how the model will perform in production.

Next, I discussed the concept of causal modeling. Namely, we should never use information from the future to predict something now. Similarly, when we have a set of training data, we don't know the "best fit coefficients" for that training data until after the last timestamp on all the data. As we move forward in time from the first timestamp to the last, we expect to get different sets of coefficients as more events happen. One consequence of this is that, instead of getting one set of coefficients, we actually get an evolution of each coefficient.
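As a toy illustration of this evolving-coefficient idea (not the actual class code): refit a univariate regression through the origin causally, recording the slope at each time step using only the data seen so far.

```python
def walkforward_slopes(x, y, min_points=3):
    """At each time t, refit the through-the-origin OLS slope
    using only the data observed up to and including t."""
    slopes = []
    sxy = sxx = 0.0
    for t, (xi, yi) in enumerate(zip(x, y)):
        sxy += xi * yi
        sxx += xi * xi
        if t + 1 >= min_points and sxx > 0:
            slopes.append(sxy / sxx)  # best-fit slope on the data so far
    return slopes

# a stable relationship: the evolving coefficient settles down immediately
print(walkforward_slopes([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # → [2.0, 2.0, 2.0]
```

If the slope sequence keeps flipping sign instead of settling, that instability is exactly the warning sign discussed next.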
This is helpful because it gives us a sense of how stable those coefficients are. In particular, if one coefficient has changed sign 10 times over the training set, then we expect a good estimate for it is zero, not the so-called "best fit" at the end of the data.

One last word on causal modeling and IS/OOS: it is consistent with production code. Namely, you are always acting, in the training and in the OOS simulation, as if you're running your model in production and seeing how it performs. Of course you fit your model in sample, so you expect it to perform better there than in production. Another way to say this is that, once you have a model in production, you will have to make decisions about the future based only on what you know now (so it's causal), and you will want to update your model whenever you gather new data. So the coefficients of your model are living organisms that continuously evolve.

Submodels of Models

We often "prepare" the data before putting it into a model. Typically the way we prepare it has to do with the mean or the variance of the data, or sometimes the log (and then the mean or the variance of that transformed data). But to be consistent with the causal nature of our modeling, we need to make sure our running estimates of mean and variance are also causal. Once we have causal estimates of our mean $\overline{y}$ and variance $\sigma_y^2$, we can normalize the next data point with these estimates, just like we do to get from a gaussian distribution to the standard normal distribution:

$y \mapsto \frac{y - \overline{y}}{\sigma_y}$

Of course we may have other things to keep track of as well to prepare our data, and we might run other submodels of our model.
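A sketch of such a causal normalization in plain Python (expanding-window estimates via Welford's running mean/variance; each point only ever sees estimates built from its past):

```python
import math

def causal_normalize(series):
    """Normalize each point using only the mean/std of PAST data,
    so no information from the future leaks in."""
    out = []
    n, mean, m2 = 0, 0.0, 0.0  # Welford's running mean and sum of squared deviations
    for y in series:
        if n >= 2:
            std = math.sqrt(m2 / (n - 1))
            out.append((y - mean) / std if std > 0 else 0.0)
        else:
            out.append(0.0)  # not enough history yet
        # update the running estimates AFTER using them
        n += 1
        delta = y - mean
        mean += delta / n
        m2 += delta * (y - mean)
    return out

print(causal_normalize([1.0, 2.0, 3.0, 4.0, 10.0]))
```

In production you'd often prefer the exponentially downweighted versions of these estimates, which come up below.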
For example, we may choose to consider only the "new" part of something, which is equivalent to trying to predict something like $y_t - y_{t-1}$ instead of $y_t.$ Or we may train a submodel to figure out what part of $y_{t-1}$ predicts $y_t,$ so a submodel which is a univariate regression or something. There are lots of choices here, but the point is it's all causal: you have to be careful, when you train your overall model, how you introduce your next data point, making sure the steps are all in order of time and that you're never ever cheating and looking ahead in time at data that hasn't happened yet.

Financial time series

In finance we consider returns, say daily. And they're not percent returns; they're actually log returns: if $F_t$ denotes a close on day $t,$ then the return that day is defined as $\log(F_t/F_{t-1}).$ See more about this here. Then you get the following log returns:

What's that mess? It's crazy volatility caused by the financial crisis. We sometimes (not always) want to account for that volatility by normalizing with respect to it (described above). Once we do that we get something like this:

Which is clearly better behaved. Note this process is discussed in this post. We could also normalize with respect to the mean, but we typically assume the mean of daily returns is 0, so as not to bias our models on short-term trends.

Financial Modeling

One thing we need to understand about financial modeling is that there's a feedback loop: if you find a way to make money, it eventually goes away – sometimes people refer to this as the fact that the "market learns over time." One way to see this is that, in the end, your model comes down to knowing some price is going to go up in the future, so you buy it before it goes up, you wait, and then you sell it at a profit. But if you think about it, your buying it has actually changed the process, and decreased the signal you were anticipating.
That's how the market learns: it's a combination of a bunch of algorithms anticipating things and making them go away.

The consequence of this learning over time is that the existing signals are very weak. We are happy with a 3% correlation for models that have a horizon of 1 day (a "horizon" for your model is how long you expect your prediction to be good). This means not much signal, and lots of noise! In particular, lots of the machine learning "metrics of success" for models, such as measurements of precision or accuracy, are not very relevant in this context. So instead of measuring accuracy, we generally draw a picture to assess models, namely of the (cumulative) PnL of the model. This generalizes to any model as well: you plot the cumulative sum of the product of the demeaned forecast and the demeaned realized value. In other words, you see if your model consistently does better than the "stupidest" model of assuming everything is average. If you plot this and it drifts up and to the right, you're good. If it's too jaggedy, that means your model is taking big bets and isn't stable.

Why regression?

From the above we know the signal is weak. If you imagine there's some complicated underlying relationship between your information and the thing you're trying to predict, get over knowing what that is – there's too much noise to find it. Instead, think of the function as possibly complicated but continuous, and imagine you've written it out as a Taylor series. Then you can't possibly expect to get your hands on anything but the linear terms.

Don't think about using logistic regression, either, because you'd need to ignore size, which matters in finance – it matters if a stock went up 2% instead of 0.01%. Logistic regression forces you to have an on/off switch, which would be possible but would lose a lot of information. Considering the fact that we are always in a low-information environment, this is a bad idea.
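Returning to the cumulative PnL picture for a moment, in code it might look like this (plain Python, made-up data; for simplicity the demeaning here uses the whole sample, whereas a truly causal backtest would use running means):

```python
import random

def cumulative_pnl(forecasts, realized):
    """Cumulative sum of (demeaned forecast) * (demeaned realized) --
    the picture we draw to assess a model instead of accuracy metrics."""
    fbar = sum(forecasts) / len(forecasts)
    rbar = sum(realized) / len(realized)
    pnl, total = [], 0.0
    for f, r in zip(forecasts, realized):
        total += (f - fbar) * (r - rbar)
        pnl.append(total)
    return pnl  # plot this (e.g. with matplotlib); up-and-to-the-right is good

# a weak signal: the forecast is only faintly correlated with what happens
random.seed(0)
realized  = [random.gauss(0, 1) for _ in range(1000)]
forecasts = [0.03 * r + random.gauss(0, 1) for r in realized]
print(cumulative_pnl(forecasts, realized)[-1])  # final cumulative PnL
```

With a 3%-ish correlation the drift is barely distinguishable from the noise, which is exactly the point about weak signals above.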
Note that although I'm claiming you probably want to use linear regression in a noisy environment, the actual terms themselves don't have to be linear in the information you have: you can always take products of various terms as x's in your regression. But you're still fitting a linear model in non-linear terms.

Advanced regression

The first thing I need to explain is the exponential downweighting of old data, which I already used in a graph above, where I normalized returns by volatility with a decay of 0.97. How do I do this? Working from this post again, the formula is essentially a weighted version of the normal one, where I weight recent data more than older data, and where the weight of older data is a power of some parameter $s$ called the decay. The exponent is the number of time intervals since that data was new. Putting that together, the formula we get is:

$V_{old} = (1-s) \cdot \sum_i r_i^2 s^i.$

The factor of $(1-s)$ comes from dividing by the sum of the weights: the weights are powers of $s,$ so it's a geometric sum, and the sum is given by $1/(1-s).$

One cool consequence of this formula is that it's easy to update: if we have a new return $r_0$ to add to the series, then it's not hard to show we just want

$V_{new} = s \cdot V_{old} + (1-s) \cdot r_0^2.$

In fact this is the general rule for updating exponentially downweighted estimates, and it's one reason we like them so much: you only need to keep in memory your last estimate and the number $s.$

How do you choose your decay? This is an art rather than a science, and depends on the domain you're in. Think about how many days (or time periods) it takes for an old data point's weight to fall to half that of a new data point, and compare that to how fast the market forgets stuff.

This downweighting of old data is an example of inserting a prior into your model, where here the prior is "new data is more important than old data." What are other kinds of priors you can have?
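Before moving on to other priors: the downweighted update above, and the volatility normalization from earlier, can be sketched like so (plain Python; the starting value `v0` is a made-up seed for the estimate, and `s = 0.97` is the decay used in the graph above):

```python
import math

def ewma_var_update(v_old, r_new, s=0.97):
    """V_new = s * V_old + (1 - s) * r_new^2 -- only the last
    estimate and the decay s need to be kept in memory."""
    return s * v_old + (1 - s) * r_new ** 2

def vol_normalize(returns, s=0.97, v0=1e-4):
    """Divide each log return by a volatility estimate built
    only from PAST returns, so the normalization stays causal."""
    v, out = v0, []
    for r in returns:
        out.append(r / math.sqrt(v))
        v = ewma_var_update(v, r, s)  # update AFTER using the estimate
    return out

print(ewma_var_update(1.0, 0.0))        # → 0.97: the old estimate just decays
print(vol_normalize([0.01, -0.02])[0])  # ≈ 1.0: 0.01 / sqrt(1e-4)
```

With s = 0.97 an observation's weight halves roughly every 23 periods (0.97^23 ≈ 0.5), which is one way to think about picking the decay.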
Priors

Priors can be thought of as opinions like the above. Besides "new data is more important than old data," we may decide our prior is "coefficients vary smoothly." This is relevant when we decide, say, to use a bunch of old values of some time series to help predict the next one, giving us a model like:

$y = F_t = \alpha_0 + \alpha_1 F_{t-1} + \alpha_2 F_{t-2} + \epsilon,$

which is just the example where we take the last two values of the time series $F$ to predict the next one. But we could use more than two values, of course. [Aside: in order to decide how many values to use, you might want to draw an autocorrelation plot for your data.] The way you'd place the prior about the relationship between coefficients (in this case consecutive lagged data points) is by adding a matrix to your covariance matrix when you perform linear regression. See more about this here.

Ethics

I then talked about modeling and ethics. My goal is to get this next-gen group of data scientists sensitized to the fact that they are not just nerds sitting in the corner – they have increasingly important ethical questions to consider while they work.

People tend to overfit their models. It's human nature to want your baby to be awesome. They also underestimate bad news and blame other people for it, because nothing their baby has done or is capable of is bad, unless someone else made them do it. Keep these things in mind.

I then described what I call the deathspiral of modeling, a term I coined in this post on creepy model watching. I counseled the students to:

• try to maintain skepticism about their models and how their models might get used,
• shoot holes in their own ideas,
• accept challenges and devise tests as scientists rather than defending their models using words – if someone thinks they can do better, then let them try, and agree on an evaluation method beforehand,
• in general, try to consider the consequences of their models.
I then showed them Emanuel Derman's Hippocratic Oath of Modeling, which was made for financial modeling but fits perfectly into this framework. I discussed the politics of working in industry, namely that even if they are skeptical of their model, there's always the chance that it will be used the wrong way in spite of the modeler's warnings. So the Hippocratic Oath is, unfortunately, insufficient in reality (but it's a good start!).

Finally, there are ways to do good: I mentioned stuff like DataKind. There are also ways to be transparent: I mentioned Open Models, which is so far just an idea, but Victoria Stodden is working on RunMyCode, which is similar and very awesome.

Categories: data science, finance, math education, open source tools, statistics

## What is a model?

September 28, 2012

I've been thinking a lot recently about mathematical models and how to explain them to people who aren't mathematicians or statisticians. I consider this increasingly important as more and more models are controlling our lives, such as:

• employment models, which help large employers screen through applications,
• political ad models, which allow political groups to personalize their ads,
• credit scoring models, which allow consumer product companies and loan companies to screen applicants, and,
• if you're a teacher, the Value-Added Model.

See more models here and here.

It's a big job to explain these, because the truth is they are complicated – sometimes overly so, sometimes by construction. The truth is, though, you don't really need to be a mathematician to know what a model is, because everyone uses internal models all the time to make decisions.

For example, you intuitively model everyone's appetite when you cook a meal for your family. You know that one person loves chicken (but hates hamburgers), while someone else will only eat the pasta (with extra cheese).
You even take into account that people's appetites vary from day to day, so you can't be totally precise in preparing something – there's a standard error involved.

To explain modeling at this level, then, you just need to imagine that you've built a machine that knows all the facts that you do and knows how to assemble them together to make a meal that will approximately feed your family.

If you think about it, you'll realize that you know a shit ton of information about the likes and dislikes of all of your family members, because you have so many memories of them grabbing seconds of the asparagus or avoiding the string beans. In other words, it would actually be incredibly hard to give a machine enough information about all the food preferences of all your family members, and yourself, along with the constraints of having not too much junky food, but making sure everyone had something they liked, etc. etc.

So what would you do instead? You'd probably give the machine broad categories of likes and dislikes: this one likes meat, this one likes bread and pasta, this one always drinks lots of milk and puts nutella on everything in sight. You'd dumb it down for the sake of time, in other words. The end product, the meal, may not be perfect, but it's better than no guidance at all.

That's getting closer to what real-world modeling for people is like. And the conclusion is right too: you aren't expecting your model to do a perfect job, because you only have a broad outline of the true underlying facts of the situation. Plus, when you're modeling people, you have to choose the questions to ask a priori, and they will probably come in the form of "does he/she like meat?" rather than "does he/she put nutella on everything in sight?"; in other words, the important but idiosyncratic rules won't even be seen by a generic one-size-fits-everything model.
Finally, those generic models are hugely scaled- sometimes there’s really only one out there, being used everywhere, and its flaws are compounded that many times over because of its reach. So, say you’ve got a CV with a spelling error. You’re trying to get a job, but the software that screens for applicants automatically rejects you because of this spelling error. Moreover, the same screening model is used everywhere, and you therefore don’t get any interviews because of this one spelling error, in spite of the fact that you’re otherwise qualified. I’m not saying this would happen – I don’t know how those models actually work, although I do expect points against you for spelling errors. My point is there’s some real danger in using such models on a very large scale that we know are simplified versions of reality. One last thing. The model fails in the example above, because the qualified person doesn’t get a job. But it fails invisibly; nobody knows exactly how it failed or even that it failed. Moreover, it only really fails for the applicant who doesn’t get any interviews. For the employer, as long as some qualified applicants survive the model, they don’t see failure at all. Categories: data science, math education, statistics ## Columbia Data Science course, week 4: K-means, Classifiers, Logistic Regression, Evaluation September 27, 2012 This week our guest lecturer for the Columbia Data Science class was Brian Dalessandro. Brian works at Media6Degrees as a VP of Data Science, and he’s super active in the research community. He’s also served as co-chair of the KDD competition. Before Brian started, Rachel threw us a couple of delicious data science tidbits. The Process of Data Science First we have the Real World. Inside the Real World we have: • People competing in the Olympics • Spammers sending email From this we draw raw data, e.g. logs, all the olympics records, or Enron employee emails. We want to process this to make it clean for analysis. 
We use pipelines of data munging, joining, scraping, wrangling or whatever you want to call it, and we use tools such as:
• python
• shell scripts
• R
• SQL
We eventually get the data down to a nice format, say something with columns:
name | event | year | gender | event time
Note: this is where you typically start in a standard statistics class. But it's not where we typically start in the real world.
Once you have this clean data set, you should be doing some kind of exploratory data analysis (EDA); if you don't really know what I'm talking about then look at Rachel's recent blog post on the subject. You may realize that it isn't actually clean.
Next, you decide to apply some algorithm you learned somewhere:
• k-nearest neighbors
• regression
• Naive Bayes
• (something else),
depending on the type of problem you're trying to solve:
• classification
• prediction
• description
You then:
• interpret
• visualize
• report
• communicate
At the end you have a "data product", e.g. a spam classifier.
K-means
So far we've only seen supervised learning. K-means is the first unsupervised learning technique we'll look into.
Say you have data at the user level:
• G+ data
• survey data
• medical data
• SAT scores
Assume each row of your data set corresponds to a person, say each row corresponds to information about the user as follows:
age | gender | income | Geo=state | household size
Your goal is to segment them – otherwise known as stratifying, grouping, or clustering. Why? For example:
• you might want to give different users different experiences. Marketing often does this.
• you might have a model that works better for specific groups
• hierarchical modeling in statistics does something like this.
One possibility is to choose the groups yourself. Bucket users using homemade thresholds, like by age (20-24, 25-30, etc.) or by income. In fact, say you did this, by age, gender, state, income, and marital status.
You may have 10 age buckets, 2 gender buckets, and so on, which would result in 10 x 2 x 50 x 10 x 3 = 30,000 possible bins, which is big. You can picture a five-dimensional space with buckets along each axis, and each user would then live in one of those 30,000 five-dimensional cells. You wouldn't want 30,000 marketing campaigns, so you'd have to bin the bins somewhat.
Wait, what if you want to use an algorithm instead, where you only have to decide on the number of bins?
K-means is a "clustering algorithm", and k is the number of groups. You pick k, a hyperparameter.
2-d version
Say you have users with #clicks and #impressions (or age and income – anything with just two numerical parameters). Then k-means looks for clusters on the 2-d plane. Here's a stolen and simplistic picture that illustrates what this might look like:
The general algorithm is just the same picture but generalized to d dimensions, where d is the number of features for each data point. Here's the actual algorithm:
• randomly pick k centroids
• assign each data point to the closest centroid
• move each centroid to the average location of the data points assigned to it
• repeat until the assignments don't change
It's up to you to interpret whether there's a natural way to describe these groups. This is unsupervised learning and it has issues:
• choosing an optimal k is also a problem, although $1 \leq k \leq n$, where n is the number of data points
• convergence issues – the solution can fail to exist (the configurations can fall into a loop) or can be "wrong"
• but it's also fast
• interpretability can be a problem – sometimes the answer isn't useful
• in spite of this, there are broad applications in marketing, computer vision (partitioning an image), or as a starting point for other models.
One common tool we use a lot in our systems is logistic regression.
Thought Experiment
Brian now asked us the following: How would data science differ if we had a "grand unified theory of everything"? He gave us some thoughts:
• Would we even need data science?
• Theory offers us a symbolic explanation of how the world works.
• What's the difference between physics and data science?
• Is it just accuracy? After all, Newton wasn't completely precise, but was pretty close.
If you think of the sciences as a continuum, where physics is all the way on the right, and as you go left, you get more chaotic, then where is economics on this spectrum? Marketing? Finance? As we go left, we're adding randomness (and, as a clever student points out, salary as well).
Bottom line: if we could model this data science stuff like we know how to model physics, we'd know when people will click on what ad. The real world isn't this well understood, nor do we expect it to be in the future.
Does "data science" deserve the word "science" in its name? Here's why maybe the answer is yes. We always have more than one model, and our models are always changing.
The art in data science is this: translating the problem into the language of data science.
The science in data science is this: given raw data, constraints, and a problem statement, you have an infinite set of models to choose from, which you will use to maximize performance on some evaluation metric that you will have to specify. Every design choice you make can be formulated as a hypothesis, which you will either validate or refute through rigorous testing and experimentation.
Never underestimate the power of creativity: usually people have vision but no method. As the data scientist, you have to turn it into a model within the operational constraints. You need to optimize a metric that you get to define. Moreover, you do this with a scientific method, in the following sense. Namely, you hold onto your existing best performer, and once you have a new idea to prototype, you set up an experiment wherein the two models compete. You therefore have a continuous scientific experiment, and in that sense you can justify it as a science.
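Back to k-means for a moment: the algorithm above is short enough to sketch in a few lines of python. The points and the choice of k=2 below are made up, just to illustrate the assign/re-center loop on two obvious blobs in the (#clicks, #impressions) plane:

```python
import random

def kmeans(points, k, iterations=100):
    """Plain k-means: pick k random centroids, assign, re-center, repeat."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # assign each data point to the closest centroid (squared distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            distances = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # move each centroid to the average location of its assigned points
        new_centroids = []
        for c, cluster in zip(centroids, clusters):
            if cluster:
                new_centroids.append(tuple(sum(x) / len(cluster) for x in zip(*cluster)))
            else:
                new_centroids.append(c)  # empty cluster: keep the old centroid
        if new_centroids == centroids:  # assignments won't change anymore
            break
        centroids = new_centroids
    return centroids, clusters

random.seed(0)  # so the sketch is reproducible
# two obvious blobs of users
points = [(1, 2), (2, 1), (1, 1), (10, 12), (11, 10), (12, 11)]
centroids, clusters = kmeans(points, k=2)
```

On well-separated data like this it converges in a handful of iterations; the convergence and interpretability caveats from the list above still apply in general.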
Classifiers
Given
• data,
• a problem, and
• constraints,
we need to determine:
• a classifier,
• an optimization method,
• a loss function,
• features, and
• an evaluation metric.
Today we will focus on the process of choosing a classifier.
Classification involves mapping your data points into a finite set of labels, or into the probability of a given label or labels. Examples of when you'd want to use classification:
• will someone click on this ad?
• what number is this?
• what is this news article about?
• is this spam?
• is this pill good for headaches?
From now on we'll talk about binary classification only (0 or 1). Examples of classification algorithms:
• decision trees
• random forests
• naive Bayes
• k-nearest neighbors
• logistic regression
• support vector machines
• neural networks
Which one should we use? One possibility is to try them all and choose the best performer. This is fine if you have no constraints, or if you ignore them. But usually constraints are a big deal – you might have tons of data, or not much time, or both. If I need to update 500 models a day, I do need to care about runtime; these end up being bidding decisions. Some algorithms are slow – k-nearest neighbors, for example. Linear models, by contrast, are very fast.
One under-appreciated constraint of a data scientist is this: your own understanding of the algorithm. Ask yourself carefully: do you understand it for real? Really? Admit it if you don't. You don't have to be a master of every algorithm to be a good data scientist. The truth is, getting the "best fit" of an algorithm often requires intimate knowledge of said algorithm. Sometimes you need to tweak an algorithm to make it fit your data. A common mistake for people not completely familiar with an algorithm is to overfit.
Another common constraint: interpretability. You often need to be able to interpret your model, for the sake of the business for example. Decision trees are very easy to interpret.
Random forests, on the other hand, are not so easy to interpret, even though they're almost the same thing – they can take exponentially longer to explain in full. If you don't have 15 years to spend understanding a result, you may be willing to give up some accuracy in order to have it easy to understand. Note that credit card companies have to be able to explain their models by law, so decision trees make more sense for them than random forests.
How about scalability? In general, there are three things you have to keep in mind when considering scalability:
• learning time: how much time does it take to train the model?
• scoring time: how much time does it take to give a new user a score once the model is in production?
• model storage: how much memory does the production model use up?
Here's a useful paper to look at when comparing models: "An Empirical Comparison of Supervised Learning Algorithms", from which we learn:
• Simpler models are more interpretable but aren't as good performers.
• The question of which algorithm works best is problem dependent.
• It's also constraint dependent.
At M6D, we need to match clients (advertising companies) to individual users. We have logged the sites they have visited on the internet. Different sites collect this information for us. We don't look at the contents of the page – we take the url and hash it into some random string, and then we have, say, the following data about a user we call "u":
u = <xyz, 123, sdqwe, 13ms>
This means u visited 4 sites and their urls hashed to the above strings. Recall last week we learned about a spam classifier where the features are words. We aren't looking at the meaning of the words, so they might as well be strings. At the end of the day we build a giant matrix whose columns correspond to sites and whose rows correspond to users, and there's a "1" if that user went to that site. To make this a classifier, we also need to associate a behavior, say "clicked on a shoe ad". So, a label.
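To make that user-by-site matrix concrete, here's a tiny sketch in python. The second user "v" and the hashed url "abc" are made up; "u" and its urls are the example from above:

```python
# users mapped to the hashed urls of the sites they visited
users = {
    "u": ["xyz", "123", "sdqwe", "13ms"],
    "v": ["xyz", "abc"],          # hypothetical second user
}

# columns of the matrix: one per site seen anywhere
sites = sorted({site for visited in users.values() for site in visited})

# rows: one per user, with a 1 if that user went to that site
matrix = {
    user: [1 if site in set(visited) else 0 for site in sites]
    for user, visited in users.items()
}

# the labels come from elsewhere, e.g. did the user click on a shoe ad?
labels = {"u": 1, "v": 0}
```

In practice the matrix is enormous and sparse, so you'd store only the indices of the 1s, just like the spam word vectors later on.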
Once we've labeled as above, this looks just like spam classification. We can now rely on well-established methods developed for spam classification – reduction to a previously solved problem.
Logistic Regression
We have three core problems as data scientists at M6D:
• feature engineering,
• user-level conversion prediction,
• bidding.
We will focus on the second. We use logistic regression- it's highly scalable and works great for binary outcomes.
What if you wanted to do something else? You could simply find a threshold so that above it you get 1 and below it you get 0. Or you could use a linear model like linear regression, but then you'd need to cut it off below 0 or above 1. What's better: fit a function that is bounded inside [0,1]. For example, the logit function
$P(t)= \frac{1}{1+ e^{-t}}.$
We want to estimate
$P(c_i | x) = f(x) = \frac{1}{1 + e^{-(\alpha + \beta^t x)}}.$
To make this a linear model in the outcomes $c_i$, we take the log of the odds ratio:
$\ln(P(c_i | x)/(1-P(c_i | x))) = \alpha + \beta^t x.$
The parameter $\alpha$ keeps the shape of the logit curve but shifts it back and forth. To interpret $\alpha$ further, consider what we call the base rate, the unconditional probability of "1" (so, in the case of ads, the base rate would correspond to the click-through rate, i.e. the overall tendency for people to click on ads; this is typically on the order of 1%). If you had no information except the base rate, the average prediction would be just that. In a logistic regression, $\alpha$ determines the base rate. Specifically, the base rate is approximately equal to $\frac{1}{1+e^{-\alpha}}.$
The parameter $\beta$ defines the slope of the logit curve. Note that in general it's a vector which is as long as the number of features we are using for each data point.
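The base-rate claim is easy to check numerically in python; here's a minimal sketch (the 1% click-through rate is the ballpark figure mentioned above, and the resulting $\alpha$ is just its log-odds):

```python
import math

def logit_prob(t):
    # P(t) = 1 / (1 + e^{-t})
    return 1.0 / (1.0 + math.exp(-t))

# with no features (beta^t x = 0), the prediction is just 1/(1 + e^{-alpha}),
# so alpha pins down the base rate. For a ~1% click-through rate:
alpha = math.log(0.01 / 0.99)   # inverting the log-odds formula
base_rate = logit_prob(alpha)
```

So an intercept of about -4.6 corresponds to a 1% base rate, and the features in $\beta^t x$ move individual users up or down from there.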
Our immediate modeling goal is to use our training data to find the best choices for $\alpha$ and $\beta.$ We will use maximum likelihood estimation, via convex optimization, to achieve this; we can't just use derivatives and vector calculus like we did with linear regression because it's a complicated function of our data.
The likelihood function $L$ is defined by:
$L(\Theta | X_1, X_2, \dots , X_n) = P(X | \Theta) = P(X_1 | \Theta) \cdot \dots \cdot P(X_n | \Theta),$
where we are assuming the data points $X_i$ are independent and where $\Theta = \{\alpha, \beta\}.$ We then search for the parameters that maximize this, having observed our data:
$\Theta_{MLE} = \mathrm{argmax}_{\Theta} \prod_1^n P(X_i | \Theta).$
The probability of a single observation is $p_i^{Y_i} \cdot (1-p_i)^{1-Y_i},$ where $p_i = 1/(1+e^{-(\alpha + \beta^t x_i)})$ is the modeled probability of a "1" for the binary outcome $Y_i.$ Taking the product of all of these we get our likelihood function, which we want to maximize. Similar to last week, we now take the log and get something concave, so it has to have a global maximum. Finally, we use numerical techniques to find it, which essentially follow the gradient, like Newton's method from calculus. Computer programs can do this pretty well. These algorithms depend on a step size, which we will need to adjust as we get closer to the global max or min – there's an art to this piece of numerical optimization as well. Each step of the algorithm looks something like this:
$x_{n+1} = x_n - \gamma_n \nabla F(x_n),$
where remember we are actually optimizing our parameters $\alpha$ and $\beta$ to maximize the (log) likelihood function, so the $x$ you see above is really a vector of $\beta$s and the function $F$ corresponds to the negative log-likelihood $-\log(L)$ (we minimize the negative of what we want to maximize).
"Flavors" of Logistic Regression for convex optimization
The Newton's method we described above is also called Iteratively Reweighted Least Squares. It uses the curvature of the log-likelihood to choose an appropriate step direction.
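Here's a sketch of that iterative step for a one-feature logistic regression, fitting $\alpha$ and $\beta$ by following the gradient of the log-likelihood. The training data and the fixed step size are made up for illustration; a real system would use a library or a smarter optimizer with adaptive steps:

```python
import math

# made-up training data: feature x, binary outcome y (the 1s sit at larger x)
data = [(0.5, 0), (1.0, 0), (1.5, 0), (2.5, 1), (3.0, 1), (3.5, 1)]

alpha, beta, gamma = 0.0, 0.0, 0.1   # parameters and step size

for _ in range(2000):
    grad_a = grad_b = 0.0
    for x, y in data:
        # p_i = 1 / (1 + e^{-(alpha + beta * x_i)})
        p = 1.0 / (1.0 + math.exp(-(alpha + beta * x)))
        # gradient of the log-likelihood: sum over points of (y - p) * [1, x]
        grad_a += y - p
        grad_b += (y - p) * x
    # step *up* the gradient, since we're maximizing the log-likelihood
    alpha += gamma * grad_a
    beta += gamma * grad_b

def predict(x):
    return 1.0 / (1.0 + math.exp(-(alpha + beta * x)))
```

After training, users with small x get probabilities near 0 and users with large x get probabilities near 1, which is exactly the bounded-in-[0,1] behavior we wanted from the logit.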
The actual calculation involves the Hessian matrix – a k x k matrix, where k is the number of features – and in particular requires inverting it. This is bad when there are lots of features, as in 10,000 or something. Typically we don't have that many features, but it's not impossible.
Another possible method to maximize our likelihood or log-likelihood is called Stochastic Gradient Descent. It approximates the gradient using a single observation at a time. The algorithm updates the current best-fit parameters each time it sees a new data point. The good news is that there's no big matrix inversion, and it works well with huge data and with sparse features; it's a big deal in Mahout and Vowpal Wabbit. The bad news is it's not such a great optimizer and it's very dependent on step size.
Evaluation
We generally use different evaluation metrics for different kinds of models.
First, for ranking models, where we just want to know a relative rank versus an absolute score, we'd look to one of:
Second, for classification models, we'd look at the following metrics:
• lift: how many more people are buying or clicking because of the model
• accuracy: how often the correct outcome is being predicted
• f-score
• precision
• recall
Finally, for density estimation, where we need to know an actual probability rather than a relative score, we'd look to:
In general it's hard to compare lift curves, but you can compare AUC (area under the receiver operating characteristic curve) – AUCs are "base rate invariant." In other words, if you bring the click-through rate from 1% to 2%, that's 100% lift, but if you bring it from 4% to 7% that's less lift but more effect. AUC does a better job in such a situation when you want to compare.
Density estimation tests tell you how well you are fitting the conditional probability. In advertising, this may arise if you have a situation where each ad impression costs $c$ and for each conversion you receive $q$. You will want to target every conversion that has a positive expected value, i.e.
whenever $P(Conversion | X) \cdot q > c.$ But to do this you need to make sure the probability estimate on the left is accurate, which in this case means something like the mean squared error of the estimator being small. Note a model can give you good rankings but bad probability estimates. Similarly, features that rank highly on AUC don't necessarily rank well with respect to mean absolute error. So feature selection, as well as your evaluation method, is completely context-driven.
Categories: data science, math education, open source tools, statistics

## Evaluating professor evaluations

September 24, 2012

I recently read this New York Times "Room for Debate" on professor evaluations. There were some reasonably good points made, with people talking about the trend that students generally give better evaluations to attractive professors and to easy-grading professors, and that they are generally more interested in the short term than in the long term in this sense. For these reasons, it was stipulated, it would be better and more informative to have anonymous evaluations, or to have students come back after some time to give evaluations, or interesting ideas like that.
Then there was a crazy crazy man named Jeff Sandefer, co-founder and master teacher at the Acton School of Business in Austin, Texas. He likes to call his students "customers" and here's how he deals with evaluations:
Acton, the business school that I co-founded, is designed and is led exclusively by successful chief executives. We focus intently on customer feedback. Every week our students rank each course and professor, and the results are made public for all to see. We separate the emotional venting from constructive criticism in the evaluations, and make frequent changes in the program in real time. We also tie teacher bonuses to the student evaluations and each professor signs an individual learning covenant with each student.
We have eliminated grade inflation by using a forced curve for student grades, and students receive their grades before evaluating professors. Not only do we not offer tenure, but our lowest rated teachers are not invited to return.
First of all, I'm not crazy about the idea of weekly rankings and public shaming going on here. And how do you separate emotional venting from constructive criticism anyway? Isn't the customer always right? Overall the experience of the teachers doesn't sound good – if I have a choice as a teacher, I teach elsewhere, unless the pay and the students are stellar. On the other hand, I think it's interesting that they have a curve for student grades. This does prevent the extra-good evaluations coming straight from grade inflation (I've seen it, it does happen).
Here's one thing I didn't see discussed, which is the students themselves and how much they want to be in the class. When I taught first-semester calculus at Barnard twice in consecutive semesters, my experience was vastly different in the two classes. The first time I taught, in the Fall, my students were mostly straight out of high school, bright-eyed and bushy-tailed, and were happy to be there, and I still keep in touch with some of them. It was a great class, and we all loved each other by the end of it. I got crazy good reviews.
By contrast, the second time I taught the class, which was the next semester, my students were annoyed, bored, and whiny. I had too many students in the class, partly because my reviews were so good. So the class was different on that score, but I don't think that mattered so much to my teaching. My theory, which was backed up by all the experienced profs in the math department, was that I had the students who were avoiding calculus for some reason. And when I thought about it, they weren't straight out of high school, they were all over the map.
They generally were there only because they needed some kind of calculus to fulfill a requirement for their major. Unsurprisingly, I got mediocre reviews, with some really pretty nasty ones. The nastiest ones, I noticed, all had some giveaway that they had a bad attitude- something like, "Cathy never explains anything clearly, and I hate calculus." My conclusion is that I get great evaluations from students who want to learn calculus and nasty evaluations from students who resent me asking them to really learn calculus.
What should we do about prof evaluations? The problem with using evaluations to measure professor effectiveness is that you might be a prof who has only ever taught calculus in the Spring, and then you'd be wrongfully punished. That's where we are now, and people know it, so they just mostly ignore the evaluations. Of course, the problem with never using these evaluations is that they might actually contain good information that you could use to get better at teaching.
We have a lot of data collected on teacher evaluations, so I figure we should be analyzing it to see if there really is a useful signal or not. And we should use domain expertise from experienced professors to see if there are any other effects besides the "Fall/Spring attitude towards math" effect to keep in mind. It's obviously idiosyncratic, depending on the field and even on which class it is, i.e. Calc II versus Calc III. If there even is a signal after you extract the various effects and the "attractiveness" effect, I expect it to be very noisy, so I'd hate to see someone's entire career depend on evaluations unless there was something really outrageous going on. In any case it would be fun to do that analysis.
Categories: data science, math, math education, statistics ## Columbia Data Science course, week 3: Naive Bayes, Laplace Smoothing, and scraping data off the web September 20, 2012 In the third week of the Columbia Data Science course, our guest lecturer was Jake Hofman. Jake is at Microsoft Research after recently leaving Yahoo! Research. He got a Ph.D. in physics at Columbia and taught a fantastic course on modeling last semester at Columbia. After introducing himself, Jake drew up his “data science profile;” turns out he is an expert on a category that he created called “data wrangling.” He confesses that he doesn’t know if he spends so much time on it because he’s good at it or because he’s bad at it. Thought Experiment: Learning by Example Jake had us look at a bunch of text. What is it? After some time we describe each row as the subject and first line of an email in Jake’s inbox. We notice the bottom half of the rows of text looks like spam. Now Jake asks us, how did you figure this out? Can you write code to automate the spam filter that your brain is? Some ideas the students came up with: • Any email is spam if it contains Viagra references. Jake: this will work if they don’t modify the word. • Maybe something about the length of the subject? • Exclamation points may point to spam. Jake: can’t just do that since “Yahoo!” would count. • Jake: keep in mind spammers are smart. As soon as you make a move, they game your model. It would be great if we could get them to solve important problems. • Should we use a probabilistic model? Jake: yes, that’s where we’re going. • Should we use k-nearest neighbors? Should we use regression? Recall we learned about these techniques last week. Jake: neither. We’ll use Naive Bayes, which is somehow between the two. Why is linear regression not going to work? 
Say you make a feature for each lowercase word that you see in any email, and then use R's lm function:
lm(spam ~ word1 + word2 + …)
Wait, that's too many variables compared to observations! We have on the order of 10,000 emails with on the order of 100,000 words. This will definitely overfit. Technically, this corresponds to the fact that the matrix in the equation for linear regression is not invertible. Moreover, maybe we can't even store it because it's so huge. Maybe you could limit it to the top 10,000 words? Even so, that's too many variables vs. observations to feel good about it. Another thing to consider is that the target is 0 or 1 (0 if not spam, 1 if spam), whereas with linear regression you wouldn't actually get a 0 or a 1, you'd get a number. Of course you could choose a critical value so that above it we call the result "1" and below it "0". Next week we'll do even better when we explore logistic regression, which is set up to model a binary response like this.
To use k-nearest neighbors we would still need to choose features, probably corresponding to words, and you'd likely define the value of those features to be 0 or 1 depending on whether the word is present or not. This leads to a weird notion of "nearness". Again, with 10,000 emails and 100,000 words, we'll encounter a problem: it's not a singular matrix this time, but rather that the space we'd be working in has too many dimensions. This means that, for example, it requires lots of computational work to even compute distances, but even that's not the real problem. The real problem is even more basic: even your nearest neighbors are really far away. This is called "the curse of dimensionality", and it makes for a poor algorithm.
Question: what if sharing a bunch of words doesn't mean sentences are near each other in the sense of language? I can imagine two sentences with the same words but very different meanings.
Jake: it's not as bad as it sounds like it might be – I'll give you references at the end that partly explain why.
Aside: digit recognition
In this case k-nearest neighbors works well, and moreover you can write it in a few lines of R. Take your underlying representation apart pixel by pixel, say in a 16 x 16 grid of pixels, and measure how bright each pixel is. Unwrap the 16×16 grid and put it into a 256-dimensional space, which has a natural archimedean metric. Now apply the k-nearest neighbors algorithm. Some notes:
• If you vary the number of neighbors, it changes the shape of the boundary, and you can tune k to prevent overfitting.
• You can get 97% accuracy with a sufficiently large data set.
• The result can be viewed in a "confusion matrix".
Naive Bayes
Question: You're testing for a rare disease, where 1% of the population is infected. You have a highly sensitive and specific test:
• 99% of sick patients test positive
• 99% of healthy patients test negative
Given that a patient tests positive, what is the probability that the patient is actually sick?
Answer: Imagine you have 100×100 = 10,000 people. 100 are sick, and 9,900 are healthy. 99 of the sick people test positive, and 99 of the healthy people do too. So if you test positive, you're equally likely to be healthy or sick, and the answer is 50%.
Let's do it again using fancy notation so we'll feel smart. Recall
$p(y|x) p(x) = p(x, y) = p(x|y) p(y)$
and solve for $p(y|x)$:
$p(y|x) = \frac{p(x|y) p(y)}{p(x)}.$
The denominator can be thought of as a "normalization constant"; we will often be able to avoid explicitly calculating this. When we apply the above to our situation, we get:
$p(sick|+) = p(+|sick) p(sick) / p(+) = 99/198 = 1/2.$
This is called "Bayes' Rule". How do we use Bayes' Rule to create a good spam filter? Think about it this way: if the word "Viagra" appears, this adds to the probability that the email is spam. To see how this will work, we will first focus on just one word at a time, which we generically call "word".
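The back-of-the-envelope disease calculation above is easy to verify in python:

```python
# population of 10,000: 1% sick, test is 99% sensitive and 99% specific
population = 10_000
sick = int(0.01 * population)            # 100 sick people
healthy = population - sick              # 9,900 healthy people

true_positives = 0.99 * sick             # sick people who test positive
false_positives = (1 - 0.99) * healthy   # healthy people who also test positive

# Bayes' Rule: p(sick | +) = p(+ | sick) p(sick) / p(+)
p_sick_given_positive = true_positives / (true_positives + false_positives)
```

Both groups contribute about 99 positive tests, so the posterior comes out to 1/2, matching the 99/198 computation above.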
Then we have:
$p(spam|word) = p(word|spam) p(spam) / p(word).$
The right-hand side of the above is computable using enough pre-labeled data. If we refer to non-spam as "ham", we only need $p(word|spam), p(word|ham), p(spam),$ and $p(ham).$ This is essentially a counting exercise.
Example: go online and download the Enron emails. Awesome. We are building a spam filter on that – really this means we're building a new spam filter on top of the spam filter that existed for the employees of Enron. Jake has a quick and dirty shell script in bash which runs this. It downloads and unzips the file and creates a folder. Each text file is an email. Spam and ham are in separate folders.
Jake uses "wc" to count the number of messages for one former Enron employee, for example. He sees 1500 spam and 3672 ham messages. Using grep, he counts the number of spam messages containing the word "meeting":
grep -il meeting enron1/spam/*.txt | wc -l
This gives 153. This is one of the handful of computations we need to compute $p(spam|meeting) = 0.09.$ Note we don't need a fancy programming environment to get this done.
Next, we try:
• "money": 80% chance of being spam.
• "viagra": 100% chance.
• "enron": 0% chance.
This illustrates overfitting; we are getting overconfident because of our biased data. It's possible, in other words, to write a non-spam email with the word "viagra" in it, as well as a spam email with the word "enron."
Next, do it for all the words. Each document can be represented by a binary vector, whose jth entry is 1 or 0 depending on whether the jth word appears. Note this is a huge-ass vector, so we will probably actually represent it with the indices of the words that do show up.
Here's the model we use to estimate the probability that we'd see a given word vector given that we know it's spam (or that it's ham). We denote the document vector $x$ and the various entries $x_j,$ where the $j$ range over all the indices of $x,$ in other words over all the words.
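The one-word counting exercise fits in a few lines of python. The spam-side counts are the ones from the Enron example above; the number of ham messages containing "meeting" isn't given in the notes, so the 1547 below is my own made-up stand-in, chosen so the answer comes out to the 0.09 quoted above:

```python
n_spam, n_ham = 1500, 3672        # messages per folder (from wc)
n_total = n_spam + n_ham

spam_with_word = 153              # spam messages containing "meeting" (from grep)
ham_with_word = 1547              # hypothetical count, not from the lecture

# p(spam|word) = p(word|spam) p(spam) / p(word)
p_word_given_spam = spam_with_word / n_spam
p_spam = n_spam / n_total
p_word = (spam_with_word + ham_with_word) / n_total

p_spam_given_word = p_word_given_spam * p_spam / p_word
```

Notice everything cancels down to counts: the result is just 153 / (153 + 1547), the fraction of "meeting" messages that are spam.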
For now we denote "is spam" by $c$:
$p(x|c) = \prod_j \theta^{x_j}_{jc} (1- \theta_{jc})^{(1-x_j)}.$
The theta here is the probability that an individual word is present in a spam email (we can estimate that separately, and in parallel, for every word). Note we are modeling the words independently, and we don't count how many times they are present. That's why this is called "Naive".
Let's take the log of both sides:
$\log(p(x|c)) = \sum_j x_j \log(\theta_{jc}/(1-\theta_{jc})) + \sum_j \log(1-\theta_{jc}).$
[It's good to take the log because multiplying together tiny numbers can give us numerical problems.]
The term $\log(\theta_{jc}/(1-\theta_{jc}))$ doesn't depend on a given document, just on the word, so let's rename it $w_j.$ Same with the constant term $\sum_j \log(1-\theta_{jc}),$ which we rename $w_0.$ The real quantities that vary by document are the $x_j$'s. We can now use Bayes' Rule to get an estimate of $p(c|x),$ which is what we actually want. We can also get away with not computing all the terms if we only care whether it's more likely to be spam or to be ham; only the varying term needs to be computed.
Wait, this ends up looking like a linear regression! But instead of computing the weights by inverting a huge matrix, they come from the Naive Bayes algorithm.
This algorithm works pretty well and it's "cheap to train" if you have a pre-labeled data set to train on. Given a ton of emails, just count the words in spam and non-spam emails. If you get more training data you can easily increment your counts. In practice there's a global model, which you personalize to individuals. Moreover, there are lots of hard-coded, cheap rules applied before an email gets put into a fancy and slow model. Here are some references:
• "Idiot's Bayes – not so stupid after all?" – the whole paper is about why it doesn't suck, which is related to redundancies in language.
• “Naive Bayes at Forty: The Independence Assumption in Information Retrieval”
• “Spam Filtering with Naive Bayes – Which Naive Bayes?”

Laplace Smoothing

Laplace Smoothing refers to the idea of replacing our straight-up estimate of the probability $\theta_{jc} = n_{jc}/n_c$ of seeing a given word in a spam email with something a bit fancier:

$\theta_{jc} = (n_{jc} + \alpha)/ (n_c + \beta).$

We might fix $\alpha = 1$ and $\beta = 10$ for example, to prevent the possibility of getting 0 or 1 for a probability.

Does this seem totally ad hoc? Well, if we want to get fancy, we can see this as equivalent to putting a prior on $\theta$ and performing a maximum a posteriori estimate rather than a maximum likelihood estimate. If we denote by $ML$ the maximum likelihood estimate, then we have:

$\theta_{ML} = argmax_{\theta}\, p(D | \theta)$

In other words, we are asking the question: for what value of $\theta$ were the data $D$ most probable? If we assume independent trials, then we want to maximize

$log(\theta^n (1-\theta)^{N-n}).$

If you take the derivative and set it to zero, we get $\hat{\theta} = n/N.$ In other words, just what we had before.

Now let’s add a prior. Denote by $MAP$ the maximum a posteriori estimate:

$\theta_{MAP} = argmax_{\theta}\, p(\theta | D)$

This similarly asks the question: given the data I saw, which parameter is most likely? By Bayes’ rule, this is the same as maximizing $p(D|\theta) \cdot p(\theta).$ This looks similar to the above except for the $p(\theta),$ which is the “prior”. If I assume $p(\theta)$ is of the form $\theta^{\alpha} (1- \theta)^{\beta},$ then maximizing gives the above, Laplace-smoothed version.

Sometimes $\alpha$ and $\beta$ are called “pseudo counts”. They’re fancy but also simple. It’s up to the data scientist to set the values of these hyperparameters, and it gives us two knobs to tune. By contrast, k-nearest neighbors has one knob, namely k.

Note: In the last 5 years people have started using stochastic gradient methods to avoid the non-invertible (overfitting) matrix problem.
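A quick sketch of what the smoothing buys us, using the $\alpha = 1$, $\beta = 10$ values from above: a word never seen in spam no longer gets probability exactly 0, and a word seen in every spam email no longer gets exactly 1.

```python
def theta_raw(n_jc, n_c):
    """Straight-up maximum likelihood estimate of p(word | class)."""
    return n_jc / n_c

def theta_smoothed(n_jc, n_c, alpha=1, beta=10):
    """Laplace-smoothed estimate: pseudo counts keep us off 0 and 1."""
    return (n_jc + alpha) / (n_c + beta)

# A word that never appeared in our 1500 spam emails:
print(theta_raw(0, 1500))       # → 0.0  (claims spam can never contain it)
print(theta_smoothed(0, 1500))  # small but nonzero

# A word that appeared in every one of the 1500 spam emails:
print(theta_raw(1500, 1500))       # → 1.0
print(theta_smoothed(1500, 1500))  # large but not certain
```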
Switching to logistic regression with a stochastic gradient method helped a lot, and can account for correlations between words. Even so, Naive Bayes’ is pretty impressively good considering how simple it is.

Scraping the web: APIs

For the sake of this discussion, an API (application programming interface) is something websites provide to developers so they can download data from the website easily and in a standard format. Usually the developer has to register and receive a “key”, which is something like a password. For example, the New York Times has an API here. Note that some websites limit what data you have access to through their APIs or how often you can ask for data without paying for it.

When you go this route, you often get back weird formats, sometimes in JSON, but there’s no standardization to this standardization, i.e. different websites give you different “standard” formats. One way to get beyond this is to use Yahoo’s YQL language, which allows you to go to the Yahoo! Developer Network and write SQL-like queries that interact with many of the common APIs on the common sites like this:

select * from flickr.photos.search where text="Cat" and api_key="lksdjflskjdfsldkfj" limit 10

The output is standard and I only have to parse it in python once.

What if you want data when there’s no API available? Note: always check the terms and services of the website before scraping. In this case you might want to use something like the Firebug extension for Firefox, which lets you “inspect the element” on any webpage and grab the field inside the html. In fact it gives you access to the full html document so you can interact and edit. In this way you can see the html as a map of the page and Firebug is a kind of tour guide.

After locating the stuff you want inside the html, you can use curl, wget, grep, awk, perl, etc., to write a quick and dirty shell script to grab what you want, especially for a one-off grab.
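The python route can stay in the standard library for a one-off grab. A minimal sketch with html.parser; the page snippet and the “price” class are made up for illustration, standing in for html you fetched with curl or wget:

```python
from html.parser import HTMLParser

class PriceGrabber(HTMLParser):
    """Collect the text of every element whose class attribute is 'price'."""
    def __init__(self):
        super().__init__()
        self.grabbing = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "price":
            self.grabbing = True

    def handle_data(self, data):
        if self.grabbing:
            self.prices.append(data.strip())
            self.grabbing = False

# Stand-in for a page downloaded with curl/wget/urllib:
page = '<ul><li class="price">$1,200,000</li><li class="price">$895,000</li></ul>'
grabber = PriceGrabber()
grabber.feed(page)
print(grabber.prices)  # → ['$1,200,000', '$895,000']
```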
If you want to be more systematic you can also do this using python or R. Other parsing tools you might want to look into:

• lynx and lynx --dump: good if you pine for the 1970s. Oh wait, 1992. Whatever.
• Beautiful Soup: robust but kind of slow.
• Mechanize (or here): super cool as well, but doesn’t parse javascript.

Postscript: Image Classification

How do you determine if an image is a landscape or a headshot? You either need to get someone to label these things, which is a lot of work, or you can grab lots of pictures from flickr and ask for photos that have already been tagged.

Represent each image with a binned RGB (red, green, blue) intensity histogram. In other words, for each pixel, for each of red, green, and blue, which are the basic colors in pixels, you measure the intensity, which is a number between 0 and 255. Then draw three histograms, one for each basic color, showing us how many pixels had which intensity. It’s better to do a binned histogram, so have counts of the number of pixels of intensity 0 – 51, etc. In the end, for each picture, you have 15 numbers, corresponding to 3 colors and 5 bins per color. We are assuming every picture has the same number of pixels here.

Finally, use k-nearest neighbors to decide how much “blue” makes a landscape versus a headshot. We can tune the hyperparameters, which in this case are the number of bins as well as k.

Categories: data science, math education, open source tools, statistics

## Why are the Chicago public school teachers on strike?

September 14, 2012

The issues of pay and testing

My friend and fellow HCSSiM 2012 staff member P.J. Karafiol explains some important issues in a Chicago Sun-Times column entitled “Hard facts behind union, board dispute.” P.J. is a Chicago public school math teacher, he has two kids in the CPS system, and he’s a graduate of that system. So I think he is qualified to speak on the issues.

He first explains that CPS teachers are paid less than those in the suburbs.
This means, among other things, that it’s hard to keep good teachers. Next, he explains that, although it is difficult to argue against merit pay, the value-added models that Rahm Emanuel wants to account for half of teachers’ evaluations are deeply flawed. He then points out that, even if you trust the models, the number of teachers the model purports to identify as bad is so high that taking action on that result by firing them all would cause a huge problem – there’s a certain natural rate of finding and hiring good replacement teachers in the best of times, and these are not the best of times. He concludes with this:

Teachers in Chicago are paid well initially, but face rising financial incentives to move to the suburbs as they gain experience and proficiency. No currently-existing “value added” evaluation system yields consistent, fair, educationally sound results. And firing bad teachers won’t magically create better ones to take their jobs.

To make progress on these issues, we have to figure out a way to make teaching in the city economically viable over the long-term; to evaluate teachers in a way that is consistent and reasonable, and that makes good sense educationally; and to help struggling teachers improve their practice. Because at base, we all want the same thing: classes full of students eager to be learning from their excellent, passionate teachers.

Test anxiety

Ultimately this crappy model, and the power that it wields, creates a culture of test anxiety for teachers and principals as well as for students. As Eric Zorn (grandson of mathematician Max Zorn) writes in the Chicago Tribune (h/t P.J. Karafiol):

The question: But why are so many presumptively good teachers also afraid? Why has the role of testing in teacher evaluations been a major sticking point in the public schools strike in Chicago?

The short answer: Because student test scores provide unreliable and erratic measurements of teacher quality.
Because studies show that from subject to subject and from year to year, the same teacher can look alternately like a golden apple and a rotting fig.

Zorn quotes extensively from Math for America President John Ewing’s article in the Notices of the American Mathematical Society:

Analyses of (value-added model) results have led researchers to doubt whether the methodology can accurately identify more and less effective teachers. (Value-added model) estimates have proven to be unstable across statistical models, years and classes that teachers teach.

One study found that across five large urban districts, among teachers who were ranked in the top 20 percent of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40 percent. Another found that teachers’ effectiveness ratings in one year could only predict from 4 percent to 16 percent of the variation in such ratings in the following year.

The politics behind the test

I agree that the value-added model (VAM) is deeply flawed; I’ve blogged about it multiple times, for example here. The way I see it, VAM is a prime example of the way that mathematics is used as a weapon against normal people – in this case, teachers, principals, and schools.

If you don’t see my logic, ask yourself this: why would an overly-complex, unproven and very crappy model be so protected by politicians? There’s really only one reason: it serves a political function, not a mathematical one. And that political function is to maintain control over the union via a magical box that nobody completely understands (including the politicians, but it serves their purposes in spite of this) and therefore nobody can argue against.

This might seem ridiculous when you have examples like this one from the Washington Post (h/t Chris Wiggins), in which a devoted and beloved math teacher named Ashley received a ludicrously low VAM score.
I really like the article: it was written by Sean C. Feeney, Ashley’s principal at The Wheatley School in New York State and president of the Nassau County High School Principals’ Association. Feeney really tries to understand how the model works and how it uses data. Feeney uncovers the crucial facts that, on the one hand nobody understands how VAM works at all, and that, on the other, the real reason it’s being used is for the political games being played behind the scenes (emphasis mine): Officials at our State Education Department have certainly spent countless hours putting together guides explaining the scores. These documents describe what they call an objective teacher evaluation process that is based on student test scores, takes into account students’ prior performance, and arrives at a score that is able to measure teacher effectiveness. Along the way, the guides are careful to walk the reader through their explanations of Student Growth Percentiles (SGPs) and a teacher’s Mean Growth Percentile (MGP), impressing the reader with discussions and charts of confidence ranges and the need to be transparent about the data. It all seems so thoughtful and convincing! After all, how could such numbers fail to paint an accurate picture of a teacher’s effectiveness? (One of the more audacious claims of this document is that the development of this evaluative model is the result of the collaborative efforts of the Regents Task Force on Teacher and Principal Effectiveness. Those of us who know people who served on this committee are well aware that the recommendations of the committee were either rejected or ignored by State Education officials.) Feeney wasn’t supposed to do this. He wasn’t supposed to assume he was smart enough to understand the math behind the model. 
He wasn’t supposed to realize that these so-called “guides to explain the scores” actually represent the smoke being blown into the eyes of educators for the purposes of dismembering what’s left of the power of teachers’ unions in this country. If he were better behaved, he would have bowed to the authority of the inscrutable, i.e. mathematics, and assumed that his prize math teacher must have had flaws he, as her principal, just hadn’t seen before.

Weapons of Math Destruction

Politicians have created a WMD (Weapon of Math Destruction) in VAM; it’s the equivalent of owning an Uzi factory when you’re fighting a war against people with pointy sticks. It’s not the only WMD out there, but it’s a pretty powerful one, and it’s doing outrageous damage to our educational system.

If you don’t know what I mean by WMD, let me help out: one way to spot a WMD is to look at the name versus the underlying model and take note of discrepancies. VAM is a great example of this:

• The name “Value-Added Model” makes us think we might learn how much a teacher brings to the class above and beyond, say, rote memorization.
• In fact, if you look carefully you will see that the model is really measuring teaching to the test, but with error bars so enormous that the noise almost completely obliterates any “teaching to the test” signal.

Nobody wants crappy teachers in the system, but vilifying well-meaning and hard-working professionals and subjecting them to random but high-stakes testing is not the solution; it’s pure old-fashioned scapegoating. The political goal of the national VAM movement is clear: take control of education and make sure teachers know their place as the servants of the system, with no job security and no respect.

No wonder the Chicago public school teachers are on strike. I would be too.
Categories: data science, math education, news, rant, statistics

## Columbia data science course, week 2: RealDirect, linear regression, k-nearest neighbors

September 13, 2012

Data Science Blog

Today we started with discussing Rachel’s new blog, which is awesome and people should check it out for her words of data science wisdom. The topics she’s riffed on so far include: Why I proposed the course, EDA (exploratory data analysis), Analysis of the data science profiles from last week, and Defining data science as a research discipline. She wants students and auditors to feel comfortable contributing to blog discussion; that’s why they’re there.

She particularly wants people to understand the importance of getting a feel for the data and the questions before ever worrying about how to present a shiny polished model to others. To illustrate this she threw up some heavy quotes:

“Long before worrying about how to convince others, you first have to understand what’s happening yourself” – Andrew Gelman

“Agreed” – Rachel Schutt

Thought experiment: how would you simulate chaos?

We split into groups and discussed this for a few minutes, then got back into a discussion. Here are some ideas from students:

• A Lorenzian water wheel would do the trick, if you know what that is.
• Question: is chaos the same as randomness?
• Many a physical system can exhibit inherent chaos: examples with finite-state machines.
• The teaching technique of “Simulating chaos to teach order” gives us a real-world simulation of a disaster area.
• In this class we want to see how students would handle a chaotic situation. Most data problems start out with a certain amount of dirty data, ill-defined questions, and urgency. Can we teach a method of creating order from chaos?
• See also “Creating order from chaos in a startup“.

Talking to Doug Perlson, CEO of RealDirect

We got into teams of 4 or 5 to assemble our questions for Doug, the CEO of RealDirect.
The students have been assigned as homework the task of suggesting a data strategy for this new company, due next week. He came in, gave us his background in real-estate law, startups, and online advertising, and told us about his desire to use all the data he now had access to in order to improve the way people sell and buy houses.

First they built an interface for sellers, giving them useful data-driven tips on how to sell their house and using interaction data to give real-time recommendations on what to do next. Doug made the remark that normally people sell their homes about once in 7 years, and they’re not pros. The goal of RealDirect is not just to make individual sellers better at selling, but also to make the pros better at their jobs.

He pointed out that brokers are “free agents” – they operate by themselves and they guard their data – and the really good ones have lots of experience, which is to say they have more data. But very few brokers actually have sufficient experience to do it well.

The idea is to have a team of licensed real-estate agents who act as data experts. They learn how to use information-collecting tools so the company can gather data, in addition to publicly available information (for example, co-op sales data, which is newly available). One problem with publicly available data is that it’s old news – there’s a 3 month lag. RealDirect is working on real-time feeds on stuff like:

• when people start searching,
• what the initial offer is,
• the time between offer and close, and
• how people search online.

Ultimately good information helps both the buyer and the seller.

RealDirect makes money in two ways. First, a subscription – \$395 a month – for sellers to access its tools. Second, it lets you use its agents at a reduced commission (2% of the sale instead of the usual 2.5% or 3%). The data-driven nature of the business allows it to take a lower commission because it is more optimized, and it therefore gets more volume.
Doug mentioned that there’s a law in New York that you can’t show all the current housing listings unless they’re behind a registration wall, which is why RealDirect requires registration. This is an obstacle for buyers, but he thinks serious buyers are willing to do it. He also doesn’t consider places that don’t require registration, like Zillow, to be true competitors because they’re just showing listings and not providing real service. He points out that you also need to register to use Pinterest.

Doug mentioned that RealDirect is comprised of licensed brokers in various established realtor associations, but even so they have had their share of hate mail from realtors who don’t appreciate their approach to cutting commission costs. In this sense it is somewhat of a guild. On the other hand, he thinks that if a realtor refused to show houses because they are being sold on RealDirect, then the buyers would see the listings elsewhere and complain. So the traditional brokers have little choice but to deal with them. In other words, the listings themselves are sufficiently transparent that the traditional brokers can’t get away with keeping their buyers away from these houses.

RealDirect doesn’t take seasonality issues into consideration presently – they take the position that a seller is trying to sell today.

Doug talked about various issues that a buyer would care about: nearby parks, subway, and schools, as well as the comparison of prices per square foot of apartments sold in the same building or block. These are the key kinds of data for buyers, to be sure.

In terms of how the site works, it sounds like somewhat of a social network for buyers and sellers. There are statuses for each person on the site: active, offer made, offer rejected, showing, in contract, etc. Based on your status, different opportunities are suggested.

Suggestions for Doug?

Linear Regression

Example 1. You have points on the plane: (x, y) = (1, 2), (2, 4), (3, 6), (4, 8).
The relationship is clearly y = 2x. You can do it in your head. Specifically, you’ve figured out:

• There’s a linear pattern.
• The coefficient is 2.
• So far it seems deterministic.

Example 2. You again have points on the plane, but now assume x is the input and y is the output.

(x, y) = (1, 2.1), (2, 3.7), (3, 5.8), (4, 7.9)

Now you notice that more or less y ~ 2x, but it’s not a perfect fit. There’s some variation; it’s no longer deterministic.

Example 3. (x, y) = (2, 1), (6, 7), (2.3, 6), (7.4, 8), (8, 2), (1.2, 2).

Here your brain can’t figure it out, and there’s no obvious linear relationship. But what if it’s your job to find a relationship anyway? First assume (for now) there actually is a relationship and that it’s linear. It’s the best you can do to start out, i.e. assume

$y = \beta_0 + \beta_1 x + \epsilon$

and now find the best choices for $\beta_0$ and $\beta_1$. Note we include $\epsilon$ because it’s not a perfect relationship. This term is the “noise,” the stuff that isn’t accounted for by the relationship. It’s also called the error.

Before we find the general formula, we want to generalize to three variables: $x_1, x_2, x_3$, and we will again try to explain $y$ knowing these values. If we wanted to draw it we’d be working in 4-dimensional space, trying to plot points. As above, assuming a linear relationship means looking for a solution to:

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \epsilon$

Writing this with matrix notation we get:

$y = x \cdot \beta + \epsilon.$

How do we calculate $\beta$? Define the “residual sum of squares”, denoted $RSS(\beta),$ to be

$RSS(\beta) = \sum_i (y_i - x_i \cdot \beta)^2,$

where $i$ ranges over the various data points. RSS is called a loss function. There are many other versions of it, but this is one of the most basic, partly because it gives us a pretty nice measure of closeness of fit.
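On the example data above, both the loss and the $\beta$ that minimizes it fit in a few lines of python. fit_line uses the standard one-variable least-squares closed form; this is a from-scratch sketch, not a claim about how R computes it:

```python
def rss(beta0, beta1, points):
    """Residual sum of squares for the line y = beta0 + beta1 * x."""
    return sum((y - (beta0 + beta1 * x)) ** 2 for x, y in points)

def fit_line(points):
    """Minimize RSS: standard least-squares closed form for one predictor."""
    n = len(points)
    x_bar = sum(x for x, _ in points) / n
    y_bar = sum(y for _, y in points) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in points)
    sxx = sum((x - x_bar) ** 2 for x, _ in points)
    b1 = sxy / sxx           # slope
    b0 = y_bar - b1 * x_bar  # intercept
    return b0, b1

exact = [(1, 2), (2, 4), (3, 6), (4, 8)]          # Example 1: y = 2x exactly
noisy = [(1, 2.1), (2, 3.7), (3, 5.8), (4, 7.9)]  # Example 2: roughly y ~ 2x

print(rss(0, 2, exact))  # → 0, a perfect fit has zero loss
b0, b1 = fit_line(noisy)
print(round(b1, 2))      # → 1.95, close to the 2 we eyeballed
```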
To minimize $RSS(\beta) = (y - x \beta)^t (y - x \beta),$ we differentiate it with respect to $\beta,$ set the result equal to zero, and solve for $\beta.$ We end up with

$\beta = (x^t x)^{-1} x^t y.$

To use this, we go back to our linear form and plug in the values of $\beta$ to get a predicted $y$.

But wait, why did we assume a linear relationship? Sometimes maybe it’s a polynomial relationship:

$y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3.$

You need to justify why you’re assuming what you want. Answering that kind of question is a key part of being a data scientist and why we need to learn these things carefully.

All this is like one line of R code, where you’ve got a column of y’s and a column of x’s:

model <- lm(y ~ x)

Or if you’re going with the polynomial form we’d have:

model <- lm(y ~ x + I(x^2) + I(x^3))

(The I() is needed because ^ has a special meaning inside an R formula.)

Why do we do regression? Mostly for two reasons:

• If we want to predict one variable from the next.
• If we want to explain or understand the relationship between two things.

K-nearest neighbors

Say you have the age, income, and credit rating for a bunch of people and you want to use the age and income to guess at the credit rating. Moreover, say we’ve divided credit ratings into “high” and “low”.

We can plot people as points on the plane and label people with an “x” if they have low credit ratings. What if a new guy comes in? What’s his likely credit rating label? Let’s use k-nearest neighbors. To do so, you need to answer two questions:

1. How many neighbors are you gonna look at? k=3 for example.
2. What is a neighbor? We need a concept of distance. For the sake of our problem, we can use Euclidean distance on the plane if the relative scalings of the variables are approximately correct.

Then the algorithm is simple: take the average rating of the people around the new guy, where average means majority in this case – so if there are 2 high credit rating people and 1 low credit rating person, then he would be designated high.
Note we can also consider doing something somewhat more subtle, namely assigning high the value of “1” and low the value of “0” and taking the actual average, which in this case would be 0.667. This would indicate a kind of uncertainty. It depends on what you want from your algorithm. In machine learning algorithms we don’t typically have the concept of confidence levels; we care more about accuracy of prediction. But of course it’s up to us.

Generally speaking, we have a training phase, during which we create a model and “train it,” and then we have a testing phase where we use new data to test how good the model is.

For k-nearest neighbors, the training phase is stupid: it’s just reading in your data. In testing, you pretend you don’t know the true label and see how good you are at guessing using the above algorithm. This means you set aside some clean data from the overall data for the testing phase. Usually you want to save randomly selected data, at least 10%.

In R: load the package “class” and use the function knn(). You perform the algorithm as follows:

knn(train, test, cl, k=3)

The output is the classification label for each test vector, decided by majority vote among the k nearest (in Euclidean distance) training set vectors.

How do you evaluate whether the model did a good job? This isn’t easy or universal – you may decide you want to penalize certain kinds of misclassification more than others. For example, false positives may be way worse than false negatives. To start out stupidly, you might want to simply minimize the misclassification rate:

(# incorrect labels) / (# total labels)

How do you choose k? This is also hard. Part of homework next week will address this.

When do you use linear regression vs. k-nearest neighbors? Thinking about what happens with outliers helps you realize how hard this question is. Sometimes it comes down to a question of what the decision-maker decides they want to believe.
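The whole algorithm, majority vote included, fits in a few lines of python. This is a from-scratch sketch of what knn() does, run on made-up (age, income) points labeled with credit ratings:

```python
import math
from collections import Counter

def knn_classify(train, new_point, k=3):
    """Label new_point by majority vote among its k nearest training points.

    train: list of ((age, income), label) pairs.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    neighbors = sorted(train, key=lambda pair: dist(pair[0], new_point))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical training data: (age, income in $k) -> credit rating.
train = [((25, 40), "low"), ((35, 90), "high"), ((45, 110), "high"),
         ((52, 95), "high"), ((23, 28), "low"), ((30, 35), "low")]

print(knn_classify(train, (48, 100), k=3))  # → high
print(knn_classify(train, (24, 30), k=3))   # → low
```

Note the caveat from above applies: Euclidean distance on raw (age, income) only makes sense if the two variables are on comparable scales; otherwise income swamps age in the distance.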
Note definitions of “closeness” vary depending on the context: closeness in social networks could be defined as the number of overlapping friends. Both linear regression and k-nearest neighbors are examples of “supervised learning”, where you’ve observed both x and y, and you want to know the function that brings x to y. ## How is math used outside academia? September 7, 2012 Help me out, beloved readers. Brainstorm with me. I’m giving two talks this semester on how math is used outside academia, for math audiences. One is going to be at the AGNES conference and another will be a math colloquium at Stonybrook. I want to give actual examples, with fully defined models, where I can explain the data, the purported goal, the underlying assumptions, the actual outputs, the political context, and the reach of each model. The cool thing about these talks is I don’t need to dumb down the math at all, obviously, so I can be quite detailed in certain respects, but I don’t want to assume my audience knows the context at all, especially the politics of the situation. So far I have examples from finance, internet advertising, and educational testing. Please tell me if you have some more great examples, I want this talk to be awesome. The ultimate goal of this project is probably an up-to-date essay, modeled after this one, which you should read. Published in the Notices of the AMS in January 2003, author Mary Poovey explains how mathematical models are used and abused in finance and accounting, how Enron booked future profits as current earnings and how they manipulated the energy market. From the essay: Thus far the role that mathematics has played in these financial instruments has been as much inspirational as practical: people tend to believe that numbers embody objectivity even when they do not see (or understand) the calculations by which particular numbers are generated. 
In my final example, mathematical principles are still invisible to the vast majority of investors, but mathematical equations become the prime movers of value. The belief that makes it possible for mathematics to generate value is not simply that numbers are objective but that the market actually obeys mathematical rules. The instruments that embody this belief are futures options or, in their most arcane form, derivatives. Slightly further on she explains: In 1973 two economists produced a set of equations, the Black-Scholes equations, that provided the first strictly quantitative instrument for calculating the prices of options in which the determining variable is the volatility of the underlying asset. These equations enabled analysts to standardize the pricing of derivatives in exclusively quantitative terms. From this point it was no longer necessary for traders to evaluate individual stocks by predicting the probable rates of profit, estimating public demand for a particular commodity, or subjectively getting a feel for the market. Instead, a futures trader could engage in trades driven purely by mathematical equations and selected by a software program. She ends with a bunch of great questions. Mind you, this was in 2003, before the credit crisis: But what if markets are too complex for mathematical models? What if irrational and completely unprecedented events do occur, and when they do—as we know they do—what if they affect markets in ways that no mathematical model can predict? What if the regularity that all mathematical models assume effaces social and cultural variables that are not subject to mathematical analysis? Or what if the mathematical models traders use to price futures actually influence the future in ways the models cannot predict and the analysts cannot govern? 
Perhaps these are the only questions that can challenge the financial axis of power, which otherwise threatens to remake everything, including value, over in the image of its own abstractions. Perhaps these are the kinds of questions that mathematicians and humanists, working together, should ask and try to answer. Categories: data science, finance, math, math education ## Columbia data science course, week 1: what is data science? September 6, 2012 I’m attending Rachel Schutt’s Columbia University Data Science course on Wednesdays this semester and I’m planning to blog the class. Here’s what happened yesterday at the first meeting. Syllabus Rachel started by going through the syllabus. Here were her main points: • The prerequisites for this class are: linear algebra, basic statistics, and some programming. • The goals of this class are: to learn what data scientists do. and to learn to do some of those things. • Rachel will teach for a couple weeks, then we will have guest lectures. • The profiles of those speakers vary considerably, as do their backgrounds. Yet they are all data scientists. • We will be resourceful with readings: part of being a data scientist is realizing lots of stuff isn’t written down yet. • There will be 6-10 homework assignments, due every two weeks or so. • The final project will be an internal Kaggle competition. This will be a team project. • There will also be an in-class final. • We’ll use R and python, mostly R. The support will be mainly for R. Download RStudio. • If you’re only interested in learning hadoop and working with huge data, take Bill Howe’s Coursera course. We will get to big data, but not til the last part of the course. The current landscape of data science So, what is data science? Is data science new? Is it real? What is it? This is an ongoing discussion, but Michael Driscoll’s answer is pretty good: Data science, as it’s practiced, is a blend of Red-Bull-fueled hacking and espresso-inspired statistics. 
But data science is not merely hacking, because when hackers finish debugging their Bash one-liners and Pig scripts, few care about non-Euclidean distance metrics. And data science is not merely statistics, because when statisticians finish theorizing the perfect model, few could read a ^A delimited file into R if their job depended on it. Data science is the civil engineering of data.  Its acolytes possess a practical knowledge of tools & materials, coupled with a theoretical understanding of what’s possible. Driscoll also refers to Drew Conway’s Venn diagram of data science from 2010: We also may want to look at Nathan Yau’s “sexy skills of data geeks” from his “Rise of the Data Scientist” in 2009: 1. Statistics – traditional analysis you’re used to thinking about 2. Data Munging – parsing, scraping, and formatting data 3. Visualization – graphs, tools, etc. But wait, is data science a bag of tricks? Or is it just the logical extension of other fields like statistics and machine learning? For one argument, see Cosma Shalizi’s posts here and here and my posts here and here, which constitute an ongoing discussion of the difference between a statistician and a data scientist. Also see ASA President Nancy Geller’s 2011 Amstat News article, “Don’t shun the ‘S’ word,” where she defends statistics. One thing’s for sure, in data science, nobody hands you a clean data set, and nobody tells you what method to use. Moreover, the development of the field is happening in industry, not academia. In 2011, DJ Patil described how he and Jeff Hammerbacher, in 2008, coined the term data scientist. However, in 2001, William Cleveland wrote a paper about data science (see Nathan Yau’s post on it here). So data science existed before data scientists? Is this semantics, or does it make sense? It begs the question, can you define data science by what data scientists do? Who gets to define the field, anyway? 
There’s lots of buzz and hype – does the media get to define it, or should we rely on the practitioners, the self-appointed data scientists? Or is there some actual authority? Let’s leave these as open questions for now. Columbia just decided to start an Institute for Data Sciences and Engineering with Bloomberg’s help. The only question is why there’s a picture of a chemist on the announcement. There were 465 job openings in New York for data scientists last time we checked. That’s a lot. So even if data science isn’t a real field, it has real jobs. Note that most of the job descriptions ask data scientists to be experts in computer science, statistics, communication, data visualization, and to have expert domain expertise. Nobody is an expert in everything, which is why it makes more sense to create teams of people who have different profiles and different expertise, which together, as a team, can specialize in all those things. Here are other players in the ecosystem: • O’Reilly and their Strata Conference • DataKind • Meetup groups • VC firms like Union Square Ventures are pouring big money into data science startups • Kaggle hosts data science competitions • Chris Wiggins, professor of applied math at Columbia, has been instrumental in connecting techy undergrads with New York start-ups through his summer internship program HackNY. Note: wikipedia didn’t have an entry on data science until 2012. This is a new term if not a new subject. How do you start a Data Science project? Say you’re working with some website with an online product. You want to track and analyze user behavior. Here’s a way of thinking about it: 1. The user interacts with the product. 2. The product has a front end and a back end. 3. The user starts taking actions: clicks, etc. 4. Those actions get logged. 5. The logs include timestamps; they capture all the key user activity around the product. 6.
The logs then get processed in pipelines: that’s where data munging, joining, and mapreducing occur. 7. These pipelines generate nice, clean, massive data sets. 8. These data sets are typically keyed by user, or song (like if you work at a place like Pandora), or however you want to see your data. 9. These data sets then get analyzed, modeled, etc. 10. They ultimately give us new ways of understanding user behavior. 11. This new understanding gets embedded back into the product itself. 12. We’ve created a circular process of changing the user interaction with the product by starting with examining the user interaction with the product. This differentiates the job of the data scientist from the traditional data analyst role, which might analyze users for likelihood of purchase but probably wouldn’t change the product itself but rather retarget advertising or something to more likely buyers. 13. The data scientist also reports to the CEO or head of product what she’s seeing with respect to the user, what’s  happening with the user experience, what are the patterns she’s seeing. This is where communication and reporting skills, as well as data viz skills and old-time story telling skills come in. The data scientist builds the narrative around the product. 14. Sometimes you have to scrape the web, to get auxiliary info, because either the relevant data isn’t being logged or it isn’t actually being generated by the users. Profile yourself Rachel then handed out index cards and asked everyone to profile themselves (on a relative rather than absolute scale) with respect to their skill levels in the following domains: • software engineering, • math, • stats, • machine learning, • domain expertise, • communication and presentation skills, and • data viz We taped the index cards up and got to see how everyone else thought of themselves. There was quite a bit of variation, which is cool – lots of people in the class are coming from social science. 
And again, a data science team works best when different skills (profiles) are represented in different people, since nobody is good at everything. It makes me think that it might be easier to define a “data science team” than to define a data scientist. Thought experiment: can we use data science to define data science? We broke into small groups to think about this question. Then we had a discussion. Some ideas: • Yes: google search data science and perform a text mining model • But wait, that would depend on you being a descriptivist rather than a prescriptivist with respect to language. Do we let the masses define data science (where “the masses” refers to whatever google’s search engine finds)? Or do we refer to an authority such as the Oxford English Dictionary? • Actually the OED probably doesn’t have an entry yet and we don’t have time to wait for it. Let’s agree that there’s a spectrum, and one authority doesn’t feel right and “the masses” doesn’t either. • How about we look at practitioners of data science, and see how they describe what they do (maybe in a word cloud for starters), and then see how people who claim to be other things like statisticians or physicists or economists describe what they do, and then we can try to use a clustering algorithm or some other model and see if, when it takes as input “the stuff I do”, it gives me a good prediction on what field I’m in. Just for comparison, check out what Harlan Harris recently did inside the field of data science: he took a survey and used clustering to define subfields of data science, which gave rise to this picture: It was a really exciting first week, I’m looking forward to more! Categories: data science, math education, statistics ## Update on organic food August 19, 2012 So I’m back from some town in North Ontario (please watch this video to get an idea).
I spent four days on a tiny little island on Lake Huron with my family and some wonderful friends, swimming, boating, picnicking, and reading the Omnivore’s Dilemma by Michael Pollan whenever I could. It was a really beautiful place but really far away, especially since my husband jumped gleefully into the water from a high rock with his glasses on so I had to drive all the way back without help. But what I wanted to mention to you is that, happily, I managed to finish the whole book – a victory considering the distractions. I was told to read the book by a bunch of people who read my previous post on organic food and why I don’t totally get it: see the post here and be sure to read the comments. One thing I have to give Pollan, he has written a book that lots of people read. I took notes on his approach and style because I want to write a book myself. And it’s not that I read statistics on the book sales – I know people read the book because, even though I hadn’t, lots of facts and passages were eerily familiar to me, which means people I know have quoted the book to me. That’s serious! In other words, there’s been feedback from this book to the culture and how we think about organic food vs. industrial farming. I can’t very well argue that I already knew most of the stuff in the book, even though I did, because I probably only know it because he wrote the book on it and it’s become part of our cultural understanding. In terms of the content, first, I’ll complain, then I’ll compliment. Complaint #1: the guy is a major food snob (one might even say douche). He spends like four months putting together a single “hunting and gathering” meal with the help of his friends the Chez Panisse chefs. It’s kind of like a “lives of the rich and famous” episode in that section of the book, which is to say voyeuristic, painfully smug, and self-absorbed. It’s hard to find this guy wise when he’s being so precious.
Complaint #2: a related issue, which is that he never does the math on whether a given lifestyle is actually accessible for the average person. He mentions that the locally grown food is more expensive, but he also suggests that poor people now spend less of their income on food than they used to, implying that maybe they have extra cash on hand to buy local free-range chickens, not to mention that they’d need the time and a car and gas to drive to the local farms to buy this stuff (which somehow doesn’t seem to figure into his carbon footprint calculation of that lifestyle). I don’t think there’s all that much extra time and money on people’s hands these days, considering how many people are now living on food stamps (I will grant that he wrote this book before the credit crisis so he didn’t anticipate that). Complaint #3: he doesn’t actually give a suggestion for what to do about this to the average person. In the end this book creates a way for well-to-do people to feel smug about their food choices but doesn’t forge a path otherwise, besides a vague idea that not eating processed food would be good. I know I’m asking a lot, but specific and achievable suggestions would have been nice. Here’s where my readers can say I missed something – please comment! Compliment #1: he really educates the reader on how much the government farm subsidies distort the market, especially for corn, and how the real winners are the huge businesses like ConAgra and Monsanto, not the farmers themselves. Compliment #2: he also explains the nastiness of processed food and large-scale cow, pig, and chicken farms. Yuck. Compliment #3: My favorite part is that he describes the underlying model of the food industry as overly simplistic. He points out that, by just focusing on the chemicals like nitrogen and carbon in the soil, we have ignored all sorts of other important things that are also important to a thriving ecosystem. 
So, he explains, simply adding nitrogen to the soil in the form of fertilizer doesn’t actually solve the problem of growing things quickly. Well, it does do that, but it introduces other problems like pollution. This is a general problem with models: they almost by definition simplify the world, but if they are successful, they get hugely scaled, and then the things they ignore, and the problems that arise from that ignorance, are amplified. There’s a feedback loop filled with increasingly devastating externalities. In the case of farming, the externalities take the form of pollution, unsustainable use of petrochemicals, sick cows and chickens, and nasty food-like items made from corn by-products. Another example is teacher value-added models: the model is bad, it is becoming massively scaled, and the externalities are potentially disastrous (teaching to the test, the best teachers leaving the system, enormous amount of time and money spent on the test industry, etc.). But that begs the question, what should we do about it? Should we well-to-do people object to the existence of the model and send our kids to the private schools where the teachers aren’t subject to that model? Or should we acknowledge it exists, it isn’t going away, and it needs to be improved? It’s a similar question for the food system and the farming model: do we save ourselves and our family, because we can, or do we confront the industry and force them to improve their models? I say we do both! Let’s not ignore our obligation to agitate for better farming practices for the enormous industry that already exists and isn’t going away. I don’t think the appropriate way to behave is to hole up with your immediate family and make sure your kids are eating wholesome food. That’s too small and insular! It’s important to think of ways to fight back against the system itself if we believe it’s corrupt and is ruining our environment. 
For me that means being part of Occupy, joining movements and organizations fighting against lobbyist power (here’s one that fights against BigFood lobbyists), and broadly educating people about statistics and mathematical modeling so that modeling flaws and externalities are understood, discussed, and minimized. Categories: data science, math education, musing ## Looterism August 9, 2012 My friend Nik recently sent me a PandoDaily article written by Francisco Dao entitled Looterism: The Cancerous Ethos That Is Gutting America. He defines looterism as the “deification of pure greed” and says: The danger of looterism, of focusing only on maximizing self interest above the importance of creating value, is that it incentivizes the extraction of wealth without regard to the creation or replenishment of the value building mechanism. I like the term, I think I’ll use it. And it made me think of this recent Bloomberg article about private equity and hedge funds getting into the public schools space. From the article: Indeed, investors of all stripes are beginning to sense big profit potential in public education. The K-12 market is tantalizingly huge: The U.S. spends more than \$500 billion a year to educate kids from ages five through 18. The entire education sector, including college and mid-career training, represents nearly 9 percent of U.S. gross domestic product, more than the energy or technology sectors. Traditionally, public education has been a tough market for private firms to break into — fraught with politics, tangled in bureaucracy and fragmented into tens of thousands of individual schools and school districts from coast to coast. Now investors are signaling optimism that a golden moment has arrived. They’re pouring private equity and venture capital into scores of companies that aim to profit by taking over broad swaths of public education.
The conference last week at the University Club, billed as a how-to on “private equity investing in for-profit education companies,” drew a full house of about 100. [I think I know why that golden moment arrived, by the way. The obsession with test scores, a direct result of No Child Left Behind, is both pseudo-quantitative (by which I mean it is quantitative but is only measuring certain critical things and entirely misses other critical things) and has broken the backs of unions. Hedge funds and PE firms love quantitative things, and they don't really care if the numbers are meaningful if they can meaningfully profit.] Their immediate goal is out-sourcing: they want to create the Blackwater (now Academi) of education, but with cute names like Schoology and DreamBox. Lest you worry that their focus will be on the wrong things, they point out that if you make kids drill math through DreamBox “heavily” for 16 weeks, they score 2.3 points higher in a standardized test, although they didn’t say if that was out of 800 or 20. Never mind that “heavily” also isn’t defined, but it seems safe to say from context that it’s at least 2 hours a day. So if you do that for 16 weeks, those 2.3 points better be pretty meaningful. So either the private equity guys and hedge funders have the whole child in mind here, or it’s maybe looterism. I’m thinking looterism.
It will contain a New Media Center, a Smart Cities Center, a Health Analytics Center, a Cybersecurity Center, and a Financial Analytics Center. 4. The city is pitching in \$15 million whereas Columbia is ponying up \$80 million. 5. Columbia Computer Science professor Kathy McKeown will be the Director and Civil Engineering professor Patricia Culligan will be the Institute’s Deputy Director. Categories: data science, math education, news ## Does mathematics have a place in higher education? July 29, 2012 A recent New York Times Opinion piece (hat tip Wei Ho), Is Algebra Necessary?, argues for the abolishment of algebra as a requirement for college. It was written by Andrew Hacker, an emeritus professor of political science at Queens College, City University of New York. His concluding argument: I’ve observed a host of high school and college classes, from Michigan to Mississippi, and have been impressed by conscientious teaching and dutiful students. I’ll grant that with an outpouring of resources, we could reclaim many dropouts and help them get through quadratic equations. But that would misuse teaching talent and student effort. It would be far better to reduce, not expand, the mathematics we ask young people to imbibe. (That said, I do not advocate vocational tracks for students considered, almost always unfairly, as less studious.) Yes, young people should learn to read and write and do long division, whether they want to or not. But there is no reason to force them to grasp vectorial angles and discontinuous functions. Think of math as a huge boulder we make everyone pull, without assessing what all this pain achieves. So why require it, without alternatives or exceptions? Thus far I haven’t found a compelling answer. For an interesting contrast, there’s a recent Bloomberg View Piece, How Recession Will Change University Financing, by Gary Shilling (not to be confused with Robert Shiller). 
From Shilling’s piece: Most thought that a bachelor’s degree was the ticket to a well-paid job, and that the heavy student loans were worth it and manageable. And many thought that majors such as social science, education, criminal justice or humanities would still get them jobs. They didn’t realize that the jobs that could be obtained with such credentials were the nice-to-have but nonessential positions of the boom years that would disappear when times got tough and businesses slashed costs. Some of those recent graduates probably didn’t want to do, or were intellectually incapable of doing, the hard work required to major in science and engineering. After all, afternoon labs cut into athletic pursuits and social time. Yet that’s where the jobs are now. Many U.S.-based companies are moving their research-and-development operations offshore because of the lack of scientists and engineers in this country, either native or foreign-born. For 34- to 49-year-olds, student debt has leaped 40 percent in the past three years, more than for any other age group. Many of those debtors were unemployed and succumbed to for-profit school ads that promised high-paying jobs for graduates. But those jobs seldom materialized, while the student debt remained. Moreover, many college graduates are ill-prepared for almost any job. A study by the Pew Charitable Trusts examined the abilities of U.S. college graduates in three areas: analyzing news stories, understanding documents and possessing the math proficiency to handle tasks such as balancing a checkbook or tipping in a restaurant. The first article is written by a professor, so it might not be surprising that, as he sees more and more students coming through, he feels their pain and wants their experience to not be excruciating. The easiest way to do that is to remove the stumbling block requirement of math. 
He also seems to think of higher education as something everyone is entitled to, which I infer based on how he dismisses vocational training. The second article is written by a financial analyst, an economist, so we might not be surprised that he strictly sees college as a purely commoditized investment in future income, and wants it to be a good one. The easiest way to do that is to have way fewer students go through college to begin with, since having dumb or bad students get into debt but not learn anything and then not get a job afterwards doesn’t actually make sense. And where the first author acts like math is only needed for a tiny minority of college students, the second author basically dismisses non-math oriented subjects as frivolous and leading to a life of joblessness and debt. These are vastly different viewpoints. I’m thinking of inviting them both to dinner to discuss. By the way, I think that last line, where Hacker wonders what the pain of math-as-huge-boulder achieves, is more or less answered by Shilling. The goal of having math requirements is to have students be mathematically literate, which is to say know how to do everyday things like balancing checkbooks and reading credit card interest rate agreements. The fact that we aren’t achieving this goal is important, but the goal is pretty clear. In other words, I think my dinner party would be fruitful as well as entertaining. If there’s one thing these two agree on, it’s that students are having an awful lot of trouble doing basic math. This makes me wonder a few things. First, why is algebra such a stumbling block? Is it that the students are really that bad, or is the approach to teaching it bad? I suspect what’s really going on is that the students taking it have mostly not been adequately taught the pre-requisites. That means we need more remedial college math. I honestly feel like this is the perfect place for online learning. 
Instead of charging students enormous fees while they get taught high-school courses they should already know, and instead of removing basic literacy requirements altogether, ask them to complete some free online math courses at home or in their public library, to get them ready for college. The great thing about computers is that they can figure out the level of the user, and they never get impatient. Next, should algebra be replaced by a Reckoning 101 course? Where, instead of manipulating formulas, we teach students to figure out tips and analyze news stories and understand basic statistical statements? I’m sure this has been tried, and I’m sure it’s easy to do badly or to water down entirely. Please tell me what you know. Specifically, are students better at snarky polling questions if they’ve taken these classes than if they’ve taken algebra? Finally, I’d say this (and I’m stealing this from my friend Kiri, a principal of a high school for girls in math and science): nobody ever brags about not knowing how to read, but people brag all the time about not knowing how to do math. There’s nothing to be proud of in that, and it’s happening to a large degree because of our culture, not intelligence. So no, let’s not remove mathematical literacy as a requirement for college graduates, but let’s think about what we can do to make the path reasonable and relevant while staying rigorous. And yes, there are probably too many students going to college because it’s now a cultural assumption rather than a thought-out decision, and this lands young people in debt up to their eyeballs and jobless, which sucks (here’s something that may help: forcing for-profit institutions to be honest in advertising future jobs promises and high interest debt). Something just occurred to me. Namely, it’s especially ironic that the most mathematically illiterate and vulnerable students are being asked to sign loan contracts that they, almost by construction, don’t understand. 
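The literacy point can be made concrete with a few lines of arithmetic. Here's a sketch of my own; every number in it is invented for illustration (neither article gives loan figures). It applies the standard amortization formula $M = Pr/(1 - (1+r)^{-n})$ to a hypothetical student loan:

```python
# All numbers here are made up for illustration, not taken from the post:
# a hypothetical $30,000 loan at 6.8% APR repaid monthly over 10 years.
principal = 30_000.0
annual_rate = 0.068
n_months = 10 * 12
r = annual_rate / 12  # monthly interest rate

# Standard amortization formula: M = P * r / (1 - (1 + r)^-n)
payment = principal * r / (1 - (1 + r) ** -n_months)
total_paid = payment * n_months

print(f"monthly payment: ${payment:,.2f}")
print(f"total repaid:    ${total_paid:,.2f}")
print(f"interest alone:  ${total_paid - principal:,.2f}")
```

At these made-up terms the borrower repays more than a third of the principal again in pure interest, which is exactly the kind of number a contract states but doesn't explain.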
How do we address this? Food for thought and for another post. Categories: math, math education, news, statistics ## Exploit me some more please July 22, 2012 I’m back home from HCSSiM (see my lecture notes here). Yesterday I took the bus from Amherst to New York and slept all the way, then got home and took a nap, and then after putting my 3-year-old to bed crashed on the couch until just now when I woke up. That’s about 13 hours of sleep in the past 20, and I’m planning to take a nap after writing this. That’s just a wee indication of how sleep deprived I became at math camp. Add to that the fact that my bed there was hard plastic, that I completely lost touch with the memory of enjoying good food (taste buds? what are those?), and that I was pitifully underpaid, and you might think I’m glad to be home. And I am, because I missed my family, but I’m already working feverishly to convince them to let me go again next year, and come with me next time if that would be better. Because I’m so in love with those kids and with those junior staff and with Kelly and with Hampshire College and with the whole entire program. Just so you get an idea of what there is to love, check out one of the students talking about his plan for his yellow pig day shirt which I luckily captured on my phone: And here’s a job description which always makes me laugh (and cry) (and I only worked there the first half): When people haven’t experienced HCSSiM, we worry about being able to explain adequately the unusual commitment required by the exploitative position. The workday is long and challenging; it is also exciting and rewarding. A senior faculty and 2 junior faculty members actively participate in the morning classes (8:30 – 2:30, M-S) and evening problem sessions (7:30 – 10:30, M-F) of each of the c.
14-student Workshops (7/2 – 7/20 = days); they prepare (and duplicate) daily problem sets; they proofread notes and program journal articles, and they write evaluations; they offer constructive criticism; they attend the afternoon Prime Time Theorems (a 51-minute math-club type talk, over half given by visitors) and give 1 or 2. The staffing and most of those teaching opportunities (chores) apply to the 2nd half of the program when students take a Maxi-course (8:30-11, M-S, and 7:30 – 10:30, M-F, 7/23 – 8/10). During the 2nd half of the program, students also take, consecutively, 2 Mini-courses, which meet from 11:17 until 12:30 for 7 days and which have no attached problem sessions; many minis are created or co-created by junior staff. Except for Kelly and Susan (who are on call) the staff live in the dorm (Enfield this year), join students for meals and recreational activities, provide transportation and counseling and supervision for students, and help to get the program to sleep around 11:17. There is virtually no hope of getting any research done or of maintaining an outside social life. In spite of (with some because of) the preceding, the job is exhilarating as well as exhausting; we have repeaters, and there are a lot of good math teachers out there who credit HCSSiM with teaching them to teach. Categories: math education ## HCSSiM Workshop day 17 July 21, 2012 This is a continuation of this, where I take notes on my workshop at HCSSiM. Magic Squares First Elizabeth Campolongo talked about magic squares. She exhibited a bunch, including these classic ones which I found here: Then we noted that flipping or rotating a magic square gives you another one, and also that the trivial magic square with all “1”s should also count as a crappy kind of magic square. Then, for 3-by-3 magic squares, Elizabeth showed that knowing 3 entries (say the lowest row) would give you everything.
This is in part because if you add up all four “lines” going through the center, you get the center guy four times and everything else once, or in other words everything once and the center guy three times. But you can also add up all the horizontal lines to get everything once. The first sum is 4C, if C is the sum of any line, and the second sum is 3C, so subtracting we get that C is three times the center, or the center is just C/3. This means if you have the bottom row, you can also infer C and thus the center. Then once you have the bottom and the center you can infer the top, and then you can infer the two sides. After convincing them of this, Elizabeth explained that the set of magic squares was actually a vector space over the rational numbers, and since we’d exhibited 3 linearly independent ones and since we’d proved we can have at most 3 degrees of freedom for any magic square, we actually had a basis. Finally, she showed them one of the “all prime magic squares”: The one with the “17” in the corner of course. She exhibited this as a sum of the basis we had written down. It was very cool. The game of Set My man Max Levit then whipped out some Set cards and showed people how to play (most of them already knew). After assigning numbers mod 3 to each of the 4 dimensions, he noted that taking any two cards, you’d be able to define the third card uniquely so that all three form a set. Moreover, that third card is just the negative of the sum of the first two, or in other words the sum of all three cards in a set is the vector $(0, 0, 0, 0).$ Next Max started talking about how many cards you can have where no three of them form a set. He started in the case where you have only two dimensions (but you’re still working mod 3). There are clearly at most 4, with a short pigeonhole argument, and he exhibited 4 that work.
He moved on to 3 and 4 dimensions and showed you could lay down 9 in 3 and 20 in 4 dimensions without forming a set (picture from here), which one of our students eagerly demonstrated with actual cards: Finally, Max talked about creating magic squares with sets, tying together his awesome lecture with Elizabeth’s. A magic square of sets is also generated by 3 non-collinear cards, and you get the rest from those three placed anywhere away from a line: Probability functions on lattices Josh Vekhter then talked about probability distributions as functions from posets of “event spaces” to the real line. So if you roll a 6-sided die, for example, you can think of the event “it’s a 3, 4, or 6” as being above the event “it’s a 3”. He talked about a lattice as a poset where there’s always an “or” and an “and”, so there’s always a common ancestor and child for any two elements. Then he talked about the question of whether that probability function distributes in the way it “should” with respect to “and” and “or”, and explained how it doesn’t in the case of the two-slit experiment. He related this failure of the distributive law for the probability function to the concept of the convexity of the space of probability distributions (keeping in mind that we actually have a vector space of possible probability functions on a given lattice, can we find “pure” probability distributions that always take the values of 0 or 1 and which form a kind of basis for the entire set?). This is not my expertise and hopefully Josh will fill in some details in the coming days. King Chicken I took over here at the end and discussed some beautiful problems related to flocks of chickens and pecking orders, which can be found in this paper. It was particularly poignant for me to talk about this because my first exposure to these problems was my entrance exam to get into this math program in 1987, when I was 14.
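Elizabeth’s center-of-the-square fact and Max’s third-card rule are both easy to check by machine. Here’s a short sketch of my own (the Lo Shu square and the two Set cards are my examples, not ones from class): the center of a 3-by-3 magic square is a third of the line sum, and the three cards of a set sum to the zero vector mod 3.

```python
# Elizabeth's fact: in a 3x3 magic square with line sum C, center = C/3.
# The classic Lo Shu square (my example; any magic square works):
sq = [[2, 7, 6],
      [9, 5, 1],
      [4, 3, 8]]
C = sum(sq[0])
# Every row, column, and diagonal sums to C...
lines = (sq
         + [list(col) for col in zip(*sq)]
         + [[sq[i][i] for i in range(3)],
            [sq[i][2 - i] for i in range(3)]])
assert all(sum(line) == C for line in lines)
# ...and the center entry is exactly C/3.
assert sq[1][1] * 3 == C

# Max's fact: the unique third card completing a set with cards a and b
# (vectors mod 3) is -(a + b) mod 3, so the three cards sum to zero.
a = (0, 1, 2, 1)  # made-up cards for illustration
b = (2, 2, 0, 1)
third = tuple((-(x + y)) % 3 for x, y in zip(a, b))
assert all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, third))
```

Nothing in the magic-square check uses the entries being 1 through 9, so the same test works on Elizabeth’s all-prime squares too.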
Notetaking algorithm Finally, I proved that the notetaking algorithm we started the workshop with 3 weeks before actually always works. I did this by first remarking that, as long as it really was a function from the people to the numbers, i.e. it never gets stuck in an infinite loop, then it’s a bijection for sure because it has an inverse. To show you don’t get stuck in an infinite loop, consider it as a directed graph instead of an undirected graph, in which case you put down arrows on all the columns (and all the segments of columns) from the names to the numbers, since you always go down when you’re on a column. For the pipes between columns, you can actually squint really hard and realize there are two directed arrows there, one from the top left to the lower right and the other from the top right to the lower left. You’ve now replaced the “T” connections with directed arrows, and if you do this to every pipe you’ve removed all of the “T” connections altogether. It now looks like a bowl of yummy spaghetti. But that means there is no chance to fall into an infinite loop, since that would require a branching vertex, and we’ve removed them all. Note that after doing this you do get lots of spaghetti circles falling out of your diagram. Categories: math education ## HCSSiM Workshop day 16 July 20, 2012 This is a continuation of this, where I take notes on my workshop at HCSSiM. Two days ago Benji Fisher came to my workshop to talk about group laws on rational points of weird things in the plane. Here are his notes. #### Degenerate Elliptic Curves in the plane Conics in the plane Pick $t \in \mathbb{R}$. Consider the line $L_t$ given by $y = tx + t$. Where does $L_t$ intersect the y-axis? Where does it intersect the unit circle, $x^2 + y^2 = 1?$ Substitute $y = tx + t$ into the equation for the circle to get $(1 + t^2) \, x^2 + (2 t^2) \, x + (t^2 - 1) = 0.$ After you do it the hard way, notice that you already know one root: $x = -1$.
The sum of the roots is $-2 t^2 / (1 + t^2)$ and their product is $(t^2 - 1) / (t^2 + 1)$. Either way, you get $x = (1 - t^2) / (1 + t^2)$. From $y= t x + t$ you get $y = 2 t / (1 + t^2)$. Do not forget that if you are given $x$ and $y$, then $t = y / (x + 1)$. This gives you a 1-1 correspondence between the points of the circle (conic) and the points of the $y$-axis (including $\infty$). The formula for Pythagorean triples also falls out. So do the formulae for the tangent-of-the-half-angle substitution, which is useful when integrating rational functions of $\sin$ and $\cos$: set $x = \cos(\theta),$ $y = \sin(\theta),$ and $t = \tan(\theta/2).$ There are several ways you can generalize this. You could project a sphere onto a plane. I want to consider replacing the circle with a cubic curve. The problem is that the expected number of intersections between a line and a cubic is 3, so you get a 2-to-1 correspondence in general. That is interesting, too, but for now I want to consider the cases where the curve has a double point and I choose lines through that point. Such lines should intersect the cubic in one other point, giving a 1-1 correspondence between the curve (minus the singular point) and a line (minus a small number of points). cubic with a cusp Let $C$ be the curve defined by $y^2 = x^3$, which has a sharp corner at the origin. This type of singularity is called a cusp. Let $L_t$ denote the line through the origin and the point $(1,t)$, which has slope $t$. 1. Sketch the curve $C$. Does Mathematica do a good job of this? 2. The line $L_t$ meets the curve $C$ at the origin and in one other point, $(x, y)$. Find formulae for $x$ and $y$ in terms of $t$ and for $t$ in terms of $x$ and $y$. 3. You almost get a digestion (bijection) between $C$ and the line $x = 1$. What points do you have to omit from each in order to get a digestion? 4. Three points $(x_1, y_1), (x_2, y_2)$, and $(x_3, y_3)$ on $C$ are collinear.
What condition does this impose on the corresponding values of $t$? The calculations are easier than for the circle: $x=t^2, y = t^3$, and $t = y/x.$ You have to remove the point $(0, 0)$ from the curve and the point $(1, 0)$ from the line. Well, you do not have to remove them, but the formula for $t$ in terms of $x$ and $y$ is fishy if you do not. The point at infinity (the third point of intersection between the curve and the $y$-axis) corresponds to itself. The condition for collinearity is $\frac{y_3 - y_1}{x_3 - x_1} = \frac{y_2 - y_1}{x_2 - x_1}.$ Plug in the expressions in terms of the $t$ coordinates, chug away, and you should get $t_1^{-1} + t_2^{-1} + t_3^{-1} = 0$. If we let $u = t^{-1}$, then $u$ is the natural coordinate on the line $y=1$. (Maybe I should use that line to start with instead of $x=1$.) cubic with a node This problem deals with the curve $C$ defined by $y^2 = x^3 + x^2$, which intersects itself at the origin. This type of singularity is called a node. Let $L_t$ denote the line through the origin and the point $(1,t)$, which has slope $t$. 1. Sketch the curve $C$. Does Mathematica do a good job of this? 2. The line $L_t$ meets the curve $C$ at the origin and in one other point, $(x, y)$. Find formulae for $x$ and $y$ in terms of $t$ and for $t$ in terms of $x$ and $y$. 3. You almost get a digestion (bijection) between $C$ and the line $x = 1$. What points do you have to omit from each in order to get a digestion? 4. Three points $(x_1, y_1), (x_2, y_2)$, and $(x_3, y_3)$ on $C$ are collinear. What condition does this impose on the corresponding values of $t$? Once again, $t = y / x$. The usual method gives $x = t^2 - 1$ and $y = t^3 - t$. In order to get a 1-1 correspondence, you need to delete the singular point $(0,0)$ from the curve and the points $(1,1)$ and $(1,-1)$ from the line. The lines through the origin with slope $\pm1$ are tangent to the curve.
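The parametrization of the nodal cubic, together with the collinearity condition worked out below, can be sanity-checked with exact rational arithmetic. A quick sketch; the particular values $t_1 = 2$, $t_2 = 3$ are arbitrary choices of mine:

```python
from fractions import Fraction as F

def node_point(t):
    """The non-origin intersection of the line y = t*x with y^2 = x^3 + x^2."""
    return (t * t - 1, t**3 - t)

def collinear(p, q, r):
    """True iff the three points lie on one line (cross product vanishes)."""
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

t1, t2 = F(2), F(3)
t3 = -(1 + t1 * t2) / (t1 + t2)    # forced by t1*t2 + t1*t3 + t2*t3 + 1 = 0

points = [node_point(t) for t in (t1, t2, t3)]
for x, y in points:
    assert y * y == x**3 + x**2    # each point really lies on the curve
assert collinear(*points)          # and the three points are collinear

# Multiplicative form: with u = (t-1)/(t+1) the condition becomes u1*u2*u3 = 1.
u = lambda t: (t - 1) / (t + 1)
assert u(t1) * u(t2) * u(t3) == 1
```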
If you plug away, you should find that the condition for collinearity is: $t_1 t_2 + t_1 t_3 + t_2 t_3 + 1 = 0$. Remember our curve $C$ (not to be Maximum Confused with $\mathbb{C}$)? It’s the one whose equation is $y^2 = x^3 + x^2.$ The condition for 3 points to be collinear on $C$ is: $t_1 t_2 + t_2 t_3 + t_3 t_1 +1 = 0.$ Claim: in terms of $u = \frac{t-1}{t+1}$, the condition is $u_1 u_2 u_3 = 1.$ (Hint: In terms of $u$, $t = \frac{1+u}{1 - u}$.) If you start with the known equation and replace the $t$’s with $u$’s, it takes some work to get down to the condition $u_1 u_2 u_3 = 1$. If you start with the LHS of the desired equation, there is a shortcut: $u_1 u_2 u_3 = \frac{t_1 - 1}{t_1 + 1} \cdot \frac{t_2 - 1}{t_2 + 1} \cdot \frac{t_3-1}{t_3+1}$. But then we have $u_1 u_2 u_3 = \frac{t_1 t_2 t_3 + t_1 + t_2 + t_3 - (t_1 t_2 + t_1 t_3 + t_2 t_3 + 1)}{t_1 t_2 t_3 + t_1 + t_2 + t_3 + (t_1 t_2 + t_2 t_3 + t_3 t_1 + 1)} = 1.$ Note that the change-of-variable formulae are fractional linear transformations. Geometrically, $t$ is the natural coordinate on the line $x=1$ and $u$ is the natural coordinate on the line $x + y = 1$. To get from one line to the other, just draw a line through the origin. One interpretation of our results for the curves $y^2 = x^3$, $y^2 = x^3 + x^2$, is that it gives us a way to add points on the line $y = 1$ (with coordinate $t^{-1}$) and multiply points on the line $x + y = 1$ (with coordinate $u$) using just a straightedge, provided that we are allowed to draw lines through the point at infinity. In other words, we are allowed to draw vertical lines. I will continue to refer to this as “straightedge only.” I forgot to mention: you need to have the curve provided as well. Explicitly, this is the rule. Given two points on the line, find the corresponding point on the curve. Draw a line through them. The third point of intersection with the curve will be the (additive or multiplicative) inverse of the desired sum.
Draw a vertical line through this point: the second point of intersection (or the third, if you count the point at infinity as being the second) will be the desired sum/product. More precisely, it is the point on the curve corresponding to the desired sum/product, so you have to draw one more line. Another interpretation is that we get a troupe (or karafiol) structure on the points of the curve, excluding the singular point but including the point at infinity. The point at infinity is the identity element of the troupe (group). The construction is exactly the same as in the previous paragraph, except you leave off the parts about starting and ending on the line. smooth cubic Similarly, we get a troupe (group) structure on any smooth cubic curve. For example, consider the curve $E$ defined by $E: y^2 = x^3 - x .$ Start with two points on the curve, say $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$. The equation for the line through these two points is $\frac{y - y_1}{x - x_1} = \frac{y_2 - y_1}{x_2 - x_1} = m .$ Solving for $y$ gives the equation $y = mx + b$ where $b = \frac{x_2 y_1 - x_1 y_2}{x_2 - x_1}.$ Plug this into the equation defining $E$ and you get, after a little algebra, $x^3- m^2 x^2 - (2 m b + 1)x - b^2 = 0 .$ Luckily, we know how to solve the general cubic. (Just kidding! Coming back to what we did with the circle, we observe that $x_1$ and $x_2$ are two of the roots, so we can use either of the relations $x_1 + x_2 + x_3 = m^2$ or $x_1 x_2 x_3 = b^2$.) The result is: $x_3 = m^2 - x_1 - x_2 = \frac{x_1^2 x_2 + x_1 x_2^2 - x_1 - x_2 - 2 y_1 y_2}{(x_1 - x_2)^2},$ where the final form comes from squaring $m$ and using the relations $y_1^2 = x_1^3 - x_1$ and $y_2^2 = x_2^3 - x_2$.
At this point, a little patience (or a little computer-aided algebra) gives $y_3 = \frac{(3 x_1 x_2^2 + x_2^3 - x_1 - 3 x_2) y_1 + (-x_1^3 - 3 x_1^2 x_2 + 3 x_1 + x_2) y_2}{(x_1 - x_2)^3}.$ Do not forget the final step: $P_3 = (x_3, y_3)$ is the third point of intersection, but to get the sum we need to draw a vertical line, or reflect in the $x$-axis: $P_1 \oplus P_2 = \overline{P_3} = (x_3, - y_3).$ Now, I give up. With a lot of machinery, I could explain why the group law is associative. (The identity, as I think I said above, is the point at infinity. Commutativity and inverses are clear.) What I can do with a different machine (Mathematica or some other computer-algebra system) is verify the associative law. I could also do it by hand, given a week, but I do not think I would learn anything from that exercise. Categories: math education ## HCSSiM Workshop day 15 July 19, 2012 This is a continuation of this, where I take notes on my workshop at HCSSiM. Aaron was visiting my class yesterday and talked about Sandpiles. Here are his notes: Sandpiles: what they are For fixed $m,n \ge 2$, an $m \times n$ beach is a grid with some amount of sand in each spot. If there is too much sand in one place (four or more grains), it topples, sending one grain of sand to each of its neighbors (thereby losing 4 grains of sand). If this happens on an edge of the beach, one of the grains of sand falls off the edge and is gone forever. If it happens at a corner, 2 grains are lost. If there’s no toppling to do, the beach is stable. Here’s a 3-by-3 example I stole from here: Do stable $m \times n$ beaches form a group? Answer: well, you can add them together (pointwise) and then let that stabilize until you’ve got back to a stable beach (not trivial to prove this always settles! But it does). But is the sum well-defined? In other words, if there is a cascade of toppling, does it matter what order things topple? Will you always reach the same stable beach regardless of how you topple?
Turns out the answer is yes, if you think about these grids as huge vectors and toppling as adding other 2-dimensional vectors with a '-4' in one spot, a '1' in each of the four spots neighboring that, and '0' elsewhere. It inherits commutativity from addition in the integers. Wait! Is there an identity? Yep, the beach with no sand; it doesn’t change anything when you add it. Wait!! Are there inverses? Hmmmmm…. Lemma: There is no way to get back to all 0's from any beach that has sand. Proof. Imagine you could. Then the last topple would have to end up with no sand. But every topple adds sand to at least 2 sites (4 if the toppling happens in the center, 3 if on an edge, 2 if on a corner). Equivalently, nothing will topple unless there are at least 4 grains of sand total, and toppling never loses more than 2 grains, so you can never get down to 0. Conclusion: there are no inverses; you cannot get back to the '0' grid from anywhere. So it’s not a group. Try again Question: Are there beaches that you can get back to by adding sand? There are: on a 2-by-2 beach, the '2' grid (which means a 2 in every spot) plus itself is the '4' grid, and that topples back to the '2' grid if you topple every spot once. Also, the '2' grid adds to the $(2, 0, 0, 2)$ grid and gets it back. Wow, it seems like the '2' grid is some kind of additive identity, at least for these two elements. But note that the '1' grid plus the '2' grid is the '3' grid, which doesn’t topple back to the '1' grid. So the '2' grid doesn’t work as an identity for everything. We need another definition. Recurrent sandpiles A stable beach C is recurrent if (i) it is stable, and (ii) given any beach A, there is a beach B such that C is the stabilization of A+B. We just write this C = A+B but we know that’s kind of cheating.
Alternative definition: a stable beach C is recurrent if (i) it is stable, and (ii) you can get to C by starting at the maximum '3' grid, adding sand (call that part D), and toppling until you get something stable. C = '3' + D. It’s not hard to show these definitions are equivalent: if you have the first, let A = '3'. If you have the second, and if A is stable, write A + A' = '3', and we have B = A' + D. Then convince yourself A doesn’t need to be stable. Letting A=C we get a beach E so C = C+E, and E looks like an identity. It turns out that if you have two recurrent beaches, then if you can get back to one using a beach E then you can get back to the other using that same beach E (if you look for the identity for C + D, note that (C+D)+E = (C+E) + D = C+D; all recurrent beaches are of the form C+D so we’re done). Then that E is an identity element under beach addition for recurrent beaches. Is the identity recurrent? Yes it is (why? this is hard and we won’t prove it). So you can also get from A to the identity, meaning there are inverses. The recurrent beaches form a group! What is the identity element? On a 2-by-2 beach it is the '2' grid. The fact that it didn’t act as an identity on the '1' grid was caused by the fact that the '1' grid isn’t itself recurrent so isn’t considered to be inside this group. Try to guess what it is on a 2-by-3 beach. Were you right? What is the order of the '2' grid as a 2-by-3 beach? Try to guess what the identity looks like on a 198-by-198 beach. Were you right? Here’s a picture of that: We looked at some identities on other grids, and we watched an app generate one. You can play with this yourself. (Insert link). The group of recurrent beaches is called the m-by-n sandpile group. I wanted to show it to the kids because I think it is a super cool example of a finite commutative group where it is hard to know what the identity element looks like.
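Toppling and beach addition are easy to play with in code. A minimal Python sketch (the function names are mine); it confirms the 2-by-2 facts above: '2' + '2' stabilizes back to the '2' grid, the '2' grid acts as an identity on the recurrent beach $(2, 0, 0, 2)$, but not on the non-recurrent '1' grid:

```python
def stabilize(beach):
    """Topple sites with 4+ grains until stable; grains pushed off the edge vanish."""
    m, n = len(beach), len(beach[0])
    b = [row[:] for row in beach]
    unstable = True
    while unstable:
        unstable = False
        for i in range(m):
            for j in range(n):
                if b[i][j] >= 4:
                    unstable = True
                    b[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < m and 0 <= j + dj < n:
                            b[i + di][j + dj] += 1
    return b

def beach_add(b1, b2):
    """Pointwise addition followed by stabilization."""
    summed = [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(b1, b2)]
    return stabilize(summed)

two = [[2, 2], [2, 2]]
assert beach_add(two, two) == two                            # '2' + '2' topples back to '2'
assert beach_add([[2, 0], [0, 2]], two) == [[2, 0], [0, 2]]  # identity on (2,0,0,2)
assert beach_add([[1, 1], [1, 1]], two) == [[3, 3], [3, 3]]  # ...but not on the '1' grid
```

By the abelian sandpile property mentioned above, the order in which `stabilize` topples sites does not affect the result.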
You can do all sorts of weird things with sandpiles, like adding grains of sand randomly and seeing what happens. You can even model avalanches with this. There’s a sandpile applet you can go to and play with. Categories: math education
http://en.wikipedia.org/wiki/Essential_supremum
# Essential supremum and essential infimum In mathematics, the concepts of essential supremum and essential infimum are related to the notions of supremum and infimum, but the former are more relevant in measure theory, where one often deals with statements that are not valid everywhere, that is for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero. Let (X, Σ, μ) be a measure space, and let f : X → R be a function defined on X and with real values, which is not necessarily measurable. A real number a is called an upper bound for f if f(x) ≤ a for all x in X, that is, if the set $\{x\in X: f(x)>a\}$ is empty. In contrast, a is called an essential upper bound if the set $\{x\in X: f(x)>a\}$ is contained in a set of measure zero, that is to say, if f(x) ≤ a for almost all x in X. Then, in the same way as the supremum of f is defined to be the smallest upper bound, the essential supremum is defined as the smallest essential upper bound. More formally, the essential supremum of f, ess sup f, is defined by $\mathrm{ess } \sup f=\inf \{a \in \mathbb{R}: \mu(\{x: f(x) > a\}) = 0\}\,$ if the set $\{a \in \mathbb{R}: \mu(\{x: f(x) > a\}) = 0\}$ of essential upper bounds is not empty, and ess sup f = +∞ otherwise. Exactly in the same way one defines the essential infimum as the largest essential lower bound, that is, $\mathrm{ess } \inf f=\sup \{b \in \mathbb{R}: \mu(\{x: f(x) < b\}) = 0\}\,$ if the set of essential lower bounds is not empty, and as −∞ otherwise. ## Examples On the real line consider the Lebesgue measure and its corresponding σ-algebra Σ. Define a function f by the formula $f(x)= \begin{cases} 5, & \text{if } x=1 \\ -4, & \text{if } x = -1 \\ 2, & \text{ otherwise. } \end{cases}$ The supremum of this function (largest value) is 5, and the infimum (smallest value) is −4. However, the function takes these values only on the sets {1} and {−1} respectively, which are of measure zero.
Everywhere else, the function takes the value 2. Thus, the essential supremum and the essential infimum of this function are both 2. As another example, consider the function $f(x)= \begin{cases} x^3, & \text{if } x\in \mathbb Q \\ \arctan{x} ,& \text{if } x\in \mathbb R\backslash \mathbb Q \\ \end{cases}$ where Q denotes the rational numbers. This function is unbounded both from above and from below, so its supremum and infimum are ∞ and −∞ respectively. However, from the point of view of the Lebesgue measure, the set of rational numbers is of measure zero; thus, what really matters is what happens in the complement of this set, where the function is given as arctan x. It follows that the essential supremum is π/2 while the essential infimum is −π/2. Lastly, consider the function f(x) = x3 defined for all real x. Its essential supremum is +∞, and its essential infimum is −∞. ## Properties • $\inf f \le \liminf f \le \mathrm{ess } \inf f \le \mathrm{ess }\sup f \le \limsup f \le \sup f$ • $\mathrm{ess }\sup (fg) \le (\mathrm{ess }\sup f)(\mathrm{ess }\sup g)$ whenever both terms on the right are nonnegative.
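For a purely atomic (discrete) measure, the definitions above reduce to ignoring atoms of measure zero. A small Python illustration mirroring the first example (the representation of f as value/weight pairs is my own device): the ordinary sup and inf see the values 5 and −4, but the essential versions do not, because those values live on sets of measure zero:

```python
def ess_sup(pairs):
    """Essential supremum of a function on a discrete measure space, given as
    (value, measure of the set where f takes that value) pairs."""
    supported = [v for v, mu in pairs if mu > 0]
    return max(supported) if supported else float("-inf")

def ess_inf(pairs):
    """Essential infimum, defined symmetrically."""
    supported = [v for v, mu in pairs if mu > 0]
    return min(supported) if supported else float("inf")

# f = 5 on a set of measure 0, f = -4 on a set of measure 0, f = 2 elsewhere.
f = [(5, 0.0), (-4, 0.0), (2, 1.0)]
assert max(v for v, _ in f) == 5           # ordinary sup
assert min(v for v, _ in f) == -4          # ordinary inf
assert ess_sup(f) == 2 and ess_inf(f) == 2
```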
http://math.stackexchange.com/questions/279172/can-a-closed-subset-of-an-affine-scheme-have-empty-interior
# Can a closed subset of an affine scheme have empty interior? I have an inclusion of closed subsets $V(J) \subset V(I)$ in an affine scheme $Spec(R)$ with the property that $V(I) = V(J) \cup \partial V(I)$. I would like to conclude that $V(J)=V(I)$. (Here $\partial$ denotes the boundary of a set.) Note that the interior of $V(I)$ is contained in $V(J)$ and hence the closure of the interior of $V(I)$ is contained in $V(J)$. But recall from basic topology that the closure of the interior of a closed set need not be the closed set. (The closed set could have empty interior for example.) The most important special case for me is when $V(J) = \emptyset$. In this case I have a closed subset $V(I)$ of an affine scheme with the property that $V(I) = \partial V(I)$ and I would like to claim that $V(I) = \emptyset$. Is there any hope for me? I'm willing to assume that the ring $R$ is Noetherian if that will help. Does an affine scheme always have non-empty interior? Is $V(I) = \partial V(I)$ only possible if $V(I)$ is empty? I may be grasping at phantoms but if there is something about the topology of affine schemes that could make this work I would be elated. Thanks for your attention. - Your question "Does an affine scheme always have non-empty interior?" has a trivial answer: every non-empty topological space $T$ has non-empty interior, namely $T$ ! – Georges Elencwajg Jan 15 at 10:21 ## 1 Answer You should beware that the intuition for topology obtained from calculus, say, is completely inadequate for the Zariski topology. For example if $R$ is a domain (noetherian or not) the affine scheme $X=Spec(R)$ is irreducible and any non-empty open subset $U\subset X$ is dense. This implies that any closed subset $V(I)\subsetneq X$ has empty interior and thus that $\partial V(I)=V(I)$, which is exactly the opposite of what you thought, namely that $\partial V(I)=V(I)$ would imply that $V(I)=\emptyset$. 
To put it bluntly, the concept of the boundary of a subset is essentially useless for a scheme endowed with its Zariski topology.
http://mathoverflow.net/revisions/8662/list
The Ramanujan behavior is typically explained by the fact that the imaginary quadratic field $\mathbb{Q}(\sqrt{-163})$ has class number one (together with an integrality property of the j-function - see Wikipedia). Since there are only finitely many such imaginary quadratic fields, you can't really expect to have infinitely many similar phenomena (at least admitting a similar explanation).
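The near-integer behavior in question is that of the Ramanujan constant $e^{\pi\sqrt{163}}$, which falls short of $640320^3 + 744$ by roughly $7.5\times 10^{-13}$. Double-precision floats cannot resolve a gap that small (seeing it requires arbitrary-precision arithmetic, e.g. mpmath), but they do confirm the agreement to about 13 significant digits:

```python
import math

x = math.exp(math.pi * math.sqrt(163))   # the Ramanujan constant, ~2.6254e17
n = 640320**3 + 744                      # the integer it almost equals

# Agreement far beyond coincidence: the relative difference is tiny.
assert abs(x - n) / n < 1e-11
```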
http://www.physicsforums.com/showthread.php?s=b2619247c84c22d3a343105875150e27&p=4169451
Physics Forums ## The Klein-Gordon equation with a potential Quote by andrien Second one makes sense, first one does not. The KGE is the Euler-Lagrange eqn. for the action defined as $S = \int{d^d x \, \mathcal{L}}$, where the Lagrangian (density) is: [tex] \mathcal{L} = \mathcal{L_0} + \mathcal{L}_{\mathrm{int}} [/tex] Here $\mathcal{L_0} = \partial^{\mu} \Psi^{\ast} \, \partial_{\mu} \Psi - m^2 \Psi^{\ast} \, \Psi$ is the Lagrangian for the free complex scalar field, and $\mathcal{L}_{\mathrm{int}} = -\frac{\lambda}{2! \, 2!} (\Psi^{\ast} \Psi)^2$ is the quartic self-interaction. Varying w.r.t. $\Psi^{\ast}$, one obtains: [tex] \frac{\delta S}{\delta \Psi^{\ast}(x)} = -\left\lbrace \partial^2 + m^2 \right\rbrace \Psi(x) - \frac{\lambda}{2! 2!} 2 (\Psi^{\ast} \Psi) \, \Psi [/tex] Equate this variation to zero and you get: [tex] \left\lbrace \partial^2 + m^2 \right\rbrace \Psi = - \frac{\lambda}{2} (\Psi^{\ast} \Psi) \, \Psi [/tex] Apart from a wrong sign, the r.h.s. has the form I gave in my previous post. I thought your first term was also some sort of Lagrangian, which is not possible. Quote by andrien I thought your first term was also some sort of Lagrangian, which is not possible. How can a Lagrangian term be present in a Klein-Gordon equation?! Yes you are right with the correct potential form, though I assumed it was in relation to the Higgs boson as this is what the topic is on, though I'm guessing a potential of this form is just an example to familiarise ourselves with. Quote by Dickfore How can a Lagrangian term be present in a Klein-Gordon equation?! I did not read it closely, just glanced at it. Hey I think the form of the potential was supposed to be this: $$\lambda\Psi_{f'}^{*} \Psi_{i'}$$ such that: $$(\frac{\partial^2 }{\partial t^2}-\bigtriangledown^2+m^2)\Psi=\lambda\Psi_{f'}^{*} \Psi_{i'}$$ Is this right?
Recognitions: Gold Member For a complex scalar field theory you want a partition function of the form $$Z = \int D\phi \exp \left[ i \int d^4 x \frac{1}{2} \{ \partial \phi \partial \phi^{\dagger} + m^2 \phi \phi^{\dagger} \} + V(\lambda , \phi, \phi^{\dagger}) + J^{\dagger}\phi + J\phi^{\dagger} \right]$$ you can then assume weak coupling and expand the potential order by order in lambda, and replace the moment integrals with variations with respect to J, $\delta / \delta J$, like $$\int dx \, x^2 e^{-\alpha x} = \frac{\partial^2}{\partial \alpha^2} \int dx \, e^{-\alpha x}$$ doing this, along with a regularization procedure, you can make all the connected Green's functions and Feynman diagrams. Hmm I don't understand most of that - though it does sound like the route we are taking. Does the equation I just posted make sense at all though? Are you implying that it is and it's a complex scalar field? I haven't actually had any substantial teaching of quantum field theory, we're supposed to be taking a simplified route so forgive me for not understanding most of the terms you speak of (i.e. the partition function - I've heard of that but not applied to this!) Thanks! Recognitions: Gold Member something doesn't seem right... If I understand you correctly you are writing $$V(\Psi, \Psi^{\dagger}) = \lambda \Psi \Psi^{\dagger}$$ then the Lagrangian would be $$\mathcal{L} = \frac{1}{2}\partial \Psi^{\dagger} \partial \Psi + \frac{1}{2}m^2 \Psi^{\dagger}\Psi - \lambda \Psi^{\dagger} \Psi$$ If you solve Lagrange's equations for $\Psi$ or $\Psi^{\dagger}$ you will not get what you have posted. If you want to get a vertex with three lines the potential needs to be cubic in field quantities. But with a complex field it will be asymmetrical too, since you will write down a term like $\Psi\Psi^{\dagger}\Psi$ which is uneven in the two fields. For real fields it would be like $\phi^{3}/3!$.
Well it probably isn't right knowing me; my professor said we model the perturbation as the interaction between some 2 scalar particles, which is of the form of some coupling strength lambda times the product of the complex conjugated final state wavefunction of particle 2 and the initial state wavefunction of particle 2... I shall send him an email tomorrow and ask! I apologise if I'm not coherent/making sense. Thanks, Tom Recognitions: Gold Member please report back! :) I want to know too. haha will do! I think that interaction term posted by dickfore is the right one, it seems that in post #27 that is what is being said. Hey, My professor says an equation of this form: $$(\frac{\partial^2 }{\partial t^2}-\bigtriangledown^2+m^2)\Psi=\lambda\Psi_{f'}^{*} \Psi_{i'}\Psi$$ will give a 4-particle interaction where I need to make the 'f'' state on the rhs an external (not sure what this means yet) and this is useful for looking at the Higgs. Whereas an equation of the form: $$(\frac{\partial^2 }{\partial t^2}-\bigtriangledown^2+m^2)\Psi=\lambda\Psi_{f'}^{*} \Psi_{i'}$$ is a 3-particle interaction where they meet at a junction and some internal scalar particle is propagated. Recognitions: Gold Member sure, I don't have a problem with that. But I don't think that is a consequence of a potential of the form $V=\lambda \Psi^{\dagger}\Psi$. I think the potential must have a different form if the r.h.s of the equation looks like that. Well he included a delta sign next to the potential, i.e. it was δV=λψ*ψ, are you supposing it should be δV=λψ*?
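As a quick numerical sanity check on equations of this type, the nonlinear Klein-Gordon equation $\left(\partial_t^2 - \nabla^2 + m^2\right)\Psi = -\frac{\lambda}{2}|\Psi|^2\Psi$ can be integrated with a leapfrog finite-difference scheme. A sketch in plain Python (all grid parameters and the initial condition are my own illustrative choices, not from the thread):

```python
import cmath

# 1+1D lattice with periodic boundaries; leapfrog time stepping of
#   psi_tt = psi_xx - m^2 psi - (lam/2) |psi|^2 psi
N, dx, dt, m, lam, steps = 64, 0.3, 0.05, 1.0, 0.5, 200

x = [(i - N // 2) * dx for i in range(N)]
psi_prev = [cmath.exp(-xi * xi) for xi in x]   # Gaussian lump, initially at rest
psi = list(psi_prev)

for _ in range(steps):
    psi_next = []
    for i in range(N):
        lap = (psi[(i + 1) % N] + psi[(i - 1) % N] - 2 * psi[i]) / dx**2
        rhs = lap - m * m * psi[i] - 0.5 * lam * abs(psi[i])**2 * psi[i]
        psi_next.append(2 * psi[i] - psi_prev[i] + dt * dt * rhs)
    psi_prev, psi = psi, psi_next

# With dt well below dx the scheme is stable and the field stays bounded.
assert all(abs(z) < 10 for z in psi)
```

The leapfrog step `2*psi - psi_prev + dt**2 * rhs` is the standard second-order discretization of $\partial_t^2\psi$; the time step satisfies the CFL condition `dt < dx`.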
http://physics.stackexchange.com/tags/noethers-theorem/new
# Tag Info ## New answers tagged noethers-theorem ### Topological vs. non-topological noetherian charges Let there be given a physical system. What charges are Noetherian and what charges are topological often depend on the precise action formulation and field content of the physical system, see also this Phys.SE post. To simplify the discussion, let us assume that the action formulation is fixed, and below definitions will then refer to this fixed ... ### Noether's identities Let us use Einstein's summation convention, DeWitt's condensed notation, and follow Ref. 1. Let there be given an action $S_0[\varphi]$ for a classical field theory. Let $$\tag{17.1a} \delta\varphi^i~=~ R^i{}_{\alpha} \varepsilon^{\alpha}$$ be infinitesimal gauge transformations. Here $\varepsilon^{\alpha}$ are infinitesimal gauge parameters, and ...
http://physics.stackexchange.com/questions/16382/the-shape-of-the-earth-ldots
# The shape of the earth$\ldots$ ....is an oblate spheroid because centrifugal force stretches the tropical regions to a point farther from the center than they would be if the planet did not rotate. So we all learned in childhood, and it seems perfectly obvious. However... I am at $45^\circ$ north latitude. Does that mean • An angle with vertex at the center of the earth and one ray pointing toward the equator at the same longitude as mine, and one ray pointing toward me, is $45^\circ$ (that would mean I'm closer to the north pole than to the equator, measured along the surface, as becomes obvious if you think about really extreme oblateness); or • The normal to the ground where I stand makes a $45^\circ$ angle with the normal to the ground at the equator at the same longitude (this puts me closer to the equator than to the north pole); or • something else? If for the sake of simplicity we assume the earth is a fluid of uniform density, it seems one's potential energy relative to the center of the earth would be the same at all points on the surface. • Would the force of gravity at my location, assuming no rotation, be directly toward the center? Would it be just as strong as if the whole mass of the earth were at the center and my location were just as far from the center as it is now? • Would the sum of the force of gravity (toward the center or in whichever direction it is) and the centrifugal force (away from the axis) be normal to the surface at my location? • Given all this, how does one find the exact shape? • How well does that shape in this idealized problem match that of the actual earth? - 4 This belongs on physics, although I would still suggest to edit it to be a single question. – Phira Oct 30 '11 at 18:27 @Phira If I wondered whether the Cartesian vortex theory or Newton's theory is closer to the truth, that would certainly belong to physics and not to mathematics.
But if you accept Newton's physics, all that remains to answer the questions above is mathematics. (BTW, this problem was the "crucial test" of the Cartesian-versus-Newtonian theories. The former predicted an oblong rather than an oblate earth. Geodetic measurements in 1733, paid for by the French taxpayers, showed it was indeed oblate, and the Cartesian theory did not survive that blow.) – Michael Hardy Oct 30 '11 at 18:39

1 @MichaelHardy If you accept all of theoretical physics, then most problems of theoretical physics "are mathematics". But it is much more likely that a physicist knows physics. – Phira Oct 30 '11 at 18:47

1 @MichaelHardy What makes you think that you will get a better answer here than on the physics site? – Phira Oct 30 '11 at 18:48

– Qmechanic♦ Oct 30 '11 at 20:21

## 2 Answers

1. At $45^\circ$ (N) latitude you are closer to the North pole; to picture this, just draw the Earth as a much more extreme oblate spheroid.

2. The shape of the Earth is set by the outward rotational (centrifugal) force and the inward gravitational force balancing at every point so that the surface is a level surface (except for local geography); the net force, i.e. "down," is everywhere normal to the surface (except for local geology).

There is a useful intro to the difference between mean sea level and the Earth's surface at ESRI (makers of the popular GIS/mapping software).

-

+1 for answering some of the questions and adding the link. Most of the question remains unanswered. – Michael Hardy Nov 1 '11 at 15:26

It is not popular to begin an answer with a question, but I will do so anyway: how do you know that you are at $45^\circ$ north latitude? There are several latitudes defined. If you got the latitude from a map, from Google Earth or Google Maps, or read it on your GPS receiver, then it is geographic or geodetic latitude. In that case $45^\circ$ is the angle between the normal to the reference ellipsoid and the equatorial plane.
The reference ellipsoid (for example WGS84, used by Google and GPS) is a mathematical construction, an easily managed approximation to the actual shape of the Earth. If you got the latitude by your own direct measurement of the height of the north celestial pole above the horizon, or of its distance from the zenith (supposing you took into account refraction, aberration, nutation, etc.), then it is astronomic latitude. In that case $45^\circ$ is the angle between the normal to the geoid and the equatorial plane. The geoid is the equipotential surface passing through your point of observation. Finally, there is also geocentric latitude: the angle between the equatorial plane and a line passing through your place and the Earth's center. If the Earth were a perfect non-rotating sphere and the distribution of its mass had spherical symmetry, then all three latitudes would have the same value at every point of the globe. See Coordinate systems by R. Knippers for additional details. If you require that the gravitational force at every point of the globe point toward the Earth's center, then the mass distribution must have spherical symmetry. Since rotation is axially symmetric, you cannot get the required result once rotation is added, unless the mass distribution is so special (and unphysical) that it compensates for the effect of rotation. Only if the shape of the Earth closely follows an equipotential surface (for example, if it is covered by ocean) is the net force (gravity plus the centrifugal one) perpendicular to the surface at every place. -
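The difference between geodetic and geocentric latitude described above can be made concrete numerically. The sketch below uses the standard WGS84 flattening and the usual ellipsoid-of-revolution relation $\tan\psi = (1-e^2)\tan\phi$; the function name is my own, not from any particular library.

```python
import math

# WGS84 reference ellipsoid (standard defining value of the flattening)
WGS84_F = 1 / 298.257223563       # flattening f
E2 = WGS84_F * (2 - WGS84_F)      # first eccentricity squared, e^2 = f(2 - f)

def geocentric_latitude(geodetic_deg):
    """Convert geodetic latitude (angle of the ellipsoid normal with the
    equatorial plane) to geocentric latitude (angle of the line through
    the Earth's center), both in degrees."""
    phi = math.radians(geodetic_deg)
    return math.degrees(math.atan((1 - E2) * math.tan(phi)))

print(geocentric_latitude(45.0))  # ~44.81 deg: roughly 11.5 arcmin less than 45
```

So a point whose map latitude is $45^\circ$ lies on a line through the center that makes only about $44.8^\circ$ with the equatorial plane, which is why the two definitions in the question differ.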
http://mathhelpforum.com/differential-equations/169555-find-values-m.html
Thread:

1. find the values of m

OK, I wanted to know if someone could check my work and let me know if they got the same thing. I'm supposed to find the values of m so that the function y = x^m is a solution to x^2y'' - 3xy' - y = 0. I got m = 0. I just took the first and second derivatives of the function, plugged them into the equation, and solved. Thanks in advance.

2. You should get $m=2 \pm \sqrt{5}$: substituting $y=x^m$ gives $x^2\,m(m-1)x^{m-2}-3x\,mx^{m-1}-x^m=(m^2-4m-1)x^m=0$, so $m^2-4m-1=0$. The two solutions are $y_1=x^{2+\sqrt{5}}$ and $y_2=x^{2-\sqrt{5}}$. Hence, the general solution is $y=c_1x^{2+\sqrt{5}}+c_2x^{2-\sqrt{5}}$.

3. Yeah, I was (or am) a little tired; I've been trying to get ready for this exam all night.. lol. Thanks for your reply.

4. Generalization: in case we don't know there are solutions of the form $y=x^m$, and considering that we have an Euler equation, the substitution $x=e^u$, with $u$ the independent variable, transforms the given equation into a linear homogeneous differential equation with constant coefficients.

Fernando Revilla
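As a quick numerical cross-check of reply 2 above, one can plug $y = x^m$ into the equation with the derivatives of $x^m$ written out analytically and confirm that the residual vanishes at $m = 2 \pm \sqrt{5}$ (and not at $m = 0$). This is just a sketch; the function name is mine.

```python
import math

def residual(m, x):
    # Plug y = x**m into x^2 y'' - 3x y' - y, using the exact derivatives
    # y' = m x^(m-1) and y'' = m(m-1) x^(m-2).
    y = x ** m
    yp = m * x ** (m - 1)
    ypp = m * (m - 1) * x ** (m - 2)
    return x ** 2 * ypp - 3 * x * yp - y

roots = [2 + math.sqrt(5), 2 - math.sqrt(5)]  # roots of m^2 - 4m - 1 = 0
for m in roots:
    print(m, residual(m, 1.7))  # residuals vanish up to rounding error
```

Running the same check with m = 0 gives a residual of -1 at every x, confirming that m = 0 is not a solution.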
http://mathlesstraveled.com/2011/11/11/fun-with-repunit-divisors/
Explorations in mathematical beauty ## Fun with repunit divisors Posted on November 11, 2011 by In honor of today’s date (11/11/11), here’s a fun little problem (and some follow-up problems) I’ve seen posed in a few places (for example, here is a very similar problem). If I recall correctly, it was also a problem on a midterm for my number theory course in college. It’s a lovely little problem with an equally lovely solution—once you understand the solution you’ll start to see many other situations where a similar “technique” can be applied. A number of the form $1111\dots 111$, with any number of repeated ones, is called a (base-10) repunit. Prove that every prime other than 2 or 5 is a divisor of some repunit. In other words, if you make a list of the prime factorizations of repunits, every prime other than 2 or 5 will show up eventually. For example, here are the prime factorizations of the repunits up to length 20: $\begin{array}{rcl} 11 & = & 11\\ 111 & = & 3 \times 37\\ 1111 & = & 11 \times 101\\ 11111 & = & 41 \times 271\\ 111111 & = & 3 \times 7 \times 11 \times 13 \times 37\\ 1111111 & = & 239 \times 4649\\ 11111111 & = & 11 \times 73 \times 101 \times 137\\ 111111111 & = & 3 \times 3 \times 37 \times 333667\\ 1111111111 & = & 11 \times 41 \times 271 \times 9091\\ 11111111111 & = & 21649 \times 513239\\ 111111111111 & = & 3 \times 7 \times 11 \times 13 \times 37 \times 101 \times 9901\\ 1111111111111 & = & 53 \times 79 \times 265371653\\ 11111111111111 & = & 11 \times 239 \times 4649 \times 909091\\ 111111111111111 & = & 3 \times 31 \times 37 \times 41 \times 271 \times 2906161\\ 1111111111111111 & = & 11 \times 17 \times 73 \times 101 \times 137 \times 5882353\\ 11111111111111111 & = & 2071723 \times 5363222357\\ 111111111111111111 & = & 3 \times 3 \times 7 \times 11 \times 13 \times 19 \times 37 \times 52579 \times 333667\\ 1111111111111111111 & = & 1111111111111111111\\ 11111111111111111111 & = & 11 \times 41 \times 101 \times 271 \times 3541 \times 
9091 \times 27961 \end{array}$

It's clear that no repunit is divisible by 2 or 5 (why?), but at first sight it may seem unlikely that all the other primes are divisors of some repunit! There doesn't seem to be a lot of rhyme or reason in the above list of factorizations. The above problem is really the heart of the matter, but once you solve that, here are some fun follow-up problems:

1. Compute a repunit which is divisible by 2011 (you'll probably want to use a computer!).
2. Prove that every prime other than 2 or 5 is actually a divisor of infinitely many repunits.
3. Prove that every integer which is not divisible by 2 or 5 is a divisor of some repunit.
4. Generalize all of the above to repunits in bases other than 10.
5. What's so special about repunits here? Can you generalize to other sorts of numbers?

I'll post some solutions in a few days. Happy 11/11/11!

This entry was posted in arithmetic, challenges, modular arithmetic, number theory, primes and tagged divisors, primes, repunit. Bookmark the permalink.

### 16 Responses to Fun with repunit divisors

1. Sue VanHattum says:
• Brent says: Yes, that's where I saw it! Thanks!
2. John Baker says: I enjoyed this post. Your "repunit" table can be generated by the following J sentences:

repunits=. 'x' ,~&.> (2 + i. 19) #&.> '1'
repunits ,. q:@". &.> repunits

For more about the J programming language see: http://www.jsoftware.com/jwiki/FrontPage
• Brent says: Neat, thanks! I do like J, I have actually written about it before. I actually generated the table using a bit of Haskell (I can show it if you're interested but would have to reconstruct it, since I just did it as a one-off thing in an interpreter and didn't save it).
3. Matt Gardner Spencer says: **(you'll probably want to use a computer!)** I'll say. It looks to me as if you need to go back to 1881 before you find a year which is a factor of 111…111 with fewer than 20 digits.
As another potentially neat question, what is the connection between repunits, primes and the sequence that can be found at http://oeis.org/A001913 • Brent says: Ah, nifty. I have some conjectures from a few minutes of computational fiddling but haven’t sat down to prove anything yet… 4. Fergal Daly says: Re: Compute a repunit which is divisible by 2011 (you’ll probably want to use a computer!) Is this not trivial from what you learn while proving that all the primes occur? • Brent says: I’m not sure what your point is. Whether it is trivial probably depends on your definition of “trivial”, and on your method of proof, and on your facility with getting a computer to do the requisite computation. But even if it is trivial, so what? • Fergal Daly says: By “trivial”, I mean that computation can be done by anyone who can count to 2011, not that I’m a genius or anything. So the “so what” is that there is definitely no computer needed. Perhaps we have different proofs. In fact, my proof only works nicely for p > 10 (or whatever base you’re working in), which makes me think we do. • Fergal Daly says: Actually, the proof works just fine for p < 10. • Brent says: Well, I actually know two proofs: one leads to a very quick solution but requires the knowledge of some number theory. The other proof works on “first principles” but leads to a much longer computation. I was assuming most readers would find the second one. • Fergal Daly says: I think I did it the number theory way but from first principles (which is not hard). Looking at the factorisations you list leads to that. While I was doing it, I realised that the argument was familiar to me and then spotted the connection to the well known theorem. 5. David Radcliffe says: It follows from Fermat’s Little Theorem that $(10^{2010} - 1)/9$ is divisible by 2011, but you would probably want to use a computer to find the smallest repunit that is divisible by 2011. 6. 
Pingback: Fun with repunit divisors: proofs | The Math Less Traveled
7. Pingback: Fun with repunit divisors: more solutions | The Math Less Traveled
8. Pingback: NO SUCH LUCK or Numerology for Idiots « Mathspig Blog

Comments are closed.
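For follow-up problem 1 in the post above, the search can be done entirely with small modular arithmetic: since the repunits satisfy $R_{k+1} = 10 R_k + 1$, one only needs to track the remainder of $R_k$ modulo 2011. A sketch in Python (the function name is mine):

```python
def repunit_length_divisible_by(p):
    """Smallest k such that the length-k repunit 11...1 is divisible by p.
    Assumes gcd(p, 10) = 1, so such a k is guaranteed to exist."""
    r, k = 1 % p, 1
    while r != 0:
        r = (10 * r + 1) % p   # repunit recurrence R_{k+1} = 10*R_k + 1, mod p
        k += 1
    return k

k = repunit_length_divisible_by(2011)
print(k, int("1" * k) % 2011)  # the length found, and a direct check (second value is 0)
```

For a prime $p$ other than 3, the $k$ found this way is the multiplicative order of 10 mod $p$, so for $p = 2011$ it must divide $2010$, in line with David Radcliffe's Fermat's-Little-Theorem observation in the comments.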