Columns: url (string, length 17–172), text (string, length 44–1.14M), metadata (string, length 820–832)
http://math.stackexchange.com/questions/21638/definition-of-the-j-invariant-of-an-elliptic-curve?answertab=oldest
# Definition of the j-invariant of an elliptic curve It seems that most introductory books on elliptic curves simply state the definition of the j-invariant of an elliptic curve without giving any background on how that definition was conceived. Of course, for moduli reasons, it is clear why one might want such an invariant, but the actual formula has always seemed quite mysterious to me. Does anyone know of a nice self-contained source that explains the definition of the j-invariant? - The complete definition, or just the definition up to a constant? Matt has given a nice explanation for why one should care about the j-invariant up to a constant, but the precise form of the j-invariant has to do with its number-theoretic properties and is not easy to explain in an introduction. On the other hand, books on elliptic curves don't want to give you the wrong definition! So there is a dilemma here. – Qiaochu Yuan Feb 12 '11 at 9:27 ## 2 Answers Actually, Ravi Vakil's notes give a great reason (modulo the strange constant out front). This explanation is given somewhere in the Foundations of Algebraic Geometry notes here http://math.stanford.edu/~vakil/preprints.html#coursenotes. I'm just going off memory, so don't attribute any errors I make to him. Notice that once you prove that every elliptic curve has an affine model given by $y^2=x(x-1)(x-\lambda)$, then you know the $j$-invariant has to be independent of the different $\lambda$ you get by permuting the roots. You can just explicitly work it out that the six choices of $\lambda$ then are $\lambda, \frac{1}{\lambda}, 1-\lambda, \frac{1}{1-\lambda}, \frac{\lambda}{\lambda -1}, \frac{\lambda -1}{\lambda}$. An obvious first choice for an invariant with respect to all of these is to multiply them all together. You get $1$; oops, that isn't a good invariant, since it not only doesn't depend on the choice of $\lambda$ but is also independent of the curve. So you could also try adding them all; oops again, they come in pairs each adding to $1$, so they add to $3$. So let's try the next best thing, which is to sum the squares. If you check, this is exactly the $j$ invariant (but without the constant, which is stuck in for characteristic $2$ reasons). - 1 In other words, we have an action of the symmetric group of degree $3$ on the field of rational functions in $\lambda$ and we look at the invariant subfield. We know from Lüroth's theorem that that subfield is the field of rational functions of some rational function of $\lambda$. Standard ideas of invariant theory help us find that function. Moreover, in this way we see why it is enough to consider the usual invariant: all others can be deduced from it. – Mariano Suárez-Alvarez♦ Feb 12 '11 at 5:18 The $j$-invariant has the following classical interpretation. Consider a model $E\subset{\Bbb P}^2$ of the elliptic curve (one knows that $E$ is a cubic). Let $P\in E$. There are 4 lines through $P$ that are tangent to $E$ and one can show that the set of cross-ratios $c$ of these 4 lines is independent of $P$. Then $$j=\frac{(c^2-c+1)^3}{c^2(c-1)^2}$$ is invariant under the 24 permutations of the 4 tangents (which give up to 6 different values of the cross-ratio) and is the $j$-invariant of the elliptic curve, up to normalization. When the curve $E$ is given in Legendre form $y^2=x(x-1)(x-\lambda)$ the formula for the $j$-invariant is obtained by taking $P$ to be the point at infinity and the 4 tangents to be the line at infinity and the lines $x=0$, $x=1$ and $x=\lambda$. -
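A quick way to sanity-check the computation in the first answer is to run it symbolically. The sketch below is my own illustration (not from the thread); it assumes SymPy and, for the comparison in the last line, the standard Legendre-form formula $j(\lambda)=256\,(\lambda^2-\lambda+1)^3/(\lambda^2(\lambda-1)^2)$, which the thread itself never writes down.

```python
import sympy as sp

l = sp.symbols('lamda')  # lambda

# The six values of lambda obtained by permuting the branch points.
six = [l, 1/l, 1 - l, 1/(1 - l), l/(l - 1), (l - 1)/l]
S = sp.simplify(sum(v**2 for v in six))   # sum of their squares

# Invariance under the two substitutions generating the S_3 action:
print(sp.simplify(S - S.subs(l, 1/l)))    # 0
print(sp.simplify(S - S.subs(l, 1 - l)))  # 0

# Comparison with the Legendre-form j-invariant (assumed, not from the thread):
j = 256 * (l**2 - l + 1)**3 / (l**2 * (l - 1)**2)
print(sp.simplify(S - (j/128 - 3)))       # 0, i.e. S = j/128 - 3
```

So the sum of squares is indeed an $S_3$-invariant rational function of $\lambda$, and it recovers the $j$-invariant up to an affine normalization, which is the point of the answer.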
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415364861488342, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/154870-quadratic-graphing-problem-need-help-print.html
# Quadratic Graphing Problem, Need Help • August 31st 2010, 10:48 AM Xeaxy Quadratic Graphing Problem, Need Help Hi, I need someone to help guide me through the following problem: A football player kicks a ball at an angle of 37° above the ground with an initial speed of 20 meters per second. The height, h, as a function of the horizontal distance traveled, d, is given by: $h(d) = 0.75d - 0.0192d^2$ A. Graph the path of the ball as follows B. When the ball hits the ground, how far is it from the spot where the football player kicked it? C. What is the maximum height the ball reaches during its flight? D. What is the horizontal distance the ball has traveled when it reaches its maximum height? • August 31st 2010, 08:15 PM Educated A. To graph the equation using pen and paper, plot points and then join them up. E.g. if x = 1, then y = 0.75192. If x = 2, y = 0.75384. And so on... B. Use the quadratic formula: $d = \dfrac{-b\pm\sqrt{b^2-4ac}}{2a}$ You will get 2 answers, use the one most appropriate for the distance travelled. C. Differentiate the equation, then substitute in h'(d) = 0. (Gradient 0) This will give you the distance. Substitute this value into your original equation to find the maximum height. D. The horizontal distance travelled is the area under the graph, so you have to integrate it. Can you figure it out for yourself?
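For a numeric cross-check, here is a short Python sketch of parts B-D. It is my own illustration and assumes the reconstructed height function $h(d) = 0.75d - 0.0192d^2$ shown above; the landing point comes from the nonzero root of $h$, and the maximum height from the vertex of the parabola (the quadratic formula in reply B gives the same root).

```python
# Assumed (reconstructed) height function: h(d) = 0.75 d - 0.0192 d^2,
# with h and d in metres.
a, b = -0.0192, 0.75

d_land = -b / a                     # B: nonzero root of h(d) = 0, ~39.1 m
d_peak = -b / (2 * a)               # D: distance at maximum height, ~19.5 m
h_max = a * d_peak**2 + b * d_peak  # C: maximum height, ~7.3 m

print(round(d_land, 1), round(d_peak, 1), round(h_max, 1))
```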
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042597413063049, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/174431-polynomial-equation.html
Thread: 1. polynomial equation Calculate all polynomials $p(x)$ for which $(p(x))^2+(p(x-1))^2+1=(p(x)-x)^2$ 2. Originally Posted by jacks Calculate all polynomials $p(x)$ for which $(p(x))^2+(p(x-1))^2+1=(p(x)-x)^2$ If you put x=0 in this equation then you get $(p(-1))^2+1=0$. This cannot happen if the polynomial has real coefficients. So there are no such real polynomials. If complex coefficients are allowed, then $p(-1)=\pm\sqrt{-1}$, but I don't see where to go from there. 3. Originally Posted by jacks Calculate all polynomials $p(x)$ for which $(p(x))^2+(p(x-1))^2+1=(p(x)-x)^2$ If we presume that p(x) is going to have at least real coefficients we can expand the defining equation to get $\displaystyle p(x+1) = \frac{1}{2}(x + 1) - \frac{p^2(x)}{2(x + 1)}$ The above says that p(x) must contain a factor of x + 1 and since no real p(0) exists (à la Opalg) we must also require that p(x) contains a factor of x as well, so we have $\displaystyle p(x) = x(x + 1)q(x)$ where q(x) is a polynomial. This expression needs to be run through the original equation to get restrictions on q(x), but I can't make any sense out of the resulting equation. -Dan
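Opalg's $x=0$ substitution is easy to verify symbolically. The following SymPy sketch is my own illustration, not part of the thread:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')

# Setting x = 0 in p(x)^2 + p(x-1)^2 + 1 = (p(x) - x)^2 cancels the p(0)^2
# terms on both sides, leaving p(-1)^2 + 1 = 0.
diff = (p(x)**2 + p(x - 1)**2 + 1) - (p(x) - x)**2
print(sp.expand(diff.subs(x, 0)))        # p(-1)**2 + 1

# No real number squares to -1, so no real-coefficient polynomial works;
# over the complex numbers the constraint only forces p(-1) = +/- I.
y = sp.symbols('y', real=True)
print(sp.solve(y**2 + 1, y))             # [] (no real solution)
print(sp.solve(sp.symbols('z')**2 + 1))  # [-I, I]
```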
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9488778114318848, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/41998/orthogonal-in-the-b-norm?answertab=active
# Orthogonal in the B Norm? If you have two generalized eigenvectors $\varphi_1 , \varphi_2$ (with different eigenvalues) of a matrix $A$, then they will be orthogonal in the B norm. In this context, I do not understand what is meant by the "B norm" where $B$ is a matrix of the same dimensions as $A$. What does it mean to be orthogonal in another matrix's norm? - 1 Could you provide a reference for this? It is probably easier to figure it out in context. – Calle May 29 '11 at 18:27 ## 1 Answer You have $A \varphi_i = \lambda_i B \varphi_i$. I'm assuming $A$ and $B$ are symmetric, with $B$ positive definite. Then $\lambda_1 \varphi_1^T B \varphi_2 = \varphi_1^T A \varphi_2 = \lambda_2 \varphi_1^T B \varphi_2$ with $\lambda_1 \ne \lambda_2$, so $\varphi_1^T B \varphi_2 = 0$. This says that $\varphi_1$ and $\varphi_2$ are orthogonal in the inner product $(u,v) = u^T B v$ corresponding to the matrix $B$, which might be abbreviated as "in the $B$ norm". - Thanks for the information! I'm trying to parse your response because I'm a novice in Linear Algebra. – sam May 29 '11 at 18:39 When you say that, "$\varphi_1$ and $\varphi_2$ are orthogonal in the inner product $(u,v)=u^TBv$", are $u,v$ just $\varphi_1 , \varphi_2$? – sam May 29 '11 at 18:53 In the definition of the inner product, $u$ and $v$ are any vectors in the vector space. You are using this definition with $u = \varphi_1$, $v = \varphi_2$. – Robert Israel May 30 '11 at 2:23
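A small numerical illustration of the answer (my own sketch, assuming NumPy and SciPy): `scipy.linalg.eigh` solves the generalized problem $A\varphi = \lambda B\varphi$ for symmetric $A$ and positive-definite $B$, and the eigenvectors it returns are $B$-orthonormal, i.e. orthogonal in the inner product $u^T B v$.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                       # symmetric A
N = rng.standard_normal((4, 4))
B = N @ N.T + 4 * np.eye(4)       # symmetric positive-definite B

w, Phi = eigh(A, B)               # generalized eigenproblem A phi = lambda B phi

# Columns of Phi satisfy phi_i^T B phi_j = delta_ij: "orthogonal in the B norm".
print(np.allclose(Phi.T @ B @ Phi, np.eye(4)))   # True
print(np.allclose(Phi.T @ A @ Phi, np.diag(w)))  # True
```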
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388154745101929, "perplexity_flag": "head"}
http://mathoverflow.net/questions/63265/what-are-maps-between-proper-classes
## What are “maps” between proper classes? When defining a functor (between categories), I am usually told that it assigns to each object of the source category an object of the target category. I do not find this very satisfactory since we are dealing with proper classes here. Judging by the definition, it must be possible to have the concept of a "map" between proper classes. I would like to know what exactly that is and how it is defined. I have attempted to read some books on set theory in search of an answer, but they all treat classes very briefly and never mention the possibility of having anything like a map between two of them. I would be just as happy if you could point me to a book where this is explained. - 5 This is becoming a very much non-research question. I think perhaps we can close the question. On the other hand, foundational questions on MO seem to be generally below research level and have mostly educational value, which would be a reason to keep this one. – Andrej Bauer Apr 28 2011 at 8:35 A class is given by a formula (which it defines). If $C,D$ are classes, then a map $C \to D$ is simply a class $f$ which defines a "map" $C \to D$ in the obvious sense: For all $x \in C$, i.e. all elements satisfying the formula defining $C$, there is exactly one $y \in D$, i.e. an element satisfying the formula defining $D$, such that $y = f(x)$, i.e. $(x,y)$ satisfies the formula defining $f$. As an exercise, prove that $V \to V, x \mapsto x + 1 := x \cup \{x\}$ is a map, where $V$ is the universe. – Martin Brandenburg Apr 28 2011 at 8:46 1 My problem was that I simply did not know you could easily define ordered pairs of classes. – Jesko Hüttenhain Apr 28 2011 at 8:51 1 -1. I agree with Andrej Bauer. – Sergei Tropanets Apr 30 2011 at 14:16 ## 3 Answers http://en.wikipedia.org/wiki/Ordered_pair#Morse_definition Definition: A relation $R$ is functional if and only if for all ordered pairs $\langle x,z\rangle$ and $\langle y,w\rangle$ in $R$, if $x=y$ then $z=w$. Definition: If $R$ is a relation, `$\operatorname{Range}(R) = \{y : (\exists x)(\langle x,y\rangle \in R)\}$`. Definition: A map is an ordered pair $\langle R,C\rangle$ such that $R$ is a functional relation and $\operatorname{Range}(R) \subseteq C$. - I suppose it wouldn't hurt to mention that all of what you've written applies to classes equally well as to sets. – Andrej Bauer Apr 28 2011 at 8:12 I suppose you're right. – Ricky Demer Apr 28 2011 at 8:15 1 So, I am guessing, this is only possible if I use Morse-Kelley and not with ZFC alone? My problem was that I thought $C\times D$ was not really defined if $C$ and $D$ were proper classes. – Jesko Hüttenhain Apr 28 2011 at 8:18 It's possible with ZFC as well. Of course $C \times D$ is defined for classes, just use the same definition as for sets. – Andrej Bauer Apr 28 2011 at 8:34 Well, now, that helped clarify a lot. Thank you both a lot, that trivial little miscomprehension caused me years of agony. I would really like to accept both answers, but apparently you can't do that. – Jesko Hüttenhain Apr 28 2011 at 8:37 Your question sounds like your preferred foundation is set theory, so let me speak in terms of set theory. 
A map $f : A \to B$ between sets is a functional relation, i.e., a subset $f \subseteq A \times B$ satisfying: 1. Totality: $\forall x \in A . \exists y \in B . (x,y) \in f$ 2. Single-valuedness: $\forall x \in A . \forall y, z \in B . ((x,y) \in f \land (x,z) \in f \implies y = z)$. We usually write $f(x) = y$ instead of $(x,y) \in f$. The same definition applies to classes. A map $F : C \to D$ between classes $C$ and $D$ is a subclass of $C \times D$ which is total and single-valued. Exercise (allowed since this is not a research question): the domain and codomain of a function $F : C \to D$ cannot be recovered from the functional relation $F$. (If $C$ and $F$ are empty, how do we recover $D$?) Therefore, the object part of a functor must be a triple $(C,D,F)$ rather than just $F$. But how can we form ordered triples of classes? - Wait, can't the domain be recovered from the relation by Totality? I mean, if this really works for classes the same way it does for sets. Then, a functor would only have to be a pair $(F,D)$ and I just learned that that's easily possible. – Jesko Hüttenhain Apr 28 2011 at 8:43 Yes, you can record just $D$, but why would you do such an ugly thing? – Andrej Bauer Apr 28 2011 at 8:44 Well then, beauty it is! If you can have ordered pairs of classes, however, it should not be a problem to have any finite ordered tuple of classes, right? (So in particular, triples) – Jesko Hüttenhain Apr 28 2011 at 8:49 Right, but please don't confused the product $C \times D$ of two classes with an ordered pair $(C,D)$ of classes. That's what the exercise was for. – Andrej Bauer Apr 28 2011 at 9:19 1 I kan't sspell. – Andrej Bauer Apr 28 2011 at 9:20 Instead of MK set theory + Morse ordered pair definition, is ARC set theory (F.A.Muller, "Sets, Classes, and Categories", 2001, Bibliography PDF) with the usual Kuratowski ordered pair an option? ARC supposedly proves the existence of the $n$-th power-class of the set universe V for any `$n \in \mathbb{N}$`, and all so-called "good" classes provably exist, good classes being the class of all sets and "the powerclass and the union-class of a good class, and the union-class, the intersectionclass, the complement-class, the pair-class, the ordered pair-class, and the Cartesian product-class of any finite number of members of one good class". According to the cited paper, ARC is consistent relative to ZFC plus a strongly inaccessible cardinal axiom. I think that this set theory looks quite nice, so I'm wondering why it didn't take off at all. The formal proofs should be in Muller's PhD thesis, which I don't have access to. -
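The totality and single-valuedness conditions above are easy to experiment with on finite sets. The sketch below is my own illustration and of course only covers sets (proper classes cannot be represented as data); the helper name `is_map` is hypothetical.

```python
def is_map(f, domain, codomain):
    """Check that a set f of ordered pairs is a functional relation
    from domain to codomain (totality + single-valuedness)."""
    total = all(any(a == x for (a, b) in f) for x in domain)
    single_valued = all(
        b1 == b2 for (a1, b1) in f for (a2, b2) in f if a1 == a2
    )
    in_codomain = all(b in codomain for (a, b) in f)
    return total and single_valued and in_codomain

A, B = {0, 1, 2}, {"x", "y"}
print(is_map({(0, "x"), (1, "x"), (2, "y")}, A, B))            # True
print(is_map({(0, "x"), (1, "x")}, A, B))                      # False: not total
print(is_map({(0, "x"), (0, "y"), (1, "x"), (2, "y")}, A, B))  # False: not single-valued
```

Note that, as the answer's exercise points out, the set of pairs alone does not determine the codomain, which is why it is passed in separately here.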
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473958611488342, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/232604/solve-the-given-cauchy-problem-on-the-bounded-interval
# Solve the given Cauchy problem on the bounded interval $$u_{tt}-16u_{xx}=0, \quad 0<x<3, \quad 0 < t < \infty,$$ $$u(x,0)=x(3-x), \quad u_t(x,0)=\cos(\pi x), \quad 0<x<3,$$ $$u(0,t)=u(3,t)=t, \quad 0 < t < \infty.$$ Determine $u(x,t)$ in terms of $x$ and $t$ for $(x,t)$ in regions 1, 2, and 3 determined by the characteristics. So I know for region 1 $u(x,t)$ can simply be found using d'Alembert's solution. What I am not sure about is regions 2 and 3. Suppose $P: (x,t)$ is in region 2. You form a characteristic quadrilateral having one vertex on the line $x=0$ and two vertices on the piece of the characteristic from the origin bounding region 1. You can find this by $u(P)=u(A)+u(B)-u(C)$ where $A$ is on $x=0$ and $C$ and $B$ are on the piece of the characteristic from the origin bounding region 1. My notes show how to find $u(A)$, $u(B)$, and $u(C)$, but I do not really understand it. - You have two contradictory conditions for $u(x,0)$. – Christopher A. Wong Nov 8 '12 at 2:20 What are regions 1, 2, 3? – Pragabhava Nov 8 '12 at 2:22 @Pragabhava: Region 1 is the triangle that has its base on $t=0$, its left side is $x-4t=0$, and its right side is $x+4t=3$. Region 2 is the triangle that has a side on $x-4t=0$ (the left side of region 1), $x=0$, and $x+4t=3$. Region 3 has a side on $x+4t=3$ (the right side of region 1), $x=3$, and $x-4t=0$. I hope that makes sense. – Sprock Nov 8 '12 at 2:44 @Christopher A. Wong: that was a typo. It was supposed to be $u_t(x,0)=\cos(\pi x)$. – Sprock Nov 8 '12 at 2:46 ## 1 Answer For any parallelogram $ABCD$ in the $xt$-plane bounded by four characteristic lines, the sums of the values of $u$ at opposite vertices are equal, that is $$u(A) + u(C) = u(B) + u(D)$$ Let $A = (x,t) \in \mbox{II}$, $B = (0,t_B)$, $C = (x_C,0)$ and $D = (x_D,t_D)$ as shown on the figure. Then $u(x,t) = u(0,t_B) - u(x_C,0) + u(x_D,t_D)$. Now, it's easy to see that \begin{align} t_B &= t - \frac{x}{4}\\ x_C &= 4 t - x\\ t_D &= \frac{x}{4}\\ x_D &= 4t \end{align} and then the solution on region II is $$u(x,t) = u\big(0,t-\tfrac{x}{4}\big) - u\big(4t -x,0\big) + u_I\big(4t,\tfrac{x}{4}\big),$$ where $u_I$ is the d'Alembert solution in region I. Can you do region III? Moreover, can you solve for all time? - How would the solution change if C and D were on the characteristic line $x-4t=0$? – Sprock Nov 8 '12 at 13:18 @Sprock Now that you know the solution on regions I and II, why don't you take the limit and see it yourself? – Pragabhava Nov 8 '12 at 16:25
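For region 1, the d'Alembert formula mentioned in the question can be checked symbolically. A minimal SymPy sketch (my own, assuming wave speed $c=4$ and the stated initial data; the boundary values $u(0,t)=u(3,t)=t$ are not needed there):

```python
import sympy as sp

x, t, s = sp.symbols('x t s', real=True)
c = 4
phi = lambda z: z * (3 - z)          # u(x, 0)
psi = lambda z: sp.cos(sp.pi * z)    # u_t(x, 0)

# d'Alembert's solution, valid in region 1 where only the initial data matter:
uI = (phi(x - c*t) + phi(x + c*t)) / 2 \
     + sp.integrate(psi(s), (s, x - c*t, x + c*t)) / (2*c)

print(sp.simplify(sp.diff(uI, t, 2) - c**2 * sp.diff(uI, x, 2)))  # 0: PDE holds
print(sp.simplify(uI.subs(t, 0) - phi(x)))                        # 0: u(x,0) correct
print(sp.simplify(sp.diff(uI, t).subs(t, 0) - psi(x)))            # 0: u_t(x,0) correct
```

The answer's formula for region II then only needs this $u_I$ together with the boundary data $u(0, t - x/4) = t - x/4$.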
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378477931022644, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=227680
Physics Forums complex analysis Also when trying to find the integral of 1/(8z^3 - 1) around the contour |z| = 1, I found the singularities to be 1/2, (1/2)exp(2πi/3), and (1/2)exp(4πi/3). What is the next step here? Do I just assume the integral is 6πi after using partial fractions to find the numerators of the 3 fractions? The answer is actually 0, but I don't understand how it was solved. I know it might have something to do with the rule that says the sum of the integrals over C1, C2 and C3 is equal to the integral over the outside contour, but how do I know the orientation of the three contours inside the big contour C? Thanks, hope I'm not asking too many questions. Find the function's residue at each of the poles inside the contour. Then use the residue theorem. Quote by Pere Callahan Find the function's residue at each of the poles inside the contour. Then use the residue theorem. We haven't covered the residue theorem yet. Is there any other way to solve it? Quote by soulsearching We haven't covered the residue theorem yet. Is there any other way to solve it? Of course there is. Find a parametrization of the contour and do it by hand. In general if you want to evaluate [tex] \int_C dz f(z) [/tex] where C is the contour, you can find a parametrization [tex] \gamma : [0,1]\to \mathbb{C} [/tex] such that $\gamma([0,1])=C$ (formalities omitted - you are certainly familiar with that) [tex] \int_0^1 dt f(\gamma(t))\gamma'(t) [/tex] This will give a not-too-easy but doable integral. Note however, that normally it works the other way around. If you have some definite real integral, you can sometimes try to interpret it as a complex integral over a closed contour and use the residue theorem to evaluate it, but that's a different story. Quote by soulsearching What is the next step here? Do I just assume the integral is 6πi after using partial fractions to find the numerators of the 3 fractions? The answer is actually 0, but I don't understand how it was solved. I know it might have something to do with the rule that says the sum of the integrals over C1, C2 and C3 is equal to the integral over the outside contour, but how do I know the orientation of the three contours inside the big contour C? Hi soulsearching! I'm a little confused (especially by the 6πi). Are you saying that you know how to integrate 1/(2z - 1) round a contour containing z = 1/2? If so, just use the same contour for all three partial fractions! Quote by tiny-tim Hi soulsearching! I'm a little confused (especially by the 6πi). Are you saying that you know how to integrate 1/(2z - 1) round a contour containing z = 1/2? If so, just use the same contour for all three partial fractions! Like after breaking up the fraction into 3 fractions using partial fractions, what's the next step? Thanks... :) Quote by soulsearching Like after breaking up the fraction into 3 fractions using partial fractions, what's the next step? erm … if you don't know how to integrate 1/(2z - 1) round a contour containing z = 1/2, then there is no next step … you're stuck (or, rather, you have to follow Pere Callahan's method)! So … do you know ? … you didn't say. 
Quote by tiny-tim erm … if you don't know how to integrate 1/(2z - 1) round a contour containing z = 1/2, then there is no next step … you're stuck (or, rather, you have to follow Pere Callahan's method)! So … do you know ? … you didn't say. I think it's going to be 2πi because the singularity (1/2) is inside the contour. Is this right? I did not get Pere Callahan's method. And why did you choose the contour as z=1/2? The question says integrate it around z=1. Quote by soulsearching I did not get Pere Callahan's method. And why did you choose the contour as z=1/2? The question says integrate it around z=1. No, I said a contour containing z = 1/2. You're getting confused between z = 1/2 (a point) and |z| = 1/2 (a circle) … I did warn you about that in another thread … you must write |z| when you mean it, or you'll lose track. The integral, of course, is the same for any contour that includes the same singularities. So you could, for example, choose the contour |z| = 100! And then integrate all three partial fractions round that. I think it's going to be 2πi because the singularity (1/2) is inside the contour. Is this right? erm … I can't remember! … but I know it's 1/2 times something.
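The claimed value of $0$ is easy to confirm numerically. The sketch below is my own (not from the thread); it parametrizes $|z| = 1$ as $\gamma(t) = e^{2\pi i t}$ and evaluates $\int_C f(z)\,dz = \int_0^1 f(\gamma(t))\,\gamma'(t)\,dt$ with a Riemann sum, which converges very quickly for a smooth periodic integrand.

```python
import numpy as np

def f(z):
    return 1.0 / (8 * z**3 - 1)

n = 20000
t = np.arange(n) / n                 # sample points in [0, 1)
gamma = np.exp(2j * np.pi * t)       # the unit circle |z| = 1
dgamma = 2j * np.pi * gamma          # gamma'(t)

integral = np.sum(f(gamma) * dgamma) / n
print(integral)   # ~ 0 + 0j, matching the stated answer of 0
```

This agrees with the residue-theorem route mentioned by Pere Callahan: the residues at the three poles inside $|z|=1$ sum to zero.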
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156884551048279, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/114828/first-countability-and-convergence
First countability and convergence I am trying to prove the following statement: Let $X$ and $Y$ be topological spaces. If $X$ is first countable and $f:X \rightarrow Y$ is a map such that $a_n \rightarrow a$ in $X$ implies $f(a_n)\rightarrow a$ in $Y$, then $f$ is continuous. I am missing something critical, as my proof, which I include below, does not use the fact that $X$ is first countable. What is it that I am missing? Flawed Proof: Assume $f$ is such a map. Let $U$ be an open set in $Y$. To prove $f$ is continuous, we must show $f^{-1}(U)$ is an open set in $X$. Let $a_n$ be a sequence converging to $a \in f^{-1}(U)$. By hypothesis, this implies $f(a_n)$ converges to $f(a) \in U$. So, there exists some $N \in \mathbb{N}$ such that $f(a_i) \in U$ for all $i \geq N$. This implies $a_i \in f^{-1}(U)$ for all $i \geq N$. This implies $f^{-1}(U)$ is open since every sequence converging to a point in $f^{-1}(U)$ is eventually contained in $f^{-1}(U)$. - In the statement of the theorem you want $f(a_n)\to f(a)$, not $f(a_n)\to a$. – Brian M. Scott Feb 29 '12 at 14:09 What does your last argument mean? – Ilya Feb 29 '12 at 14:10 The characterization of openness in the last sentence (which should read "every sequence $(a_n)$ converging to an $a\in f^{-1}(U)$..." uses it. – David Mitra Feb 29 '12 at 14:11 In this particular case, I find that proof by contradiction is easier. Try assuming on the contrary and everything else should follow easily. – Stuck_pls_help Feb 29 '12 at 14:16 2 Answers You are using the following fact: Characterization of openness in first countable spaces: In a first countable space, a set $U$ is open if and only if every sequence that converges to a point in $U$ is eventually contained in $U$. You appeal to this in the last sentence of your proof; so this is where you are using first countability. As for why the characterization of openness above relies on first countability, let's look at its proof. One way to prove the reverse implication of the Characterization of openness theorem (which is what you use in your proof) is to use the Characterization of closedness in first countable spaces: Characterization of closedness in first countable spaces: If $X$ is first countable and $A\subset X$, then $x\in\overline A$ if and only if there is a sequence $(x_n)$ contained in $A$ which converges to $x$. The proof of the forward implication of the Characterization of closedness theorem uses first countability in an essential way: If $x\in \overline A$, pick a countable decreasing nhood base $\{U_n\}$ at $x$. Then choosing $x_n\in U_n\cap A$ provides the required sequence (note that since $x\in \overline A$ the sets $U_n\cap A$ are nonempty). Now, back to the proof of the reverse implication of the Characterization of openness theorem. We proceed with the proof of the contrapositive: Let $A$ be a subset of $X$ that is not open. Then there is a point $x$ in $A$ that is also in the closure of $A^C$. From the Characterization of closedness theorem, there is a sequence in $A^C$ that converges to $x$. Such a sequence cannot be eventually contained in $A$. (One could also argue directly here: every nhood of $x$ contains points of $A^C$. So, select a decreasing countable nhood base at $x$ from which to find a sequence $(x_n)\subset A^C$ that converges to $x$.) Note that Brian M. Scott's answer shows that first countability is necessary for the reverse implication in the Characterization of openness theorem. 
- The problem is with your last assertion: the fact that every sequence converging to a point of $f^{-1}[U]$ is eventually inside $f^{-1}[U]$ does not in general imply that $f^{-1}[U]$ is open. In the ordinal space $[0,\omega_1]$, for instance, with the order topology, the set $A=\{\omega_1\}$ has the property that every sequence converging to $\omega_1$ is eventually inside $A$ (because every such sequence is eventually constant), but $A$ isn’t open. First countability is used to draw that final inference. However, that’s doing it the hard way. Try showing instead that if $f$ isn’t continuous, there must be a point $a\in X$ and a sequence $\langle a_n:n\in\mathbb{N}\rangle$ in $X$ converging to $a$ such that $\langle f(a_n):n\in\mathbb{N}\rangle$ does not converge to $f(a)$ in $Y$. It should be much easier to see where to use first countability of $X$ in this approach. - Upon looking at the proof a second time, I saw first countability was the condition needed to guarantee the final assertion. I used that condition because I had previously proven it. So you think proving the statement by contradiction is more illustrative in terms of showing the need for first countability? I will give it a go. Thank you. – Holdsworth88 Feb 29 '12 at 14:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9535197615623474, "perplexity_flag": "head"}
http://mathoverflow.net/questions/tagged/hurwitz-theory
## Tagged Questions 3 answers 183 views ### Automorphism of finite groups and Hurwitz spaces If $G$ is a finite group, embedded as a transitive subgroup of $S_n$ for some $n$, will every automorphism of $G$ extend to an inner automorphism of $S_n$? I'm trying to connect t … 1 answer 124 views ### holomorphic automorphisms of universal cover of configuration spaces Hello everyone, I have been trying (without success) to determine the following. Let $P$ denote the space of monic polynomials of degree $n$ with complex coefficients, which have … 1 answer 214 views ### Spaces parametrizing ramified covers of surfaces Let $\Sigma$ be a surface (let's say oriented and of finite type). We can consider the configuration space $F(\Sigma,n)$ of $n$ ordered distinct points on $\Sigma$, i.e. \$\Sigma^n\ …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8948900103569031, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/250920/open-cover-of-a-metric-space
# Open cover of a metric space I am trying to find a definition for the open cover of a metric space, but I cannot find it. So, if $X$ is a metric space and $A$ is a subset of $X$, then what is the definition of an open cover of $A$? Can anyone help? Thank you. - Do you know what an open set (in a metric space) is? – Adam Saltz Dec 4 '12 at 19:59 ## 2 Answers An open cover of $A$ in $X$ is simply a family $\mathscr{U}$ of open sets in $X$ such that $A\subseteq\bigcup\mathscr{U}$. A relatively open cover of $A$ as a subspace of $X$ is a family $\mathscr{U}$ of open sets in $A$, i.e., of sets of the form $U\cap A$ for some open $U$ in $X$. Example: Let $X=\Bbb R$, and let $A=(0,1)$. Then $$\mathscr{U}=\left\{\left(\frac1n,1\right):n\in\Bbb Z^+\right\}$$ is an open cover of $A$, because each $x\in A$ belongs to at least one member of $\mathscr{U}$. Specifically, if $x\in A$, then $x>0$, so there is a positive integer $n$ such that $\frac1n<x$, and $x\in\left(\frac1n,1\right)\in\mathscr{U}$. Finally, a set $U\subseteq X$ is open if and only if it is a union of open balls: for each $x\in U$ there is an $\epsilon_x>0$ such that $B(x,\epsilon_x)\subseteq U$, where $B(x,\epsilon_x)=\{y\in X:d(x,y)<\epsilon_x\}$. - Thanks, when you say "family of" open sets, do you mean union of open sets? – user49065 Dec 4 '12 at 20:02 @user49065: No, family, collection, and set all mean the same thing: $\mathscr{U}$ is a set of open sets in $X$. I used family only because a set of sets sounds awkward. – Brian M. Scott Dec 4 '12 at 20:06 @user49065: I’ve added a concrete example that may help make it clearer. – Brian M. Scott Dec 4 '12 at 20:14 Hi Brian. Did you mean to say $A\subseteq \cup \mathscr{U}$ in the first line? And $X$ instead of $M$? – Thomas E. Dec 5 '12 at 12:36 @ThomasE.: Yes, and yes; and thanks for catching them. – Brian M. Scott Dec 5 '12 at 21:19 An open cover of a subset $A \subseteq X$ is a collection $\{U_i\}_{i \in I}$ of open sets in $X$ such that $$A \subseteq \bigcup_{i \in I} U_i$$ -
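Brian M. Scott's example cover of $A=(0,1)$ can be made concrete in a few lines of Python (my own sketch): for any sample $x\in(0,1)$ we exhibit a member $\left(\frac1n,1\right)$ of $\mathscr{U}$ that contains it.

```python
import math

def cover_member(x):
    """Return an n such that x lies in the open interval (1/n, 1)."""
    assert 0 < x < 1
    return math.floor(1 / x) + 1   # then 1/n < x < 1

for x in [0.9, 0.5, 0.01, 1e-6]:
    n = cover_member(x)
    print(f"x = {x} lies in (1/{n}, 1)")
```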
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415585398674011, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Volume_of_Revolution&oldid=20363
# Volume of Revolution Solid of revolution. Field: Calculus. Image created by: Nordhr. Website: Nordhr. This image is a solid of revolution. # Basic Description When finding the volume of revolution of solids, in many cases the problem is not with the calculus, but with actually visualizing the solid. To find the volume of a solid like a cylinder, usually we use the formula ${\pi} {r^2} h$. Alternatively we can imagine chopping up the cylinder into thin cylindrical plates, much like slicing up bread, computing the volume of each thin slice, then summing up the volumes of all the slices. The disc method is much like slicing up bread and computing the volume of each slice http://mathdemos.gcsu.edu/mathdemos/sectionmethod/sectionmethod.html # A More Mathematical Explanation ## Disk Method In general, given a function, we can graph it, then revolve the area under the curve between two specific coordinates about a fixed axis to obtain a solid called the solid of revolution. The volume of the solid can then be computed using the disc method. Note: there are ways of computing the volumes of complicated solids other than the disc method. In the disc method, we imagine chopping up the solid into thin cylindrical plates, calculating the volume of each plate, then summing up the volumes of all plates. For example, let's consider a region bounded by $y=x^2$, $y=0$, $x=0$, and $x=1$. Plotting the graph of this area and revolving it about the x axis ($y=0$), we get the image below to the left. This image shows a plane area being revolved to create a solid http://curvebank.calstatela.edu/volrev/volrev.htm To find the volume of the solid using the disc method: Volume of one disc = ${\pi} y^2{\Delta x}$ where $y$ (the value of the function) is the radius of the circular cross-section and $\Delta x$ is the thickness of each disc. Using the analogy of the bread, computing the volume of one disc would correspond to computing the volume of one slice of bread. With this in mind, the area of one disc would correspond to the area of a slice of bread, while the thickness of a disc would correspond to the thickness of a slice of bread. To find the total volume of the bread, we would have to sum up the volumes of each of the slices. Volume of all discs: Volume of all discs = ${\sum}{\pi}y^2{\Delta x}$, with $x$ ranging from 0 to 1. If we make the slices infinitesimally thin, the Riemann sum becomes the same as: $\int_0^1 {\pi}y^2\,dx ={\pi}\int_0^1 (x^2)^2\, dx$ Evaluating this integral, ${\pi}\int_0^1 x^4\, dx = {\pi}\left[\frac{x^5}{5}\right]_0^1 = {\pi}\left[\frac{1}{5} - \frac{0}{5}\right] = \frac{\pi}{5}$. Volume of solid $= \frac{\pi}{5}$ units$^3$. In the example we discussed, the area is revolved about the $x$-axis. This does not always have to be the case. A function can be revolved about any fixed axis. Also, given a different function, to find the volume of revolution about the $x$-axis, we can substitute it in the place of $x^2$. Note: we would also need to change the bounds as per the given information. The method discussed in the example works for all functions that have bounds and are revolved about the $x$-axis. 
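The disc-method sum above is easy to reproduce numerically. A short Python sketch (my own illustration): it sums the volumes ${\pi}y^2{\Delta x}$ of many thin discs with $y = x^2$ and compares the total with $\pi/5$.

```python
import math

def disc_method_volume(f, a, b, n=100000):
    """Approximate the volume of revolution about the x-axis by summing
    the disc volumes pi * f(x)^2 * dx (a midpoint Riemann sum)."""
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 * dx for i in range(n))

print(disc_method_volume(lambda x: x**2, 0.0, 1.0))  # ~0.6283185...
print(math.pi / 5)                                   # 0.6283185...
```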
## References Bread image: http://mathdemos.gcsu.edu/mathdemos/sectionmethod/sectionmethod.html Revolving image: http://mathdemos.gcsu.edu/mathdemos/sectionmethod/sectionmethod.html # About the Creator of this Image Made in OpenGL.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8543380498886108, "perplexity_flag": "middle"}
http://www.newworldencyclopedia.org/entry/Special_relativity%2C_an_introduction
# Special relativity, an introduction This article is intended as a generally accessible introduction to the subject. Albert Einstein during a lecture in Vienna in 1921. Special relativity is a fundamental physics theory about space and time that was developed by Albert Einstein in 1905[1] as a modification of Newtonian physics. It was created to deal with some pressing theoretical and experimental issues in the physics of the time involving light and electrodynamics. The predictions of special relativity correspond closely to those of Newtonian physics at speeds which are low in comparison to that of light, but diverge rapidly for speeds which are a significant fraction of the speed of light. Special relativity has been experimentally tested on numerous occasions since its inception, and its predictions have been verified by those tests. Einstein postulated that the speed of light is the same for all observers, irrespective of their motion relative to the light source. This was in total contradiction to classical mechanics, which had been accepted for centuries. Einstein's approach was based on thought experiments and calculations. In 1908, Hermann Minkowski reformulated the theory based on different postulates of a more geometrical nature.[2] His approach depended on the existence of certain interrelations between space and time, which were considered completely separate in classical physics. This reformulation set the stage for further developments of physics. Special relativity makes numerous predictions that are incompatible with Newtonian physics (and everyday intuition). The first such prediction described by Einstein is called the relativity of simultaneity, under which observers who are in motion with respect to each other may disagree on whether two events occurred at the same time or one occurred before the other. The other major predictions of special relativity are time dilation (under which a moving clock ticks more slowly than when it is at rest with respect to the observer), length contraction (under which a moving rod may be found to be shorter than when it is at rest with respect to the observer), and the equivalence of mass and energy (written as E = mc²). Special relativity predicts a non-linear velocity addition formula, which prevents speeds greater than that of light from being observed. Special relativity also explains why Maxwell's equations of electromagnetism are correct in any frame of reference, and how an electric field and a magnetic field are two aspects of the same thing. Special relativity has received experimental support in many ways,[3][4] and it has been proven far more accurate than Newtonian mechanics. The most famous experimental support is the Michelson-Morley experiment, the results of which (showing that the speed of light is a constant) were one factor that motivated the formulation of the theory of special relativity. Other significant tests are the Fizeau experiment (which was first done decades before special relativity was proposed), the detection of the transverse Doppler effect, and the Hafele-Keating experiment. Today, scientists are so comfortable with the idea that the speed of light is always the same that the meter is now defined as being the distance traveled by light in 1/299,792,458th of a second. This means that the speed of light is now defined as being 299,792,458 m/s. 
## Reference frames and Galilean relativity: A classical prelude A reference frame is simply a selection of what constitutes stationary objects. Once the velocity of a certain object is arbitrarily defined to be zero, the velocity of everything else in the universe can be measured relative to it. When a train is moving at a constant velocity past a platform, one may either say that the platform is at rest and the train is moving or that the train is at rest and the platform is moving past it. These two descriptions correspond to two different reference frames. They are respectively called the rest frame of the platform and the rest frame of the train (sometimes simply the platform frame and the train frame). The question naturally arises, can different reference frames be physically differentiated? In other words, can one conduct some experiments to claim that "we are now in an absolutely stationary reference frame?" Aristotle thought that all objects tend to cease moving and come to rest if there are no forces acting on them. Galileo challenged this idea and argued that the concept of absolute motion was unreal. All motion was relative. An observer who couldn't refer to some isolated object (if, say, he was imprisoned inside a closed spaceship) could never distinguish whether according to some external observer he was at rest or moving with constant velocity. Any experiment he could conduct would give the same result in both cases. However, accelerated reference frames are experimentally distinguishable. For example, if an astronaut moving in free space saw that the tea in his tea-cup was slanted rather than horizontal, he would be able to infer that his spaceship was accelerated. Thus not all reference frames are equivalent, but there is a class of reference frames, all moving at uniform velocity with respect to each other, in all of which Newton's first law holds. These are called the inertial reference frames and are fundamental to both classical mechanics and special relativity (SR). Galilean relativity thus states that the laws of physics cannot depend on absolute velocity; they must stay the same in any inertial reference frame. Galilean relativity is thus a fundamental principle in classical physics. Mathematically, it says that if one transforms all velocities to a different reference frame, the laws of physics must be unchanged. What is this transformation that must be applied to the velocities? Galileo gave the common-sense "formula" for adding velocities: If 1. Particle P is moving at velocity v with respect to reference frame A and 2. Reference frame A is moving at velocity u with respect to reference frame B, then 3. The velocity of P with respect to B is given by v + u. The formula for transforming coordinates between different reference frames is called the Galilean transformation. The principle of Galilean relativity then demands that laws of physics be unchanged if the Galilean transformation is applied to them. Laws of classical mechanics, like Newton's second law, obey this principle because they have the same form after applying the transformation. As Newton's law involves the derivative of velocity, any constant velocity added in a Galilean transformation to a different reference frame contributes nothing (the derivative of a constant is zero). Addition of a time-varying velocity (corresponding to an accelerated reference frame) will however change the formula (see pseudo force), since Galilean relativity only applies to non-accelerated inertial reference frames.
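That last point can be checked symbolically. The sketch below is my own (assuming SymPy): under the Galilean change of frame $x' = x - ut$ with constant $u$, the acceleration, and hence Newton's second law, is unchanged, while a general time-dependent shift is not.

```python
import sympy as sp

t, u = sp.symbols('t u')      # u is a constant boost velocity
x = sp.Function('x')(t)       # trajectory of a particle in frame B

# Galilean transformation to frame A: x' = x - u t.
x_boosted = x - u * t
print(sp.simplify(sp.diff(x_boosted, t, 2) - sp.diff(x, t, 2)))  # 0: same acceleration

# A general time-dependent shift x' = x - d(t) changes the acceleration by
# -d''(t), which vanishes exactly when d(t) is linear in t (a Galilean boost).
d = sp.Function('d')(t)
print(sp.simplify(sp.diff(x - d, t, 2) - sp.diff(x, t, 2)))      # -Derivative(d(t), (t, 2))
```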
Time is the same in all reference frames because it is absolute in classical mechanics. All observers measure exactly the same intervals of time and there is such a thing as an absolutely correct clock. ## Invariance of length: The Euclidean picture Pythagoras' theorem The length of an object is constant on the plane during rotations on the plane but not during rotations out of the plane. In special relativity, space and time are joined into a unified four-dimensional continuum called spacetime. To gain a sense of what spacetime is like, we must first look at the Euclidean space of Newtonian physics. This approach to the theory of special relativity begins with the concept of "length." In everyday experience, it seems that the length of objects remains the same no matter how they are rotated or moved from place to place; as a result, the simple length of an object doesn't appear to change and is said to be "invariant." However, as is shown in the illustrations below, what is actually being suggested is that length seems to be invariant in a three-dimensional coordinate system. The length of a line in a two-dimensional Cartesian coordinate system is given by Pythagoras' theorem: $h^2 = x^2 + y^2. \,$ One of the basic theorems of vector algebra is that the length of a vector does not change when it is rotated. However, a closer inspection tells us that this is only true if we consider rotations confined to the plane. If we introduce rotation in the third dimension, then we can tilt the line out of the plane. In this case the projection of the line on the plane will get shorter. Does this mean length is not invariant? Obviously not. The world is three-dimensional and in a 3D Cartesian coordinate system the length is given by the three-dimensional version of Pythagoras' theorem: $k^2 = x^2 + y^2 + z^2. \,$ Invariance in a 3D coordinate system: Pythagoras' theorem gives $k^2 = h^2 + z^2$, but $h^2 = x^2 + y^2$, therefore $k^2 = x^2 + y^2 + z^2$. The length of an object is constant whether it is rotated or moved from one place to another in a 3D coordinate system. This is invariant under all rotations. The apparent violation of invariance of length only happened because we were "missing" a dimension. It seems that, provided all the directions in which an object can be tilted or arranged are represented within a coordinate system, the length of an object does not change under rotations. A 3-dimensional coordinate system is enough in classical mechanics because time is assumed absolute and independent of space in that context. It can be considered separately. Note that invariance of length is not ordinarily considered a dynamic principle, not even a theorem. It is simply a statement about the fundamental nature of space itself. Space as we ordinarily conceive it is called a three-dimensional Euclidean space, because its geometrical structure is described by the principles of Euclidean geometry. The formula for distance between two points is a fundamental property of a Euclidean space; it is called the Euclidean metric tensor (or simply the Euclidean metric). In general, distance formulas are called metric tensors. Note that rotations are fundamentally related to the concept of length. In fact, one may define length or distance to be that which stays the same (is invariant) under rotations, or define rotations to be those transformations that keep the length invariant. Given any one, it is possible to find the other. If we know the distance formula, we can find out the formula for transforming coordinates in a rotation.
If, on the other hand, we have the formula for rotations then we can find out the distance formula. ## The postulates of Special Relativity Einstein developed Special Relativity on the basis of two postulates: • First postulate—Special principle of relativity—The laws of physics are the same in all inertial frames of reference. In other words, there are no privileged inertial frames of reference. • Second postulate—Invariance of c—The speed of light in a vacuum is independent of the motion of the light source. Special Relativity can be derived from these postulates, as was done by Einstein in 1905. Einstein's postulates are still applicable in the modern theory but the origin of the postulates is more explicit. It was shown above how the existence of a universally constant velocity (the speed of light) is a consequence of modeling the universe as a particular four dimensional space having certain specific properties. The principle of relativity is a result of Minkowski structure being preserved under Lorentz transformations, which are postulated to be the physical transformations of inertial reference frames. ## The Minkowski formulation: Introduction of spacetime Main article: Spacetime Hermann Minkowski After Einstein derived special relativity formally from the counterintuitive proposition that the speed of light is the same to all observers, the need was felt for a more satisfactory formulation. Minkowski, building on mathematical approaches used in non-Euclidean geometry[5] and the mathematical work of Lorentz and Poincaré, realized that a geometric approach was the key. Minkowski showed in 1908 that Einstein's new theory could be explained in a natural way if the concept of separate space and time is replaced with one four-dimensional continuum called spacetime. This was a groundbreaking concept, and Roger Penrose has said that relativity was not truly complete until Minkowski reformulated Einstein's work. The concept of a four-dimensional space is hard to visualize. It may help at the beginning to think simply in terms of coordinates. In three-dimensional space, one needs three real numbers to refer to a point. In the Minkowski space, one needs four real numbers (three space coordinates and one time coordinate) to refer to a point at a particular instant of time. This point at a particular instant of time, specified by the four coordinates, is called an event. The distance between two different events is called the spacetime interval. A path through the four-dimensional spacetime, usually called Minkowski space, is called a world line. Since it specifies both position and time, a particle having a known world line has a completely determined trajectory and velocity. This is just like graphing the displacement of a particle moving in a straight line against the time elapsed. The curve contains the complete motional information of the particle. The spacetime interval. In the same way as the measurement of distance in 3D space needed all three coordinates we must include time as well as the three space coordinates when calculating the distance in Minkowski space (henceforth called M). In a sense, the spacetime interval provides a combined estimate of how far two events occur in space as well as the time that elapses between their occurrence. But there is a problem. Time is related to the space coordinates, but they are not equivalent. Pythagoras's theorem treats all coordinates on an equal footing (see Euclidean space for more details). 
We can exchange two space coordinates without changing the length, but we cannot simply exchange a space coordinate with time; they are fundamentally different. It is an entirely different thing for two events to be separated in space and to be separated in time. Minkowski proposed that the formula for distance needed a change. He found that the correct formula was actually quite simple, differing only by a sign from Pythagoras' theorem: $s^2 = x^2 + y^2 + z^2 - (ct)^2 \,$ where c is a constant and t is the time coordinate. Multiplication by c, which has the dimension $\mathrm{m\,s^{-1}}$, converts the time to units of length and this constant has the same value as the speed of light. So the spacetime interval between two distinct events is given by $s^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2 - c^2 (t_2 - t_1)^2. \,$ There are two major points to be noted. Firstly, time is being measured in the same units as length by multiplying it by a constant conversion factor. Secondly, and more importantly, the time-coordinate has a different sign than the space coordinates. This means that in the four-dimensional spacetime, one coordinate is different from the others and influences the distance differently. This new 'distance' may be zero or even negative. This new distance formula, called the metric of the spacetime, is at the heart of relativity. This distance formula is called the metric tensor of M. This minus sign means that a lot of our intuition about distances cannot be directly carried over into spacetime intervals. For example, the spacetime interval between two events separated both in time and space may be zero (see below). From now on, the terms distance formula and metric tensor will be used interchangeably, as will be the terms Minkowski metric and spacetime interval. In Minkowski spacetime the spacetime interval is the invariant length; the ordinary 3D length is not required to be invariant. The spacetime interval must stay the same under rotations, but ordinary lengths can change. Just like before, we were missing a dimension. Note that everything up to this point consists merely of definitions. We define a four-dimensional mathematical construct which has a special formula for distance, where distance means that which stays the same under rotations (alternatively, one may define a rotation to be that which keeps the distance unchanged). Now comes the physical part. Rotations in Minkowski space have a different interpretation than ordinary rotations. These rotations correspond to transformations of reference frames. Passing from one reference frame to another corresponds to rotating the Minkowski space. An intuitive justification for this is given below, but mathematically this is a dynamical postulate just like assuming that physical laws must stay the same under Galilean transformations (which seems so intuitive that we don't usually recognize it to be a postulate). Since by definition rotations must keep the distance the same, passing to a different reference frame must keep the spacetime interval between two events unchanged. This requirement can be used to derive an explicit mathematical form for the transformation that must be applied to the laws of physics (compare with the application of Galilean transformations to classical laws) when shifting reference frames. These transformations are called the Lorentz transformations. 
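As a concrete check that these "rotations" really do preserve the spacetime interval, here is a short numerical sketch (my own illustration). It uses the standard boost along $x$ with velocity $v$ and Lorentz factor $\gamma = 1/\sqrt{1-v^2/c^2}$, an explicit formula the text above does not spell out.

```python
import numpy as np

c = 299_792_458.0                         # speed of light, m/s
v = 0.6 * c                               # boost velocity along the x-axis
g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)     # Lorentz factor gamma

def interval2(t, x, y, z):
    """Spacetime interval s^2 = x^2 + y^2 + z^2 - (ct)^2."""
    return x**2 + y**2 + z**2 - (c * t) ** 2

def boost_x(t, x, y, z):
    """Standard Lorentz boost along x with velocity v."""
    return g * (t - v * x / c**2), g * (x - v * t), y, z

event = (1.0e-6, 250.0, 40.0, -3.0)       # (t, x, y, z) in seconds and metres
print(interval2(*event))                  # same value before ...
print(interval2(*boost_x(*event)))        # ... and after the boost
```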
Just like the Galilean transformations are the mathematical statement of the principle of Galilean relativity in classical mechanics, the Lorentz transformations are the mathematical form of Einstein's principle of relativity. Laws of physics must stay the same under Lorentz transformations. Maxwell's equations and Dirac's equation satisfy this property, and hence, they are relativistically correct laws (but classically incorrect, since they don't transform correctly under Galilean transformations). With the statement of the Minkowski metric, the common name for the distance formula given above, the theoretical foundation of special relativity is complete. The entire basis for special relativity can be summed up by the geometric statement "changes of reference frame correspond to rotations in the 4D Minkowski spacetime, which is defined to have the distance formula given above." The unique dynamical predictions of SR stem from this geometrical property of spacetime. Special relativity may be said to be the physics of Minkowski spacetime.[6][7][8][9] In this case of spacetime, there are six independent rotations to be considered. Three of them are the standard rotations on a plane in two directions of space. The other three are rotations in a plane of both space and time: These rotations correspond to a change of velocity, and are described by the traditional Lorentz transformations. As has been mentioned before, one can replace distance formulas with rotation formulas. Instead of starting with the invariance of the Minkowski metric as the fundamental property of spacetime, one may state (as was done in classical physics with Galilean relativity) the mathematical form of the Lorentz transformations and require that physical laws be invariant under these transformations. This makes no reference to the geometry of spacetime, but will produce the same result. This was in fact the traditional approach to SR, used originally by Einstein himself. However, this approach is often considered to offer less insight and be more cumbersome than the more natural Minkowski formalism. ## Reference frames and Lorentz transformations: Relativity revisited We have already discussed that in classical mechanics coordinate frame changes correspond to Galilean transfomations of the coordinates. Is this adequate in the relativistic Minkowski picture? Suppose there are two people, Bill and John, on separate planets that are moving away from each other. Bill and John are on separate planets so they both think that they are stationary. John draws a graph of Bill's motion through space and time and this is shown in the illustration below: John's view of Bill and Bill's view of himself John sees that Bill is moving through space as well as time but Bill thinks he is moving through time alone. Bill would draw the same conclusion about John's motion. In fact, these two views, which would be classically considered a difference in reference frames, are related simply by a coordinate transformation in M. Bill's view of his own world line and John's view of Bill's world line are related to each other simply by a rotation of coordinates. One can be transformed into the other by a rotation of the time axis. Minkowski geometry handles transformations of reference frames in a very natural way. Changes in reference frame, represented by velocity transformations in classical mechanics, are represented by rotations in Minkowski space. These rotations are called Lorentz transformations. 
They are different from the Galilean transformations because of the unique form of the Minkowski metric. The Lorentz transformations are the relativistic equivalent of Galilean transformations. Laws of physics, in order to be relativistically correct, must stay the same under Lorentz transformations. The physical statement that they must be same in all inertial reference frames remains unchanged, but the mathematical transformation between different reference frames changes. Newton's laws of motion are invariant under Galilean rather than Lorentz transformations, so they are immediately recognizable as non-relativistic laws and must be discarded in relativistic physics. Schrödinger's equation is also non-relativistic. Maxwell's equations are trickier. They are written using vectors and at first glance appear to transform correctly under Galilean transformations. But on closer inspection, several questions are apparent that can not be satisfactorily resolved within classical mechanics (see History of special relativity). They are indeed invariant under Lorentz transformations and are relativistic, even though they were formulated before the discovery of special relativity. Classical electrodynamics can be said to be the first relativistic theory in physics. To make the relativistic character of equations apparent, they are written using 4-component vector like quantities called 4-vectors. 4-Vectors transform correctly under Lorentz transformations. Equations written using 4-vectors are automatically relativistic. This is called the manifestly covariant form of equations. 4-Vectors form a very important part of the formalism of special relativity. ## Einstein's postulate: The constancy of the speed of light Einstein's postulate that the speed of light is a constant comes out as a natural consequence of the Minkowski formulation[6] Proposition 1: When an object is traveling at c in a certain reference frame, the spacetime interval is zero. Proof: The spacetime interval between the origin-event (0,0,0,0) and an event (x, y,z, t) is $s^2 = x^2 + y^2 + z^2 - (ct)^2 .\,$ The distance travelled by an object moving at velocity v for t seconds is: $\sqrt{x^2 + y^2 + z^2} = vt \,$ giving $s^2 = (vt)^2 - (ct)^2 .\,$ Since the velocity v equals c we have $s^2 = (ct)^2 - (ct)^2 .\,$ Hence the spacetime interval between the events of departure and arrival is given by $s^2 = 0 \,$ Proposition 2: An object traveling at c in one reference frame is traveling at c in all reference frames. Proof: Let the object move with velocity v when observed from a different reference frame. A change in reference frame corresponds to a rotation in M. Since the spacetime interval must be conserved under rotation, the spacetime interval must be the same in all reference frames. In proposition 1 we showed it to be zero in one reference frame, hence it must be zero in all other reference frames. We get that $(vt)^2 - (ct)^2 = 0 \,$ which implies $|v| = c .\,$ The paths of light rays have a zero spacetime interval, and hence all observers will obtain the same value for the speed of light. Therefore, when assuming that the universe has four dimensions that are related by Minkowski's formula, the speed of light appears as a constant, and does not need to be assumed (postulated) to be constant as in Einstein's original approach to special relativity. 
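Both propositions can also be checked numerically. The sketch below is an illustration only: it applies the standard one-dimensional Lorentz boost (written in units where c = 1; its explicit form is quoted in the next section) to an arbitrary event and to an event on a light ray, and verifies that the interval is unchanged and that the light ray still moves at c in the new frame.

```python
import math

C = 1.0  # work in units where c = 1, so velocities are fractions of c

def boost(t, x, v):
    """One-dimensional Lorentz boost by velocity v (|v| < 1 in units of c)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval_squared(t, x):
    return x * x - (C * t) * (C * t)

v_frame = 0.6  # relative velocity of the new reference frame

# An arbitrary event: the interval from the origin is frame independent.
t, x = 2.0, 1.0
t2, x2 = boost(t, x, v_frame)
print(interval_squared(t, x), interval_squared(t2, x2))  # both -3.0

# An event on a light ray (x = c t): zero interval, and the transformed
# event still satisfies x' = c t', i.e. the speed is still c (proposition 2).
t, x = 5.0, 5.0
t2, x2 = boost(t, x, v_frame)
print(interval_squared(t2, x2))   # 0.0 (up to rounding)
print(abs(x2 / t2))               # 1.0, i.e. |v| = c
```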
## Clock delays and rod contractions: More on Lorentz transformations Another consequence of the invariance of the spacetime interval is that clocks will appear to go slower on objects that are moving relative to you. This is very similar to how the 2D projection of a line rotated into the third-dimension appears to get shorter. Length is not conserved simply because we are ignoring one of the dimensions. Let us return to the example of John and Bill. John observes the length of Bill's spacetime interval as: $s^2 = (vt)^2 - (ct)^2 \,$ whereas Bill doesn't think he has traveled in space, so writes: $s^2 = (0)^2 - (cT)^2 \,$ The spacetime interval, s2, is invariant. It has the same value for all observers, no matter who measures it or how they are moving in a straight line. This means that Bill's spacetime interval equals John's observation of Bill's spacetime interval so: $(0)^2 - (cT)^2 = (vt)^2 - (ct)^2 \,$ and $-(cT)^2 = (vt)^2 - (ct)^2 \,$ hence $t = \frac{T}{\sqrt{1 - \frac{v^2}{c^2}}} \,$. So, if John sees a clock that is at rest in Bill's frame record one second, John will find that his own clock measures between these same ticks an interval t, called coordinate time, which is greater than one second. It is said that clocks in motion slow down, relative to those on observers at rest. This is known as "relativistic time dilation of a moving clock." The time that is measured in the rest frame of the clock (in Bill's frame) is called the proper time of the clock. In special relativity, therefore, changes in reference frame affect time also. Time is no longer absolute. There is no universally correct clock, time runs at different rates for different observers. Similarly it can be shown that John will also observe measuring rods at rest on Bill's planet to be shorter in the direction of motion than his own measuring rods. This is a prediction known as "relativistic length contraction of a moving rod." If the length of a rod at rest on Bill's planet is X, then we call this quantity the proper length of the rod. The length x of that same rod as measured on John's planet, is called coordinate length, and given by $x = X \sqrt{1 - \frac{v^2}{c^2}} \,$. How Bill's coordinates appear to John at the instant they pass each other These two equations can be combined to obtain the general form of the Lorentz transformation in one spatial dimension: $\begin{align}T &= \gamma \left( t - \frac{v x}{c^{2}} \right) \\ X &= \gamma \left( x - v t \right)\\\end{align}$ or equivalently: $\begin{align}t &= \gamma \left( T + \frac{v X}{c^{2}} \right) \\ x &= \gamma \left( X + v T \right)\\\end{align}$ where the Lorentz factor is given by $\gamma = { 1 \over \sqrt{1 - v^2/c^2} }$ The above formulas for clock delays and length contractions are special cases of the general transformation. Alternatively, these equations for time dilation and length contraction (here obtained from the invariance of the spacetime interval), can be obtained directly from the Lorentz transformation by setting X = 0 for time dilation, meaning that the clock is at rest in Bill's frame, or by setting t = 0 for length contraction, meaning that John must measure the distances to the end points of the moving rod at the same time. A consequence of the Lorentz transformations is the modified velocity-addition formula: $s = {v+u \over 1+(v/c)(u/c)}.$ ## Simultaneity and clock desynchronization Rather counter-intuitively, special relativity suggests that when 'at rest' we are actually moving through time at the speed of light. 
As we speed up in space we slow down in time. At the speed of light in space, time slows down to zero. This is a rotation of the time axis into the space axis. We observe an object speeding by at relativistic speed as having its time axis tilted away from a right angle to ours.

The consequence of this in Minkowski's spacetime is that clocks will appear to be out of phase with each other along the length of a moving object. This means that if one observer sets up a line of clocks that are all synchronized so they all read the same time, then another observer who is moving along the line at high speed will see the clocks all reading different times. This means that observers who are moving relative to each other see different events as simultaneous. This effect is known as "Relativistic Phase" or the "Relativity of Simultaneity." Relativistic phase is often overlooked by students of special relativity, but if it is understood, then phenomena such as the twin paradox are easier to understand.

The "plane of simultaneity" or "surface of simultaneity" contains all those events that happen at the same instant for a given observer. Events that are simultaneous for one observer are not simultaneous for another observer in relative motion. Observers have a set of simultaneous events around them that they regard as composing the present instant. The relativity of simultaneity results in observers who are moving relative to each other having different sets of events in their present instant.

The net effect of the four-dimensional universe is that observers who are in motion relative to you seem to have time coordinates that lean over in the direction of motion, and consider things to be simultaneous that are not simultaneous for you. Spatial lengths in the direction of travel are shortened, because they tip upwards and downwards relative to the time axis in the direction of travel, akin to a rotation out of three-dimensional space. Great care is needed when interpreting spacetime diagrams. Diagrams present data in two dimensions, and cannot show faithfully how, for instance, a zero-length spacetime interval appears.

## Mass-velocity relationship

The equation E = mc², where m stands for the rest mass (invariant mass), applies most simply to single particles with no net momentum. But it also applies to ordinary objects composed of many particles, so long as the particles are moving in different directions such that the total momentum is zero. The mass of the object then includes contributions from heat and sound, chemical binding energies, and trapped radiation. Familiar examples are a tank of gas or a hot bowl of soup. The kinetic energy of their particles, the heat motion, and the radiation all contribute to their weight on a scale according to E = mc².

The formula is a special case of the relativistic energy-momentum relationship:

$\, E^2 - (pc)^2 = (m c^2)^2$

This equation gives the rest mass of a system which has an arbitrary amount of momentum and energy. The interpretation of this equation is that the rest mass is the relativistic length of the energy-momentum four-vector.

If the equation E = mc² is used with the rest mass of the object, the E given by the equation will be the rest energy of the object; it will change according to the object's internal energy (heat, sound, and chemical binding energies), but it will not change with the object's overall motion.
If the equation E = mc² is used with the relativistic mass of the object, the energy will be the total energy of the object, which is conserved in collisions with other fast moving objects.

In developing special relativity, Einstein found that the total energy of a moving body is

$E = \frac{m_0 c^2}{\sqrt{1-\frac{v^2}{c^2}}},$

with v the velocity. For small velocities, this reduces to

$E = m_0 c^2 + \frac{1}{2}m_0 v^2 + \ldots,$

which includes the Newtonian kinetic energy, as expected, but also an enormous constant term that does not vanish when the object isn't moving. The total momentum is:

$P = \frac{m_0 v}{\sqrt{1-\frac{v^2}{c^2}}}.$

The ratio of the momentum to the velocity is the relativistic mass, and this ratio is equal to the total energy divided by $c^2$. The energy and relativistic mass are always related by the famous formula. While this is suggestive, it does not immediately imply that the energy and mass are equivalent, because the energy can always be redefined by adding or subtracting a constant. So it is possible to subtract the $m_0 c^2$ from the expression for E, and this is also a valid conserved quantity, although an ugly one. Einstein needed to know whether the rest mass of the object is really an energy, or whether the constant term was just a mathematical convenience with no physical meaning. In order to see if $m_0 c^2$ is physically significant, Einstein considered processes of emission and absorption. He needed to establish that an object loses mass when it emits energy. He did this by analyzing two-photon emission in two different frames.

After Einstein first made his proposal, it became clear that the word mass can have two different meanings. The rest mass is what Einstein called m, but others defined the relativistic mass as:

$m_{\mathrm{rel}} = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}} .$

This mass is the ratio of momentum to velocity, and it is also the relativistic energy divided by $c^2$. So the equation $E = m_{\mathrm{rel}}c^2$ holds for moving objects. When the velocity is small, the relativistic mass and the rest mass are almost exactly the same. E = mc² either means $E = m_0 c^2$ for an object at rest, or $E = m_{\mathrm{rel}}c^2$ when the object is moving.

Einstein's original papers[10] treated m as what would now be called the rest mass, and some claim that he did not like the idea of "relativistic mass."[11] When modern physicists say "mass," they are usually talking about rest mass, since if they meant "relativistic mass," they would just say "energy."

We can rewrite the expression for the energy as a Taylor series:

$E = m_0 c^2 \left[1 + \frac{1}{2} \left(\frac{v}{c}\right)^2 + \frac{3}{8} \left(\frac{v}{c}\right)^4 + \frac{5}{16} \left(\frac{v}{c}\right)^6 + \ldots \right].$

For speeds much smaller than the speed of light, the higher-order terms in this expression get smaller and smaller because $v/c$ is small. For low speeds we can ignore all but the first two terms:

$E \approx m_0 c^2 + \frac{1}{2} m_0 v^2 .$

The total energy is a sum of the rest energy and the Newtonian kinetic energy. The classical energy equation ignores both the $m_0 c^2$ part and the high-speed corrections. This is appropriate, because all the higher-order corrections are small. Since only changes in energy affect the behavior of objects, whether we include the $m_0 c^2$ part makes no difference, since it is constant. For the same reason, it is possible to subtract the rest energy from the total energy in relativity.
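To see how good the two-term approximation is, one can compare it with the exact expression numerically. The following Python sketch is an illustration only (the test speeds and the 1 kg mass are arbitrary choices for the example).

```python
import math

C = 299_792_458.0  # speed of light in m/s

def total_energy(m0, v):
    """Exact relativistic total energy E = m0 c^2 / sqrt(1 - v^2/c^2)."""
    return m0 * C**2 / math.sqrt(1.0 - (v / C) ** 2)

def two_term_approx(m0, v):
    """Rest energy plus Newtonian kinetic energy."""
    return m0 * C**2 + 0.5 * m0 * v**2

m0 = 1.0  # kg
for v in (30.0, 3.0e5, 0.5 * C, 0.9 * C):
    exact = total_energy(m0, v)
    approx = two_term_approx(m0, v)
    rel_err = (exact - approx) / exact
    print(f"v = {v:12.3e} m/s   relative error of the approximation: {rel_err:.2e}")
```

At everyday speeds (the 30 m/s case) the difference is so far below double precision that it prints as essentially zero; only at an appreciable fraction of c do the higher-order corrections become visible, in line with the series above.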
In order to see if the rest energy has any physical meaning, it is essential to consider emission and absorption of energy in different frames.

The higher-order terms are extra corrections to Newtonian mechanics which become important at higher speeds. The Newtonian equation is only a low-speed approximation, but an extraordinarily good one. All of the calculations used in putting astronauts on the moon, for example, could have been done using Newton's equations without any of the higher-order corrections.

## Mass-energy equivalence: Sunlight and atom bombs

Einstein showed that mass is simply another form of energy. The energy equivalent of rest mass m is E = mc². This equivalence implies that mass should be interconvertible with other forms of energy. This is the basic principle behind atom bombs and the production of energy in nuclear reactors and stars (like the Sun).

The standard model of the structure of matter has it that most of the 'mass' of the atom is in the atomic nucleus, and that most of this nuclear mass is in the intense field of light-like gluons swathing the quarks. Most of what is called the mass of an object is thus already in the form of energy, the energy of the quantum color field that confines the quarks.

The sun, for instance, fuels its prodigious output of energy by converting each second 600 billion kilograms of hydrogen-1 (single protons) into 595.2 billion kilograms of helium-4 (2 protons combined with 2 neutrons)—the 4.2 billion kilogram difference corresponds to the energy which the sun radiates into space each second. The sun, it is estimated, will continue to turn 4.2 billion kilograms of mass into energy each second for the next 5 billion years or so before leaving the main sequence. The atomic bombs that ended the Second World War, in comparison, converted about a thirtieth of an ounce of mass into energy. The energy involved in chemical reactions is so small, however, that the conservation of mass is an excellent approximation.

## General relativity: A peek forward

Unlike Newton's laws of motion, relativity is not based upon dynamical postulates. It does not assume anything about motion or forces. Rather, it deals with the fundamental nature of spacetime. It is concerned with describing the geometry of the backdrop on which all dynamical phenomena take place. In a sense, therefore, it is a meta-theory, a theory that lays out a structure that all other theories must follow.

In truth, special relativity is only a special case. It assumes that spacetime is flat. That is, it assumes that the structure of Minkowski space and the Minkowski metric tensor are constant throughout. In general relativity, Einstein showed that this is not true. The structure of spacetime is modified by the presence of matter. Specifically, the distance formula given above is no longer generally valid except in space free from mass. However, just as a curved surface can be considered flat in the infinitesimal limit of calculus, a curved spacetime can be considered flat at a small scale. This means that the Minkowski metric written in the differential form is generally valid.

$ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2 \,$

One says that the Minkowski metric is valid locally, but it fails to give a measure of distance over extended distances. It is not valid globally. In fact, in general relativity the global metric itself becomes dependent on the mass distribution and varies through space.
The central problem of general relativity is to solve the famous Einstein field equations for a given mass distribution and find the distance formula that applies in that particular case. Minkowski's spacetime formulation was the conceptual stepping stone to general relativity. His fundamentally new outlook allowed not only the development of general relativity, but also, to some extent, that of quantum field theories.

## See also

• Albert Einstein
• Henri Poincaré
• General relativity, an introduction
• Light
• Spacetime

## Notes

1. ↑ Einstein, Albert, On the Electrodynamics of Moving Bodies, Annalen der Physik 17: 891-921. Retrieved December 18, 2007.
2. ↑ Hermann Minkowski, Raum und Zeit, 80. Versammlung Deutscher Naturforscher, Physikalische Zeitschrift 10: 104-111.
3. ↑ UCR, What is the experimental basis of Special Relativity? Retrieved December 22, 2007.
4. ↑ Core Power, What is the experimental basis of the Special Relativity Theory? Retrieved December 22, 2007.
5. ↑ S. Walter, "The non-Euclidean style of Minkowskian relativity," in J. Gray (ed.), The Symbolic Universe (Oxford, UK: Oxford University Press, 1999, ISBN 0198500882).
6. ↑ 6.0 6.1 Albert Einstein, R.W. Lawson (trans.), Relativity: The Special and General Theory (London, UK: Routledge Classics, 2003).
7. ↑ Richard Feynman, Six Not-So-Easy Pieces (Reading, MA: Addison-Wesley, ISBN 0201150255).
8. ↑ Hermann Weyl, Space, Time, Matter (New York, NY: Dover Books, 1952).
9. ↑ Kip Thorne and Roger Blandford, Caltech physics notes, Caltech. Retrieved December 18, 2007.
10. ↑ Fourmilab, Special Relativity. Retrieved December 19, 2007.
11. ↑ UCR, Usenet physics FAQ. Retrieved December 19, 2007.

## References

• Bais, Sander. 2007. Very Special Relativity: An Illustrated Guide. Cambridge, MA: Harvard University Press. ISBN 067402611X.
• Robinson, F.N.H. 1996. An Introduction to Special Relativity and Its Applications. River Edge, NJ: World Scientific Publishing Company. ISBN 9810224990.
• Stephani, Hans. 2004. Relativity: An Introduction to Special and General Relativity. Cambridge, UK: Cambridge University Press. ISBN 0521010691.
http://simple.wikipedia.org/wiki/Math
Mathematics

The Rhind Papyrus is the source of most of modern knowledge about mathematics in Ancient Egypt.

Mathematics (sometimes shortened to maths in the United Kingdom, Australia, New Zealand and other Commonwealth countries, or math in the United States and Canada) is the study of numbers, shapes and patterns. Mathematicians are people who learn about and discover such things in mathematics. Mathematics is useful for solving problems that occur in the real world, so many people besides mathematicians study and use mathematics. Today, mathematics is needed in many jobs. People working in business, science, engineering, and construction need some knowledge of mathematics.

Mathematicians solve problems by using logic. Mathematicians often use deduction. Deduction is a special way of thinking to discover and prove new truths using old truths. To a mathematician, the reason something is true is just as important as the fact that it is true. Using deduction is what makes mathematical thinking different from other kinds of thinking.

About

Mathematics includes the study of:

• Numbers (for example, 3 + 6 = 9)
• Structure: how things are organized.
• Place: where things are and their arrangement.
• Change: how things become different over time.

Mathematics uses logic, paper, and calculators. Mathematicians use these things to solve problems and to create general rules, which are an important part of mathematics. These rules leave out information that is not important so that a single rule can cover many situations. By finding general rules, mathematics solves many problems at the same time.

A proof gives a reason why a rule in mathematics is correct. This is done by using certain other rules that everyone agrees are correct, which are called axioms. A rule that has a proof is sometimes called a theorem. Experts in mathematics perform research to create new theorems. Sometimes experts find an idea that they think is a theorem but can not find a proof for it. That idea is called a conjecture until they find a proof.

Sometimes, mathematics finds and studies rules or ideas in the real world that we don't understand yet. Often in mathematics, ideas and rules are chosen because they are considered simple or beautiful. On the other hand, sometimes these ideas and rules are found in the real world after they are studied in mathematics; this has happened many times in the past. In general, studying the rules and ideas of mathematics can help us understand the world better.

Number

Mathematics includes the study of number, or quantity.

| Natural numbers | Integers | Rational numbers | Real numbers | Complex numbers |
|---|---|---|---|---|
| $1, 2, 3, \ldots$ | $\ldots, -1, 0, 1, \ldots$ | $\frac{1}{2}, \frac{2}{3}, 0.125,\ldots$ | $\pi, e, \sqrt{2},\ldots$ | $1+i, 2e^{i\pi/3},\ldots$ |

| Ordinal numbers | Cardinal numbers | Arithmetic operations | Arithmetic relations | Functions |
|---|---|---|---|---|
| $0, 1, \ldots, \omega, \omega + 1, \ldots, 2\omega, \ldots$ | $\aleph_0, \aleph_1, \ldots$ | $+,-,\times,\div$ | $>,\ge, =, \le, <$ | $f(x) = \sqrt x$ |

Structure

Some areas of mathematics study the structure that an object has.

Shape

Some areas of mathematics study the shapes of things.
Change

Some areas of mathematics study the way things change.

Applied mathematics

Applied mathematics uses mathematics to solve problems of other areas such as engineering, physics, and computing.

Numerical analysis – Optimization – Probability theory – Statistics – Mathematical finance – Game theory – Mathematical physics – Fluid dynamics – Computational algorithms

Famous theorems

These theorems have interested mathematicians and people who are not mathematicians.

Pythagorean theorem – Fermat's last theorem – Goldbach's conjecture – Twin prime conjecture – Gödel's incompleteness theorems – Poincaré conjecture – Cantor's diagonal argument – Four color theorem – Zorn's lemma – Euler's identity – Church-Turing thesis

These are theorems and conjectures that have greatly changed mathematics.

Riemann hypothesis – Continuum hypothesis – P versus NP – Pythagorean theorem – Central limit theorem – Fundamental theorem of calculus – Fundamental theorem of algebra – Fundamental theorem of arithmetic – Fundamental theorem of projective geometry – Classification theorems of surfaces – Gauss-Bonnet theorem – Fermat's last theorem

Foundations and methods

Progress in understanding the nature of mathematics also influences the way mathematicians study their subject.

Philosophy of mathematics – Mathematical intuitionism – Mathematical constructivism – Foundations of mathematics – Set theory – Symbolic logic – Model theory – Category theory – Logic – Reverse mathematics – Table of mathematical symbols

History and the world of mathematicians

Mathematics in history, and the history of mathematics.

History of mathematics – Timeline of mathematics – Mathematicians – Fields Medal – Abel Prize – Millennium Prize Problems (Clay Math Prize) – International Mathematical Union – Mathematics competitions – Lateral thinking – Maths and gender

Name

The word "mathematics" comes from the Greek word "μάθημα" (máthema), which means "science, knowledge, or learning". Often, the word "mathematics" is made shorter into maths (in British English) or math (in American English). The short words math or maths are often used for arithmetic, geometry or simple algebra by young students and their schools.

Awards in mathematics

There is no Nobel prize in mathematics. Mathematicians can receive the Abel Prize and the Fields Medal for important works. The Clay Mathematics Institute has said it will give one million dollars to anyone who solves one of the Millennium Prize Problems.

Mathematical tools

Tools are used to do mathematics or to find answers to mathematics problems.
http://mathhelpforum.com/calculus/118993-implicit-differentiation.html
# Thread:

1. ## Implicit differentiation

Given x^2 + y^2 = e^(xy). Tangent lines: find y' at the point (1,0), then find the equation of the line tangent to the curve at (1, 0). thx

2. Originally Posted by fymp
Given x^2 + y^2 = e^(xy). Tangent lines: find y' at the point (1,0), then find the equation of the line tangent to the curve at (1, 0). thx

You should show that you've attempted the problem. We aren't supposed to just give you answers. Do you know how to differentiate the function? Use implicit differentiation and the chain rule. $2x+2yy'=e^{xy}(y+xy')$ Just solve for $y'$ and evaluate the derivative at the point you were given by substituting the values of x and y. From this point, you should be able to find the tangent line.
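If you want to check your algebra afterwards, here is a small optional sketch using the sympy library (assuming it is installed; it is not part of the thread). It applies the implicit function theorem, dy/dx = -F_x / F_y, to the same curve and evaluates at (1, 0), so the printed slope should agree with what you get by solving the displayed equation by hand.

```python
# Optional check with sympy: write the curve as F(x, y) = 0 and use
# dy/dx = -F_x / F_y (implicit function theorem), then evaluate at (1, 0).
import sympy as sp

x, y = sp.symbols("x y")
F = x**2 + y**2 - sp.exp(x * y)      # the curve, written as F(x, y) = 0

dydx = -sp.diff(F, x) / sp.diff(F, y)
slope = sp.simplify(dydx.subs({x: 1, y: 0}))
print("y'(1, 0) =", slope)

# Tangent line through (1, 0) with that slope:
print("tangent: y =", sp.expand(slope * (x - 1)))
```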
http://math.stackexchange.com/questions/287425/ell-infty-and-ell1
# $\ell^\infty$ and $\ell^1$

Show that $\ell^\infty$ and $\ell^1$ are normed linear spaces.

Solution: $\ell^p$ is the collection of real sequences $a=(a_1,a_2, ... )$ for which $\sum_{k=1}^{\infty} |a_k|^p < \infty$, and in $\ell^\infty$ the norm is $\sup_{1 \leq k < \infty} |a_k|$. Does this merely require checking the properties of a norm (triangle inequality, positive homogeneity, non-negativity)?

- excuse the mistake in the title. – Jake Casey Jan 26 at 16:56
- you can always edit your answer, as soon as you discover mistakes, see the edit option below your question. – clark Jan 26 at 17:00
- You may also have to verify that they are linear spaces. – David Mitra Jan 26 at 17:02
- 1 Or, show that the space $F$ of all infinite sequences is a linear space (hopefully you have this result in hand), and that $\ell_1$ and $\ell_\infty$ are subspaces of $F$ (non-empty and closed under addition and scalar multiplication). – David Mitra Jan 26 at 17:17
- 1 to be precise you need to verify that they are linear spaces, scalar multiplication and addition, and that the map to $\mathbb{R}$ is indeed a norm with the properties you just stated. – user25470 Jan 26 at 17:18

## 1 Answer

Recall first that the set of all real sequences $\mathbb{R}^{\mathbb{N}^*}=\{a=(a_n)_{n\geq 1}\;|\;a_n\in\mathbb{R}\}$ is a real vector space. Now denote $\|a\|_\infty:=\sup_{n\geq 1}|a_n|$ and $\|a\|_p:=\left(\sum_{n\geq 1}|a_n|^p\right)^{1/p}$ for $p\geq 1$. For every $a\in\mathbb{R}^{\mathbb{N}^*}$, it is clear that both $\|a\|_\infty$ and $\|a\|_p$ are nonnegative or infinite. Next define $\ell_\infty:=\{a\in \mathbb{R}^{\mathbb{N}^*}\;|\;\|a\|_\infty<\infty\}$ and $\ell_p:=\{a\in \mathbb{R}^{\mathbb{N}^*}\;|\;\|a\|_p<\infty\}$. Then note that the null sequence $(0,0,0,\ldots)$ belongs to both $\ell_\infty$ and $\ell_p$, so that they are nonempty subsets of $\mathbb{R}^{\mathbb{N}^*}$.

Now, indeed, it only remains to prove that $\|a\|_\infty$ and $\|a\|_p$ satisfy the three axioms of a norm (http://en.wikipedia.org/wiki/Norm_(mathematics)): 1) positive homogeneity, 2) triangle inequality, 3) separation. It will follow from 1) and 2) that both $\ell_\infty$ and $\ell_p$ are stable under linear combinations, hence vector (linear) subspaces of $\mathbb{R}^{\mathbb{N}^*}$. This will show that they are normed vector (linear) spaces.

1) is easy for both $\ell_\infty$ and $\ell_p$. 2) is easy for $\ell_\infty$ and for $\ell_p$, it follows from the Minkowski inequality: http://en.wikipedia.org/wiki/Minkowski_inequality 3) is easy for both $\ell_\infty$ and $\ell_p$.

I hope this helps. -
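Once the axioms are verified on paper, it can be reassuring to see them hold numerically. The sketch below is only a sanity check, not a proof: it tests positive homogeneity and the triangle inequality for the $\ell^1$ and $\ell^\infty$ norms on randomly generated finite truncations of sequences.

```python
import random

def norm_1(a):
    return sum(abs(x) for x in a)

def norm_inf(a):
    return max(abs(x) for x in a)

random.seed(0)
N = 1000  # length of the truncated sequences

for trial in range(100):
    a = [random.uniform(-10, 10) for _ in range(N)]
    b = [random.uniform(-10, 10) for _ in range(N)]
    lam = random.uniform(-5, 5)
    s = [x + y for x, y in zip(a, b)]
    la = [lam * x for x in a]
    for norm in (norm_1, norm_inf):
        # positive homogeneity: ||lam * a|| = |lam| * ||a|| (up to rounding)
        assert abs(norm(la) - abs(lam) * norm(a)) < 1e-9 * (1 + norm(a))
        # triangle inequality: ||a + b|| <= ||a|| + ||b||
        assert norm(s) <= norm(a) + norm(b) + 1e-9

print("homogeneity and triangle inequality held on all random trials")
```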
http://mathoverflow.net/questions/90782/soft-algebraic-groups-question
## Soft(?) algebraic groups question ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Suppose $G$ is a linear algebraic group over $\mathbb{C}$, defined over $\mathbb{Z}$ (for example, $SL(n, \mathbb{C})$ is defined by $\det x = 1,$ which visibly has integer coefficients). Let $H$ be an algebraic subgroup of $G.$ Is it always true that some conjugate of $H$ is defined over $\mathbb{Z}?$ This sounds like it should be (if true) be a totally soft fact, but what do I know... - Is a finite cyclic subgroup of $\text{SL}_2(\mathbb{C})$ defined over $\mathbb{Z}$? – Qiaochu Yuan Mar 10 2012 at 2:44 @Qiaochu, sure. – Mariano Suárez-Alvarez Mar 10 2012 at 3:17 3 I think there exist continuous families of nilpotent complex Lie algebras of dimension $7$. The general such Lie algebra would give rise to a unipotent algebraic group which cannot be defined over $\mathbb{Z}$. If you embed such a group in $SL_n$ for some $n$ you would get an algebraic subgroup $H$ such that no conjugate is defined over $\mathbb{Z}$. – ulrich Mar 10 2012 at 6:29 2 Igor, take $G={\mathbb C}^2$ with obvious integral structure and the subgroup given by equation $z=\sqrt{2}w$. Since $G$ is an abelian group, no conjugation will help you. Thus, you probably want to assume that $G$ is semisimple and $H$ is too. Then the answer could be positive. – Misha Mar 10 2012 at 11:20 1 It is not particularly easy but could be found for instance in Steinberg's "Lectures on Chevalley Groups" math.ucla.edu/~rst (published in Russian but not in English, I think). One can trace this result to rationality of characters of algebraic tori. – Misha Mar 10 2012 at 17:25 show 7 more comments ## 1 Answer As some of the comments indicate, arbitrary Zariski-closed subgroups of for instance `$SL(n, \mathbb{C})$` can be defined by all kinds of polynomial equations over `$\mathbb{C}$`. This makes it very unlikely that much can be said in general about integral forms of conjugate subgroups. The whole subject of `$\mathbb{Z}$`-forms for linear algebraic groups strikes me as quite delicate. In the case of simple (or more generally reductive) groups the best insight originates with Chevalley, though it has taken a long time to reach a clear definitive treatment: this is explained by Lusztig in a paper published in JAMS 22 (2009), with an arXiv version available here. (As noted in some of the comments, there is relevant background in Steinberg's lectures on Chevalley groups from 1967-68, sold for many years in mimeographed format by the Yale math department but never formally typeset and "published" in English in spite of some efforts by people over the years. These notes are still quite useful, but lack for instance an index. See Steinberg's UCLA homepage cited above here.) Even with explicit information about existence of integral forms, it would require some further argument if you start with an arbitrary (even simple) algebraic group having such a form and then consider all its closed reductive subgroups. Unless the big group is of general or special linear type to begin with, I'm not convinced there is enough motivation to get into this (?) ADDED: Questions about algebraic groups over `$\mathbb{Z}$` get rather sophisticated even when you consider only reductive groups. This is developed in a short conference paper: "Non-split reductive groups over `$\mathbb{Z}$`" by Brian Conrad and Benedict Gross (from Luminy last September), online at the conference page here. -
http://physics.stackexchange.com/questions/13121/can-fermionic-symmetries-be-fully-integrated-into-geometric-deformation-complexe/13149
# Can Fermionic symmetries be fully integrated into geometric deformation complexes or symplectic reduction? How should a geometer think about quotienting out by a Fermionic symmetry? Is this a formal concept? A strictly linear concept? A sheaf theoretic concept? How does symplectic reduction work with odd symmetries within geometric quantization? Are there moduli or instanton-like deformation complexes where the space of symmetries acting on the space of fields includes both even and odd symmetries? For example, can the familiar instanton moduli problem whose deformations are governed by the linearized complex: $$\Omega^0(Ad_0) \stackrel{d_A}\longrightarrow \Omega^1(Ad_0 ) \stackrel{\Pi \circ d_A} \longrightarrow \bar\Omega^2(Ad_0 )$$ be enhanced by fermionic symmetries to some structure whose linearized deformation complex looks like: $$\Omega^0(Ad_0 \oplus Ad_1) \stackrel{d_A}\longrightarrow \Omega^1(Ad_0 \oplus Ad_1) \stackrel{\Pi \circ d_A} \longrightarrow \bar\Omega^2(Ad_0 \oplus Ad_1)$$ in a geometrically meaningful way? In some sense the question is trying to understand how 'odd symmetries' or Fermionic coordinates can be meaningfully carried through a familiar non-linear problem in geometry which makes use of a non-linear group action. Are these odd symmetries true geometric symmetries to be exponentiated in some way, or are they more properly formal symmetry-like operators analogous to true symmetries? Thanks in advance. - ## 1 Answer This answer will address the two questions about symplectic reduction and the instanton moduli space separately. First question: An accepted method to generalize symplectic reduction to include fermionic symmetries is by means the theory of symplectic supermanifolds (and more generally Poisson supermanifolds). Please see the following exposition by Tilmann Glimm. Symplectic supermanifolds can be equipped with two types of symplectic structures, odd and even. In Glim's article the reduction theorem is proved for the two types of symplectic structures. An example of a symplectic supermanifold is the supercotangent bundle (see for example the following article by: J. P. Michel) of a spin manifold. Locally this bundle is covered by charts $\{p_i, q^i, \theta_i\}$, consisting of position, momentum and Grassmann coordinates. The symmetries divided by in the supersymplectic reduction consist of Lie supergroups with Hamiltonian action on the supersymplectic manifold. In Glimm's article, there is a detailed description of the symplectic reduction of the Bose-Fermi oscillator by its Hamiltonian supersymmetry group (which is a subgroup of the orthosymplectic group). The theory of geometric quantization can be extended to supergeometry. The quantization of odd symplectic structures lead to Batalin-Vilkovisky theory. In its simplest application, this quantization results quantization spaces isomorphic to the de-Rham complex of the manifold. The classical mechanical theory associated with the even symplectic particle is the Berezin-Marinov classical description of the spin by means of Grassmann variables. Its quantization corresponds to superparticles, where the fermions consist of sections of the spinor bundle over the base manifold. 
Second question: Basically, the instanton moduli space for super Yang-Mills theories consists of the usual instanton moduli space given by the Atiyah-Hitchin-Singer deformation complex together with the zero modes of a twisted Dirac operator (see, for example, the following article by Mainiero and Walter Tangarife in the context of the Seiberg-Witten theory). A supersymmetric generalization of the deformation complex actually exists, at least for N = 4 super Yang-Mills, as given in Labastida and Lozano's article (equation (3.6)). -
http://mathhelpforum.com/pre-calculus/151413-functions-question.html
# Thread:

1. ## Functions question.

I really don't understand this question. If $f(x) = 1/x$, show that $f(a) - f(b) = f(ab/(b-a))$. Help would be greatly appreciated.

2. Originally Posted by Archelaus
I really don't understand this question. If $f(x) = 1/x$, show that $f(a) - f(b) = f(ab/(b-a))$. Help would be greatly appreciated.

Just transform the left hand side of this equation into the right hand side, by applying the definition of $f(x)$ and some quite routine algebraic transformations

$f(a)-f(b)=\frac{1}{a}-\frac{1}{b}=\ldots = \frac{1}{\frac{ab}{b-a}}=f\left(\frac{ab}{b-a}\right)$

and you are done.
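If you want to convince yourself the identity is plausible before filling in the algebra, a quick numerical spot check helps (an illustration only; the sample pairs are arbitrary, chosen so that b - a is nonzero):

```python
def f(x):
    return 1.0 / x

# Spot-check f(a) - f(b) == f(a*b / (b - a)) for a few pairs with b != a.
for a, b in [(2.0, 3.0), (5.0, -4.0), (0.5, 0.25)]:
    lhs = f(a) - f(b)
    rhs = f(a * b / (b - a))
    print(a, b, lhs, rhs, abs(lhs - rhs) < 1e-12)
```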
http://mathhelpforum.com/calculus/65425-convergent-iteration-print.html
# Convergent iteration

• December 17th 2008, 04:38 PM
namelessguy

Convergent iteration

Can someone help me with this problem? Consider the fixed point iteration $x_{k+1}=(m+1)x_k-{x_k}^2$, k=0,1,2,..., where $1\leq m \leq 2$.
a) Show that the iteration converges for any initial guess $x_0$ satisfying $m-1/5 \leq x_0 \leq m+1/5$
b) Assume that the iteration converges; find m such that the method converges quadratically.
I think I got part a) by letting $g(x)=(m+1)x-x^2$, then taking the derivative $g'(x)=m+1-2x$. I then show that there exists $0<k<1$ such that $\mid g'(x) \mid <k$, and by a theorem this implies the iteration converges for any initial guess in the specified interval.
For part b), I need to find m such that $\lim_{k\to \infty} \frac{|x_{k+1}-x|}{|x_k-x|^2}= \lim_{k\to \infty} \frac{|(m+1)x_k-{x_k}^2-x|}{|x_k-x|^2}$ exists, but I don't get anywhere from here. Hope someone can give a hand.

• December 17th 2008, 10:53 PM
CaptainBlack

Quote:

Originally Posted by namelessguy
Can someone help me with this problem? Consider the fixed point iteration $x_{k+1}=(m+1)x_k-{x_k}^2$, k=0,1,2,..., where $1\leq m \leq 2$.
a) Show that the iteration converges for any initial guess $x_0$ satisfying $m-1/5 \leq x_0 \leq m+1/5$
b) Assume that the iteration converges; find m such that the method converges quadratically.
I think I got part a) by letting $g(x)=(m+1)x-x^2$, then taking the derivative $g'(x)=m+1-2x$. I then show that there exists $0<k<1$ such that $\mid g'(x) \mid <k$, and by a theorem this implies the iteration converges for any initial guess in the specified interval.
For part b), I need to find m such that $\lim_{k\to \infty} \frac{|x_{k+1}-x|}{|x_k-x|^2}= \lim_{k\to \infty} \frac{|(m+1)x_k-{x_k}^2-x|}{|x_k-x|^2}$ exists, but I don't get anywhere from here. Hope someone can give a hand.

Assuming that the iteration converges, it is obvious it must converge to either $x=0$ or $x=m$, but it is easy to show that the iteration will not converge to $x=0$.

So suppose $x_n=m+\varepsilon$. Then:

$x_{n+1}=(m+1)(m+\varepsilon)-(m+\varepsilon)^2=m+\varepsilon(1-m)-\varepsilon^2$

So if $m=1$ the iteration converges quadratically to $x=m=1$.

CB
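To see the quadratic convergence at m = 1 concretely, here is a short numerical sketch (an illustration only; the starting value is one allowed choice from the interval in part a). The error roughly squares at every step, exactly as the ε-expansion above predicts.

```python
def g(x, m):
    return (m + 1.0) * x - x * x

m = 1.0
x = 1.2   # an allowed starting guess, since m - 1/5 <= x0 <= m + 1/5
for k in range(6):
    err = x - m
    print(f"k = {k}: x_k = {x:.16f}, error = {err: .3e}")
    x = g(x, m)
```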
http://quant.stackexchange.com/questions/tagged/mean?sort=unanswered
# Tagged Questions

The mean tag has no wiki summary.

0 answers, 61 views

### Mean-variance minimizer

I am working on a project that involves pricing european call options in incomplete markets. Now I need to find a unique measure $Q^*$ such that Q^* = \min_{M_e} E_Q [V(T)-F(w)]^2 = \min_{u} E_Q ...
http://physics.aps.org/articles/v2/44
# Viewpoint: Observing unification on a grand scale

Michigan Center for Theoretical Physics (MCTP), Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA

Published May 26, 2009  |  Physics 2, 44 (2009)  |  DOI: 10.1103/Physics.2.44

New connections have been made between experimental astrophysical signatures and theories that unify the electromagnetic, weak, and strong forces, called grand unified theories.

#### Astrophysical probes of unification

Asimina Arvanitaki, Savas Dimopoulos, Sergei Dubovsky, Peter W. Graham, Roni Harnik, and Surjeet Rajendran

Published May 26, 2009 | PDF (free)

Maurice Goldhaber placed an early limit on proton decay, noting that if protons had a lifetime shorter than $10^{17}$ years, you would "feel it in your bones." That is, the radiation from these decays in your body would provide a fatal radiation exposure. The limits on proton decay have greatly improved since then. For some decay modes, the limits from the Super-Kamiokande experiment now exceed $10^{33}$ years [1], a limit only made possible by observing a tank of water with approximately $10^{34}$ protons in it.

Why look so hard for such a rare process? Decay rates are suppressed by the masses of the particles responsible for mediating the decay. So, long lifetimes probe large energy scales. For example, in the case of the proton, the symmetry that is responsible for its stability—conservation of baryon number—may only be approximate. Just such a violation of baryon number occurs at scales near $10^{16}$ GeV in "grand unified theories" (GUTs), theories that seek to unify electromagnetism, the weak and strong forces, but not gravity. This enormous energy scale translates into proton lifetimes close to the current bounds. So, if a very rare proton decay is observed, we will be probing high energy scales, indeed.

Work by Asimina Arvanitaki of the University of California, Berkeley, and colleagues at Stanford University in the US, appearing in the current issue of Physical Review D [2], points out that in a broad class of grand unified theories, there are additional ways to probe this large energy scale. Their starting point is supersymmetric theories. These theories posit a doubling of those particles that comprise the standard model, with far reaching consequences: these theories contain an excellent candidate for dark matter [3] and can stabilize the weak scale ($M_W \sim 100$ GeV) against quantum corrections that might drag it up to the Planck scale ($M_{\mathrm{Pl}} \sim 10^{18}$ GeV). Perhaps most significantly, these theories quantitatively modify how the strength of the forces changes with energy.

If the strengths of the known forces (excluding gravity) are extrapolated to high energies using calculations within the standard model, nothing particularly surprising occurs. With supersymmetry, this changes. In the simplest supersymmetric extension of the standard model, the "minimal supersymmetric standard model," the lines parameterizing the strength of the three known forces meet at a single point, corresponding to an energy of $2\times 10^{16}$ GeV [4]. At this energy, the known forces might unify. Thus supersymmetric theories provide a natural home for grand unification.
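The near-meeting of the three couplings can be illustrated in a few lines. The Python sketch below is a rough one-loop illustration only: the inverse couplings at the Z mass are rounded, approximate input values, and threshold and two-loop corrections are ignored, so it should be read as a back-of-the-envelope check rather than a precision statement.

```python
import math

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - b_i/(2*pi) * ln(mu/MZ)
MZ = 91.19  # GeV

# Approximate inverse couplings at MZ (GUT normalization, alpha_1 = 5/3 alpha_Y)
inv_alpha_mz = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}

# One-loop beta-function coefficients in the minimal supersymmetric standard model
b = {"U(1)": 33.0 / 5.0, "SU(2)": 1.0, "SU(3)": -3.0}

def inv_alpha(group, mu):
    return inv_alpha_mz[group] - b[group] / (2.0 * math.pi) * math.log(mu / MZ)

mu_gut = 2.0e16  # GeV
for group in ("U(1)", "SU(2)", "SU(3)"):
    print(f"1/alpha for {group} at 2e16 GeV: {inv_alpha(group, mu_gut):.1f}")
# All three inverse couplings come out close to ~24, i.e. the couplings
# nearly meet at the quoted unification scale.
```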
There is no guarantee that nature has chosen the simplest of supersymmetric theories, and minor modifications in the theory can make a big difference in its observable consequences. Arvanitaki et al. demonstrate that certain supersymmetric GUTs have exciting predictions in three a priori unrelated realms: the Large Hadron Collider (LHC), big bang nucleosynthesis (BBN), and cosmic rays. Combining the measurements from these sources may help to establish physics that operates at very high energies indeed (see Fig. 1).

It is the unique nature of dark matter in these theories that makes cosmic rays an important way to probe them. In these theories the dark matter can decay. This is in contrast to the simplest supersymmetric GUT, where the dark matter is composed of the lightest supersymmetric particle, which is taken to be absolutely stable. Its infinite lifetime is ensured by a symmetry called R parity. However, as Arvanitaki et al. point out, minor modifications to the GUT could make the symmetry that ensures the longevity of the dark matter an approximate one. Then, just as violation of baryon number might induce proton decay with a lifetime of $10^{33}$ years, the dark matter might naturally decay with a lifetime of roughly $10^{19}$ years. Clearly, most of the dark matter will not have decayed—the universe is only $10^{10}$ years old—but the tiny fraction that does decay does so spectacularly, with decay products possessing hundreds of GeV in energy.

So, $10^{19}$ years turns out to be an important number, for as the authors emphasize, there are a variety of existing and upcoming cosmic-ray experiments that are capable of observing energetic decay products of dark matter, should it possess this lifetime. Depending on the precise route that dark matter takes to decay, experiments might observe an excess of gamma rays, neutrinos, positrons, or antiprotons. While the interpretation of these experiments can be complicated by astrophysical backgrounds, there is hope that features in the spectra of these decay products might be enough to conclude that it is truly the decay of dark matter that is being observed. Recent data from the PAMELA and Fermi experiments [5], e.g., have already caused quite a buzz, even if their interpretation remains unclear. If multiple dark matter decay channels are observed, the relative rates might be a window onto the details of the grand unified physics responsible for their decay.
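A crude dimensional estimate shows why the GUT scale naturally lands lifetimes in this range. If a particle of mass m decays through an operator suppressed by two powers of a heavy scale M, the rate scales as m^5/M^4 in natural units. The sketch below is an order-of-magnitude illustration only, with every dimensionless coupling and phase-space factor set to one; it is not a calculation from the paper.

```python
# Order-of-magnitude estimate: Gamma ~ m^5 / M^4 in natural units,
# converted to a lifetime in years.  All O(1) factors are dropped.
HBAR_GEV_S = 6.582e-25      # hbar in GeV * s
SECONDS_PER_YEAR = 3.156e7

def lifetime_years(m_gev, M_gev):
    gamma = m_gev**5 / M_gev**4          # decay rate in GeV
    return HBAR_GEV_S / gamma / SECONDS_PER_YEAR

M_GUT = 2.0e16  # GeV

print(f"proton-like particle (m ~ 1 GeV):  ~{lifetime_years(1.0, M_GUT):.0e} years")
print(f"TeV-scale dark matter (m ~ 1 TeV): ~{lifetime_years(1.0e3, M_GUT):.0e} years")
```

Both outputs land within an order of magnitude or so of the lifetimes quoted above (roughly $10^{33}$ years for the proton and $10^{19}$ years for TeV-scale dark matter), which is the whole point of the dimensional argument.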
As a recent review by Cyburt, Fields, and Olive has emphasized [6], observations of both lithium-6 and lithium-7 in low-metallicity stars [7] are somewhat discrepant from the BBN prediction. It is possible that there is misunderstood astrophysics at work—perhaps the measured abundance of lithium is not primordial after all? The other possibility is novel cosmology. For example, if a particle were produced in the big bang with a lifetime of $100$–$1000\ \mathrm{s}$, its decays would be occurring during the crucial epoch of nucleosynthesis. This could shift the predictions for the lithium abundance into accordance with observations [8]. While there are multiple examples in the literature of such long-lived relic particles with the proper properties to solve the lithium problem, the authors note that a lifetime of several minutes—an eternity on particle scales—is particularly elegantly explained if these decays are suppressed by powers of the GUT scale. The lifetime of the relics that modify nucleosynthesis (minutes) and the lifetime of the dark matter ($10^{19}$ years) could even be explained by the same physics in these theories. Decays during the BBN epoch would result from processes suppressed by the square of the GUT energy, while the decay of the dark matter would correspond to processes suppressed by four powers of the GUT energy.

It will not be easy to determine that GUT-scale physics is the reason for the long lifetime of these relics. The case could be sharpened, however, if the relevant particles are produced at the Large Hadron Collider. If the LHC produces these particles, they could slow down and stop in the detectors there. Their decays could be seen minutes later, a striking signal. The best hope for establishing that one of the GUTs that Arvanitaki et al. discuss is in play is to use a mixture of data from three sources: cosmic rays, colliders, and BBN.

It should be noted that the presence of particles with such long lifetimes is not automatic in just any supersymmetric grand unified theory. Depending on the structure of the theory, the relevant particles could either be absolutely stable, or could have far too short a lifetime to explain either modifications of BBN or cosmic-ray signals. So, Arvanitaki et al. point out structures that ensure that a theory will have particles with lifetimes in the cosmic "sweet spots." The fact that not all grand unified theories give rise to the relevant decays shows the predictive power of this exploration. So, should the type of signals that Arvanitaki et al. predict be confirmed with upcoming cosmic-ray data (e.g., from the Fermi Gamma Ray Space Telescope), and should the lithium problem sharpen, it will point to a particular class of theories (those with long-lived relics), and we may have a chance to learn something about grand unified theories. Hopes of probing GUT-scale physics would no longer solely rest on proton decay.
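To put rough numbers to the lifetimes quoted above, here is a back-of-the-envelope sketch (this is not the authors' calculation). It assumes the standard dimensional estimate that a decay mediated by a dimension-six operator suppressed by the GUT scale has a rate $\Gamma \sim m^5/M_{\mathrm{GUT}}^4$, and the TeV-scale dark-matter mass used below is an illustrative assumption.

```python
# Back-of-the-envelope lifetime estimates: tau ~ hbar * M_GUT**4 / m**5 for a
# decay through a dimension-six, GUT-suppressed operator.  Order of magnitude
# only; the 1 TeV dark-matter mass is an assumed benchmark, not a quoted value.
HBAR_GEV_S = 6.58e-25          # hbar in GeV*s
SEC_PER_YEAR = 3.15e7
M_GUT = 2e16                   # GeV, the unification scale quoted in the text

def lifetime_years(mass_gev, gut_powers=4):
    """Dimensional-analysis lifetime for a rate m**(gut_powers+1) / M_GUT**gut_powers."""
    rate_gev = mass_gev ** (gut_powers + 1) / M_GUT ** gut_powers
    return HBAR_GEV_S / rate_gev / SEC_PER_YEAR

print(f"proton, m ~ 1 GeV:      {lifetime_years(1.0):.1e} yr")   # ~3e33, near the Super-K bound
print(f"dark matter, m ~ 1 TeV: {lifetime_years(1e3):.1e} yr")   # ~3e18, i.e. roughly 10^19
```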
### References

1. H. Nishino et al., Phys. Rev. Lett. 102, 141801 (2009).
2. A. Arvanitaki, S. Dimopoulos, S. Dubovsky, P. W. Graham, R. Harnik, and S. Rajendran, Phys. Rev. D 79, 105022 (2009).
3. H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983).
4. S. Dimopoulos, S. Raby, and F. Wilczek, Phys. Rev. D 24, 1681 (1981).
5. O. Adriani et al., Nature 458, 607 (2009); A. A. Abdo et al., Phys. Rev. Lett. 102, 181101 (2009); see also the Viewpoint commentary by B. Winstein and K. Zurek, Physics 2, 37 (2009).
6. R. H. Cyburt, B. D. Fields, and K. A. Olive, arXiv:0808.2818.
7. F. Spite and M. Spite, Astron. Astrophys. 115, 357 (1982).
8. S. Bailly, K. Jedamzik, and G. Moultaka, arXiv:0812.0788 (2008).

### About the Author: Aaron Pierce

Aaron Pierce is an Assistant Professor of physics at the University of Michigan and member of the Michigan Center for Theoretical Physics. He is a particle theorist and his work focuses on the phenomenology of physics beyond the standard model.
http://mathoverflow.net/questions/73329?sort=votes
## Flat sheaves and induced morphism

Hi, I want to consider a module $P$ on a product $X \times Y$ of varieties over a field of characteristic zero, such that $P$ is flat over $X$. Furthermore I want to consider for each closed point $x$ on $X$ the natural map $i_x:Y\rightarrow X\times Y$, which comes from the cartesian diagram induced by the inclusion of $x$ in $X$; naively it just sends a point $y$ on $Y$ to $(x,y)$ for that fixed $x$. With $P_x$ I denote the pullback of $P$ via $i_x$.

Assume that for every closed $x$ you have that $P_x$ is the skyscraper $k(y)$ for a closed point $y$ on $Y$. Then I can define a map of sets $f:Cl(X)\rightarrow Cl(Y)$ between the closed points $Cl(X)$ and $Cl(Y)$ of $X$ and $Y$.

My question: can you use this $f$ to define a real morphism between $X$ and $Y$ as varieties?

Remark: in the case I am interested in, the $f$ is bijective. Does this then imply that, if the extension exists, it is automatically an iso? Thanks!

## 3 Answers

I will assume that $X$ is normal. Let $Z$ be the support of $P$ on $X\times Y$. Then by your assumption $Z$ projects bijectively onto $X$. Since we are in characteristic $0$, this map is separable, so by Zariski's main theorem it is an isomorphism. Composing the inverse of this isomorphism with the projection to $Y$, you get a morphism $X\to Y$.

-

I saw this question while trying to understand the proof of Cor. 5.23 of Huybrechts' book on Fourier-Mukai transforms. With exactly the same assumptions, there is a conclusion that using local sections of the module one can define a morphism. This cannot be difficult, but I am in the middle of understanding it. There is also an answer to the second question: if the varieties are smooth then it is an iso. If not, Huybrechts uses the assumption that the derived categories are equivalent.

-

Just to correct my previous note, which is not quite right:

1. In Cor. 5.23 one works with smooth varieties (but not necessarily in char 0).
2. For the isomorphism there are two cases distinguished: in char 0 (as an easy consequence) or not in char 0, where an inverse is constructed (using other assumptions).
3. "Choosing local sections of P shows that it indeed defines the morphism f:X->Y" (P is the sheaf on XxY, flat over X). This should be easy and characteristic-free. Piotr, can you see it? Definitely the support of P is a graph, but I would much appreciate a (preferably algebraic) proof :) Best

-
http://mathhelpforum.com/advanced-statistics/141922-characteristic-function-evaluation-definite-integral-print.html
# characteristic function (evaluation of a definite integral)

• April 28th 2010, 09:20 AM Random Variable

I want to show that the characteristic function of Cauchy's distribution ( $f(x) = \frac{1}{\pi} \frac{b}{b^{2}+x^{2}}, \ b>0$) is $\phi_{X}(t) = e^{-b|t|}$

$\phi_{X}(t) = E[e^{itX}] = E[\cos(tX)] + i E[\sin (tX)]$ $= \frac{b}{\pi} \int^{\infty}_{-\infty} \frac{\cos(tx)}{b^{2}+x^{2}} \ dx + i \ \frac{b}{\pi} \int^{\infty}_{-\infty} \frac{\sin(tx)}{b^{2}+x^{2}} \ dx$

The second integral evaluates to zero because the integrand is odd. The integrand of the first integral is even, so I need to evaluate $\frac{2 b}{\pi} \int^{\infty}_{0} \frac{\cos(tx)}{b^{2}+x^{2}} \ dx$

Any suggestions?

• April 28th 2010, 11:32 AM Laurent

Quote: Originally Posted by Random Variable
I want to show that the characteristic function of Cauchy's distribution ( $f(x) = \frac{1}{\pi} \frac{b}{b^{2}+x^{2}}, \ b>0$) is $\phi_{X}(t) = e^{-b|t|}$ [...] Any suggestions?

The usual proofs are:

- Fourier inversion formula. It is easy to compute the Fourier transform of $e^{-b|t|}$, and it relates simply to Cauchy's distribution, so we can have it work backwards using the Fourier inversion formula...

- Contour integration in the complex plane. The integral $\int_{-R}^R \frac{e^{itx}}{1+x^2}\,dx$ is completed into a contour integral using a half circle in the upper half-plane; the integral along the circle is seen to converge to 0 as the radius goes to infinity (dominated convergence theorem), while the value of the contour integral is obtained by computing the residue at the pole $i$ (residue theorem).

• April 28th 2010, 07:45 PM Random Variable

I hate contour integration. But this one is not that bad because there's a nice formula.

Let $f(z) = \frac{e^{itz}}{b^{2}+z^{2}}$, which has the simple pole $z = bi$ in the upper half plane, so $\int^{\infty}_{-\infty} \frac{\cos (tx)}{b^{2}+x^{2}} \ dx = -2 \pi \ Im (Res\{f,bi\})$

$Res\{f,bi\} = \lim_{z \to bi} (z-bi) \frac{e^{itz}}{b^{2}+z^{2}}= \lim_{z \to bi} \frac{e^{itz}}{z+bi} = \frac{e^{-bt}}{2bi} = -i \ \frac{e^{-bt}}{2b}$

so $\int^{\infty}_{-\infty} \frac{\cos (tx)}{b^{2}+x^{2}} \ dx = \pi \ \frac{e^{-bt}}{b}$ and $\phi_{X}(t) = e^{-bt}$

From where is the absolute value coming? (Wondering)

• April 29th 2010, 03:23 AM Laurent

Quote: Originally Posted by Random Variable
From where is the absolute value coming? (Wondering)

It's hidden in your computation. The integral along the upper half-circle converges to 0 only when $t>0$. Otherwise, you have to consider the lower half-circle, hence the pole is $-ib$ (or you can note from the beginning that the characteristic function is even by symmetry of the Cauchy distribution).
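As a numerical cross-check of the result derived in the thread (an aside, not part of the original posts; the values of $b$ and $t$ below are arbitrary test values):

```python
# Numerically verify that the Cauchy characteristic function equals exp(-b|t|).
import numpy as np
from scipy.integrate import quad

def cauchy_cf(t, b):
    # E[cos(tX)] for X ~ Cauchy with scale b; the sine part vanishes by symmetry,
    # as noted in the thread.
    integrand = lambda x: np.cos(t * x) * b / (np.pi * (b**2 + x**2))
    value, _ = quad(integrand, -np.inf, np.inf, limit=200)
    return value

for b, t in [(1.0, 2.0), (3.0, -0.5)]:
    print(cauchy_cf(t, b), np.exp(-b * abs(t)))   # the two columns should agree
```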
http://mathoverflow.net/questions/66233/convexity-of-a-constrained-optimization-problem
## Convexity of a constrained optimization problem

Hi, this is a continuation of a previous question I asked about the convexity of an optimization problem I am working with. Consider the function \begin{multline} B_i(a_0,\mathbf{p}) \equiv B(\vec{x}_i,a_0,\mathbf{p}) = \left(\begin{array}{cc} 1 & -p_{N}^*/a_0 \\ p_{N}/a_0 & 1 \end{array}\right) \left(\begin{array}{cc} 1 & 0 \\ 0 & z_{N}(\vec{x}_i) \end{array}\right) \cdots \\ \left(\begin{array}{cc} 1 & -p_1^*/a_0 \\ p_1/a_0 & 1 \end{array}\right) \left(\begin{array}{cc} 1 & 0 \\ 0 & z_1(\vec{x}_i) \end{array}\right) \left(\begin{array}{cc} a_0 \\ 0 \end{array} \right), \end{multline} where the variable $a_0$ is between 0 and 1, and the variables $\mathbf{p} = [\begin{array}{ccc} p_1 & \cdots & p_N \end{array}]$ satisfy $\vert p_j \vert \leq a_0$. The $z_j(\vec{x}_i)$'s are complex exponentials, e.g., $z_j(\vec{x}_i) = e^{\imath \vec{x}_i \cdot \vec{k}_j}$, where $\vec{x}_i$ is a spatial location and $\vec{k}_j$ is a spatial frequency location.

Here is the optimization problem: \begin{equation} \begin{array}{lll} \textrm{maximize} & a_0 & \\ \textrm{subject to} & 0 \leq a_0 \leq 1 & \\ & \vert p_j \vert \leq a_0, & j = 1,\dots,N \\ & \vert B^d_i - B_i(a_0,\mathbf{p}) \vert \leq \delta_i, & i = 1,\dots,N_s \\ & a_0^2 \prod_{j=1}^N(1+\vert p_j \vert^2/a_0^2) \leq 1, & \end{array} \end{equation} where $B^d_i$ is a target spatial pattern I want to achieve at spatial location $\vec{x}_i$, with maximum error $\delta_i$, and $N_s$ is the number of spatial locations I consider. $B^d_i$ has a maximum magnitude of 1. The variables are $a_0$ and $\mathbf{p}$.

Here is how I solve it, using bisection on $a_0$. I set initial lower and upper bounds on $a_0$ based on the result of a much faster, approximate optimization method. Then, for the initial lower-bound $a_0$ value, I solve the following feasibility problem, holding $a_0$ fixed: \begin{equation} \begin{array}{lll} \textrm{minimize} & r & \\ \textrm{subject to} & \delta_i^{-2} \vert B^d_i - B_i(\mathbf{p};a_0) \vert^2 \leq r, & i = 1,\dots,N_s \\ & \vert p_j \vert \leq 1, & j = 1,\dots,N \\ & a_0^2\prod_{j=1}^{N}\left(1 + \vert p_j \vert^2/a_0^2\right) \leq 1 \end{array} \end{equation} I solve this subproblem using the barrier method, with quasi-Newton search directions for $\mathbf{p}$. If that problem is solved with $r \leq 1$, then these values of $a_0$ and $\mathbf{p}$ are feasible, and I can set the lower $a_0$ bound to this value of $a_0$ and repeat the problem at the midway point between this value of $a_0$ and the upper-bound $a_0$. If $r > 1$, then these $a_0$ and $\mathbf{p}$ are infeasible, and I set the upper-bound $a_0$ to this $a_0$ and solve the problem again at the midway point between the lower-bound $a_0$ and the new upper-bound $a_0$. I stop when $a_0$ stops changing by much.

My question: Is this problem convex? I have already proven that $a_0^2 \prod_{j=1}^N(1+\vert p_j \vert^2/a_0^2) \leq 1$ is convex in the domain of this problem, so it remains to be answered whether the $B_i$ error functions are convex. In a response to a previous question I asked, it was shown that when $B^d_i = 0$, $\vec{x}_i = 0$ and $N=3$, then $\vert B^d_i - B_i(a_0,\mathbf{p})\vert$ is NOT convex. So, is it possible that when I use, for example, the barrier method to solve this problem, the sum of the log-barriers for the $B_i$ error functions is a convex function?
- 2 The short answer is "No, there is no hope for any convexity here". Indeed, you may easily have two different ways to hit the target exactly already when $N=2$ but there is no hope that it'll be hit by every convex combination of those. The more interesting question is how on Earth to solve it then? Here I do not have a ready answer but I'll think of it. – fedja May 27 2011 at 22:55 Hi fedja, thanks for spending some time on it. As mentioned, I currently solve this problem (well, find some local minimum I suppose) using the barrier method. I remain hopeful that there is something provable about the optimality of my solutions, given that the solutions i get are simply very good and much better than competing methods. Furthermore, the algorithm always achieves the correct solutions for toy problems to which I know the answer a priori. – Will May 28 2011 at 8:05 Then just tell how exactly you do it and we may try to either prove the optimality or give you an example where your technique fails. That may take some time though. – fedja May 28 2011 at 12:43 Hi Fedja - just updated with a description of the algorithm. Thanks! – Will May 30 2011 at 6:51
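For readers skimming the algorithm described in the question, here is a minimal sketch of the outer bisection driver. This is not the poster's code: the inner barrier-method/quasi-Newton feasibility solve is abstracted into a hypothetical `feasible(a0)` callback (returning True when the subproblem attains $r \leq 1$), and the initial bounds are assumed to come from the faster approximate method mentioned in the question.

```python
# Sketch of bisection on a0.  `feasible` stands in for the barrier-method
# feasibility subproblem; a0_lo is assumed feasible and a0_hi infeasible.
def maximize_a0(feasible, a0_lo, a0_hi, tol=1e-4):
    while a0_hi - a0_lo > tol:
        a0_mid = 0.5 * (a0_lo + a0_hi)
        if feasible(a0_mid):
            a0_lo = a0_mid   # subproblem reached r <= 1: raise the lower bound
        else:
            a0_hi = a0_mid   # infeasible: lower the upper bound
    return a0_lo
```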
http://math.stackexchange.com/questions/6280/are-there-infinitely-many-x-for-which-pix-mid-x
# Are there infinitely many $x$ for which $\pi(x) \mid x$?

Let $\pi(x)$ denote the Prime Counting Function.

• One observes that $\pi(6) \mid 6$ and $\pi(8) \mid 8$. Does $\pi(x) \mid x$ hold for only finitely many $x$, or is it true for infinitely many $x$?

- 4 I think you switched 'finitely' and 'infinitely' in your question. I have no idea, but look forward to hearing some great responses. – Brandon Carter Oct 7 '10 at 18:07
- @Brandon: Yes, i did that because finitely and finitely doesn't sound realistic – anonymous Oct 7 '10 at 18:09
- @Chandru1: I don't understand your comment. Could you please clarify what didn't sound realistic? – Jonas Meyer Nov 21 '10 at 8:23

## 2 Answers

Well, it's not hard to show that for every natural $k>2$ the equation $x=k\pi(x)$ has a positive solution.

Proof by contradiction: Imagine that $x\ne k\pi(x)$ for every natural $x$. But then, for $x=2$: $x-k\pi(x)=2-k<0$, and for very large $x$: $x-k\pi(x)\sim x(1-\frac{k}{\ln x})>0$. So there should be some $t$ such that $t-k\pi(t)<0$ and $(t+1)-k\pi(t+1)>0$. But then, on the one hand, $$t+1-k\pi(t+1)-(t-k\pi(t))\ge 2$$ as the difference of a positive integer and a negative integer, while on the other hand $$t+1-k\pi(t+1)-(t-k\pi(t))=1-k(\pi(t+1)-\pi(t))\le 1.$$ Contradiction.

- How did you get from $x-k\pi(x)$ to $x(1-\frac{k}{\ln x})$ ? – configurator Oct 8 '10 at 2:36
- In other words, how do you know that pi(x) is 1/ln(x)? – configurator Oct 8 '10 at 2:38
- 3 – Jonas Meyer Oct 8 '10 at 4:40
- @Jonas: Thanks for the link – configurator Oct 8 '10 at 13:46

This is Sloane's A057809, but unfortunately there's not much information there. Heuristically, the chance that $\pi(x)\mid x$ is $\displaystyle\frac{\log{x}}{x}$, which suggests that there are about $0.5\log^2 x$ such numbers up to $x$, that is, infinitely many.

- A comment in the sequence suggests that there are log x 'clumps' of numbers up to x. – Charles Oct 12 '10 at 19:31
- The clumping makes sense -- there's a clump for $x/\pi(x) = 1$, a clump for $x/\pi(x) = 2$, and so on. – Michael Lugo Mar 22 '11 at 23:43
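A quick empirical look at the question (an aside, not from the thread; it uses `sympy.primepi` for the prime-counting function):

```python
# List the x <= 2000 with pi(x) | x; the values reproduce the start of
# OEIS A057809, cited in the second answer.
from sympy import primepi

hits = [x for x in range(2, 2001) if x % int(primepi(x)) == 0]
print(hits)   # [2, 4, 6, 8, 27, 30, 33, ...]
```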
http://mathoverflow.net/questions/69229/proof-strength-of-calculus-of-inductive-constructions/115813
## Proof strength of Calculus of (Inductive) Constructions

This is a follow-on from this question, where I pondered the consistency strength of Coq. This was too broad a question, so here is one more focussed. Rather, two more focussed questions: I've read that CIC (the Calculus of Inductive Constructions) is interpretable in set theory (IZFU - intuitionistic ZF with universes, I believe). Is there a tighter result? And what is the general consensus on the relative consistency of constructive logics anyway?

I am familiar, in a rough-and-ready way, with the concept of consistency strength in set theory, but more so with the 'logical strength' one has in category theory, where one considers models of theories in various categories. Famously, intuitionistic logic turns up as the internal logic of a topos, but perhaps this is an entirely different dimension of logical strength.

I guess one reason for bringing this up is the recent discussion on the fom mailing list about consistency of PA - Harvey Friedman tells us that $Con(PA)$ is equivalent to 15 (or so) completely innocuous combinatorial statements (none of which were detailed - if someone could point me to them, I'd be grateful), together with a version of Bolzano-Weierstrass for $\mathbb{Q}_{[0,1]} = \mathbb{Q} \cap [0,1]$ (every sequence in $\mathbb{Q}_{[0,1]}$ has a Cauchy subsequence with a specified sequence of 'epsilons', namely $1/n$). A constructive proof of this result would be IMHO very strong evidence for the consistency of PA, if people are worried about that.

- Thanks for the editing, Zev. – David Roberts Jul 1 2011 at 7:02
- @David Isn't CIC too broad? Voevodsky has a Coq fork reflecting his new theory, so I am inclined to believe Coq and Voevodsky's fork are quite different types of CIC. – joro Jul 1 2011 at 8:35
- @joro - Voevodsky's personal version of Coq is different to the trunk, for sure, but I don't know the depth at which his modification is made. That would be another facet to this line of reasoning. – David Roberts Jul 1 2011 at 9:14
- 1 If you need constructive justification for consistency of PA, I guess there are easier ways to do it. I'm not familiar with CIC, but it sounds like some sort of higher-order logic/type theory. Thus, it's quite likely much stronger than the constructive Heyting arithmetic, which is well-known to be equiconsistent with PA. – Emil Jeřábek Jul 1 2011 at 10:20
- 1 @David: This is an old result of Gödel. See en.wikipedia.org/wiki/… for definition of the translation and some references (the verification that the translation works is routine). – Emil Jeřábek Jul 1 2011 at 11:06

## 4 Answers

IIRC, the calculus of inductive constructions is equi-interpretable with ZFC plus countably many inaccessibles -- see Benjamin Werner's "Sets in Types, Types in Sets". (This is because of the presence of a universe hierarchy in the CIC.)

As I understand it, the homotopy type theory project does not need (or want?) the full consistency strength of Coq; they make use of it simply because it is one of the better implementations of type theory. Instead, what makes this work interesting is not its consistency strength, but its focus on a wholly new dimension of logical complexity: the complexity of equality (which, utterly amazingly, they relate to homotopy type).
It puts me in mind of a famous quotation of Rota:

"What can you prove with exterior algebra that you cannot prove without it?" Whenever you hear this question raised about some new piece of mathematics, be assured that you are likely to be in the presence of something important. In my time, I have heard it repeated for random variables, Laurent Schwartz' theory of distributions, ideles and Grothendieck's schemes, to mention only a few. A proper retort might be: "You are right. There is nothing in yesterday's mathematics that could not also be proved without it. Exterior algebra is not meant to prove old facts, it is meant to disclose a new world. Disclosing new worlds is as worthwhile a mathematical enterprise as proving old conjectures." (Indiscrete Thoughts, 48)

(This is not meant as criticism of your question, but just as a caution not to let old battles cause us to lose sight of the new innovations brought to us.)

That said, as a sometime constructivist, I do not find consistency to be a primary philosophical notion. If PA is consistent, so is PA+$\lnot$Con(PA). That is, systems can lie despite being consistent, and so I hesitate to build my foundations on consistency. Instead, I prefer a proof-theoretic justification for logical systems, such as Gentzen's proof of cut-elimination. This guarantees that a system is not merely consistent, but that theorems actually have proper proofs. For a good introduction to these ideas, you can hardly do better than Per Martin-Löf's Siena lectures, "On the Meaning of the Logical Constants and the Justification of the Logical Laws". I should be clear that these are more stringent conditions than consistency, and hence offer no way at all of avoiding the obstacles that the halting/incompleteness theorems pose.

Heyting and Peano arithmetic are equi-consistent: there is no reason to trust constructivism more, if consistency strength is all that you are interested in. It is other logical qualities that make constructivism attractive.

- Thanks, Neel! I had a quick browse of Werner's article while posing this question. But it is the latter part of your answer which is really useful. I was uncertain as to whether restricting one's allowed fragment of logic (to constructive or otherwise) had any bearing on consistency strength. Naively it might have done, but I'm becoming convinced that we are dealing with two separate foundational 'axes' here. – David Roberts Jul 2 2011 at 0:46

I would just like to point out that there is no constructive proof of "Every sequence of rationals in $[0,1]$ has a $1/n$-convergent subsequence," because in the effective topos this is false. Consider a Specker sequence, which has no accumulation point in a strong sense, so it cannot have a convergent subsequence.

Moreover, I heard it claimed that IZF (intuitionistic Zermelo-Fraenkel) does not prove consistency of PA. Can someone confirm this?

- 2 Oh? I had naively thought that IZF surely proved consistency of Heyting Arithmetic by exhibiting a model (same as ZF proves consistency of PA), and that double-negation translation interpreted Peano Arithmetic into Heyting Arithmetic in such a way that the consistency of PA followed quite simply from the consistency of HA. Where did my naive assumptions falter? – Sridhar Ramesh Jul 1 2011 at 21:43
- 1 That's exactly what puzzles me, too. What seems clearer to me is that IZF cannot show the existence of a model of PA, as long as logic is interpreted in the straightforward way. So perhaps I am confusing existence of a model of PA and consistency of PA? Do these differ intuitionistically? Hmm, I should know these things. – Andrej Bauer Jul 1 2011 at 22:24
- 1 Does $Con(HA) \Rightarrow Con(PA)$ hold intuitionistically? – Andrej Bauer Jul 2 2011 at 12:01
- 2 Con(HA) implies Con(PA) intuitionistically (even in ridiculously weak arithmetic like iPV). That IZF cannot prove the existence of a model of arithmetic is quite possible, because (IIRC) it would imply the existence of a completion of arithmetic, contradicting a derivation rule corresponding to the Church thesis (which is admissible in IZF). However, I would be quite surprised if IZF does not prove the consistency of HA (or PA), as it is a $\Pi^0_1$ statement. I'm not sure whether the usual Friedman translation showing $\Pi^0_2$-conservativity of classical over intuitionistic theories ... – Emil Jeřábek Jul 2 2011 at 12:45
- 1 ...applies to ZF vs. IZF, but I'd expect it does apply to one of those theories of second-order arithmetic which also prove Con(PA). – Emil Jeřábek Jul 2 2011 at 12:47

As Neel Krishnaswami says, Heyting arithmetic and Peano arithmetic are equiconsistent. However, this does not necessarily hold at the level of set theory or type theory. So, while consistency strength cannot motivate constructive arithmetic, it can indeed motivate constructive set theory or type theory.

As for type theory, the flavour of CIC you are referring to contains some impredicative features, not often used in practice. My understanding is that as of version 8, unless explicitly told otherwise, Coq by default uses the predicative calculus of inductive constructions (pCIC). This should have vastly lower proof-theoretic strength. As for set theory, CZF is consistent, provably so by transfinite recursion up to the Bachmann-Howard ordinal, while IZF is equiconsistent with ZF, whose proof-theoretic ordinal is unknown, perhaps non-existent. Adding classical logic to either CZF or IZF recovers ZF.

-

I just stumbled upon this old question by chance, and I thought maybe you should have a look at Alexandre Miquel's thesis if you haven't already done so (and if you can read French). Conjecture 9.7.12 on page 329 (331 of PDF) suggests that the Calculus of Constructions with universes should be equiconsistent with Zermelo set theory with universes (assuming I'm not misreading—I'm easily confused between all these theories), which at least gives some lower bound.

-
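As a standalone aside (standard textbook material, not taken from any answer above): the Gödel-Gentzen negative translation behind the $Con(HA) \Rightarrow Con(PA)$ direction discussed in the comments sends each arithmetic formula $A$ to a formula $A^N$ defined by

$$ P^N = \neg\neg P \ \text{($P$ atomic)}, \qquad (A \wedge B)^N = A^N \wedge B^N, \qquad (A \to B)^N = A^N \to B^N, $$
$$ (A \vee B)^N = \neg(\neg A^N \wedge \neg B^N), \qquad (\forall x\, A)^N = \forall x\, A^N, \qquad (\exists x\, A)^N = \neg \forall x\, \neg A^N. $$

One checks that PA proves $A$ exactly when HA proves $A^N$; since $\bot^N$ is intuitionistically equivalent to $\bot$, a proof of a contradiction in PA would yield one in HA, which is why Con(HA) implies Con(PA) over a very weak base theory.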
http://mathhelpforum.com/calculus/142883-period.html
# Thread:

1. ## period

Hey, I'm trying to find the period of: $f(x)=(\cos(\pi\gamma))^{n} \hspace{0.5cm}, \gamma \in \mathbb{R}, n \in \mathbb{N}$

It seems that when $n$ is even then $f$ has period $1$, and when $n$ is odd then $f$ has period $2$. How do I show that analytically?

2. As written, your function is not periodic. It's constant. Care to have another go?

3. ## period

Hey, my mistake. It should say: $f(x)=(\cos(\pi x))^{n} \hspace{0.5cm}, x \in \mathbb{R}, n \in \mathbb{N}$

4. Must you consider more than this? $[\cos(\pi x)]^{n+1} = [\cos(\pi x)]^{n}\cdot \cos(\pi x)$ If you can make conclusions about even values of n, there is not much left for odd values.

5. ## periodicity

Hey, sorry for the inconvenience. What I wanted to do was to arrive at the conclusion analytically rather than just by observation. How do I do that?

6. What's the definition of "period"? Something like: the minimum value of 'h' such that f(x+h) = f(x) for all x. Definitions are always good places to start. They are not always the best places to start writing programs, but for proving results, they are gold.
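A sketch of the analytic argument that the hints in posts 4 and 6 point toward (an aside, not part of the original thread): since $\cos(\pi(x+1)) = -\cos(\pi x)$, we have

$$f(x+1) = \bigl(-\cos(\pi x)\bigr)^{n} = (-1)^{n} f(x).$$

If $h>0$ is any period of $f$, then translation by $h$ must map the zero set of $f$, namely $\tfrac{1}{2}+\mathbb{Z}$, onto itself, which forces $h$ to be a positive integer. For even $n$, $f(x+1)=f(x)$, so the minimal period is $1$. For odd $n$, $f(x+1)=-f(x)\neq f(x)$ wherever $f(x)\neq 0$, so $1$ is not a period, while $f(x+2)=f(x)$ shows that $2$ is; hence the minimal period is $2$.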
http://math.stackexchange.com/questions/24942/universal-binary-operation-and-finite-fields-ring
# Universal binary operation and finite fields (ring)

Take Boolean algebra for instance: the underlying finite field/ring $0, 1, \{AND, OR\}$ is equivalent to $0, 1, \{NAND\}$ or $0, 1, \{NOR\}$, where NAND and NOR are considered universal gates.

Does this property, that AND ('multiplication') and OR ('addition') can be written in terms of a single universal binary operation (e.g. NAND or NOR), hold for every finite field (or finite ring)?

EDIT: I am interested in mathematical structures where Boolean algebra holds (so that I can design a digital circuit). Comments from JDH and jokiri point out that this is a valid question for finite rings at least, and for finite fields in one case (i.e. the $1, 0$ case).

- How do you define AND, OR and NAND for other finite fields? – Aryabhata Mar 4 '11 at 5:32
- 1 Say AND is 'multiplication' and OR is 'addition' ($+, .$). Can one find or define a binary operation which resembles NAND? – Dilawar Mar 4 '11 at 8:06
- 4 Boolean algebras are never fields, except the two element algebra, since only $1$ has a multiplicative inverse. But I find your question interesting for rings. I interpret it as the question: Does every ring admit a single binary operation from which both $+$ and $\cdot$ are expressible? – JDH Mar 4 '11 at 9:33
- 1 The question makes sense for fields also (which of course are rings and so included in the rings case). I was objecting to the characterization of Boolean algebras as examples of fields, which of course is seldom true. – JDH Mar 4 '11 at 11:52
- 2 – JDH Mar 5 '11 at 15:40

## 1 Answer

I'm not sure I get the question right; I understand you are asking whether it is true that you can express any Boolean operation using only one gate. If this is your question, the answer is yes. Take the NAND, for example (represented in Boolean algebra by the Sheffer stroke `|`). It can replace any unary or binary gate.

• We already know that anything can be expressed with AND and NOT.
• If we can express AND and NOT with NAND,
• therefore we can express anything with NAND.

Reminder: NAND can be understood in English as "at most one", which means it's true except if both p and q are true:

````
p q  p|q
-------------
0 0   1
0 1   1
1 0   1
1 1   0
````

Let's prove that NOT (`¬`) can be expressed with NAND (`|`):

````
p  ¬p  p|p
---------------------
0   1   1  (0|0=1)
1   0   0  (1|1=0)
````

NOT can be expressed with NAND: `¬p = p|p`

Let's now prove that AND (`^`) can be expressed with NAND (`|`). `p^q = ¬(p|q)`, and we already know how to express NOT with NAND:

````
p q  p^q  p|q  (p|q)|(p|q)
----------------------------------
0 0   0    1    0  (1|1=0)
0 1   0    1    0  (1|1=0)
1 0   0    1    0  (1|1=0)
1 1   1    0    1  (0|0=1)
````

AND can be expressed with NAND: `p^q = (p|q)|(p|q)`

For your information, the OR gate can be expressed as `(p|p)|(q|q)`; I'm sure you can prove it for yourself.

-
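A quick machine check of the three identities in the answer (a sketch; plain Python booleans stand in for the logic values):

```python
# Verify NOT, AND and OR built from NAND over all four input combinations.
def nand(p, q):
    return not (p and q)

for p in (False, True):
    for q in (False, True):
        assert nand(p, p) == (not p)                        # NOT p   = p|p
        assert nand(nand(p, q), nand(p, q)) == (p and q)    # p AND q = (p|q)|(p|q)
        assert nand(nand(p, p), nand(q, q)) == (p or q)     # p OR q  = (p|p)|(q|q)
print("all NAND identities hold")
```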
http://mathoverflow.net/questions/38165/
## “Why” is every polynomial representation of SL(2) selfdual?

Given a field $K$ of characteristic $0$, it seems to me that every finite-dimensional polynomial representation of $\mathrm{SL}_2\left(K\right)$ is self-dual (i.e., isomorphic to its dual). In fact, every representation of $\mathrm{SL}_2\left(K\right)$ is a direct sum of irreducible representations (since $\mathrm{SL}_2\left(K\right)$ is semisimple), and the irreducible representations are the canonical representations on $K\left[x,y\right]_n$, which are known to be self-dual. But is there a simpler proof without subdividing into irreducibles?

- By the way, I am aware that for compact Lie groups, we have a Haar measure and we get an invariant bilinear form by integrating an arbitrary bilinear form. But there are things I don't like here: (1) This works only over $\mathbb R$, and we would still have to prove that all irreducible representations of $\mathrm{SL}_2\left(\mathbb C\right)$ are already defined over $\mathbb Q$. (2) $\mathrm{SL}_2$ is not compact and we would need Weyl's unitary trick to get it compact. I'm not sure whether it really preserves all representations. (3) I hate analysis and want to see it exterminated. :P – darij grinberg Sep 9 2010 at 12:19
- 1 I'm confused. How could it be possible to have a `canonical' choice for the isomorphism? Even if V is irreducible, this can't be done because (scalar) automorphisms of V act on such choices non-trivially. – t3suji Sep 9 2010 at 12:23
- Uhm, yes. Sorry! I meant a canonical (up to scalar multiplication) choice. – darij grinberg Sep 9 2010 at 12:27
- 2 @darij grinberg: this does not help for reducible representations: again, consider the action of automorphisms on possible choices. – t3suji Sep 9 2010 at 12:31
- 1 Darij, you need to be a lot more clear from the get-go about the category of representations you are interested in. For example, any automorphism $\sigma$ of $K$ induces an automorphism of $SL_2(K)$ viewed as an abstract group; composing it with a representation $\rho$ that occurs in the tensor space gives one that does not, unless $\sigma$ or $\rho$ is trivial. (Borel and Tits proved some amazing results about "abstract" homomorphisms of algebraic groups which may imply that this is the only obstruction in char 0.) Of course, this is impossible in the category of rational representations. – Victor Protsak Sep 9 2010 at 16:01

## 4 Answers

Because its Weyl group contains -1. For split semisimple groups in char 0, taking duals corresponds to acting by -1 on the weight lattice, where irreducible polynomial representations correspond to weights modulo the action of the Weyl group. So if -1 is in the Weyl group (acting on the weight lattice), then any (irreducible) representation is isomorphic to its dual. This includes the groups with Dynkin diagrams A1, Bn, Cn, Dn for n even, E7, E8, F4, G2, but not An for n>1, Dn for n odd and E6.

- This is again a reduction to the irreducibles, but it gives a nice context. Thanks! – darij grinberg Sep 9 2010 at 14:32
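To spell out the weight-lattice criterion in the rank-one case (an editorial aside, not part of the thread): the irreducible representation $K\left[x,y\right]_n$ has weights

$$n,\; n-2,\; \dots,\; -n$$

with respect to the diagonal torus, a set that is stable under $\mu \mapsto -\mu$. Since the dual representation has exactly the negated weights with the same multiplicities, it has the same character, and in characteristic $0$ an irreducible representation is determined by its character (equivalently, by its highest weight); hence $K\left[x,y\right]_n$ is self-dual.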
I'm not sure that you can expect to avoid "subdividing into irreducibles", since the result is not true without semisimplicity. If $K$ has characteristic $p>0$, the $SL_2(K) = SL(V)$-representation on the $p+1$-dimensional space $\operatorname{Sym}^p(V)$ (the $p$th symmetric power of the natural 2-dimensional representation) is not self-dual.

- Okay, this shows that $\mathrm{char}K=0$ is important. But there is still hope for synthetic methods such as the embedding $\mathrm{Sym}^n V\to \otimes^n V$ which require zero characteristic. – darij grinberg Sep 9 2010 at 15:45

Here is a variant of Richard's answer. In the group $SU(2)$ every element is conjugate to its inverse. Hence the characters of a representation and its dual are the same. (Richard's answer is essentially this answer restricted to the torus, which is sufficient.) To alleviate your concerns, smooth complex representations of $SU(2)$ and algebraic complex representations of $SL_2(\mathbb{C})$ are the same thing.

- Oh, the character theory approach. But using character theory to prove representations isomorphic requires knowing that the group is semisimple, right? – darij grinberg Sep 9 2010 at 14:31
- @darij grinberg: for $\chi_V=\chi_W$ to imply $V\cong W$ we don't need the group to be semisimple (in the sense of algebraic groups, i.e. its Lie algebra ..., since $\mathbb R$ is not algebraically closed anyway); all we need is that the (continuous) representation theory of the Lie group $SU(2)$ is semisimple, which comes from the fact that one can integrate w.r.t. the Haar measure on compact Lie groups. – shenghao Mar 10 2011 at 23:53

The question itself and some of the comments seem out of focus to me, so let me add to what Richard and George write the following summary version of an answer. I'd stress that nothing here is really complicated or subtle to prove apart from the basic Cartan-Weyl classification and (in characteristic 0) complete reducibility for finite dimensional representations.

First, the group itself is defined and split over the prime field (here $\mathbb{Q}$), hence over any larger field. Chevalley's theory implies that the representations discussed here are absolutely irreducible over $K$. (For a semisimple group defined but not split over a field, more analysis is needed of representations which require a field extension to become absolutely irreducible.) Anyway, for a connected semisimple group the "rational" and "polynomial" representations are the same, unlike the reductive group GL$(n,K)$. The group also being simply connected in this case, the rational/polynomial representations are essentially those of the Lie algebra and are more easily classified by dominant integral highest weights in that setting. So each irreducible representation or simple module in question has a unique highest weight $\lambda$. The easy textbook criterion for such a module to be self-dual is just that $\lambda = -w_0 \lambda$, where $w_0$ is the longest element of the Weyl group. As Richard Borcherds points out, this is $-1$ just for the simple types listed, including type $A_1$.

So far nothing really depends on characteristic 0. But as George McNinch observes, there are plenty of cases where nonsimple modules in prime characteristic fail to be completely reducible and are typically not self-dual. So you do need to invoke complete reducibility (and non-canonical direct sum decompositions) to dispose of the characteristic 0 question.

P.S. It's certainly possible to treat the rank 1 case here by direct ad hoc methods in characteristic 0, including the needed proof of complete reducibility (using the easily computed Casimir operator).
For irreducible representations, self-duality is a trivial consequence of the fact that these representations are uniquely classified (up to isomorphism) by their dimensions 1, 2, 3, .... But such an ad hoc argument fails to provide much enlightenment. And the general theory allows one to see that the group representations and Lie algebra representations are essentially the same, whether the groups are regarded as Lie groups or algebraic groups (or just as abstract groups). Of course, finite dimensionality is a key point throughout, since the infinite dimensional representation theory involves harder questions. -
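All the answers above ultimately reduce to what happens on the diagonal torus. As a small illustration (my own addition, not part of the thread), here is a SymPy sketch checking that the character of $\mathrm{Sym}^n$ of the natural representation, evaluated on $\mathrm{diag}(t,t^{-1})$, is unchanged under $t\mapsto t^{-1}$; in type $A_1$ this is exactly the $\lambda=-w_0\lambda$ condition quoted in the last answer.

```python
import sympy as sp

t = sp.symbols('t')

def chi_sym(n):
    """Character of Sym^n(V) for SL_2 on the torus element diag(t, 1/t)."""
    return sum(t**(n - 2*k) for k in range(n + 1))

for n in range(6):
    chi = chi_sym(n)                  # t^n + t^(n-2) + ... + t^(-n)
    chi_dual = chi.subs(t, 1/t)       # character of the dual representation
    assert sp.simplify(chi - chi_dual) == 0
print("Sym^n characters are invariant under t -> 1/t for n = 0..5")
```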
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357088804244995, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/68553-permutation-help.html
# Thread: 1. ## Permutation Help! Consider the permutations of the word SUCCESSFUL. a) How many permutations total? b) How many have the 3 vowels adjacent (such as SUUESSFCLC)? c) How many permutations have none of the 3 vowels adjacent? d) How many permutations start with an S and end with an L? e) How many permutations have a consonant as the first letter? 2. Originally Posted by zackgilbey Consider the permutations of the word SUCCESSFUL. a) How many permutations total? $\frac {10!} {(3!)(2!)(2!)}$ Now you do some work. Show us some work. 3. I'm plugging away at this, and I understand the 10 factorial as well as the (2!)(2!), but I'm not positive on how the 3! was determined. Thanks. 4. Originally Posted by zackgilbey I'm plugging away at this, and I understand the 10 factorial as well as the (2!)(2!), but I'm not positive on how the 3! was determined. Thanks. Surely this topic is included in your text material: arrangements with repetitions. The number of ways to rearrange the word "MISSISSIPPI" is $\frac {11!}{(4!)^2(2!)}$. That is due to the fact that there are 11 letters in all, 4 S's, 4 I's, and 2 P's. Consider the word "UNUSUAL". There are seven letters in all. If we add subscripts, $U_1NU_2SU_3AL$, the number of ways to rearrange the word is $7!$. But removing the subscripts makes us divide by $3!$, because that is the number of ways to rearrange $U_1U_2U_3$. 5. Thanks for the help; I've been able to plug right through these except the last one, e). I'm having trouble with how to approach this one as I haven't seen this type of question before... any help regarding this would be excellent. Thanks.
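The multinomial counts in this thread are easy to verify by machine. Here is a small Python sketch (not from the thread) that computes parts (a) and (b) using the same letter-multiplicity argument; the helper name is my own.

```python
from math import factorial, prod
from collections import Counter

def arrangements(word):
    """Distinct arrangements of the letters of `word`:
    n! divided by the factorial of each letter's multiplicity."""
    counts = Counter(word)
    return factorial(len(word)) // prod(factorial(c) for c in counts.values())

# (a) all permutations of SUCCESSFUL: 10!/(3!2!2!)
print(arrangements("SUCCESSFUL"))                      # 151200

# (b) the three vowels adjacent: glue U, U, E into one block "X", arrange the
#     8 remaining items (block + S,C,C,S,S,F,L), then arrange within the block.
print(arrangements("XSCCSSFL") * arrangements("UUE"))  # 3360 * 3 = 10080
```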
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427179098129272, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/11/27/
# The Unapologetic Mathematician ## Topological Groups Now we've said a lot about the category $\mathbf{Top}$ of topological spaces and continuous maps between them. In particular we've seen that it's complete and cocomplete — it has all limits and colimits. But we've still yet to see any good examples of topological spaces. That will change soon. First, though, I want to point out something we can do with these limits: we can define topological groups. Specifically, a topological group is a group object in the category of topological spaces. That is, it's a topological space $G$ along with continuous functions $m:G\times G\rightarrow G$, $e:\{*\}\rightarrow G$, and $i:G\rightarrow G$ that satisfy the usual commutative diagrams. A morphism of topological groups is then just like a homomorphism of groups, but by a continuous function between the underlying topological spaces. Alternately we can think of it as a group to which we've added a topology so that the group operations are continuous. But as we've seen, a topological structure feels a bit floppier than a group structure, so it's not really as easy to think of a "topology object" in a category. So we'll start with $\mathbf{Top}$ and take group objects in there. Now it turns out that every topological group is a uniform space in at least two ways. We can declare the set $E_U=\{(x,y)|xy^{-1}\in U\}$ to be an entourage for any neighborhood $U$ of the identity, along with any subset of $G\times G$ containing such an $E_U$. Since any neighborhood of $e$ contains $e$ itself, each $E_U$ must contain the diagonal $\{(x,x)\}$. The intersection $E_U\cap E_V$ is the entourage $E_{U\cap V}$, and so this collection is closed under intersections. To see that $\bar{E}_U$ is an entourage, we must consider the inversion map. Any neighborhood $N$ of the identity contains an open set $U$ containing the identity. Then the preimage $i^{-1}(U)$ is just the "reflection" of $U$, sending each element of $U$ to its inverse, and by continuity of $i$ it must be open. The reflection of $N$ contains the reflection of $U$, and is thus a neighborhood of the identity. Then $\bar{E}_U=\{(x,y)|yx^{-1}=(xy^{-1})^{-1}\in U\}$ is the same as $E_{i^{-1}(U)}$. Now, why must there be a "half-size" entourage? We'll need to construct a half-size neighborhood of the identity. That is, a neighborhood $V$ so that the product of any two elements of $V$ lands in the neighborhood $U$. Then $(x,y)$ and $(y,z)$ in $E_V$ means that $xy^{-1}$ and $yz^{-1}$ are in $V$, and thus their product $xz^{-1}$ is in $U$, so $(x,z)\in E_U$. To construct this neighborhood $V$ let's start by assuming $U$ is an open neighborhood by passing to an open subset of our neighborhood if necessary. Then its preimage $m^{-1}(U)$ is open in $G\times G$ by the continuity of $m$, and it contains the point $(e,e)$. By the way we built the product topology, $m^{-1}(U)$ must then contain a basic open set $V_1\times V_2$ around $(e,e)$, where $V_1$ and $V_2$ are open neighborhoods of the identity. The intersection $V=V_1\cap V_2$ is an open neighborhood of the identity, and the product of any two of its elements lands in $U$, so this is our half-size neighborhood. The uniform structure we have constructed is called the right uniformity on $G$ because if we take any element $a\in G$ the function from $G$ to itself defined by right multiplication by $a$ — $x\mapsto m(x,a)$ — is uniformly continuous.
Indeed, right multiplication sends an entourage $E_U=\{(x,y)|xy^{-1}\in U\}$ to itself, since the pair $(xa,ya)$ satisfies $xa(ya)^{-1}=xaa^{-1}y^{-1}=xy^{-1}\in U$. Left multiplication, on the other hand, sends a pair $(x,y)$ in $E_U$ to $(ax,ay)$, for which we have $ax(ay)^{-1}=axy^{-1}a^{-1}\in aUa^{-1}$. Thus for a given entourage $E_U$ we can pick the entourage $E_{a^{-1}Ua}$, which left multiplication by $a$ carries into $E_U$. So left multiplication is also uniformly continuous, but not quite as easily. We could go through the same procedure to define the left uniformity, which again swaps the roles of left and right multiplication. Note that the left and right uniformities need not be the same collection of entourages, but they define the same topology. Still, this doesn't tell us how to get our hands on any topological groups to begin with, so here's a way to do just that: start with an ordered group. That is, a set with the structures of both a group and a partial order so that if $a\leq b$ then $ga\leq gb$ and $ag\leq bg$ for every $g$. Using this translation invariance we can determine the order just by knowing which elements lie above the identity, for then $a\leq b$ if and only if $e\leq a^{-1}b$. The elements $x$ with $e\leq x$ form what we call the positive cone $G^+$. We can now use this to define a topology by declaring the positive cone to be closed. Then we'd like our translations to be homeomorphisms, so for each $a$ the set of $x$ with $a\leq x$ must also be closed. Similarly we want inversion to be a homeomorphism, and since it reverses the order we find that for each $a$ the set of $x$ with $x\leq a$ is closed. And then we can use the complements of all these as a subbase to generate a topology. This topology will in fact be uniform by everything we've done above. And, finally, one specific example. The field $\mathbb{Q}$ of rational numbers is an ordered group if we forget the multiplication. And thus we get a uniform topology on it, generated by the subbase of half-infinite sets. Specifically, for each rational number $a$ the set $(a,\infty)$ of all $x\in\mathbb{Q}$ with $a<x$ and the set $(-\infty,a)$ of all $x\in\mathbb{Q}$ with $x<a$ are declared open, and they generate the topology. A neighborhood of $0\in\mathbb{Q}$ will be any subset which contains one of the form $(-a,a)$ for some positive rational $a$. Since the group is abelian, the left and the right uniformities coincide. For each positive rational number $a$ we have an entourage $E_a=\{(x,y)|-a<x-y<a\}$. That is, a pair of rational numbers is in $E_a$ if they differ by less than $a$. Posted by John Armstrong | Group theory, Topology
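Here is a small numerical illustration (my own, not part of the post) of the final example: on the rationals, $E_a=\{(x,y)|-a<x-y<a\}$, and the half-size entourage $E_{a/2}$ composes with itself into $E_a$, which is the half-size argument carried out above for a general topological group.

```python
from fractions import Fraction as F
from itertools import product

def in_E(a, x, y):
    """Membership in the entourage E_a = {(x, y) : -a < x - y < a}."""
    return abs(x - y) < a

a = F(1, 10)
grid = [F(n, 100) for n in range(-20, 21)]   # a small grid of rationals

for x, y, z in product(grid, repeat=3):
    if in_E(a / 2, x, y) and in_E(a / 2, y, z):
        assert in_E(a, x, z)   # two half-size steps compose into E_a
print("E_{a/2} o E_{a/2} is contained in E_a on this grid")
```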
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 92, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441933035850525, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38772/induced-current-in-parallel-wires/38775
# Induced current in parallel wires Consider two parallel wires of finite radius. When a current is applied to one of the wires for a short period of time, what is the current induced in the other wire? Applying Maxwell's equations, it seems that there is a change in magnetic field perpendicular to the second wire, and as a result, the induced current has a nontrivial distribution which roughly averages to zero. Is this correct? Intuitively, I had instead expected a simpler result similar to the case of two coils of wire placed side by side. - ## 1 Answer I think there is a non-zero induced current. 1) During the rise (and fall) of the current pulse in wire 1, a changing, azimuthal magnetic field is generated (calculable by Ampere's law). 2) Per Maxwell, that changing B-field induces an electric field parallel to the wires, which: 3) causes a current to flow in wire 2 (assuming a wire resistivity $\rho$ and applying Ohm's law). Thus, if one applies a short pulse of current to wire 1, you'd see two even shorter pulses in wire 2, one positive and one negative, aligned with the rise and fall of the wire 1 current. There is some variation in E with respect to the radial dimension, which would complicate an exact solution, but the variation is small (and it doesn't result in cancellation). -
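To put rough numbers on steps 1-3 of the answer, here is a back-of-the-envelope sketch (mine, with made-up parameters, not from the answer). In the quasi-static limit the vector potential of a long straight wire is $A_z \approx (\mu_0 I/2\pi)\ln(r_{\mathrm{ref}}/r)$, so the field induced parallel to wire 2 is $E_z=-\partial A_z/\partial t$, and Ohm's law $J=E_z/\rho$ gives the current density it drives. Here $r_{\mathrm{ref}}$ is an arbitrary reference radius fixed by the actual return path, and wire 2's own inductance is ignored, so treat the output only as an estimate.

```python
import math

mu0   = 4e-7 * math.pi   # vacuum permeability (H/m)
dIdt  = 1.0e6            # assumed ramp in wire 1: 1 A per microsecond (A/s)
d     = 0.01             # assumed separation of the wires (m)
r_ref = 1.0              # assumed reference radius set by the return path (m)
rho   = 1.7e-8           # resistivity of copper (ohm*m)

# E field parallel to wire 2 during the ramp, from E_z = -dA_z/dt
E_z = -(mu0 / (2 * math.pi)) * dIdt * math.log(r_ref / d)
# Current density this field drives in wire 2, by Ohm's law
J = E_z / rho

print(f"induced field ~ {E_z:.2f} V/m, current density ~ {J:.2e} A/m^2")
```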
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460596442222595, "perplexity_flag": "middle"}
http://www.birs.ca/events/2010/5-day-workshops/10w5031
# Quasisymmetric Functions (10w5031) Arriving Sunday, November 14 and departing Friday November 19, 2010 ## Organizers Louis Billera (Cornell University) Sara Billey (University of Washington) Richard Stanley (Massachusetts Institute of Technology) ## Objectives By holding this five-day workshop on the theme of "Quasisymmetric Functions" at the Banff International Research Station, we hope to bring together the various researchers using these functions in different ways. The objective of this workshop is to highlight the important role that quasisymmetric functions play in many different research topics, to foster new developments and enhance the common understanding of the applications. Quasisymmetric functions are power series of bounded degree in variables $x_1,x_2,\dots$ (say) which are shift invariant in the sense that $x_1^{a_1}\cdots x_k^{a_k}$ and $x_{i_1}^{a_1}\cdots x_{i_k}^{a_k}$ have the same coefficient for any strictly increasing sequence of positive integers $i_1 < i_2 < \cdots < i_k$. (A small computational illustration of this definition appears after the bibliography below.) For over a century, symmetric functions have played a major role in mathematics with applications in algebraic topology, combinatorics, representation theory, and geometry. Quasisymmetric functions are extensions of symmetric functions that are becoming of comparable importance. They were first used by Stanley around 1970 in the theory of $P$-partitions, though he did not consider them per se. In the 1980's Ira Gessel defined quasisymmetric functions, developed many of their basic properties, and applied them to permutation enumeration. Quasisymmetric functions have developed into a powerful tool in many areas of mathematics today. Their first algebraic interpretations were as Frobenius characteristics of the representations of the 0-Hecke algebra and as the dual to Solomon's descent algebra of the symmetric group. Their applications have since exploded in many directions. We have outlined some of these topics below. We feel the timing of this workshop comes at a critical point in the development of quasisymmetric functions. There are lots of interesting applications in current research yet there have not been specific conferences devoted to this subject to date. Research Directions - Partially ordered sets. Quasisymmetric functions appear in the enumeration of chains in finite graded posets [Ehr, rs:fs], in Eulerian posets [BMSV, BMSV2, BHV] and in Bruhat intervals [BBr]. Here a single homogeneous quasisymmetric function is used to encode the entire flag $f$-vector of a graded poset. Changing invariants in this case to the flag $h$-vector or the $\mathbf{cd}$-index amounts to changing bases in the algebra of quasisymmetric functions. Extending the definitions, one can get a non-homogeneous version of this quasisymmetric function for any Bruhat interval in a Coxeter group, allowing the expression of its Kazhdan-Lusztig polynomial in terms of its coefficients. In all of these cases, the classical nonnegativity theorems (and conjectures) can be viewed in a new light. - Stanley symmetric functions. Every permutation can be expressed as a product of the adjacent transpositions $s_{i}=t_{i,i+1}$. If $w=s_{a_{1}}s_{a_{2}}\cdots s_{a_p}$ and $p$ is minimal over all such expressions for $w$, then $a_{1}a_{2}\dotsb a_{p}$ is a reduced word for $w$. Given a permutation $w$, let $R(w)$ be the set of all reduced words for $w$.
Let $X$ be the infinite alphabet $\{x_{1},x_{2},\ldots\}$, and let
$$G_{w}(X)= \sum_{\mathbf{a}=a_{1}\dotsb a_{p} \in R(w)} \;\; \sum_{i_{1}\dotsb i_{p} \in C(\mathbf{a})} x_{i_{1}}x_{i_{2}}\cdots x_{i_{p}},$$
where $C(\mathbf{a})$ is the set of all integer sequences $1\leq i_{1}\leq i_{2} \leq \dotsb \leq i_{p}$ such that $i_{j} < i_{j+1}$ whenever $a_{j} < a_{j+1}$. Later Edelman and Greene [EG] showed each $G_{w}$ expands into a positive sum of Schur functions: $G_{w} = \sum a_{\lambda} s_{\lambda}$ with $a_{\lambda} \in \mathbb{N}$. As a corollary of this result we know $\#R(w)= \sum a_{\lambda} f^{\lambda}$ where $f^{\lambda}$ is the number of standard tableaux of shape $\lambda$, which is easily computed using the Frame-Robinson-Thrall hook length formula. The definition of the Stanley symmetric functions displayed above inspired formulas for Schubert polynomials of various types [BJS, FS, BH, Lam-2008]. Schubert polynomials are used to compute cup products in the cohomology ring of flag manifolds. For more results inspired by this definition, see the subsection on strong Schur functions below. - Noncommutative symmetric functions. The central result connecting noncommutative symmetric functions with quasisymmetric functions is that the algebra of quasisymmetric functions is dual to the algebra of noncommutative symmetric functions [GKL, §6]. This result has led to a vast number of applications and generalizations that connect quasisymmetric functions with other algebraic objects. - Combinatorics of riffle shuffles. Given a probability distribution $x_i$ on a totally ordered set, there is a natural way to define for each $n\geq 1$ a probability distribution on the symmetric group $\mathfrak{S}_n$, called the $QS$-distribution (corresponding to $x_i$) [rs:riffle]. Several previously studied probability distributions on $\mathfrak{S}_n$ turn out to be special cases of the $QS$-distribution. If $w\in\mathfrak{S}_n$ is a random permutation with respect to the $QS$-distribution, then the probability that $w$ satisfies many standard properties is a quasisymmetric function in the $x_i$'s which often in fact is a symmetric function. Hence the machinery of symmetric and quasisymmetric functions can be used to investigate the behavior of $w$ as a random permutation, such as its descent set and its shape under the RSK algorithm. - Combinatorial Hopf algebras. These are Hopf algebras equipped with a multiplicative linear functional (character) [Ag, ABS]. A principal example is the Hopf algebra of quasisymmetric functions, with character the indicator function of monomial symmetric functions corresponding to trivial compositions into one (or no) parts. A main result is that the quasisymmetric functions are terminal in the category of combinatorial Hopf algebras, giving an explanation of the large number of quasisymmetric generating functions one finds in many areas of combinatorics. One example of this is a matroid quasisymmetric function [BJR] that gives a valuation on decompositions of the matroid basis polytope. - Macdonald polynomials. Macdonald polynomials are a generalization of Schur functions that have many remarkable connections to Hilbert schemes of points in the plane, Cherednik algebras [Haiman-2006], affine Hecke algebras [H-Gr], Catalan numbers [Haglund], and representation theory [Garsia-Haiman].
Macdonald originally showed the existence of these polynomials as the basis of symmetric functions over $\mathbb{Q}(q,t)$ satisfying certain orthogonality conditions with respect to a given inner product. This definition proved that these "polynomials" are symmetric functions but resulted in a very inefficient means of calculation. Recently, Haiman, Haglund, and Loehr [HHL] gave the following beautiful formula for expanding Macdonald polynomials into the fundamental basis $\{L_{\sigma}\}$ of quasisymmetric functions [ECII] with coefficients in $\mathbb{N}[q,t]$ determined by generalizations of the inversion statistic and major index statistic for permutations:
$$\tilde{H}_{\mu}(X; q,t) = \sum_w q^{\mathrm{inv}(w)} t^{\mathrm{maj}(w)} L_{\mathrm{Des}(w^{-1})},$$
summed over all bijective fillings $w$ of the Ferrers diagram for $\mu$. This formula for Macdonald polynomials is now taken as the definition by some researchers because it is more efficient for computations. In particular, Assaf [Assaf] has used this definition to give a combinatorial proof that the Macdonald polynomials expand into a sum of Schur functions with coefficients in $\mathbb{N}[q,t]$. One important open problem in this field is to determine if Macdonald polynomials also expand positively in this way into a sum of $k$-Schur functions defined by Lapointe, Lascoux, and Morse [Lapointe-Lascoux-Morse-2003]. Proving this conjecture would establish an interesting link between Macdonald polynomials, the Hilbert scheme, and affine Grassmannians as detailed further in the next subsection. - Strong Schur functions and affine Grassmannians. Let $W$ be a finite irreducible Weyl group associated to a simple connected compact Lie group $G$, and let $\widetilde{W}$ be its associated affine Weyl group. In analogy with the Grassmann manifolds in classical type $A$, the quotient $\widetilde{W}/W$ is the indexing set for the Schubert varieties in the affine Grassmannian $\mathcal{L}_G$. Much of the geometry and topology for the affine Grassmannians can be studied from the combinatorics of the minimal length coset representatives for $\widetilde{W}/W$ and vice versa. A beautiful example of this phenomenon can be found in the work of Lam, Lapointe, Morse and Shimozono [LLMS]. They have identified a family of symmetric functions they call strong Schur functions which determine a Schubert basis for the cohomology ring of $\mathcal{L}_G$. The strong Schur functions are related to the $k$-Schur functions [Lapointe-Lascoux-Morse-2003] by setting $t=1$. The LLMS formula for strong Schur functions can compactly be written as a positive sum of fundamental basis elements for quasisymmetric functions. The sum in this case is indexed by labeled sequences of reflections. Furthermore, there is a conjectured statistic on labeled sequences of reflections which would allow them to recover the $k$-Schur functions from the data for the strong Schur functions in the fundamental quasisymmetric function basis [LLMS]. Bibliography
[Ag] M. Aguiar, Infinitesimal Hopf algebras and the $\mathbf{cd}$-index of polytopes, Discrete Comput. Geometry 27 (2002), 3-28.
[ABS] M. Aguiar, N. Bergeron, and F. Sottile, Combinatorial Hopf algebras and generalized Dehn-Sommerville relations, Compositio Math. 142 (2006), 1-30.
[Assaf] S. Assaf, The Schur expansion of Macdonald polynomials, preprint, 2007.
[BMSV] N. Bergeron, S. Mykytiuk, F. Sottile and S. van Willigenburg, Non-commutative Pieri operators on posets, J. Comb. Theory Ser. A 91 (2000), 84-110.
[BMSV2] N. Bergeron, S. Mykytiuk, F. Sottile and S. van Willigenburg, Shifted quasi-symmetric functions and the Hopf algebra of peak functions, Discrete Math. 256 (2002), 57-66.
[BBr] L. J. Billera and F. Brenti, Quasisymmetric functions and Kazhdan-Lusztig polynomials, arXiv:0710.3965, October 2007.
[BHV] L. J. Billera, S. K. Hsiao, and S. van Willigenburg, Peak quasisymmetric functions and Eulerian enumeration, Adv. Math. 176 (2003), no. 2, 248-276.
[BJR] L. J. Billera, N. Jia and V. Reiner, A quasisymmetric function for matroids, Europ. J. Combinatorics (to appear), arXiv:math/0606646.
[BH] S. Billey and M. Haiman, Schubert polynomials for the classical groups, J. Amer. Math. Soc. 8 (1995), 443-482.
[BJS] S. Billey, W. Jockusch, and R. Stanley, Some combinatorial properties of Schubert polynomials, J. Alg. Comb. 2 (1993), 345-374.
[EG] P. Edelman and C. Greene, Balanced tableaux, Adv. Math. 63 (1987), 42-99.
[Ehr] R. Ehrenborg, On posets and Hopf algebras, Adv. in Math. 119 (1996), 1-25.
[FS] S. Fomin and R. P. Stanley, Schubert polynomials and the nilCoxeter algebra, Adv. Math. 103 (1994), 196-207.
[GKL] I. M. Gelfand, D. Krob, A. Lascoux, B. Leclerc, V. Retakh and J.-Y. Thibon, Noncommutative symmetric functions, Adv. in Math. 112 (1995), 218-348.
[Garsia-Haiman] A. M. Garsia and M. Haiman, A graded representation model for Macdonald's polynomials, Proc. Nat. Acad. Sci. U.S.A. 90 (1993), 3607-3610.
[Ges] I. M. Gessel, Multipartite $P$-partitions and inner products of Schur functions, in Combinatorics and Algebra, C. Greene, ed., Contemporary Mathematics, vol. 34, Amer. Math. Soc., Providence, 1984.
[Haglund] J. Haglund, The $q$,$t$-Catalan numbers and the space of diagonal harmonics, vol. 41 of University Lecture Series, American Mathematical Society, Providence, RI, 2008. With an appendix on the combinatorics of Macdonald polynomials.
[HHL] J. Haglund, M. Haiman, and N. Loehr, A combinatorial formula for nonsymmetric Macdonald polynomials, Amer. J. Math. 130 (2008), 359-383.
[Haiman-2006] M. Haiman, Cherednik algebras, Macdonald polynomials and combinatorics, in International Congress of Mathematicians, Vol. III, Eur. Math. Soc., Zurich, 2006, 843-872.
[H-Gr] M. Haiman and I. Grojnowski, Affine Hecke algebras and positivity of LLT and Macdonald polynomials, preprint available at http://math.berkeley.edu/~mhaiman/, 2007.
[LLMS] T. Lam, L. Lapointe, J. Morse, and M. Shimozono, Affine insertion and Pieri rules for the affine Grassmannian, preprint, 2006.
[Lam-2008] T. Lam, Schubert polynomials for the affine Grassmannian, J. Amer. Math. Soc. 21 (2008), 259-281 (electronic).
[Lapointe-Lascoux-Morse-2003] L. Lapointe, A. Lascoux, and J. Morse, Tableau atoms and a new Macdonald positivity conjecture, Duke Math. J. 116 (2003), 103-146.
[Luoto] K. Luoto, A matroid-friendly basis for quasisymmetric functions, Journal of Combinatorial Theory, Series A, to appear.
[MR] C. Malvenuto and C. Reutenauer, Duality between quasi-symmetric functions and the Solomon descent algebra, Journal of Algebra 177 (1995), 967-982.
[S3] R. Stanley, On the number of reduced decompositions of elements of Coxeter groups, Europ. J. Combinatorics 5 (1984), 359-372.
[rs:fs] R. Stanley, Flag-symmetric and locally rank-symmetric partially ordered sets, Electronic J. Combinatorics 3, R6 (1996), 22 pp.; reprinted in The Foata Festschrift (J. Desarmenien, A. Kerber, and V. Strehl, eds.), Imprimerie Louis-Jean, Gap, 1996, 165-186.
[ECII] R. Stanley, Enumerative Combinatorics, Vol. 2, Cambridge Studies in Advanced Mathematics, Vol. 62, Cambridge University Press, Cambridge, UK, 1999.
[rs:riffle] R. Stanley, A generalized riffle shuffle and quasisymmetric functions, Ann. Combinatorics 5 (2001), 479-491.
[Stem] J. Stembridge, Enriched $P$-partitions, Trans. Amer. Math. Soc. 349 (1997), 763-788.
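As promised above, here is a small SymPy sketch (not part of the workshop description) of the defining shift-invariance condition: the monomial quasisymmetric function $M_{(2,1)}$, truncated to three variables, gives every monomial $x_{i_1}^2 x_{i_2}$ with $i_1<i_2$ the same coefficient, yet it is not symmetric.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# M_(2,1) truncated to three variables: sum of x_i^2 * x_j over i < j
M21 = x1**2*x2 + x1**2*x3 + x2**2*x3
poly = sp.Poly(M21, x1, x2, x3)

# shift invariance: each increasing-index monomial has the same coefficient
assert all(poly.coeff_monomial(m) == 1 for m in (x1**2*x2, x1**2*x3, x2**2*x3))
# but the function is not symmetric: x1 * x2^2 does not appear at all
assert poly.coeff_monomial(x1*x2**2) == 0
print("M_(2,1) is quasisymmetric but not symmetric in three variables")
```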
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7908003330230713, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/16354/can-you-encode-if-then-else-in-arithmetic?answertab=oldest
# Can you encode if-then-else in arithmetic Is there a general way to encode if-then-else/ITE in arithmetic, i.e. using the usual mathematical operators? Example: let $f(x) = x^2$ if $x < 10^{30}$ and $\log x$ otherwise, written for short as $f(x) = ITE(x<10^{30},x^2,\log x)$. - 1 what are "the usual mathematical operators"? That's probably different from "in arithmetic". – Alon Amit Jan 4 '11 at 20:40 ## 2 Answers If you allow the use of indicator or characteristic functions, then the answer is yes. For your example, if you let $A = \{x \in \mathbb{R}| x < 10^{30}\}$, then $f(x) = x^2 \cdot \chi_{A}(x) + \log x \cdot \chi_{A^{C}}(x)$ where $\chi_A$ and $\chi_{A^{C}}$ are the characteristic functions of $A$ and the complement of $A$, respectively. - 1 And you can use the $\text{sgn}$ function for the indicator, in case of sets like $x \lt 10^{30}$. – Aryabhata Jan 4 '11 at 20:55 This seems related to: Representing IF ... THEN ... ELSE ... in math notation Although, since none of the answers there do a particularly good job at answering the question posted here, I'll just add a little bit more: On top of what Jason wrote, you may also write: • Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be the function defined by $f(x)=x^2$ for $x<10^{30}$ and $f(x)=\log x$ otherwise. • $f(x) = \begin{cases} x^2 & \text{if } x<10^{30} \\ \log x & \text{otherwise.} \end{cases}$ These have the advantage that you don't need to introduce an auxiliary function (such as a characteristic function). The latter is typeset in LaTeX using: `$f(x) = \begin{cases} x^2 & \text{if } x<10^{30} \\ \log x & \text{otherwise.} \end{cases}$` -
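Here is a small computational sketch (mine, not from the answers) combining the accepted answer with Aryabhata's comment: the indicator of $A=\{x<10^{30}\}$ written arithmetically via the sign function. Note that at the boundary point $x=10^{30}$ the sign is $0$, so this particular encoding averages the two branches there.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# chi_A written with sgn, as in the comment: chi_A = (1 - sgn(x - 10^30)) / 2
chi_A = (1 - sp.sign(x - 10**30)) / 2
f = x**2 * chi_A + sp.log(x) * (1 - chi_A)

print(f.subs(x, 5))                    # 25, the x^2 branch
print(f.subs(x, sp.Integer(10)**31))   # log(10**31), the log branch
```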
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9065113663673401, "perplexity_flag": "middle"}
http://arcsecond.wordpress.com/tag/mechanics/
# Arcsecond ## Posts Tagged ‘mechanics’ ### Unwinding: Physics of a spool of string June 1, 2012 It’s been a long day. Let’s unwind with a physics problem. This problem was on the pre-entrance exam I took before arriving at Caltech for my freshman year. I’ve seen it from time to time since, and here I hope to find an intuitive solution. You have a spool of thread, already partially unwound. You pull on the thread. What happens? Here it is in side view. The dashed circle is the inside of the spool and the green line is the thread. Take a minute to see if you can tell how it works. Does the spool go right or left? The usual method is to work it out with torques. The forces you must account for are the force of tension from the string and the force of friction from the table. Torques are actually a pretty easy way to solve this problem, especially if you calculate the torque around the point of contact between the spool and the table (since in that case friction has no moment arm and exerts no torque). This method is direct, but it’s useful to find another viewpoint if you can. Let’s first examine a different case where the string is pulled up rather than sideways. In this case, even if the first situation was unclear, you probably know that the spool will roll off to the left. To see why, let’s imagine that the thread isn’t being pulled by your hand, but by a weight connected to a pulley. I put a red dot on the string to help visualize its motion. The physics idea is simply that the weight must fall, so the red dot must come closer to the pulley. Which way can the spool roll so the red dot moves upward? When the spool rolls (we assume without slipping), the point at the very bottom, where it touches the table, is stationary. The spool’s motion can be described, at least instantaneously, as rotation around that contact point. Googling, I found a nice description of this by Sunil Kumar Singh at Connexions. This image summarizes the point: If the spool rolls to the right, as above, the point where the string leaves the spool (near point B) will have a somewhat downward motion. This will pull the red dot down and raise the weight. That’s the opposite of what we want, so what really happens is that the spool rolls to the left, the string rises, and the weight falls. With this scenario wrapped up (or unwrapped, I suppose), let’s return to the horizontal string segment. Again, the weight must fall and so the red dot must go towards the pulley. If we check out Mr. Singh’s graphic, we’re now concerned with the motion of a point somewhere near the bottom-middle, between points A and C. As the spool rolls to the right, this point also moves to the right. This is indeed what happens as the weight falls. Notice that the red point actually moves more slowly than the spool as a whole. This means the spool catches up to the string as we move along – the spool winds itself up. If the inside of the spool is 3/4 as large as the outside (like it is in my picture), the spool rolls 4 times as fast as the string moves, and so for every centimeter the weight falls, the spool rolls four centimeters. Here’s a short video demonstration: Tags:mechanics, physics, problems and solutions ### Leakier, Slower, and No Rain December 7, 2010 A while ago, I asked a standard freshman physics problem about a cart that has rain fall into it, then opens a hole and rain leaks out.
Then I gave an answer saying that as rain falls vertically into an open cart running on a frictionless track, the cart slows down, but as rain leaks out it shows no change in speed. That was mostly correct, given the picture I drew of the hole: Water leaks out a hole in the bottom of the cart as it slides to the right. The key is that the hole is in the center. Yesterday, Martin Gales posed a question on Physics.StackExchange pointing out that this makes a difference, because if we imagine a stationary cart with a hole all the way to the left, then as it drains, the water moves left, and so the cart will have to move right a little to conserve momentum. But then once the cart starts moving the water leaking out of the cart is moving… I spent three hours last night trying to solve this seemingly-trivial problem. (My answer is at the original question.) It’s simple enough to pose to first-term freshmen, and yet I went through dozens of slightly-wrong ideas and calculations before hitting on the surprising answer. Further, once I knew what happens, it didn’t seem very complicated any more, leaving me to wonder what the hell is wrong with my overclocked simian brain. The feeling you get when thinking about such a problem is an asymmetric oscillation of healthy frustration and premature joy unparalleled in other pursuits. I want to be mind-fucked like this every night. Tags:mechanics, physics, physics problems, problems ### Three-Way October 21, 2010 I’ve been told that when you and your sweetheart get accepted to different grad schools, you’ve encountered the “Two-Body Problem”. It’s an inapt analogy, because the two-body problem can be solved elegantly. What’s really difficult is the three-body problem. If we have three massive bodies interacting through Newton’s gravitational law, with arbitrary initial positions and velocities, it turns out that exact analytic solutions are extremely difficult to find. That’s not to say there is no solution – the massive bodies have no trouble finding it. But if we want to predict their motion we’ll need to do some numerical integration. However, we might wonder if there are some particularly simple or intriguing solutions, perhaps in very symmetrical situations. There are. Quite a few of them are cited in the Scholarpedia article. Here are some: I’m not sure how these solutions were discovered. But if we consider the problem a moment it’s clear the center of mass cannot accelerate, and that energy and angular momentum must be preserved. An especially simple way to do this is to have the angular momentum and energy of each individual mass be constant by making them all rotate around the center of mass at a fixed frequency. To see if this works, measure the positions of the three masses by vectors $\vec{r_1}, \vec{r_2}, \vec{r_3}$ from the axis of rotation, which is through the center of mass and perpendicular to the plane of the bodies. Let the masses of the bodies be $m_1, m_2, m_3$. Then the equation for the center of mass states $\vec{r_1}m_1 + \vec{r_2}m_2 + \vec{r_3}m_3 = 0$. If we look at a reference frame comoving with the center of mass and corotating with the masses, the masses will be stationary, so there must be no force on them. Let’s take $m_1$ as an example. The centrifugal force on it is $F_c = \omega^2 m_1 \vec{r_1}$. 
The gravitational force on it is $F_g = G m_1 \left(m_2\frac{\vec{r_2} - \vec{r_1}}{\left|\vec{r_1} - \vec{r_2}\right|^3} + m_3\frac{\vec{r_3} - \vec{r_1}}{\left|\vec{r_3} - \vec{r_1}\right|^3}\right)$ If those forces are going to add to zero to give $m_1$ zero acceleration, they had better at least point along the same direction. It’s not clear that they do. The centrifugal force points only in the direction of $\vec{r_1}$, while the gravitational force has all three position vectors in there. So we need the amounts of $\vec{r_2}$ and $\vec{r_3}$ to be such that they add up to point towards $\vec{r_1}$. From the equation for the center of mass we get $-m_1\vec{r_1} = m_2\vec{r_2} + m_3\vec{r_3}$, whence we know the proportions in which the $\vec{r_2}$ and $\vec{r_3}$ must appear in the gravitational force. We conclude $m_2\vec{r_2} + m_3\vec{r_3} \propto \left(\frac{m_2\vec{r_2}}{\left|\vec{r_1} - \vec{r_2}\right|^3} + \frac{m_3 \vec{r_3}}{\left|\vec{r_3} - \vec{r_1}\right|^3}\right)$. This will work perfectly iff the denominators on the right hand side of that equation are equal, meaning mass $m_1$ must be equidistant from masses $m_2$ and $m_3$. Repeating the argument while writing out forces on $m_2$ this time, the masses must be in an equilateral triangle. We still don’t know if such a solution exists, only that it’s possible to get the centrifugal force to point opposite the gravitational force for arbitrary masses, as long as we put them in an equilateral triangle. Let’s continue, setting the centrifugal force equal to minus the gravitational force for $m_1$ and see what the resulting rotation rate is. Also let’s let the sides of the triangle be length $l$: $F_c = \omega^2 m_1 \vec{r_1} = -F_g = -\frac{G m_1}{l^3} \left( m_2(\vec{r_2} - \vec{r_1}) + m_3 (\vec{r_3} - \vec{r_1}) \right)$ Applying the center of mass equation $-m_1\vec{r_1} = m_2\vec{r_2} + m_3\vec{r_3}$ one more time and setting the total mass equal to $M$, this simplifies to $\omega^2 = \frac{G M}{l^3}$. A clean result that is the same for all three masses, meaning these orbits indeed solve the equations of motion. Tags:gravity, mechanics, three-body problem ### Shaking The Right Way April 30, 2010 The other day I was working at a job – a real one, like grownups have. I used to think this job was boring. But things aren’t boring; they’re just what they are. People can be bored, but that’s their business. Don’t blame the thing! I work at a place where they make widgets. One of the pieces for the widgets is a little, flat, plastic rod. There’s an assembly line with a robot that puts the rods in the partially-assembled widgets. But before that, the rods have to be fed into the robot one by one. The rods come from the rod factory in a big pile, though, and the robot can’t reach in and pick one out. We need a way to take the rods from a jumbled mess and feed them one-at-a-time into the robot. To do that we dump the plastic rods in a bowl about a meter across. There’s a ramp winding up along the edge of it. We vibrate the bowl, and the rods march up the ramp and into the rod-eating widget-making robot. The bowl looks a little like this: There was a library nearby when I was a kid that had a ramp like this in it. I always wanted to go to that library. I never read any of its books. This is supposed to be a cut-away view of the side of the bowl. The ramp is on the inside of the bowl, angled up slightly to keep the rods from falling off. When we turn it on, the bowl starts vibrating fast enough that it’s just a blur (60 Hz seems plausible).
And the rods walk their way up the ramp at a very even pace. The rods will go up the ramp whether there are one thousand or just one of them in the bowl (although if there are lots of them some will get pushed off the ramp, fall to the bottom, and start over). This is really crazy! How do the rods go up the ramp, not down? Things are supposed to go down, generally speaking. It’s pretty freaky to watch a single rod climb right up this long winding ramp as the bowl vibrates. The bowl doesn’t spin. It doesn’t visibly tilt. It works for different sizes of rods and even for a penny (I think. I haven’t found an opportunity to slip one in yet, but I’m pretty sure it’ll work for a penny. Not a marble, though.) I asked the technician who works with the bowl, and he said he wasn’t sure, but that the rods don’t always go up. The people who make the bowl can make some adjustments to it, and then the rods will go up slower, then stop, then march their way back down as the workers keep adjusting. I couldn’t figure it out, so I asked my boss, who’s mechanically-minded, and we worked out a theory. In short, it goes like this: the bowl can vibrate in two different ways. It can bob up and down, and it can rotate back and forth. It can move a couple of millimeters in each direction. The bowl makes the rods go up the ramp by doing both of these at the same time, and we can adjust the speed of the rods up the bowl’s ramp by adjusting the phase lag between the two oscillation modes. Draw a little dot on the side of the bowl. Since a few millimeters (the amplitude of the bowl’s vibrations) is small compared to a meter (the bowl’s diameter), for any given point vibrating the bowl by rotation is the same as shaking back and forth horizontally in the direction tangent to the edge of the bowl. So we’ve got a sine wave motion horizontally. Close-up of one part of the ramp. The red arrows show the motion of the red dot, although any other point on the ramp would move the same way. If the bowl instead bobs up and down, the motion of the dot is like this: Same as last time, but the bowl bobs up and down. Now make the bowl rotate and bob simultaneously. There are different ways we could do this. Assuming we give both modes the same frequency (easier to engineer, I’d think), then we can describe the way these modes are combined with a single parameter – the phase lag between them. When the phase lag is zero the bowl moves all the way forward at the same time it’s all the way up, and all the way back at the same time it’s all the way down. The red dot traces out a diagonal line, like this: Moving forward/backward and up/down in sync. If the phase lag is 180 degrees, the diagonal line switches directions, like this: But if the phase lag is 90 degrees, the dot instead traces out a circle. Make it -90 degrees and the circle goes the other direction. Combining back/forth and up/down just so, we get a circle. We can also make the circle go the other way, if we want. This is just right for moving a rod up a ramp. Remember that when something moves in a circle, its acceleration points towards the center of the circle. So imagine the ramp shaking this way when it’s at the bottom of the circle. The ramp is accelerating up, pushing into the rod. That means there’s a big normal force on the rod, and a big normal force means more friction. To take advantage of this high friction, the ramp should move forward, pushing the rod ahead in space. The CCW circle moves forward at the bottom of its loop (green arrow).
The ramp is accelerating up (blue arrow), jamming into the rod and carrying it forward along with it. Now the red dot has reached the high point of the circle. It’s accelerating down. That means it’s falling away from beneath the rod, so the normal force goes down. If the acceleration is high enough, the ramp may even pull away from the bottom of the rod completely (I’m not sure whether it actually does this). Now the friction is low, and the ramp can slide backwards, leaving the rod where it is. The net result is that the rod is further along the ramp at the end of the cycle than at the beginning. Same circle, but now the ramp is at the top of its motion. The rod will get left behind as the ramp pulls away from underneath. This really works, even without a fancy industrial bowl. I did it just now. I found the longest hardback book in my room (The Feynman lectures) and an appropriate rod substitute (a Rubik’s Cube), and I easily sent the cube uphill on the tilted book by rotating circularly one direction with my hands, and brought it back down by rotating circularly the other direction. The book was still pretty short, but I have a longer flat, mobile surface – a white board. I’m a bit too clumsy to shake it in a circle with my hands, though. But I have a bicycle… I duct taped my white board to the pedal of my bike in an attempt to recreate the circular motion I posited for the ramp. I propped the other end of the white board up against a table. This didn’t work well. The board was tilting during the cycle because the far end, leaning on the table, wasn’t going up and down the same way as the near end. I tried propping the table up at an angle in hopes of keeping the white board close to flat, but that didn’t help much. Time to recruit the neighbors. It was 11:20 PM so my neighbors, who are college students, were awake and drunk enough to agree to anything that sounded weird or stupid. With a human at the other end of the white board, I could keep the board pretty much flat, or pretty close to tilted at a constant angle as it went around. The results were that as long as I kept the white board flat, my Rubik’s Cube would go forward and backward the way I predicted, but I couldn’t get it to climb any significant hill. I tried switching out the cube for a high-friction rock-climbing shoe, but that still didn’t make much progress. I conclude the bowl at my job relies on high-speed, high-frequency vibrations, the opposite of my bicycle. Finally, in order to be a good scientist I needed a more objective test of my theory. I was able to get my conveyor belt to work roughly, but I already knew the result I wanted. Maybe I was subconsciously tipping the book/board the direction that I wanted things to go? To test this, I called a friend and asked him to replicate my experiment with a book and a penny, without telling him what I expected to happen. He reported the same result! We can indeed make things move forward or backward, even up slight inclines, just by shaking the right way. Tags:boredom, friction, mechanics ### New Problem: Leaky, Rainy, and Slow February 22, 2010 Here’s a classic physics problem I asked my MCAT students yesterday. They unanimously chose the wrong answer. A cart runs along a frictionless track on a rainy day. The rain falls straight down, and some of it lands in the open cart. As the cart accumulates rain, does it slow down, speed up, or keep going at the same rate? (Do not worry about the cart running into raindrops ahead of it.
We imagine that the raindrops fall in such a way that they either land in the cart or don’t hit it. Also, there’s no wind resistance.) Rain falls straight down into the cart, which is coasting to the right. Next, the rain stops, but the cart gets a leak. Water pours out a hole in the bottom of the cart. Does the cart get faster, slower, or stay the same speed? How does its final speed, when all the water has leaked out, compare to its original speed before the rain? (Again, ignore friction.) Water leaks out a hole in the bottom of the cart as it slides to the right. The answer is now up here. Tags:cart, mechanics, physics, physics problems, water Posted in problems and solutions ### Conservation of Momentum February 20, 2009 Here’s something that’s in textbooks, but they tend to leave out lots of little bits and pieces, the way I used to when I made sandwiches for Arby’s one summer. Not that you’ll get the full story here, either. But you’ll get a more satisfying hunk of disgusting, gray, dampish meat clumps, and a little piece of metaphorical lettuce, too. When two particles interact, Newton’s third law postulates $F_{12} = - F_{21}$, where $F_{12}$ means, “the force particle ’1′ exerts on particle ’2′.” This is useless knowledge unless you have some sort of interpretation of force. Force is defined by the second law $F = ma$. Someone once tried to tell me that Newton’s second law is not just a definition of force, but has some deeper meaning. I think they were lying because they wanted to seduce me. (No luck there, Grandpa!) So Newton’s second law defines force, and is meaningless without some rules about what force should do. For example, if you say that a particle with absolutely nothing around to interact with must have no force on it, you’ve said something about force and now Newton’s second law can step in. In this case it says $0 = ma$ so that a free particle does not accelerate. (That’s Newton’s first law. However, there are philosophical problems with such a conclusion. If there is nothing around for the particle to interact with, then how could you tell whether or not it’s accelerating?) The third law, a rule about force, is lame without a definition of force. The second law, a definition of force, is lame without any rules. They were made for each other, like rabbits and lawn mowers (but with less of those annoying screaming sounds). By combining Newton’s second and third laws for two interacting particles, we get $m_1a_1 = F_{21} = -F_{12} = -m_2a_2$ By the transitive law $m_1a_1 = -m_2a_2$ or $m_1a_1 + m_2a_2 = 0$ and assuming that mass is constant $\int_{t_a}^{t_b}dt \left(m_1a_1 + m_2a_2\right) = \int_{t_a}^{t_b} dt*0 = 0$ for arbitrary times $t_a$ and $t_b$. Using the fundamental theorem of calculus and the definition $a = \frac{dv}{dt}$ yields $\left(m_1v_1(t_b) + m_2v_2(t_b)\right) - \left(m_1v_1(t_a) + m_2v_2(t_a)\right) = 0$ again for arbitrary times $t_a, t_b$. What we’ve discovered is that if you measure the quantity $m_1v_1 + m_2v_2$ at any two times, you will always get the same answer. That quantity is called “momentum”, and the fact that it doesn’t change is called “conservation of momentum.” We haven’t proved it to be true. Science doesn’t prove anything to be true. What we’ve proved is that it follows from certain assumptions. If we make some measurements and find that the “law” of momentum conservation doesn’t hold, there are a few possibilities that I can think of: 1. We made a mistake with the measurements.
Our apparatus is broken, or we did something dumb like converting units wrong, etc. 2. Newton’s laws are wrong. They do not accurately represent the interaction of particles. 3. We were not doing an experiment with exactly two particles. (That is the only situation for which we did the proof. Maybe the theorem failed because there was some third particle around that we didn’t see, or maybe the objects in our experiment were not particles, but instead more complicated composite things that are not bound by Newton’s laws.) 4. The mass of the particles is not constant. (Remember that this was an assumption used in the proof). Maybe you can think of other explanations. I can’t at the moment. But it turns out that these explanations can account for a lot of situations. Item (1) comes up frequently enough – it’s just a fact that people make misteaks. Explanation (2) is sometimes correct as well; Newton’s laws aren’t true. Special relativity modifies them. General relativity pretty much scraps them (er, don’t quote me on that). In quantum mechanics, momentum is important, but no longer has an interpretation as mass*velocity. In fact it (mathematically) no longer has any “interpretation” – instead it is its own primary quantity, equally as fundamental to the theory as the concept of “position”. It even steals “position”‘s claim to the letter ‘p’. Momentum is nobody’s bitch. Complication (3), that we aren’t using two isolated particles, arises in practice as well. There are obvious examples, such as everything. When I drop my spoon, it starts gaining momentum until it hits the floor, when it loses momentum. Then I pick it up and lick it clean, and its momentum bounces all around as I lick more and more violently. All this occurs because a spoon is not a system of two particles. There are more interesting (but less tasty) examples where the “not-two-particles” explanation manifests. Take two charged nonrelativistic, non-quantum particles and let them interact. They won’t conserve momentum. The reason is charged bodies generate electromagnetic fields, and our assumption that the only things around are the charged bodies fails. The electromagnetic field can carry its own momentum, although technically in order to break the proof all it would have to do is exist. In another example, Wolfgang Pauli was thinking about another case in which momentum is not conserved – beta decay. He decided options (1), (2), and (4) were not for him, and instead guessed that beta decay must involve some previously-unseen stuff. That stuff is the neutrino. Finally, explanation (4), that the mass is variable, is not something that occurs in practice to my knowledge, but it could. Of course, if a meteor shooting through space hurls off some of its rock-junk when it get near the sun and heats up, then the meteor’s mass decreases. But that doesn’t count because it’s not two particles, and also momentum actually is conserved in that situation if you consider the momentum of the space junk, the meteor, and the sun altogether. What I mean is that I’m not aware of any evidence that fundamental particles can have variable mass. What if there are three particles? Can we prove that momentum is still conserved if we define momentum to be $p = m_1v_1 + m_2v_2 + m_3v_3?$ No. We can’t because we could only prove anything by getting some knowledge about force from Newton’s third law. But Newton’s third law is only telling us the story for two lone particles. When there’s a third, all bets are off. 
However, there is another assumption that we usually take along with Newton’s laws, often implicitly. This is that forces add linearly. Imagine conducting an experiment with particles 1 and 2, and no particle 3 around. Measure the force on particle 1. Now conduct a new experiment where particle 1 does the same thing it did before, but particle 2 is absent, and particle 3 is around doing whatever it wants. Again, measure the force on particle 1. We assume that if we conduct a third experiment with particles 1, 2, and 3 all together, the force on particle 1 will be the sum of the forces in the first two experiments. With this law that forces add linearly, we can prove momentum conservation for three particles. And if we assume forces continue to add in the simple manner for any number of particles, then momentum conservation also holds for any number of particles. tomorrow: energy Tags:conservation, conservation law, mechanics, momentum, Newton's Laws, Newton, second law, Third Law Posted in physics ### A Brief Illustration of One Forms and Tensors in Mechanics December 26, 2008 note: Don’t worry too much about dimensions and constants. I drop them like I would a newborn child. other note: I originally identified the eigenvalues incorrectly. Changing it didn’t alter the math significantly. 12/27/08 In this post I will examine the mechanics of a classical particle in the 2-D potential $U(x,y) = \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2$ This is a harmonic oscillator potential, but $\omega$ is different in the $x$ and $y$ directions. A simple harmonic oscillator in one dimension is canonically the mass on a spring, or a grandfather clock pendulum. These correspond to the potential $U(x) = \frac{k x^2}{2}$. The fastest way to get the equation of motion is Newton’s second law, which yields $\begin{array}{rcl} -\frac{d U}{d x} & = & m\frac{d^2 x}{dt^2} = -k x \\ x(t) & = & A\sin\left(\omega t + \phi\right)\end{array}$ with $\omega$ some function of $k$. The easiest way to make the hop to two dimensions is to keep the apparatus the same, but give the mass another degree of freedom. For the mass on a spring, this might correspond to sitting the mass on a table and tacking down one side of the spring, while leaving it free to spin as well as expand/contract. You have to think of this as a spring with no rest length, and collapsible, so that it can pass through itself. Combine the properties of a slinky and a rubber band and you’re pretty much there. The 2-D pendulum is just a pendulum free to swing to and fro in whatever direction it wants. This scheme would set $\begin{array}{rcl} U(x,y) &=& \left( \frac{ k x^2}{2}\right)+\left(\frac{k y^2}{2}\right) \\ {} & = & \left(\frac{k (x^2 + y^2)}{2}\right) \\ {} & = & \left(\frac{k r^2}{2}\right) \end{array}$ with $r$ the displacement distance from equilibrium. Then the $x$ and $y$ motions decouple, and each one is simple harmonic with the same frequency. The mass traces out an ellipse in space, with an eccentricity and orientation depending on the initial conditions, and with the same period as before. It is a bit more interesting to break the symmetry between the two orthogonal directions and treat them distinctly, as you would people with different ethnicities.
In the example of the spring, we could set up a situation like this:

(figure: displacement and force on a mass with springs on either side)

Before examining the dynamics of this configuration, let's first hash out the language we'll be using by describing the kinematics. The displacement of the mass from equilibrium is a two-dimensional vector, and the set of all possible displacements constitutes a vector space. It can be easier to manipulate this vector when we set up a coordinate system, so we'll set up standard Cartesian coordinates with x the left/right direction and y the up/down direction.

Now we are ready to define a (1,1) tensor that characterizes this system. You might be worried because a (1,1) tensor takes a vector and a one form as its arguments, but we haven't yet identified any one forms! You'll see in a moment that we can do quite a bit without knowing exactly what the one forms are, but don't worry, we'll return and identify them at the end. Remember that a (0,1) tensor is a vector. That means that a (1,1) tensor can also be considered a function that takes in a vector and gives a (0,1) tensor – another vector. (1,1) tensors are the set of linear operators on the vector space.

On to the tensor. The spring constant of the system is a (1,1) tensor, $K$, which takes in the displacement of the mass and returns the force on the mass. Notice that the force does not necessarily point in the same direction as the displacement. If you push the mass straight up/down, the springs will pull it back towards the center, as they will with a pure left/right displacement. But suppose you push the mass at a 45 degree angle. Then the springs push left/right much more than they pull up/down.

Let's quantify this by working out the tensor. We need only work out its action on unit displacements in the x and y directions, and then linearity will take care of the rest. If we displace the mass a small amount to the right, we pull the left spring out and push the right spring in. Each contributes a restoring force of k times the displacement, so together they give 2k times the displacement: $K(\hat{\mathbf{x}},\_\_) = -2k\hat{\mathbf{x}}$. I have used the underscore in the second argument of $K$ to indicate this is still an open slot. $K$ must still take on a one form before it can spit out a scalar. Right now, it's a vector – a (0,1) tensor.

If we push the mass up a small amount in the y direction, we don't make the springs any longer to first order, but the forces they exert on the mass are no longer balanced. Each spring creates a restoring force that is the equilibrium tension in the spring times the sine of the angle the springs make with the horizontal. $K(\hat{\mathbf{y}},\_\_) = -2\frac{\textrm{T}_0}{L}\hat{\mathbf{y}}$ with $\textrm{T}_0$ the equilibrium tension of the springs and $L$ their length. If the springs had zero rest length, this would evaluate to the same thing as in the x direction, but let's just pretend these springs don't have that.

This defines the tensor spring constant, and allows us to write the force on the mass by the equation $\mathbf{F} = K(\mathbf{x},\_\_)$. We can again use Newton's second law to find the equations of motion. $K(\mathbf{x},\_\_) = m \frac{d^2 \mathbf{x}}{d t^2}$. Note that this is a vector equation, not a coordinate equation.

Before we solve this equation, we'll return to the discussion of kinematics left incomplete earlier. We have a vector space of displacements, so we know there exists a dual space of one forms. This is the space of linear, scalar-valued functions of the displacement. It also has dimension two.
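A minimal numerical illustration of the claim that the force need not be parallel to the displacement (the values of $k$, $T_0$ and $L$ below are made up; the diagonal matrix form in the x-y basis is the one stated later in the post):

```python
# In the x-y basis the spring-constant tensor K acts as the diagonal matrix
# diag(-2k, -2*T0/L), so a 45-degree displacement produces a force that is
# mostly horizontal -- not antiparallel to the displacement.  All numbers are
# hypothetical.
import numpy as np

k, T0, L = 10.0, 1.0, 0.5
K = np.diag([-2 * k, -2 * T0 / L])

d = np.array([1.0, 1.0]) / np.sqrt(2)       # unit displacement at 45 degrees
F = K @ d                                   # force on the mass

print(F)                                    # approx [-14.14, -2.83]
print(np.degrees(np.arctan2(F[1], F[0])))   # approx -169 degrees, not -135
```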
We can find a basis for this space by finding two independent one-forms. Call them $p_x$ and $p_y$, and define them by $p_x(\hat{\mathbf{x}}) = 1$, $p_x(\hat{\mathbf{y}}) = 0$, $p_y(\hat{\mathbf{x}}) = 0$, $p_y(\hat{\mathbf{y}}) = 1$. A general one form can be expressed in this basis as follows:

$\begin{array}{rcl} p(\mathbf{x}) &=& p(x\hat{\mathbf{x}} + y\hat{\mathbf{y}}) \\ {} &=& x*p(\hat{\mathbf{x}}) + y*p(\hat{\mathbf{y}}) \\ {} &=& x*\alpha + y*\beta \\ {} &=& \alpha*p_x(\mathbf{x}) + \beta*p_y(\mathbf{x}) \\ p &=& \alpha p_x + \beta p_y \end{array}$

You may have noticed that the basis one forms have a simple interpretation as projection operators. From the above discussion, we see that $\left[\alpha p_x + \beta p_y\right](\mathbf{x}) = \alpha*x + \beta*y$, so the one forms are acting in a way similar to the dot product in elementary physics. However, since I haven't defined the dot product in this little series of posts, I won't pursue that notion.

Now we can return to the equations of motion, and put each of the basis one forms into the vector equations. $\mathbf{F}(p_x) = K(\mathbf{x},p_x) = -2 k x$, $\mathbf{F}(p_y) = K(\mathbf{x},p_y) = -2 \frac{T_0}{L} y$. You might verify that these forces are consistent with the potential I quoted at the beginning of the post. You probably won't, though. Nobody ever does shit like that. Anyway, the solutions are of the form $\mathbf{x}(t) = A\sin(\omega_1 t)\hat{\mathbf{x}} + B\sin\left(\omega_2 t+\phi\right)\hat{\mathbf{y}}$ where $\omega_1$ is a function of $k$ and $\omega_2$ is a function of $T_0$ and $L$. In general they are different, and their ratio will determine the orbit of the mass in 2-D space. The motion of the mass traces out a Lissajous figure, one example of which is this:

(figure: a Lissajous figure with frequency ratio 1.8)

It might not seem like the tensor accomplished anything, since we could easily have found the same system of equations without knowing anything about tensors. The payoff here is theoretical. The equation $\mathbf{F} = K(\mathbf{x},\_\_)$ involves no coordinates. It is simply a statement about vectors and linear functions of vectors. The coordinate system was an easy way to do some calculations, but in writing the tensor equation we've mathematically depicted the system geometrically instead of algebraically. In the x-y basis, $K$ can be written as a diagonal matrix with eigenvalues $-2k$, $-2\frac{T_0}{L}$. If we were to change coordinate systems, for example rotating x and y by some angle, a coordinate-based approach would have to recalculate all the forces. On the other hand, our tensor equation would still hold, and we could evaluate it just as before – by plugging in the one forms corresponding to the projection operators onto the axis of the springs and onto the axis orthogonal to it. The only work would be in finding the new coordinate representation of the one forms.

Tags:force, Lissajous, Lissajous figures, mass, mechanics, one form, oscillator, pendulum, projection operator, spring, spring constant, tensor, vector
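For anyone who wants to see the Lissajous orbit appear, here is a short sketch of the quoted solution (an addition, not from the post; $A$, $B$, $\phi$ and the frequencies are arbitrary, with the ratio 1.8 chosen to match the figure mentioned above):

```python
# Trace the quoted solution x(t) = A sin(w1 t) x-hat + B sin(w2 t + phi) y-hat.
# All parameter values are arbitrary; w2/w1 = 1.8 matches the figure the post
# refers to.
import numpy as np
import matplotlib.pyplot as plt

A, B, phi = 1.0, 1.0, 0.0
w1, w2 = 1.0, 1.8

t = np.linspace(0.0, 40.0, 4000)
x = A * np.sin(w1 * t)
y = B * np.sin(w2 * t + phi)

plt.plot(x, y)
plt.gca().set_aspect("equal")
plt.title("Lissajous figure, frequency ratio 1.8")
plt.show()
```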
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 83, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396830797195435, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/218211/taylor-series-of-a-modulus-argument
# Taylor series of a modulus argument

What is the definition of a Taylor series of the function $F(|\vec a -\vec x|)$ about the point $\vec a$ in $\vec x$? - ## 3 Answers As filmor notes, the Taylor series isn't well-defined if you consider this as a function of the full vector $\vec x$. If you don't mind a singularity at $\vec a$ (or if the odd derivatives of $F$ at $0$ vanish), you can use the one-dimensional Taylor series for $F$ to write $$\sum_{n=0}^\infty\frac{F^{(n)}(0)}{n!}|\vec a-\vec x|^n\;.$$ - There is no proper definition for this (if you take $F(\lvert \vec a - \vec x\rvert)$ as a function of $\vec x$), as $\lvert\vec v\rvert$ is not differentiable at $\vec v = 0$ (in this case: $\vec x = \vec a$). - This definition is the same as the Taylor series of the function $F_2(\vec x)=F(|\vec x|)$ (after shifting so that $\vec a$ is the origin). I'm not sure if you are familiar with Taylor series of functions of several variables. In the general case the formula is kind of long, so I'll just give an example for a bivariate function $f(x,y)$ about the point $(a,b)$ $$f(x,y)=f(a,b) + \frac{\partial f}{\partial x} \bigg|_{(a,b)} (x-a) + \frac{\partial f}{\partial y}\bigg|_{(a,b)} (y-b) + \frac{1}{2!}\left[\frac{\partial^2 f}{\partial x^2}\bigg|_{(a,b)} (x-a)^2 + 2\frac{\partial^2 f}{\partial x \partial y}\bigg|_{(a,b)} (x-a)(y-b)+\frac{\partial^2 f}{\partial y^2}\bigg|_{(a,b)} (y-b)^2\right] + \dots$$ But there is still one thing, you see: the derivative of the modulus ($|\cdot|$) does not exist around the zero vector. So if you could give us a little more detail on why you are trying to solve a problem like that, then maybe you will get a good answer. -
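A concrete instance of the first answer's caveat about odd derivatives (an example added here, not from the thread): take $F=\cos$, whose odd-order derivatives vanish at $0$. Then

$$\cos|\vec a-\vec x| \;=\; 1-\frac{|\vec a-\vec x|^2}{2!}+\frac{|\vec a-\vec x|^4}{4!}-\cdots,$$

and since only even powers of $|\vec a-\vec x|$ appear and $|\vec a-\vec x|^2=\sum_i(a_i-x_i)^2$ is a polynomial in the components of $\vec x$, this particular series is a genuine, singularity-free multivariate Taylor expansion about $\vec a$.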
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483122229576111, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-math-topics/205409-finding-limit-g-x-analysis-algebra-limits-print.html
# Finding a limit at g(x). Analysis. Algebra of Limits.

• October 15th 2012, 05:04 PM
JesseMoeller

Finding a limit at g(x). Analysis. Algebra of Limits.

Define $g: (0,1)\to\mathbb{R}$ by $g(x)=\frac{\sqrt{1+x}-1}{x}$. Prove that g has a limit at 0 and find it.

In my course we are given only the definition of a limit, namely that $g$ has a limit $L$ at $x_0$ iff for every $\epsilon >0$ there exists a $\delta>0$ such that $|g(x)-L|<\epsilon$ for all $0<|x-x_0|<\delta$. We are also given the algebra of limits for Addition, Multiplication, and Division (given obvious constraints). I cannot seem to find a clever way to write g(x) in the problem as a composition of two functions with either of these three operations.

Any help would be appreciated.
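One standard route, sketched here for comparison (not part of the original thread): multiply by the conjugate so that only the algebra of limits is needed. For $x\in(0,1)$,

$$g(x)=\frac{\sqrt{1+x}-1}{x}\cdot\frac{\sqrt{1+x}+1}{\sqrt{1+x}+1}=\frac{(1+x)-1}{x\left(\sqrt{1+x}+1\right)}=\frac{1}{\sqrt{1+x}+1}.$$

Since $1\le\sqrt{1+x}\le 1+x$ on $(0,1)$, we have $|\sqrt{1+x}-1|\le x$, so $\sqrt{1+x}\to 1$ directly from the definition (take $\delta=\epsilon$); the algebra of limits (sum, then quotient with nonzero denominator) then gives $g(x)\to\frac{1}{2}$ as $x\to 0$.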
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462506175041199, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/quantum-hall-effect+conformal-field-theory
# Tagged Questions 1answer 109 views ### What is the operator for the edge current of a fracional quantum Hall state? The edge of a fractional quantum Hall state is a chiral conformal field theory. In the Laughlin case it corresponds to the chiral boson, S = \frac{1}{4\pi} \int dt dx ... 2answers 258 views ### Edge theory of FQHE - Unable to produce Green's function from anticommutation relations and equation of motion? I'm studying the edge theory of the fractional quantum Hall effect (FQHE) and I've stumbled on a peculiar contradiction concerning the bosonization procedure which I am unable to resolve. Help! In ... 1answer 380 views ### How do you obtain the commutation relations at non-equal times (for the edge of a fractional quantum Hall state)? The edge of a fractional quantum Hall state is an example of a chiral Luttinger liquid. Take, for the sake of simplicity, the edge of the Laughlin state. The Hamiltonian is: H = ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8293958306312561, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/20604/divergence-of-frac-vecrr2
# Divergence of $\frac{\vec{r}}{r^2}$ In David J. Griffiths 'Introduction to ELECTRODYNAMICS' , as one the excercize he gave the following problem. Sketch the vector function: $$\vec{v} = \frac{\vec{r}}{r^2}$$ , and compute its divergence. The answer may surprise you... Can you explain it? I found the divergence of this function as $$\frac{1}{x^2+y^2+z^2}$$ Please tell me what is the surprising thing here/ - 1 convert your expression for $r$ into Cartesian coordinates, and then compute the divergence in these coordinates. You definitely have the wrong answer. – Jerry Schirmer Feb 6 '12 at 14:38 Sorry, the numerator 'r' is a vector. I do not know how to put a hat over the 'r' here in this website. vecor v = vector r/ r^2. – Inquisitive Feb 6 '12 at 14:43 Yes, but still, your answer should be half of what you've written. – Manishearth♦ Feb 6 '12 at 14:46 2 Wait, r hat or r vector? R hat means that it is a unit vector, whereas r vector means that it is a full r vector. $\vec{r}=x\hat{i}+y\hat{j}+z\hat{k}$, $\hat{r}=\frac{x\hat{i}+y\hat{j}+z\hat{k}}{\sqrt{x^2+y^2+z^2}}$. Mouseover the above two formula and right-click, show source to get an idea of how to make vectors in TeX. – Manishearth♦ Feb 6 '12 at 14:50 ya, now I made it right. The denominator is the equation of the sphere , is that the surprising thing? or anything else important here? – Inquisitive Feb 6 '12 at 14:50 show 2 more comments ## 3 Answers Pretty sure the question is about $\frac{\hat{r}}{r^2}$, i.e. the electric field around a point charge. Naively the divergence is zero, but properly taking into account the singularity at the origin gives a delta-distribution. - You may wish to check if the divergence is finite everywhere. - Thanks, I got it. – Inquisitive Feb 6 '12 at 16:33 But what is the physical meaning of infinite divergence? – Inquisitive Feb 6 '12 at 16:38 1 @sree: Infinite charge density (if the vector field is the electric field). – akhmeteli Feb 6 '12 at 16:46 The electron is a single point here. Countably many charge in an infinitely small point -> infinite charge density. – queueoverflow Feb 6 '12 at 19:36 I have the same book, so I take it you are referring to Problem 1.16, which wants to find the divergence of $\frac{\hat{r}}{r^2}$. If you look at the front of the book. There is an equation chart, following spherical coordinates, you get $\nabla\cdot\vec{v} = \frac{1}{r^2}\frac{\mathrm{d}}{\mathrm{d}r}\left (r^2 v_r\right) + \text{ extra terms}$. Since the function $\vec{v}$ here has no $v_\theta$ and $v_\phi$ terms the extra terms are zero. Hence $\nabla\cdot\vec{v} = \frac{1}{r^2}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^2 \frac{1}{r^2}\right) = \frac{1}{r^2}\frac{\mathrm{d}}{\mathrm{d}r}\left(1\right) = 0$. At least this is how I interpret the surprising element of the question. -
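Both readings discussed above can be checked symbolically away from the origin; a small sketch (an addition, not from the thread — note it cannot see the delta-function contribution at $r=0$ that the first answer mentions):

```python
# Symbolic check of both readings, valid only away from the origin: the
# delta-function piece at r = 0 is invisible to this computation.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

def divergence(v):
    return sum(sp.diff(comp, var) for comp, var in zip(v, (x, y, z)))

v_vec = [x / r**2, y / r**2, z / r**2]   # (r vector) / r^2
v_hat = [x / r**3, y / r**3, z / r**3]   # r-hat / r^2

print(sp.simplify(divergence(v_vec)))    # 1/(x**2 + y**2 + z**2), as the OP found
print(sp.simplify(divergence(v_hat)))    # 0 for r != 0
```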
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178948402404785, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/136494-image-after-transformation.html
# Thread: 1. ## Image after Transformation Alright, so I've got to "Find the image of the set $S$ after the given transformation." $S$ is the square bounded by the lines $u=0, u=1, v=0, v=1; x=v, y=u(1+v^2)$ How do I do this? I've found the Jacobian to be $-u$ but I don't even know how to use that for this problem. Can someone please start me in the correct direction. 2. I think I've figured it out. Basically I took all the existing boundry curves/lines, and converted them. The Jacobian wasn't used. But I ended up getting a figure that looks like the area underneath $y=x^2+1$ from 0 to 1. Does that sound correct? 3. Originally Posted by Rhode963 Alright, so I've got to "Find the image of the set $S$ after the given transformation." $S$ is the square bounded by the lines $u=0, u=1, v=0, v=1; x=v, y=u(1+v^2)$ How do I do this? I've found the Jacobian to be $-u$ but I don't even know how to use that for this problem. Can someone please start me in the correct direction. Since x= v, v= 0 and v= 1 become x= 0 and x= 1. The other sides are harder. Since $y= u(1+ v^2)$ and x= v, we can rewrite that as $y= u(1+ x^2)$ so that $u= \frac{y}{1+ x^2}$. Now u= 0 becomes $\frac{y}{1+ x^2}= 0$ or $y= 0$. Finally, $u= 1$ becomes $\frac{y}{1+ x^2}= 1$ or $y= 1+ x^2$, a parabola. That's exactly what you have!
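A quick numerical cross-check of the image found above (an addition, not from the thread): push a grid of points from the $uv$-square through the map and compare with the region bounded by $x=0$, $x=1$, $y=0$ and $y=1+x^2$.

```python
# Map a grid of (u, v) points in the unit square through x = v, y = u*(1 + v^2)
# and confirm every image point satisfies 0 <= x <= 1 and 0 <= y <= 1 + x^2.
import numpy as np

u, v = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
x, y = v, u * (1 + v**2)

inside = (x >= 0) & (x <= 1) & (y >= 0) & (y <= 1 + x**2)
print(inside.all())                 # True
print(y.max(), (1 + x**2).max())    # 2.0 2.0 -- the parabolic top edge is attained
```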
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9695128798484802, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/Trigonometry/Similar,Congruent,Isosceles,_and_Equilateral
# Trigonometry/Similar,Congruent,Isosceles, and Equilateral

At this point we're assuming:

• You already know what a triangle is
• You already know how to measure angles

We assume that you know that two triangles of different sizes can have exactly the same sized angles as each other. One reason we assume you already know how to measure angles is that, actually, it is rather difficult to be mathematically precise about what we mean. What does it mean for one angle to be one third or twice another? A mathematically precise approach that we'll come to much later on, in book 2, is to divide angles repeatedly in half, and then build up larger angles from the smaller ones. For now we don't need to worry about this; it's enough to know how to measure angles, and to know some angles like $90^\circ$ and $45^\circ$.

## About Exercises

The exercises in this book are here for your benefit. They are here to help you check for yourself that you are understanding what the book is saying.

## Terminology

You need to know the meaning of these words for the next part.

(figures: an isosceles triangle and an equilateral triangle)

### Terminology: Isosceles

• An isosceles triangle is a triangle with at least two sides that are the same length.

The picture of the isosceles triangle shows a small mark on the two sides that are the same. When a triangle has two sides that are the same length, it is symmetric. Since it is symmetric, in our isosceles triangle the right and left sides are the same; it also has two angles that are the same.

Exercise: A quadrilateral with four different angles...

If you have a quadrilateral (that is, a polygon with four sides) and two of them are the same length it does not have to be a symmetric quadrilateral. In a four sided polygon all the angles can be different even though two sides have the same length. Your task in this exercise is to draw a quadrilateral which has two sides the same length but that has all the angles different.

Testing whether a triangle is isosceles

A triangle must be an isosceles triangle if either of the following is true:

Some two sides are the same length.
Some two angles are the same.

If one is true, the other is automatically true too.

### Terminology: Equilateral

• An equilateral triangle is a triangle with all three sides the same length.

An equilateral triangle happens also to be an isosceles triangle. If all three sides are the same length then we can certainly find two sides that are the same length! If that seems strange, well yes, it is different to other terminology we have for shapes. 'Triangle', 'square', 'pentagon', 'hexagon' are all different things. But an isosceles triangle can be an equilateral triangle, and it will be if all three sides are the same length.

Testing whether a triangle is equilateral

A triangle must be an equilateral triangle if either of the following is true:

All three sides are the same length.
All three angles are the same.

If one is true, the other is automatically true too.

### Terminology: Similar and Congruent

(figure: similar triangles)

• Two triangles are congruent if they have the same shape and size. The diagram above does not show congruent triangles, because although the two triangles have the same angles the triangles are different sizes. The triangles in that picture are similar triangles.
• If you can move, enlarge evenly or shrink evenly or reflect one triangle exactly onto another, or match one with another by doing a series of such steps, then they are similar.
• If you can move or reflect one triangle exactly onto another, or match one with another by doing a series of such steps, then they are congruent.

Testing whether two triangles are congruent

When two triangles are congruent, the angles of one triangle are the same as the angles of the other and the lengths of the sides of one triangle are the same as the lengths of the sides of the other. When we want to prove that two triangles are congruent, we don't need to prove all six correspondences. That's because, for example, once you know the lengths of the three sides of a triangle the angles are completely determined. There is only one triangle that can be made with sides of length 6, 7 and 8 cm.

In the following we use 'S' as an abbreviation for Side, and A as an abbreviation for Angle, so (Side-Side-Side) is written as (SSS). There are four tests that can be used to prove that two triangles are congruent:

• The sides of one are all equal in length to the sides of the other (SSS).
• Two of the sides of one and the angle between those two sides equal the corresponding sides and angle of the other (SAS).
• Two of the angles of one and the side between those two angles equal the corresponding angles and side of the other (ASA).
• Both of the triangles are right angled, and the hypotenuse and another side of one are the same length as the hypotenuse and another side of the other.

There is only one kind of equilateral triangle, though they come in different sizes, so all equilateral triangles are similar to each other. There are many different kinds of isosceles triangle. They aren't necessarily similar to each other.

## Exercises

Find two triangles on this page that are similar to each other but that are different sizes. You're not allowed to use the ones in the diagram of similar triangles.

Find two isosceles triangles on this page that are not congruent with each other.

True or false: "All equilateral triangles are congruent to each other". Justify your answer (how would you convince someone that your answer is correct).

Testing for similar triangles. Look at the box about testing for congruent triangles. Is it easy to modify what it says so that it can be used for similar triangles rather than congruent triangles?

## Examples

Some examples of triangles are shown below, also showing the size of the angles.

(figures: Equilateral Triangle, 45-45-90 Triangle, 30-60-90 Triangle, 50-60-70 Triangle, 20-40-120 Triangle)

Exercise: Terminology

Two of the triangles above are isosceles. Which two?

One of the triangles above has an obtuse angle (an angle greater than 90 degrees). Which one?

Are any of the five triangles congruent to each other?
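The side-based tests above translate directly into a short computation; here is a sketch (the floating-point tolerance handling is an implementation choice, not part of the text):

```python
# SSS congruence, and the corresponding similarity test by common side ratio.
# The tolerance handling is an implementation choice.
import math

def congruent_sss(tri1, tri2, tol=1e-9):
    """tri1, tri2: the three side lengths of each triangle."""
    return all(math.isclose(s, t, abs_tol=tol)
               for s, t in zip(sorted(tri1), sorted(tri2)))

def similar_sss(tri1, tri2, tol=1e-9):
    """Similar if the sorted sides are in one common ratio."""
    s1, s2 = sorted(tri1), sorted(tri2)
    ratio = s2[0] / s1[0]
    return all(math.isclose(t / s, ratio, abs_tol=tol) for s, t in zip(s1, s2))

print(congruent_sss((6, 7, 8), (8, 6, 7)))   # True: the same triangle, relabelled
print(similar_sss((3, 4, 5), (6, 8, 10)))    # True: one is the other scaled by 2
print(congruent_sss((3, 4, 5), (6, 8, 10)))  # False
```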
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933914303779602, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/10071/what-is-the-local-minimum-of-a-complete-binary-tree
# What is the local minimum of a complete binary tree I have this confusion. What is the local minimum of a complete binary tree? Consider an $n$-node complete binary tree $T$, where $n = 2^d − 1$ for some $d$. Each node $v \in V(T)$ is labeled with a real number $x_v$. You may assume that the real numbers labeling the nodes are all distinct. A node $v \in V(T)$ is a local minimum if the label $x_v$ is less than the label $x_w$ for all nodes $w$ that are joined to $v$ by an edge. - 3 You gotta give us some more info to answer this. It could be anything, does the vertices/nodes represent something? Is the tree a heap, and the local minimum for a given subtree (subtree generated by some node) the least value in the subtree? – Pål GD Feb 24 at 22:42 1 Where did you encounter the expression? – saadtaame Feb 24 at 22:44 3 I'm not sure where the problem is, the definition you have seems pretty straightforward, a local minimum is any vertex where its label is less than the labels of all its neighbours. The complete binary tree part seems unimportant (unless your tree is embedded in a larger graph). – Luke Mathieson Feb 25 at 0:18 2 You could just traverse the tree in any convenient order, and for each node check if its value is less than the parent/children. Other than doing that, I'm at a loss. – vonbrand Feb 25 at 1:07 3 You're talking about "the" local minimum, but there could be several ones. – Yuval Filmus Feb 25 at 4:26 show 4 more comments ## 1 Answer As it stands, the definition is quite straightforward. A local minimum is simple a vertex where its label is less than the labels of all its neighbours. A completely made up example might be a vertex $v$ with label $5$, with a parent with label $10$ and children with labels $7$ and $9$. Then $v$'s label is less than all its neighbours, so it is a local minimum. Of course there can be more than one local minimum (as suggested by the local part). The fact that it's in a complete binary tree has no bearing on the definition, it's equally applicable to general graphs (or even hypergraphs, digraphs... pretty much anything where adjacency is well defined). As an additional note, this concept of "local minimum of a binary tree" is not a common notion, and doesn't appear as a usual property of a (complete) binary tree, though it does look like it might be related to search trees, or similar. -
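The traversal suggested in the comments is easy to write down; a sketch (storing the complete tree in an array with the usual heap indexing is a choice made here, not part of the question):

```python
# Walk a complete binary tree stored in an array (children of node i at
# 2*i + 1 and 2*i + 2) and collect every node whose label is smaller than all
# of its neighbours' labels (parent and children).
def local_minima(labels):
    n = len(labels)
    minima = []
    for i in range(n):
        neighbours = []
        if i > 0:
            neighbours.append(labels[(i - 1) // 2])   # parent
        for c in (2 * i + 1, 2 * i + 2):              # children, if present
            if c < n:
                neighbours.append(labels[c])
        if all(labels[i] < x for x in neighbours):
            minima.append(i)
    return minima

# A tree containing the answer's made-up example: node 5 with parent 10 and children 7, 9.
print(local_minima([10, 5, 12, 7, 9, 14, 13]))   # [1] -- the node labelled 5
```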
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496333599090576, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/172495/connection-of-a-line-bundle-induced-by-a-divisor
# Connection of a line bundle induced by a divisor Say I have a Line bundle induced by some Divisor $\mathfrak{L}(D)$. And $D$ is given by the zero locus of some polynomial $P$. I actually know this polynomial explicitly. Now, I also have a Connection and even this connection i know explicitly. I've been trying to show, that this connection belongs to the above line bundle. To my mind, this should be possible, as the first chern class of the line bundle is nothing more than the Curvature form of the Connection. Here is what I tried so far: • I tried computing the curvature form from the connection and then simply comparing it to the first chern class of $D$. This has proven difficult. Specifically, I don't know how to show that the Curvature form I find is of the class $c_1$ is in. • I tried using the fact that the transition functions of the line bundle are given by $g_{ab} = f_a/f_b$. And that a connection can be found by $D = d \log g_{ab}$. However, I don't really know how to construct these $f_a$ functions from my polynomial. Is there another way? Or perhaps know how to solve the two problems above? This must have been done by someone, but I can't find anything. Any examples are also highly appreciated! Thanks! - Is $D$ linearly equivalent to $0$ so that $\mathcal{L}(D)\simeq \mathcal{O}_X$? In what way do you "know" the connection? I find it hard to be handed a connection without the data of the line bundle being used in the description which automatically tells you it belongs to that line bundle. – Matt Jul 18 '12 at 17:23 Yes, some of my divisors are equivalent to 0. How can one use this? It is indeed a bit weird that I don't really know if the connection belongs to that line bundle or not, but sadly I only have hand-wavy arguments so far. And by "know the connection", I have an explicit formula for $D$. As well as for the Polynomial $P$ – FMN Jul 18 '12 at 17:30 The actual problem arises from the fact that the Divisors, etc, all arise from a pure mathematical derivation of the problem. And the Connection thusfar has come from a physics perspective. Therefore both "languages" used are a bit incompatible up to exactly this point. I'm trying to understand better on how to make the connection – FMN Jul 18 '12 at 17:32 Here's what is confusing me. To me a connection is a homomorphism $\nabla : \mathcal{L}\to \mathcal{L}\otimes \Omega^1$ that satisfies the Leibniz rule. There are other characterizations, like if you know it is flat then you can recover it from the local system or a representation of the fundamental group, but otherwise I don't know how to describe a connection without using $\mathcal{L}$ to do it. – Matt Jul 19 '12 at 0:08 Basically I just have a 1-form given explicitly. It appeared in some physical equation and it was possible to solve for it. That's also my problem, I'm not even sure if it actually is a connection. That's why I was hoping to recover some information of the connection using Divisors – FMN Jul 19 '12 at 8:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9637205004692078, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/128695-binomial-coefficients-series.html
# Thread: 1. ## binomial coefficients series Knowing that: $(1+x)^k = \sum_{n=0}^{\infty}\binom{k}{n}x^n = 1 + kx + \frac{k(k-1)}{2!}x^2 + \frac{k(k-1)(k-2)}{3!}x^3 + ...$ Give the Maclaurin series for the function: $f(x) = \frac{x^2}{\sqrt{2+x}}$ I figure I can rewrite the equation as $f(x) = x^2(1+(1+x))^{-1/2}$ so that my series becomes: $x^2(1+(x+1))^{-\frac1{2}} = x^2\sum_{n=0}^{\infty}\binom{(-\frac1{2})}{n}(x+1)^n =$ $x^2 [1 + (-\frac1{2})(x+1) + \frac{(-\frac1{2})((-\frac1{2})-1)}{2!}(x+1)^2 +$ $\frac{(-\frac1{2})((-\frac1{2})-1)((-\frac1{2})-2)}{3!}(x+1)^3 + ... ]$ which is $= x^2 [1 -\frac{x+1}{2} + \frac{3(x+1)^2}{{2^2}2!} - \frac{15(x+1)^2}{{2^3}3!} +$ $... + \frac{(-1)^n(1\cdot3\cdot5\cdot\cdot\cdot(2n-1))(x+1)^n}{{2^n}n!}]$ I thought this might be okay until I checked my answer on WolframAlpha and got a different answer: (x^2)*(2+x)^(-1/2) - Wolfram|Alpha $\sum_{n=2}^{\infty}x^n 2^{3/2-n} \binom{-1/2}{-2+n}$ $\frac{x^2}{\sqrt{2}}-\frac{x^3}{4\sqrt{2}} + \frac{3x^4}{32\sqrt{2}}-\frac{5x^5}{128\sqrt{2}}+ ...$ So are both answers somehow correct? Or did I do something wrong? Thanks, Brian 2. Do you have to use the Binomial Series? I'd find it easier just to find the Taylor series for $\frac{1}{\sqrt{2 + x}}$ and then multiply everything by $x^2$. 3. where $\Gamma$ is the usual gamma function. this allows for complex or real arguments. i think u need to use this because binomial coeffs are usually defined for z>k>0 where z,k are integers. that $z=\frac{-1}{2}$ is gnarly. ill keep thinking on this one. 4. well, as "luck" would have it: Binomial coefficient - Wikipedia, the free encyclopedia The definition of the binomial coefficients can be extended to the case where n is real and k is integer. In particular, the following identity holds for any non-negative integer k : This shows up when expanding into a power series using the Newton binomial serie : cleary now this is very doable...just express it like this, $\frac{x^2}{(1+(x+1))^\frac{1}{2}}$ use the result above and youre done. kinda sleazy, but what can u do. in no mood to attempt deriving the above. 5. Originally Posted by Prove It Do you have to use the Binomial Series? I'd find it easier just to find the Taylor series for $\frac{1}{\sqrt{2 + x}}$ and then multiply everything by $x^2$. Well, the problem says "Use a Maclaurin series from Table 1 to obtain the Maclaurin series for the given function." Table 1 has the series for: $\frac1{1-x} = \sum_{n=0}^{\infty} x^n$ $e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$ $sin x = \sum (blah)$ $cos x = \sum (blah)$ $tan^{-1}x = \sum (blah)$ $(1+x)^k = \sum_{n=0}^{\infty} \binom{k}{n}x^n = 1 + kx + \frac{k(k-1)x^2}{2!} + ...$ And I figured that the given function best fit the binomial series. I got that the x^2 would be factored out first, and then multiplied back in, and that's what I did, using k = -1/2 and x+1 in place of x. I'm new to series in general, so I'm just not sure if I'm making correct use of this binomial series. The idea of "just finding the Taylor series" has no meaning to me yet. 6. Originally Posted by vince In particular, the following identity holds for any non-negative integer k <snip> cleary now this is very doable...just express it like this, $\frac{x^2}{(1+(x+1))^\frac{1}{2}}$ That is how I expressed it, but then k=-1/2, so it is neither non-negative nor an integer, right? (Are we mixing Ks and Ns here? I see a k in the bottom of your binomial expression, and no Ns at all? I don't get it.) Oh, and I have absolutely no idea what the "usual gamma function" is. 
This being a section where we are just being introduced to Maclaurin, I don't think it applies to this problem. 7. Originally Posted by buckeye1973 Knowing that: $(1+x)^k = \sum_{n=0}^{\infty}\binom{k}{n}x^n = 1 + kx + \frac{k(k-1)}{2!}x^2 + \frac{k(k-1)(k-2)}{3!}x^3 + ...$ Give the Maclaurin series for the function: $f(x) = \frac{x^2}{\sqrt{2+x}}$ I figure I can rewrite the equation as $f(x) = x^2(1+(1+x))^{-1/2}$ so that my series becomes: $x^2(1+(x+1))^{-\frac1{2}} = x^2\sum_{n=0}^{\infty}\binom{(-\frac1{2})}{n}(x+1)^n =$ $x^2 [1 + (-\frac1{2})(x+1) + \frac{(-\frac1{2})((-\frac1{2})-1)}{2!}(x+1)^2 +$ $\frac{(-\frac1{2})((-\frac1{2})-1)((-\frac1{2})-2)}{3!}(x+1)^3 + ... ]$ which is $= x^2 [1 -\frac{x+1}{2} + \frac{3(x+1)^2}{{2^2}2!} - \frac{15(x+1)^2}{{2^3}3!} +$ $... + \frac{(-1)^n(1\cdot3\cdot5\cdot\cdot\cdot(2n-1))(x+1)^n}{{2^n}n!}]$ I thought this might be okay until I checked my answer on WolframAlpha and got a different answer: (x^2)*(2+x)^(-1/2) - Wolfram|Alpha $\sum_{n=2}^{\infty}x^n 2^{3/2-n} \binom{-1/2}{-2+n}$ $\frac{x^2}{\sqrt{2}}-\frac{x^3}{4\sqrt{2}} + \frac{3x^4}{32\sqrt{2}}-\frac{5x^5}{128\sqrt{2}}+ ...$ So are both answers somehow correct? Or did I do something wrong? Thanks, Brian Brian, Your answer, although it may be correct (I haven't checked the details), is not a Maclaurin series, because it is in terms of powers of 1+x. A Maclaurin series is the Taylor series about 0, i.e. it is in terms of powers of x. I suggest you start with $\frac{1}{\sqrt{2+x}} = 2^{-1/2} (1+ \frac{x}{2})^{-1/2}$, then apply the Binomial Theorem. 8. The path i suggested would lead to a messy formula that would equal the one provided by wolfram. Still, with Awkward's wise choice of factorization the following steps will lead to wolfram's answer. $\frac{x^2}{\sqrt{2+x}} = {x^2}2^{-1/2} (1+ \frac{x}{2})^{-1/2}$ Using the binomial theorem, this is then equal to: $\sum_{k=0}^{\infty}2^\frac{-1}{2}x^2\binom{(-\frac1{2})}{k}(x/2)^k$ $\sum_{k=0}^{\infty}2^\frac{-1}{2}\binom{(-\frac1{2})}{k}(\frac{x^\frac{2}{k}x}{2})^k$ let k=n-2 $\Rightarrow$ $\sum_{n=2}^{\infty}2^\frac{-1}{2}\binom{(-\frac1{2})}{n-2}(\frac{x^\frac{2}{n-2}x}{2})^{n-2}$ = $\sum_{n=2}^{\infty}2^{\frac{3}{2}-n}\binom{(-\frac1{2})}{n-2}(x^{\frac{2+n-2}{n-2}})^{n-2}$ = $\sum_{n=2}^{\infty}2^{\frac{3}{2}-n}\binom{(-\frac1{2})}{n-2}(x^{\frac{n}{n-2}})^{n-2}$ = $\sum_{n=2}^{\infty}x^n 2^{3/2-n}\binom{(-\frac1{2})}{n-2}$ 9. Originally Posted by awkward Brian, Your answer, although it may be correct (I haven't checked the details), is not a Maclaurin series, because it is in terms of powers of 1+x. A Maclaurin series is the Taylor series about 0, i.e. it is in terms of powers of x. I suggest you start with $\frac{1}{\sqrt{2+x}} = 2^{-1/2} (1+ \frac{x}{2})^{-1/2}$, then apply the Binomial Theorem. Ah-ha! It's amazing how I can get the big concepts, but it's missing the basic algebra that so often trips me up. It seems like there may be something wrong with using an offset of x like I did, while multiples and powers of x are okay, as they still center around zero. Not just in terms of technically being called a Maclaurin, but actually not being mathematically correct. I don't know. Thanks, Brian 10. Originally Posted by vince The path i suggested would lead to a messy formula that would equal the one provided by wolfram. Still, with Awkward's wise choice of factorization the following steps will lead to wolfram's answer. 
$\frac{x^2}{\sqrt{2+x}} = {x^2}2^{-1/2} (1+ \frac{x}{2})^{-1/2}$ Using the binomial theorem, this is then equal to: $\sum_{k=0}^{\infty}2^\frac{-1}{2}x^2\binom{(-\frac1{2})}{k}(x/2)^k$ $\sum_{k=0}^{\infty}2^\frac{-1}{2}\binom{(-\frac1{2})}{k}(\frac{x^\frac{2}{k}x}{2})^k$ let k=n-2 $\Rightarrow$ $\sum_{n=2}^{\infty}2^\frac{-1}{2}\binom{(-\frac1{2})}{n-2}(\frac{x^\frac{2}{n-2}x}{2})^{n-2}$ <snip> I haven't learned binomial theorem well enough to understand the way you are using it. I don't understand this having the k in the bottom, only the top, as shown in my original post: $(1+x)^k = \sum_{n=0}^{\infty}\binom{k}{n}x^n = 1 + kx + \frac{k(k-1)}{2!}x^2 + \frac{k(k-1)(k-2)}{3!}x^3 + ...$ Where k is simply -1/2 and just gets plugged in to the product sequences on the right side above, and x/2 is used in place of x. (And this is about all the explanation I'm given in the text as well, so I don't think I'm expected to understand anything any fancier yet.) I'd multiply the terms of the resulting series by $\frac{x^2}{\sqrt{2}}$ to get my original function $f(x) = \frac{x^2}{\sqrt{2+x}}$ I'll have to try that, and see if it gets me to the same answer. If it doesn't, I'll do some more research on binomial theorem and look closer at your solution. Thanks, Brian 11. Please note and remember that the letters/symbols ascribed to terms in a function, theorem, whatever, are never absolute, they are only relative. by this i mean any pair of letters $k,n$ have no menaing unless they are related to each other -- otherwise, any choice of letter/symbols will work, such as $x$ and $y$, etc. Now look at this: $(1+ \frac{x}{2})^{-1/2}$ By comparing this relative to the statement of the binomial theorem, is not the $k$ u see in the statement meant to mean ${-1/2}$? is not the $x$ equal to $\frac{x}{2}$ ?(Where the $x$ and the $\frac{x}{2}$ here are not the same, they're just referring to the same argument.) so all in all, you really should not ever get attached to symbols or letters used to relate variables in a mathematical expression. they can be switched, twirled, and discarded if it suits the progression of the operations to make things simpler. 12. from the identity $\frac{1}{\sqrt{1+x}}= \sum^{\infty}_{k=0}{2k \choose k}\frac{(-1)^{k}x^{k}}{4^{k}}<br />$ , we change $x<br />$ by $\frac{x}{2}<br />$ , so $\frac{\sqrt{2}}{\sqrt{2+x}}= \sum^{\infty}_{k=0}{2k \choose k}\frac{(-1)^{k}x^{k}}{8^{k}}<br />$ so $\frac{x^{2}}{\sqrt{2+x}}= \frac{1}{\sqrt{2}}\sum^{\infty}_{k=0}{2k \choose k}\frac{(-1)^{k}x^{k+2}}{8^{k}}.<br />$ 13. Originally Posted by vince Please note and remember that the letters/symbols ascribed to terms in a function, theorem, whatever, are never absolute, they are only relative. by this i mean any pair of letters $k,n$ have no menaing unless they are related to each other -- otherwise, any choice of letter/symbols will work, such as $x$ and $y$, etc. Now look at this: $(1+ \frac{x}{2})^{-1/2}$ By comparing this relative to the statement of the binomial theorem, is not the $k$ u see in the statement meant to mean ${-1/2}$? is not the $x$ equal to $\frac{x}{2}$ ?(Where the $x$ and the $\frac{x}{2}$ here are not the same, they're just referring to the same argument.) so all in all, you really should not ever get attached to symbols or letters used to relate variables in a mathematical expression. they can be switched, twirled, and discarded if it suits the progression of the operations to make things simpler. 
Well, yes, I understand this, but you have things like "n-2" in the lower half of your binomial expression, and I do not know how to evaluate that. It has no meaning to me, and it is made more confusing when I use the variable k in one role and you use it in a different role. Like I said, I'm using k=-1/2 and x/2 in place of x, and plugging those into the sequence $<br /> (1+x)^k = \sum_{n=0}^{\infty}\binom{k}{n}x^n = 1 + kx + \frac{k(k-1)}{2!}x^2 + \frac{k(k-1)(k-2)}{3!}x^3 + ...<br />$ It turns out this ultimately did lead me to $<br /> \frac{x^2}{\sqrt{2}}-\frac{x^3}{4\sqrt{2}} + \frac{3x^4}{32\sqrt{2}}-\frac{5x^5}{128\sqrt{2}}+ ...<br />$ but my final expression looks different, although I do think it is correct: [EDIT: No, it isn't, see below..] $<br /> \frac{x^2}{\sqrt{2}} + \sum_{n=1}^{\infty}\frac{(-1)^n x^{n+2}}{n 2^{2n+\frac{1}{2}}}<br />$ I get this because factoring out the $(-1)^n$ allows me to cancel out $\frac{(n-1)!}{n!}$ to get $\frac1{n}$, thus eliminating the need for the binomial coefficient expression. Brian 14. i hear ya man. but what is unclear about the function, $\binom{(-\frac1{2})}{n-2}$ is the $-\frac1{2}$ not the $n-2$, at least to me. that is why i suggested the gamma function earlier, and then happened to run into the following formula which would have done the trick. The problem with your most recent answer, as Awkward pointed out, is that "A Maclaurin series is the Taylor series about 0, i.e. it is in terms of powers of x." Your answer includes higher order powers of x, and by that i mean it needs to be cast as powers of x and not the sum of $x^2$ and the sum of powers of $x^{2+n}$. Best regards 15. Using the definition ${x\choose k}=\frac {\prod\limits^{k-1}_{s=0}(x-s)}{k!}$ and the identity $\prod^{k-1}_{s=0}(1+2s)=\frac{(2k)!}{2^{k}k!}$ we have $(1+x)^{-\frac{1}{2}}=\sum^{\infty}_{k=0}{-\frac{1}{2}\choose k}x^{k}=\sum^{\infty}_{k=0}\prod^{k-1}_{s=0}(-\frac{1}{2}-s)\frac{x^{k}}{k!} = <br />$ $=\sum^{\infty}_{k=0}\frac{(-1)^{k}}{2^{k}}\prod^{k-1}_{s=0}(1+2s)\frac{x^{k}}{k!}=\sum^{\infty}_{k=0} \frac{(-1)^{k}}{2^{k}}\frac{(2k)!}{2^{k}k!}\frac{x^{k}}{k! }= \sum^{\infty}_{k=0}\frac{(-1)^{k}(2k)!x^{k}}{(k!^{2})4^{k}}<br />$
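The competing expressions in this thread can also be checked mechanically; a sketch with sympy (an addition, not part of the original posts):

```python
# Expand x**2 / sqrt(2 + x) about x = 0 and compare with the closed form
# quoted from Wolfram|Alpha earlier in the thread.
import sympy as sp

x = sp.symbols('x')

print(sp.series(x**2 / sp.sqrt(2 + x), x, 0, 6))
# equivalent to x**2/sqrt(2) - x**3/(4*sqrt(2)) + 3*x**4/(32*sqrt(2)) - 5*x**5/(128*sqrt(2)) + O(x**6)

wolfram_terms = sum(x**n * 2**(sp.Rational(3, 2) - n) * sp.binomial(sp.Rational(-1, 2), n - 2)
                    for n in range(2, 6))
explicit_terms = (x**2 / sp.sqrt(2) - x**3 / (4 * sp.sqrt(2))
                  + 3 * x**4 / (32 * sp.sqrt(2)) - 5 * x**5 / (128 * sp.sqrt(2)))
print(sp.simplify(wolfram_terms - explicit_terms))   # 0
```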
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 92, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443065524101257, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/28197/cy-moduli-fields
# CY moduli fields When one does string compactification on a Calabi-Yau 3-fold. The parameters in Kähler moduli and complex moduli gives the scalar fields in 4-dimensions. It is claimed that the Kähler potentials of the CY moduli space gives the kinetic terms of the scalar fields in 4d. Could anyone let me know why? I know that a consistent coupling of a SUSY multiplet containing scalars with supergravity requires the scalar kinetic term comes from a Kähler potential. But I am not sure why precisely this Kähler potential coincides with the one for CY moduli spaces in the case of string compactification. Could anyone explain it to me? Thanks! - 1 Unfortunately I don't know the answer, but I've tried to share this question around and hopefully someone else who does will stumble upon it. Welcome to Physics Stack Exchange! – David Zaslavsky♦ May 12 '12 at 23:44 ## 1 Answer In order to describe physics of 4 dimensional space-time starting from 10 dimensional space, we consider that the extra 6 dimensional space has very tiny size (about the Planck length). This is called the comopactification in string theory. Then the symmetry of theory requires this internal space to be Ricci flat: \begin{equation} R_{IJ}=0 \end{equation} where $I$ and $J$ run from 0 to 6. Since C-Y manifolds have the structure of the complex manifolds, it is appropriate to use the indices of the complex manifolds, i.e. $i,j=1,2,3$. The metric on C-Y manifolds is of type (1,1) $G_{i,\bar j}$ and we obtain the closed (1,1)-form from the metric \begin{equation} \omega =\sqrt{-1}G_{i\bar j} dz^i\wedge d\bar z^{\bar j} . \end{equation} And there exists nowhere vanishing holomorphic 3-form $\Omega_{klm}$ since the Ricci tensor vanishes. In what follows we consider the moduli of C-Y manifolds. If the geometric object can be continuously deformed with its geometric properties preserved, we call the parameter of this deformation moduli. Suppose that the metric $G_{IJ}(y)$ is given on a C-Y manifold $M$ where $y$ is the local coordinate of $M$. And assume that the metric change $G_{IJ}+g_{IJ}$ and this new metric also gives RIcci flat. Then taking the first order of the deformation of metric, we have \begin{equation} \Delta_6 g_{IJ}(y)=0 \end{equation} where $\Delta_6$ is a 6 dimensional Laplacian. Thus, the deformation of the metric which preserves the definition of C-Y manifolds is given by the eigenfuntion of the Laplacian with its eigenvalue zero. In general, the eigenvalues of $-\Delta_6$ take zero and positive values: \begin{equation} -\Delta_6 f_{IJ}^\alpha(y)=m_\alpha^2 f_{IJ}^\alpha(y), \,\,\,\alpha=1,2,\cdots. \end{equation} Let us consider the case that the deformation of metric $g_{IJ}$ is of type (1,1), $\delta g_{i\bar j}$, and of type (2,0) , $\delta g_{i j}$, respectively. Generally, the metric on a K\"ahler manifold is of type (1,1) and $g_{i\bar j}$ gives the deformation preserving this type. This is called the deformation of K\"ahler structure. The deformation of K\"ahler structure is described by a solution of the equation \begin{equation} \Delta_6 \omega_{i j}(y)=0, \end{equation} i.e. it is given by a harmonic (1,1)-form. The number of a harmonic (1,1)-form is provided by the Hodge number $h_{1,1}$ of the manifold $M$. On the other hand, the type (2,0) deformation of metric implies the deformation of complex structure on a complex manifold. Using the complex conjugate $\bar \Omega$ of $\Omega$ , we obtain \begin{equation} \chi_{i\bar j \bar k} \equiv \delta g_{ij} G^{j\bar k} \bar \Omega_{\bar k\bar l\bar m}. 
\end{equation} Therefore the deformation of complex structure is described by harmonic (1,2)-form and the degree of freedom of the deformation becomes the Hodge number $h_{1,2}=h_{2,1}$. As we see, C-Y manifolds have two kinds of the deformation parameters, the K\"ahler parameters and the parameters of complex structure, which are called moduli, and the K\"ahler parameters correspond to the degree of the change of the size and the parameters of complex structure correspond to that of the deformation of the shape. The metric of the complex structure moduli space is \begin{equation} G_{\alpha\bar\beta}^{\rm mod}=-\frac{i\int\chi_\alpha\wedge\bar\chi_{\bar \beta}}{i\int\Omega\wedge\bar\Omega} \end{equation} Recalling that the metric $G^{\rm mod}_{\alpha\bar\beta}$ of the complex structure moduli space can be obtained from the K\"ahler potential $\cal K$ \begin{equation} G_{\alpha\bar\beta}^{\rm mod}=\partial_{\alpha}\partial_{\bar\beta}{\cal K} \ , \end{equation} one finds that the K\"ahler potential can be written as \begin{equation} {\cal K}=-\log\int\left(i\int \Omega\wedge\bar\Omega \right) \ . \end{equation} Let us choose the basis $C_a$ $(a=1,\cdots, h_{1,1})$ of the 4-cycle as the duals of the harmonic (1,1)-form $\omega^a\equiv\omega^a_{i\bar j}d\!z^i\wedge d\! z^{\bar j}$ $(a=1,\cdots, h_{1,1})$. Then we can expand \begin{equation} \ast C+\sqrt{-1}\omega=\sum t_a \omega^a \end{equation} where $\omega$ is the K\"ahler form and $C$ is 4-th antisymmetric tensor field which is the partner of the gravitational field. Then the coefficients of the expansion is given by \begin{equation} t_a =\int _{C_a} (C+\sqrt{-1}\ast\omega). \end{equation} These are the parameters of the complexified K\"ahler moduli. Similarly, let us choose the basis $A_a$ and $B_a$ $(a=0,1,\cdots, h_{1,2})$ of 3-cycle so that the intersection numbers satisfy $A_a\cap B_b=\delta_{ab}, A_a\cap A_b=B_a\cap B_b=0$. In this case, it is known that the parameters are taken as the moduli of the deformation of complex structure \begin{equation} z_a=\int_{A_a} \Omega, \,\,\, a=a,\cdots, h_{1,2}. \end{equation} And also it is know that the integral of $\Omega$ over the cycle $B_a$ can be written \begin{equation} \frac{\partial F}{\partial z_a}=\int _{B_a} \Omega,\,\,\, a=a,\cdots, h_{1,2} \end{equation} where $F$ is a holomorphic function of $z_a$ and is called the prepotential. Under the compactification of a C-Y manifold fields of 10 dimensional theory can be expanded by the eigenfunctions of the 6 dimensional Laplacian \begin{equation} f_{IJ}(x,y)=\sum_\alpha \phi^\alpha(x)f_{IJ}^\alpha(y). \end{equation} Then the wave function of 10 dimensional theory reduces to the 4 dimensional field equation: \begin{eqnarray} &&\Delta_{10}f_{IJ}(x,y)=(\Delta_{4}+\Delta_{6})\sum_\alpha\phi^\alpha(x)f_{IJ}^\alpha(y)=0 \ && \rightarrow (\Delta_{4}-m_\alpha^2) \phi^\alpha(x)=0. \end{eqnarray} Hence the scalar field $\phi_\alpha$ with mass $m_\alpha$ shows up in 4 dimensional space corresponding to the eigenvalue $m_\alpha^2$ of the six dimensional Laplacian. Especially, the mass of the scalar field corresponding to the moduli of the manifold becomes zero and the massless scalar particle, moduli particle shows up in the four dimensional effective theory. Then the expectation value of the vacuum corresponding to the parameter $\{t_a, z_a\}$. Since the nonzero eigenvalue $m_\alpha$ of the Laplacian is inversely proportional to the square of the size of the space $M$ and acquire very heavy mass, we can neglect them. 
${\rm \bf Note\ Added}$: Once Type II string theory is compactified on a Calabi-Yau manifold $M$, the 4d low-energy effective theory is described by ${\cal N}=2$ supergravity. The field content of ${\cal N}=2$ supergravity consists of the Weyl (gravity) multiplet, vector multiplets and hypermultiplets. The effective actions for the vector multiplets and hypermultiplets are described by non-linear sigma models with target spaces the vector multiplet moduli space and the hypermultiplet moduli space, respectively. In particular, the kinetic terms of the scalars $\phi^i$ in the vector multiplets and the hypermultiplets can be written as \begin{equation} \int_{M_4}d^4x\sqrt{g}G_{ij}\partial_\mu \phi^i \partial^\mu \phi^j +\cdots \end{equation} where $G_{ij}$ is the metric of the moduli space. In type IIA compactifications, the vector multiplet moduli space coincides with the complexified Kähler moduli space and the hypermultiplet moduli space is the complex structure moduli space. In type IIB, vice-versa. In type IIB compactifications, the low energy effective action of the vector multiplets is dictated by the prepotential $F$ due to supersymmetry. Due to mirror symmetry we can restrict our attention to one of the two type II theories. - This is very nice +1, but it doesn't answer the question, which is about the Kahler potential of the scalars, the field dependence of the kinetic terms, or the derivative terms in the equation of motion. It's not asking why the moduli correspond to massless fields, nor is it asking what gives rise to the superpotential for the massive case. The question is entirely restricted to the massless scalars parametrizing the Kahler moduli--- it is asking why the kinetic term of these fields coincides with the Kahler potential of the Moduli space. Maybe it's only true in the SUGRA approximation. – Ron Maimon May 14 '12 at 1:35 Thanks for answering, Satoshi. But indeed my question is why the kinetic term of these fields coincides with the Kahler metric of the CY moduli space. – MHan May 15 '12 at 10:56 I added the comments. Hope it answers your question. – Satoshi Nawata May 16 '12 at 21:37 @SatoshiNawata: Your comment is the statement of the thing, but he wants to know why the metric on the 4d-fields is the same as the metric on the moduli space. It's intuitively reasonable that they should be the same, since constant values of the fields correspond to different manifolds, and the local spring-constant "should be" the metric distance between the manifolds. But this requires an argument, why isn't it G plus the Ricci curvature of the moduli space? That the two notions of metric coincide is probably easy to derive in SUGRA, by derivative counting, but I didn't work it out. – Ron Maimon May 17 '12 at 7:28 @Ron Maimon It's because the low energy effective theory is a non-linear sigma model with target manifold the moduli space of the CY. – Satoshi Nawata May 18 '12 at 6:45
http://www.physicsforums.com/showpost.php?p=4173339&postcount=2
Your textbook should be clearer. The rule is: if there are no external forces on a system, then its centre of mass doesn't accelerate. What's the centre of mass? Roughly, it's the average position of the mass of the system. Quantitatively, the centre of mass of a system of $N$ particles, each with mass $m_i$ and position $\vec{r}_i$, is $$\vec{r}_{\mathrm{centre \ of \ mass}}=\frac{1}{M}\sum_{i=1}^N m_i \vec{r}_i,$$ where $M=\sum_{i=1}^N m_i$ is the total mass.
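A small numerical sketch of this formula, with made-up masses and positions purely for illustration:

````python
# Centre of mass of N point particles: r_cm = (1/M) * sum_i m_i * r_i
# Hypothetical data: three particles in the plane.
masses = [2.0, 1.0, 3.0]                          # m_i
positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]  # r_i

M = sum(masses)
r_cm = tuple(sum(m * r[k] for m, r in zip(masses, positions)) / M
             for k in range(len(positions[0])))

print("total mass M =", M)        # 6.0
print("centre of mass =", r_cm)   # (0.166..., 1.0)
````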
http://math.stackexchange.com/questions/75296/what-is-the-difference-between-total-recursive-and-primitive-recursive-functions?answertab=votes
# What is the difference between total recursive and primitive recursive functions I am studying the theory of computation. Here are some terminologies that I am confused about. Are total recursive function and primitive recursive function equivalent? I think they are equal because their domains are both total and they always generate output. Can anyone tell me if there is some difference. Thanks. - ## 3 Answers There is another example of a non-primitive-recursive but total computable function that explains better what the restricted definition of primitive recursion entails. Each primitive recursive function is defined by a particular finite set of recursion equations, in terms of a fixed set of basic functions. We can use this to define an effective scheme for indexing all the primitive recursive functions. Let $(f_e : e \in \mathbb{N})$ be an effective indexing of the unary primitive recursive functions, meaning that • Every unary primitive recursive function is of the form $f_{e_f}(n)$ for some fixed $e_f$. • There is a single algorithm to compute $f_e(n)$ given $e$ and $n$. Let $g(e,n)$ be the function that computes $f_e(n)$. We call $g$ a universal binary function. It is universal in the sense that it encapsulates all unary primitive recursive functions. The function $g(e,n) = f_e(n)$ is certainly a total computable function. It is not primitive recursive. One reason is that for some $e_0$, $h(n) = g(e_0,n)$ is the Ackermann function, which is not primitive recursive. But there is another reason that I want to explain. Suppose that $g(e,n)$ is primitive recursive. Then the function $f(n) = g(n,n)+1$ is also primitive recursive, as you can verify. But this function $f(n)$ cannot be of the form $g(e,n)$ for any fixed $e$, because for every $e$ we have $f(e) = g(e,e) + 1 \not = g(e,e)$, by the definition of $f$. Thus $g$ is total computable but not primitive recursive. This diagonalization proof can be used in any system that is closed under a few basic operations. The moral of this construction is a key fact in computability theory: A system of (partial) functions in which every function is total, and which has certain basic closure properties, cannot include a universal binary function. (The basic closure properties are the ones used in the proof above.) The point is that there is a fundamental incompatibility between the goal of making a system concrete enough that every function is total, and making a system strong enough that it includes universal functions for itself. Only one of these goals can be achieved at a time. There is a universal binary partial function for the set of unary total computable functions. In fact there is a universal binary partial function for the set of unary partial computable functions, a larger class. This fact is essentially equivalent to the existence of a universal Turing machine. - The Ackermann function, for example, is recursive, but is not primitive recursive. Indeed it "grows faster" than any primitive recursive function. From the definition in the link, it is not difficult to write an explicit program that computes the Ackermann function. There are many other concrete examples. The class of primitive recursive functions is a small subclass of the class of recursive functions. - They are not equivalent. The standard example is the Ackermann function, which is (total) recursive, but not primitive recursive. But if you are a programmer, here's another way to think of the difference between total recursive and primitive recursive functions. 
I'll discuss this in terms of an idealized imperative programming language running on an idealized computer (no memory or storage limits). You can think in terms of any standard imperative language, such as C or Java. A total recursive function is any function you can write which always terminates. A primitive recursive function is any function you can write where the only loops are those of the form "for i=1 to n do ..." Here $n$ is fixed in advance (before the loop starts), and you cannot (explicitly) change $i$ nor $n$ inside the loop. So the number of times the loop executes is determined in advance. This is the only looping structure allowed. You do not have a while loop, which terminates based on a condition, or a goto statement that can jump back to an arbitrary point in the code, or recursive function calls. These conditions make infinite loops impossible. An example of a programming language that only supports primitive recursive functions is BlooP (this stands for Bounded Loop). It is impossible to write an infinite loop in BlooP, whereas it is undecidable whether a general program terminates. However, even though all BlooP programs terminate, there are terminating programs that cannot be written in BlooP. The Ackermann function is one of them. The simplest example, though, is the BlooP interpreter. This is a program that takes as input a BlooP program plus any input the BlooP program requires, then runs the BlooP program, and produces its output. Since BlooP programs always terminate, the interpreter always terminates too. But it cannot be written in BlooP, by a diagonalization argument. Roughly, the diagonalization argument goes as follows: For simplicity, we'll assume all functions map natural numbers to natural numbers. (Other types of inputs can be simulated by a Goedel encoding.) Let $B_1, B_2, \ldots$ be a recursive list of all BlooP programs, and set $f(n) = B_n(n) + 1$. Since every $B_n$ terminates, $f$ always terminates too, and is therefore total recursive. But $f \ne B_n$ for any $n$, since they differ at the value $n$ by construction. So $f$ is already an example of a total recursive function not primitive recursive (i.e. not expressible in BlooP). But since the only obstacle to calculating $f$ is the calculation of $B_n(n)$, which can be achieved by an interpreter, it follows that the BlooP interpreter cannot be written in BlooP. - +1 for the answer. However I don't agree that the BlooP interpreter is a simpler example than the Ackermann function. One can view $A(m,n)$ as computing $f_m(n)$ in a very particular and simple list $f_0,f_1,f_2,\ldots$ of functions, all of which are primitive recursive (each $f_{n+1}$ is based on $f_n$ by a same primitive recursion scheme). So the Ackermann function interprets this very limited subset of all BlooP programs. What is simpler about the full interpreter is the proof that it is not primitive recursive. – Marc van Leeuwen Jan 14 '12 at 22:08
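To make the contrast concrete, here is a minimal Python sketch (the function names and the examples are my own, not from the thread). The first two functions use only `for` loops whose bounds are fixed before the loop starts, in the BlooP spirit; the Ackermann function below them needs unbounded recursion and cannot be rewritten in that style.

````python
# "BlooP-style" definitions: the only loops are `for _ in range(n)` with n fixed
# before the loop starts -- the shape of primitive recursion.

def add(m, n):
    result = m
    for _ in range(n):        # bound n is fixed in advance
        result += 1
    return result

def mul(m, n):
    result = 0
    for _ in range(n):        # repeated addition, again with a fixed bound
        result = add(result, m)
    return result

# The Ackermann function: total (always terminates) but not primitive recursive.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(mul(6, 7))          # 42
print(ackermann(2, 3))    # 9
print(ackermann(3, 3))    # 61
````

Each row $n \mapsto A(m,n)$ for fixed $m$ is still primitive recursive (for instance $A(3,n)=2^{n+3}-3$), but the diagonal $n \mapsto A(n,n)$ eventually outgrows every primitive recursive function, which is one way to see that $A$ itself cannot be primitive recursive.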
http://gauravtiwari.org/2011/05/13/dedekinds-theory-of-real-numbers/
# Dedekind's Theory of Real Numbers

# Intro

Let $\mathbf{Q}$ be the set of rational numbers. It is well known that $\mathbf{Q}$ is an ordered field and that the set $\mathbf{Q}$ is equipped with a relation called "less than", which is an order relation. Between any two rational numbers there exist infinitely many elements of $\mathbf{Q}$. Thus the system of rational numbers seems to be dense and so apparently complete. But it is quite easy to show that there exist some numbers (e.g., ${\sqrt{2}, \sqrt{3} \ldots}$ etc.) which are not rational. For example, let us prove that $\sqrt{2}$ is not a rational number or, in other words, that there exists no rational number whose square is 2. Suppose, if possible, that $\sqrt{2}$ is a rational number. Then according to the definition of rational numbers $\sqrt{2}=\dfrac{p}{q}$, where p & q are relatively prime integers. Hence, ${\left(\sqrt{2}\right)}^2=p^2/q^2$ or $p^2=2q^2$. This implies that p is even. Let $p=2m$; then $(2m)^2=2q^2$ or $q^2=2m^2$. Thus $q$ is also even. But if both are even, they are not relatively prime, which is a contradiction. Hence $\sqrt{2}$ is not a rational number and the proof is complete. Similarly we can prove that other such numbers are not rational. From this proof, it is clear that the set $\mathbf{Q}$, although dense, is not complete, and that there are gaps between the rational numbers in the form of irrational numbers. This remark shows the necessity of forming a more comprehensive system of numbers other than the system of rational numbers. The elements of this extended set will be called real numbers. The following three approaches have been made for defining a real number.

1. Dedekind's Theory
2. Cantor's Theory
3. Method of Decimal Representation

The method known as Dedekind's Theory, due to R. Dedekind (1831-1916), will be discussed in this note. To discuss this theory we need the following definitions:

Rational number: A number which can be represented as $\dfrac{p}{q}$ where p is an integer and q is a non-zero integer, i.e., $p \in \mathbf{Z}$ and $q \in \mathbf{Z} \setminus \{0\}$, with p and q relatively prime, i.e., their greatest common divisor is 1: $\left(p,q\right) =1$.

Ordered Field: Here, $\mathbf{Q}$ is an algebraic structure on which the operations of addition, subtraction, multiplication & division by a non-zero number can be carried out.

Least or Smallest Element: Let $A \subseteq Q$ and $a \in Q$. Then $a$ is said to be a least element of $A$ if (i) $a \in A$ and (ii) $a \le x$ for every $x \in A$.

Greatest or Largest Element: Let $A \subseteq Q$ and $b \in Q$. Then $b$ is said to be a greatest element of $A$ if (i) $b \in A$ and (ii) $x \le b$ for every $x \in A$.

## Dedekind's Section (Cut) of the Set of All the Rational Numbers

Since the set of rational numbers is an ordered field, we may consider the rational numbers to be arranged in order on a straight line from left to right. Now if we cut this line at some point $P$, then the set of rational numbers is divided into two classes $L$ and $U$. The rational numbers on the left, i.e. the rational numbers less than the number corresponding to the point of cut $P$, are all in $L$, and the rational numbers on the right, i.e. the rational numbers greater than the point, are all in $U$.
If the point $P$ is not a rational number then every rational number belongs either to $L$ or to $U$. But if $P$ is a rational number, then it may be considered as an element of $U$.

### Real Numbers

Definition: Let $L \subset \mathbf{Q}$ satisfy the following conditions:

1. $L$ is a non-empty proper subset of $\mathbf{Q}$.
2. $a, b \in \mathbf{Q}$, $a < b$ and $b \in L$ imply that $a \in L$.
3. $L$ doesn't have a greatest element.

Let $U=\mathbf{Q}-L$. Then the ordered pair $\langle L,U \rangle$ is called a section or a cut of the set of rational numbers. This section of the set of rational numbers is called a real number. Notation: The set of real numbers $\alpha, \beta, \gamma, \ldots$ is denoted by $\mathbf{R}$. Let $\alpha = \langle L,U \rangle$; then $L$ and $U$ are called the Lower and Upper Class of $\alpha$ respectively. These classes will be denoted by $L(\alpha)$ and $U(\alpha)$ respectively.

### Remark

From the definition of a section $\langle L,U \rangle$ of rational numbers, it is clear that $L \cup U = \mathbf{Q}$ and $L \cap U = \emptyset$. Thus a real number is uniquely determined once its lower class $L$ is known.

For Example: Let $L = \{ x: x \in \mathbf{Q}, x \le 0 \ \text{or} \ x > 0 \ \text{and} \ x^2<2 \}$. Then prove that $L$ is the lower class of a real number.

Proof: $\star$ Since $0 \in L$ and $2 \not\in L$, $L$ is a non-empty proper subset of $\mathbf{Q}$. $\star \star$ Let $a,b \in \mathbf{Q}$, $a < b$ and $b \in L$. If $a \le 0$ then $a \in L$. If $a > 0$, then $0 < a < b$ and $b \in L$ give $b^2 < 2$, so $a^2 < b^2 < 2$ and hence $a \in L$. $\star \star \star$ Let $a \in \mathbf{Q}$ with $a \in L$; we find $b \in L$ with $b > a$. If $a \le 0$ then $b=1$ works, since $1 \in L$ and $1 > a$. If $a > 0$ then $a^2 < 2$; take the rational number $b =\dfrac {2a+2}{a+2}$. Then $b-a=\dfrac{2a+2}{a+2} -a =\dfrac{2-a^2}{a+2} > 0$, so $b > a$, and $2-b^2 = \dfrac{2(2-a^2)}{(a+2)^2} > 0$, so $b^2 <2$. Thus $0 < a < b$ and $b^2 < 2$, so $b \in L$. Hence $L$ has no greatest element. Since $L$ satisfies all the conditions of a section of rational numbers, it is the lower class of a real number. [Proved]

Remark: In the given problem, the corresponding upper class is the set $U=\{x: x \in \mathbf{Q}, x > 0 \ \text{and} \ x^2 > 2 \}$, which has no smallest element.

Real Rational Number: The real number $\alpha = \langle L,U \rangle$ is said to be a real rational number if its upper class $U$ has a smallest element. If $r$ is the smallest element of $U$, then we write $\alpha =r^*$.

Irrational Number: The real number $\alpha = \langle L,U \rangle$ is said to be an irrational number if $U$ does not have a smallest element.

## Important Theorems & Results

If $\langle L,U \rangle$ is a section of rational numbers, then

1. $U$ is a non-empty proper subset of $\mathbf{Q}$.
2. $a, b \in \mathbf{Q}, \ a < b \ \text{and} \ a \in U \Rightarrow b \in U$.
3. $a \in L, b \in U \Rightarrow a < b$.
4. if $k$ is a positive rational number, then there exist $x \in L$, $y \in U$ such that $y-x=k$.
5. if $L$ contains some positive rational numbers and $k > 1$ then there exist $x \in L$ and $y \in U$ such that $\dfrac{y}{x}=k$.
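Because the lower class above is defined by exact rational inequalities, one can experiment with it directly using exact rational arithmetic. A minimal sketch (the sample points are arbitrary, chosen only for illustration):

````python
# Membership test for the lower class L = { x in Q : x <= 0, or x > 0 and x^2 < 2 },
# i.e. the Dedekind cut defining sqrt(2), using exact rational arithmetic.
from fractions import Fraction

def in_lower_class(x: Fraction) -> bool:
    return x <= 0 or x * x < 2

# A few sample rationals on either side of the cut.
for x in [Fraction(-3), Fraction(1), Fraction(7, 5), Fraction(141421, 100000),
          Fraction(3, 2), Fraction(2)]:
    side = "L" if in_lower_class(x) else "U"
    print(f"{str(x):>14}  ->  {side}")

# Illustrating that L has no greatest element: from a in L with a > 0,
# b = (2a + 2)/(a + 2) is a larger rational that still lies in L.
a = Fraction(7, 5)
b = (2 * a + 2) / (a + 2)
print(b > a, in_lower_class(b))   # True True
````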
http://www.physicsforums.com/showthread.php?t=17378
vector space proof

How do you prove that if v is an element of V (a vector space), and if r is a scalar and if rv = 0, then either r = 0 or v = 0? It seems obvious, but I have no idea how to prove it...

If v is the vector (a, b, c), then the vector that results from the multiplication rv is (ra, rb, rc). If the result is equal to 0, the zero vector, then (ra, rb, rc) = (0, 0, 0). If we write this formally we get: $$ra = 0$$ $$rb = 0$$ $$rc = 0$$ The solution is that either r equals 0, or a, b, and c all equal 0. And if a, b, and c are 0 then the vector v is (0, 0, 0), which is the zero vector. Is this proof satisfactory? There are probably a lot of ways to prove this.

That proof requires you to pick a basis. If I pick a different basis, do you know that it still holds? Here's the basis-free proof. Suppose rv=0; then if r is zero we are done, if not, multiply rv=0 by 1/r and we see v=0.

Quote by matt grime: That proof requires you to pick a basis. If I pick a different basis, do you know that it still holds?

Yes... if you pick your basis at (A, B, C) then the vector v becomes (a - A, b - B, c - C) and the zero vector is now (A, B, C). $$r(a - A) = A$$ $$r(b - B) = B$$ $$r(c - C) = C$$ For this to be true for r <> 0, a must equal A, b must equal B and c must equal C, and thus the vector v becomes (A, B, C), which is again the zero vector.

No, that isn't how one does a change of basis (of a vector space: the origin isn't fixed).

It's also dimension dependent. The result is true for every vector space, even those where picking a basis, never mind solving an uncountable set of linear equations, requires the axiom of choice.
http://physics.stackexchange.com/questions/tagged/dimensions+dimensional-analysis
# Tagged Questions

1 answer, 58 views

### What does the Reynolds Number of a flow represent physically?

What does the Reynolds Number of a flow represent physically? I am having trouble understanding the meaning and the utility of the Reynolds number for a certain flow, could someone please tell me how ...

2 answers, 92 views

### Is the "dimension" in dimensional analysis the same as the "dimension" in "three spatial dimensions"?

When we talk about the dimension of a quantity (e.g. the dimension of acceleration is $[ L \ T ^ {-2}]$) are we talking about the same "dimension" as when we talk about three dimensional space? Are ...

3 answers, 545 views

### Understanding counterintuitive units like s^2

One of the things I never understood but was too afraid to ask is this: how should I think of things like kg/s^2. What exactly is a square second? Square foot makes sense to me because I can see it, ...
http://mathhelpforum.com/calculus/47746-solved-matrix-valued-complex-functions.html
# Thread:

1. ## [SOLVED] matrix-valued complex functions We have a matrix-valued function which is contractive and analytic in the unit disk ($ff^*\le I$), unitary on the unit circle ($ff^*=I$), and nondegenerate throughout the unit disk. I want to conclude that $f$ is a constant unitary matrix. We can assume that $f$ is analytic in some neighbourhood of the unit disk (rather than analytic inside the unit disk with boundary values which are unitary). The proof for the 1D case could go as follows -- since f doesn't vanish, we can take $\log|f|$, which is a harmonic function. On the unit circle $\log|f|$ is zero, so by the maximum principle for harmonic functions it is zero everywhere, i.e. $|f|$ is constant everywhere, and then so is f. Clearly this approach doesn't help me in the matrix case.

2. Here's another approach to the scalar-valued case, which looks as though it might generalise more easily to the matrix-valued case. Since the unit disk $\mathbb D$ is simply connected, we can define log(f) in such a way that it is continuous on $\mathbb D$ and analytic in the interior. Use a conformal map taking the unit disk to the upper half-plane $\mathbb H$ to transport log(f) to a bounded analytic function on $\mathbb H$. Then i*log(f) will be real on the real axis, so by the Schwarz reflection principle it extends to a bounded entire (therefore constant) function.

3. Can someone state this problem for the 1-D case? I just want to see how it looks like - I cannot solve the general one.

4. Originally Posted by ThePerfectHacker: Can someone state this problem for the 1-D case? I just want to see how it looks like - I cannot solve the general one. In the scalar-valued case, the result is saying this: Let f be a complex-valued continuous function on the closed unit disc, analytic in the interior of the disc, with no zeros in the disc, and with $|f|=1$ on the boundary of the disc. Then f must be constant.

5. Opalg, I don't like the matrix-valued logarithm, so I slightly modified your idea: instead of taking the logarithm and then extending it, we can extend the function f immediately (basically via Schwarz lemma). For $|z|\ge1$ define $\hat{f}(z)=\left(f(1/\bar{z})^*\right)^{-1}$, which is analytic and well defined since f doesn't vanish. Then on the unit circle $\hat{f}(e^{i\theta})=\left(f(e^{i\theta})^*\right)^{-1}=f(e^{i\theta})$ by the unitarity condition, so $f=\hat{f}$, i.e. f extends to an entire function. It's bounded, so we're happy. On this subject: is it actually true that there exists an analytic logarithm of a nonvanishing analytic function on a simply connected domain for the matrix-valued case? I wasn't able to find it in the literature, though it looks like the scalar case proof should go through without problems.

6. Originally Posted by choovuck: Opalg, I don't like the matrix-valued logarithm, so I slightly modified your idea: instead of taking the logarithm and then extending it, we can extend the function f immediately (basically via Schwarz lemma). For $|z|\ge1$ define $\hat{f}(z)=\left(f(1/\bar{z})^*\right)^{-1}$, which is analytic and well defined since f doesn't vanish. Then on the unit circle $\hat{f}(e^{i\theta})=\left(f(e^{i\theta})^*\right)^{-1}=f(e^{i\theta})$ by the unitarity condition, so $f=\hat{f}$, i.e. f extends to an entire function. It's bounded, so we're happy. Yes, that looks neat.
Originally Posted by choovuck: On this subject: is it actually true that there exists an analytic logarithm of a nonvanishing analytic function on a simply connected domain for the matrix-valued case? I wasn't able to find it in the literature, though it looks like the scalar case proof should go through without problems. I can't give a reference for that. I think it's true (defining the log by the spectral theorem) but I wouldn't want to put money on it.

7. This does not help the poster but I had a question. Let $f$ be analytic on the interior of the unit disk and continuous on the boundary, with $|f|=1$ on the boundary. Furthermore, $f$ has zeros inside the disk. Then what happens? I know that if $f$ has finitely many zeros $\alpha_1,...,\alpha_n$ (with multiplicity) then $f(z) = \omega \prod_{k=1}^n \frac{z-\alpha_k}{1-\bar \alpha_k z}$ for some $\omega$ on the boundary. But what happens if $f$ has infinitely many zeros? I just realized the moment I wrote that phrase that it is impossible for it to have infinitely many zeros. And this question was bothering me today for a non-trivial amount of time. How stupid of me. I just leave it here just in case the original poster finds this interesting. Sorry.

8. Well, the original question arose in connection with the so-called inner functions (you can read a bit on this link), which are bounded analytic functions inside the unit disk (not necessarily continuous on the boundary) with boundary values (as limits) of modulus 1. An inner function can actually have infinitely many zeros $z_k$, but in this case $\sum(1-|z_k|)<\infty$ and the infinite Blaschke product (of the form you described) converges and we can factor it out and assume that our inner function is in fact nonvanishing. Now, this nonvanishing inner function can be pretty ugly, but if it's continuous on the boundary, we just proved that it's not. Yay
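In the scalar case, the finite Blaschke products mentioned above are easy to experiment with numerically. A minimal sketch (the zeros are arbitrary points inside the disk, chosen only for illustration), checking that the product has modulus 1 on the circle and modulus less than 1 inside:

````python
# Finite Blaschke product B(z) = prod_k (z - a_k) / (1 - conj(a_k) z), |a_k| < 1.
# It is analytic on the closed unit disk, |B| < 1 inside, and |B| = 1 on |z| = 1.
import cmath

def blaschke(z, zeros):
    value = 1.0 + 0.0j
    for a in zeros:
        value *= (z - a) / (1 - a.conjugate() * z)
    return value

zeros = [0.5 + 0.0j, 0.3 - 0.4j]          # hypothetical zeros inside the disk

# On the unit circle the modulus should be 1 (up to rounding error).
print(max(abs(abs(blaschke(cmath.exp(1j * t), zeros)) - 1.0)
          for t in [k * 0.01 for k in range(629)]))   # ~1e-15

# Strictly inside the disk the modulus is < 1.
print(abs(blaschke(0.2 + 0.1j, zeros)))               # < 1
````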
http://mathoverflow.net/questions/44266?sort=oldest
## Rooks on a lifeline

The short version of this question is: If $G$ is a graph whose nodes are associated with squares of a chessboard, such that no two nodes in the same row or column of the board are adjacent, we want to associate rooks with the vertices of $G$, such that at most one rook appears in each row and column of the chessboard, under the constraint that the vertices containing the rooks induce a connected subgraph of $G$ (thus, the rooks are connected to each other with a lifeline, or lifegraph if you want to be specific). A maximal configuration of rooks is such that no rooks can be added to the chessboard without violating the constraint that each column/row contain only one rook. Question: A back-tracking depth first search will find all maximal configurations. Will a back-tracking breadth-first search do the same? Let's consider an $m \times n$ chessboard that will be inhabited by rooks. As is usual with chess problems in graph theory, each square is represented by a vertex: let $v_{s,t}$ represent the square at row $s$ and column $t$ of the board. Now, let $G$ be an arbitrary graph on the set of vertices $V$ that comprise the squares of the chessboard. We want to fill the chessboard with rooks (said another way: we want to associate rooks with vertices), such that: • Every row and every column contain at most one rook (more formally, if a rook is associated with a vertex $v_{i,j}$, then no rook will be associated with $v_{i,s}$ for $s \in \{1,\ldots,n\}$, $s \ne j$, or with $v_{t,j}$ for $t \in \{1,\ldots,m\}$, $t \ne i$), • The vertices with rooks must induce a connected subgraph of $G$ (hence the reference to a "lifeline" that must connect all rooks). An example of a chessboard with a single rook (indicated by the black vertex) is shown here (as a new user, I cannot embed images). The gray squares show the area covered by the rook - no other rook can be placed on any of the gray squares. Note that I have neglected to color the squares themselves black and white as they should appear on a real chessboard. Edges are indicated by red lines. Let's consider how we might extend the neighborhood of the black node - call it $v$. There are three vertices adjacent to $v$, but we can only place rooks on at most two of them at a time without violating the constraint that only a single rook cover a given row/column. In fact, there are two ways of placing the rooks, as shown here and here. A maximal configuration is a valid placement of rooks (according to our constraints) that cannot be extended by the addition of another rook without violating the constraints. We can enumerate all maximal configurations by performing a back-tracking depth-first search from each node (i.e. each square on the chessboard). With a depth-first search, we only add one rook to the chessboard at a time. Suppose that we perform a back-tracking breadth-first search instead. At each step, we add as many rooks to the board as possible. Of course there possibly are many different ways of adding as many rooks as possible at each step. This is exactly what is done in the two images above: the maximum number of rooks are added in both possible configurations. Will this strategy also enumerate all maximal configurations? - 1 I may be out of my depth (and breadth?)
here, but my understanding is that if you have a (finite) tree then depth-first and breadth-first searches will both eventually visit every node in the tree, just not in the same order. So if depth-first eventually finds everything you're looking for, so will breadth-first, unless you're using those terms with non-standard meanings. Now I'll stop to catch my breadth. – Gerry Myerson Oct 30 2010 at 22:44 That was my first guess too. But the fact that we're adding as many neighbors as possible with each "breadth-first search" step means that we might commit ourselves to rook placements that would make us miss maximal configurations. If, instead of only adding as many neighbors as possible with each BFS step, we considered all possibilities of adding $\{1,\ldots,n\}$ neighbors (under our constraints), where $n$ is the maximum number of neighbors, then we would definitely find all maximal configurations. So, we're being "greedy" in a sense here. I am worried that this might bite us. – winterstream Oct 31 2010 at 17:37 ## 1 Answer The answer is that you have gotten too greedy! I give my squares coordinates in $\mathbb{R}^2$. Place vertices on the points (z,z) for integers z with absolute value less than 100 and also at (-2,2), (2,1) and (-1,-2). Place edges between (z,z) and (z+1,z+1) for all z, and also place edges between ((-2,2) and (0,0)), ((-1,-2) and (0,0)), ((-1,-2) and (1,1)), ((2,1) and (0,0)), ((2,1) and (-1,-1)). So 3 non-diagonal vertices and 5 non-diagonal edges. Obviously the best we can do is place rooks on the diagonal from (-100,-100) to (100,100). Now if you start your search at a vertex on the diagonal at or left of (-1,-1) (resp. at or right of (1,1)) then you won't get to the other side of the diagonal, since at (-1,-1) (resp. (1,1)) you take the edge to (2,1) (resp. (-1,-2)) which cuts you off from (2,2) (resp. (-2,-2)). That also explains why you can't start at (2,1) or (-1,-2). If you start at (-2,2) you are cut off from both diagonals as you can't cross (-2,-2) and (2,2). But if you start at the origin, you take the edge to (2,2) which shows you can't start at the origin either. Thus you got too greedy. Close though, and interesting. - I can use some reputation to post a bounty ... this is a very nice construction. – Orange Nov 5 2010 at 8:01 Hope that you got the reputation boost with the acceptance of the answer! Thanks Orange. – winterstream Jan 19 2011 at 10:11
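For small instances, the back-tracking depth-first search described in the question is straightforward to implement. A rough sketch (the encoding of squares as (row, column) pairs and the tiny example graph are my own choices for illustration):

````python
def maximal_rook_configs(edges):
    """Enumerate maximal connected rook placements by back-tracking DFS.

    `edges` is an iterable of pairs of squares ((r1, c1), (r2, c2)); by assumption
    no edge joins two squares in the same row or column.  A configuration is a set
    of squares, pairwise in distinct rows and columns, inducing a connected
    subgraph, to which no further square can be added.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    maximal = set()

    def extensions(config, rows, cols):
        # Squares adjacent to the current configuration whose row and column are free.
        return {w for v in config for w in adj[v]
                if w not in config and w[0] not in rows and w[1] not in cols}

    def dfs(config, rows, cols):
        ext = extensions(config, rows, cols)
        if not ext:
            maximal.add(frozenset(config))   # nothing can be added: maximal
            return
        for w in ext:                        # branch over every possible next rook
            dfs(config | {w}, rows | {w[0]}, cols | {w[1]})

    for start in adj:                        # start the search from every square
        dfs({start}, {start[0]}, {start[1]})
    return maximal

# Tiny hypothetical example graph.
edges = [((0, 0), (1, 1)), ((1, 1), (2, 2)), ((1, 1), (2, 0))]
for config in maximal_rook_configs(edges):
    print(sorted(config))
````

On this toy graph the search finds two maximal configurations of different sizes (three rooks and two rooks), which is in miniature the phenomenon the answer above exploits: committing to the wrong neighbour early can lock you into a smaller maximal configuration.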
http://mathhelpforum.com/statistics/104172-expected-value.html
# Thread: 1. ## Expected Value A sack contains 20 products, where two are defective. Given that five products are chosen randomly, determine the expected number of defective products. Let $P_k$ be the probability of there being k defective products out of the 5 chosen. $P_k=\frac {_2C_k \cdot _{18}C_{5-k}}{_{20}C_5}$ $\therefore E(X) = 0 \cdot \frac {_2C_0 \cdot _{18}C_5}{_{20}C_5} + 1 \cdot \frac {_2C_1 \cdot _{18}C_4}{_{20}C_5} + 2 \cdot \frac {_2C_2 \cdot _{18}C_3}{_{20}C_5}$ Why is it $k \cdot ...$? 2. Originally Posted by chengbin A sack contains 20 products, where two are defective. Given that five products are chosen randomly, determine the expected number of defective products. Let $P_k$ be the probability of there being k defective products out of the 5 chosen. $P_k=\frac {_2C_k \cdot _{18}C_{5-k}}{_{20}C_5}$ $\therefore E(X) = 0 \cdot \frac {_2C_0 \cdot _{18}C_5}{_{20}C_5} + 1 \cdot \frac {_2C_1 \cdot _{18}C_4}{_{20}C_5} + 2 \cdot \frac {_2C_2 \cdot _{18}C_3}{_{20}C_5}$ Why is it $k \cdot ...$? I don't really understand the trouble here .... k is the number of defectives in the chosen sample. 3. Why is it $k \cdot P_k$? 4. Originally Posted by chengbin Why is it $k \cdot P_k$? This follows directly from the definition of expected value.
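A quick numerical check of this computation (a small sketch using Python's exact binomial coefficients):

````python
# E(X) = sum_k k * P_k for the number of defectives in a sample of 5 from 20,
# where P_k is hypergeometric: C(2,k) * C(18,5-k) / C(20,5).
from math import comb

def p(k):
    return comb(2, k) * comb(18, 5 - k) / comb(20, 5)

expected = sum(k * p(k) for k in range(3))
print(expected)               # 0.5
print(5 * 2 / 20)             # same value: sample size * (defectives / total)
````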
http://mathoverflow.net/questions/68063/are-there-any-very-hard-unlinks
## Are there any very hard unlinks?

This question is closely related to a question of Gowers: http://mathoverflow.net/questions/53471/are-there-any-very-hard-unknots . I'm thinking about how to create interesting knots from small numbers of local moves on unlinks. The "standard embedded n-component unlink" (let's call it the untangled unlink) is defined as a union of n circles in n disjoint 3-balls; and an unlink is a link which is ambient isotopic to an untangled unlink. In order to get some feel for what is possible and to have a non-trivial example to play with, I'm looking for an unlink which is difficult to untangle. Paraphrasing Greg Kuperberg's formulation: Can you untangle any unlink with relatively little work, say a polynomial number of geometric moves of some kind? I'm interested in untangling the components one from another, as opposed to untangling individual components. I mean this very much in the same sense that Tim Gowers meant his question: Is there a geometric algorithm to untangle components of any unlink, which "makes the unlink simpler" (this is intentionally vague) at each stage? Conversely, is there an unlink which, if it were given to me as a physical object (some tangled loops of rope), I would not be able to disentangle one component from the other without considerable ingenuity? At the moment, I am most interested in the question for 2-component links and for 3-component links. It's a bit embarrassing that I have no intuition at all for what the answer to this question might look like - the "link case" seems completely different from the "knot case". Edit: Just an aside: experimentally, it's known that fluids made of long closed molecules flow much faster than fluids made of open molecules, although I don't think that the mathematics behind this is understood. But intuitively, it's clear what's going on - closed molecules don't get tangled up in one another as easily as open ones do. So if hard unlinks exist (and in rheology we're talking unlinks with thousands or millions of components), experimentally we can argue that they must at least be rare. Maybe. - 2 Dynnikov's paper "Arc-presentations of links. Monotonic simplification" (arXiv:0208153) was mentioned several times in answers to the unknot recognition question. The algorithm in that paper can also recognize split links and hence unlinks, and it does so without ever increasing the size of the diagram, but I don't think there are any good (e.g. subexponential) upper bounds known on the number of moves it requires. – Steven Sivek Jun 17 2011 at 15:28 My intuition would say this reduces to the case in which each component is an unknot, as you can make the knotted part of each component very very small... Am I mistaken? – Qfwfq Jun 17 2011 at 15:29 The Haken algorithm works for unlink recognition as well. So I suppose it sets the bar on what "fast" should be -- subexponential in the complexity of the diagram. – Ryan Budney Jun 17 2011 at 16:48 Evaluating the Jones polynomial is #P-hard (at any complex number, except at eight exceptional values). This makes me unoptimistic! – Mariano Suárez-Alvarez Jun 17 2011 at 16:51 1 @Mariano I suspect that we can know whether a knot is trivial long before we can know its Jones polynomial. – Greg Kuperberg Jun 19 2011 at 21:14 ## 2 Answers I conjecture that the answer is yes, that you may undo a split link with polynomially many moves.
This would yield another proof that unlinking is in NP, but it would be somewhat more satisfying, since the certificate would say that you could actually show someone how to tease apart the two components in polynomial time, rather than some abstract normal surface coordinate certificate. I have a vague idea how one might use Dynnikov's argument to prove this. He analyzes a sphere in the complement of an arc presentation for the link, by considering an induced singular foliation from an open book, following a method that Birman and Menasco applied to braids. One ought to be able to bound the complexity of this sphere using normal surface theory. There is a nice triangulation of $S^3$ with $n^2$ tetrahedra which has an arc presentation of the link in the 1-skeleton (think of $S^3$ as the join of the two components of the Hopf link with each component having a cell structure with $n$ 1-cells). I think one ought to be able to relate the number of singularities of the foliation with the complexity of the normal 2-sphere with respect to this triangulation, which is at most exponential in $n^2$. Now Dynnikov shows that one may perform exchange moves to decrease the number of singularities of the foliated sphere. In order to get a polynomial number of moves, one would have to show that the number of singularities could be decreased by a definite factor by each move. I suspect this should be true by analyzing things from the normal surface perspective: in a normal surface with exponentially many cells, there are large swaths of the surface which are parallel. If one could eliminate a singularity in the foliation in one part of the surface, then one ought to be able to eliminate singularities in the parallel sheets, and thus eliminate a definite fraction of the singularities. Hopefully this would be similar to the type of algorithm for counting components of normal curves. Incidentally, Dynnikov's argument is a bit easier to understand in the split link case, since there are no "boundary conditions" to consider for a sphere versus a disk. An algorithm to detect the unlink gives an algorithm to detect the unknot: just push off a parallel unlinked copy of the knot, and see if the resulting two-component link is split. - This is not a direct answer to Daniel's question, but it could be potentially useful. Suppose we replace the unlink in the question by a split link $L$ whose components $K_1, \ldots, K_n$ are prime, non-trivial knots. Then, the $K_i$ are separated by a collection of essential two-spheres $S_j$, but these spheres can of course look extremely complicated in a given diagram of $L$. The analogue of Daniel's question in this context is: Does there exist a fast geometric algorithm that will identify the essential two-spheres in the prime decomposition of $S^3 \setminus L$? Now, there is a paper of Marc Lackenby that provides some insight into this question: ````http://arxiv.org/abs/0805.4706 ```` Starting from an arbitrary diagram of the connected sum $K_1 \sharp \ldots \sharp K_n$, he cuts it into a distant union (split link) $L = K_1 \cup \ldots \cup K_n$, and proves that the essential two-spheres separating these components have to look (relatively) simple in this handle structure.
I am very far from an expert on algorithms, but it seems that knowing the essential two-spheres are not too complicated ought to guide a way to "pull them apart", which is what you're after. -
http://mathoverflow.net/questions/118246/characterising-categories-of-vector-spaces/118297
## Characterising categories of vector spaces

Consider the category $FdVect_k$ of finite dimensional $k$-vector spaces, for some given field. It is abelian, semisimple, in that each object is a finite sum of simple objects (of which there is only one up to isomorphism), and also compact closed with simple tensor unit which is a progenerator. Can we characterise $FdVect_k$ as a category of vector spaces purely by properties of the category such as above? I don't mean to demand, for instance, that $End(I)$ is a field, and I suspect that this may stop any such characterisation. But this I don't mind, seeing as if rings which are 'nice enough' cannot be distinguished from fields in this way, then so be it. I should point out that if someone says 'but what about Morita equivalence?', then I'm not sure that's the right answer, since I'm looking for equivalence as a compact closed semisimple abelian ... category, not just as a bare category - but I may be wrong on this point. - I should point out this tangentially relevant paper: arxiv.org/abs/0807.2927 – David Roberts Jan 7 at 6:56 Let f be an endomorphism of I. The object I is simple, so f has kernel 0 or all of I. If the kernel is 0, then f has a left inverse since every injection is split in a semisimple category. Using images instead of kernels gives right inverses as well. This shows that End(I) is a division ring. – John Wiltshire-Gordon Jan 7 at 7:48 2 I think Morita equivalent rings should have equivalent module categories that preserve all the extra structure in sight as well. – Mike Shulman Jan 7 at 7:59 4 @Mike: What is this extra structure? If $R$ is a non-commutative ring, then $\mathsf{Mod}(R)$ is not monoidal. And for commutative rings Morita-equivalence is just isomorphism. – Martin Brandenburg Jan 7 at 11:52 1 @Mariano: Indeed, or R could be Morita-equivalent to a commutative ring. A more precise (and correct) statement is that the monad corresponding to the forgetful functor $\mathsf{Mod}(R) \to \mathsf{Set}$ is monoidal if and only if $R$ is commutative. – Martin Brandenburg Jan 7 at 17:45 ## 3 Answers Let $C$ be an abelian monoidal category such that $1 \in C$ is simple, and each object is a finite sum of copies of $1$, i.e. isomorphic to $1^{\oplus n}$ for some $n \in \mathbb{N}$. It is well-known that $k:=\mathrm{End}(1)$ is a commutative ring and that $C$ is $k$-linear. By Schur's Lemma $k$ is even a field. Now, consider the functor $\hom(1,-) : C \to \mathsf{Mod}(k)$. It maps $1^{\oplus n}$ to $k^n$, thus factors as an essentially surjective functor $C \to \mathsf{Mod}_f(k)$. It is also fully faithful, because $\hom(1^{\oplus n},1^{\oplus m}) \cong \prod_n \prod_m \hom(1,1) = k^{n \times m}$. It has a canonical lax monoidal structure given by $\hom(1,x) \otimes \hom(1,y) \xrightarrow{\otimes} \hom(1 \otimes 1,x \otimes y) \cong \hom(1,x \otimes y)$. This is an isomorphism: Since both sides commute with finite direct sums in $x$ and $y$, it is enough to verify this for $x=y=1$, where it is clear. Thus, $C \cong \mathsf{Mod}_f(k)$ as monoidal categories. - 6 nice!
related paper: Finite, Connected, Semisimple Rigid Tensor Categories are Linear - Greg Kuperberg – Eduardo Pareja Tobes Jan 7 at 12:03 8 If you add a contravariant identity-on-objects involution to the requirements (complete, compact, semisimple, monoidal with simple unit), the scalars will be an involutive field $k$, and the category will be (equivalent to) that of finite-dimensional $k$-Hilbert spaces. See tac.mta.ca/tac/volumes/22/13/22-13abs.html. – Chris Heunen Jan 7 at 13:18 Thanks for that reference, Chris. This is exactly the sort of thing I'm thinking about, just in the plain vector space setting. – David Roberts Jan 7 at 22:47 So if "C [is] an abelian monoidal category such that 1∈C is simple, and each object is a finite sum of copies of 1", then C is Mod_f(k) for some field k? Excellent. Thanks, Eduardo! Link: arxiv.org/abs/math/0209256 – David Roberts Jan 7 at 23:16 The classification I have proven above is quite trivial, but the one by Greg Kuperberg is more general and profound. – Martin Brandenburg Jan 7 at 23:53 In Schur Functors I, Todd Trimble and I proved that $FdVect_k$ is the free symmetric monoidal $k$-linear Cauchy complete category on no objects. Here a $k$-linear category is Cauchy complete if it has direct sums (that is, biproducts) and all idempotents split. So, we don't need to mention compact closedness, abelianness, semisimplicity.... More precisely: Proposition. For any field $k$, if $C$ is a symmetric monoidal $k$-linear Cauchy complete category, there exists exactly one symmetric monoidal $k$-linear functor $i:FdVect_k \to C$, up to symmetric monoidal $k$-linear isomorphism. - Do we really need that $C$ is Cauchy complete? I think we only need that finite direct sums exist. – Martin Brandenburg Jan 7 at 23:59 The Cauchy completeness assumption is important for our development of the abstract theory of Schur functors; it's not particularly critical for this particular proposition. – Todd Trimble Jan 8 at 7:46 Todd, yes I agree with that. – Martin Brandenburg Jan 9 at 10:16 The category of f.d. vector spaces is the unique fusion category of Perron-Frobenius dimension $1$, if I recall correctly. Pavel Etingof, Dmitri Nikshych and Viktor Ostrik classified categories with prime $\operatorname{PFdim}$ $p$ as $\mathsf{Vec}_{C_p}^\omega$ (twists by cocycles of the cat. of reps of the cyclic group $C_p$), and $1$ should be prime :-) This does start assuming the category is $k$-linear, and you did not want that, though. - (The condition of rigidity that comes with being a fusion category is automatic when the only simple object is the identity object, so this characterization is not that far away from Martin's) – Mariano Suárez-Alvarez Jan 7 at 17:21
http://math.stackexchange.com/questions/144805/exact-sequences-and-proving-the-five-lemma/144809
# Exact sequences and proving the five lemma I am currently trying to understand a proof of the Five lemma. For reference, the Five lemma is as follows: In an abelian category, consider the commutative diagram $A \longrightarrow B \longrightarrow C \longrightarrow D \longrightarrow E$ $\downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\, \downarrow$ $A' \rightarrow\,\, B' \rightarrow\,\, C' \rightarrow\,\, D' \rightarrow\, E'$ with exact rows (in the category theoretic sense), the left vertical arrow epic, the right vertical arrow monic, and the second and fourth vertical arrows isomorphisms. Then the central vertical arrow is also an isomorphism. (You'll have to forgive my diagram, and the one that follows - I couldn't figure out how to TeX a proper commutative diagram. If anyone would care to replace my horrible ad-hoc solution with a properly formatted one then please go ahead.) Now the proof I have starts as follows: We write out the image factorisation of all the horizontal maps. $A \longrightarrow I_1 \longrightarrow B \longrightarrow I_2 \longrightarrow C \longrightarrow I_3 \longrightarrow D \longrightarrow I_4 \longrightarrow E$ $\downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow \,\,\,\,\,\,\,\,\,\,\,\,\downarrow \,\,\,\,\,\,\,\,\,\,\,\,\,\, \downarrow$ $A' \rightarrow\,\, I_1' \rightarrow\,\, B' \rightarrow\,\, I_2' \rightarrow\,\, C' \rightarrow\,\, I_3' \rightarrow\, D' \rightarrow\, I_4' \rightarrow\,\, E'$ Then $I_1 \to I_1'$ and $I_4 \to I_4'$ are epic and monomorphic, therefore isomorphic. Could anyone explain why these maps are epic and monomorphic? It's probably something to do with the image factorisation, but since we're only factorising horizontally I don't see how we can say much about the vertical maps. The horizontal arrows also have some mono/epimorphism adornments (alternating mono/epi), but again I'm afraid I'm not sure how to TeX them. Your explanations would be greatly appreciated - thanks. - 2 – Arturo Magidin May 14 '12 at 0:29 ## 1 Answer You may have seen the decomposition of a long exact sequence into diagonal short exact sequences. If not, I won't try to reproduce it here, but the upshot is that every map in an abelian category decomposes into an epi to the image from the domain and a mono from the image to the codomain. So all you need to do is remember that every square in your long diagram is commutative. Since $A\rightarrow A'$ and $A' \rightarrow I_1'$ are epic, their composition is too, and it equals the composition of $I_1 \rightarrow I'_1$ with $A \rightarrow I_1$. All you need is the fact, which holds in any category and which I bet you can prove yourself if you don't know it, that if $f\circ g$ is an epimorphism, $f$ is as well. So $I_1 \rightarrow I'_1$ is epic. Going around the next square, we find that $(I'_1\rightarrow B')\circ(I_1\rightarrow I'_1)= (B\rightarrow B')\circ(I_1\rightarrow B)$ is monic, and apply the dual of the fact I just mentioned to see $I_1 \rightarrow I'_1$ is also monic. -
I was aware of what the "image factorisation" meant, but I didn't put together the fact that we were using the commutative property of each square and taking into account the vertical maps we do know about to deduce the properties of the intermediate maps. Your answer cleared that up for me, thank you. – Spyam May 14 '12 at 2:24
http://mathoverflow.net/questions/116398/schemes-with-no-nonconstant-maps-to-lower-dimensional-schemes
## Schemes with no nonconstant maps to lower dimensional schemes

Fix an algebraically closed field $k$ (arbitrary characteristic); all schemes will be of finite type over $k$.

(Property *): I'm interested in (classes of) examples of schemes $X$ (irreducible, of dimension $n$) so that any morphism of schemes $\phi: X \rightarrow Y$ with $\dim Y < n$ is constant. There are two examples I know of. Projective spaces $\mathbb{P}^n_k$ have this property and simple abelian varieties do too. (One may also put arbitrary non-reduced structures on these, see below.)

$\textbf{Claim}$: More generally, if $X$ is a proper irreducible scheme so that every effective divisor is ample (so proper = projective), then $X$ has property (*). (Projective spaces have this property by definition, and simple abelian varieties do too by a general result from Mumford's book.)

Proof: (Eisenbud-Harris give a similar argument for the case of projective space.) Let $X$ be as in the theorem and $\phi: X \rightarrow Y$ be a morphism with $Y$ of smaller dimension than $n = \dim X$. Without loss of generality, we may assume $\phi$ is surjective (hence we can pull back Cartier divisors). Choose an effective Cartier divisor $D$ and a point $p \not \in |D|$ (the support of $D$), but in the image of $\phi$ (since $\phi$ is surjective). The pullback of an effective Cartier divisor is also effective and Cartier, hence ample. The pullback of the point will contain a complete curve, hence these two subschemes of $X$ meet by the Nakai-Moishezon criterion, contradicting that $p \not \in |D|$. $\square$

Easy observations

(1) Having property $*$ is not stable under blowing-up (Blow up of $\mathbb{P}^2$ is $\mathbb{P}^1 \times \mathbb{P}^1$.)

(2) If a scheme $X$ satisfies the claim, then by definition, so does $X_{red}$. Further, any thickening of $X$ has property $*$. $\textbf{Proof: }$ To check that an $X$ satisfying the claim satisfies $*$, we did calculations in the intersection ring. This is invariant under changing the non-reduced structure. $\square$

Questions

Main question: Are there other (families of?) examples of schemes satisfying (*)?

(1) Does every scheme (no finiteness conditions!) have a dense affine open subset? This came up when I was thinking about this, and I realized I can't prove it offhand. Certainly it is true for irreducible schemes, and it suffices to show it for connected schemes.

(2) Do you suspect that the only examples also satisfy the claim above? That is, have every effective divisor ample?

(3) Certainly all examples of schemes satisfying $*$ must be connected. Are there connected, but not irreducible examples?

I thought this was a little interesting (and admittedly, I have no applications in mind).

- Dear LMN, Regarding question (1), you will need some finiteness conditions. For example, if $X$ is the disjoint union of countably many copies of Spec $k$ (for some field $k$), then any affine subscheme will be finite, and so $X$ has no dense affine open subscheme. (This is related to the fact that an affine open is quasi-compact, so its closure will tend to want to be quasi-compact too, although I'm not sure in what generality that will literally be true.) Regards, – Emerton Dec 15 at 0:44

"Curves" is another such family! And gives examples of (3). – Allen Knutson Dec 15 at 2:12

If a K3 surface has no elliptic pencils then it has (*).
Indeed, there are no maps to curves of positive genus since pullbacks of 1-forms are zero. – Sasha Dec 15 at 3:03

Your statement about blowing up $\mathbb{P}^2$ needs a little more care (although the basic point stands). – S. Carnahan♦ Dec 15 at 3:33

LMN: $H^{1,0}$ of a K3 surface vanishes. Also, you can't get $\mathbb{P}^1 \times \mathbb{P}^1$ just by blowing up $\mathbb{P}^2$. You have to blow down after two blow-ups. – S. Carnahan♦ Dec 15 at 6:38

## 1 Answer

The point is not that every effective divisor should be ample, but that every semi-ample divisor (= some multiple is basepoint-free) should have maximal Kodaira-Iitaka dimension. Actually this is an equivalent condition, but it is really just a reformulation of the condition you are looking for.

Semi-ample divisors that are not ample lie on the boundary of the ample cone, so a sufficient condition would be something along the lines that the boundary of the ample cone does not contain any (integral) non-zero divisors. One probably has to make some assumptions so this would make sense. Also, this is sufficient, but not necessary. That is, if this condition on the cone holds, then your desired condition holds, but your desired condition may hold even without this. In particular, Mumford (first and then others) gave examples of nef but not semi-ample divisors. Those divisors would lie on the boundary as well.

Anyway, I don't have time to add more details right now, but I will try to do it later. Alternatively, you can consider this a hint and work it out yourself. (Mumford's example is a ruled surface, so that does not work, but I would expect that one should be able to construct an example that satisfies your condition and still has a nef but not semi-ample divisor.)

Sasha's example of a K3 surface falls into this category. If there are no elliptic pencils, which is equivalent to there being no smooth elliptic curve on the surface, then the only non-trivial morphisms of the surface are the contractions of some $(-2)$-curves, but those morphisms are birational, so the target has the same dimension.

For a reducible example take two varieties of the same dimension, both of which have Picard number one and which intersect each other. For instance two intersecting planes. If a morphism maps to something lower-dimensional, it has to be constant on both planes, and since the planes intersect they must map to the same point.

- Isn't Picard number $1$ more a situation where the boundary is just the origin, or trivial, rather than the entire cone? – Will Sawin Dec 15 at 6:23 Will, you're absolutely right. I corrected the relevant sentence. Thanks! – Sándor Kovács Dec 15 at 15:29
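To make the intersection-theoretic step in the Claim's proof explicit (a sketch in the notation above, with $C$ a complete curve contained in the fibre $\phi^{-1}(p)$): since $\phi(C)=\{p\}$ and $p \notin |D|$, the curve $C$ is disjoint from $|\phi^*D| = \phi^{-1}(|D|)$, so

$$(\phi^*D \cdot C) = 0,$$

while $\phi^*D$ is ample by hypothesis and the Nakai-Moishezon criterion then forces $(\phi^*D \cdot C) > 0$, the desired contradiction.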
http://mathhelpforum.com/statistics/60947-k-sided-dice.html
# Thread:

1. ## K-Sided Dice Hi everyone, Looking for a bit of help. If I throw a k-sided die twice, what is the expected value of their sum? I'm looking for a formula for it including k. Thanks

2. Originally Posted by waxandshine Hi everyone, Looking for a bit of help. If I throw a k-sided die twice, what is the expected value of their sum? I'm looking for a formula for it including k. Thanks The expected value of the sum of two RV's is the sum of their expected values. So what is the expected value of a single throw of a k-sided die? CB

3. Well yeah, but there are no values, so it's basically the expected value of one roll times two, but I don't know how to write a formula for the expected value of one roll

4. Originally Posted by waxandshine Well yeah, but there are no values, so it's basically the expected value of one roll times two, but I don't know how to write a formula for the expected value of one roll Fair die, so the probability of each face is 1/k. So the expected value is: $(1)(1/k)+(2)(1/k)+\cdots+(k)(1/k)=(1/k)\sum_{i=1}^k i$ CB
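For completeness, the sum in the last post has a closed form, which answers the original question:

$$\frac{1}{k}\sum_{i=1}^k i = \frac{1}{k}\cdot\frac{k(k+1)}{2} = \frac{k+1}{2},$$

so a single throw has expected value $(k+1)/2$ and the sum of two throws has expected value $k+1$.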
http://crypto.stackexchange.com/tags/oblivious-transfer/hot
# Tag Info

## Hot answers tagged oblivious-transfer

7 ### Why use a 1-2 Oblivious Transfer instead of a 1 out of n Oblivious Transfer? Oblivious transfer is mostly studied as a theoretic construction, as it is an important component in achieving interesting protocols (like secure two-party computation and secure function evaluation). The interest in 1-2 OT is that it is a minimal definition theoretically, and most results that limit themselves to 1-2 are designed to improve some basic ...

6 ### Why use a 1-2 Oblivious Transfer instead of a 1 out of n Oblivious Transfer? The security of 1-n OT is a function of the security of a 1-2 OT. So in analysis it is easy to use 1-2 OT for security proofs. A 1-n OT is essentially a multiple run of a 1-2 OT. (somewhat like a byte is made of 8 bits) So IMO the question is like asking why use bits when you can use bytes for communication. [it depends on the application]

3 ### Randomized Oblivious Transfer No, as written, your protocol doesn't work -- the problem is that Bob is supposed to be allowed to choose $b$, while your protocol selects a random one for him. However, it is close -- here is a modification that I believe does work: First, suppose Alice has her values $(x_0, x_1)$, and Bob has his bit $b$. They run their Random functionality R, and so Alice ...

2 ### Are there any differences between PIR, oblivious transfer and differential privacy? The other answers are good but I thought I would systemize the differences with a single example. Say Bob has a database with 10 entries of the form {name, salary} and Alice would like to query it. With PIR, Alice can retrieve any entry or entries of her choosing (say the 8th entry) without Bob learning which one. The trivial PIR is Alice just retrieves ...

2 ### Are there any differences between PIR, oblivious transfer and differential privacy? In differential privacy the concern is to protect the privacy of a single row of the database. Informally, the DP concept says that everything that can be learned from the database could be learned without access to that row. In a more technical sense, a mechanism respects this property if the distribution of the answers is almost identical (in a very strict ...

2 ### Are there any differences between PIR, oblivious transfer and differential privacy? There is a slight distinction between PIR and OT. From Wikipedia: PIR is a weaker version of 1-out-of-n oblivious transfer, where it is also required that the user should not get information about other database items. In other words, OT is stronger in that the receiver only gets what is requested. Differential privacy is new to me, so I'll read up ...

2 ### Is there a group of prime order which could fit the CT-Computational Diffie-Hellman assumption? The usual technique for having a group of prime size $q$ is to work modulo a prime $p$ such that $q$ divides $p-1$. The target group is then the subgroup of $q$-th roots of $1$ in $\mathbb{Z}_p$. To build such a group, first choose $q$, then select random values $r$ until you find one such that $p = qr+1$ is prime. This is the way it is defined in the DSA ...
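The last answer's recipe (fix $q$, then search for $r$ with $p = qr+1$ prime, as in DSA parameter generation) is easy to prototype. Below is a hedged Python sketch; the toy parameter sizes and the use of sympy's primality helpers are my own illustrative choices and are far too small for real cryptographic use.

```python
# Sketch: build a subgroup of prime order q inside Z_p^*, following the
# "choose q, then search for r with p = q*r + 1 prime" recipe above.
import random
from sympy import isprime, randprime

def prime_order_group(qbits=64):
    q = randprime(2**(qbits - 1), 2**qbits)   # desired (prime) group order
    while True:
        r = 2 * random.getrandbits(qbits)     # r must be even so p = q*r + 1 is odd
        p = q * r + 1
        if isprime(p):
            break
    while True:
        h = random.randrange(2, p - 1)
        g = pow(h, r, p)                      # g has order dividing q
        if g != 1:                            # q is prime, so the order is exactly q
            return p, q, g

p, q, g = prime_order_group()
assert pow(g, q, p) == 1                      # g generates a subgroup of order q
```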
http://math.stackexchange.com/questions/tagged/convex-optimization?page=5&sort=newest&pagesize=15
# Tagged Questions Convex Optimization is a special case of mathematical optimization. It includes Linear Programming and least-squares. 0answers 36 views ### Duality gap in cone programming Let $K\subset \mathbb{R}^2$ be a closed convex and pointed cone, $A$ be a $2\times 2$ square matrix and $b, c\in \mathbb{R}^2$. Consider the problem (P)\quad \min\{\langle c, x\rangle: Ax\geq_K ... 1answer 63 views ### How does the two phase method for linear programs work… I understand that by adding artificial variables the problem can be reformulated as a new problem where the "starting point" is readily found. What I don't get is how when this extended problem is ... 1answer 40 views ### Why can't the hyperplane H intersected with polyhedral set S contain any line… S is the polyhedral set $S = \{ \mathbf{x} \in \mathbb{R}^{n} ; \mathbf{Ax}=\mathbf{b}, \mathbf{x} \ge \mathbf{0} \}$ and $H : \mathbf{c}^{T}\mathbf{x} = \beta$ with \$ \min_S ( ... 1answer 139 views ### Global Min-Max Optimization When is \begin{equation} \min_X \max_Y f(X,Y) \end{equation} globally solvable? (i.e. we can find global solution for the optimization problem?) I am not looking for reformulations. Is it only when ... 3answers 74 views ### Proof of Convexity Is the function $Trace(AX^TBX)$ a convex function in $X$ or not ? Here, $X$ is a rectangular matrix and $A,B$ are square, symmetric, p.s.d matrices. The entries in $X,A,B$ are real valued. 1answer 160 views ### Joint Convexity Is the problem \begin{equation} \min_X \max_Y -\operatorname{tr}(X^TY)-\operatorname{tr}(Y^TYX) \end{equation} Jointly convex in $X$ and $Y$? Can we solve it globally? Why or Why not? $X$ and $Y$ ... 1answer 67 views ### Convex Sets Versus Convex Functions Can we specify all convex sets, in terms of convex constraints (convex inequality functions) on a variable? 1answer 120 views ### What Stopping Criteria to Use in Projected Gradient Descent Suppose we want to solve a convex constrained minimization problem. We want to use projected gradient descent. If there was no constraint the stopping condition for a gradient descent algorithm would ... 0answers 30 views ### General properties of an optimal solution of a convex program How do we seek certain properties for a solution of a convex minimization problem. For example we want to make sure if the below objective has a symmetric optimal solution: \begin{equation} \min_X ... 1answer 153 views ### Strict convex function? I try to prove that $g(x)= K |x|^2/2 + z(x)$ is strictly convex, given that $z(x) \geq - m(1 + |x|^p)$ with $m \geq 0$, $0 \leq p \leq 2$, forall $x \in \mathbb{R}^n$, provided $K$ is sufficiently ... 3answers 58 views ### lower bound of a special type of convex functions Suppose $f$ is a convex, differentiable and $\|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|$. The minimum of $f$ is $0$. ($f$ may not be twice differentiable.) How to show \$f(x)\geq\frac{1}{2L}\|\nabla ... 1answer 36 views ### Concave optimal value? Let $A \in \mathbb{R}^{n \times m}$ and $B \in \mathbb{R}^{m \times n}$. Consider a compact set $C \subset \mathbb{R}^n$. For all $x \in C$ define f(x) := \min_{y \in \mathbb{R}^m} \{ x^\top A y ... 1answer 43 views ### Simple question about the solution of non-linear equations Given, say $4$ non linear equations with $4$ positive parameters, $$f_1(x,y,z,t)=a,\quad f_2(x,y,z,t)=b,\quad f_3(x,y,z,t)=c,\quad f_4(x,y,z,t)=d$$ for given $a,b,c,d$, If I am able to show that ... 
0answers 68 views ### Max Quadratic Expression Let $A \in \mathbb{R}^{n \times n}$, $A = A^\top$, $B \in \mathbb{R}^{m \times n}$, and $\mathcal{C} \subset \mathbb{R}^n$ be a compact, convex set. For $A$ not negative semidefinite, how to globally ... 1answer 65 views ### A point which I couldnt understand in a paper. Currently I am reading a paper and the author has an optimization problem $$\max_w\frac{w^2\alpha}{w^2\beta+v}$$ Then he substitutes $w^2$ with $x$ and defines an objective function using a ... 0answers 125 views ### Convex Functions on 2 variables over an interval It is required to show that $f(x) = x_1x_2$ is a convex function on $[a,ma]^T$ where $a\ge 0$ and $m\ge1$.To show convexity we need to show that for $\lambda \in [0,1]$: \$f(\lambda x + (1-\lambda ... 1answer 38 views ### Are all polytopes also convex hulls? It seems, at least in the 2-D case, that all polytopes are going to be convex. Does this hold if the dimensions are increased? 0answers 45 views ### Prove that $\text{int}(\text{dom}(f))$ is a convex set. Let $f$ be a convex function. I have to prove that $\text{int}(\text{dom}(f))$ is a convex set. (Be careful with $-∞$ ) 2answers 405 views ### Please explain the intuition behind the dual problem in optimization. I've studied convex optimization pretty carefully, but don't feel that I have yet "grokked" the dual problem. Here are some questions I would like to understand more deeply/clearly/simply: 1) How ... 0answers 61 views ### Explain $x^*$ of subgradient in KKT -conditions: primal optima or dual optima? I asked this question here but I noticed that this notation $x^*$ may actually mean two things: primal optimality and dual optimality. Please, explain this notation particularly here: I understand ... 1answer 56 views ### Explain Complementary Slackness $\mu_i g_i(x^*)=0\forall i$ Wikipedia here explains it like this: I understand it so that either $\mu_i=0$ or $g_i=0$ but this answer here: "If μ1≠0 and μ2≠0, then x is one of the two points at the intersection of the two ... 1answer 77 views ### Finding an $O(n \log n)$ time algorithm for an optimization problem Consider the following optimization problem: Let $n$ be even and let $c$ be a positive vector in $\mathbb{R}^n$. Find \min\left\{c^T x : (x \geq 0) \text{ and } \left(\forall S \subseteq [n], \ ... 0answers 28 views ### using the ellipsoid algorithm to find a poly time algorithm for the optimization problem Consider the following optimization problem: Let $n$ be even and let $c, x$ be positive vectors in $\mathbb{R}^n.$ Find $\min(c^Tx)$ for $\sum_S x_i\geq 1,$ for any $S\subset \{1,...,n\}$ with \$|S| ... 1answer 38 views ### property of cones and their duals I am reading Convex Optimization by Boyd and Vandenberghe (free at http://www.stanford.edu/~boyd/cvxbook/) and I am trying to justifying their assertion (p. 53) that if $K$ is a proper cone, $K^*$ is ... 0answers 99 views ### sufficient condition for KKT problems For the Karush-Kuhn Tucker optimsation problem, Wikipedia notes that: "The necessary conditions are sufficient for optimality if the objective function f and the inequality constraints g_j are ... 3answers 215 views ### Prove $ax - x\log(x)$ is convex? How do you prove a function like $ax - x\log(x)$ is convex? The definition doesn't seem to work easily due to the non-linearity of the log function. Any ideas? 1answer 90 views ### What does the statement “Optimality condition for convex problem” mean? KKT or other condition? 
I am stuck to the problem 4 here, course Mat-2.3139, the due day was yesterday. The hint is "Optimality-condition for a convex-problem". I have asked this now from 3 assistants and everyone with ... 1answer 59 views ### Matrix computations problem: rank, pseudo inverse,… Suppose we are given two arbitrary $m \times n$ matrices, $A$, $B$, where we know $B$ has full column rank. Let $m>>n$. Can we always find a square $m \times m$ matrix $X$, such that $A=XB$? I ... 0answers 70 views ### Is positively weighted sum of eigenvalues of a matrix X, convex function of X? Is positively weighted sum of eigenvalues of a matrix X, convex function of the matrix X? 0answers 235 views ### Calculation/Estimate of Lipschitz Constant for Strictly Convex Function I have a strictly convex function $f(\bf{x}) = \dfrac{1}{2}\bf{x'Ax + b'x}$ where $\bf{f} : \mathbb{R^n} \rightarrow \mathbb{R}$ and I was wondering how I can find/estimate the Lipschitz ... 1answer 141 views ### Why does a positive definite matrix defines a convex cone? I've been working on convex optimization and got stuck. What exactly does a positive definite(p.d) matrix represent geometrically ? what kind of vector space it forms ? If I have a p.d matrix which ... 1answer 87 views ### Convex Combination of Hermitian Matrices Assume all the matrices I discuss about are $N \times N$. Consider any two hermitian matrices $A_1$ and $A_2$ which are indefinite. The question is, In general, for any $A_1$ and $A_2$, does there ... 1answer 85 views ### Positive values for a set of quadratic forms of Hermitian Matrices. (To find a set of vectors in which a hermitian matrix is positive definite) Assume all matrices I discuss about are $N \times N$ and the vectors conform with dimensions. Consider the following set of Quadratic inequalities where all the matrices $A_i$ are hermitian. ... 1answer 207 views ### Lagrangian Multipliers I have a fundamental question about Lagrange multipliers. Here it is: I have a function to maximize with respect to a parameter say $\theta$, subject to two constraints. Lets assume that the first ... 1answer 20 views ### unstable optimizer, stable objective I am trying to minimize a convex objective numerically using gradient descent. I select the starting point randomly. I repeat the experiment multiple times. The optimal objective value I get each time ... 0answers 95 views ### Upper bound for L1-L2 optimization problem I am interested in the following convex optimization problem: \begin{align*} \max & ||x||_1 \\ \text{s.t.} & ||x-a||_1 \le K \\ & ||b\circ x||_2 \le 1\\ & x \in R^n \end{align*} where ... 0answers 142 views ### Global optimum of sum of convex functions Take two real differentiable convex functions, $f_1$ and $f_2$, defined on the unit interval $[0; 1]$. I want to find the global optimum of: $\min_{x \in [0;1]} af_1(x)+bf_2(x)$, for given \$a, b \in ... 2answers 39 views ### Convexity of a function and constraint Consider the quadratic function $f(x_1,x_2,x_3,x_4)=x_1+2x_2+4x_4+x_1^2+5x_2^2+3x_3^2x_4^2-4x_1x_2-2x_2x_3+2x_3x_4$. Is f a convex function? Consider a constraint defined using the above function f: ... 2answers 136 views ### definition of strongly convex There are several equivalent definitions for strongly convex. For example, some literature said: A function $f$ is strongly convex with modulus $c$ if either of the following holds f(\alpha ... 
0answers 71 views ### Branch-and-Price algorithms for IP/MIP I'm trying to do research into Branch-and-Price algorithms, which generally rely on Branch-and-Bound and column generation (typically Dantzig-Wolfe decomposition) to solve integer and mixed-integer ... 1answer 74 views ### what math topic is this kind of example part of? or what is needed to understand how to solve it? [closed] we 100000000 sets/locations. each set has, A = % chance of finding a cure (there are many different types of cures) for cancer B = time it takes to extract a cure to caner C = the optimal % chance (IN ... 0answers 89 views ### minimum of the function over symmetric body Let $X$ be a normed space. Let $K$ be centrally symmetric convex body with $M$ known vertices and with $Vol(K)=V$. We subdivide body $K$ be $M$ equal pieces, $P_i, i=1, \ldots, M$, such that each ... 1answer 69 views ### Parametric Linear Program: Continuous Solution? Consider the parametric linear problem $$x^*(\theta) := \min_{Y , \ Z } \left\| Z \right\|_1$$ $$\text{sub. to: } \ \theta A + B Y = \theta C Z.$$ where $Y \in \mathbb{R}^{m \times s}$, \$Z \in ... 1answer 135 views ### Numerical optimization with nonlinear equality constraints A problem that often comes up is minimizing a function $f(x_1,\ldots,x_n)$ under a constraint $g(x_1\ldots,x_n)=0$. In general this problem is very hard. When $f$ is convex and $g$ is affine, there ... 1answer 85 views ### Gradient of Moreau-Yosida Regularization Let $f(x):\Re^n\rightarrow \Re$ be a proper and closed convex function. Its Moreau-Yosida regularization is defined as $F(x)=\min_yf(y)+\frac{1}{2}\|y-x\|_2^2$ \$Prox_f(x)=\arg\min_y ... 0answers 28 views ### construction of a packing for polytope Let $C=[-1,1]^n$ and $H$ be a plane with equation $\sum_{i=1}^nr_i=s, 1\le s\le n.$ (Here $r_i$ are such that $Proba(r_i=1)=Proba(r_i=-1)=1/2$). The intersection $C \cap H$ is a polytope, $P(n, s)$. ... 3answers 302 views ### Why is the affine hull of the unit circle R^2? In Boyd's "Convex Optimization" it defines the affine hull of a subset $C$ of $R^n$ as \text{aff} C = \left\{\theta_1 x_1 + \ldots +\theta_k x_k \mid x_1, \ldots x_k \in C, \theta_1 + \ldots ... 0answers 93 views ### Formulation and solution of non-linear optimzation problem with inequality constraints I'd like to know if the following problem is well formulated and has solutions. I'm very new to the subject of nonlinear optimization with inequality constraints ('teaching myself the Kuhn-Tucker ... 1answer 79 views ### Intuition behind gradient VS curvature In Newton's method, one computes the gradient of a cost function, (the 'slope') as well as its hessian matrix, (ie, second derivative of the cost function, or 'curvature'). I understand the intuition, ... 3answers 184 views ### Summary of Optimization Methods. Context: So in a lot of my self-studies, I come across ways to solve problems that involve optimization of some objective function. (I am coming from signal processing background). Anyway, I seem to ...
http://mathhelpforum.com/calculus/23045-quick-complex-analysis-query.html
# Thread:

1. ## Quick complex analysis query Hi, I'm just working through examples for revision and I've come across one that I've worked but am not sure if it's correct. If anyone could check it would be much appreciated: Expand 1/z in a power series about z=1. OK, you can write 1/z as 1/[1+(z-1)] in this case, since the expansion is about 1. Then: $1/[1+(z-1)] = 1\cdot[1-(z-1)+(z-1)^2-\cdots] = \sum_{k=0}^{\infty}(-1)^k(z-1)^k$ Now is this sufficient for the question or have I left anything vitally important out? I am not great at the subject so I need to make sure anything I'm using as study exercises is working out correctly!

2. Originally Posted by musicmental85 Quick complex analysis query Hi, I'm just working through examples for revision and I've come across one that I've worked but am not sure if it's correct. If anyone could check it would be much appreciated: Expand 1/z in a power series about z=1 $1/z = 1/[1+(z-1)] = 1-(z-1)+(z-1)^2-(z-1)^3+...$

3. Perfect, that's what I thought, but I know I'll be looking at it later closer to the exams in January and I don't want to learn something incorrect. I don't even need the sum section then, do I?
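If you want to double-check an expansion like this mechanically, here is an optional Python/sympy sketch (my own addition for checking, not part of the thread); note the series is the geometric series in $(z-1)$ and converges for $|z-1| < 1$.

```python
# Verify that 1/z agrees with the alternating geometric series in (z - 1)
# up to the stated order.
import sympy as sp

z = sp.symbols('z')
partial_sum = sum((-1)**k * (z - 1)**k for k in range(6))
series = sp.series(1/z, z, 1, 6).removeO()     # Taylor expansion of 1/z about z = 1
assert sp.simplify(series - partial_sum) == 0
```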
http://physics.stackexchange.com/tags/porous-media/hot
# Tag Info

## Hot answers tagged porous-media

15 ### When water climbs up a piece of paper, where is the energy coming from? The surface of any fluid has an associated energy-per-unit-area, known as the surface energy, a.k.a. surface tension. This energy is not a property of the fluid alone, but of the fluid and the medium it is in contact with. In your case you would have associated surface energies for the water-air interface, $e_{wa}$, as well as for the water-paper interface, ...

5 ### When water climbs up a piece of paper, where is the energy coming from? The physics behind this is the same capillary action that causes water to move up narrow cylindrical channels. Tissue paper is extremely porous, but the pores are sufficiently narrow that the cohesion between water molecules (actually driven by the Coulomb interaction, since water molecules are polar) and adhesion between the water and the surfaces of the ...

5 ### How can one build a multi-scale physics model of fluid flow phenomena? Your problem is highly nontrivial. The theoretical tool to be used is the renormalization group, which extracts the relevant dynamics of the large scales of the system. But if we were able to use it "in a blind way", then we would have a technique to study the macroscopic dynamics of any microscopic system... and this would make a lot of my colleagues ...

3 ### Laws of fluid flow in porous medium The keyword for porous flows is Darcy flow, which is based on the Darcy law governing the field of mass flux instead of velocity: $$\vec{q}=-K\nabla p$$ $\vec{q}:=\vec{u} \cdot \text{porosity}$, so it is equivalent to normal flow only for homogeneous media. This equation comes from averaging the NS equations over the porous volume; however, as you can see, it is very ...

1 ### Darcy law yields extreme speed for gas flow through packed spheres? For the input parameters provided, your velocity estimate is reasonable but probably not accurate. The reason is that Darcy's law assumes Stokes (small Reynolds number) flow. For the parameters provided, together with a density of 1 kg/m3, and substituting a flow length scale of 0.001 m, the Reynolds number reaches a value around 10. This means that the flow ...

1 ### When water climbs up a piece of paper, where is the energy coming from? The previous answer tells you why the water moves up but doesn't explain where the energy comes from. In order for water to move up and thereby gain gravitational potential energy, you need to have some energy loss somewhere else to compensate. Some of the energy comes from the random molecular movement of heat which extends the edge of the water itself up ...
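As a quick numerical companion to the Darcy-law and Reynolds-number answers above, here is a hedged Python sketch; every number in it is an assumed illustrative value (not taken from the threads), chosen only to show how the small-Reynolds-number check is applied.

```python
# Darcy flux q = -(k/mu) * dp/dx (permeability form of q = -K grad p), followed by
# the Reynolds-number sanity check that decides whether the Stokes-flow regime
# behind Darcy's law is plausible.  All parameter values are assumptions.
def darcy_flux(k, mu, dp_dx):
    """Superficial flux [m/s] for permeability k [m^2] and viscosity mu [Pa s]."""
    return -(k / mu) * dp_dx

def reynolds(rho, v, length, mu):
    """Reynolds number for density rho, speed v, length scale, viscosity mu."""
    return rho * abs(v) * length / mu

q = darcy_flux(k=1e-10, mu=1.8e-5, dp_dx=-1e5)        # assumed coarse packing, air, 1 bar/m
Re = reynolds(rho=1.0, v=q, length=1e-3, mu=1.8e-5)   # assumed pore/grain scale of 1 mm
print(f"Darcy flux ~ {q:.2f} m/s, Re ~ {Re:.0f}")
if Re > 1:
    print("Re is not small: the creeping-flow assumption behind Darcy's law is questionable.")
```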
http://crypto.stackexchange.com/questions/1499/is-eke-attackable-by-a-brute-force-password-search?answertab=oldest
# Is EKE attackable by a brute-force password search?

So I'm trying to properly implement the EKE protocol and I'm using C# with the Windows CNG and ECDH key exchange. I need to use this because it's FIPS certified and all that jazz. What I understand about EKE is that it depends on the public keys being completely randomized, therefore not verifiable plaintext. Otherwise it would be easy to run a dictionary attack against the protocol because the public keys are encrypted using the pre-known password.

Here's my problem: I noticed that in the CNG algorithm, the public key is a large byte array and it is completely randomized except that the first 6 bytes are always the same. From there on it's completely randomized. This leads me to believe that if I wanted to run a dictionary attack against that, all I would need to know is what the first 6 bytes were and I could validate that I was able to decrypt that public key or maybe even brute force it. To protect against this, I could easily truncate off the first 6 bytes and encrypt the rest of the completely randomized byte array with the password and then just add those 6 bytes back on either endpoint to calculate the secret key. I'm pretty sure that's right but I'm discovering that cryptographic systems are fiendishly tricky and I wanted to run it past you guys to make sure I'm thinking about this correctly.

- I added some links to your question and chose a better title - please check if I understood it right (and linked to the right stuff). Feel free to edit again. – Paŭlo Ebermann♦ Dec 16 '11 at 21:59

## 2 Answers

Doing EKE over an EC group is tricky (and is something that RFC6124 avoids). The problem, as you note, is preventing an attacker from being able to determine whether a possible decryption is impossible (and hence he can remove that potential password from the list); that turns out to be considerably more involved than you would expect. Even if you skip the first six bytes, well, the EC X and Y coordinates are not actually randomized (even though they look random); these coordinates are actually integers (if you're doing even-characteristic EC, field elements, don't worry about the distinction) which are related by an equation (perhaps $y^2 + ax = x^3 + b \mod p$) that's easy to check; this would easily allow an attacker to do a dictionary attack.

Now, all this can be handled; you can just use the X coordinate (ECDH still works just fine; however, your ECDH package might not support that), and you can deal with the problem that about half of the bit patterns are not possible EC X coordinates (and that's also easy to check); you could modify the encryption algorithm to deal with that as well. On the other hand, all this is rather tricky for someone who isn't used to working with Elliptic Curves. I would strongly recommend that you use either EKE using a MODP group (as RFC6124 has; don't forget you have to use the nonstandard generators that RFC6124 has), or go with SRP.

- The "proper" way to do EKE on a generic group (be it an elliptic curve or any other group) is to do the encryption part by "adding" a group element which is a hash of the password (actually, a hash of a structure containing the password and the names of the two entities involved in the operation, so as to avoid issues with attackers trying to play with several authentication sessions by distinct entities). So you need a function $H$ which takes as input the password and outputs a (deterministic) pseudo-random point.
One way is the following: for an $n$-bit curve of prime order (e.g. the P-256 curve from NIST: coordinates are integers modulo a 256-bit prime $p$, and the whole curve has prime order, so you want a random point on the curve), use the password as the seed of a PRNG, to produce pairs $(x,r)$ where $x$ is an integer modulo $p$, and $r$ is either 0 or 1. Then, using the curve equation, try to find a point $(x,y)$ on the curve, such that the least significant bit of $y$ is $r$ (the curve yields $y^2 = x^3 + ax + b$; you then choose the square root which matches $r$). If the putative $y^2$ is not a square, use the next $(x,r)$ pair from the PRNG, and so on.

Then the EKE goes thus: • Alice chooses her random $a$ modulo $q$ ($q$ is the curve order) and sends $A = aG+H(\pi)$ (for the password $\pi$). • Bob chooses $b$ and sends $B = bG+H(\pi)$. • Alice computes $K = a(B-H(\pi))$. • Bob computes $K = b(A-H(\pi))$.

The main difficulty is to have an $H$ which is close enough to a "random oracle" (the EKE described above is provable in the random oracle model). In practice, you need to be able to compute square roots modulo $p$ (this is easy if $p \equiv 3 \pmod 4$). -
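Here is a hedged, toy-sized Python sketch of just the hash-to-point step described in the second answer (try password-derived $x$ candidates until $x^3+ax+b$ is a square, then pick the root whose parity matches the extra bit). The tiny prime and curve coefficients are made-up illustration values with no security whatsoever, and the full protocol would additionally need real point addition for $aG + H(\pi)$, which is not shown.

```python
# Toy try-and-increment hash-to-point on y^2 = x^3 + a*x + b (mod p), p = 3 mod 4.
import hashlib

p = 10007                # small prime with p % 4 == 3, so sqrt is a single pow()
a, b = 2, 3              # assumed toy curve coefficients

def hash_to_point(password: str):
    ctr = 0
    while True:
        digest = hashlib.sha256(f"{password}|{ctr}".encode()).digest()
        x = int.from_bytes(digest[:-1], "big") % p
        r = digest[-1] & 1                      # requested parity of y
        rhs = (x * x * x + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)           # candidate square root
        if (y * y) % p == rhs:                  # rhs really is a quadratic residue
            if y % 2 != r:
                y = (p - y) % p                 # take the root with the right parity
            return x, y
        ctr += 1                                # otherwise, try the next candidate x

x, y = hash_to_point("correct horse battery staple")
assert (y * y - (x ** 3 + a * x + b)) % p == 0  # the derived point lies on the curve
```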
http://math.stackexchange.com/questions/50484/convert-number-from-one-base-to-another
# Convert number from one base to another

I have the following number: $$5.2880001*10^{-4}$$ Now I want to convert this number to the format $X*2^y$. How do I do it? Thank you.

- Are there any constraints on X and y? Otherwise you can always take X to be your number (0.00052880001) and y to be 0. – ShreevatsaR Jul 9 '11 at 9:34

## 2 Answers

Assuming that you want to follow a convention corresponding to the one that's used for decimal numbers (namely that X has a single non-zero digit before the decimal/binary point), you need $$\begin{eqnarray} y&=&\lfloor\log_2s\rfloor\;,\\ X&=&2^{-y}s\;, \end{eqnarray}$$ where $s$ is your number, $\log_2$ is the base-$2$ logarithm and $\lfloor\cdot\rfloor$ is the floor function, which yields the greatest integer not greater than its argument. If you don't have means for calculating logarithms available, you can just multiply or divide (in this case multiply) your number by $2$ until you cross $1$ to determine $y$. -

I assume you want $1 \le X \lt 2$ and $y$ to be an integer. You could write your expression as $0.00052880001\times 2^0$ or $0.00105760002\times 2^{-1}$ or $0.00211520004\times 2^{-2}$ and so on by doubling the left part and changing the index on the right, including $1.08298242048\times 2^{-11}$. -
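A quick numerical cross-check of both answers (my own addition, assuming Python is at hand): math.frexp already returns a mantissa/exponent pair, and rescaling it by a factor of 2 gives the normalised form $1 \le X < 2$.

```python
# s == m * 2**e with 0.5 <= m < 1, so X = 2*m and y = e - 1 gives 1 <= X < 2.
import math

s = 5.2880001e-4
m, e = math.frexp(s)
X, y = 2 * m, e - 1
print(X, y)                                    # roughly 1.08298242048 and -11
assert math.isclose(X * 2**y, s) and 1 <= X < 2
```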
http://math.stackexchange.com/questions/245573/find-the-upper-and-lower-limits-of-function
# Find the upper and lower limits of function

$xf(x)=\frac{1}{2}(\cos(x^2)-\cos[(x+1)^2]+r(x))$, $|r(x)|<c/x$ for a constant $c$. Find the upper and lower limits of $xf(x)$ as $x\to \infty$.

I'm a bit confused. My solution $(1,-1)$ and my friend's $(\sin\frac{1}{2}, -\sin\frac{1}{2})$ are different. Is it possible that this question has two answers (because of looseness of bounds)? Please let me know how to get the exact answer.

- If you write function names like $\sin$ out as text, they're interpreted as a juxtaposition of variable names and get formatted accordingly (e.g. italicized). You can get the proper font and spacing for functions like $\sin$ by using the predefined commands like \sin. If you need a function for which there's no predefined command, you can use \operatorname{name}. – joriki Nov 27 '12 at 10:12

## 2 Answers

We can rewrite the equation using the sum-to-product formula for the cosine. Then $$x f(x) = \frac{1}{2}\left[2 \sin\left(\frac{x^2+(x+1)^2}{2}\right) \sin\left(\frac{(x+1)^2 - x^2}{2}\right) \right]+\frac{r(x)}{2}\\ x f(x) = \sin\left(\frac{x^2+(x+1)^2}{2}\right) \sin\left(\frac{(x+1)^2 - x^2}{2}\right) + \frac{r(x)}{2}$$ The second term is bounded by $c/(2x)$, so it decays as $x\rightarrow\infty$ and is thus irrelevant. The minima and maxima are determined by the envelope of the beat pattern defined by the product of sines. The amplitude is unity, such that the minimum is -1 and the maximum is +1 as you mentioned. In the plot attached to the original answer the envelope is drawn as a dashed red curve, the maxima as solid blue lines, and the function $x f(x)$ in black. - This is not a proof though. – Tunococ Nov 27 '12 at 12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9542063474655151, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=213959
compactness and continuity.

I need to prove that for every continuous function f:X->X of a compact metric space X which satisfies, for each two different x and y in X, p(f(x),f(y))<p(x,y), where p is the metric on X, there's a fixed point, i.e. there exists x0 s.t. f(x0)=x0. Obviously I thought of assuming there isn't such a point, i.e. that for every x in X, f(x)!=x. Now because X is compact and it's a metric space, this is equivalent to sequential compactness, i.e. that every sequence in X has a subsequence that converges to some x0. Now consider $$p(f(x_{n_k}),x_{n_k})$$; because they are not equal, there exists e0 such that: $$p(f(x_{n_k}),x_{n_k})>=e0$$ Now if x_{n_k}=f(y_{n_k}), y_{n_k}!=x_{n_k}, we can write it as: p(x_{n_k},x0)+p(x0,y_{n_k})>=p(x_{n_k},y_{n_k})>e0. Now if y_{n_k} were converging to x0, it would be easier; not sure how to proceed... what do you think?

Recognitions: Homework Help Science Advisor This is a special case of the Banach contraction mapping theorem. A proof would go as follows: Let x0 be any point in X, and let xn=f(xn-1) for n>1. Claim: {x_n} converges. Post back if you need more hints.

Well, a Cauchy sequence obviously will do here. $$p(f(x_n),f(x_{n_k}))<p(x_n,x_{n_k})=p(f(x_{n-1}),f(x_{n_k-1}))<p(x_{n-1},x_{n_k-1})<\cdots<p(x_0,x_{n_k-n})$$ Now if x_{n_k-n} converges to x_0 (which can be assumed because it's compact and metric), it will be easy to prove your claim, because then for every e>0, for k big enough: p(x_0,x_{n_k-1})<e/2 and also p(x0,x_{n_k-n})<e/2, so p(x_n-1,x0)<=p(f(x_n),f(x_n_k))+p(f(x_n_k),x0)<...<e. Is this wrong? I have a sneaky suspicion that yes.

Recognitions: Homework Help Science Advisor compactness and continuity. If n > n_k, then $x_{n_k - n}$ doesn't make sense. Also I don't see how you can assume that "x_{n_k-n} converges to x_0", because it isn't true. You started out with the right idea. Let n>m, and consider p(xn, xm). Show that we can make this arbitrarily small. This would imply that {xn} is Cauchy and hence convergent (because X is compact). Then we can use the continuity of f to conclude that f has a fixed point (how?).

Well, p(xn,xm)<p(xn-1,xm-1)<\cdots<p(xn-m,x0); now how do I proceed from here? I mean, if I assume n-m is big enough, s.t. x_n-m->x0, then that will do; not sure that this is correct... I have another two questions: I need to answer whether the next spaces satisfy S2 or Sep; the spaces, with the metrics attached to them, are listed here: http://www.math.tau.ac.il/~shustin/c...ar5top.xet.pdf in questions 4,5 (disregard the Hebrew words near them). Well, what I think is that because if a space is metric and satisfies Sep then it also satisfies S2, and always when S2 is satisfied then also Sep is satisfied, it's easy to check for Sep; I think it follows that the first space satisfies both of them, while the second doesn't satisfy either of them. Not sure how to argue that? I mean, can I find a countable basis for C^k[0,1]? or a countable dense set in it? what do you think?

Recognitions: Homework Help Science Advisor
To finish off, you can use the triangle inequality $$\rho(x_{n-m},x_0) < \rho(x_{n-m},x_{n-m-1}) + \rho(x_{n-m-1}, x_{n-m-2}) + \cdots + \rho(x_1, x_0)$$ coupled with the observation that $$\rho(x_n, x_{n-1}) < \rho(x_1, x_0)$$.

Recognitions: Homework Help Science Advisor As for your other question: I'm guessing S_2 means second countable (has a countable basis) and Sep means separable (has a countable dense subset), right? And you have the right idea: a metric space is separable iff it's second countable. I would use separability here. For C^k[0,1], try to see if Weierstrass's theorem is helpful. For l_2, I would think about the subspace consisting of sequences with only finitely many nonzero terms. This is certainly dense in l_2, but is it countable? No. So how about we restrict these sequences to those with rational terms?

It seems that Munkres has similar questions with hints, which were helpful.
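For completeness, here is a standard compactness argument for the original fixed-point question (a different route from the iteration suggested in the thread, added for reference): since $g(x) = \rho(x, f(x))$ is continuous on the compact space $X$, it attains its minimum at some $x_0 \in X$. If $f(x_0) \neq x_0$, then

$$g(f(x_0)) = \rho(f(x_0), f(f(x_0))) < \rho(x_0, f(x_0)) = g(x_0),$$

contradicting minimality, so $f(x_0) = x_0$. The fixed point is also unique: two distinct fixed points $x \neq y$ would give $\rho(x,y) = \rho(f(x),f(y)) < \rho(x,y)$.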
http://mathhelpforum.com/math-topics/4153-few-general-questions-inquiries.html
# Thread: 1. ## A few general questions and inquiries Hey everyone, I just joined and have a few math questions and "inquires" about certain things regarding math! I had a quiz recently . The first question was regarding circles: $x^2+64$, find the radius (I'm pretty sure it was radius, if not, please feel free to correct me). Can someone explain this? Is the product of two binomials always a trinomial? Explain please. Can someone give me some links/information to the following math related subjects: • Completing the Square • Equation of a Circle • Difference of Squares • Dividing, multiplying exponents • Solving negative exponents I'll add more 'topics' as they come to mind. Thanks for the help, greatly appreciated. - $NineZeroFive$ 2. ## Completing the Square Say you are trying to solve the equation $x^2+4x-12=y$ and you are trying to solve for x when y=0 $x^2+4x-12=0\quad\Rightarrow\quad x^2+4x=12$ alright, now you have an equation, all you need to do is factor, the easiest way to do that would be by adding a number such that the left hand side becomes a perfect square, this number is $\left(\frac{b}{2}\right)^2$ b in this case is 4 (it's the coefficient of x) so solve...(remember to add to both sides) $\left(\frac{b}{2}\right)^2=\left(\frac{4}{2}\right )^2=4$ $x^2+4x=12$ $x^2+4x+4=12+4$ $(x+2)^2=16$ $\sqrt{(x+2)^2}=\sqrt{16}$ $x+2=\pm4$ $x+2=4\quad x+2=-4$ $x=2\quad x=-6$ if you would like an explanation on why this works, please tell me. ~ $Q\!u\!i\!c\!k$ 3. Originally Posted by Quick Say you are trying to solve the equation $x^2+4x-12=y$ and you are trying to solve for x when y=0 $x^2+4x-12=0\quad\Rightarrow\quad x^2+4x=12$ alright, now you have an equation, all you need to do is factor, the easiest way to do that would be by adding a number such that the left hand side becomes a perfect square, this number is $\left(\frac{b}{2}\right)^2$ b in this case is 4 (it's the coefficient of x) so solve...(remember to add to both sides) $\left(\frac{b}{2}\right)^2=\left(\frac{4}{2}\right )^2=4$ $x^2+4x=12$ $x^2+4x+4=12+4$ $(x+2)^2=16$ $\sqrt{(x+2)^2}=\sqrt{16}$ $x+2=\pm4$ $x+2=4\quad x+2=-4$ $x=2\quad x=-6$ if you would like an explanation on why this works, please tell me. ~ $Q\!u\!i\!c\!k$ Thanks for the help, an explanation would be great. Also, would anyone know a formula relating to Quadratic Functions or Geometry (Circles) which uses $\pm$, I learned it a few months ago, just forgot what it was all about. - $NineZeroFive$ 4. ## Quadratic Formula Originally Posted by 905 Thanks for the help, an explanation would be great. $(x+d)^2=(x+d)(x+d)=x^2+dx+dx+d^2=x^2+2dx+d^2$ the equation $x^2+2dx+d^2$ is usually written as $ax^2+bx+c$ now to find $d$ we could divide $b$ by 2, $\left(\frac{b}{2}\right)$ so now we know what $d$ equals. to find $c$ we need to square $d$, so $c=\left(\frac{b}{2}\right)^2$ now remember when we added $c=\left(\frac{b}{2}\right)^2$ to both sides in my last post? you'll notice that the left hand side didn't have $c$ so we added it on. Originally Posted by 905 Also, would anyone know a formula relating to Quadratic Functions or Geometry (Circles) which uses $\pm$, I learned it a few months ago, just forgot what it was all about. - $NineZeroFive$ There is a "quadratic formula" $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ that you can use to solve quadratic functions. 
FYI the quadratic formula uses the "completing the square" method, watch: your given equation: $ax^2+bx+c=0$ completing the square only works when the coefficient of x^2 is 1, so we divide by $a$: $x^2+\frac{b}{a}x+\frac{c}{a}=0$ subtract $\frac{c}{a}$ from both sides: $x^2+\frac{b}{a}x=-\frac{c}{a}$ complete the square: $x^2+\frac{b}{a}x+\left(\frac{b}{2a}\right)^2=\left(\frac{b}{2a}\right)^2-\frac{c}{a}$ factor: $\left(x+\frac{b}{2a}\right)^2=\left(\frac{b}{2a}\right)^2-\frac{c}{a}$ simplify: $\left(x+\frac{b}{2a}\right)^2=\frac{b^2}{4a^2}-\frac{4ac}{4a^2}$ combine fractions: $\left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2}$ find the square root of both sides: $x+\frac{b}{2a}=\pm\sqrt{\frac{b^2-4ac}{4a^2}}$ simplify: $x+\frac{b}{2a}=\pm\frac{\sqrt{b^2-4ac}}{2a}$ subtract $\frac{b}{2a}$ from both sides: $x=-\frac{b}{2a}\pm\frac{\sqrt{b^2-4ac}}{2a}$ combine fractions: $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ ~ $Q\!u\!i\!c\!k$

5. Thanks, exactly what I was looking for. A few more things: $ax^2+bx+c=0$ To solve for $c$, doesn't it have to be $c$ on the left side? What's the equation $ax^2+bx=c$ then? I've seen it around, don't really know what it means. Here's another question I remember getting: $x^2+25$ How many solutions? Is it a complete square? I'm guessing yes, since x^2=x and 25=5. Correct? Also, would you know the answer to this: $x^2+64$ What is the radius?

6. I can't answer your questions completely until you tell me what the expressions equal. Originally Posted by 905 Here's another question I remember getting: $x^2+25$ How many solutions? Is it a complete square? I'm guessing yes, since x^2=x and 25=5. Correct? No it's not, always remember the rule $\sqrt{a+b}\neq\sqrt{a}+\sqrt{b}$ Originally Posted by 905 Also, would you know the answer to this: $x^2+64$ What is the radius? as you can see below, this is NOT a circle. however, $y^2+x^2=64$ is. Attached Thumbnails

7. Thanks, my memory's probably flawed. You said $y=x^2+5$ is not a complete square. Can you explain why that is and how many solutions it has? What is the equation $ax^2+bx=c$? It's the same thing as $ax^2+bx+c=0$ except we're solving for $c$ this time, correct? Shouldn't $c$ be negative since it's brought over to the other side? Thanks, -NineZeroFive

8. Originally Posted by Quick No it's not, always remember the rule $\sqrt{a+b}\neq\sqrt{a}+\sqrt{b}$ Yes, I understand this, but: $x^2+64$ How do I find out if it is a perfect square? Don't you square both terms to find out if it is a perfect square?

9. You are an inquisitive lil' bugger Originally Posted by 905 Thanks, my memory's probably flawed. You said $y=x^2+5$ is not a complete square. Can you explain why that is and how many solutions it has? there are no solutions, a good way to tell solutions is by using the discriminant of the equation $b^2-4ac$ if the answer is negative, there are no solutions, positive, two solutions, zero yields one solution. Let's try it... $b^2-4ac=0^2-4(1)(5)=0-20=-20$ there are no solutions to your equation What is the equation $ax^2+bx=c$? It's the same thing as $ax^2+bx+c=0$ except we're solving for $c$ this time, correct? Shouldn't $c$ be negative since it's brought over to the other side? Thanks, -NineZeroFive I have never seen the equation $ax^2+bx=c$ P.S. I've enclosed an excel program I made for school, only type in boxes with a tab in them (if you hold your mouse over the box then a message will say that you can type in it) Attached Files • Chapter 10.xls (51.5 KB, 32 views)

10.
Originally Posted by Quick You are an inquisitive lil' bugger there are no solutions, a good way to tell solutions is by using the discriminant of the equation $b^2-4ac$ if the answer is negative, there are no solutions, positive, two solutions, zero yields one solution. Let's try it... $b^2-4ac=0^2-4(1)(5)=0-20=-20$ there are no solutions to your equation I have never seen the equation $ax^2+bx=c$ P.S. I've enclosed an excel program I made for school, only type in boxes with a tab in them (if you hold your mouse over the box than a message will say that you can type in it) I'm still a bit confused. You said their were two solutions, then you said there were none. Hmmm.. EDIT: Nevermind, I think I understand the equation as well. Since $c$ is not known (whether it is positive or negative)as it a constant, it remains "postive". In a regular situation if it was positive on the right side, it's sign would change before going to the other side. I think I understand now, Thanks again, -NineZeroFive 11. Originally Posted by NineZeroFive EDIT: Nevermind, I think I understand the equation as well. Since $c$ is not known (whether it is positive or negative)as it a constant, it remains "postive". In a regular situation if it was positive on the right side, it's sign would change before going to the other side. I think I understand now, Thanks again, -NineZeroFive I actually edited that wikipedia entry just to get rid of your confusion. ~ $Q\!u\!i\!c\!k$ 12. ## Rules of exponents $x^a\times x^b=x^{(a+b)}$ $x^a\div x^b=x^{(a-b)}$ $(x^a)^b=x^{(a\times b)}$ $x^{-a}=\frac{1}{x^a}$ If you want an explanation of why these rules work, don't hesitate to ask. ~ $Q\!u\!i\!c\!k$ 13. Originally Posted by Quick $x^a\times x^b=x^{(a+b)}$ $x^a\div x^b=x^{(a-b)}$ $(x^a)^b=x^{(a\times b)}$ $x^{-a}=\frac{1}{x^a}$ If you want an explanation of why these rules work, don't hesitate to ask. ~ $Q\!u\!i\!c\!k$ Sure, why not. I know a few of those already, just dont know why. I've learn't alot today, Thanks again. -NineZeroFive 14. Originally Posted by NineZeroFive Sure, why not. I know a few of those already, just dont know why. I've learn't alot today, Thanks again. -NineZeroFive Apparently I'm a good teacher anyway, $x^a\times x^b=x^{(a+b)}$ let's do a sample equation... $a^3\times a^2=a^5$ write it out: $\underbrace{\overbrace{a\times a\times a}^{a^3}\times \overbrace{a\times a}^{a^2}}_{a^5}$ next rule: $x^a\div x^b=x^{(a-b)}$ back to our sample equation: $a^3\div a^2=a^1$ write it out: $\frac{a^3}{a^2}=\frac{a\times a\times a}{a\times a}=\frac{\not a\times \not a\times a}{\not a\times \not a}=a^1$ next rule: $(x^a)^b=x^{(a\times b)}$ our sample equation: $(a^3)^2=a^6$ write it out: $(a^3)^2=a^3\times a^3=\underbrace{\overbrace{a\times a\times a}^{a^3}\times \overbrace{a\times a\times a}^{a^3}}_{a^6}$ next rule: $x^{-a}=\frac{1}{x^a}$ this one is not so easy to explain, but here I go... apply rule #2 to our sample equation: $a^2\div a^3=a^{(2-3)}=a^{-1}$ write it out: $\frac{a\times a}{a\times a\times a}=\frac{\not a\times \not a}{\not a\times \not a\times a}=\frac{1}{a}$ that is why $a^{-1}=\frac{1}{a}$ ~ $Q\!u\!i\!c\!k$
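For readers following along, the completing-the-square and discriminant rules above can be checked with a few lines of code. This is an added sketch, not part of the original thread; the helper name `solve_quadratic` is chosen here purely for illustration.

```python
import math

def solve_quadratic(a, b, c):
    """Real solutions of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        return []                      # negative discriminant: no real solutions
    if disc == 0:
        return [-b / (2 * a)]          # zero discriminant: one repeated solution
    root = math.sqrt(disc)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

print(solve_quadratic(1, 4, -12))   # [2.0, -6.0], matching the worked example
print(solve_quadratic(1, 0, 5))     # [], since the discriminant is -20
```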
http://math.stackexchange.com/questions/239369/why-do-people-fit-polynomials?answertab=active
# Why do people fit polynomials? Could someone explain the justification and limits of fitting polynomials to arbitrary data points? I mean what about square roots or fractional or inverse powers? Most of the time some wants to improve a linear fit, they rather include quadric terms than anything else. Can you think of a mathematical justification why plain powers are favoured? I assume it might be related to function approximation Taylor series. Suppose I wanted to include fractional powers, would it rather make sense to include $x^a$ or $x^{0.5}+a\cdot x^{1.5}$ for "common real world data". Or do rational polynomials do better and I'd rather try these first? Maybe someone can elaborate the power of these method to approximate an unknown function in data. - Polynominals are smooth on whole $\mathbb R$, but $\sqrt{x} = x^{\frac 1 2}$ isn't differentiable at 0 for example. – Stefan Nov 17 '12 at 17:56 Ease of manipulation, smoothness(?). Further, almost everything can be written out as polynomials using Taylor Expansion. – Inquest Nov 17 '12 at 17:56 @Inquest: Not everything, so the question is why is the "almost" part more relevant for the "real world". – Gerenuk Nov 17 '12 at 18:04 @Stefan: Can this be translated into an argument why differentiability at zero is important for real world data? I suppose that is because real world shouldnt have infinite slopes? – Gerenuk Nov 17 '12 at 18:06 Well, Taylor approximation is such a good tool, and for it to work you need your function to be smooth, if you can't differentiate at a point, you can approximate. And of course infinite slopes don't make sense in real world application. – Stefan Nov 17 '12 at 18:13 show 5 more comments ## 2 Answers There are lots of theoretical results telling us that approximation by polynomials works well for various classes of functions, and even telling us what the maximum approximation error will be. For example, there's the Stone-Weiertrass theorem mentioned in the other answer, plus the "Jackson" theorems and many others in constructive approximation: http://en.wikipedia.org/wiki/Constructive_function_theory There are fast reliable easy-to-implement algorithms for computing the approximations. For some good examples, look at the Chebfun system, which basically does everything by computing high-degree polynomial approximations: http://www2.maths.ox.ac.uk/chebfun/ Once you have a polynomial, it's relatively easy (and inexpensive) to calculate function values, derivatives, integrals, zeros, bounds, and so on. Again, see Chebfun for examples. In some fields (like computer-aided design), polynomial forms are considered "standard", and using anything else causes data exchange problems. Rational approximations will sometimes work better than polynomial ones (in the sense that you get a smaller error with no increase in the degrees of freedom of the approximant). But optimal rational approximations are much harder to compute, and, once you have them, they are more difficult to handle (harder to integrate, for example). Polynomial approximation is not always the best choice (nothing is), but it's often a pretty good one, and it's a reasonable thing to try unless the nature of your specific problem suggests something different. - The square root itself is approximated with polynomial functions see here (for example for computations), so polynomials are here a bit more 'fundamental'. 
By the Stone–Weierstrass theorem every continuous function defined on a closed interval [a,b] can be uniformly approximated as closely as desired by a polynomial function. For numerical quadrature the definite integral of polynomials is exact, so if you approximate your function with polynomials the only step where approximation errors happens is the polynomial approximation... However, polynomials are not the only suitable basis for approximating continuous functions. For example $\sin(nx)$ and $\cos(nx)$ are very popular. Additionally if your functions are continuous but maybe have values that are not differentiable then in fact wavelets are a more modern alternative... EDIT: as to fitting points... no, polynomials are not usually the top choice. For points the function to be fitted depends very much on your problem domain. For example if the points represent populations growth then usually exponential functions are fitted. Also it matters how many dimensions your data has. High dimension point fitting problems (or clustering or seperating the points etc) go more in the direction of machine learning than approximation... -
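As a concrete illustration of the answers above, here is a small least-squares polynomial-fitting sketch with NumPy (added here; the data set and degrees are invented for the example).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50)
y = np.sqrt(x + 1.2) + 0.02 * rng.standard_normal(x.size)   # some smooth noisy data

for degree in (1, 2, 4, 8):
    coeffs = np.polyfit(x, y, degree)              # least-squares polynomial fit
    resid = y - np.polyval(coeffs, x)
    rms = np.sqrt(np.mean(resid ** 2))
    print(degree, rms)   # the least-squares residual can only shrink as the degree grows
```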
http://mathhelpforum.com/calculus/100160-finding-minimum.html
# Thread: 1. ## finding minimum $AC=\frac{TC}{x}=\frac{2000}{x}+20+\frac{10}{\sqrt{ x}}$ So i have to show that there is no minimum. thus i have to differentiate the function, which apparently gives this $\frac{dAC}{dx}=AC'= -\frac{2000}{x^2}-\frac{5}{\sqrt[3]{}x^3}$ I don't know how they got that answer , can someone please explain? 2. Originally Posted by el123 $AC=\frac{TC}{x}=\frac{2000}{x}+20+\frac{10}{\sqrt{ x}}$ So i have to show that there is no minimum. thus i have to differentiate the function, which apparently gives this $\frac{dAC}{dx}=AC'= -\frac{2000}{x^2}-\frac{5}{\sqrt[3]{}x^3}$ I don't know how they got that answer , can someone please explain? $AC = \frac{2000}{x}+20+\frac{10}{\sqrt{x}}$ $AC = 2000x^{-1} + 20 + 10x^{-\frac{1}{2}}$ use the power rule ... $AC' = -2000x^{-2} - 5x^{-\frac{3}{2}}$ rewrite ... $AC' = -\frac{2000}{x^2}-\frac{5}{\textcolor{red}{\sqrt{x^3}}}$ note there is no "cube" root in the denominator of the last term. 3. Thanks.
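For anyone wanting to double-check the differentiation above, here is a short added sketch using sympy (not part of the original thread).

```python
import sympy as sp

x = sp.symbols('x', positive=True)
AC = 2000/x + 20 + 10/sp.sqrt(x)

dAC = sp.diff(AC, x)
print(dAC)             # -2000/x**2 - 5/x**(3/2), matching the power-rule answer
print(dAC.subs(x, 4))  # -1005/8: the derivative is negative, e.g. at x = 4
```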
http://math.stackexchange.com/questions/260149/with-sample-variance-find-interval-for-population-variance
With sample variance, find interval for population variance The variance in a random sample of 16 is 2.5. What is a 95% confidence interval for the variance of the population, assuming the population is normally distributed? - 3 Andy: You are putting questions on the site rapidly and without any explanation about where you are stuck, what you tried, what you know. This is not the way to proceed, see the How to ask page. – Did Dec 16 '12 at 18:57 1 Answer I vaguely remember seeing this question here before... $$\frac{(n-1)S^2}{\sigma^2} = \frac{1}{\sigma^2}\sum_{i=1}^n (X_i - \bar X)^2 \sim \chi^2_{n-1},\text{ where }\bar X = \frac{X_1+\cdots+X_n}{n}.$$ So find $A,B$ such that $\Pr(\chi_{n-1}^2 >B) = \Pr(\chi^2_{n-1}<A) = 0.05/2$. Then $\Pr(A < \chi^2_{n-1}<B) = 0.95$. $$\Pr\left( A < \frac{(n-1)S^2}{\sigma^2} < B \right) = 0.95.$$ $$\Pr\left( \frac{(n-1)S^2}{B} < \sigma^2 < \frac{(n-1)S^2}{A} \right) =0.95.$$ (I'll let you fill in the details of algebra, etc.) There you have it. -
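Plugging in $n=16$ and $S^2=2.5$, the interval can be evaluated numerically; the following is an added sketch using SciPy, not part of the original answer.

```python
from scipy.stats import chi2

n, s2, alpha = 16, 2.5, 0.05
A = chi2.ppf(alpha / 2, df=n - 1)        # lower chi-square quantile
B = chi2.ppf(1 - alpha / 2, df=n - 1)    # upper chi-square quantile

lower = (n - 1) * s2 / B
upper = (n - 1) * s2 / A
print(lower, upper)    # roughly (1.36, 5.99)
```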
http://math.stackexchange.com/questions/26772/birthday-coverage-problem/26775
# Birthday-coverage problem I heard an interesting question recently: What is the minimum number of people required to make it more likely than not that all 365 possible birthdays are covered? Monte Carlo simulation suggests 2287 ($\pm 1$, I think). More generally, with $p$ people, what is the probability that for each of the 365 days of the year, there is at least one person in the group with that birthday? (Yes, ignoring the leap-day.) - ## 3 Answers For the coupon collector's problem with $n$ objects, let $T$ be the number of trials needed to get a complete set. Then we have the formula $$P(T\leq k)=n^{-k}\ n!\ \left\lbrace {k\atop n}\right\rbrace.$$ Here the braces indicate Stirling numbers of the second kind. With $n=365$, Maple gives me $P(T\leq 2286)=.4994$ while $P(T\leq 2287)=.5003$, so that $2287$ is the smallest number to give a 50% chance to get all 365 birthdays. - 2 (+1) for exact approach. – cardinal Mar 13 '11 at 20:39 3 Thanks, but I admit that approximations like those you provide are often more useful. – Byron Schmuland Mar 13 '11 at 20:40 3 It's always good to see both. It can provide intuition in both directions. – cardinal Mar 13 '11 at 20:42 1 – Henry Mar 13 '11 at 21:34 6 @Henry Just in case, I mention you should not worry about the relative popularities of answers on MSE and the like. To ponder on the ways votes accumulate on some answers and not on others is simply a waste of time, believe me. – Did Mar 14 '11 at 6:50 show 5 more comments As Ross also states, this can be framed as a coupon collector problem where birthdays correspond to coupons and individuals correspond to trials. Then, good bounds and a strong asymptotic result are both known for the time (number of trials) needed to collect at least one coupon of each unique type. Let $T$ be the number of trials at which all coupons have been collected. Let there be $n$ coupons (so $n = 365$ in our case). For each $n$, and any $c > 0$, $$\mathbb{P}(T > n \log n + c n) < e^{-c}$$ and, also, $$\mathbb{P}(T < n \log n - c n) < e^{-c} \> .$$ From the first inequality we get that no more than 2407 trials are needed so that there is greater than 50% chance all coupons are collected, and from the second we know that at least 1900 are needed. For more details, see the recent question and answer here. The following classical asymptotic result holds for $c \in \mathbb{R}$, $$\mathbb{P}(T > n \log n + c n ) \to 1 - e^{-e^{-c}} .$$ See, e.g., Motwani and Raghavan, Randomized Algorithms, pp. 60--63 for a proof. Note that if you plug in $c = -\log \log 2$ here, one gets 2287.240 which matches very closely to Byron's exact answer and the Monte Carlo estimate reported in the question. - 3 I would be interested to know about the reason for the downvote. I don't post answers very often, preferring to comment where possible, instead. Hence, I try to take great care when answering---even continually revisiting what I've written in an effort to improve. Suggestions for improvement are welcome. – cardinal Mar 14 '11 at 12:19 1 I dunno who or why about the downvote, but I've upvoted your answer to acknowledge its merit. – Mike Jones Aug 25 '11 at 20:02 This is the Coupon collector problem The expected value to cover them all (not quite what you asked) is $365 \sum_{i=1}^{365}\frac{1}{i}\approx 365n \ln 365 + 365\gamma + \frac{1}{2}$ - +1 Ahh, right, of course it's the coupon collector problem. 
$365\sum_{i=1}^{365}\frac{1}{i}\approx 2364.65$, so I'm pretty sure the expected number of people is not the same as the number of people to have a better-than-50% chance, though. – Isaac Mar 13 '11 at 20:23 1 @Isaac: I would think the expected number is higher than the number to have 50% chance as the tail can go on a long way. The numerics quoted support this. – Ross Millikan Mar 14 '11 at 1:05
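A quick Monte Carlo check of the numbers quoted above (added here as a sketch; the trial count is arbitrary, so the estimates near the 50% point are noisy):

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_all_covered(people, days=365, trials=10000):
    """Estimate P(all `days` birthdays appear among `people` uniform birthdays)."""
    hits = 0
    for _ in range(trials):
        sample = rng.integers(0, days, size=people)
        if np.unique(sample).size == days:
            hits += 1
    return hits / trials

for people in (2286, 2287, 2288):
    print(people, prob_all_covered(people))   # estimates straddling 0.5 near 2287
```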
http://electronics.stackexchange.com/questions/45341/potential-difference-across-one-resistor-with-and-without-a-known-current
# Potential difference across one resistor with and without a known current? Okay imagine you have a voltage supply of 10V and one Resistor of 5 Ohms. Now find the current : I=5/10 = 0.5A So potential difference across that resistor : V=IR=10V .. which proves this statement I found in my lecture note : If no internal resistance is present in voltage supply, the potential difference across the resistor is equal to supply voltage. Now imagine the same circuit but total current is given as 0.1A . The potential difference is V=IR=0.1*5=0.5V, which basically means the statement above is incorrect. However, my question is regarding the supplied current : what does it mean when the current is given or the need to find it ourselves ?? Is it possible to have the same voltage supply but with a different current supply ? Let me know if you need more clarification - If the current is 0.1 A the supply voltage must be 0.5 V. – Leon Heller Oct 23 '12 at 20:52 Or there's nonzero internal resistance in the supply. – Dave Tweed Oct 23 '12 at 20:54 But how come the example shows the supply voltage is 10V ? – Region Oct 23 '12 at 20:54 1 The supply must have internal resistance. – Leon Heller Oct 23 '12 at 21:14 ## 2 Answers V=IR=10V .. which proves this statement I found in my lecture note If no internal resistance is present in voltage supply, the potential difference across the resistor is equal to supply voltage. You don't need to measure anything to prove this statement. It's a simple consequence of Kirchoff's voltage law. If you have a perfect 10 V voltage supply, no matter what you connect across it, the voltage across that element will be 10 V. Now imagine the same circuit but total current is given as 0.1A . The potential difference is V=IR=0.1*5=0.5V, which basically means the statement above is incorrect. I'll assume you know that your supply has an open-circuit voltage of 10 V, but you don't know the internal resistance. If you measure 0.1 A, then you know the total resistance is 100 Ohms. This total resistance is made up of the supply's internal resistance and your external load (5 Ohms). Therefore you know the internal resistance is 95 Ohms. - Okay imagine you have a voltage supply of 10V and one Resistor of 5 Ohms. Now find the current : I=5/10 = 0.5A Wrong: if V = 10 V and R = 5 Ohms, then the current is given by Ohm's law: $$I = \frac{V}{R} = \frac{10}{5} = 2 \mathrm{\, A}$$ That, given that the voltage supply is ideal, which means that it doesn't have any internal resistance. If no internal resistance is present in voltage supply, the potential difference across the resistor is equal to supply voltage. This means that if the source is not ideal, it will show a series resistance which will add to the load causing a voltage divider that will lower the voltage on the load and dissipate some power. It will look like this: The voltage is given by: $$V_{load} = V_{supply} \cdot \frac{R_{load}}{R_{supply} + R_{load}}$$ Now, let's assume your book tells you that the current is now 0.1 A. The supply voltage will be still 10 V, and your load will still be 5 Ohms. But you now have a non-ideal supply, which will have the series resistance. Then your current is given by: $$I = \frac{V_{supply}}{R_{supply}+R_{load}} = 0.1 \mathrm{\, A}$$ and therefore $$R_{supply} = \frac{V_{supply}}{I} - R_{load} = \frac{10}{0.1} - 5 = 95 \mathrm{\, \Omega}$$ -
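The arithmetic from the answers can be collected in a few lines; this is an added sketch, with the function name chosen here only for illustration.

```python
def load_voltage(v_supply, r_supply, r_load):
    """Voltage across the load of a supply with internal resistance (voltage divider)."""
    return v_supply * r_load / (r_supply + r_load)

v_supply, r_load, i_measured = 10.0, 5.0, 0.1
r_supply = v_supply / i_measured - r_load        # 95 ohms of internal resistance
print(r_supply)
print(load_voltage(v_supply, r_supply, r_load))  # 0.5 V across the 5-ohm load
```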
http://en.m.wikibooks.org/wiki/Differential_Geometry/Curvature_and_Osculating_Circle
# Differential Geometry/Curvature and Osculating Circle Consider a curve $C$ of class of at least 2, parametrized by the arc length parameter, $f(s)$. The magnitude of $f''(s)$ is called the curvature of the curve $C$ at the point $f(s)$. The multiplicative inverse of the curvature is called the radius of curvature. The curvature is 0 at every point if and only if the curve is a straight line. Suppose that the curvature is always 0. Then $f''(s)$ is always 0, which proves that it is a straight line through elementary integrations. We can also consider the normal vector $f''(s)$ to be the curvature vector. The point that is away from $f(s)$ by a distance of the radius of curvature in the direction of the principal normal unit vector is called the center of curvature of the point $f(s)$ and the circle with the center on the center of curvature and with the radius as the radius of curvature is called the osculating circle at the point $f(s)$. It is very obvious that the unit tangent vector at the point $f(s)$ is tangent to the osculating circle at $f(s)$.
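As an added sanity check of these definitions (not part of the original article), a circle of radius $r$ parametrized by arc length should have curvature $1/r$, so its radius of curvature is $r$ and its osculating circle is the circle itself; a short sympy verification:

```python
import sympy as sp

s, r = sp.symbols('s r', positive=True)
f = sp.Matrix([r * sp.cos(s / r), r * sp.sin(s / r)])   # arc-length parametrization of a circle

speed = sp.simplify(sp.sqrt(f.diff(s).dot(f.diff(s))))
curvature = sp.simplify(sp.sqrt(f.diff(s, 2).dot(f.diff(s, 2))))
print(speed)       # 1, confirming the parameter is arc length
print(curvature)   # 1/r
```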
http://math.stackexchange.com/questions/87784/when-are-theta-and-sin-theta-circ-both-rational
# When are $\theta$ and $\sin\theta^\circ$ both rational? [duplicate] Possible Duplicate: Sine values being rational I'm guessing that if I look in Ivan Niven's elementary book on irrational numbers, I'll find the answer to this quickly, but I'm posting it here in case people find it useful. For what rational values of $x/\pi$ is $\sin x$ rational? Obviously $\sin 0$, $\sin (\pi/6)$, $\sin (\pi/2)$ and their counterparts in the other quadrants will do it. I believe I've seen it asserted by someone who should know, that those are the only ones. How is that proved? - – André Nicolas Dec 2 '11 at 17:18 Maybe the difference is that this question explicitly asks for a proof of that claim. (But the one to defend that this is not a duplicate should be, first of all, the OP.) – Martin Sleziak Dec 2 '11 at 17:18 It's beginning to look as if this should be closed as a (nearly?) exact duplicate. I was moved to ask this by an earlier similar question posted today, but the answers getting posted there make it appear that it's being treated as if the questioner meant just what I asked here. – Michael Hardy Dec 2 '11 at 19:08 ## marked as duplicate by GEdgar, Ross Millikan, Srivatsan, t.b., lhfDec 2 '11 at 19:57 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 1 Answer Those are the only ones. I'll answer the question for cosine instead: $2 \cos \frac{2 \pi k}{n}$ is the sum of two algebraic integers $\zeta_n + \zeta_n^{-1} = e^{ \frac{2 \pi i k}{n} } + e^{ - \frac{2 \pi i k}{n} }$, hence an algebraic integer, so it is rational if and only if it is an integer. Hence $\cos \frac{2 \pi k}{n} = 0, \pm \frac{1}{2}, \pm 1$. In fact, more can be said. $\mathbb{Q}(\cos \frac{2 \pi k}{n})$ is the real subfield of $\mathbb{Q}(\zeta_n)$, hence has degree $\frac{\varphi(n)}{2}$ over $\mathbb{Q}$. - 2 – robjohn♦ Dec 2 '11 at 19:37
http://unapologetic.wordpress.com/2012/09/21/the-submodule-of-invariants/?like=1&_wpnonce=7a8d289a6e
# The Unapologetic Mathematician

## The Submodule of Invariants

If $V$ is a module of a Lie algebra $L$, there is one submodule that turns out to be rather interesting: the submodule $V^0$ of vectors $v\in V$ such that $x\cdot v=0$ for all $x\in L$. We call these vectors "invariants" of $L$. As an illustration of how interesting these are, consider the modules we looked at last time. What are the invariant linear maps $\hom(V,W)^0$ from one module $V$ to another $W$? We consider the action of $x\in L$ on a linear map $f$: $\displaystyle\left[x\cdot f\right](v)=x\cdot f(v)-f(x\cdot v)=0$ Or, in other words: $\displaystyle x\cdot f(v)=f(x\cdot v)$ That is, a linear map $f\in\hom(V,W)$ is invariant if and only if it intertwines the actions on $V$ and $W$. That is, $\hom_\mathbb{F}(V,W)^0=\hom_L(V,W)$. Next, consider the bilinear forms on $L$. Here we calculate $\displaystyle\begin{aligned}\left[y\cdot B\right](x,z)&=-B([y,x],z)-B(x,[y,z])\\&=B([x,y],z)-B(x,[y,z])=0\end{aligned}$ That is, a bilinear form is invariant if and only if it is associative, in the sense that the Killing form is: $B([x,y],z)=B(x,[y,z])$
http://mathhelpforum.com/calculus/165596-horizontal-tangent-implicit-differentiation.html
Thread: 1. Horizontal Tangent - Implicit Differentiation a curve in the plane is given by the following equation x^4+2x^2y+3y^2=6 A) Find an expression for dy/dx using implicit differentiation B) Find the four points on the curve where the tangent line is horizontal Ok I know how to solve for the derivative. I got dy/dx = (4x^3-4xy)/(2x^2+6y) However Im not quite sure how to solve when the tangent is horizontal. I that the tangent is horizontal when the derivative equals 0. However I don't know how solve for this when it is an implicit derivative. 2. Originally Posted by rawkstar a curve in the plane is given by the following equation x^4+2x^2y+3y^2=6 A) Find an expression for dy/dx using implicit differentiation B) Find the four points on the curve where the tangent line is horizontal Ok I know how to solve for the derivative. I got dy/dx = (4x^3-4xy)/(2x^2+6y) However Im not quite sure how to solve when the tangent is horizontal. I that the tangent is horizontal when the derivative equals 0. However I don't know how solve for this when it is an implicit derivative. I am assuming your derivative is correct ... $\frac{dy}{dx} = 0$ if $4x^3 - 4xy = 0$ $4x(x^2-y) = 0$ $x = 0$ , $y = x^2$ sub $x = 0$ and find the corresponding y-value(s) sub $y = x^2$ and find the corresponding x-values 3. sub it into the original equation? 4. Originally Posted by rawkstar sub it into the original equation? yes ... you are trying to find points on the curve. 5. ok so i subbed those into the orignial equation and got two x values and two y values do i then plug those into the original to get the corresponding opposite value 6. Originally Posted by rawkstar ok so i subbed those into the orignial equation and got two x values and two y values do i then plug those into the original to get the corresponding opposite value when you sub in x = 0 , you get two values for y ... the two points will be $(0,\sqrt{2})$ and $(0,-\sqrt{2})<br />$ when you sub in $x^2$ for $y$, you get two values for y ... square those to get the corresponding x-values.
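Since the reply above explicitly assumes the posted derivative is correct, a short symbolic cross-check is worthwhile. The following is an added sketch with sympy (not part of the thread): it recomputes $dy/dx=-F_x/F_y$ and solves for the real points with a horizontal tangent, so the sign of the $4xy$ term in the numerator can be verified directly.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**4 + 2*x**2*y + 3*y**2 - 6

dydx = -sp.diff(F, x) / sp.diff(F, y)      # implicit differentiation: dy/dx = -F_x / F_y
print(dydx)

# Horizontal tangent: numerator F_x = 0 together with the curve equation F = 0
sols = sp.solve([sp.diff(F, x), F], [x, y])
real_points = [s for s in sols if all(v.is_real for v in s)]
print(real_points)    # four real points: (0, ±sqrt(2)) and (±3**(1/4), -sqrt(3))
```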
http://stats.stackexchange.com/questions/29091/simulating-a-gaussian-process-with-an-exponentially-decaying-covariance-function/29108
Simulating a Gaussian process with an exponentially decaying covariance function I'm trying to generate many draws (i.e., realizations) of a Gaussian process $e_i(t)$, $1\leq t \leq T$ with mean 0 and covariance function $\gamma(s,t)=\exp(-|t-s|)$. Is there an efficient way to do this that wouldn't involve computing the square root of a $T \times T$ covariance matrix? Alternatively can anyone recommend an `R` package to do this? - 2 It's a stationary process (looks close to a simple version of an OU process). Is it uniformly sampled? – cardinal May 24 '12 at 13:53 The R package `mvtnorm` has `rmvnorm(n, mean, sigma)` where `sigma` is the covariance matrix; you'd have to construct the covariance matrix for your sampled / selected $t$s yourself, though. – jbowman May 24 '12 at 14:06 2 @jb Presumably $T$ is huge, otherwise the OP wouldn't be asking to avoid the matrix decomposition (which is implicit in `rmvnorm`). – whuber♦ May 24 '12 at 14:31 2 Answers Yes. There is a very efficient (linear time) algorithm, and the intuition for it comes directly from the uniformly-sampled case. Suppose we have a partition of $[0,T]$ such that $0=t_0 < t_1 < t_2 < \cdots < t_n = T$. Uniformly sampled case In this case we have $t_i = i \Delta$ where $\Delta = T/n$. Let $X_i := X(t_i)$ denote the value of the discretely sampled process at time $t_i$. It is easy to see that the $X_i$ form an AR(1) process with correlation $\rho = \exp(-\Delta)$. Hence, we can generate a sample path $\{X_t\}$ for the partition as follows $$X_{i+1} = \rho X_i + \sqrt{1-\rho^2} Z_{i+1} \>,$$ where $Z_i$ are iid $\mathcal N(0,1)$ and $X_0 = Z_0$. General case We might then imagine that it could be possible to do this for a general partition. In particular, let $\Delta_i = t_{i+1} - t_i$ and $\rho_i = \exp(-\Delta_i)$. We have that $$\gamma(t_i,t_{i+1}) = \rho_i \>,$$ and so we might guess that $$X_{i+1} = \rho_i X_i + \sqrt{1-\rho_i^2} Z_{i+1} \>.$$ Indeed, $\mathbb E X_{i+1} X_i = \rho_i$ and so we at least have the correlation with the neighboring term correct. The result now follows by telescoping via the tower property of conditional expectation. Namely, $$\newcommand{\e}{\mathbb E} \e X_i X_{i-\ell} = \e( \e(X_i X_{i-\ell} \mid X_{i-1} )) = \rho_{i-1} \mathbb E X_{i-1} X_{i-\ell} = \cdots = \prod_{k=1}^\ell \rho_{i-k} \>,$$ and the product telescopes in the following way $$\prod_{k=1}^\ell \rho_{i-k} = \exp\Big(-\sum_{k=1}^\ell \Delta_{i-k}\Big) = \exp(t_{i-\ell} - t_i) = \gamma(t_{i-\ell},t_i) \>.$$ This proves the result. Hence the process can be generated on an arbitrary partition from a sequence of iid $\mathcal N(0,1)$ random variables in $O(n)$ time where $n$ is the size of the partition. NB: This is an exact sampling technique in that it provides a sampled version of the desired process with the exactly correct finite-dimensional distributions. This is in contrast to Euler (and other) discretization schemes for more general SDEs, which incur a bias due to the approximation via discretization. - Just a few more remarks. (1) To get a good idea of what the continuous time process looks like, $n$ and $T$ must be chosen so that $\Delta$ is small, say less than $0.1$. (2) The inverse covariance (precision) matrix for the timeseries vector is tri-diagonal, as is its Cholesky root. – Yves May 24 '12 at 17:55 @Yves: Thanks for your comments. 
To be clear, the procedure I've outlined gives an exact realization of the continuous-time process sampled on the corresponding partition; in particular, there is no discretization error like there is in typical Euler-scheme approximation to more general SDEs. The inverse Cholesky, as shown by the construction in the answer, has nonzero terms only on the diagonal and lower off-diagonal, so it's a little simpler than tridiagonal. – cardinal May 24 '12 at 18:13 Calculate the decomposed covariance matrix by incomplete Cholesky decomposition or any other matrix decomposition technique. The decomposed matrix should be TxM, where M is only a fraction of T. http://en.wikipedia.org/wiki/Incomplete_Cholesky_factorization - 2 Can you give an explicit form of the Cholesky decomposition here? I think that the answer by cardinal achieves just that, if you think about it, by expressing $X_i$ as a function of the history. – StasK May 25 '12 at 2:54 The algorithm is a little too long to summarize. You can find an excellent description here: Kernel ICA, page 20. Note that this algorithm is incomplete, meaning it doesn't calculate the entire decomposition but rather an approximation (hence it is much faster). I have published code for this algorithm in the KMBOX toolbox, you can download it here: km_kernel_icd. – Steven Jun 6 '12 at 20:45
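For reference, the recursion in the first answer is straightforward to implement; below is a hedged sketch (the function name and the example grid are chosen here, not taken from any package) that runs in $O(n)$ time on an arbitrary increasing grid of sample times, with no $T\times T$ factorization.

```python
import numpy as np

def sample_exp_cov_gp(t, rng=None):
    """Exact draw of the zero-mean Gaussian process with cov(s, t) = exp(-|t - s|),
    sampled at the increasing time points t, via the AR(1)-style recursion."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.asarray(t, dtype=float)
    z = rng.standard_normal(t.size)
    x = np.empty(t.size)
    x[0] = z[0]
    rho = np.exp(-np.diff(t))                  # correlation between consecutive samples
    for i, r in enumerate(rho):
        x[i + 1] = r * x[i] + np.sqrt(1.0 - r * r) * z[i + 1]
    return x

t = np.sort(np.random.default_rng(0).uniform(0.0, 10.0, size=1000))   # non-uniform grid
path = sample_exp_cov_gp(t)
```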
http://math.stackexchange.com/questions/76873/calculating-mean-from-the-probability-mass-function?answertab=oldest
# Calculating mean from the probability mass function Question. The number of flaws $X$ on an electroplated car grill is known to the have the following probability mass function: $$\begin{matrix} x & : & 0 & 1 & 2 & 3 \\ p(x) & : & 0.8 & 0.1 & 0.05 & 0.05 \end{matrix}$$ Calculate the mean of $X$. My working. $$\text{Mean} = E(X) = (0 \times 0.8) + (1 \times 0.1) + (2 \times 0.05) + (3 \times 0.05) = 0.35 .$$ But the answer is $0.25$ (which is also $\frac{0.8+0.1+0.05+0.05}{4}$). What am I doing wrong? - 1 Your calculations are correct. – Sasha Oct 29 '11 at 6:01 1 If you sum up the probabilities and divide it by $4$, it is clear that the result will always be $0.25$. That does not have any relevance to the problem. Your answer ($0.35$) seems ok. – Srivatsan Oct 29 '11 at 6:05 Thanks, I did suspect the answers in the book are wrong. Must be a slight typo – Arvin Oct 29 '11 at 6:07 ## 1 Answer Your answer ($0.35$) looks correct, and the textbook answer is wrong. The fraction $\frac{p(0)+p(1)+p(2)+p(3)}{4}$ will evaluate to $\frac{1}{4}=0.25$ for any probability mass function $p$, so that particular ratio does not have any significance for the expectation of $X$. -
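The same computation in code, added here only as a check:

```python
import numpy as np

x = np.array([0, 1, 2, 3])
p = np.array([0.80, 0.10, 0.05, 0.05])
print(np.dot(x, p))   # 0.35 (up to floating-point rounding)
```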
http://mathoverflow.net/questions/42147?sort=votes
## name my cat: regular categories where inverse images also have right adjoint ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I need a name for a regular category where the inverse image maps have right adjoints. If $\mathcal C$ is a regular category, then the poset of subobjects $\mathsf{Sub}(X)$ of any object $X$ is a semilattice and the inverse image map of any arrow $f:X\to Y$ has a left adjoint $\exists_f:\mathsf{Sub}(X) \to \mathsf{Sub}(Y)$. If $\mathcal C$ is a Heyting category, then the inverse image map $f$ also has a right adjoint $\forall_f:\mathsf{Sub}(X) \to \mathsf{Sub}(Y)$. But Heyting categories also have all finite coproducts and I want a name for regular categories that just have those right adjoints. Do you know if this category of categories already has a name? Can you suggest a name? Update: Heyting categories or logoses need not have all finite coproducts, but posets of subobjects are lattices, where I only need semilattices. - Is there a name for the fragment of logic containing only $\wedge$, $\exists$, $\Rightarrow$, and $\forall$? If so, you could borrow the same name for your categories, since it would probably be their internal logic. – Mike Shulman Oct 15 2010 at 20:29 +1 for the title. – Dave Penneys Nov 23 2010 at 19:12 ## 1 Answer From Freyd and Scedrov's book Categories, Allegories: a logos is a regular category in which $Sub(A)$ is a lattice for each object $A$, and in which the inverse-image operation $f^*: Sub(B) \to Sub(A)$ has a right adjoint for each morphism $f: A \to B$ (page 117). - I don't want a lattice structure on my subobject posets. But maybe I could call my categories "hemilogoses" or something like that. – Wouter Stekelenburg Oct 18 2010 at 9:14 1 Sorry I don't know a name for precisely what you want. It seems like a natural-enough notion. – Todd Trimble Oct 18 2010 at 10:50
http://mathoverflow.net/questions/102573?sort=votes
## Structure of F_p[G], for finite group G ? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Consider group algebra k[G] of finite group G. If k is alg.closed then every irrep lives there with multiplicity equal to dimension. (More conceptually as bimodule over GxG it is multiplicity free and all irreps live there). Now let k=F_p what is known then ? Is true that all irreps of G live in k[G] ? (Seems obviously yes... but to be sure) But multiplicities ? If assume order of G does not divide p ? Example: G=Z/nZ, the F_p[G] = F_p[x] / (x^p-1) and every irreducible polynom generates ideal and hence subrepresentation, so these are quests about irred. polynoms. E.g. like this: http://math.stackexchange.com/questions/172534/for-which-values-of-n-is-the-polynomial-px-1xx2-cdotsxn-irreducible/172540#comment396429_172540 http://math.stackexchange.com/questions/172468/for-what-n-k-there-exists-a-polynomial-px-in-f-2x-s-t-degp-k-and Motivation: "cyclic error correcting codes" are exactly the ideals in F_p[x] / (x^p-1) = F_p[G], why take G= Z/nZ ? may be take other groups give "better" codes ? - 2 If by irreps you mean indecomposable representations, then not only it is not obvious that all of them are in the group algebra: it is in fact generally false (for there are infinitely many of them) On the other hand, if by irrep you mean simple representations, the situation is more hopeful; for example, in the extreme case where $G$ is a $p$-group, there is only one simple representation, the trivial one. But now you have to decide what exactly you want "live in F[G]" to mean, as the algebra is no longer semisimple. – Mariano Suárez-Alvarez Jul 18 at 19:51 (The statement in your first sentence is only true if $k$ is of characteristic zero) – Mariano Suárez-Alvarez Jul 18 at 19:52 5 Mariano, "irrep" means "irreducible (that is, simple) representation." – Ben Webster♦ Jul 18 at 20:01 3 The keyword you want here (for starters) is "modular representation theory." This is an old and, as I understand it, big subject. – Qiaochu Yuan Jul 18 at 20:18 See also Geoff Robinson answer at MSE math.stackexchange.com/questions/173811/… – Alexander Chervov Jul 22 at 11:22 ## 3 Answers It seems natural to work with an algebraically closed field of characteristic $p$, or, less restrictively, a splitting field of characteristic $p$ for $G$. For example, any field containing the primitive $m$-th roots of unity, where $|G| = p^{a}m$ and $p$ dos not divide $m$, so I assume now that $k$ is algebraically closed of characteristic $p.$ We have entered the realm of modular representation theory, whose theory was first extensively developed by Richard Brauer. By now the basic theory is covered in numerous texts (eg by Alperin, by Curtis-Reiner). The indecomposable direct summands of the group algebra $kG$ are the so-called projective indecomposable modules (sometimes called principal indecomposables). The number of isomorphism types of these is the number of conjugacy classes of elements of order prime to $p$ of $G.$ Each of these has a simple socle (the socle of module is its largset semisimple submodule) and a unique maximal submodule (so a simple head, or largest semisimple quotient module). Because the group algebra is a symmetric algebra, the socle and head of $P$ are isomorphic. 
If $P$ is one of these projective indecomposable modules, and has (simple) socle $S$, then up to isomorphism, $P$ occurs `${\rm dim}_{k}(S)$` times as a summand of the group algebra $k[G].$ Hence $S$ occurs $dim_{k}(S)$ times (up to isomophism) as a summand of the socle of the regular module $kG.$ Also, $S$ occurs ${\rm dim}{k}(P)$ times as a composition factor of the regular module $k[G].$ Every simple module for the group algebra $k[G]$ has a projective cover, and these all occur as direct summands of the group algebra. As Mariano remarks in his comment, the theory of non-projective indecomposable modules is much less transparent. The results over non-splitting field can be recovered using Galois theory and Clifford theory. However, modular representation theory is much richer and diverse than this brief description allows for. These basic facts (and much much more) were all known to Brauer, perhaps sometimes in different language, and these are just the beginnings. If $|G|$ is not divisible by $p,$ the projective indecomposable $P$ and its socle $S$ are the same module, and the theory degenerates to the semisimple case, which is much like the characterisic zero situation. - Thank you for the answer! What if field is not alg. closed e.g. just F_2={0,1} ? (This is main example to think if keep in my error-correcting codes). – Alexander Chervov Jul 19 at 5:57 In tht case, as I said, Galois theory comes into play. A simple module over the prime field breaks up when the field is extended to a splitting field, into a direct sum of a number of Galois conjugate modules, all of the same dimension, which can be easily calculated. Since the regular module is realizable over the prime field, the simple modules over the prime field all still appear in the socle of the regular module, but the multiplicity as a summand is the dimension of one of the absolutely irreducible simple module (ie over the algebraically closed field). – Geoff Robinson Jul 19 at 7:18 Thank you very much ! – Alexander Chervov Jul 19 at 19:34 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The basic roadblock for a finite group over a finite prime field (whether or not the characteristic divides the group order) is clear-cut: you rarely get all of the absolutely irreducible representations of the group over the prime field, which is equally a problem when working over `$\mathbb{Q}$`. You always get various irreducible reprsentations living in the group algebra, but typically these split up further if you extend the field as Geoff indicates. Then it can get arbitrarily complicated to sort everything out, but the tools are there. Trying to work exclusively over a prime field is seldom productive, without reference to a splitting field. Geoff has given a broad sketch of the subject, which is treated thoroughly in the 1962 book of Curtis-Reiner and (in a more modern style) in their later two-volume work. Alperin's book is short, readable, and module-oriented, while the third part of Serre's also short book is devoted to the basics of Brauer theory (without block theory) following an introduction to ordinary character theory, etc. There's plenty of literature on coding theory, some of which uses finite group theory in an essential way, but it's hard to do anything insightfully without at least some theoretical background in modular representation theory. P.S. 
I don't understand the parenthetic remark in your first paragraph. - Thank you for the answer. Does the standard construction of embedding End(V) to k[G] work over F_p ? I mean irrep (p,V) take a basis e_i in V. Consider matrix elements as elements for each element p(g)_ij - this gives some elements in k[G]. Over complex numbers this is subalgebra in k[G] isomorphic to End(V). It seems it works over any field. Am I wrong ? – Alexander Chervov Jul 19 at 7:06 End_{F}(V) embeds in the socle of FG (as rightmodule) when F is a simple FG-module, but in general if F is not a splitting field it will not be isomorphic a full matix ring over F. – Geoff Robinson Jul 19 at 7:32 Adding a few words from the coding theory side. Abelian groups without $p$-torsion are somewhat more natural in coding theory, because then we get the machinery of discrete Fourier transform (which is, of course, just representation theory) to play with. Dihedral groups have been used as symmetry groups of cyclic codes (add order reversal symmetry), but for the most part symmetries of codes just aid proving things about their properties. When the group has $p$-torsion, the theory is less clean. Nevertheless, the topic has been studied. Check out papers by Karl-Heinz Zimmermann (http://www.tu-harburg.de/ti6/mitarbeiter/khz/pub.html). In his papers from the 90s a lot of representation theoretical concepts appear. I don't know, if they are very hot from the point of view of constructing new and better codes, though. - Notifier: May I kindly ask you to look at mathoverflow.net/questions/117505/… PS Happy New year ! – Alexander Chervov Dec 29 at 12:31
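On the coding-theory motivation in the question: for $G$ cyclic of order $n$ with $p\nmid n$, the ideals of $\mathbb{F}_p[x]/(x^n-1)$, i.e. the cyclic codes, are generated by products of the irreducible factors of $x^n-1$. A small added sketch with sympy (the choice $n=7$, $p=2$ is just an example; its degree-3 factors generate the classical (7,4) Hamming codes):

```python
import sympy as sp

x = sp.symbols('x')
# Irreducible factorization of x^7 - 1 over F_2; each divisor of x^7 - 1 built from
# these factors generates an ideal of F_2[x]/(x^7 - 1), i.e. a binary cyclic code.
print(sp.factor_list(x**7 - 1, modulus=2))
# e.g. (1, [(x + 1, 1), (x**3 + x + 1, 1), (x**3 + x**2 + 1, 1)])
```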
http://mathhelpforum.com/discrete-math/206275-methods-proof-vacuous-proof.html
# Thread: 1. ## Methods of Proof: Vacuous Proof

THIS IS NOT HOMEWORK! This is a practice problem from our textbook. Class link: Discrete Math I

1) "By the definition of R we see that there are no ordered pairs in R where the first entry of one and the second entry of another is the same..." I do not understand what the definition of R is, nor what they mean by the first entry of one and the second entry of another is the same. 2) "...so the hypothesis, ..., cannot be true." Maybe because I do not understand #1 above, I also do not understand this conclusion. 3) "...this shows that the implication is true" I obviously do not understand this either. Any clarification would be greatly appreciated. Thank you!

2. ## Re: Methods of Proof: Vacuous Proof

1) "By the definition of R we see that there are no ordered pairs in R where the first entry of one and the second entry of another is the same..." I do not understand what the definition of R is, nor what they mean by the first entry of one and the second entry of another is the same. 2) "...so the hypothesis, ..., cannot be true." Maybe because I do not understand #1 above, I also do not understand this conclusion. 3) "...this shows that the implication is true" I obviously do not understand this either.

For #1 there is nothing to understand. A relation is a set of ordered pairs. That is exactly what $R$ is. For #2: are there two pairs in $R$ that look like $(a,b),~(b,c)$? The answer is no. So $(a,b)\in R\text{ and }(b,c)\in R$ is a false statement. For #3: the statement "if false, then anything" is always true. So "if $(a,b)\in R\text{ and }(b,c)\in R$ then $(a,c)\in R$" is TRUE, because a false statement implies any statement.

Thank you!
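The point that a false statement implies any statement is just the truth table of the material conditional; a tiny added illustration:

```python
from itertools import product

# p -> q is equivalent to (not p) or q: whenever p is False, the implication is True.
for p, q in product([True, False], repeat=2):
    print(p, q, (not p) or q)
```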
http://math.stackexchange.com/questions/245500/correct-proof-idea-metric-spaces-and-closed-sets
# Correct proof idea? Metric spaces and closed sets. Let $(X,d)$ be an arbitrary complete metric space and suppose $S\subseteq(X,d)$. Show that $S$ is closed if and only if every Cauchy sequence in $S$ converges to a point in $S$. I did the forward direction, is it correct? Suppose $S$ is closed. Let $x\in S$ be an Cauchy sequence, $x=(x_n)_{n \in \mathbb N}$. Then, since $S$ is closed, it contains all of its limit points. Therefore, $\lim_{n\to\infty} (x_n)$ will converge to an element in $S$. So every Cauchy sequence in $S$ converges to an element in $S$. For the backward direction, would I just let $S\subseteq(X,d)$ and suppose that every Cauchy sequence in $S$ converges to a point in $S$. Then show that a limit point $x$ is in $S$? Any feedback is appreciated, thanks. - ## 2 Answers The first direction is almost ok, but you have to argue why $(x_n)_n$ converges at all. Here you need, that $(x_n)_n$ is also a C-sequence in $(X,d)$ which is complete. To show that S is closed, you have to show, that every in S convergent sequence has it limit value in S. But every convergent sequence is also a C-sequence... - To finish the proof, by the assumption "every Cauchy sequence in S converges to a point in S", then the limit point of the convergent sequence is in $S$. Hence $S$ is closed. – Mhenni Benghorbal Nov 27 '12 at 7:36 @MhenniBenghorbal This comment was really not necessary. If Fant chose to leave this as an indication, leave it as such. – Did Dec 1 '12 at 9:35 Prove the contrapositive for the other direction. Assume $S$ is not closed and find a Cauchy sequence in $S$ which does not converge in $S$ (not closed means that one limit point is not there in the set, so construct a Cauchy sequence which converge to this limit point). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9629298448562622, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/91662/stuck-on-proving-uniform-convergence
# Stuck on proving uniform convergence I am preparing for my analysis finals tomorrow and I have been stuck over two hours trying to solve the following problem: Let $f_1, f_2, \ldots$ be a convergent (pointwise) sequence of monotonically increasing functions defined on $[a,b] \to \mathbb R$. (I.e., $f_n(x) \leq f_n(y)$ if $x \leq y$). Let $f$ be the limit of the above mentioned sequence. Assume $f$ is continuous. Show that the above sequence is uniformly convergent. I am not sure how to approach the above problem. I have been trying to show the above by showing that the window of values taken by $f_N$ (given by $f_n(b) - f_n(a)$) converges and messing around with triangle inequalities to get the required inequality ($f_n(p) - f(p) < \varepsilon$). But this I realized was wrong because even if the windows converge the functions themselves can be increasing at different rates within the interval thus the ($f_n(p) - f(p)$ need not shrink at a constant rate at all points). I feel that the fact $f_n$ is defined in a compact interval and therefore is uniformly continuous comes into the picture somehow, but I can't connect the dots. Any suggestions? - – Alex Youcis Dec 15 '11 at 3:24 1 Given an $\epsilon > 0$, at every point $x$, continuity of $f(x)$ gives you what? Now you have a collection of things, one for each $x$ in your domain...this defines something useful! What nice property does compactness give you? Why does that matter and how does it show uniform convergence instead of pointwise convergence? – tomcuchta Dec 15 '11 at 3:27 2 @AlexYoucis No,it's not dini's thm.In dini's thm,the value taken by the function for succesive n's is inreasing.Here,it is the function that is increasing.(i.e $f_n(x) <= f_n(y)$ if $x$ < $y$ ,not $f_{n+1}(p) >= f_n(p)$ for all p) – Thiagarajan Dec 15 '11 at 3:28 Ahh, misread it. I'm sorry. – Alex Youcis Dec 15 '11 at 3:30 ## 1 Answer Since $f$ is uniformly continuous and increasing, given $\varepsilon>0$, there exist points $a=x_0<x_1<x_2<\cdots<x_m=b$ such that $f(x_k)-f(x_{k-1})<\varepsilon$ for each $k$. There exists $N$ such that $n\geq N$ implies $|f_n(x_k)-f(x_k)|<\varepsilon$ for each $k$. For $x\in[a,b]$, $x$ is in $[x_k,x_{k+1}]$ for some $k$, and if $n\geq N$, then $$|f(x)-f_n(x)|\leq|f(x)-f(x_k)|+|f(x_k)-f_n(x_k)|+|f_n(x_k)-f_n(x)|.$$ Take that last term: $\begin{align*} |f_n(x_k)-f_n(x)|&=f_n(x)-f_n(x_k)\\ &\leq f_n(x_{k+1})-f_n(x_k)\\ &=|f_n(x_{k+1})-f_n(x_k)|\\ &\leq|f_n(x_{k+1})-f(x_{k+1})|+|f(x_{k+1})-f(x_k)|+|f(x_k)-f_n(x_k)|. \end{align*}$ Now you should be able to see why this gives you $|f(x)-f_n(x)|<5\varepsilon$. (Note where increasing is used.) There are probably more elegant solutions. - Thank you.The key idea which I didn't find was the fact the I can use uniform continuity to partition $[a,b]$ into finitely many sub-intervals.Thanks for clearing that up.I can sleep now.:) – Thiagarajan Dec 15 '11 at 3:57 Thiagarajan: Actually you can get away without thinking about uniform continuity, but instead the intermediate value theorem. Since $f$ is increasing and continuous, $f([a,b])=[f(a),f(b)]$, and if you break up $[f(a),f(b)]$ with points $f(a)=y_0<y_1<y_2<\cdots<y_m=f(b)$ such that $y_{k+1}-y_k<\varepsilon$ for each $k$, by continuity there exists $x_k$ such that $f(x_k)=y_k$. – Jonas Meyer Dec 15 '11 at 4:01
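For the record, here is the bookkeeping that the answer leaves to the reader, chaining its two displayed estimates for $n \ge N$ and $x \in [x_k, x_{k+1}]$:

```latex
% Chaining the two estimates from the answer, for n >= N and x in [x_k, x_{k+1}]:
\begin{align*}
|f_n(x_k)-f_n(x)|
  &\le |f_n(x_{k+1})-f(x_{k+1})| + |f(x_{k+1})-f(x_k)| + |f(x_k)-f_n(x_k)|
   < \varepsilon + \varepsilon + \varepsilon = 3\varepsilon,\\
|f(x)-f_n(x)|
  &\le |f(x)-f(x_k)| + |f(x_k)-f_n(x_k)| + |f_n(x_k)-f_n(x)|
   < \varepsilon + \varepsilon + 3\varepsilon = 5\varepsilon .
\end{align*}
% The bound 5*epsilon holds for every x in [a,b] and every n >= N,
% so sup_x |f(x) - f_n(x)| <= 5*epsilon, i.e. f_n -> f uniformly.
```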
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363581538200378, "perplexity_flag": "head"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Ordinal
# Ordinal number

Commonly, ordinal numbers, or ordinals for short, are numbers used to denote the position in an ordered sequence: first, second, third, fourth, etc. (See How to name numbers.) In mathematics, ordinal numbers are an extension of the natural numbers to accommodate infinite sequences, introduced by Georg Cantor in 1897. It is this generalization which will be explained below.

## Introduction

A natural number can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. While in the finite world these two concepts coincide, when dealing with infinite sets one has to distinguish between the two. The size aspect leads to cardinal numbers, which were also discovered by Cantor, while the position aspect is generalized by the ordinal numbers described here.

In set theory, the natural numbers are commonly constructed as sets, such that each natural number is the set of all smaller natural numbers:

0 = {} (empty set)
1 = {0} = { {} }
2 = {0,1} = { {}, { {} } }
3 = {0,1,2} = { {}, { {} }, { {}, { {} } } }
4 = {0,1,2,3} = { {}, { {} }, { {}, { {} } }, { {}, { {} }, { {}, { {} } } } }
etc.

Viewed this way, every natural number is a well-ordered set: the set 4 for instance has the elements 0, 1, 2, 3 which are of course ordered as 0 < 1 < 2 < 3 and this is a well-order. A natural number is smaller than another if and only if it is an element of the other.

We don't want to distinguish between two well-ordered sets if they only differ in the "notation for their elements", or more formally: if we can pair off the elements of the first set with the elements of the second set in a one-to-one fashion and such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism and the two well-ordered sets are said to be order-isomorphic. With this convention, one can show that every finite well-ordered set is order-isomorphic to one (and only one) natural number. This provides the motivation for the generalization to infinite numbers.

## Modern definition and first properties

We want to construct ordinal numbers as special well-ordered sets in such a way that every well-ordered set is order-isomorphic to one and only one ordinal number. The following definition improves on Cantor's approach and was first given by John von Neumann:

A set S is an ordinal if and only if S is totally ordered with respect to set containment and every element of S is also a subset of S.

(Here, "set containment" is another name for the subset relationship.) Such a set S is automatically well-ordered with respect to set containment. This relies on the axiom of well foundation: every nonempty set S has an element a which is disjoint from S. Note that the natural numbers are ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}.
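As a quick illustration, the finite stage of this construction can be checked directly in Python, building the first few von Neumann naturals as nested frozensets and testing the "element = smaller" and "element is also a subset" properties just stated:

```python
# Build the first few von Neumann naturals: each number is the set of all smaller ones.
def von_neumann(n):
    """Return n as a nested frozenset: 0 = {}, 1 = {0}, 2 = {0,1}, ..."""
    number = frozenset()             # 0 is the empty set
    for _ in range(n):
        number = number | {number}   # successor of k is k together with {k}
    return number

naturals = [von_neumann(n) for n in range(5)]

# "m < n iff m is an element of n"
print(naturals[2] in naturals[4])    # True:  2 is an element of 4
print(naturals[4] in naturals[2])    # False: 4 is not an element of 2

# Every element of the ordinal 4 is also a subset of it (<= is the subset test).
print(all(x <= naturals[4] for x in naturals[4]))   # True
```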
It can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals. Furthermore, the elements of every ordinal are ordinals themselves. Whenever you have two ordinals S and T, S is an element of T if and only if S is a proper subset of T, and moreover, either S is an element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. And in fact, much more is true: Every set of ordinals is well-ordered. This important result generalizes the fact that every set of natural numbers is well-ordered and it allows us to use transfinite induction liberally with ordinals. Another consequence is that every ordinal S is a set having as elements precisely the ordinals smaller than S. This statement completely determines the set-theoretic structure of every ordinal in terms of other ordinals. It's used to prove many other useful results about ordinals. One example of these is an important characterization of the order relation between ordinals: every set of ordinals has a supremum, the ordinal obtained by taking the union of all the ordinals in the set. Another example is the fact that the collection of all ordinals is not a set. Indeed, since every ordinal contains only other ordinals, it follows that every member of the collection of all ordinals is also its subset. Thus, if that collection were a set, it would have to be an ordinal itself by definition; then it would be its own member, which contradicts the axiom of regularity. (See also the Burali-Forti paradox). An ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its subsets has a greatest element. ### Other definitions There are other modern formulations of the definition of ordinal. Each of these is essentially equivalent to the definition given above. One of these definitions is the following. A class S is element-transitive (or e-transitive) if, whenever x is an element of y and y is an element of S, then x is an element of S. An ordinal is then defined to be a class S which is e-transitive, and such that every member of S is also e-transitive. ## Arithmetic of ordinals To define the sum S + T of two ordinal numbers S and T, one proceeds as follows: first the elements of T are relabeled so that S and T become disjoint, then the well-ordered set S is written "to the left" of the well-ordered set T, meaning one defines an order on S∪T in which every element of S is smaller than every element of T. The sets S and T themselves keep the ordering they already have. This way, a new well-ordered set is formed, and this well-ordered set is order-isomorphic to a unique ordinal, which is called S + T. This addition is associative and generalizes the addition of natural numbers. The first transfinite ordinal is ω, the set of all natural numbers. Let's try to visualize the ordinal ω+ω: two copies of the natural numbers ordered in the normal fashion and the second copy completely to the right of the first. If we write the second copy as {0'<1'<2',...} then ω+ω looks like 0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ... This is different from ω because in ω only 0 does not have a direct predecessor while in ω+ω the two elements 0 and 0' don't have direct predecessors. Here's 3 + ω: 0 < 1 < 2 < 0' < 1' < 2' < ... and after relabeling, this just looks like ω itself: we have 3 + ω = ω. But ω + 3 is not equal to ω since the former has a largest element and the latter doesn't. So our addition is not commutative. 
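The non-commutativity just observed is easy to check mechanically for ordinals below $\omega^2$, each of which can be written uniquely as $\omega\cdot a + b$ with natural numbers $a, b$. The following Python sketch (restricted to this special case, not general ordinal arithmetic) encodes such an ordinal as the pair (a, b); when the right summand contains at least one copy of $\omega$, the finite tail of the left summand is absorbed.

```python
# Ordinals below omega^2, written as omega*a + b and encoded as the pair (a, b).
def add(x, y):
    """Ordinal addition for ordinals below omega^2 (a sketch of this special case only)."""
    a1, b1 = x
    a2, b2 = y
    if a2 > 0:
        # y starts with a copy of omega, so the finite part b1 is absorbed:
        # (omega*a1 + b1) + (omega*a2 + b2) = omega*(a1 + a2) + b2
        return (a1 + a2, b2)
    return (a1, b1 + b2)

OMEGA = (1, 0)
THREE = (0, 3)

print(add(THREE, OMEGA))                 # (1, 0)  ->  3 + omega = omega
print(add(OMEGA, THREE))                 # (1, 3)  ->  omega + 3 is not omega
print(add(add(OMEGA, (0, 4)), OMEGA))    # (2, 0)  ->  (omega + 4) + omega = omega + omega
```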
You should now be able to see that (ω + 4) + ω = ω + (4 + ω) = ω + ω for example.

To multiply the two ordinals S and T you write down the well-ordered set T and replace each of its elements with a different copy of the well-ordered set S. This results in a well-ordered set, which defines a unique ordinal; we call it ST. Again, this operation is associative and generalizes the multiplication of natural numbers.

Here's ω2: $0_0 < 1_0 < 2_0 < 3_0 < \ldots < 0_1 < 1_1 < 2_1 < 3_1 < \ldots$ and we see: ω2 = ω + ω. But 2ω looks like this: $0_0 < 1_0 < 0_1 < 1_1 < 0_2 < 1_2 < 0_3 < 1_3 < \ldots$ and after relabeling, this looks just like ω and so we get 2ω = ω. Multiplication of ordinals is not commutative.

Distributivity partially holds for ordinal arithmetic: R(S+T) = RS + RT. One can actually see that. However, the other distributive law (T+U)R = TR + UR is not generally true: (1+1)ω is equal to 2ω = ω while 1ω + 1ω equals ω+ω. Therefore, the ordinal numbers do not form a ring.

One can now go on to define exponentiation of ordinal numbers and explore its properties. Ordinal numbers present an extremely rich arithmetic. There are ordinal numbers which cannot be reached from ω with a finite number of the arithmetical operations addition, multiplication and exponentiation. The smallest such is denoted by $\epsilon_0$. This ordinal is very important in many induction proofs, because for many purposes, transfinite induction is only required up to $\epsilon_0$. Note that $\epsilon_0 = \omega^{\omega^{\omega^{\cdots}}}$, so that $\epsilon_0 = \omega^{\epsilon_0}$.

Every ordinal number α > 0 can be uniquely written as $\omega^{\beta_1} c_1 + \omega^{\beta_2}c_2 + \ldots + \omega^{\beta_k}c_k$, where $k, c_1, c_2, \ldots, c_k$ are positive integers, and $\beta_1 > \beta_2 > \ldots > \beta_k$ are ordinal numbers (possibly $\beta_k = 0$). This decomposition of α is called the Cantor normal form of α, and can be considered the positional base-ω numeral system. The highest exponent $\beta_1$ is called the degree of α, and satisfies $\beta_1\le\alpha$. In case $\alpha < \epsilon_0$, we even have $\beta_1 < \alpha$, resulting in a finite representation of α with integers only (attached to a skeleton of ωs, additions, multiplications, and exponentiations). Here is an example of an ordinal number in Cantor normal form, with exponents also in Cantor normal form: $\omega^{\omega^{\omega^{0}2}3+\omega^{\omega^{0}1}1}1 + \omega^{\omega^{\omega^{0}1}5+\omega^{0}2}1 + \omega^{3}4 + \omega^{0}17$, which usually is written more simply as $\omega^{\omega^{2}3+\omega} + \omega^{\omega5+2} + \omega^{3}4 + 17$. Arithmetic operations on ordinals can be captured by algorithms that transform Cantor normal forms, similar to, but more complicated than, the algorithms for integer arithmetic in terms of the decimal notation usually taught in primary education. Also see (Sierpinski, 1965).

There exist uncountable ordinals. The smallest uncountable ordinal is equal to the set of all countable ordinals, and is usually denoted by $\omega_1$.

## Topology and limit ordinals

The ordinals also carry an interesting order topology by virtue of being totally ordered. In this topology, the sequence 0, 1, 2, 3, 4, ... has limit ω and the sequence ω, $\omega^{\omega}$, $\omega^{\omega^{\omega}}$, ... has limit $\epsilon_0$. Ordinals which don't have an immediate predecessor can always be written as a limit of a net of other ordinals (but not necessarily as the limit of a sequence, i.e. as a limit of countably many smaller ordinals) and are called limit ordinals; the other ordinals are the successor ordinals.
The topological spaces $\omega_1$ and its successor $\omega_1+1$ are frequently used as the textbook examples of non-countable topologies. For example, in the topological space $\omega_1+1$, the element $\omega_1$ is in the closure of the subset $\omega_1$ even though no sequence of elements in $\omega_1$ has the element $\omega_1$ as its limit.

Some special ordinals can be used to measure the size or cardinality of sets. These are the cardinal numbers.

## References

• Conway, J. H. and Guy, R. K. "Cantor's Ordinal Numbers." In The Book of Numbers. New York: Springer-Verlag, pp. 266-267 and 274, 1996.
• Sierpinski, W. (1965). Cardinal and Ordinal Numbers (2nd ed.). Warszawa: Państwowe Wydawnictwo Naukowe. Also defines ordinal operations in terms of the Cantor Normal Form.

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319489002227783, "perplexity_flag": "head"}
http://www.cscheid.net/
# Voting in the MLB Hall of Fame [blog] Wed, 23 Jan 2013 17:43:45 -0500 [... comments] Here’s a visualization of the MLB Hall of Fame trajectories my friend Kenny Shirley and I created using D3, R and a bunch of elbow grease. # Window Seat [blog] Wed, 16 Jan 2013 00:05:00 -0500 [... comments] What does it look like when you turn a video of a 5-hour flight into a single image? # A trivial Python hack to tweet from the command line [blog] Wed, 14 Nov 2012 23:14:08 -0500 [... comments] One of the annoying things about Twitter is that sometimes I want to tweet something, but I don’t want to actually read anything from my feed (because I’ll surely get distracted). So I wrote this trivial, minimal piece of python code to tweet from the command line. # VisWeek 2012, Monday and Tuesday [blog] Thu, 18 Oct 2012 13:13:09 -0400 [... comments] This continues the set of notes I started on the previous post. # VisWeek 2012, Sunday [blog] Wed, 17 Oct 2012 04:31:33 -0400 [... comments] At this point, VisWeek 2012 is just about halfway done, and there’s tons to write about, but I’ll try to keep these short by sticking to one day at a time. VisWeek now runs four parallel tracks for the best part of a week, so there’s no way I can tell you about everything that is happening out here in (today, surprisingly sunny!) Seattle. But I will tell you about what I think is cool. The usual caveats follow: omissions and mischaracterizations are all my fault. # So you want to look at a graph, part 3 [blog] Sun, 22 Jul 2012 19:15:12 -0400 [... comments] This series of posts is a tour of the design space of graph visualization. I’ve written about graphs and their properties, and how the encoding of data into a visual representation is crucial. In this post, I will use those ideas to justify the choices behind a classic algorithm for laying out directed, mostly-acyclic graphs. # A Javascript question on performance vs. convenience [blog] Tue, 26 Jun 2012 17:19:40 -0400 [... comments] Here’s a Javascript-specific software engineering problem I’m considering within Facet. I’m trying to decide how (or even whether) to approach type-checking in the API, and I’m looking for input. # Drawing Large Graphs by Low-Rank Stress Majorization [paper] Tue, 27 Mar 2012 21:02:27 -0400 [... comments] Marc Khoury, Yifan Hu, Shankar Krishnan, Carlos Scheidegger. Eurovis 2012, to appear. Optimizing a stress model is a natural technique for drawing graphs: one seeks an embedding into $R^d$ which best preserves the induced graph metric. Current approaches to solving the stress model for a graph with $|V|$ nodes and $|E|$ edges require the full all-pairs shortest paths (APSP) matrix, which takes $O(|V|^2 \log|E| + |V||E|)$ time and $O(|V|^2)$ space. We propose a novel algorithm based on a low-rank approximation to the required matrices. The crux of our technique is an observation that it is possible to approximate the full APSP matrix, even when only a small subset of its entries are known. Our algorithm takes time $O(k |V| + |V| \log |V| + |E|)$ per iteration with a preprocessing time of $O(k^3+k(|E|+|V|\log|V|)+k^2 |V|)$ and memory usage of $O(k|V|)$, where a user-defined parameter $k$ trades off quality of approximation with running time and space. We give experimental results which show, to the best of our knowledge, the largest (albeit approximate) full stress model based layouts to date. # So you want to look at a graph, part 2 [blog] Wed, 29 Feb 2012 10:25:46 -0500 [... 
comments] This series of posts is a thorough examination of the design space of graph visualization (Intro, part 1). In the previous post, we talked about graphs and their properties. We will now talk about constraints arising from the process of transforming our data into a visualization. # HCL color space blues [blog] Thu, 16 Feb 2012 01:20:06 -0500 [... comments] I’ve been playing around with the HCL color space. HCL, if you’ve never heard of it before, is a color space that tries to combine the advantages of perceptual uniformity of Luv, and the simplicity of specification of HSV and HSL. HCL is an improvement over HSV and HSL, but it is not exactly ideal: there is a nasty discontinuity at some bits of the transformation! I have been trying to find a way around this, but I’m stumped. Let me explain, and maybe you can help me. # So you want to look at a graph, part 1 [blog] Wed, 25 Jan 2012 08:34:01 -0500 [... comments] This series of posts is a tour through of the design space of graph visualization. As I promised, I will do my best to objectively justify as many visualization decisions as I can. This means we will have to go slow; I won’t even draw anything today! In this post, I will only take the very first step: all we will do is think about graphs, and what might be interesting about them. Older entries: page  0 1 2 3 4 ### About me I'm a researcher at AT&T Labs–Research, in the Information Visualization group. I work in data visualization, geometry processing and computer graphics. Opinions here are all mine. email: [email protected]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8924992680549622, "perplexity_flag": "middle"}
http://mathhelpforum.com/new-users/208287-what-am-i-doing-wrong.html
2Thanks • 1 Post By MarkFL • 1 Post By Prove It # Thread: 1. ## What am I doing wrong? |2x+1|<|x+3| Solve for x. So the correct answer is -4/3<x<2 based on my deductions. But my problem is my solution generates more answers. +-(2x+1)<+-(x+3) => 1) 2x+1<x+3 => x<2 2) 2x+1<-(x+3) => x<-4/3 3) -(2x+1)<(x+3) =>x>-4/3 4) -(2x+1)<-(x+3) => x>2 What am I doing wrong in the solution to get 2), 4) as wrong answers. I know I am probably having a brain fart and missing something obvious. Thanks in advance for your help. 2. ## Re: What am I doing wrong? I would write: $\sqrt{(2x+1)^2}<\sqrt{(x+3)^2}$ Square both sides, and expand: $4x^2+4x+1<x^2+6x+9$ $3x^2-2x-8<0$ $(3x+4)(x-2)<0$ Can you proceed from here? 3. ## Re: What am I doing wrong? You need to consider three cases. When $\displaystyle \begin{align*} x < -3 \end{align*}$ we have $\displaystyle \begin{align*} |x + 3| = -(x + 3) \end{align*}$ and $\displaystyle \begin{align*} |2x + 1| = -(2x + 1) \end{align*}$. When $\displaystyle \begin{align*} -3 \leq x < -\frac{1}{2} \end{align*}$ we have $\displaystyle \begin{align*} |x + 3| = x + 3 \end{align*}$ and $\displaystyle \begin{align*} |2x + 1| = -(2x + 1) \end{align*}$. When $\displaystyle \begin{align*} x \geq -\frac{1}{2} \end{align*}$ we have $\displaystyle \begin{align*} |x + 3| = x + 3 \end{align*}$ and $\displaystyle \begin{align*} |2x + 1| = 2x + 1 \end{align*}$. So to solve $\displaystyle \begin{align*} |2x + 1| < |x + 3| \end{align*}$, when $\displaystyle \begin{align*} x < -3 \end{align*}$ we have $\displaystyle \begin{align*} -(2x + 1) &< -(x + 3) \\ 2x + 1 &> x + 3 \\ x + 1 &> 3 \\ x &> 2 \end{align*}$. Since we don't have any values of x that are both < -3 and > 2, this is impossible. When $\displaystyle \begin{align*} -3 \leq x < -\frac{1}{2} \end{align*}$ we have $\displaystyle \begin{align*} -(2x + 1) &< x + 3 \\ 2x + 1 &> -x - 3 \\ 3x + 1 &> -3 \\ 3x &> -4 \\ x &> -\frac{4}{3} \end{align*}$ so that means $\displaystyle \begin{align*} -\frac{4}{3} < x < -\frac{1}{2} \end{align*}$ is acceptable. When $\displaystyle \begin{align*} x \geq -\frac{1}{2} \end{align*}$ we have $\displaystyle \begin{align*} 2x + 1 &< x + 3 \\ x + 1 &< 3 \\ x &< 2 \end{align*}$ so that means $\displaystyle \begin{align*} -\frac{1}{2} \leq x < 2 \end{align*}$ is also accetable. So that means our total solution is $\displaystyle \begin{align*} -\frac{4}{3} < x < 2 \end{align*}$. 4. ## Re: What am I doing wrong? whenever i see |anything|, i think that this means: |a| = a, when a ≥ 0 |a| = -a, when a < 0. now from appearances, it looks like we have 4 cases to investigate: 1) 2x+1, x+3 ≥ 0 2) 2x+1 ≥ 0, x+3 < 0 3) 2x+1 < 0, x+3 > 0 4) 2x+1,x+3 < 0 let's look at each of these in turn. in case (1), we have |2x+1| < |x+3| becomes 2x+1 < x+3, which then can be manipulated algebraically: x+1 < 3 x < 2 but this is not quite all. if 2x+1 ≥ 0, this means that 2x ≥ -1, so x ≥ -1/2. since we need x+3 ≥ 0 as well, we need to simultaneously require x ≥ -3. the more restrictive of the two requirements is x ≥ -1/2. so if case 1 is the case, then x ≥ -1/2 > -3, in which case the inequality |2x+1| < |x+3| tells us x < 2, so we have: -1/2 ≤ x < 2. now we turn to case (2). here, 2x+1 ≥ 0 means that x ≥ -1/2, as before. but x+3 < 0, means x < -3. but x cannot be both ≥ -1/2 AND < -3 at the same time. so this case never occurs. on to case (3). here, 2x+1 < 0, means that x < -1/2. x+3 ≥ 0 means that x ≥ -3. so for case (3) to apply, we need -3 ≤ x < -1/2. but we still have the original inequality to satisfy, as well. 
for case (3), it is: -2x-1 < x+3 -3x-1 < 3 -3x < 4 x > -4/3 (we have to "change direction" when we multiply by -1/3). since -4/3 is in-between -3 and -1/2, requiring that BOTH x > -4/3 AND -3 ≤ x < -1/2 translates into: -4/3 < x < -1/2. finally, case (4). here 2x+1 < 0 means x < -1/2, and x+3 < 0, means x < -3. the more restrictive condition is x < -3. in this case, our inequality is: -2x-1 < -x-3 -x-1 < -3 -x < -2 x > 2. now since x cannot be both > 2 and < -3, in case (4) we have no x that qualify. now the choice between the 4 cases (only 2 of which are possible) is an "OR" choice. thus we have, as our final solution: -4/3 < x < -1/2 OR -1/2 ≤ x < 2 since the "endpoints match", we can combine these into the SINGLE condition: -4/3 < x < 2. here is how it works "in reverse": when x < -3, then 2x+1 < -5 < 0, and x+3 < 0, so condition (4) holds. but the only x that satisfy -2x-1 < -x-3 are x > 2, and no x < -3 is > 2. so there are no x's that work < -3. when -3 ≤ x < -1/2, then 2x+1 < 0, and x+3 ≥ 0, so condition (3) holds. but for -2x-1 < x+3, we have to have x > -4/3, so the x's that work are in: [-3,-1/2) ∩ (-4/3,∞) = (-4/3,-1/2). when -1/2 ≤ x, then 2x+1 ≥ 0 and x+3 ≥ 5/2 > 0, so condition (1) holds, in which case 2x+1 < x+3 means that x < 2. so that x's that work are in: [-1/2,∞) ∩ (-∞,2) = [-1/2,2). so our final answer is (-4/3,-1/2) U [-1/2,2) = (-4/3,2). ********** let's compare this with MarkFL2's approach: suppose (3x + 4)(x - 2) < 0. when can this happen? in just two ways: A) 3x + 4 < 0 and x - 2 > 0 B) 3x + 4 > 0 and x - 2 < 0 (A) first: 3x + 4 < 0 means 3x < -4, and so x < -4/3. x - 2 > 0 means x > 2. it is impossible for both of these to be true. (B) next: 3x + 4 > 0 means x > -4/3, and x - 2 < 0 means x < 2. this, we can do, so we have: -4/3 < x < 2. much less work.
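As a quick sanity check of the algebra above (assuming SymPy is available), the factored inequality from post #2 can be solved symbolically, and the original absolute-value inequality can be spot-checked on a grid of sample points:

```python
from sympy import symbols, solve, Rational

x = symbols('x', real=True)

# MarkFL's reduction: |2x + 1| < |x + 3|  is equivalent to  (3x + 4)(x - 2) < 0
print(solve((3*x + 4)*(x - 2) < 0, x))   # (-4/3 < x) & (x < 2)

# Brute-force spot check of the original inequality on a grid of rationals in [-3, 3].
samples = [Rational(n, 100) for n in range(-300, 301)]
holds = [t for t in samples if abs(2*t + 1) < abs(t + 3)]
print(min(holds), max(holds))            # just inside -4/3 and 2
```

Both checks agree with the answer obtained by the case analysis: the solution set is the open interval (-4/3, 2).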
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205929636955261, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=381502
Physics Forums

## the physical meaning of the PDE?!!!

1. The problem statement, all variables and given/known data
2. Relevant equations

How can I know the physical meaning of the following partial differential equation????!!!

You cannot, it can refer to anything. In physics the first thing is the units. What is the unit of x and u? And what is the background of the equation? Why have the boundary conditions been chosen like this? After you have the answers to these questions, you can start wondering what the equation means.

You can't. Mathematics is not physics and equations may have physical applications but separate from the application, there is no "physical" meaning. The crucial point is the meaning or interpretation of u and x themselves. As magwas suggested, you can learn something about that by looking at the units x and u have. By the way, this is NOT a "partial" differential equation. Since there is only one independent variable, it is an ordinary differential equation.

## the physical meaning of the PDE?!!!

Quote by magwas: [...] After you have the answers to these questions, you can start wondering what the equation means.

Thank you for the hint. u: velocity of a fluid, in m/sec; x: distance, in m; [-1,1]: the domain. Can you help me now figure out what the equation means?!

Quote by HallsofIvy: [...] By the way, this is NOT a "partial" differential equation. Since there is only one independent variable, it is an ordinary differential equation.

I'm sorry ...that's right... it is an ODE.

Well, let's see. We have a fluid. It moves relative to our 2m wide/long something. It would be helpful to know the direction of the speed relative to the something.
If the speed is parallel to the something (I could think of a pipe which has the same cross-sectional area at -1 and 1, but fluid dynamics is much more complicated than that), then we have u'(x) with units of 1/s, some kind of frequency. If it is not parallel, then u'(x) has units of $$\frac{m_{y}}{s\, m_{x}}$$ (just so as not to confuse length in one direction with length in the other direction). Similarly u''(x) is either 1/(m s) or $$\frac{m_{y}}{s\, m_{x}^2}$$. Anyway, we should have a magical constant A of units $$m^2$$, thus the diff equation is really $$A u''(x) = u(x)$$ Now we should try to figure out the physical meaning of u''(x). It is the change of the change of speed with distance, and I honestly don't know what that could mean. Maybe looking up the equations of fluid dynamics or knowing more about the reasoning which led to this diff equation would help to understand more.

Quote by magwas: Well, let's see. We have a fluid. It moves relative to our 2m wide/long something. [...] Maybe looking up the equations of fluid dynamics or knowing more about the reasoning which led to this diff equation would help to understand more.

Thank you for this explanation and I will try to find more about "the change of the change of speed with distance".
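The actual equation never appears in the thread (it was posted as an image), but taking the form the discussion converges on, $A\,u''(x) = u(x)$, at face value, its general solution can be checked symbolically with SymPy. Both the constant A and the equation itself are assumptions inferred from the posts above, not from the original problem statement.

```python
# Minimal SymPy sketch for the equation discussed above, A*u''(x) = u(x).
# The equation and the constant A are inferred from the thread, not from the
# original (image-only) problem statement.
from sympy import Function, Symbol, Eq, dsolve

x = Symbol('x')
A = Symbol('A', positive=True)
u = Function('u')

general = dsolve(Eq(A * u(x).diff(x, 2), u(x)), u(x))
print(general)   # e.g. Eq(u(x), C1*exp(-x/sqrt(A)) + C2*exp(x/sqrt(A)))
```

Boundary values at x = -1 and x = 1 (which the thread does not show) would pin down the two constants C1 and C2.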
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609054923057556, "perplexity_flag": "middle"}
http://johncarlosbaez.wordpress.com/2012/10/01/petri-net-programming/
# Azimuth ## Petri Net Programming guest post by David Tanzer Petri nets are a simple model of computation with a range of modelling applications that include chemical reaction networks, manufacturing processes, and population biology dynamics. They are graphs through which entities flow and have their types transformed. They also have far-reaching mathematical properties which are the subject of an extensive literature. See the network theory series here on Azimuth for a panoramic and well-diagrammed introduction to the theory and applications of Petri nets. In this article, I will introduce Petri nets and give an expository presentation of a program to simulate them. I am hoping that this will be of interest to anyone who wants to see how scientific concepts can be formulated and actualized in computer software. Here the simulator program will be applied to a “toy model” of a chemical reaction network for the synthesis and dissociation of H2O molecules. The source code for this program should give the reader some “clay” to work with. This material will prepare you for the further topic of stochastic Petri nets, where the sequencing of the Petri net is probabilistically controlled. ### Definition of Petri nets A Petri net is a diagram with two kinds of nodes: container nodes, called species (or “places”, “states”), which can hold zero or more tokens, and transition nodes, which are “wired” to some containers called its inputs and some called its outputs. A transition can have multiple input or output connections to a single container. When a transition fires, it removes one token from each input container, and adds one token to each output container. When there are multiple inputs or outputs to the same container, then that many tokens get removed or added. The total state of the Petri net is described by a labelling function which maps each container to the number of tokens it holds. A transition is enabled to fire in a labelling if there are enough tokens at its input containers. If no transitions are enabled, the it is halted. The sequencing is non-deterministic, because in a given labelling there may be several transitions that are enabled. Dataflow arises whenever one transition sends tokens to a container that is read by another transition. Petri nets represent entity conversion networks, which consist of entities of different species, along with reactions that convert input entities to output entities. Each entity is symbolized by a token in the net, and all the tokens for a species are grouped into an associated container. The conversion of entities is represented by the transitions that transform tokens. ### Example 1: Disease processes Here’s an example discussed earlier on Azimuth. It describes the virus that causes AIDS. The species are healthy cell, infected cell, and virion (the technical term for an individual virus). The transitions are for infection, production of healthy cells, reproduction of virions within an infected cell, death of healthy cells, death of infected cells, and death of virions. Here the species are yellow circles and the transitions are aqua squares. Note that there are three transitions called “death” and two called “production.” They are disambiguated by a Greek letter suffix. • production ($\alpha$) describes the birth of one healthy cell, so it has no input and one healthy as output. • death ($\gamma$) has one healthy as input and no output. • death ($\delta$) has one infected as input and no output. 
• death ($\zeta$) has one virion as input and no output.
• infection ($\beta$) takes one healthy and one virion as input, and has one infected cell as output.
• production ($\epsilon$) describes the reproduction of the virus in an infected cell, so it has one infected as input, and one infected and one virion as output.

### Example 2: Population dynamics

Here the tokens represent organisms, and the species are biological species. This simple example involves two species, wolf and rabbit:

There are three transitions: birth, which inputs one rabbit and outputs rabbit + rabbit (like asexual reproduction), predation, which converts rabbit plus wolf to wolf plus wolf, and death, which inputs one wolf and outputs nothing.

### Example 3: Chemical reaction networks

Here the entities are chemical units: molecules, isolated atoms, or ions. Each chemical unit is represented by a token in the net, and a container holds all of the tokens for a chemical species. Chemical reactions are then modeled as transitions that consume input tokens and generate output tokens. We will be using the following simplified model for water formation and dissociation:

• The species are H, O, and H2O.
• Transition combine inputs two H atoms and one O atom, and outputs an H2O molecule.
• Transition split is the reverse process, which inputs one H2O and outputs two H and one O.

Note that the model is intended to show the idea of chemical reaction Petri nets, so it is not physically realistic. The actual reactions involve H2 and O2, and there are intermediate transitions as well. For more details, see Part 3 of the network theory series.

### A Petri net simulator

The following Python script will simulate a Petri net, using parameters that describe the species, the transitions, and the initial labelling. It will run the net for a specified number of steps. At each step it chooses randomly among the enabled transitions, fires it, and prints the new labelling on the console.

#### Download

Here is the first Petri net simulator.

#### Running the script

The model parameters are already coded into the script. So let's give it a whirl:

python petri1.py

This produced the output:

```
H, O, H2O, Transition
5, 3, 4, split
7, 4, 3, split
9, 5, 2, combine
7, 4, 3, combine
5, 3, 4, split
7, 4, 3, split
9, 5, 2, split
11, 6, 1, combine
9, 5, 2, split
11, 6, 1, split
13, 7, 0,
...
```

We started out in a state with 5 H's, 3 O's, and 4 H2O's, then a split took place, which increased H by 2, increased O by 1, and decreased H2O by one, then… Running it again gives a different execution sequence.

#### Software structures used in the program

Before performing a full dissection of the code, I'd like to make a digression to discuss some of the basic software constructs that are used in this program. This is directed in particular to those of you who are coming from the science side, and are interested in learning more about programming.

This program exercises a few of the basic mechanisms of object-oriented and functional programming. The Petri net logic is bundled into the definition of a single class called PetriNet, and each PetriNet object is an instance of the class. This logic is grouped into methods, which are functions whose first parameter is bound to an instance of the class. For example, here is the method IsHalted of the class PetriNet:

```
def IsHalted(this):
    return len(this.EnabledTransitions()) == 0
```

The first parameter, conventionally called "this" or "self," refers to the PetriNet class instance.
len returns the length of a list, and EnabledTransitions is a method that returns the list of transitions enabled in the current labelling. Here is the syntax for using the method: ``` if petriNetInstance.IsHalted(): ... ``` If it were the case that IsHalted took additional parameters, the definition would look like this: ``` def IsHalted(this, arg1, ..., argn): ``` and the call would look like this: ``` if petriNetInstance.IsHalted(arg1, ..., argn) ``` Here is the definition of EnabledTransitions, which shows a basic element of functional programming: ``` def EnabledTransitions(this): return filter(lambda transition: transition.IsEnabled(this.labelling), this.transitions) ``` A PetriNet instance holds this list of Transition objects, called this.transitions. The expression “lambda: transition …” is an anonymous function that maps a Transition object to the boolean result returned by calling the IsEnabled method on that transition, given the current labelling this.labelling. The filter function takes an anonymous boolean-valued function and a list of objects, and returns the sublist consisting of those objects that satisfy this predicate. The program also gives an example of class inheritance, because the class PetriNet inherits all fields and methods from a base class called PetriNetBase. #### Python as a language for pedagogical and scientific programming We continue with our digression, to talk now about the choice of language. With something as fundamental to our thinking as a language, it’s easy to see how the topic of language choice tends to become partisan. Imagine, for example, a debate between different nationalities, about which language was more beautiful, logical, etc. Here the issue is put into perspective by David Tweed from the Azimuth project: Programming languages are a lot like “motor vehicles”: a family car has different trade-offs to a sports car to a small van to a big truck to a motorbike to …. Each of these has their own niche where they make the most sense to use. The Petri net simulator which I wrote in Python, and which will soon to be dissected, could have been equivalently written in any modern object-oriented programming language, such as C++, Java or C#. I chose Python for the following reasons. First, it is a scripting language. This means that the casual user need not get involved with a compilation step that is preparatory to running the program. Second, it does provide the abstraction capabilities needed to express the Petri net model. Third, I like the way that the syntax rolls out. This syntax has been described as executable pseudo-code. Finally, the language has a medium-sized user-base and support community. Python is good for proof of concept programs, and it works well as a pedagogical programming language. Yet it is also practical. It has been widely adopted in the field of scientific programming. David Tweed explains it in these terms: I think a big part of Python’s use comes from an the way that a lot of scientific programming is as much about “management” — parsing files of prexisting structured data, iterating over data, controlling network connections, calling C code, launching subprograms, etc — as much as “numeric computation”. This is where Python is particularly good, and it’s now acquired the NumPy & SciPy extensions to speed up the numerical programming elements, but it’s primarily the higher level elements that make it attractive in science. 
Because variables in Python are neither declared in the program text, nor inferred by the byte-code compiler, the types of the variables are known only at run time. This has a negative impact on performance for data intensive calculations: rather than having the compiler generate the right code for the data type that is being processed, the types need to be checked at run time. The NumPy and SciPy libraries address this by providing bulk array operations, e.g., addition of two matrices, in a native code library that is integrated with the Python runtime environment. If this framework does not suffice for your high-performance numerical application, then you will have to turn to other languages, notably, C++.

All this being said, it is now time to return to the main topic of this article, which is the content of Petri net programming.

#### Top-level structure of the Petri net simulator script

At a high level, the script constructs a Petri net, constructs the initial labelling, and then runs the simulation for a given number of steps.

First construct the Petri net:

```
# combine: 2H + 1O -> 1H2O
combineSpec = ("combine", [["H",2],["O",1]], [["H2O",1]])

# split: 1H2O -> 2H + 1O
splitSpec = ("split", [["H2O",1]], [["H",2],["O",1]])

petriNet = PetriNet(
    ["H","O","H2O"],          # species
    [combineSpec,splitSpec]   # transitions
)
```

Then establish the initial labelling:

```
initialLabelling = {"H":5, "O":3, "H2O":4}
```

Then run it for twenty steps:

```
steps = 20
petriNet.RunSimulation(steps, initialLabelling)
```

#### Program code

Species will be represented simply by their names, i.e., as strings. PetriNet and Transition will be defined as object classes. Each PetriNet instance holds a list of species names, a list of Transition objects, and the current labelling. The labelling is a dictionary from species name to token count. Each Transition object contains an input dictionary and an output dictionary. The input dictionary maps the name of a species to the number of times that the transition takes it as input, and similarly for the output dictionary.

##### Class PetriNet

The PetriNet class has a top-level method called RunSimulation, which makes repeated calls to FireOneRule. FireOneRule obtains the list of enabled transitions, chooses one randomly, and fires it. This is accomplished by the method SelectRandom, which uses a random index to choose a transition from the list of enabled transitions.

```
class PetriNet(PetriNetBase):

    def RunSimulation(this, iterations, initialLabelling):
        this.PrintHeader()     # prints e.g. "H, O, H2O"
        this.labelling = initialLabelling
        this.PrintLabelling()  # prints e.g. "3, 5, 2"

        for i in range(iterations):
            if this.IsHalted():
                print "halted"
                return
            else:
                this.FireOneRule()
                this.PrintLabelling()

        print "iterations completed"

    def EnabledTransitions(this):
        return filter(
            lambda transition: transition.IsEnabled(this.labelling),
            this.transitions)

    def IsHalted(this):
        return len(this.EnabledTransitions()) == 0

    def FireOneRule(this):
        this.SelectRandom(this.EnabledTransitions()).Fire(this.labelling)

    def SelectRandom(this, items):
        randomIndex = randrange(len(items))
        return items[randomIndex]
```

##### Class Transition

The Transition class exposes two key methods. IsEnabled takes a labelling as parameter, and returns a boolean saying whether the transition can fire. This is determined by comparing the input map for the transition with the token counts in the labelling, to see if there are sufficient tokens for it to fire.
The Fire method takes a labelling in, and updates the counts in it to reflect the action of removing input tokens and creating output tokens.

```
class Transition:
    # Fields:
    # transitionName
    # inputMap:  speciesName -> inputCount
    # outputMap: speciesName -> outputCount

    # constructor
    def __init__(this, transitionName):
        this.transitionName = transitionName
        this.inputMap = {}
        this.outputMap = {}

    def IsEnabled(this, labelling):
        for inputSpecies in this.inputMap.keys():
            if labelling[inputSpecies] < this.inputMap[inputSpecies]:
                return False  # not enough tokens
        return True  # good to go

    def Fire(this, labelling):
        print this.transitionName

        for inputName in this.inputMap.keys():
            labelling[inputName] = labelling[inputName] - this.inputMap[inputName]

        for outputName in this.outputMap.keys():
            labelling[outputName] = labelling[outputName] + this.outputMap[outputName]
```

##### Class PetriNetBase

Notice that the class line for PetriNet declares that it inherits from a base class PetriNetBase. The base class contains utility methods that support PetriNet: PrintHeader, PrintLabelling, and the constructor, which (with its helpers BuildTransitions and SetDegree) converts the transition specifications into Transition objects.

```
class PetriNetBase:
    # Fields:
    # speciesNames
    # Transition list
    # labelling: speciesName -> token count

    # constructor
    def __init__(this, speciesNames, transitionSpecs):
        this.speciesNames = speciesNames
        this.transitions = this.BuildTransitions(transitionSpecs)

    def BuildTransitions(this, transitionSpecs):
        transitions = []

        for (transitionName, inputSpecs, outputSpecs) in transitionSpecs:
            transition = Transition(transitionName)

            for degreeSpec in inputSpecs:
                this.SetDegree(transition.inputMap, degreeSpec)

            for degreeSpec in outputSpecs:
                this.SetDegree(transition.outputMap, degreeSpec)

            transitions.append(transition)

        return transitions

    def SetDegree(this, dictionary, degreeSpec):
        speciesName = degreeSpec[0]

        if len(degreeSpec) == 2:
            degree = degreeSpec[1]
        else:
            degree = 1

        dictionary[speciesName] = degree

    def PrintHeader(this):
        print string.join(this.speciesNames, ", ") + ", Transition"

    def PrintLabelling(this):
        for speciesName in this.speciesNames:
            print str(this.labelling[speciesName]) + ",",
```

### Summary

We've learned about an important model for computation, called Petri nets, and seen how it can be used to model a general class of entity conversion networks, which include chemical reaction networks as a major case. Then we wrote a program to simulate them, and applied it to a simple model for formation and dissociation of water molecules.

This is a good beginning, but observe the following limitation of our current program: it just randomly picks a rule. When we run the program, the simulation makes a kind of random walk, back and forth, between the states of full synthesis and full dissociation. But in a real chemical system, the rates at which the transitions fire are _probabilistically determined_, and depend, among other things, on the temperature. With a high probability for formation, and a low probability for dissociation, we would expect the system to reach an equilibrium state in which H2O is the predominant "token" in the system. The relative concentration of the various chemical species would be determined by the relative firing rates of the various transitions. This gives motivation for the next article that I am writing, on stochastic Petri nets.

### Programming exercises

1.
Extend the constructor for PetriNet to accept a dictionary from transitionName to a number, which will give the relative probability of that transition firing. Modify the firing rule logic to use these values. This is a step in the direction of the stochastic Petri nets covered in the next article. 2. Make a prediction about the overall evolution of the system, given fixed probabilities for synthesis and dissociation, and then run the program to see if your prediction is confirmed. 3. If you like, improve the usability of the script, by passing the model parameters from the command line. Use sys.argv and eval. You can use single quotes, and pass a string like “{‘H’:5, ‘O’:3, ‘H2O’:4}” from the command line. ### Acknowledgements Thanks to David Tweed and Ken Webb for reviewing the code and coming up with improvements to the specification syntax for the Petri nets. Thanks to Jacob Biamonte for the diagrams, and for energetically reviewing the article. John Baez contributed the three splendid examples of Petri nets. He also encouraged me along the way, and provided good support as an editor. ### Appendix: Notes on the language interpreter The sample program is written in Python, which is a low-fuss scripting language with abstraction capabilities. The language is well-suited for proof-of-concept programming, and it has a medium-sized user base and support community. The main website is www.python.org. You have a few distributions to choose from: • If you are on a Linux or Mac type of system, it is likely to already be installed. Just open a shell and type “python.” Otherwise, use the package manager to install it. • In Windows, you can use the version from the python.org web site. Alternatively, install cygwin, and in the installer, select Python, along with any of your other favorite Linux-style packages. Then you can partially pretend that you are working on a Linux type of system. • The Enthought Python distribution comes packaged with a suite of Python-based open-source scientific and numeric computing packages. This includes ipython, scipy, numpy and matplotlib. The program is distributed as a self-contained script, so in Linux and cygwin you can just execute ./petri1.py. When running this way under cygwin, you can either refer to the cygwin version of the Python executable, or to any of the native Windows versions of interpreter. You just need to adjust the first line of the script to point to the python executable. This entry was posted on Monday, October 1st, 2012 at 6:11 pm and is filed under networks, software. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. ### 24 Responses to Petri Net Programming 1. John Baez says: Great post, David! I hope we pick up enough steam to write some stochastic Petri net programs that tackle really interesting real-world issues, like: • how likely is it that a random stochastic Petri net displays bistability, like an ‘on-off switch’? or • how likely is it that a random stochastic Petri net displays periodic behavior, like a ‘biological clock’? or • what rules for evolution of stochastic Petri nets push them toward interesting behavior? or • take some stochastic Petri nets that are studied in the biology / chemistry literature, and answer some interesting open questions about them. But for now, here’s a naive programming question: what’s an ‘anonymous function’? • Jesse C. 
McKeown says: Anonymous functions are what most functions always were before we forgot they needn’t all have names. Jim, below suggests lambda-abstracts as one way to get anonymous functions. Strictly speaking, any composite term that can be used as a “function” in the program-writing sense might as well be considered an anonymous function. Languages like C tend to discourage function anonymity, and they get around the attendent difficulties (if they are difficulties) with things like “callbacks” and clever manipulation of machine state. C particularly is designed with a byte-chunked RAM machine in mind, and doing things directly to that machine; the things that are used as functions in C are thought of as addresses where a particular bunch of machine instructions start. These addresses can be used as parameters in calling other functions, or returned as values, but returned values won’t be “new” functions or combinates. Other languages, especially those with a “functional programming” style are more encouraging of anonymous functions; lisp comes to mind particularly. Of course, one can compile lisp into C (e.g.), so it ought to be possible to write in a functional style in C, directly; though the things that would be functions in lisp probably wouldn’t be C functions. • David Tanzer says: Interesting questions about random stochastic Petri nets that you posed. They lead to further questions. 1. By what method would we construct a random Petri net? I see definitions on the web of a random graph, where you fix the set of nodes, and then assign a probability to each possible edge — then use these probabilities to build a graph. With Petri nets we have two types of nodes, plus the possibility that an edge from A to B occurs multiple times. The following questions will betray my lack of knowledge of the further reaches of Petri net theory. I’m guessing my way through the terrain, so please correct and/oor elaborate. 2. Bistable and periodic refer to the solutions to the rate equation, right? We could think of the state space of a Petri net with k species either as the non-negative region of $R ^ k$ or $N ^ k$, depending on whether you use a continuous approximation or not. In the continuous case we use a differential equation, the rate equation. ( Is there a discrete analog?) The solutions to the equations give the trajectories in state space, given an initial state. So we can look at attractor points and basins of attraction. Does bistable mean that there are two attractors and the whole (non-negative) region of $R ^ k$ is divided into the basins of attraction for these points? Does periodic means that there are points that cycle back to themselves, or is it a broader notion, meaning that there are points whose trajectories are bounded, but do not converge to an attractor? Can a system be bistable and periodic, in the sense that there are two attractors, but there are also points that are periodic? Basically I’m asking about the character of the solutions to the rate equation, which I have not yet investigated. Can you summarize the main theorems regarding the solutions. 3. Given a stochastic Petri net, is there a decision procedure to say whether it is “bistable,” or “periodic”? Or does one have to resort to a probabilistic test, involving selections of an initial starting vector, and testing to see whether (1) it appears to be headed for an attractor (or infinity), or (2) whether it appears to be a periodic point. Clearly “appears” needs to be defined. 
I realize these are somewhat broad and sketchy questions, but any expert hints you can provide would be great! Thanks • John Baez says: Bistable and periodic refer to the solutions to the rate equation, right? Right. We could think of the state space of a Petri net with k species either as the non-negative region of $\mathbb{R}^k$ or $\mathbb{N}^k$, depending on whether you use a continuous approximation or not. In the continuous case we use a differential equation, the rate equation. Right. My book with Jacob Biamonte wound up spending a lot of time on the rate equation. We explained that when a Petri net has ‘deficiency zero’ and is ‘weakly reversible’, people have a good understanding of solutions of the rate equation. There are just as many equilibrium solutions as you’d expect, no more and no less. They’re all stable, and there are no periodic solutions. Even better, in this case, the ‘Anderson–Craciun–Kurtz theorem’ lets you quickly obtain equilibrium solutions of the master equation from equilibrium solutions of the rate equation. But this case is, in a sense, the boring case! ( Is there a discrete analog?) Yes, but people haven’t studied this as much. In fact, I don’t know if I’ve seen it anywhere, but I could easily describe it. It might be fun to generalize the zero deficiency theorem to this case, if nobody has yet. I think I could do it. The solutions to the equations give the trajectories in state space, given an initial state. So we can look at attractor points and basins of attraction. What I’m calling a ‘stable equilibrium’ is exactly the same as what you’re calling an attractor. When the ‘zero deficiency’ condition holds, we have a good handle on the attractors, and there’s only one attractor in each ‘stoichiometric compatibility class’. Does bistable mean that there are two attractors and the whole (non-negative) region of $\mathbb{R}^k$ is divided into the basins of attraction for these points? No, but close. It means there are two attractors in some stoichiometric compability class. I’m reluctant to explain ‘stoichiometric compatibility class’ in general, having already done so here, but if you think about the water formation and dissociation example you described, you’ll quickly get the idea. There’s one stoichiometric compatibility class for each choice of these two quantities: 1) number of H2O’s plus the number of O’s 2) twice the number of H2O’s plus the number of H’s These quantities can never change, since hydrogen and oxygen atoms can’t be created or destroyed by the reactions in this Petri net. So, a solution of the rate equation or master equation can only wander around in one stoichiometric compatibility class. Thanks to the deficiency zero theorem, the rate equation has one attractor in each stoichometric compatibility class. Where this attractor is depends on the rate constants for the formation and dissociation of water. This is a nice example of the zero deficiency theorem. Does periodic means that there are points that cycle back to themselves, Yes. The time it takes to come back is called the ‘period’. or is it a broader notion, meaning that there are points whose trajectories are bounded, but do not converge to an attractor? No, that broader notion would includes not just periodic solutions but quasiperiodic and chaotic ones. Can a system be bistable and periodic, in the sense that there are two attractors, but there are also points that are periodic? What’s periodic is not the system but a particular solution. 
A system can have any number of attractors and any number of periodic solutions. The word ‘bistable’ focuses undo attention on the number two, which is interesting just because it’s the first number bigger than one. A light switch is bistable, and people are interested in ‘switches’ in biochemistry, but a general switch could have many stable settings. Basically I’m asking about the character of the solutions to the rate equation, which I have not yet investigated. Can you summarize the main theorems regarding the solutions. I summarized the deficiency zero theorem and Anderson-Craciun-Kurtz theorem rather vaguely above. I did it quite precisely in the book. There are also lots of other theorems, like the deficiency one theorem. 3. Given a stochastic Petri net, is there a decision procedure to say whether it is “bistable,” or “periodic”? There are theorems—most notably the deficiency zero theorem and deficiency one theorem–that give necessary or sufficient theorems for these properties, but I don’t believe a full-fledged decision procedure is known. Or does one have to resort to a probabilistic test, involving selections of an initial starting vector, and testing to see whether (1) it appears to be headed for an attractor (or infinity), or (2) whether it appears to be a periodic point. Clearly “appears” needs to be defined. People can already do a lot better than this brute-force method, thanks in part to all the theorems people have shown, but I think there’s a huge unexplored territory here. Mathematical chemists are interested in this problem, and it seems to be tricky. • Vasileios Anagnostopoulos says: A naive programming answer : Creating an anonymous function is like cooking a cake in a pastry shop. The customer is the executor of the function. He will refer to it by name (like myPrecious, theYummy) and he uses it as he wishes (eating it the same day, eating half tomorrow and half today, eating it now, giving it as a present). The person who cooks it will not use it so there is no reason to give it a name. • Vasileios Anagnostopoulos says: Some more : A pure anonymous function has no side effects. If the cooker has forgotten his keys inside he will not be able to enter his house after selling it. This cake (anonymous function) is not pure. But if it has no side effects (after giving it away nothing happens) then it is pure. • davidtweed says: Another way of putting it is the difference between someone you introduce yourself to, say a new acquaintance and the person behind the till in a shop you’ll visit once. There’s no reason you can’t avoid asking the name of an acquaintance,but it makes things harder,and likewise you could find the name of the server,but its not clear it’s worth it even for them as they’re probably being tracked on throughout. Sometimes naming isn’t worth doing,even in programs. 2. jimstuttard says: The lambda in lambda calculus is an anonymous function except for its name “lambda” An example is: \x -> x**2 where x is a variable and \ is a lambda.. 3. Jenny says: producer input (fine) – process – output producer input (virus) – process – output reproducer input (fine) – process – output reproducer input (virus) – process – output repeating… basically input (fine)- process wont be infected- output(fine) input (virus) – - process will be infected- output(virus) • David Tanzer says: Can you elaborate a bit in words the thought process that you are going though here? 4. John Baez says: Okay, thanks everyone: I think I get the idea of an anonymous function. 
If I understand right, $f(x) := x^2 + 1$ is ‘nonymous’ (my silly term), since we’ve given it the name $f$, but $\lambda x. (x^2 + 1)$ is an ‘anonymous’ description of the same function in lambda-calculus terminology, and lots of mathematicians would use the anonymous description $x \mapsto x^2 +1$ 5. sean matthews says: I think you may be missing something here. Your discussion gets quite deep into the problems of building a petri-net simulator precisely in Python, but reading it, it is not clear to me that the concepts you find in Python are particularly relevant for understanding Petri-nets (this does not mean that you can’t build a good petri-net simulator in Python, only that what is interesting is the result, not the details of the implementation). When I think about how to implement a Petri-net simulator (speaking as someone who first encountered them at least 30 years ago, when I was about 17) I don’t think in terms of object/class structures at all, but in terms of possible paths through a relational algebra. (I had the same feeling reading James Sethna’s book on statistical physics a while ago, where he provides a lot of class-based (python) code for working with graphs, when sparse matrixes and an algebraic approach would have been much more effective). The result would be more concise and map much better onto the problem, so that intutions on one side could be reinforced on the other. I’m sceptical that this is the case with something built around the limitations of Python. Object-orientation is a bit like the gibbs vector calculus: it works and it is better than some of the alternatives, but algebraic geometry can do it better! [you know, blog comment boxes are too short for this sort of point, it suddenly occurs to me] • David Tanzer says: Hi Sean, Thanks for your reflective comments. I’d be very interested to hear more about your relational-algebra interpretation of Petri net structures. In terms of the article itself, I do not believe that I have missed the mark in terms of the aim that I set out, which was to address a three-fold audience of programming people who want to learn about science applications, science people who want to learn about programming, and any other curious folks. I wrote it as, for lack of a better term, a “praxis paper,” in the sense of practical application of a field of study. So first I introduced the concepts, and then moved to a pragmatic construction stage. For the language, I choose one that is both capable of abstraction, and which has a substantial application base. To give the idea of this genre in simpler terms, consider an article that introduces regular polyhedra, and the shows how to build them with common household materials. I would further maintain that this “household materials” programming language can faithfully represent the mathematical concepts of many application domains, which are presented in terms of objects, functions and relations. Take for example a definition of a Petri net. A Petri net P consists of a pair (S,T) where S is a set of species names, and T is a set of transitions. Each T is defined by a pair (I,O), where I: S -> N is the input degree map, and O: S -> N is the output degree map. A labelling L: S -> N is a map that counts the number of tokens for each species. A transition T is enabled in a labelling L if for each s in S, L(s) >= InputMap(T)(s). The definition itself introduces nouns, which means, in the first place, objects. Functions are involved in the definition. 
This one happens not to involve any relations. So any language that allows us to work with objects, functions, and relations is a Useful Thing. If you look at the definitions in the program I gave, sure there is some syntactic baggage, but if you look through that, you will see an isomorphic copy of the tuples and functions used in the above mathematical definition. Consider, for example, the method IsEnabled of a Transition. It is precisely a boolean valued function of a Transition (with a parameter that is conventionally called “this” or “self”) and a labelling, and the body of the function says exactly what I said above in natural language. The beauty of this kind of formal language is that it is mathematics that runs. Now it is true, as you pointed out, that I gave some discussion of Python per se, which is incidental to the topic of Petri net programming. I was aware of this, and that is why I introduced those sections as part of a “digression.” I put this in to get as many readers on board as possible. Since the program text was to be presented in Python, if only because that is a native language of the author, then why not prep the uninitiated. Finally, in the spirit of hands-on and “literate” coding, we _do_ want to present the code and dissect it. This is not merely an “implementation.” We’ve learned something about a theory, and now we want to learn the method of translating this theory into something that can do cool things. We look _into_ the code, to see the isomorphic copy of theory. * * * Your view of Petri nets in terms of “possible paths through a relational algebra,” is intriguing, but I’d need to hear more about it to really understand what you mean. Let’s suppose that your conceptualization — which undoubtedly is based on far more experience with Petri nets than I have — gives some kind of structural insight in the semantics of Petri nets, which can be put to good use in simulator. I have no reason to doubt this. Here is an analogy to explain the relation of your approach to mine. Suppose we gave an introductory “praxis paper” on Abelian group theory. A group is first defined, an then an “Abelian” group is introduced as one which has the constraint that the group operation is commutative. After giving some examples, we could then introduce some theory-isomorphic code to test if the group operator is commutative, perform multiplications, using the simplifications that come along with commutativity, etc. This could be some useful pedagogical code, that readers could tinker with. Now, based on further analysis of the domain, along comes the Structure Theorem for finite Abelian groups, which gives a powerful, simplifying representation for such groups: they are isomorphic to direct products of cyclic groups. This leads to a more incisive representation for the groups, and new an better data structures to be used in a computational system. But this latter advance doesn’t negate the pedagogical or experimental value of the original code, which is an executable expression of group theory, using the general terms of group theory. Furthermore, once the new theory based on cyclic groups is put forward, it would be very natural then to translate these definitions into isomorphic forms in the household computing language, and use that for an expository presentation of the more advanced simulator. 
So if you share with us the definitions behind your relational-algebraic approach to Petri net computation, we might ask for your permission to express it in a household abstract programming language and blog about it — with due credit to you of course! Seriously though, in addition to giving us some more color on your idea, check out the discussions taking place on the Forum of the Azimuth Project. There is a lot of ongoing, active interest in Petri nets, and their potential application to areas such as climate modelling. (Theme question: do we live in a bistable climate?) It’s clear that you have an experienced eye for this problem domain, and if you are interested, you can make a real contribution to our ongoing research. To become a posting member of the group only takes a few steps. Regards, Dave P.S. – This invitation is generally extended to anyone who wants to discuss and work on the computational, mathematical and scientific challenges associated with environmental problems. This is a focal point, but is not taken in a restrictive sense. For example, we have recently been discussing Qinglan Xia’s gorgeous paper “The Formation of a Tree Leaf,” at http://www.ma.utexas.edu/users/qlxia/Research/leaf.pdf, which presents a simple physical model that accounts for the shape of tree leaves, in terms of an optimization process that seeks to minimize transport costs. from the root to the cells. • sean matthews says: Hmmm… My point wasn’t really about petri nets, and more about the relevance of the specific programming language. However, now that I am confronted with the problem, I suppose I should put my money where my mouth is. How would I implement petri-nets? Call it Graham’s principle that the structure visible in a program should correspond to the problem, and not to the programming language. The ideal for this is usually Lisp (for which see Paul Graham passim), or, say, for large scale linear algebra, APL (alas only once upon a time) or (today) Matlab. For working with relations on tuples I think Prolog is likely to provide a clean map between concrete and abstract. A petri-net is a bipartite graph G plus a set of counters on one of the types of nodes, T, so that the vector of these counters defines a state. G defines a relation R, of transitions from state to state; thus a particular run of a simulation of a petri-net has the form T1 -R-> T2 -R-> T3 -R-> T4 -R-> … Prolog allows me to model tuple transitions very easily. If I model numbers ‘peano-style’, as s(s(…(0))), and T has the form t(n1, n2, n3, ….) Then I can model a rule directly in Prolog as: r(rulename, t(n1.old, n2.old, n3.old, ….), t(n1.new, n2.new, n3.new, ….)). which is simply a declared relation between states (I’ve also included the name – which gives me a bit more control and documentation) and I am done. So how does this work in practice? Well I can write down the the ‘Chemical reaction’ net as: % t(O, H, H2O). r(split, t( O , H , s(H2O)), t(s(O), s(s(H)), H2O ). r(combine, t(s(O), s(s(H)), H2O ), t( O , H , s(H2O)). Note that rules are directly declarative and I have completely separated them from any other interpretive framework – I can modify my non-deterministic model, or implement breadth first or bounded depth-first search on the space easily as extensions. I can even run my model backwards, should I so want. A basic simulator (with a bound on the number of transitions) then looks like: simulate(T, T, [], 0). 
simulate(Tinit, Tfinal, [(Tinit, R) | L], N) :- choose(R), r(R, Tinit, Tnext), PrecN is N – 1, simulate(Tnext, Tfinal, L, PrecN). simulate(T, T, [], N) :- not r(_, T, _), N > 0. choose is a bit more messy, since it should enumerate the possible rules in random order. A state specific version (we can do much better - more general – with a bit more space) would be choose(R) :- (0 =:= rand(2) -> (X = split ; X = combine) | (X = combine; X = split ). A call to the simulator, with a bound of 20 transitions, is then: simulate(t(s(s(s(s(s(0))))), s(s(s(0))), s(s(s(s(0))))), Tfinal, L, 20). Not sure there is enough code here to count as a program (<= 19 lines). I don't have a prolog interpreter to hand – my employers are very unenthusiastic about my installing my own software, so there may be typos above, but I hope the idea is clear. Note the argument here is not in favor of Prolog (a similar-flavored - though not quite identical and not quite so concise – implementation would be easy in a language like Scheme), but in favor of choosing your tools so as to move you close to the problem rather than trying to move the problem closer to your tools, and thus, in this case, no objects/classes; if you were designing a user-interface, then an object/class model would be close to your problem. 6. Keith says: How does this compare to something like the Stochastic Pi Machine (http://research.microsoft.com/en-us/um/cambridge/groups/science/tools/spim/spim.htm)? • David Tanzer says: Keith, Thank you for the link to this magnificent program! For the comparison, right off the bat we have that: My program is a proof-of-concept implementation of a simulator for Petri nets. It is meant to teach the concepts, and to serve as a kind of “clay” that actually works. That program is a full-fledged, incredibly well presented implementation of a simulation system for reactions that use the formalism of pi-calculus, not Petri nets. I know next to nothing about the pi-calculus, but now that I see what it can do, I will learn more about it!! Can anyone recommend a good tutorial introduction to the pi-calculus? I find the Wikipedia presentation to be formalistic and unintuitive. Thanks. • davidtweed says: There’s surely better actual tutorials, but here’s a lightning introduction by Mike Stay. • David Tanzer says: Wow. I sort of get the idea, but not well. If the stochastic pi-machine is pi-calculus in action, then that is proof that the calculus is cool and useful. So I’m trying to make sense of the code that have for the sample reactions. Here a document that explains the some of the examples in the Sliverlight reaction simulator: http://research.microsoft.com/en-us/projects/spim/gillespie.pdf It includes radioactive decay, and Lotka reactions. Here is the Lotka reaction logic: directive sample 5.0 1000 directive plot Y() new [email protected]:chan new [email protected]:chan let X() = ?c1; X() let Y() = do !c1; (Y() | Y()) or !c2 or ?c2 run (X() | 10 of Y()) The following questions are directed generally to anyone here. How much of this is pure pi-calculus, and how much is in the stochastic extension? The channels have rate constants associated with them, but is that the extent of the extension? Can anyone give a detailed analysis and explanation of this code, to explain how it is acheiving the marvelous simulation result? Remember, I don’t really know what pi-calculus is, so you’ll have to start from a point of few assumptions. 
Here is the the language definition document for stochastic pi-calculus: http://research.microsoft.com/en-us/projects/spim/language.pdf Thanks • John Baez says: The pi calculus is meant as a distilled framework for describing ‘concurrency’—roughly, computer networks where nodes can create channels to other nodes and send messages along these nodes—much as the lambda calculus is a distilled framework for describing computations done sequentially by a single processor. Personally I find these distilled frameworks, that seek to get everything done with as few primitives as possible, less clear than a category-theoretic description of large classes of frameworks. When you try to distill everything down to a few primitives you wind up making a lot of arbitrary choices, and the resulting framework tends to look cryptic. Of course, category theory also looks cryptic to those who haven’t studied it… but it has the advantage of being so generally useful that once you learn it and formulate some idea in this way, a bunch of connections to other ideas are instantly visible—ideas in math, logic, physics, and computation, for starters. Right now I have just one grad student: Mike Stay, who works at Google. He’s working with me on categories and computation. In that post David Tweed mentions, he was endeavoring to explain the pi calculus to me, in response to my plea for help. As a result, he got interested in describing By now he’s gone a lot further. See: • Mike Stay, Higher categories for concurrency, n-Category Café. for the state of his thinking a year and a half ago. Since then he’s been working on this subject with Jamie Vicary and making even more progress. By now it’s clear to me that bigraphs, the pi calculus and other frameworks for concurrency would really profit from a category-theoretic description, and that when this is done it’ll look a bit like a categorification of Petri net theory, with symmetric monoidal bicategories replacing symmetric monoidal categories. In other words: instead of describing a bunch of ‘particles’ interacting in time, these fancier frameworks describe a bunch of labelled graphs interacting and changing topology in time. The vertices of the graph are what I called ‘nodes’, and the edges are what I called ‘channels’. The nodes can do things (e.g. compute) and send messages to each other along the edges. This sort of framework might be useful in biology and ecology, too. It would certainly be relevant to the ‘smart grid’. 7. Boris Borcic says: While I don’t feel this way of running Petri nets is very useful, I used the opportunity of the proposed exercise to extend the engine for the case of transitions with relative rates. This, to better show the flexibility of Python as a base to create DSL – domain specific languages. In this case a DSL for petri nets, which for the three examples of the post, allows to encode them entirely as ```@petrinet def AIDS() : α = 0.1 | _ >> healthy β = healthy+virion >> infected γ = 0.2 | healthy >> _ δ = 0.3 | infected >> _ ε = infected >> infected+virion ζ = 0.2 | virion >> _ @petrinet def rabbits_and_wolves() : birth = rabbit >> 2*rabbit predation = rabbit+wolf >> 2*wolf death = 0.3 | wolf >> _ @petrinet def chemical_reaction() : split = H2O >> 2*H+O combine = 2*H+O >> H2O ``` The optional number and vertical bar at the beginning of a rule expresses a weight or unnormalized rate different from unity. The code is written for Python 3.3 but except for the use of greek letters in the “AIDS” petri net, should also run on Python 2.7. 
The implementation of the syntax in 65 LOCs is minimalistic, as no step is taken ensure graceful errors when the DSL code strays from the syntax illustrated. The way to invoke the code is ```>>> for step,transition,labelling in AIDS(healthy=5,virion=5): print(transition,"==>",labelling) if step>10 : break β ==> 4*healthy+1*infected+4*virion ε ==> 4*healthy+1*infected+5*virion β ==> 3*healthy+2*infected+4*virion β ==> 2*healthy+3*infected+3*virion ε ==> 2*healthy+3*infected+4*virion β ==> 1*healthy+4*infected+3*virion ε ==> 1*healthy+4*infected+4*virion β ==> 5*infected+3*virion ε ==> 5*infected+4*virion ε ==> 5*infected+5*virion ε ==> 5*infected+6*virion ``` And here is the code. I just hope the sourcecode quoting works as advertised :) ```# -*- coding: utf-8 -*- from __future__ import print_function from collections import Counter class Bag(Counter) : weight=1 transitions=[] output={} def __rmul__(self, other) : return Bag({s:n*other for s,n in self.items()}) def __add__(self,other) : return Bag(Counter.__add__(self,other)) def __rshift__(self,other) : self.output=other self.transitions.append(self) return self def __ror__(self,other) : self.weight=other return self def __call__(self,other) : return Bag(other-self+self.output) def __repr__(self) : return '+'.join("%s*%s" % (n,s) for s,n in sorted(self.items()) if n) or '{}' def petrinet(fun) : code = getattr(fun,'__code__',0) or fun.func_code species={ n : Bag([n]) for n in code.co_names } species['_'] = Bag() Bag.transitions[:]=[] eval(code,species,{}) transitions=Bag.transitions[:] for n,t in zip(code.co_varnames,transitions): t.name=n del species['__builtins__'] del species['_'] def petrirun(**labelling) : from random import random from itertools import count labelling=Bag(labelling) for step in count(1) : possible=[t for t in transitions if t == t & labelling] if not possible : break pick=random()*sum(t.weight for t in possible) for k,transition in enumerate(possible) : pick-=transition.weight if pick<=0 : labelling=transition(labelling) yield step,transition.name,labelling break petrirun.species=species petrirun.transitions=transitions return petrirun ``` 8. David Tanzer says: Cool. Can you say a bit about how these DSL language constructs are working together to make this program run? • David Tanzer says: I.e., How does it work? I am not familiar with these 2.7 language constructs. Looking it over some more, I’m getting some hazy pciture of it.. The syntax “@petrinet ___” is a way to pass the defined set of transitions to the function “petrinet.” What is the type of the defined objects like “chemical_reaction.” Are they something like rulesets? And what is the general role of the symbols |, _ and >>. Yet you call the object AIDS like a function. Above and beyond these language specifics, can you post here a small amount of dissection of the code? Saying essentially, here is what I wanted to achieve, and these are the mechanisms I used, which work together correctly because of XYZ. Thanks. 9. Boris Borcic says: The only thing that’s 2.7 is the Counter library class (itself a subclass of dict) which saves me some definitions, such as that of the & operator by a special method of the user-defined subclass Bag. That subclass itself (through special __methods__ that are the Python way of overloading operators) serves to implement the syntax for transitions. This is comparable in power and limitations to making up user syntax by defining operators in Prolog. 
“Practicality beats purity” is a motto of Python, and this is reflected in using that single general-purpose subclass Bag to represent at once species, transition inputs, transitions outputs, and transitions (given this, the name of the subclass is debatable – as it doesn’t properly fit the last specialization). The implementation is a hack, and can be largely characterized as using reflection to re-target Python’s own compilation engine. The syntax @petrinet is what’s called a decorator (iirc it was borrowed from Java around python 2.4), and the way it works is that the result of compiling the function definition that follows, gets passed to the function petrinet, and the return value bound instead to the name of the defined function in the outer scope. The function petrinet() takes apart the compiled function object which allows it to create bindings for all unbound variables meant to name species, to values that are singleton Bags with 1 exemplary of the name or species token counted in. The code of the function is then evaluated while the Bag class of these bound values provides appropriate definitions for the algebraic syntax of transitions. The most flaky is the cheap way transition names get associated with transitions; this is done by simply mapping evaluations of the >> operator in the sandboxed function evaluation, in order, to the names of local variables listed by the compiled function object. This boils down to the unenforced requirement that the decorated petri net function is formed of simple assignment statements to distinct variables, of the form transition_name = (expression involving a single call to >>) that is, with the RHS an expression involving exactly one evaluation of the >> operator. Also, the function must be declared with an empty parameter list or things will break. The function petrirun is there to demonstrate that the intended semantics is captured (in a more realistic setting the procedural interpretation of the petri net would be better decoupled). petrirun() gets instantiated by petrinet() for each petri net as the executable object. It is actually a generator because it returns values with a yield statement, what allows to pull out of it, subsequent values on demand, for instance as the generator in a for-loop as demonstrated above. The list of transitions and the list of species are attached to it as user function attributes for allowing later user access. The petrirun instance should also be renamed to the decorated function name (to appear under that name when coming out in the REPL or tracebacks) but that was left out. petrirun() is also designed to exploit named parameters function call syntax to serve the stipulation of the initial labelling of the run. 10. In the previous article, I explored a simple computational model called Petri nets [...]
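Boris's explanation of the DSL above boils down to two Python mechanisms: operator overloading through special methods (each `a >> b` expression calls `__rshift__`, which can record a transition as a side effect), and a decorator, which receives the function object being defined and rebinds its name to whatever the decorator returns. The toy sketch below is my own simplification, not Boris's actual `Bag` class: the species here are ordinary module-level objects rather than names recovered by introspecting the compiled function, and the `2*H + O` bag arithmetic is dropped.

```
class Species:
    registry = []                      # transitions recorded while a body runs

    def __init__(self, name):
        self.name = name

    def __rshift__(self, other):       # evaluated for expressions like  a >> b
        Species.registry.append((self.name, other.name))
        return self

def collect(fun):
    """Run the decorated body once and return the recorded (input, output) pairs."""
    Species.registry = []
    fun()                              # each >> in the body fires __rshift__
    return list(Species.registry)

H, H2O = Species("H"), Species("H2O")

@collect
def chemical_reaction():               # after decoration this name is bound to
    split = H2O >> H                   # the returned list rather than to a
    combine = H >> H2O                 # function, just as @petrinet rebinds it

print(chemical_reaction)               # [('H2O', 'H'), ('H', 'H2O')]
```

The real `petrinet` decorator does considerably more (as Boris describes, it inspects `code.co_names` to create the species bindings and returns a `petrirun` generator), but the control flow is the same: evaluate the body once, harvest the side effects of the overloaded operators, and rebind the name to the result.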
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255252480506897, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/1088/efficient-way-to-count-the-number-of-zeros-at-the-right-end-of-a-very-large-nu/1092
# efficient way to count the number of zeros at the (right) end of a very large number If I want to count the number of zeros at the (right) end of a large number, like $12345!$, I can use something like: ````Length[Last[Split[IntegerDigits[12345!]]]] ```` But this seems clumsy, since it's potentially doing the full work of `Split[]` to the whole list of digits when all I need is the length of the run of $0$s at the end of the list. Is there a more efficient (and particularly more Mathematica-elegant) way to do this? (edit: the answer for the example should be 3082) - 1 – J. M.♦ Feb 1 '12 at 3:12 1 @J.M.: Yeah, I was actually using Mathematica to check the results of applying that formula by hand and I wasn't happy with the Mathematica code that I came up with. – Isaac Feb 1 '12 at 3:15 ## 6 Answers For general large integers `n`, I don't know if there's a better method than `Min[IntegerExponent[n, 5], IntegerExponent[n, 2]]`. Or more compactly, `IntegerExponent[n, 10]` or `IntegerExponent[n]`. - I liked yours better – Rojo Feb 1 '12 at 3:24 This is certainly more concise and "Mathematicaic" than what I'd cobbled together. It also seems to be 3-7 times faster. – Isaac Feb 1 '12 at 3:55 If you are strictly interested in the number of trailing zeros in factorials $n!$, as the example in your question suggests, then consider the number of pairs of 2 and 5 in all the factors of numbers 1 through $n$. There is always a 2 to match a 5, so the number of fives gives the number of zeros. Integers divisible by 5 contribute one 5 to the total. Integers divisible by 25 contribute one additional 5, and so on. The maximum power to consider is `Floor[Log[5,n]]`. This method avoids time- and memory-consuming calculation of $n!$, and is about 50 times faster than `IntegerExponent` on my machine. ````NumberOfFives[n_Integer] := Total[Floor[n/5^Range[Floor[Log[5,n]]]]] ```` However, the fastest method I've found to calculate the exponent of prime $p$ in $n!$ is the following: ````PrimeExponent[n_Integer, p_Integer] := (n - Total[IntegerDigits[n, p]])/(p - 1) ```` which, on my machine, is about three times as fast as Mr. Wizard's answer: ````Tr@Floor@NestWhileList[#/5` &, #/5`, # > 1 &] & @ 12345 ```` - 1 Welcome to Mathematica.se. From the looks of it, you've been using Mathematica for quite some time (or you learn very quickly)! – David Carraher Oct 13 '12 at 0:57 1 As David said, welcome! You're selling this method short. With very large numbers my method loses precision and `5` ` needs to be changed to `5` to be accurate. With this, and a big number, e.g. `RandomInteger[1*^5000]`, your code is about 700 times faster than mine. Nicely done. – Mr.Wizard♦ Oct 13 '12 at 1:35 Alternatively: `NumberOfFives[n_Integer] := Total[Quotient[n, 5^Range[IntegerLength[n, 5] - 1]]]` – J. M.♦ Oct 24 '12 at 15:12 Exactly the same method I've come up during computation of million factorials using C. – Mohsen Afshin Dec 23 '12 at 20:56 Here is a recursive divide-and-conquer. There are probably nicer ways to code it. ````trailingZeros[n_, b_] := Module[ {scale=Log[b,N[n]], sqrt, ndigits}, If [scale<1, Return[0]]; sqrt = Ceiling[scale/2]; ndigits = IntegerDigits[n, b^sqrt, 2]; If [Last[ndigits]==0, sqrt + trailingZeros[First[ndigits],b], trailingZeros[Last[ndigits], b]] ] In[39]:= Timing[trailingZeros[4234567!, 33]] Out[39]= {6.740000, 423454} ```` In terms of speed, it is essentially identical to J.M.'s approach. It just shows how one might do this were there no `IntegerExponent` function available. - +1 interesting... 
– Rojo Feb 1 '12 at 17:44 First thing that comes to mind is something like ````LengthWhile[Reverse@IntegerDigits[12345!], # == 0 &] ```` Clearly a compiled version, procedural, must be faster but less MMA elegant - Right off, about twice as fast as my code and pretty elegant. – Isaac Feb 1 '12 at 17:09 Specific to factorials: ````Tr@Floor@NestWhileList[#/5` &, #/5`, # > 1 &] & @ 12345 ```` 3082 Or shorter: ````Tr@Floor[# / 5`^Range@Log[5, #]] & @ 12345 ```` - 3 I'd have done `Tr@Rest@NestWhileList[Quotient[#, 5] &, #, # > 1 &] &@12345` to implement de Polignac-Legendre myself... – J. M.♦ Feb 1 '12 at 4:25 The first one and @J.M.'s suggestion seem to be about equally fast; the second one seems to take about 5-10 times as long. Is there a reason to use `Tr[]` as opposed to `Total[]`? – Isaac Feb 1 '12 at 21:45 @Isaac: He just wants it short, methinks (and it does work). I prefer using `Total[]` myself for purposes of readability (and since a matrix trace isn't what's being computed here)... likely it's the flooring of the logarithm that slows things down in the second snippet. – J. M.♦ Feb 1 '12 at 21:51 @Isaac I use `Tr` most often. It is very fast on packed arrays. Also, I find both `Tr` and `Total` a little ambiguous, in that they are multipurpose functions; `Plus @@` is clearer, but more characters and often slower. Each has its place. – Mr.Wizard♦ Feb 1 '12 at 21:55 BTW @Isaac: for completeness, you could also try the version using `FixedPointList[]` instead of `NestWhileList[]`: `Total[Rest[FixedPointList[Quotient[#, 5] &, 12345, SameTest -> (#1 <= 1 &)]]]`. – J. M.♦ Feb 1 '12 at 21:58 Perhaps ```` StringCases[ToString[12345!], {Longest[x : "0" ..] ~~ EndOfString} :> StringLength@x] ```` Or, with regular expressions, ```` StringCases[ToString[12345!], RegularExpression["(0+)$"] :> StringLength["$1"]] ```` - lang-mma
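For readers following the thread without Mathematica at hand, the two counting ideas discussed above, the de Polignac-Legendre sum $\sum_{k \ge 1} \lfloor n/5^k \rfloor$ and the digit-sum form $(n - s_p(n))/(p-1)$, are easy to cross-check in plain Python. This is only an illustrative sketch: the function names are mine, and the brute-force version is there purely as a sanity check.

````
from math import factorial

def trailing_zeros_legendre(n):
    # Number of trailing zeros of n! as the sum of floor(n / 5^k), k >= 1.
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

def prime_exponent_digit_sum(n, p=5):
    # Exponent of the prime p in n!, via (n - digit sum of n in base p) / (p - 1).
    digit_sum, m = 0, n
    while m:
        digit_sum += m % p
        m //= p
    return (n - digit_sum) // (p - 1)

def trailing_zeros_direct(n):
    # Brute force: strip the zeros off the decimal expansion of n!.
    s = str(factorial(n))
    return len(s) - len(s.rstrip("0"))

assert trailing_zeros_legendre(12345) == 3082
assert prime_exponent_digit_sum(12345) == 3082
assert trailing_zeros_direct(12345) == 3082
````

As the answers point out, counting factors of 5 is enough here, since the exponent of 2 in $n!$ always exceeds the exponent of 5.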
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9115663170814514, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2380/how-garch-arch-models-are-useful-to-check-the-volatility?answertab=oldest
# How GARCH/ARCH models are useful to check the volatility? Below a R code wrote by the moderator @richardh (whom I want to thank again) about ARCH/GARCH models. ````library(quantmod) library(tseries) getSymbols("MSFT") ret <- diff.xts(log(MSFT$MSFT.Adjusted))[-1] arch_model <- garch(ret, order=c(0, 3)) garch_model <- garch(ret, order=c(3, 3)) plot(arch_model) plot(garch_model) ```` My focus is to understand if the volatility of the returns is constant during all the series. I don't understand how ARCH/GARCH models could help me understading this kind of aspect, at the moment the operations I do are: • Calculate the % returns of the stocks • Linear regressione like: lm(A~B) where A and B are the stocks returns (%) • Passing the residuals of the linear regression to the unit root tests. now the problem is to understand if the volatility is constant (take a look at the chart below, that problem is clearly visible), so the question is: How can I understand if the volatility is not constant reading ARCH/GARCH model EDIT: ````garch_model <- garch(rnorm(1000), order=c(3, 3)) > summary(garch_model) Call: garch(x = rnorm(1000), order = c(3, 3)) Model: GARCH(3,3) Residuals: Min 1Q Median 3Q Max -3.394956 -0.668877 -0.008454 0.687890 3.221826 Coefficient(s): Estimate Std. Error t value Pr(>|t|) a0 7.133e-01 7.156e+00 0.100 0.921 a1 1.752e-02 3.750e-02 0.467 0.640 a2 6.388e-03 1.924e-01 0.033 0.974 a3 6.486e-14 1.711e-01 0.000 1.000 b1 7.396e-02 1.098e+01 0.007 0.995 b2 8.052e-02 1.120e+01 0.007 0.994 b3 8.493e-02 4.279e+00 0.020 0.984 Diagnostic Tests: Jarque Bera Test data: Residuals X-squared = 1.4114, df = 2, p-value = 0.4938 Box-Ljung test data: Squared.Residuals X-squared = 0.0061, df = 1, p-value = 0.9377 > ```` garch_model\$fitted.values - just want to have a quick question: how can we get all the graphs when plotting a "garch" object? R just request us to skip through all of those until the ACF graphs. Thanks – Long Apr 1 at 8:52 ## 2 Answers ARCH and GARCH are, by essence heteroskedastic models, that is, with non-constant volatility. If you fit these models to your sample, it will provide you with a time series of the volatility for each point (you can construct it actually). If the values are not the same for all $t$, then the volatility is not constant, according to these models. What you are looking to do here is to fit the model (GARCH or ARCH) to your time series (look at the GARCH definition). That means that the algorithm in your `garch` fonction basically finds the parameters that match the best your sample. As you can see on the description of the `garch` function, you get different information on the returns. With your parameters you can recreate your $\sigma_t^2$, which is the volatility at time $t$ (hence, it's a time series). If it's not constant (or say, relatively constant) you can see that you detected volatility clusters in your series. To say it differently, plot your time series of $\sigma_t^2$. If it is a straight line, the vol. is constant. - thank you so much for the reply. I have two doubts about your answer: 1. what do you mean with "With your parameters you can recreate your σ2t, which is the volatility at time t." ? can you give me a small example with R? 2. Saying "If it's not constant (or say, relatively constant) you can see that you detected volatility clusters in your series." probably means that you checked the volatility, the doubt is HOW you can estabilish that the volatility is NOT constant. Do I have to apply KPSS test (or other unit root test) to σ2t? 
Let me know, thank you! – Dail Nov 14 '11 at 9:41 I added the summary(garch_model) on my question. I have used for convenience rnorm(1000) as "timeseries" – Dail Nov 14 '11 at 9:59 try plotting `garch_model$fitted.values`, I think you find your volatilities – SRKX♦ Nov 14 '11 at 10:04 the problem is that I need to test it "programmatically" so how could I "write": "plot your time series of σ2t. If it is a straight line, the vol. is constant." in R? I can't do a visual check.. Maybe some values in the GARCH's returns help me to understand if it is constant or not? And what about GARCH(3,3) is it the standard? I also see 1,1. What to apply? A lot of thanks! – Dail Nov 14 '11 at 10:05 1 The plots are helpful, but to determine if the GARCH model fits, you should use statistics. Look at the log-likelihood, sum-of-squared-residuals, and information criteria across various specifications to see which fits best. Then perform joint test of the GARCH coefficients. If you fail to reject that all coefficients are jointly zero, then you don't need a GARCH model. – richardh♦ Nov 14 '11 at 14:24 show 4 more comments "How can I understand if the volatility is not constant reading ARCH/GARCH model ": • By analyzing the error terms/residuals. There is not much more magic going on than just this and the following rather introductory level paper should get you started: http://archive.nyu.edu/bitstream/2451/26577/2/FIN-01-030.pdf Garch models essentially add conditional variance terms to the regression equation in order to capture time-varying variance and volatility clustering. Just to tame your excitement a bit, do not expect to extract a whole lot of value from the application of ARCH/GARCH models in terms of predicting volatility clusters or variance dynamics, they generally perform very poorly in that regards. Academicians may get extremely excited when the market confirms their models that S&P 500 index/futures vol at 60 levels does not drop back to 20 levels overnight. At the same time one too many funds and trading desks got killed by an over reliance on GARCH models in their quest to predict volatility dynamics. Volatility in the end of the day trades and reacts to extreme but unpredictable events in a very similar fashion than any other asset class. That is why traditionally most models fail in high volatility environments while they track a lot better in low vol environments, but hey, isn't that a self-fulfilling prophesy? Fact, however, remains that most all models are incapable to predict regime changes which confirms my own basic tenet of how to approach trading and risk management in general: Reactive rather than predictive. Just sharing my own non-quantitative take and summary of market experience. -
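To make the advice above (recreate $\sigma_t^2$ from the fitted parameters and check whether it is flat) concrete, here is a minimal sketch of the GARCH(1,1) variance recursion $\sigma_t^2 = a_0 + a_1 r_{t-1}^2 + b_1 \sigma_{t-1}^2$ together with a crude flatness test. It is written in Python rather than R purely for illustration; the returns and coefficients are invented toy values, not estimates from the MSFT series, and for a real check you would plug in the coefficients reported by `summary(garch_model)` and use a proper test instead of a max-minus-min spread.

````
def garch11_variance(returns, a0, a1, b1):
    # Reconstruct the conditional variance series from GARCH(1,1) coefficients;
    # assumes a1 + b1 < 1 so the unconditional variance exists.
    sigma2 = [a0 / (1.0 - a1 - b1)]          # seed with the unconditional variance
    for r_prev in returns[:-1]:
        sigma2.append(a0 + a1 * r_prev ** 2 + b1 * sigma2[-1])
    return sigma2

def looks_constant(sigma2, tol=0.05):
    # Crude check: is the reconstructed conditional variance nearly flat?
    mean = sum(sigma2) / len(sigma2)
    return (max(sigma2) - min(sigma2)) / mean < tol

returns = [0.01, -0.02, 0.015, -0.03, 0.005, 0.02, -0.01]    # toy return series
sigma2 = garch11_variance(returns, a0=1e-5, a1=0.08, b1=0.90)
print(looks_constant(sigma2))    # False for this toy series: sigma_t^2 is not flat
````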
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9021438360214233, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/87131?sort=votes
## Simplicial presheaves enriched, tensored and cotensored over simplicial sets correctly? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let `$\mathcal{C}$` be a category and `$\mathcal{P}=Functors(\mathcal{C}^{op},\mathcal{S})$` the category of simplicial presheaves, where `$\mathcal{S}=sSet$`. I want `$\mathcal{P}$` to be enriched, tensored, and cotensored over the category of simplicial sets $\mathcal{S}$ in the correct way. So I'll spell this out myself, and then the question will be if I got it right. (I'm pretty sure the answer is yes, but I'm uncertain. A reference where this is spelled out would be very welcome.) First, to any simplicial set `$K \in \mathcal{S}$`, I can associate a constant simplicial presheaf, also denoted `$K$`. Then I can define an action of the symmetrical monoidal category `$sSet$` on `$\mathcal{P}$` by taking the (categorical) product with `$K$` (which is computed levelwise). This, I think, is going to be the tensoring. Next, for any two `$F,G \in \mathcal{P}$`, we want mapping spaces `$Map_{\mathcal{P}}(F,G) \in \mathcal{S}$` so that the `$0$`-simplices are just the morphisms between `$F,G$` in `$\mathcal{P}$`, and so that this is compatible with the tensoring: `$Map_{\mathcal{P}}(K \times F,G) \simeq Map_{\mathcal{S}}(K, Map_{\mathcal{P}}(F,G))$`. In particular, setting $K=\Delta^{n}$, we see that we must have for the $n$-simplices `$Map_{\mathcal{P}}(F,G)_{n}=Hom_{\mathcal{P}}(\Delta^{n} \times F,G)$`. To the categorical product with constant simplicial presheaves and the above mapping spaces should make `$\mathcal{P}$` tensored and enriched over ``$\mathcal{S}$`. Finally, we want $\mathcal{P}$ to be 'cotensored' or 'powered'. In fact, I think more is true. $\mathcal{P}$ should have an internal Hom whose value at `$x \in \mathcal{C}$` is `$\mathcal{Hom}(F,G)(x)=Map_{\mathcal{P}}(F_{| \mathcal{C}/x},G_{| \mathcal{C}/x})$`, and the cotensoring $F^{K}$ should just be $\mathcal{Hom}_{\mathcal{P}}(K,G)$. - I didn't read every line of your post, so I don't want to comment on correctness. However, Hovey has written about these things, as has his student Gillespie. Might be worth googling. Of course, on the homotopy level you should have no problem being tensored, cotensored, and enriched over $SSet$ (see e.g. Spectra and Symmetric Spectra in General model categories). But to get this on the model category level will also require the Pushout Product axiom, which I don't see appearing in your write-up so far. – David White Jan 31 2012 at 18:09 ## 1 Answer Hi, yes you are right. I don't know if you're still interested, but since I took some time to understand what happens here, I might share it. First, $(\textbf{sSet}, \times, \ast)$ is a closed, symmetric monoidal category, where the internal-hom is $\textbf{Map}(F \times \Delta[-],G)$. For any small category $\mathcal{C}$, the category of simplicial presheaves $[\mathcal{C}^{\text{op}}, \textbf{sSet}]$ inherits object-wise the monoidal structure, that is, $(F \times G)(c) := F(c) \times G(c) \in \textbf{sSet}$ and the unit is just the object-wise unit. This is trivially symmetric, and in fact also closed with the object-wise internal-hom $(F^G)(c) := F(c)^{G(c)} = \textbf{Map}(G(c), F(c)) \in \textbf{sSet}$. No tricks, everything is object-wise. Now, there is a fully faithful embedding $\textbf{sSet} \hookrightarrow [\mathcal{C}^{\text{op}}, \textbf{sSet}]$ sending a simplicial set to the constant (on objects) diagram. 
Therefore, there is now a notion of tensor product between a simplicial set $X_{\bullet}$ and a simplicial presheaf $F$, by doing the product in the category of simplicial presheaves after the above embedding, i.e., $(X_{\bullet} \otimes F)(c) := X_{\bullet} \times F(c) \in \textbf{sSet}$, and as you said, this is the tensor. The enriched-hom is now just $\textbf{Map}(F \otimes \Delta[-], G)$, and the cotensor similarly $(F^{X_{\bullet}})(c) := \textbf{Map}(X_{\bullet}, F(c))$. This gives the $\textbf{sSet}$-enrichment of the category of simplicial presheaves, which is tensored and cotensored. So there are two structures on the category of simplicial presheaves: it is first symmetric closed monoidal, and so it can be enriched over itself with a simplicial presheaf as internal-hom; and it is also enriched over simplicial sets, which is just a full subcategory and gives a simplicial mapping space. For the model structures (projective and injective) on the category of simplicial presheaves you can have a look at chapter 3 here. The projective is very good since it is in fact monoidal, simplicial and proper, while the injective is only simplicial and proper. -
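In summary, the tensor, cotensor and simplicial enrichment described above fit into the usual two-variable adjunctions (this is only a compact restatement of the compatibilities asked for in the question, with $K \in \mathbf{sSet}$ and $F, G \in \mathcal{P}$):

$$\mathrm{Hom}_{\mathcal{P}}(K \otimes F, G) \;\cong\; \mathrm{Hom}_{\mathbf{sSet}}(K, \mathrm{Map}_{\mathcal{P}}(F,G)) \;\cong\; \mathrm{Hom}_{\mathcal{P}}(F, G^{K}),$$

and these enrich to natural isomorphisms of simplicial sets:

$$\mathrm{Map}_{\mathcal{P}}(K \otimes F, G) \;\cong\; \mathrm{Map}_{\mathbf{sSet}}(K, \mathrm{Map}_{\mathcal{P}}(F,G)) \;\cong\; \mathrm{Map}_{\mathcal{P}}(F, G^{K}).$$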
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471276998519897, "perplexity_flag": "head"}
http://amathew.wordpress.com/2012/06/04/a-derived-characterization-of-open-immersions/
# Climbing Mount Bourbaki Thoughts on mathematics June 4, 2012 ## A derived characterization of open immersions Posted by Akhil Mathew under algebraic geometry, commutative algebra | Tags: cotangent complex, derived category, etale morphisms, open immersions | [5] Comments I’d like to discuss today a category-theoretic characterization of Zariski open immersions of rings, which I learned from Toen-Vezzosi’s article. Theorem 1 If ${f: A \rightarrow B}$ is a finitely presented morphism of commutative rings, then ${\mathrm{Spec} B \rightarrow \mathrm{Spec} A}$ is an open immersion if and only if the restriction functor ${D^-(B) \rightarrow D^-(A)}$ between derived categories is fully faithful. Toen and Vezzosi use this to define a Zariski open immersion in the derived context, but I’d like to work out carefully what this means in the classical sense. If one has an open immersion ${f: A \rightarrow B}$ (for instance, a localization ${A \rightarrow A_f}$), then the pull-back on derived categories is fully faithful: in other words, the composite of push-forward and pull-back is the identity. To prove the converse, suppose ${f}$ is finitely presented and ${D^-(B) \rightarrow D^-(A)}$ is fully faithful. There is an adjunction: $\displaystyle f^*, f_*: D^-(A) \rightleftarrows D^-(B)$ where ${f^* = \stackrel{\mathbb{L}}{\otimes}_A B}$ and ${f_*}$ is restriction. We are assuming that ${f_*}$ is fully faithful. By general nonsense, this implies that the adjunction maps $\displaystyle f^* f_* \rightarrow \mathrm{Id}$ are isomorphisms in ${D^-(B)}$. That is, for any complex ${C_\bullet \in D^-(B)}$, one has $\displaystyle C_\bullet \simeq C_\bullet \stackrel{\mathbb{L}}{\otimes}_A B.$ In particular, one has $\displaystyle B \stackrel{\mathbb{L}}{\otimes}_A B \simeq B.$ Taking homology in degree zero, this gives ${B \otimes_A B \simeq B}$. Geometrically, if we write ${X = \mathrm{Spec} B }$ and ${Y = \mathrm{Spec} A}$, then this is saying that the map $\displaystyle X \rightarrow X \times_Y X$ is an isomorphism: that is, ${X \rightarrow Y}$ is a monomorphism in the category of schemes. But one has a little more. We can actually show that ${X \rightarrow Y}$ (or, equivalently, ${A \rightarrow B}$) is étale, and now a general result of Grothendieck tells us that an étale radicial morphism (e.g., an étale monomorphism) is an open immersion. How can we check étaleness? We have to show that the cotangent complex vanishes, i.e. $\displaystyle L_{B/A} \simeq 0.$ Since ${B \stackrel{\mathbb{L}}{\otimes}_A}$ acts as the identity on ${D^-(B)}$, it equates to showing that $\displaystyle L_{B/A}\stackrel{\mathbb{L}}{\otimes}_A B \simeq 0.$ But this in turn is ${L_{B \stackrel{\mathbb{L}}{\otimes}_A B/B}}$ because the formation of the cotangent complex is compatible with derived base-change: that is, we should consider ${B \stackrel{\mathbb{L}}{\otimes} B}$ as a derived (e.g., simplicial) commutative ring and take its cotangent complex with respect to ${B}$. However, since ${B \stackrel{\mathbb{L}}{\otimes}_A B \simeq B}$, we conclude that ${L_{B/A} \stackrel{\mathbb{L}}{\otimes}_A B \simeq L_{B/A} \simeq 0}$. In other words, ${A \rightarrow B}$ is étale. This completes the proof. ### 5 Responses to “A derived characterization of open immersions” 1. hyh Says: June 4, 2012 at 8:22 pm Nice post. The assumption just gives that the homotopy push out of $B\leftarrow A\rightarrow B$ is equal to $B$, so the cotangent complex vanish, right? 
### 5 Responses to "A derived characterization of open immersions"

1. **hyh** Says: June 4, 2012 at 8:22 pm

   Nice post. The assumption just gives that the homotopy push-out of $B\leftarrow A\rightarrow B$ is equal to $B$, so the cotangent complex vanishes, right?

   BTW, I'm a little concerned about the ambiguity of $\otimes^{L}$: from the beginning they are derived tensor products of modules, and later they become total derived tensor products of $A$-algebras! I'm not 100% sure I get it right. But this tacit change of meaning may confuse people. I don't have a better suggestion for notation though.

2. **hyh** Says: June 4, 2012 at 8:24 pm

   Sorry, repost, ignore the first one.

   Nice post. The assumption just gives that the homotopy push-out of $B\leftarrow A\rightarrow B$ is equal to $B$, so the cotangent complex vanishes, right?

   BTW, I'm a little concerned about the ambiguity of $\otimes^{L}$: from the beginning they are derived tensor products of $A$-modules, and later they become total derived tensor products of $A$-algebras! I'm not 100% sure I get it right. But this tacit change of meaning may confuse people. I don't have a better suggestion for notation though.

   1. **Akhil Mathew** Says: June 4, 2012 at 8:44 pm

      Yes, more or less: I think another way of saying this is that there aren't many "homotopy monomorphisms" in (affine) derived schemes. (A map $X\to Y$ is a homotopy monomorphism if it satisfies the following derived notion of being a monomorphism: $X \to X \times_Y X$ (the latter being the homotopy fibered product) is a weak equivalence.) By contrast, there are lots of ordinary monomorphisms in schemes (e.g. closed immersions).

      Regarding your concern, the point is that the forgetful functor from (derived) $A$-algebras to (derived) $A$-modules sends tensor products to tensor products. In the algebra language you can compute $B \stackrel{\mathbb{L}}{\otimes}_A B$ by taking a simplicial resolution $P_\bullet \to B$ of $A$-algebras (where $P_\bullet$ is a cofibrant simplicial $A$-algebra) and the answer is $P_\bullet \otimes_A B$. But a simplicial cofibrant resolution of $B$ in algebras corresponds (via Dold-Kan or the Moore complex functor) to a projective resolution of $B$ as a chain complex. So it's really the derived tensor product of modules as well as algebras. I've been rather sloppy with notation because the foundations are anyway so varied: simplicial rings, dg rings (in char. 0), and a plethora of models of spectra.

3. **hyh** Says: June 5, 2012 at 1:37 pm

   BTW, is there a similar characterization of closed immersions? (Also I suppose this characterization works for schemes?)

   1. **Akhil Mathew** Says: June 5, 2012 at 2:06 pm

      Not to my knowledge, though there is a nice functorial criterion: a proper monomorphism is a closed immersion. You can check properness via the valuative criterion plus a direct limit argument to check locally of finite presentation (i.e., whether homming into them commutes with direct limits of rings). This should work for schemes as well, though I haven't checked it (there would presumably be some subtleties, e.g. in which derived category to use, etc.).
http://en.wikibooks.org/wiki/Waves/Thin_Films
# Waves/Thin Films

### Thin Films

Figure 1.14: Plane light wave normally incident on a transparent thin film of thickness $d$ and index of refraction $n > 1$. Partial reflection occurs at the front surface of the film, resulting in beam A, and at the rear surface, resulting in beam B. Much of the wave passes completely through the film, as with C.

One of the most revealing examples of interference occurs when light interacts with a thin film of transparent material such as a soap bubble. Figure 1.14 shows how a plane wave normally incident on the film is partially reflected by the front and rear surfaces. The waves reflected off the front and rear surfaces of the film interfere with each other. The interference can be either constructive or destructive depending on the phase difference between the two reflected waves.

If the wavelength of the incoming wave is $\lambda$, one would naively expect constructive interference to occur between the A and B beams if $2d$ were an integral multiple of $\lambda$.

Two factors complicate this picture. First, the wavelength inside the film is not $\lambda$, but $\lambda /n$, where $n$ is the index of refraction of the film. Constructive interference would then occur if $2d = m \lambda /n$. Second, it turns out that an additional phase shift of half a wavelength occurs upon reflection when the wave is incident on material with a higher index of refraction than the medium in which the incident beam is immersed. This phase shift doesn't occur when light is reflected from a region with lower index of refraction than felt by the incident beam. Thus beam B doesn't acquire any additional phase shift upon reflection. As a consequence, constructive interference actually occurs when

$2d = (m + 1/2) \lambda /n , ~~ m = 0,1,2, \ldots \quad \mbox{(constructive interference)}$ (2.23)

while destructive interference results when

$2d = m \lambda /n , ~~ m = 0,1,2, \ldots \quad \mbox{(destructive interference)} .$ (2.24)

When we look at a soap bubble, we see bands of colors reflected back from a light source. What is the origin of these bands? Light from ordinary sources is generally a mixture of wavelengths ranging from roughly $\lambda = 4.5 \times 10^{-7} \mbox{ m}$ (violet light) to $\lambda = 6.5 \times 10^{-7} \mbox{ m}$ (red light). In between violet and red we also have blue, green, and yellow light, in that order. Because of the different wavelengths associated with different colors, it is clear that for a mixed light source we will have some colors interfering constructively while others interfere destructively. Those undergoing constructive interference will be visible in reflection, while those undergoing destructive interference will not.

Another factor enters as well. If the light is not normally incident on the film, the difference in the distances traveled between beams reflected off of the front and rear faces of the film will not be just twice the thickness of the film. To understand this case quantitatively, we need the concept of refraction, which will be developed later in the context of geometrical optics. However, it should be clear that different wavelengths will undergo constructive interference for different angles of incidence of the incoming light.
Different portions of the thin film will in general be viewed at different angles, and will therefore exhibit different colors under reflection, resulting in the colorful patterns normally seen in soap bubbles.
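As a quick numerical illustration (this example is not from the text; the soap-film index of refraction and the wavelength are assumed typical values), equation (2.23) gives the film thicknesses that reflect a given wavelength constructively:

```python
# Illustrative calculation (not part of the original text): thicknesses of a
# soap film (n = 1.33, an assumed typical value) that reflect green light
# (lambda = 5.4e-7 m) constructively, using 2*d = (m + 1/2)*lambda/n.

lam = 5.4e-7   # vacuum wavelength of green light, in metres
n = 1.33       # index of refraction of the film (assumed)

for m in range(3):
    d = (m + 0.5) * lam / (2 * n)
    print(f"m = {m}: d = {d * 1e9:.0f} nm")

# Expected output (approximately):
# m = 0: d = 102 nm
# m = 1: d = 305 nm
# m = 2: d = 508 nm
```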
http://mathhelpforum.com/calculus/102461-how-do-i-get-these-equations-into-form-quadric-surface-equation.html
# Thread:

1. ## How do I get these equations into the form of a quadric surface equation?

A question in my text asks to classify the surfaces by comparing them with the standard equations of quadric surfaces. My problem is I can't figure out how to get these equations into the standard form of quadric surface equations. Can someone please tell me the answer to this and explain how to do it? Thanks in advance.

surface 1: $x^2+2y^2-z^2+3x=1$

surface 2: $2x^2+4y^2-2z^2-5y=0$

2. Rewrite the first equation as $x^2 + 3x + 2y^2 - z^2 = 1$ and complete the square:

$(x+3/2)^2 + 2y^2 - z^2 = 13/4,$

which is a hyperboloid of one sheet. Recall that to complete the square, $ax^2 + bx = a\left(x + \frac{b}{2a}\right)^2 - \frac{b^2}{4a}$.

In the second problem, complete the square in $y$.
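For completeness (this working is not part of the original thread), carrying out the same steps on the second surface: since $4y^2 - 5y = 4\left(y-\tfrac{5}{8}\right)^2 - \tfrac{25}{16}$, the equation $2x^2+4y^2-2z^2-5y=0$ becomes

$2x^2 + 4\left(y - \tfrac{5}{8}\right)^2 - 2z^2 = \tfrac{25}{16}.$

Again two positive squared terms and one negative one equal a positive constant, so this is also a hyperboloid of one sheet, centered at $(0, 5/8, 0)$.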
http://mathhelpforum.com/advanced-algebra/18497-list-two-different-matrices-s-t-2-i-print.html
# List Two Different Matrices A s.t. A^2=I

Printable View

• September 4th 2007, 04:35 PM
Fourier

**List Two Different Matrices A s.t. A^2=I**

Hello, I am trying to determine two different matrices $A$ such that $A^2=\Bigg[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\Bigg]$. I am trying to solve this algebraically. Here is what I have attempted: http://img207.imageshack.us/img207/7...4194511xi6.jpg

• September 4th 2007, 06:39 PM
JakeD

Quote: Originally Posted by Fourier
"Hello, I am trying to determine two different matrices $A$ such that $A^2=\Bigg[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\Bigg]$. I am trying to solve this algebraically. Here is what I have attempted: http://img207.imageshack.us/img207/7...4194511xi6.jpg"

Set $b = c = 0$. Then $a^2 = d^2 = 1$. So $I$ and $-I$ fall out as 2 of 4 solutions.

• September 4th 2007, 07:19 PM
CaptainBlack

Quote: Originally Posted by Fourier (the original post, quoted in full)

Set $a = d = 0$, then $bc=1$ so:

$A^2= \left[ \begin{array}{cc}0&x\\1/x&0 \end{array} \right]^2=I_{2\times 2}$

RonL

• September 5th 2007, 05:10 AM
Soroban

Hello, Fourier! You're off to a good start . . .

Quote:
$\begin{bmatrix}a & b\\c &d\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}\;=\;\begin{bmatrix}1&0\\0&1\end{bmatrix}$

$\begin{array}{ccc}(1)\;\;a^2 + bc \:=\:1 &\qquad & (2)\;\;ab+bd \:=\:0 \\ (3)\;\;ac + cd \:=\:0 & \qquad & (4)\;\;bc+d^2\:=\:1\end{array}$

We have:

$\begin{array}{cccc}(2)\;\;ab+bd\:=\:0 & \Rightarrow & b(a+d)\:=\:0 & (5) \\ (3)\;\;ac + cd \:=\:0 & \Rightarrow & c(a+d)\:=\:0 & (6)\end{array}$

Subtract:

$\begin{array}{c}(1)\;\;a^2+bc \:=\:1 \\ (4)\;\;bc+d^2\:=\:1\end{array} \quad\Rightarrow\quad a^2-d^2\:=\:0\quad\Rightarrow\quad d \,=\,\pm a$

If $a\!\cdot\!d \neq 0$, then (5) and (6) give us: $b = c = 0$

And (1) and (4) give us: $a^2 \,= \,1,\;d^2\,=\,1\quad\Rightarrow\quad a\,=\,\pm1,\;d\,=\,\pm1$

Two solutions: $\begin{bmatrix}1 &0 \\0&1\end{bmatrix}$ and $\begin{bmatrix}\text{-}1 & 0\\0&\text{-}1\end{bmatrix}$

If $a = d = 0$, then (1) gives us: $bc \,=\,1\quad\Rightarrow\quad c \,=\,\frac{1}{b}$

More solutions: $\begin{bmatrix}0 & b \\ \frac{1}{b} & 0\end{bmatrix}$ for $b \neq 0$, obviously.

• September 5th 2007, 06:02 AM
JakeD

Quote: Originally Posted by Soroban (the solution above, quoted in full)
More solutions are $\begin{bmatrix}\text{-}1 &0 \\0&1\end{bmatrix}$ and $\begin{bmatrix}1 & 0\\0&\text{-}1\end{bmatrix}$ and $\begin{bmatrix}a & b \\ \frac{1-a^2}{b} & \text{-}a\end{bmatrix}$ and $\begin{bmatrix}a & \frac{1-a^2}{b} \\ b & \text{-}a\end{bmatrix}$ for $b \neq 0.$
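As an aside (this check is not part of the thread), the general families proposed above can be verified symbolically, for instance with sympy:

```python
# Symbolic check (not from the thread) that the proposed 2x2 families square
# to the identity matrix. Requires sympy.
import sympy as sp

a, b = sp.symbols('a b', nonzero=True)

candidates = [
    sp.Matrix([[0, b], [1/b, 0]]),            # the off-diagonal family
    sp.Matrix([[a, b], [(1 - a**2)/b, -a]]),  # first general family above
    sp.Matrix([[a, (1 - a**2)/b], [b, -a]]),  # second general family above
]

for M in candidates:
    print(sp.simplify(M**2))  # each should print Matrix([[1, 0], [0, 1]])
```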
http://mathhelpforum.com/calculus/64105-help-volume-solid-w-uniform-cross-sections-parabola.html
# Thread:

1. ## HELP!! Volume of a solid w/ uniform cross sections of a parabola

I didn't know how to explain the problem in words so I made a scan of it.

Edit: Any help at all will be very appreciated.

2. Originally Posted by sj9110
"I need to find the volume of a parabola a-x^2 where a is 0 to 10 and it's also bounded by y=x or something. I don't know. Clearly I am very confused. I have a drawing of what the solid should look like. I didn't know how to explain the problem in words so I made a scan of it. Any help at all will be very appreciated."

Recall that the volume between $x = a$ and $x = b$ ( $a \le b$) is given by $V = \int_a^b A(x)~dx$, where $A(x)$ is the formula for the cross-sectional area. In this case, our cross-sections are parabolas of the form $y = a - x^2$. But what is the area of this parabola that fits your constraints? Recall that the area "under" the curve is given by the integral. So graph the curve $y = a - x^2$ for arbitrary $a$. Find the intercepts and all that. Now find the area under this curve. We see the area will be given by:

$A = \int_{-\sqrt{a}}^{\sqrt{a}} (a - x^2)~dx = 2 \int_0^{\sqrt{a}}(a - x^2)~dx.$

Now, when you get the integral, replace the $a$ with $x$, since in this case $a = x$ (the parabola is bounded by $y = x$, so the height is given by $x$, so that $a = x$ since $a$ is the height of our parabola above the x-axis). Then go to the volume formula I gave you above to find the answer. $x$ ranges from 0 up to 10.

3. ## Solid with parabolic cross-section

Hi - I attach a drawing of the problem as I understand it, with three axes: $Ox$, $Oy$ and $Oz$. The values of $x$ go from $0$ to $10$. At $x = a$, the parabola (which lies in a plane parallel to the $y-z$ plane) has equation $z=a-y^2$. This parabola will have values of $y$ from $-\sqrt a$ to $\sqrt a$, and values of $z$ from $0$ to $a$.

What you need to do, then, to find the volume of the solid enclosed by all these parabolas, the $x-y$ plane and the plane $x = 10$, is:

• Find the area $A$ of the typical parabola shown. (Do this in the usual way with an integral, whose limits are $y=-\sqrt a$ to $y=\sqrt a$.)
• Replace $a$ by $x$ in your formula for $A$.
• Now imagine $x$ increasing by an amount $\delta x$. As it does so, the volume 'swept out' is approximately $A\delta x$. So the total volume will be $\int_0^{10} A dx$. So, replace $A$ by your formula in terms of $x$, and then work out the integral.

Have I given you enough to go on? Let me know if you want me to check your working.

Grandad

Attached Thumbnails

4. Originally Posted by Grandad (the post above, quoted in full)
This is what I mean: I answered this problem in the other thread, giving the same solution (though I must admit, Grandad was a bit more elegant, using 3 dimensions and giving a nice diagram to boot, but it was still the same solution).
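For reference, here is the computation carried through to the end (this completion is mine, not part of the thread), following Grandad's setup with cross-sections $z = a - y^2$:

$\displaystyle A(a) = \int_{-\sqrt a}^{\sqrt a} (a - y^2)\,dy = 2\left(a\sqrt a - \frac{a^{3/2}}{3}\right) = \frac{4}{3}a^{3/2},$

so, replacing $a$ by $x$ and integrating,

$\displaystyle V = \int_0^{10} \frac{4}{3}x^{3/2}\,dx = \frac{4}{3}\cdot\frac{2}{5}\,10^{5/2} = \frac{8}{15}\,10^{5/2} = \frac{160\sqrt{10}}{3} \approx 168.7.$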
http://mathhelpforum.com/discrete-math/37641-how-many-ways-make-change-100-dollars.html
# Thread:

1. ## how many ways to make change for 100 dollars?

Using 10 and 20 dollar bills? It's a generating function problem; for the life of me I don't know what to do.

2. The answer is the coefficient of $x^{100}$ in the expansion of $\left( {\sum\limits_{k = 0}^{10} {x^{10k} } } \right)\left( {\sum\limits_{k = 0}^5 {x^{20k} } } \right)$. Can you finish?

3. Hallo !

Originally Posted by Plato
"The answer is the coefficient of $x^{100}$ in the expansion of $\left( {\sum\limits_{k = 0}^{10} {x^{10k} } } \right)\left( {\sum\limits_{k = 0}^5 {x^{20k} } } \right)$. Can you finish?"

How do you get it?

4. Originally Posted by Moo
"How do you get it?"

Why not allow p00ndawg to answer that question for us?

5. Originally Posted by Plato
"Why not allow p00ndawg to answer that question for us?"

Let's wait for it then. Or if you could provide the start of an answer by PM... if it doesn't bother you :x

6. Originally Posted by p00ndawg
"Using 10 and 20 dollar bills? It's a generating function problem; for the life of me I don't know what to do."

Maybe this isn't really the way you're supposed to do it, but can't we just reason that, when making change for \$100, we can use either 0, 1, 2, 3, 4, or 5 \$20 bills, with the rest being \$10 bills, so there are 6 ways? (Specifically, for (\$10, \$20): (10, 0), (8, 1), (6, 2), (4, 3), (2, 4), and (0, 5))

7. Originally Posted by Mathnasium
"Maybe this isn't really the way you're supposed to do it, but can't we just reason that, when making change for \$100, we can use either 0, 1, 2, 3, 4, or 5 \$20 bills, with the rest being \$10 bills, so there are 6 ways? (Specifically, for (\$10, \$20): (10, 0), (8, 1), (6, 2), (4, 3), (2, 4), and (1, 5))"

Rethink that (1 ; 5) one. (Equals \$110)

8. Originally Posted by Mathnasium (post 6 above, quoted in full)

Thanks for all the help; this question was actually on my final and I ended up doing what you say, because when I took the test earlier this morning I just couldn't figure out any other way to do it. But I just wanted to actually know the mathematical way to do it. Mr. Plato gave the way I'm assuming my teacher wanted it, but I totally did not expect that question. I think the answer I got was right (6), but I'm not sure if he'll approve of my methods lol. So I think I may have gotten it right.

9. Originally Posted by p00ndawg (post 8 above, quoted in full)

10. Originally Posted by janvdl

Yea it was on my final, I just wanted to see how to do it. After I took the test that was really the only problem that bugged me. Now I just got to read up on cal 2 for tomorrow.

11. Originally Posted by p00ndawg
"the way I'm assuming my teacher wanted it, but I totally did not expect that question, I think the answer I got was right (6), but I'm not sure if he'll approve of my methods."

It could be that the instructor expected the exact answer the way I gave it.
I have asked students to set up such problems just to check on understanding. Unless one spends a good deal of time on how coefficients are calculated from generating functions, I don't think we can expect much more than that. There is in fact a whole textbook on generating functions; it could be a whole course.

12. Originally Posted by Plato (post 11 above, quoted in full)

Yea, he might have, and I did indeed go through the whole coefficient process of figuring it out. He is kind of lenient at times, and I'm hoping he'll give it to me because the test was fairly rough. There was one other curveball question he put on the test that I didn't even THINK of reviewing, my fault of course; I couldn't even start the problem lol. Anyways thanks for the quick reply.
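As a quick cross-check of Plato's generating-function setup (this code is not from the thread), one can expand the product and read off the coefficient of $x^{100}$, which indeed comes out to 6:

```python
# Count ways to make change for 100 dollars from 10- and 20-dollar bills by
# expanding (sum_{k=0}^{10} x^(10k)) * (sum_{k=0}^{5} x^(20k)) and reading off
# the coefficient of x^100.  (Not from the thread; a sanity check.)

def poly_mult(p, q):
    """Multiply two polynomials given as dicts {exponent: coefficient}."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return out

tens = {10 * k: 1 for k in range(11)}      # 0..10 ten-dollar bills
twenties = {20 * k: 1 for k in range(6)}   # 0..5 twenty-dollar bills

product = poly_mult(tens, twenties)
print(product[100])   # -> 6
```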
http://mathhelpforum.com/pre-calculus/143025-circle.html
# Thread:

1. ## circle

Let C be the circle in R² having the point (h, k) and (0, 1) as diameter. Prove that this circle intersects the x-axis if and only if h²-4k≥0, and in this case the two intercepts are the roots of the equation x²-hx+k=0.

2. First, it doesn't make sense. I am guessing the diameter is the line segment between (h,k) and (0,1). Second, isn't this just an algebra problem?

Center of Circle: $\left(\frac{h}{2},\frac{k+1}{2}\right)$

Radius of Circle: $\frac{1}{2}\sqrt{(h-0)^{2}+(k-1)^{2}}$

Where's the tricky part?

3. ## confused

I can't even understand the question. Is it that the question says to find the radius?

4. Originally Posted by apple2009
"Let C be the circle in R² having the point (h, k) and (0, 1) as diameter."

As TKHunny says, two "points" do not form a diameter; you mean the line segment having those points as endpoints.

Quote: "Prove that this circle intersects the x-axis if and only if h²-4k≥0 and in this case the two intercepts are the roots of the equation x²-hx+k=0."

The center of a circle is the midpoint of any diameter, so this circle has $\left(\frac{h+0}{2}, \frac{k+1}{2}\right)= \left(\frac{h}{2}, \frac{k+1}{2}\right)$ as center. The radius of a circle is the distance from the center to any point on the circumference. Since $\left(\frac{h}{2}, \frac{k+1}{2}\right)$ is the center and (0, 1) is on the circumference, the radius is $\sqrt{\frac{h^2}{4}+ \frac{(k-1)^2}{4}}$.

The equation of a circle with center at (a, b) and radius r is $(x- a)^2+ (y- b)^2= r^2$, so the equation of this circle is $\left(x- \frac{h}{2}\right)^2+ \left(y- \frac{k+1}{2}\right)^2= \frac{h^2+ (k-1)^2}{4}$. The first thing we can do is multiply both sides of that by 4, leaving $(2x- h)^2+ (2y- (k+1))^2= h^2+ (k-1)^2$.

The circle "intersects the x-axis" where y = 0. Put y = 0 into that equation and solve for x. That will be a quadratic equation, so it will have real solutions if and only if its discriminant is nonnegative (and two distinct solutions exactly when it is positive). (The "discriminant" of the quadratic equation $ax^2+ bx+ c= 0$ is $b^2- 4ac$.)

5. ## thanks

Thank you very much.

6. There is a circle equation formed from the 2 end points of a diameter, but it does not seem to be regularly taught.
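To finish the computation outlined in post 4 (this completion is not part of the original thread): setting $y = 0$ in $(2x-h)^2 + (2y-(k+1))^2 = h^2 + (k-1)^2$ gives

$(2x-h)^2 + (k+1)^2 = h^2 + (k-1)^2 \quad\Longrightarrow\quad 4x^2 - 4hx + 4k = 0 \quad\Longrightarrow\quad x^2 - hx + k = 0,$

using $(k+1)^2 - (k-1)^2 = 4k$. Real intersection points therefore exist exactly when the discriminant $h^2 - 4k$ is nonnegative, and the intercepts are the roots of $x^2 - hx + k = 0$, as required. The equation alluded to in post 6, built directly from the endpoints of a diameter, would give the same thing even faster: a point $(x,y)$ lies on the circle if and only if $(x-h)(x-0) + (y-k)(y-1) = 0$, and putting $y = 0$ yields $x^2 - hx + k = 0$ at once.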
http://slawekk.wordpress.com/2007/10/19/compact-sets-and-the-axiom-of-choice/
# Formalized Mathematics

Just another WordPress.com weblog

## Compact sets and the Axiom of Choice

The first iteration of the tool for extracting a blog-postable view from Isabelle/ZF source is ready. The post below is the result of processing the Topology_ZF_1b.thy theory file, to be included in the next release of IsarMathLib. The tool just does a bunch of context-sensitive text substitutions. The next iteration will hopefully do something closer to real parsing of the Isar source.

```
theory Topology_ZF_1b imports Topology_ZF_1

begin
```

One of the facts demonstrated in every class on General Topology is that in a $T_2$ (Hausdorff) topological space compact sets are closed. Formalizing the proof of this fact gave me an interesting insight into the role of the Axiom of Choice (AC) in many informal proofs.

A typical informal proof of this fact goes like this: we want to show that the complement of $K$ is open. To do this, choose an arbitrary point $y\in K^c$. Since $X$ is $T_2$, for every point $x\in K$ we can find an open set $U_x$ such that $y\notin \overline{U_x}$. Obviously $\{U_x\}_{x\in K}$ covers $K$, so select a finite subcollection that covers $K$, and so on. I had never realized that such reasoning requires the Axiom of Choice. Namely, suppose we have a lemma that states "In $T_2$ spaces, if $x\neq y$, then there is an open set $U$ such that $x\in U$ and $y\notin \overline{U}$" (like our lemma T2_cl_open_sep below). This only states that the set of such open sets $U$ is not empty. To get the collection $\{U_x \}_{x\in K}$ in this proof we have to select one such set among many for every $x\in K$, and this is where we use the Axiom of Choice. Probably in 99/100 cases when an informal calculus proof states something like $\forall \varepsilon \exists \delta_\varepsilon \cdots$ the proof uses AC. Most of the time the use of AC in such proofs can be avoided. This is also the case for the fact that in a $T_2$ space compact sets are closed.

### Compact sets are closed – no need for AC

In this section we show that in a $T_2$ topological space compact sets are closed. First we prove a lemma that in a $T_2$ space two points can be separated by the closure of an open set.

```
lemma (in topology0) T2_cl_open_sep:
  assumes A1: $T \text{ is}\ T_2$ and A2: $x \in \bigcup T$  $y \in \bigcup T$  $x\neq y$
  shows $\exists U\in T.\ (x\in U \wedge y \notin \text{cl}(U))$
proof -
  from A1 A2 have $\exists U\in T.\ \exists V\in T.\ x\in U \wedge y\in V \wedge U\cap V=0$
    using isT2_def
  then obtain U V where $U\in T$  $V\in T$  $x\in U$  $y\in V$  $U\cap V=0$
  then have $U\in T \wedge x\in U \wedge y\in V \wedge \text{cl}(U) \cap V = 0$
    using open_disj_cl_disj
  thus $\exists U\in T.\ (x\in U \wedge y \notin \text{cl}(U))$
qed
```

AC-free proof that in a Hausdorff space compact sets are closed. To understand the notation recall that in Isabelle/ZF Pow(A) is the powerset (the set of subsets) of $A$ and Fin(A) is the set of finite subsets of $A$.
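(An informal restatement, added here for readability, of how the formal proof below avoids the choice; it is implicit in the Isar text.) Fix $y\in K^c$. Instead of choosing one separating open set $U_x$ for each $x\in K$, take all of them at once:

$\displaystyle \mathcal{B} \;=\; \bigcup_{x\in K}\ \{\,V\in T \;:\; x\in V \ \text{ and } \ y\notin \overline{V}\,\}.$

By the lemma above each of the collections in this union is nonempty, so $K\subseteq\bigcup\mathcal{B}$ holds without selecting anything. Compactness gives a finite $N\subseteq\mathcal{B}$ with $K\subseteq\bigcup N$, and then $X\setminus\bigcup_{V\in N}\overline{V}$ is an open set containing $y$ and disjoint from $K$.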
```
theorem (in topology0) in_t2_compact_is_cl:
  assumes A1: $T \text{ is}\ T_2$ and A2: $K \text{ is compact in}\ T$
  shows $K \text{ is closed in}\ T$
proof -
  let X = $\bigcup T$
  have $\forall y \in X - K.\ \exists U\in T.\ y\in U \wedge U \subseteq X - K$
  proof -
    { fix y assume A3: $y \in X$  $y\notin K$
      have $\exists U\in T.\ y\in U \wedge U \subseteq X - K$
      proof -
        let B = $\bigcup x\in K.\ \{V\in T.\ x\in V \wedge y \notin \text{cl}(V)\}$
        have I: $B \in \text{Pow}(T)$  $\text{Fin}(B) \subseteq \text{Pow}(B)$
          using Fin.dom_subset
        from A2 A3 have $\forall x\in K.\ x \in X \wedge y \in X \wedge x\neq y$
          using IsCompact_def
        with A1 have $\forall x\in K.\ \{V\in T.\ x\in V \wedge y \notin \text{cl}(V)\} \neq 0$
          using T2_cl_open_sep
        hence $K \subseteq \bigcup B$
        with A2 I have $\exists N \in \text{Fin}(B).\ K \subseteq \bigcup N$
          using IsCompact_def
        then obtain N where D1: $N \in \text{Fin}(B)$  $K \subseteq \bigcup N$
        with I have $N \subseteq B$
        hence II: $\forall V\in N.\ V\in B$
        let M = $\{\text{cl}(V).\ V\in N\}$
        let C = $\{D \in \text{Pow}(X).\ D \text{ is closed in}\ T\}$
        from topSpaceAssum have
          $\forall V\in B.\ (\text{cl}(V) \text{ is closed in}\ T)$  $\forall V\in B.\ (\text{cl}(V) \in \text{Pow}(X))$
          using IsATopology_def cl_is_closed IsClosed_def
        hence $\forall V\in B.\ \text{cl}(V) \in C$
        moreover from D1 have $N \in \text{Fin}(B)$
        ultimately have $M \in \text{Fin}(C)$ by (rule fin_image_fin)
        then have $X - \bigcup M \in T$ using Top_3_L6 IsClosed_def
        moreover from A3 II have $y \in X - \bigcup M$
        moreover have $X - \bigcup M \subseteq X - K$
        proof -
          from II have $\bigcup N \subseteq \bigcup M$ using cl_contains_set
          with D1 show $X - \bigcup M \subseteq X - K$
        qed
        ultimately have $\exists U.\ U\in T \wedge y \in U \wedge U \subseteq X - K$
        thus $\exists U\in T.\ y\in U \wedge U \subseteq X - K$
      qed }
    thus $\forall y \in X - K.\ \exists U\in T.\ y\in U \wedge U \subseteq X - K$
  qed
  with A2 show $K \text{ is closed in}\ T$
    using open_neigh_open IsCompact_def IsClosed_def
qed
```

```
end
```

This blog post has been generated from IsarMathLib's Topology_ZF_1b.thy theory file, see the relevant pages of the IsarMathLib proof document. See also TiddlyWiki rendering of this post.