http://www.physicsforums.com/library.php?do=view_item&itemid=9
# Perfect numbers

Definition/Summary

A perfect number is a number which is the sum of its proper divisors (equivalently, half the sum of all of its divisors). Every even perfect number is a Mersenne prime times a power of two; odd perfect numbers are not known to exist.

Equations

Sum-of-divisors function: $$\sigma(n)=\sum_{k|n}k$$ $$\sigma(p^aq^b)=\sigma(p^a)\sigma(q^b)\;\;(p,q\text{ relatively prime})$$ $$\sigma(p^a)=\frac{p^{a+1}-1}{p-1}$$ Definition of $N$ perfect: $$2N=\sigma(N)$$ Form of an even perfect number: $$N=M_p(M_p+1)/2=2^{p-1}(2^p-1)$$ where $M_p=2^p-1$ is a Mersenne prime.

Extended explanation

The first two perfect numbers are 6 = 1 + 2 + 3 = $2^{2-1} (2^2-1)$ and 28 = 1 + 2 + 4 + 7 + 14 = $2^{3-1} (2^3-1)$. The next two are 496 = $2^{5-1} (2^5-1)$ and 8128 = $2^{7-1} (2^7-1)$.

Commentary

seyalbert @ 01:16 PM Apr10-11: Does someone know a formula for adding perfect numbers?
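The definitions above can be checked directly by brute force; here is a minimal Python sketch (the function names are mine, not from the entry):

```python
def sigma(n):
    """Sum-of-divisors function: sigma(n) = sum of k over all k dividing n."""
    return sum(k for k in range(1, n + 1) if n % k == 0)

def is_perfect(n):
    """N is perfect exactly when 2N = sigma(N)."""
    return 2 * n == sigma(n)

def even_perfect(p):
    """Even perfect number built from the Mersenne prime M_p = 2^p - 1."""
    m_p = 2 ** p - 1              # assumed prime; the formula needs this
    return m_p * (m_p + 1) // 2   # equals 2^(p-1) (2^p - 1)
```

For p = 2, 3, 5, 7 this reproduces 6, 28, 496, and 8128; when 2^p - 1 is composite (e.g. p = 11), the value returned is not perfect.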
http://math.stackexchange.com/questions/140718/topological-properties-of-a-3-torus?answertab=active
# Topological Properties of a 3-Torus A 3-torus can be constructed by starting with a cube and then conceptually joining the top and bottom, the right and left, and the front and back. In such a space, an object which travels out to the left hand side of the cube instantly reappears at the right hand side (and similarly for the top-bottom and the front-back). Therefore, standing inside the center of a 3-torus, we could look to the right (or left) and see the back of our own head. We could also look straight up (or down), or straight out the front (or back) of the cube and see ourselves. Is the conceptualization of a 3-torus described above considered to be topologically equivalent to an actual 3-torus that is obtained by embedding the cube in 3-dimensional space or higher, and physically gluing the three pairs of opposite faces of the cube? Now, you construct a string-like path in three dimensions made up of three double-vortex figures which start at the center of a cube (with its opposite sides conceptually joined together), spiral out to cover two faces of the cube (the top and left rear, the right front and right rear, and the left front and bottom), and come back to the center. The purpose of constructing this particular path is to geometrically model a type of flower called a calla lily. This path has a 1.5 turn spiral vortex directly above the top face of the cube that is exactly the mirror image of a 1.5 turn spiral vortex directly below the bottom face. The two vortices are conceptually joined together. Similarly, the 1.5 turn vortices from the left rear and right front are conceptually joined together, and the vortices from the right rear and left front are also conceptually joined together. 
If this string-like path is arrayed around a cube conceptualized as a 3-torus, and the vortices projected out from the cube are conceptually joined together in similar fashion to the top-bottom, right-left, and front-back face pairs, would such a path be considered topologically equivalent to an actual 3-torus as described above? - What do you mean by "actual $3$-torus"? – Qiaochu Yuan May 4 '12 at 6:17

## 1 Answer

The answer to your first question is yes. A 3-torus may be described in several ways, all of which are topologically equivalent. For example, here is an informal description of three of the most common: 1. What you call a conceptualization of a 3-torus is a type of topological space called a quotient space. Informally speaking, the 3-torus is obtained from the unit cube by identifying the opposite faces together in the manner you describe. We may in fact describe the 3-torus as a quotient space of 3-dimensional space $\mathbb{R}^{3}$. Roughly speaking, you can imagine $\mathbb{R}^3$ as made up of unit cubes, like a stack of boxes filling up all of space. Then we obtain the 3-torus by identifying opposite faces together on each cube, while at the same time identifying equally designated faces of different cubes with each other. Note that in this description of the 3-torus as a quotient space we are not physically performing any gluing. 2. Another description of the 3-torus, and what you seem to mean by an actual 3-torus, is obtained from the first by embedding the cube in $\mathbb{R}^{n}$, $n\ge 3$ (you may have had 3-dimensional space in mind) and physically gluing the three pairs of opposite faces of the cube in the way you describe (as if your cube were made of stretchable rubber). What you obtain is an immersion of the 3-torus in $\mathbb{R}^{n}$. For example, if you start with your unit cube in 3-dimensional space you will get a 3-torus with self-intersections.
But you can start with your unit 3-dimensional cube sitting in $\mathbb{R}^{n}$, $n=4,5,\mbox{ etc.}$, and avoid these self-intersections. The reason for this is simply that although the gluing of the faces of the cube is always the same, no matter the dimension, extra dimensions allow one to move about more freely. Let's look at dimension three: after gluing the front and back faces we obtain something that looks like a thick washer. Now we glue the top annulus of the washer to the bottom one (these were originally the top and bottom faces of the cube). We get a hollowed-out doughnut having an outer surface and an inner surface (the original left and right faces of the cube). We glue these to each other to obtain the 3-torus. It is this last step that requires self-intersection. 3. The 3-torus can also be defined as the product space formed by taking the Cartesian product of three circles (e.g. of radius one), denoted by $S^{1}\times S^{1}\times S^{1}$. With respect to your second question, it appears to me that by a string-like path in three dimensions you mean the image of a line segment (say $[0,1]$) via a continuous map to $\mathbb{R}^{3}$. In your case the path seems to be nice enough (i.e., for example, it is not everywhere self-intersecting) that it will have topological dimension 1. Therefore, it can never be topologically equivalent to a 3-torus, whose topological dimension is 3 (the 3-torus is a 3-dimensional manifold). The topological dimension of a space is a topological invariant, meaning that two spaces that are topologically equivalent must have the same topological dimension. In this case, physically performing the identifications of the faces will definitely create loops and self-intersections in your path, but will not change its topological dimension. -
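The quotient-space description (1) can be made concrete: a point of the 3-torus is represented by cube coordinates taken mod 1, and the face identifications show up as wrap-around in each coordinate. A small Python illustration of this picture (my own sketch, not from the answer):

```python
import math

def torus_point(x, y, z):
    """Map a point of R^3 to its representative in the unit-cube fundamental domain."""
    return (x % 1.0, y % 1.0, z % 1.0)

def torus_distance(p, q):
    """Shortest distance on the 3-torus: each coordinate may wrap around a face pair."""
    return math.sqrt(sum(min(abs(a - b), 1.0 - abs(a - b)) ** 2
                         for a, b in zip(p, q)))
```

Walking out the left face at (0, y, z) instantly puts you at (1, y, z): both map to the same torus point, which is exactly the identification in the quotient description.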
http://mathoverflow.net/questions/81443?sort=oldest
## Fastest Algorithm to Compute the Sum of Primes?

Can anyone help me with references to the current fastest algorithms for computing the exact sum of primes less than some number n? I'm specifically curious about the best-case running times, of course. I'm pretty familiar with the various fast algorithms for prime counting, but I'm having a harder time tracking down sum-of-primes algorithms... - 2 You may be interested in arxiv.org/abs/1011.1667 although it's about bounds and asymptotics rather than exact sums. There is also an impressive tabulation at oeis.org/A007504/a007504.txt – Gerry Myerson Nov 20 2011 at 22:41 Thanks, Gerry - both interesting links, and the tabulation in particular is handy for me. I notice that the largest value (11138479445180240497) requires a full 64 bits to express in binary - looks like you can't count much higher without taking precision concerns pretty seriously (at least in languages like C). – Nathan McKenzie Nov 20 2011 at 23:12 1 You should be able to get away with using only 64 bits for a pretty long time, since the first few bits are determined by the asymptotic formula. Even beyond the point where you're comfortable holding a few bits implicitly, it's not hard to use a second 64-bit variable to extend your precision to 128 bits -- you need only check for overflow every million iterations unless you're going over half a trillion primes. – Charles Nov 20 2011 at 23:53

## 5 Answers

Deléglise-Dusart-Roblot [1] give an algorithm which determines $\pi(x,k,l)$, the number of primes up to $x$ that are congruent to $l$ modulo $k,$ in time $O(x^{2/3}/\log^2x).$ Using this algorithm to count the primes in all residue classes modulo each $k<2\log x$ takes $$1+\sum_{p<2\log x}(p-2)\sim\frac{2\log^2x}{\log\log x}$$ invocations of Deléglise-Dusart-Roblot, for a total of $O(x^{2/3}/\log\log x)$ time.
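The shape of this argument can be seen at toy scale: tabulate the residue-class counts naively (a stand-in for the fast Deléglise-Dusart-Roblot algorithm), turn them into the sum of primes modulo each small modulus, and recover the sum by the Chinese Remainder Theorem. A hedged Python sketch, with moduli chosen so their product exceeds the true sum:

```python
def primes_up_to(x):
    """Plain sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, flag in enumerate(sieve) if flag]

def pi_residue(x, k, l):
    """pi(x, k, l): primes p <= x with p congruent to l mod k (naive version)."""
    return sum(1 for p in primes_up_to(x) if p % k == l)

def sum_primes_mod(x, q):
    """Sum of primes <= x, mod q, computed from residue-class counts alone."""
    return sum(l * pi_residue(x, q, l) for l in range(q)) % q

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    M = 1
    for m in moduli:
        M *= m
    return sum(r * (M // m) * pow(M // m, -1, m)
               for r, m in zip(residues, moduli)) % M

x, moduli = 50, [7, 11, 13]      # 7 * 11 * 13 = 1001 exceeds the true sum
S = crt([sum_primes_mod(x, q) for q in moduli], moduli)
```

Here S recovers 328, the sum of the primes up to 50; the real algorithm replaces the sieve with sub-linear counting and uses PNT-based bounds to decide how many moduli are needed.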
These residue-class counts determine the value of $\sum_{p\le x}p$ modulo each prime up to $2\log x$ and hence, by the Prime Number Theorem and the Chinese Remainder Theorem, the value of the sum mod $\exp(\vartheta(2\log x))=x^2(1+o(1)).$ Together with bounds on the value of $\sum_{p\le x}p$ [2], this allows the computation of the sum. Note that primes slightly beyond $2\log x$ may be required, depending on the value of $\vartheta(2\log x).$ Practically speaking, except for tiny $x$, $2\log x+\log x/\log\log x$ suffices. This does not change the asymptotics. I do not know if it is possible to modify the Lagarias-Odlyzko analytic formula [3] to count in residue classes. If so, this would allow an $O(x^{1/2+o(1)})$ algorithm.

# References

[1] Marc Deléglise, Pierre Dusart, and Xavier-François Roblot, Counting primes in residue classes, Mathematics of Computation 73:247 (2004), pp. 1565-1575. doi 10.1.1.100.779

[2] Nilotpal Kanti Sinha, On the asymptotic expansion of the sum of the first n primes (2010).

[3] J. C. Lagarias and A. M. Odlyzko, Computing $\pi(x)$: An analytic method, Journal of Algorithms 8 (1987), pp. 173-191.

- An excellent answer! The Lagarias-Odlyzko analytic formula can certainly be obtained in this setting as well (although I am not sure whether someone has done it), giving the $x^{1/2+o(1)}$ answer. This basically comes down to being able to calculate multiple values (or multiple zeros) of Dirichlet L-functions fast (the Odlyzko-Schönhage algorithm for Dirichlet L-functions), and this is possible for Dirichlet L-functions as well as the Riemann zeta-function (see e.g. Platt's thesis: maths.bris.ac.uk/~madjp/thesis5.pdf) – Johan Andersson Nov 21 2011 at 18:16

This is a difficult problem. I asked about it here on math.se and here on cstheory.
In both cases my question was somewhat broader: allowing sums over different exponents rather than just 1. In the second link I also allowed interactive proofs. I noted that for exponent 0 there are efficient algorithms (superior to enumerating primes): the various analytical or combinatorial $\pi(x)$ algorithms. But no one was able to suggest an efficient means to solve the problem for any other exponent. An interactive proof was proposed that allows a person to check a proposed proof in linear time, but the confidence in the proof is asymptotically 0% against a cunning adversary. I am not aware of any hardness results, though, so this seems to be wide open. - Unless I am missing something, computing the sum of the kth powers of the positive primes less than n should be equivalent in polynomial complexity to finding all the primes less than n, when k is a positive integer. Also, it may be quick to approximate a sum of kth powers by considering the related sum for numbers of the form, say, 210m + j where j is coprime to, and less than, 210. Gerhard "Ask Me About System Design" Paseman, 2011.11.20 – Gerhard Paseman Nov 21 2011 at 1:07 The complexity of counting primes is sublinear. I've seen $n^{2/3}\log n$ for time with $n^{1/3}\log n$ for memory but I don't know if it is the current record. However that counting is based on some non-trivial sieve-type identities and I do not immediately see if there are any analogs of those identities for power sums. – fedja Nov 21 2011 at 5:32 Fedja: Are you saying counting primes is sub-linear, or do you mean summing primes? If the latter, I would be fascinated to see a reference. If the former, there is a method from Lagarias-Odlyzko that is commonly listed as having variants that are O(x^3/5+epsilon) time, O(x^epsilon) space, and O(x^1/2+epsilon) time and O(x^1/4+epsilon) space (though with quite difficult constant time factors). – Nathan McKenzie Nov 21 2011 at 6:15 I know only about counting (I just responded to Gerhard).
It is, indeed, an interesting question if we can do better than $n$ for the sum rather than for the number. I haven't seen it anywhere, but the first natural step would be to go back to all those partial sieve formulae and see if they can be modified to give the sums rather than the numbers. I do not see how to do it immediately, but I do not see why it is impossible in principle either. – fedja Nov 21 2011 at 12:23 Unfortunately, fedja and I are experiencing a misunderstanding. I am referring to finding, not counting, primes. Only if pi(n) and pi(n+1) are different have we found a prime. Knowing pi(n) and pi(n+k) does not find the primes in the middle in general. Gerhard "Ask Me About System Design" Paseman, 2011.11.21 – Gerhard Paseman Nov 21 2011 at 19:07 Edit Nov 22: Changed the condition of the test function $\Phi$ somewhat to simplify my argument (removed one sum in the identity below). Although I like Charles' answer and its application of the Chinese Remainder Theorem, let me give you a different take on the problem. It is possible to use an analytic method more directly (similar to the Lagarias-Odlyzko method), using just the Riemann zeta-function and not any theory of Dirichlet L-functions (or primes in arithmetic progressions). This method can treat rather arbitrary sums $$\sum_{ p < x } f(p)$$ for smooth functions $f$. The idea is to consider the identity $$\sum_{ p < x } f(p) =\sum_{p} f(p)\Phi \left( \frac {p-x} {\sqrt x} \right ) - \sum_{x< p < x + \sqrt x } f(p) \Phi \left(\frac {p-x} {\sqrt x} \right),$$ where $\Phi(x)$ is a smooth test function such that $\Phi(x)=1$ for $x<0$ and $\Phi(x)=0$ for $x>1$. It is clear that the last sum can be calculated in $O( x^{1/2} )$ time (say by sieving out the primes in the interval and explicit calculation).
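The identity holds for any admissible Φ: primes below x get weight 1 in the first right-hand sum, primes in (x, x + √x) appear in both right-hand sums and cancel, and primes beyond x + √x get weight 0. A numeric check in Python with a smoothstep Φ (the choice of test function here is mine):

```python
import math

def Phi(t):
    """Test function: 1 for t <= 0, 0 for t >= 1, C^1 smoothstep in between."""
    if t <= 0.0:
        return 1.0
    if t >= 1.0:
        return 0.0
    return 1.0 - (3.0 * t * t - 2.0 * t ** 3)

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, flag in enumerate(sieve) if flag]

f = lambda p: p                   # f(p) = p, i.e. summing the primes
x = 100
rx = math.sqrt(x)
ps = primes_up_to(2 * x)          # covers every prime with nonzero weight

lhs = sum(f(p) for p in ps if p < x)
rhs = (sum(f(p) * Phi((p - x) / rx) for p in ps)
       - sum(f(p) * Phi((p - x) / rx) for p in ps if x < p < x + rx))
```

Here lhs is the sum of the primes below 100, namely 1060, and rhs matches it to floating-point accuracy.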
The idea is now to use the Mellin-transform identity $$\sum_{p}f(p)\Phi \left( \frac {p-x} {\sqrt x} \right )= \frac 1 {2 \pi i} \int_{c-\infty i}^{c+\infty i} \sum_{p} p^{-s} \int_0^\infty \Phi \left(\frac{y-x}{\sqrt x} \right)f(y) y^{s-1}\, dy\, ds \qquad (c>1).$$ The point here is that the Mellin transform $\int_0^\infty \Phi(\frac{y-x}{\sqrt x})f(y) y^{s-1}dy$ is small when $|\Im(s)|>x^{1/2+\epsilon}$ and $\Re(s)=c$ for any $\epsilon>0$, so that part of the integral can be discarded. The Dirichlet series $\sum_{p} p^{-s}$ can be expressed in terms of the logarithm of the zeta function at the arguments $ks$ (summing over $k$). Now the Odlyzko-Schönhage algorithm (quite a nice algorithm, based on the fast Fourier transform) allows us to calculate the values of the Riemann zeta-function for, say, $s=c+it$ with $|t| < x^{1/2+\epsilon}$ in $O(x^{1/2+\epsilon+o(1)})$ time. We also remark that the Weil explicit formula can be used instead of this complex integral (then the zeros of the Riemann zeta-function need to be calculated fast, but this can also be done by the Odlyzko-Schönhage algorithm). This means that the total time to calculate $\sum_{ p < x } f(p)$ will be $O(x^{1/2+\epsilon})$ for any $\epsilon>0$. Note that the same argument applies when primes in arithmetic progressions are concerned (see my comment on Charles' answer). Since the Odlyzko-Schönhage algorithm also works for Dirichlet L-functions, this case can be treated in the same way. - This is a great answer. – Charles Nov 22 2011 at 13:54 This has been great. Some nice answers here - I'll accept one shortly. I was looking for a lit review because I have implemented an approach that runs in $O(n^\frac{2}{3} \log n)$ time and $O(n^\frac{1}{3} \log n)$ space for summing primes raised to non-negative integer powers, and I wanted to see how it compared. I wasn't exactly ready to write up what I'd done, but I guess I'll take a stab.
For the curious:

PART 1 - The General Identity

$\displaystyle\sum_{j=2}^n\frac{\Lambda(j)}{\log j}j^a = \sum_{j=2}^nj^a - \frac{1}{2}\sum_{j=2}^n\sum_{k=2}^{\frac{n}{j}}j^a \cdot k^a+ \frac{1}{3}\sum_{j=2}^n\sum_{k=2}^{\frac{n}{j}}\sum_{l=2}^{\frac{n}{jk}}j^a \cdot k^a \cdot l^a - \frac{1}{4}\cdots$ where $\Lambda(n)$ is the von Mangoldt function. This is a generalization of Linnik's identity found here. For ease of notation, using $D_{a,k}(n) = \displaystyle\sum_{j=2}^n j^a \cdot D_{a,k-1}(\frac{n}{j})$ and $D_{a,0}(n) = 1$, rewrite the nested sums on the right to give us $\displaystyle\sum_{j=2}^n\frac{\Lambda(j)}{\log j}j^a = \sum_{j=1}^{\log_2 n} \frac{(-1)^{j+1}}{j}D_{a,j}(n).$ We evaluate only $\log_2 n$ terms because $D_{a,k}(n) = 0$ when $n < 2^k$. This counts a kind of sum over prime powers. We can extract the prime sums by a kind of Möbius inversion: $\displaystyle\sum_{p \leq n} p^a = \sum_{j=1}^{\log_2 n} \sum_{k=1}^{\log_2 (n^\frac{1}{j})} \frac{(-1)^{k+1}}{j \cdot k}\mu(j) D_{a \cdot j, k}(n^{\frac{1}{j}})$ Thus, if we can find fast methods of computing $D_{a,k}(n)$, we'll have fast ways of computing $\sum_{p \leq n} p^a$. In Mathematica, this is

```mathematica
DD[a_, k_, n_] := Sum[j^a DD[a, k - 1, n/j], {j, 2, n}]
DD[a_, 1, n_] := Sum[j^a, {j, 2, n}]
SumPrimes[a_, n_] := Sum[(-1)^(k + 1)/(j k) MoebiusMu[j] DD[j a, k, n^(1/j)],
  {j, 1, Log[2, n]}, {k, 1, Log[2, (n^(1/j))]}]
```

SumPrimes[0, n] will return the count of primes, SumPrimes[1, n] will return the sum of primes less than n, and so on.

PART 2 - The Specific Algorithm

Here's my fastest approach for calculating $D_{a,k}(n)$. It is inspired by this method for calculating the Mertens function. (If you find the following writeup interesting but rushed, I've written about my approach in somewhat more detail here and, for the special case of just prime counting, here. A C++ implementation is here.) Bear with me; this is a bit involved. It requires two observations.
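A direct Python translation of the Part 1 identity (exact rationals and integer roots, no attempt at speed) makes it easy to check at small n before getting into the two observations; the memoization and helper names here are my own:

```python
from fractions import Fraction
from functools import lru_cache

def iroot(n, k):
    """Largest r with r^k <= n (exact integer k-th root)."""
    r = round(n ** (1.0 / k))
    while r ** k > n:
        r -= 1
    while (r + 1) ** k <= n:
        r += 1
    return r

def mobius(n):
    """Moebius function mu(n) by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

@lru_cache(maxsize=None)
def D(a, k, n):
    """D_{a,k}(n) = sum over j=2..n of j^a * D_{a,k-1}(floor(n/j)); D_{a,0} = 1."""
    if k == 0:
        return 1
    return sum(j ** a * D(a, k - 1, n // j) for j in range(2, n + 1))

def sum_primes(a, n):
    """Sum of p^a over primes p <= n, via the Linnik-style inversion above."""
    total = Fraction(0)
    j = 1
    while 2 ** j <= n:
        r = iroot(n, j)
        k = 1
        while 2 ** k <= r:
            total += Fraction((-1) ** (k + 1) * mobius(j), j * k) * D(a * j, k, r)
            k += 1
        j += 1
    return total
```

sum_primes(0, n) reproduces the prime count and sum_primes(1, n) the prime sum; the result always collapses to an integer-valued Fraction, which is itself a decent sanity check on the identity.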
First, we can use sieving to calculate values of $D_{a,k}(n)$ up to $n^\frac{2}{3}$ in roughly $O(n^\frac{2}{3} \log n)$ time and $O(n^\frac{1}{3} \log n)$ space with the following approach. We want to sieve numbers up to $n^\frac{2}{3}$ in such a way that we have their full power signature, with $n = {p_1} ^ {a_1} \cdot {p_2} ^ {a_2} \cdot {p_3} ^ {a_3} \cdots$ With this, the normal count-of-divisors function is $d_k(n) = \binom {{a_1} + k - 1} {a_1} \cdot \binom {{a_2} + k - 1} {a_2} \cdot \binom {{a_3} + k - 1} {a_3} \cdots$ The strict count-of-divisors function, a count of divisors excluding 1 as a factor, is in turn $d_k'(n) = \sum_{j=0}^k (-1)^{k-j} \binom {k}{j} d_j(n)$. With the strict count-of-divisors function, we can say $D_{a,k}(n) = D_{a,k}(n-1) + d_k'(n) \cdot n^a$. Thus, if we sieve all the numbers from 1 to $n^\frac{2}{3}$, for each number we can compute its values of $d_k(n)$, from which we can calculate $d_k'(n)$, which in turn lets us keep a running total of $D_{a,k}(n)$. We will segment our sieve in blocks of size $n^\frac{1}{3}$ to stay within our memory bound. The second observation is that, using the following identity, we can express $D_{a,k}(n)$ as sums that rely only on $D_{a,k}(j)$ where $j < n^\frac{2}{3}$ and on arbitrary values of $D_{a,1}(j) = \sum_{k=2}^j k^a$, which can generally be computed in constant or log time for any value of $j$, for the values of $a$ that we're working with. If we let $d_{a,k}(n) = D_{a,k}(n) - D_{a,k}(n-1)$, we start with this combinatorial identity, whose derivation I have written up elsewhere but don't have the room to show: $D_{a,k}(n) =$ $\displaystyle\sum_{j=t+1}^n d_{a,1}(j) \cdot D_{a, k-1}(\frac{n}{j})$ $\displaystyle + \sum_{j=2}^t d_{a,k-1}(j) \cdot D_{a,1}( \frac{n}{j})$ $+ \displaystyle \sum_{j=2}^t \sum_{s=\frac{t}{j} + 1}^\frac{n}{j} \sum_{m=1}^{k-2} d_{a,1}(s) \cdot d_{a,m}(j) \cdot D_{a, k-m-1}( \frac{n}{js} )$ For this identity, $t$ is some value such that $1 < t < n$.
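Both divisor-count formulas in the first observation are easy to verify in isolation: $d_k(n)$ from the binomial product over the exponent signature, and the strict count $d_k'(n)$ from the alternating binomial sum. A small Python check (helper names are mine):

```python
from math import comb

def exponents(n):
    """Exponent signature [a1, a2, ...] of n = p1^a1 * p2^a2 * ..."""
    exps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)
    return exps

def d(k, n):
    """d_k(n): ordered k-tuples of positive integers whose product is n."""
    if n == 1:
        return 1          # the all-ones (or empty) tuple
    out = 1
    for a in exponents(n):
        out *= comb(a + k - 1, a)
    return out

def d_strict(k, n):
    """d_k'(n): ordered k-tuples of integers >= 2 whose product is n."""
    return sum((-1) ** (k - j) * comb(k, j) * d(j, n) for j in range(k + 1))
```

For example d_strict(2, 6) = 2 counts 2·3 and 3·2, and d_strict(3, 8) = 1 counts 2·2·2.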
If you look closely through the sums in this identity, you should see that the right-hand side doesn't rely on any values of $d_{a,k}(j)$ where $j > t$ or on $D_{a,k}(j)$ where $j > \frac{n}{t}$, except when $k = 1$, as desired. Even if we set $t=n^\frac{2}{3}$ and sieve values of $D_{a,k}(j)$ up to $n^\frac{2}{3}$, these sums are too large to be computed within our time bound. Instead, we have to take advantage of certain symmetries to reduce the calculations in these sums. I'm going to hand-wave over the process leading to this (you can find a description of it and much else here). Taking one last leap, the following Mathematica code takes the identity we were just discussing and develops it further as the function labeled DDFast. Remember, when tracing through the code, that in a full implementation any instances of DD and d inside DDFast would be looked up thanks to sieving and caching within our time and space bounds, so what follows isn't an accurate reflection of its execution time, just the accurate mechanics of DDFast.
```mathematica
DD[A_, k_, n_] := Sum[j^A DD[A, k - 1, n/j], {j, 2, n}]
DD[A_, 1, n_] := Sum[j^A, {j, 2, n}]
d[A_, k_, n_] := DD[A, k, n] - DD[A, k, n - 1]

rng[0, start_, end_] := Floor[end] - (start - 1)
rng[1, start_, end_] := Floor[end] (Floor[end] + 1)/2 - (start - 1) start/2
rng[2, start_, end_] := Floor[end] (Floor[end] + 1) (2 Floor[end] + 1)/6 -
  (start - 1) start (2 start - 1)/6
rng[A_, start_, end_] := Sum[m^A, {m, start, end}]

DDFast[A_, 1, n_] := rng[A, 2, n]
DDFast[A_, k_, n_] :=
  Sum[j^A DD[A, k - 1, n/j], {j, Floor[n^(1/3)] + 1, n^(1/2)}] +
  Sum[rng[A, Floor[n/(j + 1)] + 1, n/j] DD[A, k - 1, j],
    {j, 1, n/Floor[n^(1/2)] - 1}] +
  Sum[d[A, k - 1, j] rng[A, 2, n/j], {j, 2, n^(1/3)}] +
  Sum[s^A d[A, m, j] DD[A, k - m - 1, n/(j s)],
    {j, 2, n^(1/3)}, {s, Floor[Floor[n^(1/3)]/j] + 1, Floor[n/j]^(1/2)}, {m, 1, k - 2}] +
  Sum[(rng[A, Floor[n/(j (s + 1))] + 1, n/(j s)]) (Sum[
      d[A, m, j] DD[A, k - m - 1, s], {m, 1, k - 2}]),
    {j, 2, n^(1/3)}, {s, 1, Floor[n/j]/Floor[Floor[n/j]^(1/2)] - 1}]

SumPrimesFast[A_, n_] := Sum[(-1)^(k + 1)/(j k) MoebiusMu[j] DDFast[j A, k, n^(1/j)],
  {j, 1, Log[2, n]}, {k, 1, Log[2, (n^(1/j))]}]
```

The function labeled DDFast, which is really the key here, can be computed in $O(n^\frac{2}{3})$ time as long as all of the functions it references have been cached. So there we have it. If you compute SumPrimesFast and, in turn, DDFast, but interleave the computation of those sums with the sieving process described above, you end up with something like an $O(n^\frac{2}{3} \log n)$ time and $O(n^\frac{1}{3} \log n)$ space algorithm that can compute prime sums for non-negative integer powers, including as a prime counting function for $a = 0$. It's worth taking a look, too, at `rng[A_, start_, end_] := Sum[m^A, {m, start, end}]` - the reason this algorithm works for non-negative integers is that rng can be computed in constant time for non-negative integer values of $A$ with Faulhaber's formula.
It's possible the algorithm could work for other powers if rng can be usefully extended to other powers with sufficient speed and precision. I have an implementation of this algorithm in C++ here. It has a few C math precision issues I haven't tracked down, so it's often off by 2 or 3, but it does essentially implement what I've described here, and might be useful for stepping through with breakpoints. A somewhat more fleshed-out description of this can be found here; several details that I've hand-waved over are described there. Better written up still is a description of this as a prime counting algorithm, here; it does a better job showing a few key derivations. - I'll put in a plug for my original paper with Lagarias and Odlyzko, as well as a recent paper by Bach, Klyve and Sorenson: http://www.ams.org/journals/mcom/2009-78-268/S0025-5718-09-02249-2/home.html Computing prime harmonic sums, Math. Comp. 78 (2009), 2283-2305. Although the algorithms of Lagarias and Odlyzko (with the extra algorithm of Odlyzko-Schönhage for computing good approximations to a bunch of values of $\zeta(s)$) are asymptotically the best, having complexity $O(x^{1/2 + \epsilon})$, the combinatorial algorithms will probably work best for any reasonable range. Though one should look at J. Buethe, J. Franke, A. Jost, and T. Kleinjung, "Conditional Calculation of pi(10^24)", posting to the Number Theory Mailing List, Jul 29 2010. A little more detail (that's all I have right now) is contained in the talk of David Platt: www.maths.bris.ac.uk/~madjp/junior%20talk.pdf . The idea in the combinatorial method is the following: Suppose that $f(n)$ is a completely multiplicative function of the positive integers. In the case of calculating $\pi(x)$, we take $f(n) = 1$. In the case of calculating the sum of the primes $\le x$ we take $f(n) = n$. In the case of calculating $\pi(x,a,q)$ we take $f(n)$ to be a Dirichlet character mod $q$.
Define $\phi(x,a)$ as the sum of $f(n)$ for all integers $n \le x$ which are not divisible by the first $a$ primes. Clearly $\phi(x,0) = \sum_{n \le x} f(n)$. We have the recursion: $\phi(x,a+1) = \phi(x,a) - f(p_{a+1})\phi(x/p_{a+1},a).$ Imagine that you have a labeled tree, where the labels are $\pm f(k) \phi(x/k,b)$ for some $k$ and $b$. You can expand any node in the tree you want by applying the above formula. This has the property that it leaves the sum of the values at the leaves the same. So the idea is to start with a tree consisting of one node labeled $\phi(x,\pi(\sqrt x))$ and keep expanding until either $b=0$ or $x/k <$ some cutoff. If you accumulate all such nodes you can evaluate all of them by sieving (and a special data structure -- see the Lagarias-Miller-Odlyzko paper for details), and the optimal value for the cutoff is $x^{2/3}$, perhaps multiplied by some logarithmic factors. Here's the reference to my paper with Lagarias and Odlyzko: J. C. Lagarias, V. S. Miller, and A. M. Odlyzko, Computing $\pi(x)$: The Meissel-Lehmer method, Mathematics of Computation 44:170 (Apr. 1985), pp. 537-560. - You mentioned Platt's talk; possibly relevant is his new preprint arxiv.org/abs/1203.5712. – Charles Mar 29 2012 at 21:57 @Charles: Thanks for the pointer to Platt's paper. I wasn't aware of it (of course it was only put on arXiv 3 days ago). – Victor Miller Mar 30 2012 at 3:42 Thanks for all this - great references. – Nathan McKenzie May 12 2012 at 20:26
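A direct, uncached rendering of this recursion with f(n) = n (so it sums primes rather than counting them) shows the mechanics; the real method truncates the expansion tree at the cutoff and sieves the leaves, which this sketch deliberately skips:

```python
def primes_up_to(n):
    """Plain sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [p for p, flag in enumerate(sieve) if flag]

def phi(x, a, primes, f):
    """Sum of f(n) over n <= x not divisible by any of the first a primes,
    expanded fully via phi(x, a) = phi(x, a-1) - f(p_a) * phi(x // p_a, a-1)."""
    if a == 0:
        return sum(f(n) for n in range(1, x + 1))
    p = primes[a - 1]
    return phi(x, a - 1, primes, f) - f(p) * phi(x // p, a - 1, primes, f)

def sum_primes(x):
    """Sum of all primes <= x. With a = pi(sqrt(x)), phi(x, a) equals
    f(1) plus the sum of f(p) over the primes sqrt(x) < p <= x."""
    root = int(x ** 0.5)
    while (root + 1) ** 2 <= x:
        root += 1
    small = primes_up_to(root)
    return phi(x, len(small), small, lambda n: n) - 1 + sum(small)
```

Note phi here takes time exponential in a; the Lagarias-Miller-Odlyzko point is precisely that most of the tree can be pruned at the cutoff and the remaining leaves batched by sieving.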
http://math.stackexchange.com/questions/65180/what-is-it-that-makes-this-proof-about-rational-rectangles-work-fundamentally/65182
# What is it that makes this proof about rational rectangles work fundamentally? I saw this problem several years ago, and I discovered a solution to it. I've since learned a somewhat more efficient solution based on the same idea. Call a rectangle in the $(x,y)$ plane rational if it has sides parallel to the axes and at least one of its sidelengths is rational. Suppose $R$ is a rectangle which can be dissected into a finite collection of rational rectangles (i.e., they cover $R$ and intersect only on their boundaries). Show $R$ is rational. The rather elegant solution (not by me, but similar to mine) to this problem is as follows. First, scale up the whole picture by the least common denominator of all the rational sidelengths, so each rectangle in the dissection has an integer sidelength. Then integrate $\sin{(2 \pi x)} \sin{(2 \pi y)}$ over $R$. Via the dissection, the integral is clearly $0$ (each piece contributes $0$, since one of its sidelengths is an integer), and hence $R$ must be rational. The only things this proof needs are the fundamental theorem of calculus and the antiderivative of $\sin$. For quite a while, I was satisfied with this solution. The machinery of the integral surprisingly has all the necessary tools to prove this. But now I want to understand what it is that makes this proof work. I've tried to unpack this proof into a sequence of simple statements, but I find that it always seems to get more complicated. Hence, I ask, is there a way to simultaneously simplify this argument and avoid calculus (with an argument similar to this one, not a totally new one)? Is there a way to unpack some of the definitions to make this argument more transparent? - There was a slight typo before. "Call a rectangle in the (x,y) plane rational if it has sides parallel to the axes and at least one of its sidelengths is rational." – Logan Maingi Sep 16 '11 at 23:39

## 2 Answers

Anyone interested in this problem would enjoy the very nice expository article (fourteen proofs!) by Stan Wagon.
Added: And more than fourteen proofs make the result even more true. - This is an excellent reference just from skimming it, and I suspect it will totally answer my question. Give me some time to read it first, but it is very much appreciated. – Logan Maingi Sep 16 '11 at 23:44 You can replace the need for calculus with just the concept of volume under a curve, by replacing the sine function with a periodic piecewise-linear function of average 0 over its period, for example the lines connecting the point sequence (0,0), (1/4,1), (3/4,-1), (1,0), repeated with period 1. You can find the relevant volumes with basic geometry of triangles instead of integral calculus. -
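Both kernels can be checked numerically. The sine integral factors in closed form: over $[a,b]\times[c,d]$ it equals $\frac{\cos 2\pi a-\cos 2\pi b}{2\pi}\cdot\frac{\cos 2\pi c-\cos 2\pi d}{2\pi}$, so an integer width or height forces it to vanish; and the piecewise-linear wave suggested above integrates to 0 over any whole number of periods. A Python sketch (my own framing of both checks):

```python
import math

def rect_integral(a, b, c, d):
    """Closed form of the integral of sin(2 pi x) sin(2 pi y) over [a,b] x [c,d]."""
    fx = (math.cos(2 * math.pi * a) - math.cos(2 * math.pi * b)) / (2 * math.pi)
    fy = (math.cos(2 * math.pi * c) - math.cos(2 * math.pi * d)) / (2 * math.pi)
    return fx * fy

def tri(x):
    """Periodic piecewise-linear wave through (0,0), (1/4,1), (3/4,-1), (1,0)."""
    t = x % 1.0
    if t <= 0.25:
        return 4.0 * t
    if t <= 0.75:
        return 2.0 - 4.0 * t
    return 4.0 * t - 4.0

def integrate(f, a, b, steps=100000):
    """Midpoint-rule numeric integral of f over [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))
```

rect_integral(0.3, 2.3, 0.1, 0.75) vanishes because the width 2 is an integer, while a quarter-period square such as [0, 1/4]^2 gives a nonzero value; likewise integrate(tri, 0.3, 2.3) is 0 to numeric accuracy.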
http://mathoverflow.net/questions/31065/applications-of-non-reductive-git
## Applications of non-reductive GIT

Geometric invariant theory works well when the algebraic group $G$ acting on a variety is reductive. There has been recent work by Doran and Kirwan here and here to find a canonical method of constructing GIT quotients for non-reductive groups. My question is what are potential applications for their work? One specific application they mention is constructing moduli of hypersurfaces in toric varieties. I would be interested in knowing of other applications.

-

## 3 Answers

At the Atiyah 80th-birthday conference, Kirwan spoke about one application. Namely, she stated in her talk that there is an application to the Green-Griffiths conjecture. You can download the talk here http://www.maths.ed.ac.uk/~aar/atiyah80.htm and the slides are here http://www.icms.org.uk/downloads/GandP/Kirwan.pdf I am not sure if this was written down somewhere.

-

This isn't really an answer, but I think it will help point in a useful direction. In case it's already obvious to you, please disregard. :-)

In linear algebra we classify $n\times n$ matrices up to conjugation, i.e. the $GL(n)$-orbits on the set ${\mathfrak gl}(n)$ of $n\times n$ matrices. If we work over ${\mathbb C}$, then we know the complete classification of orbits by Jordan canonical form; and the $GL(n)$-equivariant geometry of ${\mathfrak gl}(n)$ is beautiful and interesting. In particular, using Jordan form we know that every matrix can be conjugated to an upper triangular one; let ${\mathfrak b}$ be the set of upper triangular matrices. You might then ask, if we only allow ourselves to conjugate by the group $B$ of upper triangular matrices, what's the orbit structure now?
There are reasons that this question is interesting. The first thing you might want to do is understand the function theory, i.e. the ring of invariant functions ${\mathbb C}[{\mathfrak b}]^B$, or more generally semi-invariant functions: i.e., choosing a character $\chi: B\rightarrow {\mathbb C}^*$, functions $f(b)$ for which $f(g\cdot b) = \chi(g)f(b)$ for all $g\in B$, $b\in {\mathfrak b}$. This leads to GIT for the nonreductive group $B$.

Now, in this case it's not really necessary to deal with $B$, of course: we can replace $B$ by $G=GL(n)$ and ${\mathfrak b}$ by $\widetilde{\mathfrak g} = G\times_B {\mathfrak b}$, the Grothendieck-Springer resolution of ${\mathfrak gl}(n)$, and do GIT for the $GL(n)$-action on $\widetilde{\mathfrak g}$. [And in fact, if I'm not mistaken, this method of inducing up to a reductive group and studying invariant theory for that larger group plays a role in the Doran-Kirwan theory?]

Still, though, you can imagine situations that arise in nature in which you are interested in objects that naturally have some kind of filtration, and then the group by which you are quotienting just won't be reductive. These kinds of things arise naturally in studying moduli of decorated sheaves: see for example this paper of Drezet and Trautmann for the kind of thing that happens.

- Thanks! I never knew that $\widetilde{\mathfrak g}$ was considered a Grothendieck-Springer resolution. You are correct that Doran-Kirwan also use this construction, which they call a reductive envelope. – Chirag Lakhani Jul 8 2010 at 17:20

I know of one potential application. Hain has some work described here http://arxiv.org/pdf/0802.0814 that deals with certain non-reductive GIT quotients that contain useful information about 3-manifold invariants.
The set of 3-manifolds equipped with a genus $g$ Heegaard splitting is naturally identified with the double coset space $H_g \backslash \Gamma_g / H_g$ where $\Gamma_g$ is the mapping class group of a genus $g$ surface and $H_g \subset \Gamma_g$ is the handlebody subgroup. Using relative Malcev completions, one replaces this with a double coset space in which the subgroup is not reductive. So to construct the quotient one can try to use non-reductive GIT techniques. Last I heard, Doran and Hain were collaborating to work this out.

- Thanks. My advisor mentioned that he had conversations with Hain about this paper. Looking forward to seeing a paper on it if they are in fact working on this problem. – Chirag Lakhani Jul 8 2010 at 17:22
http://mathhelpforum.com/discrete-math/99145-valid-subsets-reccurence-relation-print.html
# The valid subsets and recurrence relation

• August 24th 2009, 10:36 PM Snowboarder

The valid subsets and recurrence relation

Hi all. I'm struggling with one quite hard question:

Let $I_n = \{1,2,3,\dots,n\}$ for $n \in \mathbb{N}$. A valid subset of $I_n$ does not contain two numbers of the form $x, x+1$ for $1\le x<n$. For example, $\{1,3,5\}$ is valid but $\{1,3,4\}$ is invalid. Let $a_n$ denote the number of valid subsets of $I_n$.

How can I list explicitly the valid subsets of $I_k$ for $k\le 4$? And how can I derive a recurrence relation for $a_n$ (how can I obtain the formula)?

Any advice would be much appreciated. Regards

• August 25th 2009, 01:53 AM Defunkt

$k=0:$ The empty set matches the criteria, so $a_0 = 1$

$k=1:$ $\phi , \left\{1\right\}$ So $a_1=2$

$k=2:$ $\phi , \left\{1\right\},\left\{2\right\}$ So $a_2=3$

$k=3:$ $\phi , \left\{1\right\},\left\{2\right\}, \left\{3\right\}, \left\{1,3\right\}$ So $a_3=5$

$k=4:$ $\phi , \left\{1\right\},\left\{2\right\}, \left\{3\right\}, \left\{4\right\}, \left\{1,3\right\}, \left\{1,4\right\}, \left\{2,4\right\}$ So $a_4=8$

So we want to find a recurrence relation for $a_n$. We can observe that $a_n = a_{n-1} + P(n)$, where $P(n)$ is the number of valid subsets of $I_n$ that contain $n$ (such a subset cannot contain $n-1$, so removing $n$ leaves a valid subset of $I_{n-2}$). It is then easily seen that $P(n) = a_{n-2}$, and so we get: $a_n = a_{n-1} + a_{n-2}$, as expected.

• August 25th 2009, 06:49 AM Plato

Quote: Originally Posted by Snowboarder [the problem statement above]

Here is a way to see how the recursion works. Let's say $n\ge 3$. Any subset valid in $I_{n-1}$ would of course be valid in $I_{n}$. Now valid subsets of $I_{n-2}$ do not contain the number $n-1$.
So if we form the unions $J = \left\{ \{ n\} \cup Y : Y\subseteq I_{n - 2} \text{ and } Y \text{ is valid}\right\}$, that is, we put $n$ into every valid subset of $I_{n-2}$, we get exactly the valid subsets of $I_n$ that contain $n$ (and hence are not subsets of $I_{n-1}$). So now we have $a_{n}= a_{n-1}+a_{n-2}$.
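The enumeration and the recurrence above are easy to confirm by brute force. The sketch below (plain Python, my own illustration, not from the thread) lists the valid subsets directly and checks $a_n = a_{n-1} + a_{n-2}$:

```python
from itertools import combinations

def valid_subsets(n):
    """All subsets of {1,...,n} containing no two consecutive numbers."""
    result = []
    for k in range(n + 1):
        for s in combinations(range(1, n + 1), k):
            # a tuple from combinations is sorted, so it suffices to
            # check that adjacent chosen elements differ by more than 1
            if all(b - a > 1 for a, b in zip(s, s[1:])):
                result.append(s)
    return result

a = [len(valid_subsets(n)) for n in range(9)]
print(a)  # [1, 2, 3, 5, 8, 13, 21, 34, 55] -- a shifted Fibonacci sequence

# Check the recurrence a_n = a_{n-1} + a_{n-2}:
print(all(a[n] == a[n - 1] + a[n - 2] for n in range(2, len(a))))  # True
```

The counts for $k \le 4$ match the lists given in the second post.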
http://mathhelpforum.com/pre-calculus/94604-solved-solve-logaritmic-equation-terms-natural-log.html
# Thread: 1. ## [SOLVED] Solve Logarithmic Equation in terms of natural log

I need to solve the following equation:

10^(2X+3) = 280

To express it in logarithmic form, I know I need to do the following:

log_10(280) = 2X+3

At this point I'm not sure where to go. I can move the three over to the left:

log_10(280) - 3 = 2X

Then divide by two:

(log_10(280) - 3)/2 = X

I get X = -.276, but I feel like I'm missing a step in determining the answer. Did I do the steps correctly to come to this answer or is something missing?

2. Originally Posted by Snowcrash: I need to solve the following equation: 10^(2X+3) = 280 [...]

$10^{2x + 3} = 280$

$\ln{10^{2x + 3}} = \ln{280}$

$(2x + 3)\ln{10} = \ln{280}$

$2x + 3 = \frac{\ln{280}}{\ln{10}}$

$2x = \frac{\ln{280}}{\ln{10}} - 3$

$x = \frac{1}{2}\left(\frac{\ln{280}}{\ln{10}} - 3\right)$.

3. Great - I see I did not use the natural logarithmic form, but the answer is the same (-.276). Thank you!

4. Originally Posted by Snowcrash: Great - I see I did not use the natural logarithmic form, but the answer is the same (-.276). Thank you!

No problem. My personal preference is always the natural logarithm, but in this case, your answer might look a bit neater if you use the logarithm with base 10.

$10^{2x + 3} = 280$

$10^{2x + 3} = 28\cdot 10$

$\log_{10}{10^{2x + 3}} = \log_{10}{(28\cdot 10)}$

$(2x + 3)\log_{10}{10} = \log_{10}{28} + \log_{10}{10}$

$2x + 3 = \log_{10}{28} + 1$

$2x = \log_{10}{28} - 2$

$x = \frac{1}{2}\log_{10}{28} - 1$.
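Both derivations in the thread can be verified in a couple of lines. The sketch below (my own check, not part of the thread) computes $x$ via natural logs and via the base-10 simplification and confirms they agree and satisfy the original equation:

```python
from math import log, log10

# Solve 10**(2*x + 3) == 280 two ways and confirm they agree.
x_natural = 0.5 * (log(280) / log(10) - 3)   # x = (ln 280 / ln 10 - 3) / 2
x_base10  = 0.5 * log10(28) - 1              # x = (1/2) log10(28) - 1
print(x_natural)                              # ~ -0.2764
print(abs(x_natural - x_base10) < 1e-12)      # True

# Substitute back into the original equation:
print(abs(10 ** (2 * x_natural + 3) - 280) < 1e-9)  # True
```

This also confirms the rounded value -.276 quoted in the thread.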
http://mathoverflow.net/questions/20919/polish-spaces-in-probability/122393
## Polish spaces in probability

Probabilists very often work with Polish spaces, though it is not always very clear where this assumption is needed.

Question: what can go wrong when doing probability on non-Polish spaces?

- For those unaware of the definition, "a Polish space is a separable completely metrizable topological space; that is, a space homeomorphic to a complete metric space that has a countable dense subset." (Wikipedia) – Tom LaGatta Apr 10 2010 at 23:14

## 8 Answers

One simple thing that can go wrong is purely related to the size of the space (Polish spaces all have cardinality $\leq 2^{\aleph_0}$). When spaces are large enough, product measures become surprisingly badly behaved. Consider Nedoma's pathology: Let $X$ be a measure space with $|X| > 2^{\aleph_0}$. The diagonal in $X^2$ is not measurable.

We'll prove this by way of a theorem: Let $U \subseteq X^2$ be measurable. Then $U$ can be written as a union of at most $2^{\aleph_0}$ sets of the form $A \times B$.

Proof: First note that we can find some countable collection $A_i$ such that $U \in \sigma(A_i \times A_j)$ (proof: the set of $V$ for which we can find such $A_i$ is a sigma algebra containing the basis sets). For $x \in \{0, 1\}^\mathbb{N}$ define $B_x = \bigcap \{ A_i : x_i = 1 \} \cap \bigcap \{ A_i^c : x_i = 0 \}$. Consider all sets which can be written as a (possibly uncountable) union of sets of the form $B_x \times B_y$. This is a sigma algebra and obviously contains all the $A_i \times A_j$, so it contains $U$. But now we're done. There are at most $2^{\aleph_0}$ of the $B_x$, and each is certainly measurable in $X$, so $U$ can be written as a union of $2^{\aleph_0}$ sets of the form $A \times B$. QED

Corollary: The diagonal is not measurable.
Evidently the diagonal cannot be written as a union of at most $2^{\aleph_0}$ rectangles, as they would all have to be single points, and the diagonal has size $|X| > 2^{\aleph_0}$.

- (A few latex errors make it hard to read, use backticks ` around the \\$ if the equation doesn't display right.) – François G. Dorais♦ Apr 10 2010 at 19:50
- (I just fixed the most obvious errors, please change the definition of $B_x$ to what you meant.) – François G. Dorais♦ Apr 10 2010 at 19:53
- Thanks. Sorry about that. – David R. MacIver Apr 10 2010 at 20:07

Separability is a key technical property used to avoid measure-theoretic difficulties for processes with uncountable index sets. The general problem is that measures are only countably additive and $\sigma$-algebras are closed under only countably many primitive set operations. In a variety of scenarios, uncountable collections of measure-zero events can bite you; separability ensures you can use a countable sequence as a proxy for the entire process without losing probabilistic content. Here are two examples.

1. Weak convergence: the classical theory of weak convergence utilizes Borel-measurable maps. When dealing with some function-valued random elements, such as cadlag functions endowed with the supremum norm, Borel measurability fails to hold. See the motivation for Weak Convergence and Empirical Processes. The $J1$ topology is basically a hack which ensures the function space is separable and thereby avoids measurability issues. The parallel theory of weak convergence described in the book embraces non-measurability.

2. Existence of stochastic processes with nice properties: a key property of Brownian motion is continuity of the sample paths. Continuity, however, is a property involving uncountably many indices.
The existence of a continuous version of a process can be ensured with separable modifications. See this lecture and the one that follows.

Metrizability allows us to introduce concepts such as convergence in probability. Completeness (the Cauchy-convergence kind, not the null-subsets kind) makes it easier to conduct analysis.

- and for Polish spaces, it is relatively easy to check (Prohorov's theorem) whether a sequence of probability measures is weakly compact: tightness is enough. Moreover, weak convergence is metrizable. – Alekk Apr 10 2010 at 18:04

There have already been some good responses, but I think it's worth adding a very simple example showing what can go wrong if you don't use Polish spaces. Consider $\mathbb{R}$ under its usual topology, and let $X$ be a non-Lebesgue-measurable set, e.g. a Vitali set. Using the subspace topology on $X$, the diagonal $D\subseteq\mathbb{R}\times X$, $D=\{(x,x)\colon x\in X\}$, is Borel measurable. However, its projection onto $\mathbb{R}$ is $X$, which is not Lebesgue measurable. Problems like this are avoided by keeping to Polish spaces. A measurable function between Polish spaces always takes Borel sets to analytic sets, which are, at least, universally measurable. The space $X$ in this example is a separable metrizable space, whereas Polish spaces are separable completely metrizable spaces. So things can go badly wrong if just the completeness requirement is dropped.

Google "image measure catastrophe" with quotation marks.

- Ah! From Laurent Schwartz's Radon measures on arbitrary topological spaces and cylindrical measures - ams.org/mathscinet-getitem?mr=426084 – François G. Dorais♦ Apr 10 2010 at 20:19

It can also be useful to have the set of Borel probability measures on $X$ (with weak* convergence, a.k.a. convergence in law) be metrizable, for instance to be able to treat the convergences sequentially. For this you need the space $X$ to be separable and metrizable (see the Lévy-Prohorov metric).
- Below is a copy of an answer I gave here http://stats.stackexchange.com/questions/2932/metric-spaces-and-the-support-of-a-random-variable/20769#20769

Here are some technical conveniences of separable metric spaces:

(a) If $X$ and $X'$ take values in a separable metric space $(E,d)$ then the event $\{X=X'\}$ is measurable, and this allows us to define random variables in the elegant way: a random variable is the equivalence class of $X$ for the "almost surely equals" relation (note that the normed vector space $L^p$ is a set of equivalence classes)

(b) The distance $d(X,X')$ between two $E$-valued r.v.'s $X, X'$ is measurable; in passing, this allows us to define the space $L^0$ of random variables equipped with the topology of convergence in probability

(c) Simple r.v.'s (those taking only finitely many values) are dense in $L^0$

And some technical conveniences of complete separable (Polish) metric spaces:

(d) Existence of the conditional law of a Polish-valued r.v.

(e) Given a morphism between probability spaces, a Polish-valued r.v. on the first probability space always has a copy in the second one

-

As far as I remember, the projection of a measurable set may fail to be measurable, so something very natural may fail to be an event. Besides, constructing conditional probabilities as measures on sections becomes problematic. Perhaps there are more reasons, but these two are already good enough.

- The existence of such a set does not depend on separability. Take a non-measurable set $X$ and a nonempty null set $Y$, then $X \times Y$ is null and its projection $X$ is non-measurable. However, it is true that the projection of a Borel set is always Lebesgue measurable (but not always Borel). – François G. Dorais♦ Apr 10 2010 at 18:54

We know by Ulam's theorem that a Borel measure on a Polish space is necessarily tight.
If we just assume that the metric space is separable, we have that each Borel probability measure on $X$ is tight if and only if $X$ is universally measurable (that is, given a probability measure $\mu$ on the metric completion $\widehat X$, there are two measurable subsets $S_1$ and $S_2$ of $\widehat X$ such that $S_1\subset X\subset S_2$ and $\mu(S_1)=\mu(S_2)$). So a probability measure is not necessarily tight (take $S\subset [0,1]$ of inner Lebesgue measure $0$ and outer measure $1$); see Dudley's book Real Analysis and Probability.

Another issue is related to tightness. We know by Prokhorov's theorem that if $(X,d)$ is Polish and every subsequence of a sequence of Borel probability measures $(\mu_n)$ admits a further subsequence which converges in law, then $(\mu_n)$ is necessarily uniformly tight. This may fail if we remove the assumption of "Polishness". And that can be problematic when we want results such as "$\mu_n\to \mu$ in law if and only if there is uniform tightness and convergence of finite-dimensional laws."

-
http://mathhelpforum.com/discrete-math/118396-cardinality-set-complex-numbers.html
# Thread: 1. ## The cardinality of the set of complex numbers

I know $|\mathbb{C}|=|\mathbb{R}|$, but I am not exactly sure how to do a formal proof. I know that every element in $\mathbb{C}$ can be written as $a+bi, \, a,b\in \mathbb{R}$. So every element in $\mathbb{C}$ can be expressed as $(a,b) \in \mathbb{R \times R}$. Now, what would be a bijective function that maps $\mathbb{R \times R} \rightarrow \mathbb{R}$?

2. Originally Posted by Pinkk: I know $|\mathbb{C}|=|\mathbb{R}|$, but I am not exactly sure how to do a formal proof. [...]

What have you tried? Try showing that $\mathbb{R}^2\simeq S^1$ and $S^1\simeq (-1,1)$. ( $\simeq$ denotes equipotent...same size)

3. Ok. Here, I will give you a checklist of things to prove.

1. $\mathbb{R}\times\mathbb{R}\simeq S^1=(-1,1)\times(-1,1)$

Spoiler: Draw a geometric picture and show that $S^1$ "bent upward" into a hemisphere has the property that any line projected through a point $P$ will hit a unique point $P'\in\mathbb{R}^2$. This is an injection; apply the Schroder-Bernstein theorem.

2. $S^1\simeq (-1,1)\times (0,1)$

Spoiler: You tell me!

3. $(-1,1)\times(0,1)\simeq(-1,1)$

Spoiler: Note that this is merely the semi-circle "over" the interval $(0,1)$; consider the projection $\pi:(-1,1)\times(0,1)\to(-1,1)$ given by $\pi(x,y)=x$. Show that this is bijective.

Note: There are other ways to do this. This is just, in my mind, the most canonical progression. P.S. Draw a picture!

4. I am confused by the terminology. $S^1$ is a circle, i.e., a one-dimensional curve, without the interior points. Now, how can it be easily bent upward into a hemisphere, which is a two-dimensional surface?
In my opinion, relating a two-dimensional set, like $\mathbb{R}^2$, with a one-dimensional set, like $\mathbb{R}$, is the main difficulty of this problem. Also, I am confused by the notation $S^1=(-1,1)\times(-1,1)$. Doesn't $\times$ mean Cartesian product? Again, relating the two-dimensional set $(-1,1)\times(-1,1)$ to the one-dimensional $S^1$ is nontrivial. I assume that by $S^1$ you do mean a one-dimensional curve, because you say that a projection from half of $S^1$ onto $(-1,1)$ is a bijection.

5. I just started analysis, so I have not learned any of what you mentioned. But I had the idea of showing there exists an $f : (0,1) \times (0,1) \rightarrow (0,1)$

$f((0.a_{1}a_{2}a_{3}...,0.b_{1}b_{2}b_{3}...)) = 0.a_{1}b_{1}a_{2}b_{2}a_{3}b_{3}...$

If I restrict the $a_{n}, b_{n}$ to be digits from 1 to 8, then this is bijective, correct?

6. I think you are right, except that numbers whose decimal expansions use only digits 1 through 8 do not cover the whole interval (0, 1). For example, I don't see how to represent 0.9 this way.

We could first prove that the continuum $\mathfrak{c}$, which is the cardinality of the set of all real numbers, is the same as $2^{\aleph_0}$. First, it is easy to show that the set of all real numbers is equipotent with (0,1), following Drexel's suggestion. Namely, using a coordinate system on a plane, draw a lower semi-circle with radius 1 and center at (0,1). Let's call it $S$. Then any ray (half of a line) emitted from (0,1) and crossing $S$ also crosses the horizontal axis, thus relating the two intersection points. ( $S$ should not include the endpoints (-1,1) and (1,1).) Then the projection from $S$ to the horizontal axis is a bijection between $S$ and (-1,1). Finally, $f(x)=(x+1)/2$ is a bijection between (-1,1) and (0,1).

Next, (0,1) is equipotent with $2^{\aleph_0}$, the set of all infinite sequences of 0s and 1s. One proof is in Wikipedia, but I think there is a small mistake in the second part, which shows that $2^{\aleph_0}$ can be injected into $\mathbb{R}$.
I think the right way is to take a sequence, change all 1's into 2's, and interpret the result as a ternary expansion of a real number in [0,1]. This mapping is injective. Indeed, if a real number from [0,1] has more than one ternary expansion, one of those expansions from some point stabilizes as $11111\dots$ or $22222\dots$. But since we excluded 1's from our sequences, no ambiguity is left.

Now that we don't have to pay attention to these issues anymore, it is easy to join two sequences $a_1a_2a_3\dots$ and $b_1b_2b_3\dots$ by interleaving them: $a_1b_1a_2b_2a_3b_3\dots$, as suggested in the previous post. This is a bijection between $2^{\aleph_0}\times 2^{\aleph_0}$ and $2^{\aleph_0}$.

Another way to prove the equipotency of a square and a line segment is by using space-filling curves, such as the Peano or Hilbert curves. Wikipedia and the excellent site Cut The Knot! have articles about them. The Wikipedia article even has a section with a "Proof that a square and its side contain the same number of points", while Cut-The-Knot has the construction of the curve and a proof that it indeed fills the square, both of which require only a little calculus.

7. Hmm, I've been thinking about another approach; maybe this will work. I can find an injective function $f: \mathbb{R} \rightarrow \mathbb{R \times R}$, so $|\mathbb{R}| \le |\mathbb{R \times R}|$. Now, suppose I use the argument I made previously: $f((0.a_{1}a_{2}a_{3}...,0.b_{1}b_{2}b_{3}...)) = 0.a_{1}b_{1}a_{2}b_{2}a_{3}b_{3}...$ where I give each number with two decimal expansions its terminating one (the digits are 0 from some position on), so every number has a unique representation; then that function is clearly injective, correct? If it is, then $|(0,1) \times (0,1)| \le |(0,1)|$, which means $|\mathbb{R \times R}| \le |\mathbb{R}|$. Since $|\mathbb{R \times R}| \le |\mathbb{R}|$ and $|\mathbb{R}| \le |\mathbb{R \times R}|$, $|\mathbb{R \times R}| = |\mathbb{R}|$.

8.
Yes, I don't see anything wrong with this either.

9. Whew, thanks! I somewhat understand the suggestions you are making, but I don't think my professor expects us to use that sort of knowledge.

10. Originally Posted by emakarov [the terminology concerns from post 4 above]

Wow! I must have been tired last night. Let me rectify this. Firstly, I completely forgot that $S^1$ was the topological standard for the unit circle. In fact, as can be seen from the representation $S^1=(-1,1)\times(-1,1)$, what I was referencing here is something similar to the unit square. I came up with an easier way though; it looks almost similar to Pinkk's.

Problem: Prove that $\mathbb{R}\times\mathbb{R}\simeq \mathbb{R}$.

Proof: This will be proven if the following (very long) chain of equipotences can be shown: $\mathbb{R}\times\mathbb{R}\simeq\mathcal{U}=[0,1)\times[0,1)\simeq[0,1)\times\{0\}\simeq(0,1)\simeq\mathbb{R}$. Luckily, most of these are well known. So let us begin.

Step 1 - Claim: $\mathbb{R}\times\mathbb{R}\simeq\mathcal{U}$

Proof: Let $D$ be an open disc contained strictly within $\mathcal{U}$, with coinciding centers. Bend this disc up to create an open hemispherical surface at the center of $\mathcal{U}$. Next, construct a vertical line $\ell$ which emanates from the center of the surface and is perpendicular to $\mathbb{R}^2$.
One can see (why, Pinkk?) that projecting from various positions on this line creates an injection between $D$ and $\mathbb{R}\times\mathbb{R}$. Applying the Schroder-Bernstein theorem, we can then see that $\mathbb{R}\times\mathbb{R}\simeq D$, from which the conclusion follows.

Step 2 - This is the hard part. In essence we are saying that the cardinality of this (almost) square is the same as that of just one of its edges.

Claim: $\mathcal{U}\simeq[0,1)\times\{0\}$.

Proof: Let $(x,y)\in\mathcal{U}$ be arbitrary. Both $x,y$ have unique decimal representations which do not end in an infinite chain of $9$'s (this is a convention). Now form a new number $z$ by interweaving the decimal expansions of $x,y$. In other words, $(x,y)=(.a_1a_2\cdots,.b_1b_2\cdots)\mapsto .a_1b_1a_2b_2\cdots$. We now note that this mapping is injective, for $.z_1z_2\cdots=z=z'=.z'_1z'_2\cdots$ implies that the digits in the odd positions agree, so the $x$-coordinates of the pairs the two came from are the same, and the digits in the even positions agree, so the $y$-coordinates are the same. Appealing once again to the Schroder-Bernstein theorem thus shows that $\mathcal{U}\simeq[0,1)\times\{0\}$

Step 3 - This is very simple.

Claim: $[0,1)\times\{0\}\simeq(0,1)$

Proof: The projection $\pi:[0,1)\times\{0\}\to[0,1)$ given by $\pi(x,y)=x$ is clearly bijective. Also, it is trivially true that $[0,1)\simeq(0,1)$, from which the conclusion follows.

Step 4 - This is very well known.

Claim: $\mathbb{R}\simeq(0,1)$

Proof: The mapping $f:(0,1)\to\mathbb{R}$ given by $x\mapsto\tan\left(\pi\left[x-\frac{1}{2}\right]\right)$ is a bijection (I leave this to you).

Now, having established this chain, the result follows.
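The interweaving map of Step 2 is concrete enough to demonstrate on truncated expansions. The sketch below (my own illustration; reals are replaced by finite digit strings, and the no-trailing-9s convention is assumed handled upstream) shows the map and its inverse on the image:

```python
def interleave(x_digits, y_digits):
    """Interleave two decimal-digit strings: the map from Step 2.

    x_digits and y_digits are the digits of x and y after the decimal
    point, with expansions chosen not to end in an infinite run of 9s.
    """
    return "".join(a + b for a, b in zip(x_digits, y_digits))

def split(z_digits):
    """Inverse on the image: x-digits sit in the odd (0-based even)
    positions, y-digits in the even (0-based odd) positions."""
    return z_digits[0::2], z_digits[1::2]

z = interleave("1415", "7182")   # x = 0.1415..., y = 0.7182...
print(z)                          # '17411852'
print(split(z))                   # ('1415', '7182')
```

Since `split(interleave(x, y)) == (x, y)` for equal-length digit strings, distinct pairs map to distinct outputs, which is exactly the injectivity used before applying Schroder-Bernstein.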
http://cms.math.ca/Reunions/ete12/abs/cgr.html
2012 CMS Summer Meeting Regina Inn and Ramada Hotels (Regina, Saskatchewan), June 2-4, 2012 www.smc.math.ca//Reunions/ete12 Complex Geometry and Related Fields Org: Tatyana Barron (Western) and Eric Schippers (Manitoba) [PDF] Poincaré series map on open Riemann surfaces  [PDF] Poincaré series are a classic technique for constructing automorphic forms. Let $R$ be a Riemann surface and let $k>1$ be an integer. The Poincaré series produces a bounded linear operator from $A^{(1)}(\Delta)$ (the space of holomorphic and integrable $k$-differentials on the unit disc) onto $A^{(1)}(R)$ (the space of holomorphic and integrable $k$-differentials on $R$). I will talk about some applications of Poincaré series on Riemann surfaces. I will also talk about the kernel of the Poincaré series map, especially about some results in this direction obtained with T. Barron. AJNEET DHILLON, Western University Vector bundles with parabolic structure and algebraic stacks  [PDF] I will discuss some theorems due to Indranil Biswas and Niels Borne and how they can be applied to study coherent sheaf cohomology of semistable parabolic vector bundles on algebraic curves. BRUCE GILLIGAN, University of Regina Holomorphic Reductions of Pseudoconvex Homogeneous Manifolds  [PDF] Let $G$ be a connected complex Lie group and $H$ a closed complex subgroup. There is a Lie-theoretic fibration $\pi : G/H \to G/J$ with $G/J$ holomorphically separable and ${\cal O}(G/H) \simeq \pi^* {\cal O}(G/J)$, called the holomorphic reduction of the complex homogeneous manifold $G/H$. In general, $G/J$ is not Stein, e.g., $\mathbb C^n \setminus \{ 0 \}$ for $n>1$, and examples show that one need not have ${\cal O}(J/H)\simeq\mathbb C$. We will prove that if $G/H$ is pseudoconvex and $G$ is reductive, then 1.) the base $G/J$ of its holomorphic reduction is Stein and ${\cal O}(J/H)\simeq\mathbb C$, and 2.)
if additionally $G/H$ is Kähler with ${\cal O}(G/H)\simeq\mathbb C$, then $G/\overline{H}$ is a flag manifold, $\overline{H}/H$ is a Cousin group, and $G/H = G/\overline{H} \times\overline{H}/H$ is a product, where $\overline{H}$ denotes the Zariski closure of $H$ in $G$. The proof employs ideas of Hirschowitz (1975) in order to show the existence of a certain foliation of non-Stein pseudoconvex domains spread over complex homogeneous manifolds. This generalizes results of Kim-Levenberg-Yamaguchi (2011). (Based on joint work with Christian Miebach and Karl Oeljeklaus.) GORDON HEIER, University of Houston On uniformly effective birationality and the Shafarevich Conjecture over curves  [PDF] We will discuss the following recent effective boundedness result for the Shafarevich Conjecture over function fields. Let $B$ be a smooth projective curve of genus $g$, and $S \subset B$ be a finite subset of cardinality $s$. There exists an effective upper bound on the number of deformation types of admissible families of canonically polarized manifolds of dimension $n$ with canonical volume $v$ over $B$ with prescribed degeneracy locus $S$. The effective bound only depends on the invariants $g, s, n$ and $v$. The key new ingredient which allows for this kind of result is a careful study of effective birationality for families of canonically polarized manifolds. This is joint work with S. Takayama. OLEG IVRII, Harvard University Ghosts of the Mapping Class Group  [PDF] Recently, McMullen showed that the Weil-Petersson metric in Teichmuller theory arises as the double derivative of the Hausdorff dimension of certain families of quasi-circles arising from simultaneous uniformization. He noticed that a similar construction can be carried out on spaces of Blaschke products; and so by analogy one can define a Weil-Petersson metric there. But what does this metric look like? Is it incomplete? Is it invariant under the mapping class group?
While it appears that there is no genuine mapping class group acting on the space of Blaschke products, there are ‘ghosts’ acting on two very different boundaries that arise from non-tangential and horocyclic degenerations. In this talk, we will describe these boundaries and illuminate these ghosts. ALEXEY KOKOTOV, Concordia University Polyhedral surfaces and determinant of Laplacian  [PDF] The zeta-regularized determinant of the Laplacian on a compact polyhedral surface (a closed orientable surface of genus $g$ glued from Euclidean triangles) is studied. We derive a formula for the ratio of two determinants corresponding to two conformally equivalent polyhedra (an analog of classical Polyakov's formula for two conformally equivalent smooth metrics). This formula implies the reciprocity law for polyhedra which is closely related to the classical Weil reciprocity law for harmonic functions with logarithmic singularities. DAVID MINDA, University of Cincinnati Hyperbolic geometry and conformal invariants.  [PDF] The goal is to use classical hyperbolic geometry to obtain results about the Euclidean size of the image of a set in a simply connected hyperbolic region under a conformal mapping onto the open unit disk. The idea is to use a conformal invariant to estimate the Euclidean size. In hyperbolic geometry a half-plane H subtends an angle 2t at a point z not in H. The angle decreases as the distance from z to H increases and the angle is a conformal invariant. The classical Angle of Parallelism formula is the main tool to estimate the Euclidean size. This is joint work with A.F. Beardon. ERIC SCHIPPERS, University of Manitoba A refined Teichmuller space of bordered surfaces  [PDF] Consider a Riemann surface biholomorphic to a compact Riemann surface of genus g with n discs removed. By classical results of Bers, the Teichmuller space of surfaces of this type is an open subset of a Banach space. 
In previous work David Radnell and I showed that the Teichmuller space of a bordered surface can be identified (up to a properly discontinuous group action) with a moduli space of Riemann surfaces which appears in conformal field theory, and originates with Friedan and Shenker, Vafa, and Segal. We define a refinement of the Teichmuller space of a bordered surface, and prove that this refinement is a Hilbert manifold. This is achieved by combining the above results with work of Takhtajan and Teo on a refinement of the universal Teichmuller space. Joint work with David Radnell (American University of Sharjah) and Wolfgang Staubach (Uppsala University). VASILISA SHRAMCHENKO, University of Sherbrooke Higher genus Weierstrass sigma-function  [PDF] We propose a new way to generalize the Weierstrass sigma-function to higher genus Riemann surfaces. Our definition of the odd higher genus sigma-function is based on a generalization of the classical representation of the elliptic sigma-function via Jacobi theta-function. The odd higher genus sigma-function is associated with an odd spin line bundle on a given Riemann surface. We also define an even sigma-function corresponding to an arbitrary even spin structure on the surface. The proposed generalization of the sigma-function differs essentially from the existing ones; our way of generalization applies to any Riemann surface and naturally continues the approach of Felix Klein who generalized the sigma-function to the class of hyperelliptic curves. KEN STEPHENSON, University of Tennessee Quasiconformal Mappings via Circle Packing: a Conjecture  [PDF] Suppose $K$ is a triangulation of a region $G$ in the plane. Associated with $K$ is a maximal packing $P$ in the unit disc $\mathbb D$, that is, a configuration of circles with the tangency pattern encoded in $K$. In particular, $P$ gives an embedding $K'$ of $K$ in $\mathbb D$. 
Intensive experiments suggest that when $K$ is an appropriately random triangulation of $G$, then the piecewise affine map $f: K'\to K$ approximates the conformal map from $\mathbb D$ to $G$. If this is the case, then by biasing the random triangulation $K$ using the ellipse field for a Beltrami coefficient $\mu$, one should be able to approximate the quasiconformal mapping from $\mathbb D$ to $G$ with dilatation $\mu$. Conjectured results will be illuminated by visual experiments.
http://tcsmath.wordpress.com/2010/02/
# tcs math – some mathematics of theoretical computer science ## February 21, 2010 ### The need for non-linear mappings Filed under: Math, Open question — Tags: Dimension reduction, embeddings of finite metric spaces, non-linear mappings — James Lee @ 1:43 pm In the last post, I recalled the problem of dimension reduction for finite subsets of $L_1$.  I thought I should mention the main obstacle to reducing the dimension below $O(n)$ for $n$-point subsets:  It can’t be done with linear mappings. All the general results mentioned in that post use a linear mapping.  In fact, they are all of the form: 1. Change of density, i.e. preprocess the points/subspace so that no point has too much weight on any one coordinate. 2. Choose a subset of the coordinates, possibly multiplying the chosen coordinates by non-negative weights.  (Note that the Newman-Rabinovich result, based on Batson, Spielman, and Srivastava, is deterministic, while in the other bounds, the sampling is random.) (The dimension reduction here is non-linear, but only applies to special subsets of $L_1$, like the Brinkman-Charikar point set.) The next theorem shows that linear dimension reduction mappings cannot do better than $O(n)$ dimensions. Theorem: For every $1 \leq p \leq \infty$, there are arbitrarily large $n$-point subsets of $L_p$ on which any linear embedding into $L_2$ incurs distortion at least $\left(\frac{n-1}{2}\right)^{|1/p-1/2|}.$ Since the identity map from $\ell_1^n$ to $\ell_2^n$ has distortion $\sqrt{n}$, this theorem immediately implies that there are $n$-point subsets on which any linear embedding requires $\Omega(n)$ dimension for an ${O(1)}$-distortion embedding.  The $p=1$ case of the preceding theorem was proved by Charikar and Sahai.  A simpler proof, which extends to all $p \geq 1$ is given in Lemma 3.1 of a paper by myself, Mendel, and Naor. 
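The $\sqrt{n}$ distortion of the identity map from $\ell_1^n$ to $\ell_2^n$ quoted above is easy to check numerically (a small illustrative script, not part of the post; the bound $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2$ is standard):

```python
import numpy as np

# The identity map l_1^n -> l_2^n has distortion sqrt(n):
# ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2, with both extremes attained.
n = 16
e1 = np.zeros(n); e1[0] = 1.0   # a basis vector attains ratio 1
ones = np.ones(n)               # the all-ones vector attains ratio sqrt(n)

ratio = lambda x: np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
assert np.isclose(ratio(e1), 1.0)
assert np.isclose(ratio(ones), np.sqrt(n))

# Random vectors stay between the two extremes:
rng = np.random.default_rng(0)
for x in rng.standard_normal((100, n)):
    assert 1.0 - 1e-12 <= ratio(x) <= np.sqrt(n) + 1e-12
```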
## February 19, 2010 ### Open problem: Dimension reduction in L_1 Filed under: Math, Open question — Tags: Dimension reduction, embeddings of finite metric spaces, Sparsification — James Lee @ 10:26 pm Since I’ve been interacting a lot with the theory group at MSR Redmond (see the UW-MSR Theory Center), I’ve been asked occasionally to propose problems in the geometry of finite metric spaces that might be amenable to probabilistic tools. Here’s a fundamental problem that’s wide open. Let ${c_D(n)}$ be the smallest number such that every ${n}$-point subset of ${L_1}$ embeds into ${\ell_1^{c_D(n)}}$ with distortion at most ${D}$. Here’s what’s known. 1. Talagrand (following work of Bourgain-Lindenstrauss-Milman and Schechtman) proved that for every ${\varepsilon > 0}$, every ${n}$-dimensional subspace of ${L_1}$ admits a ${(1+\varepsilon)}$-distortion embedding into ${\ell_1^{d}}$ with ${d = O((n \log n)/\varepsilon^2)}$. In particular, this gives $\displaystyle c_{1+\varepsilon}(n) = O((n \log n)/\varepsilon^2).$ 2. Brinkman and Charikar showed that ${c_D(n) \geq \Omega(n^{1/D^2})}$ for ${D \geq 1}$. A significantly simpler proof was later given by Assaf Naor and myself. (With Brinkman and Karagiozova, we have also shown that this bound is tight for the Brinkman-Charikar examples and their generalizations.) 3. Recently, Newman and Rabinovich showed that one can take ${c_{1+\varepsilon}(n) = O(n/\varepsilon^2)}$ for any ${\varepsilon > 0}$. Their paper relies heavily on the beautiful spectral sparsification method of Batson, Spielman, and Srivastava. In fact, it is shown that one can use only ${O(n/\varepsilon^2)}$ weighted cuts (see the paper for details). This also hints at a limitation of their technique, since it is easy to see that the metric on ${\{1,2,\ldots,n\} \subseteq \mathbb R}$ requires ${\Omega(n)}$ cuts for a constant distortion embedding (and obviously only one dimension). The open problem is to get better bounds. 
For instance, we only know that $\displaystyle \Omega(n^{1/100}) \leq c_{10}(n) \leq O(n).$ There is evidence that $n^{\Theta(1/D^2)}$ might be the right order of magnitude.  In the large distortion regime, when $D = \Omega(\sqrt{\log n} \log \log n)$, results of Arora, myself, and Naor show that $c_D(n) = O(\log n)$. ## February 15, 2010 ### Hypercontractivity and its Applications Filed under: Math — Tags: Fourier analysis of boolean functions, hardness of approximation, Hypercontractivity, log-Sobolev inequalities — James Lee @ 12:14 am My student Punya Biswal just completed this great survey on hypercontractivity and its applications in computer science. There is a PDF version from his home page, and accompanying slides. Hypercontractive inequalities are a useful tool in dealing with extremal questions in the geometry of high-dimensional discrete and continuous spaces. In this survey we trace a few connections between different manifestations of hypercontractivity, and also present some relatively recent applications of these techniques in computer science. ### 1. Preliminaries and notation Fourier analysis on the hypercube. We define the inner product ${\langle f,g \rangle = \mathop{\mathbb E}_{x}f(x)g(x)}$ on functions ${f,g \colon \{-1,1\}^{n} \rightarrow {\mathbb R}}$, where the expectation is taken over the uniform probability measure on ${\{-1,1\}^n}$. The multilinear polynomials ${\chi_{S}(x)=\prod_{i\in S}x_{i}}$ (where ${S}$ ranges over subsets of ${[n]}$) form an orthogonal basis under this inner product; they are called the Fourier basis. Thus, for any function ${f \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$, we have ${f = \sum_{S\subseteq[n]}\hat{f}(S)\chi_{S}(x)}$, where the Fourier coefficients ${\hat{f}(S)=\langle f,\chi_{S}\rangle}$ obey Plancherel's relation ${\sum_{S}\hat{f}(S)^{2}=\|f\|_2^2}$. It is easy to verify that ${\mathop{\mathbb E}_{x}f(x)=\hat{f}(0)}$ and ${\textsf{Var}_{x}f(x)=\sum_{S\neq\emptyset}\hat{f}(S)^{2}}$. Norms. 
For ${1\leq p<\infty}$, define the ${\ell_{p}}$ norm ${\|f\|_{p}=(\mathop{\mathbb E}_{x}|f(x)|^{p})^{1/p}}$. These norms are monotone in ${p}$: for every function ${f}$, ${p\geq q}$ implies ${\|f\|_{p}\geq\|f\|_{q}}$. For a linear operator ${M}$ carrying functions ${f \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$ to functions ${Mf=g \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$, we define the ${p}$-to-${q}$ operator norm ${\|M\|_{p\rightarrow q}=\sup_{f}\|Mf\|_{q}/\|f\|_{p}}$. ${M}$ is said to be a contraction from ${\ell_{p}}$ to ${\ell_{q}}$ when ${\|M\|_{p\rightarrow q}\leq1}$. Because of the monotonicity of norms, a contraction from ${\ell_{p}}$ to ${\ell_{p}}$ is automatically a contraction from ${\ell_{p}}$ to ${\ell_{q}}$ for any ${q<p}$. When ${q>p}$ and ${\|M\|_{p\rightarrow q}\leq1}$, then ${M}$ is said to be hypercontractive. Convolution operators. Letting ${xy}$ represent the coordinatewise product of ${x, y \in \{-1,1\}^n}$, we define the convolution ${(f*g)(x)=\mathop{\mathbb E}_{y}f(y)g(xy)}$ of two functions ${f,g \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$, and note that it is a linear operator ${f\mapsto f*g}$ for every fixed ${g}$. Convolution is commutative and associative, and the Fourier coefficients of a convolution satisfy the useful property ${\widehat{f*g}=\hat{f}\hat{g}}$. We shall be particularly interested in the convolution properties of the following functions • The Dirac delta ${\delta \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$, given by ${\delta(1,\dotsc,1)=1}$ and ${\delta(x)=0}$ otherwise. It is the identity for convolution and has ${\hat{\delta}(S)=1}$ for all ${S\subseteq[n]}$. • The edge functions ${h_{i} \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$ given by $\displaystyle h_{i}(x)= \begin{cases} \phantom{-}1/2 & x=(1,\dotsc,1)\\ -1/2 & x_{i}=-1,x_{[n]\setminus\{i\}}=(1,\dotsc,1)\\ \phantom{-}0 & \text{otherwise.} \end{cases}$ ${\hat{h}_{i}(S)}$ is ${1}$ or ${0}$ according as ${S}$ contains or does not contain ${i}$, respectively. 
For any function ${f \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$, ${(f*h_{i})(x)=(f(x)-f(y))/2}$, where ${y}$ is obtained from ${x}$ by flipping just the ${i}$th bit. Convolution with ${h_{i}}$ acts as an orthogonal projection (as we can easily see in the Fourier domain), so for any functions ${f,g \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$, we have ${\langle f*h_{i},g\rangle=\langle f,h_{i}*g\rangle=\langle f*h_{i},g*h_{i}\rangle}$. • The Bonami-Gross-Beckner noise functions ${\textsf{BG}_{\rho} \colon \{-1,1\}^{n}\rightarrow{\mathbb R}}$ for ${0\leq\rho\leq1}$, where ${\widehat{\textsf{BG}}_{\rho}(S)=\rho^{|S|}}$ and we define ${0^{0}=1}$. These operators form a semigroup, because ${\textsf{BG}_{\sigma}*\textsf{BG}_{\rho}=\textsf{BG}_{\sigma\rho}}$ and ${\textsf{BG}_{1}=\delta}$. Note that ${\textsf{BG}_{\rho}(x)=\sum_{S}\rho^{|S|}\chi_{S}(x)=\prod_{i}(1+\rho x_{i})}$. We define the noise operator ${T_{\rho}}$ acting on functions on the discrete cube by ${T_{\rho}f=\textsf{BG}_{\rho}*f}$. In combinatorial terms, ${(T_{\rho}f)(x)}$ is the expected value of ${f(y)}$, where ${y}$ is obtained from ${x}$ by independently flipping each bit of ${x}$ with probability ${(1-\rho)/2}$. Lemma 1 ${\frac{d}{d\rho}\textsf{BG}_{\rho}=\frac{1}{\rho}\textsf{BG}_{\rho}*\sum h_{i}}$ Proof: This is easy in the Fourier basis: $\displaystyle \widehat{\textsf{BG}}_{\rho}' = (\rho^{|S|})' = |S|\rho^{|S|-1} = \sum_{i\in[n]}\hat{h}_{i}\frac{\widehat{\textsf{BG}}_{\rho}}{\rho}.$ $\Box$
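These definitions are small enough to verify by brute force. The sketch below (illustrative only; the test function `f` and the point `x0` are arbitrary choices, not from the survey) checks Plancherel's relation and the Fourier formula for the noise operator on $\{-1,1\}^3$, using flip probability $(1-\rho)/2$ per bit:

```python
import itertools, math

n = 3
cube = list(itertools.product([-1, 1], repeat=n))

def chi(S, x):                 # character chi_S(x) = prod_{i in S} x_i
    return math.prod(x[i] for i in S)

def fourier(f):                # hat f(S) = E_x f(x) chi_S(x)
    subsets = [S for r in range(n + 1)
               for S in itertools.combinations(range(n), r)]
    return {S: sum(f(x) * chi(S, x) for x in cube) / len(cube)
            for S in subsets}

f = lambda x: max(x)           # arbitrary {-1,1}-valued function on the cube
fhat = fourier(f)

# Plancherel: sum_S hat f(S)^2 = E_x f(x)^2  (= 1 here, since f is +-1-valued)
assert math.isclose(sum(c * c for c in fhat.values()),
                    sum(f(x) ** 2 for x in cube) / len(cube))

# Noise operator: (T_rho f)(x) = E[f(y)], each bit flipped w.p. (1-rho)/2,
# matches the spectral formula sum_S rho^{|S|} hat f(S) chi_S(x).
rho = 0.5
def T(f, rho, x):
    total = 0.0
    for flips in itertools.product([0, 1], repeat=n):
        p = math.prod(((1 + rho) / 2, (1 - rho) / 2)[b] for b in flips)
        y = tuple(-xi if b else xi for xi, b in zip(x, flips))
        total += p * f(y)
    return total

x0 = (1, -1, 1)
spectral = sum(rho ** len(S) * c * chi(S, x0) for S, c in fhat.items())
assert math.isclose(T(f, rho, x0), spectral)
```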
http://mathoverflow.net/questions/26256?sort=newest
## Gauss-Bonnet Theorem for Graphs? One can define the Euler characteristic χ for a graph as the number of vertices minus the number of edges. Thus an n-cycle has χ = 0 and K4 has χ = –2. Is there an analog of the Gauss-Bonnet theorem for graphs, something akin to: [total turn angle] + [enclosed curvature] = τ + ω = 2 π χ ? Certainly if one embeds the graph on a manifold, then an interpretation is possible via Gauss-Bonnet on the manifold. But is there a more purely combinatorial interpretation? Addendum. (27Nov11). A new paper on this topic just appeared on the arXiv: Oliver Knill (who answered below back in March), "A graph theoretical Gauss-Bonnet-Chern Theorem." arXiv:1111.5395v1. - When you say "embeds the graph in a manifold", do you mean "embeds the graph in a compact surface so that its complement is a disjoint union of disks"? I am having trouble seeing how this would work in a more general situation. – S. Carnahan♦ May 28 2010 at 14:49 Yes, that is what I meant, your precision is much preferable to my off-hand way of expressing it. – Joseph O'Rourke May 28 2010 at 15:09 @Joseph: Are you explicitly interested in general graphs - including trees or the graph shown above - or is it OK to consider specific families of graphs, e.g. polyhedral graphs? (I believe that it is possible to have a purely combinatorial interpretation of Gauss-Bonnet for polyhedral graphs, independent of a specific embedding.) – Hans Stricker Jan 22 at 15:37 ## 3 Answers One can do the following: given a graph with $n$ vertices and $m$ edges, define the scalar curvature of a vertex $x$ of valency $v(x)$ by $S(x)=2-v(x)$. 
Isolated and pendant vertices have positive scalar curvature, $S$ vanishes precisely on degree two vertices (which are those one wants to call flat), and is negative for higher degrees, reminiscent of trees being $\mathrm{CAT}(-\infty)$. Now $\sum_x S(x)=2n-\sum_x v(x)=2\chi$ is a Gauss-Bonnet formula. It is very simple, but one does not expect much more from such local considerations. I guess that to get a more subtle formula, one can try to add some geometric structure to the graph (for example, a length on each edge and an angle for each pair of adjacent edges, and maybe a circular ordering of the edges incident to each vertex). - Very nice! Thanks! Presumably, then, there is a version for, say, a simple cycle in the graph enclosing a certain amount of "scalar curvature." – Joseph O'Rourke May 28 2010 at 14:51 I like this answer! Intuitively a negative curvature point is one with 'more space than one might expect around it' and a positive curvature point is one with less. Ties up quite nicely I think... – Tom Boardman May 28 2010 at 15:55 There are different types of curvatures for graphs. In two dimensions it is not the degree of the point which matters but the length of the circles at the point, like in differential geometry. This is different from the degree if graphs with boundary are considered. The simplest curvature for two dimensional graphs (I defined the dimension for graphs in http://arxiv.org/abs/1009.2292) is K(x) = 6-|S(x)|, where |S(x)| is the length of the circle at x; any two dimensional graph G has total curvature 6 chi(G). More interesting and subtle is a second order curvature 2|S1(x)|-|S2(x)| which is differential geometrically motivated because curvature is a second order difference notion. 
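The formula $\sum_x S(x)=2\chi$ from this answer can be checked mechanically on small graphs (an illustrative sketch; `total_curvature` is a made-up helper name):

```python
# Verify the Gauss-Bonnet-style formula sum_x S(x) = 2*chi for small graphs,
# where S(x) = 2 - deg(x) and chi = |V| - |E|.
def total_curvature(edges, n):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(2 - d for d in deg)

# 5-cycle: chi = 0, every vertex is flat.
cycle = [(i, (i + 1) % 5) for i in range(5)]
assert total_curvature(cycle, 5) == 0

# K4: chi = 4 - 6 = -2, so total curvature is -4.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
assert total_curvature(k4, 4) == 2 * (4 - 6)

# A tree on 7 vertices: chi = 1, total curvature 2.
tree = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
assert total_curvature(tree, 7) == 2
```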
Gauss-Bonnet is now not true in general and it seems that some smoothness condition needs to be satisfied. It is true for all two dimensional graphs of non-negative curvature and for graphs which are "smooth" enough in some sense which I still struggle to define. I myself got more interested in Gauss-Bonnet-Chern for n-dimensional graphs and have a result there. There is a natural Chern-Euler form on n-dimensional graphs whose total sum is the Euler characteristic for an n-dimensional graph. This form is defined also in odd dimensions but seems to be zero (it is in 3 dimensions but I still struggle to verify this in higher dimensions). Note that all this is purely graph theoretical. No additional structure on the graph is necessary. No embedding in an ambient Euclidean space is necessary, for example. The inductive dimension as defined in the above mentioned paper (revised here: http://www.math.harvard.edu/~knill/graphgeometry/papers/1.pdf) is a natural way to define polyhedra and polytopes as graphs which become n-dimensional graphs after some truncation or stellation procedure. The inductive dimension for graphs looks similar to the inductive dimension usually considered in topology but is utterly different. With the usual inductive dimension, any graph would be one dimensional. - There is indeed a version of Gauss-Bonnet for graphs $G$ embedded on a 2-manifold. Here, the combinatorial curvature at a vertex $x$ of $G$ is $1-\frac{\deg(x)}{2} + \sum_{f \sim x} \frac{1}{size(f)}$, where $f \sim x$ means that the face $f$ is incident to the vertex $x$. This paper by Chen and Chen then gives a Gauss-Bonnet formula for embedded infinite graphs with a finite number of accumulation points. -
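The combinatorial curvature in the last answer can likewise be checked on one concrete embedded graph, the cube graph on the sphere (an example of my own choosing: every vertex has degree 3 and meets three square faces, and the total curvature should come out to χ(S²) = 2):

```python
from fractions import Fraction

# Combinatorial curvature at a vertex of an embedded graph:
# K(x) = 1 - deg(x)/2 + sum over incident faces f of 1/size(f).
# Cube graph on the sphere: deg(x) = 3, three square faces at each vertex.
K = Fraction(1) - Fraction(3, 2) + 3 * Fraction(1, 4)
assert K == Fraction(1, 4)

# Summing over the 8 vertices recovers the Euler characteristic of S^2:
assert 8 * K == 2
```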
http://mathoverflow.net/questions/27220/how-to-solve-a-transcendental-equation
## How to solve a transcendental equation? What is the best way to solve an equation of the form $x ^ x = 5$ by hand? I know about approximations using Newton's method, etc., using a computer, but what I want to know is the best method to converge to a solution using just a pen and paper. - 4 Wouldn't this entail computing an approximate value for the Lambert W-function? – Rob Grey Jun 6 2010 at 3:27 2 I retagged, as the "associative algebra" tag was confusing. Take the logarithm and use a pen and paper (as you like) to find the root of $\ln x=5/x$ graphically. Otherwise, follow the above comment. – Wadim Zudilin Jun 6 2010 at 3:33 2 With pen and paper I'd use log tables and Newton's method. – Dan Piponi Jun 6 2010 at 4:12 1 Sometimes the dumbest method has virtues... Just trying x^x on my calculator and using linear interpolation in my head I converged on 2.129372... really fast. It makes sense that if one used log tables and did the same thing with x log x = log 5 the same thing would happen, assuming you had enough digits of accuracy. – Michael Greenblatt Jun 6 2010 at 16:30 2 db, the examples in the wiki article, en.wikipedia.org/wiki/Lambert_function may be of interest for you – Pietro Majer Jun 6 2010 at 20:29
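The iteration sketched in the comments (take logarithms, then apply Newton's method to $g(x) = x\ln x - \ln 5$) can be checked in a few lines; this is just one of the suggested approaches, not the only one:

```python
import math

# Newton's method on g(x) = x*ln(x) - ln(5), with g'(x) = ln(x) + 1.
# This solves x^x = 5 after taking logarithms of both sides.
x = 2.0
for _ in range(20):
    x -= (x * math.log(x) - math.log(5)) / (math.log(x) + 1)

assert abs(x ** x - 5) < 1e-9
assert abs(x - 2.129372) < 1e-5   # matches the value found by hand above
```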
http://mathhelpforum.com/calculus/161783-sketch-region-integration-change-order-integration.html
# Thread: 1. ## sketch the region of integration and change the order of integration. Here's the problem (the attached image is omitted; from the replies below, the integral is $\displaystyle\int_{0}^{1}\int_{4x}^{4}h(x,y)\,dy\,dx$): Having a hard time understanding what this question is asking. When they say change the order of integration, it seems like I am supposed to integrate dx with the 4x to 4 bound and integrate dy with the 0 to 1 bound... but I'm pretty sure I'm wrong. Any help is much appreciated. 2. What does the region look like? When changing the order of integration, the limits will likely change as well. That is, in general you do NOT have this: $\displaystyle\int_{a}^{b}\int_{f(x)}^{g(x)}h(x,y)\,dy\,dx=\int_{f(x)}^{g(x)}\int_{a}^{b}h(x,y)\,dx\,dy.$ You have to sketch out the region. In the first integral, which is like yours, you view the inner integral as happening first. In addition, you think of the region first as being bounded below and above by two different functions of x. Then, the outer integral gives you a number by integrating from one number to another. After you've switched the order of integration, you have something that looks more like this: $\displaystyle\int_{c}^{d}\int_{j(y)}^{k(y)}h(x,y)\,dx\,dy,$ and now you must think of the inner integral's limits as enclosing the left function up to the right function. Then you get a y-interval in the outer integral to close it off. I don't feel like I'm explaining this very clearly. Does this make sense? 3. As Ackbeet suggests, as does the title of your post, first sketch the region. $0\le x\le 1$ so draw two vertical lines at x= 0 and x= 1. The upper bound in the y-integral is 4 so draw the line y= 4. The lower bound is 4x so draw the line y= 4x. You should see a triangle with vertices at (0, 0), (0, 4) and (1, 4). The region over which you want to integrate is the inside of that triangle. Reversing the order of integration, you will be integrating with respect to y last so its limits of integration must be constants. You can see from your sketch that the lowest value of y is 0 and the highest is 4. 
Those must be the limits of integration for y. Now, I recommend drawing a horizontal line, representing a given value of y, inside the triangle. You can see that the left end is on the vertical line x= 0 while the right end is on y= 4x or x= y/4. The limits of integration on the inside, x, integral are x= 0 to x= y/4.
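As a concrete sanity check, both orders of integration over this triangle give the same number. The sketch below uses h(x, y) = x·y (an arbitrary choice of integrand, since the original image is not available) and midpoint Riemann sums:

```python
# Check that both orders of integration agree over the triangle
# 0 <= x <= 1, 4x <= y <= 4, using h(x, y) = x*y and midpoint Riemann sums.
N = 400
h = lambda x, y: x * y

def dy_dx():   # inner: y from 4x to 4; outer: x from 0 to 1
    total = 0.0
    for i in range(N):
        x = (i + 0.5) / N
        for j in range(N):
            y = 4 * x + (4 - 4 * x) * (j + 0.5) / N
            total += h(x, y) * (1 / N) * ((4 - 4 * x) / N)
    return total

def dx_dy():   # inner: x from 0 to y/4; outer: y from 0 to 4
    total = 0.0
    for j in range(N):
        y = 4 * (j + 0.5) / N
        for i in range(N):
            x = (y / 4) * (i + 0.5) / N
            total += h(x, y) * ((y / 4) / N) * (4 / N)
    return total

# Exact value: int_0^4 int_0^{y/4} x*y dx dy = int_0^4 y^3/32 dy = 2.
assert abs(dy_dx() - 2.0) < 1e-2
assert abs(dx_dy() - 2.0) < 1e-2
```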
http://math.stackexchange.com/questions/158832/an-upper-bound-for-lambda-a-1
An upper bound for $\|(\lambda-A)^{-1}\|$? Let $A$ be a k-by-k matrix and $\sigma(A)$ its spectrum, or the collection of eigenvalues of $A$. If we know $\lambda\notin\sigma(A)$, then $\lambda$ is at a positive distance to all points in the spectrum since the latter is compact. I wonder whether there is a bound for the norm of the inverse of $\lambda I-A$, maybe in terms of the distance from $\lambda$ to the spectrum. You can use all kinds of norms on $\|(\lambda I-A)^{-1}\|$. Thanks! - What do you want such a bound for? Have you tried the spectral radius formula? – Qiaochu Yuan Jun 15 '12 at 20:17 Is there any interest in cases where $\lambda$ is "near" some point in $\sigma (A)$? – Tim Duff Jun 15 '12 at 20:35 @QiaochuYuan I am considering the spectrum of T, a direct sum of countably many matrices, A_n, of bounded sizes. If $\lambda$ is not in the spectra of any of the matrices, then each $\lambda-A_n$ would be invertible, but it remains to prove the norms of their inverses are uniformly bounded. – Hui Yu Jun 15 '12 at 20:42 @Hui: there's no reason to believe that the norms of their inverses are uniformly bounded. Take a direct sum of $1 \times 1$ matrices with entries $1 - \frac{1}{n}, n \in \mathbb{N}$ and $\lambda = 1$. – Qiaochu Yuan Jun 15 '12 at 20:45 1 @Norbert Well, I checked that paper. The bad thing is that it gives only the estimates for self-adjoint operators and operators differing from self-adjoint ones 'not much', and thus is quite restrictive. I guess there would be some more general result when all operators are assumed to be k-by-k matrices. – Hui Yu Jun 16 '12 at 13:28 1 Answer Here is a crude upper bound. It might not be good enough for what you are after, but without further constraints on your matrices, it is hard to see how one can do substantially better. Throughout, I am using the operator norm. $\newcommand{\norm}[1]{\Vert#1\Vert} \newcommand{\Cplx}{{\bf C}}\newcommand{\lm}{\lambda}$ Fix $k$. 
Let $A$ be a $k\times k$ matrix with complex entries. Schur's theorem from linear algebra tells us that there is an upper triangular matrix $B$ and a unitary matrix $U$ such that $A=U^*BU$. Let $d_1,\dots, d_k$ be the diagonal entries of $B$ (these form the spectrum of $B$, and hence of $A$). For $\lm\in\Cplx\setminus\{d_1,\dots,d_k\}$, let $D_\lambda$ be the diagonal matrix whose entries are $\lm-d_1, \dots, \lm-d_k$, and put $$C_\lm = D_\lm^{-1} (\lm I - B).$$ Then $C_\lambda$ is an upper triangular matrix with each diagonal entry equal to $1$, so $C_\lm - I$ is strictly upper triangular and $(C_\lm - I)^k=0$. Thus $C_\lm$ is $I$ plus a nilpotent matrix, and so is invertible. In fact, just by the usual formula for $(1+x)^{-1}$, we have $$C_\lm^{-1} = \sum_{j=0}^{k-1} (-1)^j(C_\lm-I)^j$$ and thus $$(\lm I- B)^{-1} =(D_\lm C_\lm)^{-1} = \sum_{j=0}^{k-1} (-1)^j(C_\lm-I)^j D_\lm^{-1}.$$ Now we just use the triangle inequality and submultiplicativity of the norm. Let $d(\lm) = {\rm dist}(\lm, \sigma(B)) = \min_j \vert d_j-\lm\vert$. Then $\norm{D_\lm^{-1}} = d(\lm)^{-1}$, and $$\norm{ (\lm I- B)^{-1} } \leq \sum_{j=0}^{k-1} d(\lm)^{-j-1} \norm{\lm I - B}^j = d(\lm)^{-1} \frac{d(\lm)^{-k}\norm{\lm I - B}^k - 1}{d(\lm)^{-1}\norm{\lm I-B} -1 } .$$ Since $B$ is unitarily equivalent to $A$, the same inequality holds with $B$ replaced by $A$.

Thanks, Yemon. Actually your answer suffices for my problem. As you said in your comment, we need a bound on the norms of the matrices in the direct sum, but there already is one: since their direct sum is a bounded operator, the matrices have a uniform bound on their norms. We can then use this bound in your last inequality to bound the norms of their inverses. – Hui Yu Jun 18 '12 at 15:45
On the other hand, I believe your bound is actually sharp. Maybe we can find a matrix for which equality holds. – Hui Yu Jun 18 '12 at 15:46
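The final inequality lends itself to a quick numerical sanity check. The following sketch (my own addition, not part of the original answer; NumPy assumed) compares the spectral norm of the resolvent of a random complex matrix against the stated bound:

```python
import numpy as np

# Sanity check of the bound
#   ||(lam I - A)^{-1}||  <=  sum_{j=0}^{k-1} ||lam I - A||^j / d(lam)^{j+1}
# in the spectral norm, where d(lam) = dist(lam, sigma(A)).
rng = np.random.default_rng(0)
k = 5
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
lam = 3.0 + 2.0j

d = np.min(np.abs(np.linalg.eigvals(A) - lam))
assert d > 0                                  # lam is not in the spectrum

M = lam * np.eye(k) - A
lhs = np.linalg.norm(np.linalg.inv(M), 2)     # operator 2-norm of the resolvent
rhs = sum(np.linalg.norm(M, 2)**j / d**(j + 1) for j in range(k))
assert lhs <= rhs
```

For $\lambda$ well outside the spectrum the bound is very loose (the ratio $\norm{\lm I - A}/d(\lm)$ enters to the $(k-1)$-st power), which is consistent with the answer's description of it as crude.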
http://mathoverflow.net/questions/71514/hausdorff-dimension-of-inverse-images/71520
## Hausdorff dimension of inverse images

Let $f: \mathbb{R}^d \to \mathbb{R}$ be a continuous function. Let $t \in (\inf(f), \sup(f))$ and define $C = f^{-1} (t)$. Is it true that the Hausdorff dimension of $C$ is $\geq d -1$? If not, how does one construct a counterexample?

I believe the following argument works for $d = 2$: $A = f^{-1}((-\infty, t))$ and $B= f^{-1}((t,\infty))$ are two disjoint nonempty open sets whose union is $\mathbb{R}^2 \setminus C$. If the Hausdorff dimension of $C$ were $< 1$, then $C$ would be totally disconnected, so $\mathbb{R}^2 \setminus C$ would be connected; but $A$ and $B$ disconnect it, which is impossible.

I retagged with dimension-theory to help bring out the fact that this is about Hausdorff dimension to those who use tags as filters – David White Jul 28 2011 at 20:22

## 2 Answers

The boundaries of $A = f^{-1}((-\infty, t))$ and $B= f^{-1}((t,\infty))$ are contained in $C = f^{-1} (t)$. Therefore $C$ has Hausdorff dimension at least $d-1$, using this MO entry. I recommend Sergei Ivanov's response for a simple proof. Strictly speaking, the quoted MO entry would require that $A$ or $B$ is bounded, but read below.

Sergei Ivanov's argument can be adapted for a direct proof as follows. One can find balls $A'\subset A$ and $B'\subset B$ of equal radius. Consider the line $L$ connecting the centers of these balls, and the planes orthogonal to $L$ passing through the centers. These planes intersect $A'$ and $B'$ in two parallel disks of dimension $d-1$ and equal radius. Applying the intermediate value theorem to $f$ restricted to the lines parallel to $L$, one sees that the orthogonal projection of $C$ to either disk is surjective, hence $C$ has Hausdorff dimension at least $d-1$.
Indeed, more is true: (1) the topological dimension is $\ge d-1$, and (2) the Hausdorff dimension is $\ge$ the topological dimension. For (1), note that $C$ is a closed set that separates $\mathbb R^d$.
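The intermediate-value-theorem step in the projection argument above is easy to illustrate numerically. Below is a small sketch of my own (with a hypothetical choice of $f$ and $d=2$): along every line parallel to the $x$-axis, $f$ changes sign across the level $t$, so bisection locates a point of $C = f^{-1}(t)$ on that line, and the projection of $C$ onto the $y$-axis covers the whole sampled interval.

```python
# Illustration (my own, hypothetical f): f is continuous on R^2 and, along
# each line y = const, satisfies f(-R, y) < t < f(R, y), so the
# intermediate value theorem gives a point of C = f^{-1}(t) on that line.
def f(x, y):
    return x**3 + x - y   # any continuous function with this sign change works

t = 0.0

def root_on_line(y, lo=-10.0, hi=10.0, iters=80):
    assert f(lo, y) < t < f(hi, y)     # IVT hypothesis on this line
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, y) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ys = [k / 10 for k in range(-20, 21)]  # sample y in [-2, 2]
xs = [root_on_line(y) for y in ys]
# Every sampled y is hit by the projection of C onto the y-axis,
# consistent with dim_H(C) >= 1.
assert all(abs(f(x, y) - t) < 1e-9 for x, y in zip(xs, ys))
```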
http://mathoverflow.net/revisions/25883/list
Here is a counterexample: A torus embedded in $\mathbb{R}^3$ has a fundamental class in $H_2$, but there are no points with neighborhoods homotopy equivalent to $S^2$. It might be interesting to try to formulate a condition for the homology of pairs $(X,Y)$. Edit: I think you may want to change your conjecture by demanding that there exist neighborhoods of a certain type that represent a set of homology cycles that span $H_k$. Otherwise, there doesn't seem to be a relationship between the neighborhoods and the elements of homology. It is also not clear why convexity makes an appearance - you may want to demand the neighborhood be contractible in $\mathbb{R}^m$, though. With that in mind, here is a family of counterexamples that actually satisfy the conditions you specified: Embed the torus $(S^1)^n$ into some $\mathbb{R}^m$, for $n>2$ and sufficiently large $m$. Then for $k$ ranging from $2$ to $n-1$, the rational homology in degree $k$ has dimension $\binom{n}{k}$, but no cycles represented by spheres (or punctured Euclidean spaces). In general, I think you should change "punctured Euclidean space" to "compact connected orientable manifold". Such objects look like holes if they aren't boundaries of higher-dimensional manifolds (and if you squint enough).
http://math.stackexchange.com/questions/252644/evaluate-the-limit-without-lhospitals-rule
# Evaluate the limit without l'Hospital's rule

Let $$f(x)=\frac{x}{x}$$ be defined on $\mathbb R\setminus \{0\}$. Show that $$\lim\limits_{x\to 0}f(x) = 1$$ without using l'Hospital's rule.

You do not need any approach... Just tell: how much is $x$ divided by $x$? – Godot Dec 6 '12 at 22:57

## 3 Answers

If $x \neq 0$, then $|f(x) - 1| = 0$. Let $\epsilon > 0$. We need $\delta > 0$ so that $0<|x| < \delta\implies |f(x) - 1 |<\epsilon$. The value $\delta = 1$ works for any $\epsilon$.

when you plug in x/x for f(x) you get 0 and in the definition it states that $\epsilon > 0$ – Grigor Dec 6 '12 at 22:59
The definition of limit says $0 < |x - a| < \delta \implies |f(x) - L| < \epsilon$. Limits care about what happens around the point $a$ but are insensitive to what happens at $a$. – ncmathsadist Dec 6 '12 at 23:06

$$f(x)=1\qquad \forall x \neq 0$$ Thus $$f(1)=1$$ $$f(.001)=1$$ $$f(.00000000001)=1$$ etc. You can get as close as you want to $x=0$ (without $x$ ever becoming $0$), and $f(x)$ will always be $1$.

Does this answer truly deserve downvotes? If so, please elaborate. – Argon Dec 7 '12 at 0:27
I didn't downvote you, but you might want to add a bit more detail. It seems like the OP is confused about what limits mean, so you could try to explain that finding a limit means considering points very close to the limit point. And so since one sees that $f(0.001)$ .... the limit is .... – Thomas Dec 7 '12 at 4:40

Is this a real question? $x/x = 1$ because $x \in {\mathbb R} \setminus \{0\}$, so ...

yes, but how would you show that mathematically? – Grigor Dec 6 '12 at 22:56
@Grigor Mathematically, whenever $x\neq 0$ we have that $$\frac x x =1$$ Thus, on any punctured neighborhood of zero, $f(x)=1$ – Peter Tamaroff Dec 6 '12 at 22:58
Grigor, it's a definition, or better: it stems from definitions: for any real nonzero number $\,x\,$, it is true that $\,\frac{x}{x}=1\,$... we could get into equivalence classes and stuff, but I think the above should suffice.
– DonAntonio Dec 6 '12 at 22:58
This is a tautology, $x/x = 1$ is always true. – glebovg Dec 6 '12 at 23:02
Perhaps at such a level as this question, one should assume that the author needs to see a delta-epsilon proof. – robjohn♦ Dec 7 '12 at 2:15
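The point made in the answers above — that $f(x)=x/x$ is identically $1$ away from $0$, so any $\delta$ works in the $\epsilon$-$\delta$ definition — can be checked mechanically. A trivial sketch (my own, not from the thread):

```python
# f(x) = x/x is exactly 1 for every x != 0, so |f(x) - 1| = 0 < epsilon
# for any epsilon > 0; delta = 1 (indeed any delta) works in the
# epsilon-delta definition of the limit at 0.
def f(x):
    assert x != 0, "f is undefined at 0"
    return x / x

for x in [1.0, 0.001, 1e-11, -1e-11, 0.5]:
    assert abs(f(x) - 1) == 0   # any epsilon > 0 exceeds this
```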
http://physics.stackexchange.com/questions/tagged/operators
# Tagged Questions The operators tag has no wiki summary. 2answers 34 views ### Quantum Mechanical Operators in the argument of an exponential In Quantum Optics and Quantum Mechanics, the time evolution operator $$U(t,t_i) = \exp\left[\frac{-i}{\hbar}H(t-t_i)\right]$$ is used quite a lot. Suppose $t_i =0$ for simplicity, and say the ... 0answers 28 views ### Schrodinger equation in momentum space [duplicate] I have a problem this is: When I solve the Schrodinger equation in momentum space, I had done as this: \$\begin{array}{l} i\hbar \frac{{\partial \Psi }}{{\partial t}} = - \frac{\hbar ... 0answers 66 views ### Prove that the position operator is $\hat{x} = i\hbar \frac{d}{{dp}}$ in the momentum representation [closed] Proof that: $x = i\hbar \frac{d}{{dp}}$ I did this, could you tell me if I am false or true \$\begin{array}{l} x{e^{\frac{{ipx}}{\hbar }}} = - i\hbar \frac{{d{e^{\frac{{ipx}}{\hbar }}}}}{{dp}} = ... 1answer 65 views ### Proof $\left[ {\hat H,{{\hat p}_i}} \right] = - \frac{\hbar }{i}\frac{{\partial \hat H}}{{\partial {{\hat q}_i}}}$ [closed] I have a problem with the Hamiltonian, I don't think anything to solve it!! So could you give me some hints! Knowing that: \left[ {{{\hat p}_i},{{\hat q}_k}} \right] = \frac{\hbar }{i}{\delta ... 1answer 84 views ### Some Dirac notation explanations Equation for an expectation value $\langle x \rangle$ is known to me: \begin{align} \langle x \rangle = \int\limits_{-\infty}^{\infty} \overline{\psi}x\psi\, d x \end{align} By the definition we ... 2answers 108 views ### How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$? I am kind of new to this eigenvalue, eigenfunction and operator things, but I have come across this quote many times: $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$. ... 
1answer 39 views ### Statistical sum of physical quantities in a quantum system Let $C = A + B$ (statistical sum, so $\mathbb{E}[C] = \mathbb{E}[A] + \mathbb{E}[B]$), and let $p(A = a) = 1$. Are the following true? $\mathbb{E}[C^2] = a^2 + 2a\mathbb{E}[B] + \mathbb{E}[B^2]$ ... 1answer 44 views ### Energy eigenvalues of a Q.H.Oscillator with $[\hat{H},\hat{a}] = -\hbar \omega \hat{a}$ and $[\hat{H},\hat{a}^\dagger] = \hbar \omega \hat{a}^\dagger$ I just finished deriving the commutators: \begin{align} [\hat{H}, \hat{a}] &= -\hbar \omega \hat{a}\\ [\hat{H}, \hat{a}^\dagger] &= \hbar \omega \hat{a}^\dagger\\ \end{align} On the ... 0answers 46 views ### A particlar normal ordering problem Say we have an expression of the form: $$\left<0\right|:\phi(x)^2: : \phi(y)^2:\left|0\right>,$$ where $\phi$ is some scalar field. I have heard the claim several times, that in evaluating ... 2answers 179 views ### Coherent State, Unitary Operators, Harmonic Oscillator Consider the operator: $$O = e^{\theta(a^\dagger b - b^\dagger a)}$$ where $\theta$ is a constant. $O$ is a unitary operator. $a$, $a^\dagger$, $b$, and $b^\dagger$ are ladder operators for two ... 2answers 88 views ### Proof for commutator relation $[\hat{H},\hat{a}] = - \hbar \omega \hat{a}$ I know how to derive below equations found on wikipedia and have done it myselt too: \begin{align} \hat{H} &= \hbar \omega \left(\hat{a}^\dagger\hat{a} + \frac{1}{2}\right)\\ \hat{H} &= ... 1answer 51 views ### The matrix element of a normal-ordered operator Eq (1.137) in Negele and Orland gives the following identity for a normal-ordered operator $A(a_i^\dagger,a_i)$: \langle \phi|A(a_i^\dagger,a_i)|\phi'\rangle=A(\phi_i^*,\phi'_i)e^{\sum ... 2answers 72 views ### Translator Operator In Modern Quantum Mechanics by Sakurai, at page 46 while deriving commutator of translator operator with position operator, he uses $$\left| x+dx\right\rangle \simeq \left| x \right\rangle.$$ But for ... 
1answer 105 views ### Phys.org Spectral geometry to unite relativity and quantum mechanics, restate in laymens terms? Lingua Franca links relativity and quantum theories with spectral geometry Could someone give me a short synopsis of this article in laymens terms? What implications does this have in the physics ... 0answers 88 views ### How to define the mirror symmetry operator for Kane-Mele model? Let us take the famous Kane-Mele(KM) model(http://prl.aps.org/abstract/PRL/v95/i22/e226801 and http://prl.aps.org/abstract/PRL/v95/i14/e146802) as our starting point. Due to the time-reversal(TR), ... 1answer 81 views ### Matrix representation for fermionic annihilation operator My guess it should look something like this: \$ c_\sigma = ... 0answers 38 views ### Time ordering and Fermions Having time ordering operator for fermions, should it reverse sign if it swaps operators with opposite spin variable? In other words should $T[c_{t_1,\uparrow}c_{t_2,\downarrow}^\dagger]$ return ... 0answers 36 views ### QFT basics for Klein-Gordon fields I am teaching myself QFT from Peskin for next years maths course and I have two questions: What is a c-number? Is it a complex number, and if so why does it mean, ... 2answers 106 views ### Quantum commutator I'm given this commutator: $$\left[PXP,P\right]$$ Being $P\psi=-i\hbar\partial_x\psi$, and $X\psi=x\psi$ I've solved it in two ways, the first one is just aplying the commutator to some function ... 1answer 86 views ### Hermitian Adjoint of differential operator I came across this equation (identity) (Eq. 4 in this paper): $\int(-i d\psi/dx)^*\psi dx = \int \psi^*(-i d\psi/dx) dx + id(\psi^*\psi)/dx\mid_{-\infty}^{+\infty}$ I have trouble proving it. I ... 3answers 103 views ### Associating a Unitary operator to proper Lorentz transformations? If one reads eg page 32 of Srednicki where he says: In quantum theory, symmetries are represented by unitary (or antiunitary) operators. 
This means that we associate a unitary operator U(Λ) ... 1answer 56 views ### Quantum mechanical analogue of conjugate momentum In classical mechanics, we define the concept of canonical momentum conjugate to a given generalised position coordinate. This quantity is the partial derivative of the Lagrangian of the system, with ... 1answer 80 views ### Operators in quantum mechanics According to the Quantum Mechanics, can we write $\langle q|p\rangle = e^{ipq}$? If so then how? And if we transfer to integrate formulation then how it will look like? 1answer 76 views ### Physics Applications of Fredholm Theory: I find Fredholm theory beautiful, especially the Liouville-Neumann series for solving Fredholm integral equations of the second kind. There seems to be a consensus that these equations are quite ... 0answers 26 views ### Quantum graph theory: complex spectra In quantum graph theory, what are the properties of a given graph to own complex conjugated complex eigenvalues, either finite or infinite? Spectral graph theory is as far as I know a not completely ... 1answer 61 views ### Spectral properties of CFT What are the general spectral properties of CFT? I mean what is the "spectrum"/eigenvalues of CFT in 2d and d>2 spacetime dimensions? I understand the "spectrum" and "Fock space" realization of Dirac ... 1answer 104 views ### The issue on existence of inverse operations of $a$ and $a^{\dagger}$ I have asked a question at math.stackexchange that have a physical meaning. My assumption: Suppose $a$ and $a^\dagger$ is Hermitian adjoint operators and $[a,a^\dagger]=1$. I want to prove that ... 3answers 313 views ### How to tackle 'dot' product for spin matrices I read a textbook today on quantum mechanics regarding the Pauli spin matrices for two particles, it gives the Hamiltonian as H = \alpha[\sigma_z^1 + \sigma_z^2] + ... 
5answers 223 views ### Math of eigenvalue problem in quantum mechanics I learned the eigenvalue problem in linear algebra before and I just find that the quantum mechanics happen to associate the Schrodinger equation with the eigenvalue problem. In linear algebra, we ... 1answer 64 views ### Notational techniques for dealing with creation operators on Fock space This question is trying to see if anyone has some simple notation (or tricks) for dealing with operators acting on coherent states in a Fock space. I use bosons for concreteness; what I'm interested ... 1answer 170 views ### Show that for QM operator A: $\int_{-\infty}^{\infty}\psi A^{\dagger}A\psi dx = \int_{-\infty}^{\infty}(A\psi)^*(A\psi)dx$ I need to show for $$A = \frac{d}{dx} + \tanh x, \qquad A^{\dagger} = - \frac{d}{dx} + \tanh x,$$ that \int_{-\infty}^{\infty}\psi^* A^{\dagger}A\psi dx = ... 1answer 123 views ### Coordinate representation of quantum ladder operator? I can't seem to figure out how to derive the coordinate representation of the $a_+$ ladder operator in quantum mechanics. I know that $a_-$ is $\sqrt{\frac{1}{2mwh}} (mwx + i\dot{p})$ in which where ... 1answer 132 views ### String theory - OPE and primary operators First, a disclaimer: I am new to Physics SE, and I am primarily a mathematician, not a physicist. I apologise in advance for the possibly poor quality of the question, any and thank you for your ... 3answers 196 views ### Why is this identity an if, rather than if and only if? A recent question (Product of exponential of operators) asked who to proved that the exponentials of operators multiply in same manner as those of scalars if and only if the commutator of the ... 0answers 73 views ### Explicit evaluation of a radially ordered product I am trying to understand the application of the operator product expansion to calculate the radially ordered product in the complex plain of $T_{zz}(z)\partial_w X^{\rho}(w)$ which should result in ... 
2answers 226 views ### Deriving a QM expectation value for a square of momentum $\langle p^2 \rangle$ I alredy derived a QM expectation value for ordinary momentum which is: $$\langle p \rangle= \int\limits_{-\infty}^{\infty} \overline{\Psi} \left(- i\hbar\frac{d}{dx}\right) \Psi \, d x$$ And i ... 4answers 374 views ### Product of exponential of operators in the context of non-relativistic quantum mechanics I want to show that, for any $A$ and $B$ operators $$e^{A}e^{B}=e^{A+B}$$ if and only if $$[A,B]=0$$ I remember my professor told use about ... 4answers 132 views ### How to prove that the symmetrisation Operator is hermitian? Let $\mathcal{H}_N$ be the $N$ particle Hilbert space. So a quantum state $\left| \Psi \right>$ may be representated by \left| \Psi \right> = \left| k_1 \right>^{(1)}\left| k_2 ... 3answers 141 views ### Operators explaination and momentum operator in QM I know and understand why equation below holds. But i am new to operator thing in QM and would need some explaination on this. \langle x \rangle = \int\limits_{-\infty}^\infty |\Psi|^2 x \, ... 2answers 161 views ### Weyl Ordering Rule While studying Path Integrals in Quantum Mechanics I have found that [Srednicki: Eqn. no. 6.6] the quantum Hamiltonian $\hat{H}(\hat{P},\hat{Q})$ can be given in terms of the classical Hamiltonian ... 4answers 266 views ### Is the momentum operator diagonal in position representation? The matrix elements of the momentum operator in position representation are: $$\langle x | \hat{p} | x' \rangle = -i \hbar \frac{\partial \delta(x-x')}{\partial x}$$ Does this imply that \$\langle x ... 1answer 48 views ### Question about the linearity of wave functions For piece-wise constant potential, the potential energy is constant so the time dependent wave function can take the form $\psi(x,t)=C_1e^{i(kx- \omega t)}+C_2e^{i(-kx-\omega t)}$ where ... 
1answer 95 views ### Klein-Gordon Canonical Commutation Relation (CCR) In the complex Klein-Gordon field we regard as dynamical variables the field $\phi$, the complex conjugate of the field $\phi^*$, and the momenta $\pi$, $\pi^*$. I can't see how should arise the ... 1answer 140 views ### commutation of operator product expansion In CFT, when we have an OPE: $$O_1(z)O_2(w)=\frac{O_2(w)}{(z-w)^2}+\frac{\partial O_2(w)}{(z-w)}+...$$ this holds inside a time-ordered correlation function, so $O_1(z)O_2(w)=O_2(w)O_1(z)$. Does it ... 1answer 54 views ### Can I prove boundedness of an operator without checking it for its whole domain? (I don't have a direct reference so this is a little fishy and I'll delete it if nobody recognises what I'm talking about, but I though for starters I'll ask anyway) I've heard at university that if ... 0answers 34 views ### Why there is no operator for time in QM? [duplicate] Is there one central reason why there is no "Time" operator in QM? I know this question has been asked before, but I thought I would try to stimulate some fresh thinking. 2answers 235 views ### Derivatives of operators How do derivatives of operators work? Do they act on the terms in the derivative or do they just get "added to the tail"? Is there a conceptual way to understand this? For example: say you had the ... 1answer 127 views ### Evaluate Commutator with Partial Derivatives I need to evaluate the following commutator... $[x(\frac{\partial}{\partial y})-y(\frac{\partial}{\partial x}),y(\frac{\partial}{\partial z})-z(\frac{\partial}{\partial y})]$ i tried applying an ... 2answers 143 views ### Sum of two density matrices: $\rho=p_1\rho_1+p_2\rho_2$ Suppose we have $$\rho=p_1\rho_1+p_2\rho_2$$ Where $\rho_1$ and $\rho_2$ are density matrices with $p_1+p_2=1$ I'm trying to show this is also a density matrix If we let \rho_1=\sum_i^n ... 1answer 128 views ### Once I have the eigenvalues and the eigenvectors, how do I find the eigenfunctions? 
I am using Mathematica to construct a matrix for the Hamiltonian of some system. I have built this matrix already, and I have found the eigenvalues and the eigenvectors, I am uncertain if what I did ...
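Several of the questions listed above ask about the commutators $[\hat H,\hat a]=-\hbar\omega\hat a$ and $[\hat a,\hat a^\dagger]=1$ for the harmonic oscillator. As a quick illustration (my own sketch, not from any of the threads), these can be checked with truncated matrix representations in NumPy, taking $\hbar=\omega=1$; note that the canonical commutator only holds away from the truncation corner:

```python
import numpy as np

# Truncated N x N harmonic-oscillator matrices, hbar = omega = 1.
N = 8
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
ad = a.conj().T                                # creation operator
H = ad @ a + 0.5 * np.eye(N)                   # H = a^dag a + 1/2

# [H, a] = -a holds exactly even in the truncated representation,
# since a^dag a is exactly diag(0, ..., N-1) here.
comm = H @ a - a @ H
assert np.allclose(comm, -a)

# [a, a^dag] = 1 fails only at the truncation corner: the last diagonal
# entry is 1 - N instead of 1.
ccr = a @ ad - ad @ a
assert np.allclose(np.diag(ccr)[:-1], 1.0)
assert np.isclose(ccr[-1, -1], 1.0 - N)
```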
http://regularize.wordpress.com/tag/abstract/
# regularize

Trying to keep track of what I stumble upon

July 26, 2012

## Estimating unknown sparsity, sparse signal processing and proximal Newton-type methods

Posted by Dirk under Math, Optimization, Signal and image processing, Sparsity | Tags: abstract, arxiv, compressed sensing, papers, sparsity |

In this post I just collect a few papers that caught my attention in the last month. I begin with Estimating Unknown Sparsity in Compressed Sensing by Miles E. Lopes. The abstract reads:

Within the framework of compressed sensing, many theoretical guarantees for signal reconstruction require that the number of linear measurements ${n}$ exceed the sparsity ${\|x\|_0}$ of the unknown signal ${x\in\mathbb{R}^p}$. However, if the sparsity ${\|x\|_0}$ is unknown, the choice of ${n}$ remains problematic. This paper considers the problem of estimating the unknown degree of sparsity of ${x}$ with only a small number of linear measurements. Although we show that estimation of ${\|x\|_0}$ is generally intractable in this framework, we consider an alternative measure of sparsity ${s(x):=\frac{\|x\|_1^2}{\|x\|_2^2}}$, which is a sharp lower bound on ${\|x\|_0}$, and is more amenable to estimation. When ${x}$ is a non-negative vector, we propose a computationally efficient estimator ${\hat{s}(x)}$, and use non-asymptotic methods to bound the relative error of ${\hat{s}(x)}$ in terms of a finite number of measurements. Remarkably, the quality of estimation is dimension-free, which ensures that ${\hat{s}(x)}$ is well-suited to the high-dimensional regime where ${n<<p}$. These results also extend naturally to the problem of using linear measurements to estimate the rank of a positive semi-definite matrix, or the sparsity of a non-negative matrix. Finally, we show that if no structural assumption (such as non-negativity) is made on the signal ${x}$, then the quantity ${s(x)}$ cannot generally be estimated when ${n<<p}$.
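To make the quantities in the abstract concrete, here is a small numerical sketch (my own illustration, not from the paper, whose estimator and analysis differ in detail): for a non-negative $x$, a single measurement with the all-ones vector recovers $\|x\|_1$, and the empirical second moment of Gaussian measurements estimates $\|x\|_2^2$, giving an estimate of $s(x)=\|x\|_1^2/\|x\|_2^2$.

```python
import numpy as np

# s(x) = ||x||_1^2 / ||x||_2^2 lower-bounds ||x||_0 (Cauchy-Schwarz),
# and for non-negative x it can be estimated from linear measurements:
# <1, x> gives ||x||_1, and Gaussian g satisfies E[(g^T x)^2] = ||x||_2^2.
rng = np.random.default_rng(1)
p, n = 1000, 200
x = np.zeros(p)
x[:25] = rng.random(25)                  # non-negative, 25-sparse signal

s_true = np.linalg.norm(x, 1)**2 / np.linalg.norm(x, 2)**2
assert s_true <= np.count_nonzero(x)     # sharp lower bound on ||x||_0

l1 = np.ones(p) @ x                      # one measurement: exact ||x||_1 here
G = rng.standard_normal((n, p))          # n Gaussian measurements
l2sq = np.mean((G @ x)**2)               # estimates ||x||_2^2
s_hat = l1**2 / l2sq                     # plug-in estimate of s(x)
```

With $n=200$ Gaussian measurements the relative error of the $\|x\|_2^2$ estimate is on the order of $\sqrt{2/n}\approx 0.1$, independent of the ambient dimension $p$ — the dimension-free behavior the abstract refers to.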
It’s a nice combination of the observation that the quotient ${s(x)}$ is a sharp lower bound for ${\|x\|_0}$ and the fact that it is possible to estimate the one-norm and the two-norm of a vector ${x}$ (with additional properties) from carefully chosen measurements. For a non-negative vector ${x}$ you just measure with the constant-one vector, which (in a noisy environment) gives you an estimate of ${\|x\|_1}$. Similarly, measuring with a Gaussian random vector you can obtain an estimate of ${\|x\|_2}$.

Then there is the dissertation of Dustin Mixon on the arxiv: Sparse Signal Processing with Frame Theory, which is well worth reading but too long to provide a short overview of. Here is the abstract:

Many emerging applications involve sparse signals, and their processing is a subject of active research. We desire a large class of sensing matrices which allow the user to discern important properties of the measured sparse signal. Of particular interest are matrices with the restricted isometry property (RIP). RIP matrices are known to enable efficient and stable reconstruction of sufficiently sparse signals, but the deterministic construction of such matrices has proven very difficult. In this thesis, we discuss this matrix design problem in the context of a growing field of study known as frame theory. In the first two chapters, we build large families of equiangular tight frames and full spark frames, and we discuss their relationship to RIP matrices as well as their utility in other aspects of sparse signal processing. In Chapter 3, we pave the road to deterministic RIP matrices, evaluating various techniques to demonstrate RIP, and making interesting connections with graph theory and number theory. We conclude in Chapter 4 with a coherence-based alternative to RIP, which provides near-optimal probabilistic guarantees for various aspects of sparse signal processing while at the same time admitting a whole host of deterministic constructions.
By the way, the thesis is dedicated “To all those who never dedicated a dissertation to themselves.”

Further we have Proximal Newton-type Methods for Minimizing Convex Objective Functions in Composite Form by Jason D. Lee, Yuekai Sun, Michael A. Saunders. This paper extends the well-explored first-order methods for problems of the type ${\min g(x) + h(x)}$, with Lipschitz-differentiable ${g}$ and simple ${\mathrm{prox}_h}$, to second-order Newton-type methods. The abstract reads:

We consider minimizing convex objective functions in composite form $\displaystyle \min_{x\in\mathbb{R}^n} f(x) := g(x) + h(x)$ where ${g}$ is convex and twice-continuously differentiable and ${h:\mathbb{R}^n\rightarrow\mathbb{R}}$ is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. Many problems of relevance in high-dimensional statistics, machine learning, and signal processing can be formulated in composite form. We prove such methods are globally convergent to a minimizer and achieve quadratic rates of convergence in the vicinity of a unique minimizer. We also demonstrate the performance of such methods using problems of relevance in machine learning and high-dimensional statistics.

With this post I say goodbye for a few weeks of holiday.

October 31, 2011

## Reading and writing abstracts

Posted by Dirk under Math | Tags: abstract, writing |

I read abstracts quite a lot and I also have to write them occasionally. Nowadays I usually write abstracts quite quickly and don’t think too much about whether the abstract is going to be good. This semester I am organizing a seminar for the graduate students and post docs in our department (we simply call this seminar “research seminar”) and in particular have to collect the abstracts to make the announcements.
Since all mathematical institutes here are involved, the topics vary a lot: abstract algebra, numerical linear algebra, mathematical physics, optimization. Hence, it may happen that I get abstracts which I can hardly access, and I wondered whether it is always possible to write an abstract that someone like me can understand. At this point I should probably explain what I mean by “understand”: I think that an abstract should allow the reader to locate the field in which the talk will be situated. Moreover, I should understand what the objects of interest will be and what role these objects play in the field. Finally, it would be good if I were able to estimate how the topic of the talk relates to other fields of mathematics (especially to fields I am interested in).

Since the people who give talks in our research seminar are mostly quite early in their careers, I tried the following experiment: When I received a new abstract I read it thoroughly and tried to understand it in the above sense. Usually there was something which I did not get at all, so I replied to the speaker and asked the very basic questions I had asked myself while reading the abstract; questions like “What are the objects you talk about?” or “What can one do with these objects or results?”. The speakers then responded with a new version of the abstract, and from my point of view this process always produced a much better abstract. Hence, I started thinking about rules and tips for writing a good abstract. From what I’ve written above, I tried to deduce some rules, which I collect here:

• Avoid jargon. Although this sounds obvious, most abstracts contain jargon in one way or another. Of course one cannot avoid the use of specific terminology and technical terms, but even then there is an easy check of whether a technical term is appropriate: Try to find a definition on the internet – if you do not succeed within a few minutes, you should find a different word.
I’ll try to illustrate this with an example (which I chose randomly from arxiv.org, on a topic which is quite far away from mine): On a convex body in a Euclidean space, we introduce a new variational formulation for its Funk metric, a Finsler metric compatible with the tautological Finsler structure of the convex body. I know what a convex body in Euclidean space is and I know what could be meant by a variational formulation; however, I had no idea what a Funk metric is – but it was not hard to find out that a Finsler structure is something like a “metric varying continuously from point to point”. Well, it’s still unclear to me what the “tautological” Finsler structure of the convex body is supposed to be, but this is something that I hope a talk or paper could explain. This example is somewhat borderline, since the additional explanation still leads to terms which are not all defined on Wikipedia or Wolfram MathWorld. But still, this sentence gives me something: The author studies the geometry of convex bodies and will come up with a variational formulation of a special metric.

• Use buzzwords. This may sound like it contradicts the previous point, and in part it does. But note that you can use a buzzword together with its explanation. Again, the example from the previous point works: “Funk metric” may be a buzzword, and the explanation using the name “Finsler” is supposed to ring a bell (as I learned, it is related to Hilbert’s 23rd problem). This helps readers to find related work and to remember which field you were working in.

• General to specific. In general it’s good advice to work from general to specific. Start with a sentence which points in the direction of the field you are working in; that way your potential audience will know from the beginning in which field your work is situated.

• Answer questions. If you think that your work answers questions, why not pose the questions in the abstract?
This may motivate readers to think for themselves and draw their interest to the topic.

• Don’t be afraid of layman’s terms. Although layman’s terms usually do not give an exact description and are sometimes even ridiculously oversimplified, they still help to form a mental picture.

Finally, I’d like to repeat a piece of advice which you can find in almost every other collection of tips on writing: Write, read, rewrite and reread your abstract. Repeat this procedure.
http://math.stackexchange.com/questions/19694/what-is-the-most-frequent-number-of-edges-of-voronoi-cells-of-a-large-set-of-ran?answertab=active
# What is the most frequent number of edges of Voronoi cells of a large set of random points?

Consider a large set of points with coordinates that are uniformly distributed within a unit-length segment. Consider a Voronoi diagram built on these points. If we consider only non-infinite cells, what would be (if any) the typical (most frequent) number of edges (that is, neighbors) for those cells? Is there a limit for this number when the number of points goes to infinity? Does it have anything in common with the `kissing number`? If so, does it generalize to higher dimensions, that is, 6 for 2D, 13 for 3D, etc.?

-

## 2 Answers

I think the book Spatial tessellations: concepts and applications of Voronoi diagrams by OKABE, BOOTS, and SUGIHARA discusses this.

-

All the links to content on the page you provided are dead. – mbaitoff Jan 31 '11 at 11:55

@mbaitoff: yes, it seems so, sorry. But it's easily fixable: just use ua instead of okabe in the host part. For instance, the contents are at ua.t.u-tokyo.ac.jp/okabelab/Voronoi/contents.html – lhf Jan 31 '11 at 12:17

There are contents headlines only, no book contents. – mbaitoff Jan 31 '11 at 12:49

The book is on the net, thanks Buddha. Now starting reading. – mbaitoff Jan 31 '11 at 12:58

Well, it seems that it's not easy to find an answer in a 683-page book. If you know the answer, would you please post it instead? – mbaitoff Jan 31 '11 at 14:55

I don't have access to the book that @lhf referenced, but here is a nice topological argument for the expected number of edges of a Voronoi cell in $2$D. Unfortunately, it does not generalize to more than two dimensions.

Consider the dual of the Voronoi diagram, which is the Delaunay triangulation of the given points. This is a planar connected graph, so its Euler characteristic is $\chi = 2$. The Euler characteristic is also given by $\chi = V - E + F$, where $V$, $E$, and $F$ are the number of vertices, edges, and faces in the Delaunay triangulation respectively.
Now every face has $3$ edges, while all non-boundary edges are adjacent to $2$ faces. Under reasonable conditions*, the proportion of boundary edges tends to zero, so let us ignore them. This means that $3F \approx 2E$, and plugging this into $V - E + F = 2$ gives $V \approx \frac13E + 2$, where by "$\approx$" I mean the ratio tends to $1$ as $V \to \infty$. So there are about three times as many edges as vertices, and since each edge is incident on $2$ vertices, the average degree of the vertices approaches $6$. As the degree of a vertex in the Delaunay triangulation is precisely the number of edges of the corresponding Voronoi cell, this agrees with your intuition for the $2$D case. (Although, strictly speaking, the expected number of edges is not the same as the most frequent number of edges).

In three dimensions, the Euler characteristic is still $2$, but is now given by $V - E + F - C$, where $C$ is the number of tetrahedral cells in the triangulation. A similar argument as above gives $4C \approx 2F$, but we have no control over $E$. Indeed, there are triangulations on the same point set with the same boundary (the convex hull) but which have different numbers of edges. In the $2$D case, every such triangulation had exactly the same number of edges! So in $3$D one will have to think about the geometry of the Voronoi diagram and/or Delaunay triangulation, and cannot get a result purely via its topology.

* I believe it is sufficient for the points to be drawn from a uniform distribution on a strictly convex area, but I don't know the details.

-

Can we rely on the fact that the ratio of the perimeter of the figure to the area of the figure goes to zero, and assume this fact is true for the ratio of the number of boundary vertices to the number of internal vertices? – mbaitoff Feb 8 '12 at 12:27
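The Euler-characteristic bookkeeping in the answer can be sanity-checked on a concrete family of triangulations. The following short Python sketch (my own construction, not from the answer) triangulates an n×n grid of points by cutting each unit square with one diagonal and computes the average vertex degree, which indeed tends to 6:

```python
# Triangulate the n x n integer grid by cutting each unit square with one
# diagonal: V = n^2 vertices; E = 2n(n-1) horizontal/vertical edges plus
# (n-1)^2 diagonals.  Since each edge touches 2 vertices, the average
# degree is 2E/V, which tends to 6 as n grows -- matching the
# Euler-characteristic argument above.
def average_degree(n):
    V = n * n
    E = 2 * n * (n - 1) + (n - 1) ** 2
    return 2 * E / V

for n in (10, 100, 1000):
    print(n, average_degree(n))
```

The boundary effect is visible too: the average approaches 6 from below, because the boundary vertices (whose proportion vanishes like 1/n) have smaller degree.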
http://mathhelpforum.com/statistics/63364-finding-expected-value-standard-deviation-help-please.html
# Thread:

1. ## Finding Expected Value and Standard deviation...Help please :(

The question contains a table that shows the approximate number of males of Hispanic origin employed in the U.S. in 2005, broken down by age group:

| Age | 15-24.9 | 25-54.9 | 55-64.9 |
| --- | --- | --- | --- |
| Employment (thousands) | 16,000 | 13,000 | 1,600 |

It then asks that, using the midpoints of the given measurement classes, I compute the expected value and the standard deviation of the age X of a male Hispanic worker in the U.S. The back of the book tells me that the mean is 30.2 years and the standard deviation 11.78 years, but I have absolutely NO idea how these were obtained. The chapter that covers this question is about Measures of Dispersion, if that helps at all. Thank you for all your help.

2. Total no. of people working $= 16000 + 13000 + 1600 = 30600$

expected value, $E(X) = \frac{24.9 + 15}{2} \times \frac{16000}{30600} + \frac{54.9 + 25}{2} \times \frac{13000}{30600} + \frac{64.9 + 55}{2} \times \frac{1600}{30600} = 30.5 \text{ years}$

standard deviation, $\sigma = \sqrt{E(X^2) - [E(X)]^2}$

To find $E(X^2)$: $E(X^2) = \left( \frac{24.9 + 15}{2}\right)^2 \times \frac{16000}{30600} + \left( \frac{54.9 + 25}{2} \right)^2 \times \frac{13000}{30600} + \left( \frac{64.9 + 55}{2} \right)^2 \times \frac{1600}{30600}$, which gives $\sigma = 11.895 \text{ years}$.

My answers are slightly different but I'm sure I've done the method correctly...hmmm....
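The computation in reply 2 is easy to reproduce in a short script. This is a sketch using the numbers from the thread (the variable names are mine):

```python
from math import sqrt

# Midpoints of the three age classes and the employment counts
# (in thousands), exactly as in reply 2 above.
mids   = [(15 + 24.9) / 2, (25 + 54.9) / 2, (55 + 64.9) / 2]
counts = [16000, 13000, 1600]
total  = sum(counts)                       # 30600

# Weighted first and second moments, then sigma = sqrt(E[X^2] - E[X]^2).
EX    = sum(m * c for m, c in zip(mids, counts)) / total
EX2   = sum(m * m * c for m, c in zip(mids, counts)) / total
sigma = sqrt(EX2 - EX ** 2)

print(round(EX, 2), round(sigma, 3))   # -> 30.54 11.895
```

This reproduces the 30.5 / 11.895 figures of reply 2; the small gap to the book's 30.2 / 11.78 presumably comes from the book using slightly different class midpoints or rounding.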
http://mathoverflow.net/questions/87708/partitions-of-unity
## Partitions of Unity

Fix a metric $g$ on a smooth, closed manifold $\mathcal{M}$. Take a finite subcover of the manifold from its atlas. Is it true that any smooth partition of unity subordinate to this cover has uniformly bounded derivatives in $L^p (\mathcal{M})$ for each $p \geq 1$?

-

Have you tried to figure this out yourself? What difficulty did you run into? Also, note that you need to clarify what you mean by "uniformly bounded derivatives in $L^p(M)$". – Deane Yang Feb 6 2012 at 19:46

I mean that $|| \nabla ^k \phi _j ||_{ L^p (\mathcal{M})} \leq K$ for each $k \geq 0$, where $K$ can depend on $j$ but is independent of $k$. Here, $\phi _j$ is one function in the partition of unity. In terms of trying it for myself, I have given it some thought but was hoping I was missing something trivial. – T-' Feb 6 2012 at 20:19

@Michael, something is not completely clear to me: you want a bound on the $L^p$ norms of the derivatives, which is uniform on which of the following: $p\ge1$; any partition of unity subordinated to the subcover $\mathcal{U}$; at least some partition of unity subordinated to the subcover $\mathcal{U}$; the subcover $\mathcal{U}$ itself? Thanks. – Pietro Majer Feb 6 2012 at 20:24

It appears that he wants it to be uniform in the number of derivatives. I don't have an answer to this, but I think it suffices to find a single compactly supported smooth function on $R^n$ with a uniform bound for any number of derivatives in $L^p$. – Deane Yang Feb 6 2012 at 20:44

I doubt it. Here's why: I would guess that you can reduce this to a partition of unity for a closed interval. Now the derivatives of a given element of the "standard explicit" partition (see, e.g., Dubrovin, Fomenko, and Novikov's construction) are going to oscillate more as the order of the derivative increases.
Looking at tanh (which is similarish) I don't see why the extrema of these derivatives would be bounded (though I don't have ready access to software to check this numerically at the moment). – Steve Huntsman Feb 6 2012 at 21:02

## 1 Answer

A counterexample: Let $M$ be the unit circle. Let the two charts be the arcs $A = (0,2\pi)$ and $B = (\pi/2, 5\pi/2)$. For each $n$, consider the partition of unity subordinate to $\{A,B\}$ given by $$\psi_{A,n} = \sin^2 ( (2n+1) \theta )$$ and $$\psi_{B,n} = \cos^2 ( (2n+1) \theta )$$ Check that $\psi_{A,n} (0) = 0 = \psi_{B,n} (\pi/2)$, and clearly $\psi_{A,n} + \psi_{B,n} = 1$. By the scaling property it is easy to check that $$\| \partial^k\psi_{A,n} \| = (2n+1)^k \| \partial^k\psi_{A,0} \|$$ And using that $$\sin^2(\theta) = \frac12 (1-\cos 2\theta)$$ you see immediately that your desired uniform bound is impossible.

-

Thanks for this! – T-' Mar 1 2012 at 16:42
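The scaling claim in the answer is easy to verify numerically. The sketch below (the function name and sample count are my choices) estimates the sup of the first derivative of $\psi_{A,n}(\theta) = \sin^2((2n+1)\theta)$; since that derivative is $(2n+1)\sin(2(2n+1)\theta)$, its sup equals $2n+1$, which grows without bound in $n$:

```python
from math import sin, pi

# psi_{A,n}(theta) = sin^2((2n+1) theta), so its first derivative is
# (2n+1) * sin(2 (2n+1) theta), whose sup norm is exactly 2n+1.
# Estimate that sup by sampling theta on a fine grid of the circle.
def sup_first_derivative(n, samples=100001):
    m = 2 * n + 1
    return max(abs(m * sin(2 * m * (2 * pi * k / samples)))
               for k in range(samples))

for n in (0, 1, 5):
    print(n, sup_first_derivative(n))
```

Higher derivatives scale the same way, picking up one more factor of $2n+1$ each time, which is exactly why no bound independent of $n$ can hold.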
http://math.stackexchange.com/questions/176329/when-is-a-quasi-projective-variety-affine
# When is a quasi-projective variety affine? By an affine variety I mean a variety that is isomorphic to some irreducible algebraic set in $\mathbb A^n$ and by a quasi-projective variety I mean a locally closed subset of $\mathbb P^n$, with the usual Zariski topology and structure sheaf. (I am not quite familiar with the language of schemes.) Assume that the underlying field is algebraically closed. Let $X$ be a quasi-projective variety. Are there any effective ways to tell whether $X$ is affine? Let me be more specific. If $X$ is affine, then $\mathcal O_X(X)$ is large enough so that it completely determines the variety. In particular, the Nullstellensatz holds, i.e. the map $$X\to \mathrm {spm}(\mathcal O_X(X))$$ $$x\mapsto \{f\in\mathcal O_X(X)|f(x)=0\}$$ is a 1-1 correspondence, where $\mathrm{spm}(\mathcal O_X(X))$ denotes the maximal ideals in $\mathcal O_X(X)$. Now I'd like to pose the question: If $X$ is quasi-projective and the above map is a bijection, is $X$ necessarily affine? Thanks! - If it is an isomorphism of varieties, then yes – of course. The map you construct is automatically a morphism of varieties, so the question is whether it is enough that it be a bijection for it to be an isomorphism. Unfortunately, there are examples of morphisms that are bijective but nonetheless not invertible... – Zhen Lin Jul 29 '12 at 1:37 According to a theorem of Serre (Hartshorne Theorem III.3.7), it is necessary that $H^1(X,\mathcal I)=0$ for every coherent sheaf of ideals $\mathcal I$ on $X$. I don't know whether checking all such conditions is really feasible though... – Andrew Jul 29 '12 at 1:45 2 @ZhenLin: Yeah, I know that map. But that does not provide an immediate counterexample to my question, does it? – Andrew Jul 29 '12 at 1:58 Dear Andrew, Note that $\mathcal O(X)$ is not always f.g. over the ground ring $k$, so that maxspec $\mathcal O(X)$ is not always an affine variety. (See my answer for a little more on this point, and a link to an example.) 
Regards, – Matt E Jul 30 '12 at 2:57

## 1 Answer

If the map from $X$ to the maxspec of $\mathcal O(X)$ is a bijection, then $X$ is indeed affine. Here is an argument:

By assumption $X \to$ maxspec $\mathcal O(X)$ is bijective, thus quasi-finite, and so by (Grothendieck's form of) Zariski's main theorem, this map factors as an open embedding of $X$ into a variety that is finite over maxspec $\mathcal O(X)$. Any variety finite over an affine variety is again affine, and hence $X$ is an open subset of an affine variety, i.e. quasi-affine. So we are reduced to considering the case when $X$ is quasi-affine, which is well-known and straightforward.

(I'm not sure that the full strength of ZMT is needed, but it is a natural tool to exploit to get mileage out of the assumption of a morphism having finite fibres, which is what your bijectivity hypothesis gives.)

In fact, the argument shows something stronger: suppose that we just assume that the morphism $X \to$ maxspec $\mathcal O(X)$ has finite non-empty fibres, i.e. is quasi-finite and surjective. Then the same argument with ZMT shows that $X$ is quasi-affine. But it is standard that the map $X \to$ maxspec $\mathcal O(X)$ is an open immersion when $X$ is quasi-affine, and since by assumption it is surjective, it is an isomorphism.

Note that if we omit one of the hypotheses of surjectivity or quasi-finiteness, we can find a non-affine $X$ satisfying the other hypothesis. E.g. if $X = \mathbb A^2 \setminus \{0\}$ (the basic example of a quasi-affine, but non-affine, variety), then maxspec $\mathcal O(X) = \mathbb A^2$, and the open immersion $X \to \mathbb A^2$ is evidently not surjective. E.g. if $X = \mathbb A^2$ blown up at $0$, then maxspec $\mathcal O(X) = \mathbb A^2$, and $X \to \mathbb A^2$ is surjective, but has an infinite fibre over $0$.
Caveat/correction: I should add the following caveat, namely that it is not always true, for a variety $X$ over a field $k$, that $\mathcal O(X)$ is finitely generated over $k$, in which case maxspec may not be such a good construction to apply, and the above argument may not go through. So in order to conclude that $X$ is affine, one should first insist that $\mathcal O(X)$ is finitely generated over $k$, and then that furthermore the natural map $X \to$ maxspec $\mathcal O(X)$ is quasi-finite and surjective. (Of course, one could work more generally with arbitrary schemes and Spec rather than maxspec, but I haven't thought about this general setting: in particular, ZMT requires some finiteness hypotheses, and I haven't thought about what conditions might guarantee that the map $X \to$ Spec $\mathcal O(X)$ satisfies them.)

Incidentally, for an example of a quasi-projective variety with non-finitely generated ring of regular functions, see this note of Ravi Vakil's.

-

+1 for the nice examples at the end :-) – Andrew Jul 30 '12 at 2:55

Wooonderful argument! Thanks so much! – Andrew Aug 1 '12 at 8:43

For the fact that a quasi-affine variety satisfying my hypothesis must be affine, I only have an argument mimicking the proof that a distinguished open subset of an affine variety is affine. I didn't quite understand your "But it is standard that the map $X\to$ maxspec $\mathcal O(X)$ is an open immersion". Could you explain that further? Thanks! – Andrew Aug 1 '12 at 8:46

@Andrew: Dear Andrew, Here is a lemma (which I am calling standard, although I don't recall it being in Hartshorne --- so maybe it's not completely standard!): a scheme $X$ is quasi-affine if and only if the morphism $X \to$ Spec $\mathcal O(X)$ is an open immersion. The if direction is clear, since we then get an open immersion of $X$ into an affine scheme. The converse is a good exercise. If you have trouble with it, you could ask it as another question.
Cheers, – Matt E Aug 1 '12 at 12:42 @MattE: Ok, let me have a try! Thank you, Matt! – Andrew Aug 2 '12 at 15:58
http://physics.stackexchange.com/questions/16994/total-resistance-of-infinite-resistor-grid/17020
# Total Resistance of Infinite Resistor Grid?

The problem of the infinite resistor grid is very common. The solution for the resistance between any 2 nodes in an infinite resistor lattice is all over the internet. My question is somewhat similar but more pragmatic. If we had a grid that was very large yet finite... then what would be the average voltage drop across a given grid for a given current density? For argument's sake, a grid in the region of, say, 4000 by 4000. Maybe it would be safe to assume an infinite grid(?) Very interesting Q. Can anyone shed any light?

-

1 I presume you're talking about the problem as described by Randall Munroe (of xkcd) at his talk at Google. – Warrick Nov 15 '11 at 10:15

– user1011182 Nov 15 '11 at 10:29

1 @user1011182: How exactly do you define the notion of total resistance, cf. question(v1), if you say that it is not between two points in the grid? Using some limit? – Qmechanic♦ Nov 15 '11 at 11:53

1 I do not know what I want, but I want it really! – Georg Nov 15 '11 at 13:44

1 It's still not clear to me what you're asking for... you can't talk about the entire grid if the grid is infinite. So is it a finite grid, or are you looking for the resistance between two given nodes? – David Zaslavsky♦ Nov 15 '11 at 20:57

## 4 Answers

The total resistance of the grid is infinite when the grid is two dimensional and large. If you place two point probes at location x and y on an infinite 2-d resistor grid, and impose the voltage V(x)=1 and V(y)=0, the potential obeys the discretized Laplace equation:

V(up) + V(down) + V(left) + V(right) - 4 V(center) = 0

with the boundary conditions at the two given points and V=0 at infinity (beyond x and y). In the limit that x and y are far apart, the discrete Laplace equation might as well be the continuous Laplace equation, and the solution goes like C log(|r-x|/|y-x|), so that the potential difference for any finite C diverges with the distance.
This means that C has to go to zero in the large |x-y| limit, so the current vanishes. The same is true in 1d, where a line of resistors has a current which vanishes as 1/L, so the total resistance goes as the total length L. In two dimensions, the total resistance blows up as log(L). For a three dimensional grid and higher, you do have a finite resistance for a block. Whether the limiting resistance is finite or infinite is the same problem as the recurrence/nonrecurrence of a random walk on the grid.

If you make a pseudo two-d grid using N parallel lines of N resistors in series, then the resistance of each path is NR, but there are N parallel paths, so the total resistance is R, independent of the size. This is not the same as the 2-d resistor grid, because in the 2d grid there is resistance to going vertically a long way which is similar to the resistance to going horizontally, so the horizontal resistor paths are not parallel. If you make all the vertical resistors zero, and make the separation between x and y horizontal, and make the vertical width equal to |x-y|, you recover the series/parallel situation. The series-parallel example gives intuition about why two dimensions is critical for the transition from infinite resistance to finite resistance.

-

I do not understand this ignorant downvote. – Ron Maimon Nov 15 '11 at 22:00

No obvious errors, answers the question, +1 to cancel downvote. – Alexander Nov 15 '11 at 22:29

@alexander: thanks. There were a few bloopers, and I fixed those (fingers move too fast, I last worked out this problem 20 years ago) – Ron Maimon Nov 16 '11 at 0:24

You haven't been very specific about how your voltage probes are attached to this infinite grid. We could imagine the grid is stretched between two vertical conducting surfaces, one at $x=-\infty$ and one at $x=+\infty$. Then the problem is symmetric under discrete translations in the $y$ direction.
In this case it is easy to see that you will have constant current flowing through the horizontal nodes, and 0 current through the vertical nodes. Let $N_x$ be the number of nodes in the $x$ direction (to be taken to $\infty$), $N_y$ the number of nodes in the $y$ direction, and $I$ the current flowing through any horizontal resistor. Total voltage drop $V = N_x I R$, total current $I_{tot} = I N_y$, net resistance $R_{tot} = V/I_{tot} = N_x R / N_y$, so you can get any answer you want depending on how you take the limit.

Now this probably isn't what you had in mind. You probably had in mind putting two point probes in the network, and taking the locations of those two point probes to $\pm\infty$. That's a more interesting problem.

-

I don't get why you specify that the current flows only through horizontal nodes? – user1011182 Nov 16 '11 at 20:21

Because in this situation, the source and the sink are both infinite and translation invariant in y. This gives the total resistance of a long 1-d chain, and the resistance in this case goes linearly in the length. – Ron Maimon Nov 17 '11 at 18:08

I remember an interesting result that the resistance of a uniform sheet is independent of the distance between the two points, and so conductive mats are labelled "ohms/square". Doesn't an infinite grid of resistors approximate this?

-

Here is a way an electrician solves the problem: To get an analytical approximation, let's approximate the large grid by a solid, homogeneous metal sheet with thickness $H$. Let the 2 nodes be 2 cylindrical conductors, both of radius $r$, and let the distance between the nodes be $L$. Also, for simplicity, assume that the conductivity of the nodes is much greater than the conductivity of the sheet's material. Because of the last assumption we can take the nodes to have a constant potential throughout their lengths. So, to determine the electric field we can consider an electrostatic problem: Let the linear charge density on the conductors be $\pm\lambda$.
Applying Gauss' theorem to one of the nodes, we find that the field strength of the node at a distance $l$ from its axis is equal to $$E=\frac{\lambda}{2\pi\epsilon_0l}$$ The potential difference between the nodes is obtained by integration of the field (both conductors contribute): $$U=\frac{\lambda}{2\pi\epsilon_0}\int_{r}^{L-r}\left ( \frac{1}{l}+\frac{1}{L-l}\right)dl\approx\frac{\lambda}{\pi\epsilon_0}\ln\frac{L}{r},\quad L\gg r$$ Assuming that the current density $j=\gamma E$ ($\gamma$ is the sheet's conductivity) is constant over the thickness of the sheet, we obtain for the total current flowing out of a cylindrical node: $$I=2\pi rHj=2\pi rH\gamma E=\frac{H\lambda\gamma}{\epsilon_0}$$ So, the resistance between the 2 nodes is approximately $$R=\frac{U}{I}\approx\frac{1}{\pi\gamma H}\ln\frac{L}{r}$$

-
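The logarithmic growth claimed in the first answer (and visible in the $\ln(L/r)$ of the sheet formula) can be checked directly on finite grids. The following pure-Python sketch is entirely my own, not from any answer: it builds the Laplacian of an n×n grid of 1 Ω resistors, grounds one probe node, injects 1 A at the other, and solves for the potentials by Gaussian elimination.

```python
def grid_resistance(n, a, b):
    """Two-point resistance between nodes a and b of an n x n grid of 1-ohm
    resistors, by solving the grounded Kirchhoff (Laplacian) system."""
    N = n * n
    def idx(i, j):
        return i * n + j
    # Graph Laplacian of the grid: +1 on the diagonal per incident resistor,
    # -1 for each resistor between two nodes.
    G = [[0.0] * N for _ in range(N)]
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                ii, jj = i + di, j + dj
                if ii < n and jj < n:
                    u, v = idx(i, j), idx(ii, jj)
                    G[u][u] += 1.0; G[v][v] += 1.0
                    G[u][v] -= 1.0; G[v][u] -= 1.0
    I = [0.0] * N
    I[a] = 1.0  # inject 1 A at a; it exits through the grounded node b
    # Ground node b by deleting its row/column, then Gaussian elimination.
    rows = [r for r in range(N) if r != b]
    A = [[G[r][c] for c in rows] + [I[r]] for r in rows]
    m = len(rows)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    V = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = A[r][m] - sum(A[r][c] * V[c] for c in range(r + 1, m))
        V[r] = s / A[r][r]
    # With V(b) = 0 and 1 A flowing, R = V(a).
    return V[rows.index(a)]

for n in (4, 6, 8):
    print(n, grid_resistance(n, 0, n * n - 1))  # corner-to-corner, grows slowly
```

As a sanity check, for a 2×2 grid of four unit resistors the resistance between adjacent corners is 1 Ω in parallel with 3 Ω, i.e. 0.75 Ω, which the solver reproduces; the corner-to-corner values for growing n increase only slowly, consistent with logarithmic growth in the grid size.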
http://mathoverflow.net/questions/75646?sort=oldest
## “Duals” of Lindenbaum algebras

From Wikipedia I learn: The Lindenbaum algebra A of a theory T consists of the equivalence classes of sentences of T. The operations in A are inherited from those in T. If there are disjunction, conjunction and negation, A is a Boolean algebra and can be seen as a poset: The objects of A are sentences $\phi$ modulo $$T \vdash \phi \leftrightarrow \phi'$$ There is a relation $\phi \leq \psi$ iff $$T \vdash \phi \rightarrow \psi$$

Lindenbaum algebras are a bit boring since — for example — all complete theories T have the same two-element Lindenbaum algebra. They might be a bit more interesting when relaxing the conditions: Objects $\phi$ modulo: $$\vdash \phi \leftrightarrow \phi'$$ Relation $\phi \leq \psi$: $$T \vdash \phi \rightarrow \psi$$

EDIT: I made two corrections due to Joel's answer. EDIT: And a simplification.

-

## 2 Answers

You seem to be missing a T on the left when defining the relation $\leq$ on the Lindenbaum algebra, an error which seems to undercut the premise of your question about duals. Namely, one wants to define that $\phi\leq\psi$ if and only if $T\vdash \phi\leftrightarrow\phi\wedge\psi$. This is the same as $T\vdash\phi\to\psi$. You need to include the theory $T$ in this definition, since otherwise the relation is not well-defined on your equivalence classes. For example, you won't even be able to prove that $\phi\leq\phi'$ when $\phi$ and $\phi'$ have been already identified by the first part of your definition. Thus, the way you have set things up, the relation $\leq$ will not be well-defined on the equivalence classes you have set up, but using the theory $T$ when defining the order does make things well-defined. The point of the Lindenbaum algebra is that the objects in the Lindenbaum algebra represent the possible assertions that you can make, having already committed yourself to the theory $T$.
These form a Boolean algebra, and the order in that case is simply the usual order arising in any Boolean algebra. It is not a defect that the algebra has only two elements when $T$ is complete, since if you are committed to a complete theory, then every statement is either proved or refuted by the theory, and these are the two kinds of statements you can make. The way I would describe the situation is that the algebra is more interesting when the theory leaves matters unsettled, since the point of the algebra is to understand the nature of what is not yet settled by $T$. I think you can find an account of the Lindenbaum algebra in any of the standard logic texts, but I'd have to double check for a specific reference. But if we do consider the notion that you have set up, your objects are the Lindenbaum algebra of the underlying language with no theory, but you have a new relation, which is merely a pre-order, arising from the order as defined using theory $T$. Note that different objects $x$ and $y$ in your algebra can obey $x\leq y\leq x$ with $x\neq y$, so this is a pre-order rather than an order. But if we were to quotient by the corresponding equivalence relation, we would get exactly the Lindenbaum algebra arising from the theory $T$, since $\varphi\leq\psi\leq \varphi$ if and only if $\varphi$ and $\psi$ are equivalent in the Lindenbaum algebra of $T$. - @Joel: I obviously seem to have missed something (the T on the left - though I wasn't aware). So it's not about "turning things around" but about "relaxing conditions". Does the question make sense now? – Hans Stricker Sep 16 2011 at 23:27 Yes, but my last paragraph still seems to answer the question. What you have is the pre-order of the T-Lindenbaum algebra applied on the smaller equivalence classes of the underlying Lindenbaum algebra. – Joel David Hamkins Sep 16 2011 at 23:32 ### You can accept an answer to one of your own questions by clicking the check mark next to it. 
What you have defined is one way to go about this. There is another question discussing Lindenbaum algebras, and the answer by Andreas Blass discusses another possibility. Briefly, another possibility is to define an algebra over arbitrary formulas, including those containing free variables. Some people call this the Rasiowa-Sikorski approach. It is covered in the second half of the book The mathematics of metamathematics by Helena Rasiowa and Roman Sikorski. The topic is briefly treated in the chapter on model theory in Mathematical Logic: A Course with Exercises Pt.2: Recursion Theory, Godel's Theorem, Set Theory and Model Theory, by Rene Cori, Daniel Lascar. A tutorial style treatment that discusses some design decisions one can take in setting up an algebraic framework for studying logics is in these two articles (part 2 of the second): Algebraic logic by Hajnal Andréka, István Németi and Ildikó Sain, and the article Applying Algebraic Logic; a General Methodology by Hajnal Andréka, Ágnes Kurucz, István Németi and Ildikó Sain - This is not really different: just add countably many unconstrained constant terms to the theory and you get the same algebra. – François G. Dorais♦ Sep 17 2011 at 3:22
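The two definitions above differ only in which theory is used to compare formulas. A toy propositional sketch makes the dependence on $T$ concrete; as an assumption of the sketch, semantic entailment over truth assignments stands in for provability (justified, in the propositional case, by completeness), and the formulas `p`, `q` and the theory `T` below are my own illustration, not from the question:

```python
from itertools import product

VARS = ("p", "q")

def assignments():
    # all truth assignments to the propositional variables
    for bits in product((False, True), repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def leq(phi, psi, T=()):
    # phi <= psi  iff  every assignment satisfying T and phi also satisfies psi
    # (a semantic stand-in for T |- phi -> psi)
    return all(psi(a) for a in assignments()
               if all(t(a) for t in T) and phi(a))

p = lambda a: a["p"]
q = lambda a: a["q"]
T = (lambda a: (not a["p"]) or a["q"],)   # theory T = { p -> q }

assert leq(p, q, T)       # relative to T, p <= q holds
assert not leq(p, q)      # relative to the empty theory, it does not
```

Relaxing the equivalence to the empty theory while keeping $T$ in the order is exactly what produces the pre-order described in the first answer.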
http://unapologetic.wordpress.com/2012/08/13/ideals-of-lie-algebras/?like=1&source=post_flair&_wpnonce=b0679ce6ea
# The Unapologetic Mathematician ## Ideals of Lie Algebras As we said, a homomorphism of Lie algebras is simply a linear mapping between them that preserves the bracket. I want to check, though, that this behaves in certain nice ways. First off, there is a Lie algebra $0$. That is, the trivial vector space can be given a (unique) Lie algebra structure, and every Lie algebra has a unique homomorphism $L\to0$ and a unique homomorphism $0\to L$. This is easy. Also pretty easy is the fact that we have kernels. That is, if $\phi:L\to L'$ is a homomorphism, then the set $I=\left\{x\in L\vert\phi(x)=0\in L'\right\}$ is a subalgebra of $L$. Indeed, it’s actually an “ideal” in pretty much the same sense as for rings. That is, if $x\in L$ and $y\in I$ then $[x,y]\in I$. And we can check that $\displaystyle\phi\left([x,y]\right)=\left[\phi(x),\phi(y)\right]=\left[\phi(x),0\right]=0$ proving that $\mathrm{Ker}(\phi)\subseteq L$ is an ideal, and thus a Lie algebra in its own right. Every Lie algebra has two trivial ideals: $0\subseteq L$ and $L\subseteq L$. Another example is the “center” — in analogy with the center of a group — which is the collection $Z(L)\subseteq L$ of all $z\in L$ such that $[x,z]=0$ for all $x\in L$. That is, those for which the adjoint action $\mathrm{ad}(z)$ is the zero derivation — the kernel of $\mathrm{ad}:L\to\mathrm{Der}(L)$ — which is clearly an ideal. If $Z(L)=L$ we say — again in analogy with groups — that $L$ is abelian; this is the case for the diagonal algebra $\mathfrak{d}(n,\mathbb{F})$, for instance. Abelian Lie algebras are rather boring; they’re just vector spaces with trivial brackets, so we can always decompose them by picking a basis — any basis — and getting a direct sum of one-dimensional abelian Lie algebras. On the other hand, if the only ideals of $L$ are the trivial ones, and if $L$ is not abelian, then we say that $L$ is “simple”. These are very interesting, indeed. As usual for rings, we can construct quotient algebras. 
If $I\subseteq L$ is an ideal, then we can define a Lie algebra structure on the quotient space $L/I$. Indeed, if $x+I$ and $y+I$ are equivalence classes modulo $I$, then we define $\displaystyle [x+I,y+I]=[x,y]+I$ which is unambiguous since if $x'$ and $y'$ are two other representatives then $x'=x+i$ and $y'=y+j$, and we calculate $\displaystyle [x',y']=[x+i,y+j]=[x,y]+\left([x,j]+[i,y]+[i,j]\right)$ and everything in the parens on the right is in $I$. Two last constructions in analogy with groups: the “normalizer” of a subspace $K\subseteq L$ is the subalgebra $N_L(K)=\left\{x\in L\vert[x,K]\subseteq K\right\}$. This is the largest subalgebra of $L$ which contains $K$ as an ideal; if $K$ already is an ideal of $L$ then $N_L(K)=L$; if $N_L(K)=K$ we say that $K$ is “self-normalizing”. The “centralizer” of a subset $X\subseteq L$ is the subalgebra $C_L(X)=\left\{x\in L\vert[x,X]=0\right\}$. This is a subalgebra, and in particular we can see that $Z(L)=C_L(L)$. Posted by John Armstrong | Algebra, Lie Algebras ## 4 Comments » 4. Reblogged this on Peter's ruminations and commented: in Turing's on permutations paper, he refers (upon editing) to normalizers, idealizers etc.
We can get a feel for what these are:- (the ideal is a bit like the 0, in Hs = 0 for LDPCs) Comment by | August 27, 2012 | Reply
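The post's claim that the diagonal algebra $\mathfrak{d}(n,\mathbb{F})$ is abelian is easy to check directly: diagonal matrices commute, so every bracket vanishes and $Z(L)=L$. A small sketch (plain nested lists stand in for matrices; the helper names are mine, not the post's):

```python
import random

def mat_mul(a, b, n):
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(x, y, n):
    # Lie bracket [x, y] = xy - yx on n-by-n matrices
    xy, yx = mat_mul(x, y, n), mat_mul(y, x, n)
    return [[xy[i][j] - yx[i][j] for j in range(n)] for i in range(n)]

def random_diagonal(n):
    return [[random.gauss(0, 1) if i == j else 0.0 for j in range(n)]
            for i in range(n)]

n = 3
x, y = random_diagonal(n), random_diagonal(n)
# every bracket of diagonal matrices vanishes, so d(n, F) is abelian
assert all(v == 0.0 for row in bracket(x, y, n) for v in row)
```

By contrast, `bracket` applied to generic (non-diagonal) matrices is nonzero, which is what makes the non-abelian examples in this series interesting.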
http://mathhelpforum.com/calculus/167502-telescopic-series.html
# Thread: 1. ## telescopic series Sigma k=2 to infinity, 1/(k^2-1). How do you break up the k^2-1 into a minus part? I can't do it; there's no partial fraction form I can find. Please help with a little bit. 2. Originally Posted by mathcore Sigma k=2 to infinity, 1/(k^2-1). How do you break up the k^2-1 into a minus part? I can't do it; there's no partial fraction form I can find. Please help with a little bit. $\frac{1}{k^2-1}=\frac{1}{(k-1)(k+1)}=\frac{1}{2}\left(\frac{1}{k-1}-\frac{1}{k+1}\right)$ Tonio 3. Is... $\displaystyle \frac{1}{k^{2}-1} = \frac{1}{(k-1)\ (k+1)} = \frac{1}{2} \ (\frac{1}{k-1} - \frac{1}{k+1})$ (1) ... so that... $\displaystyle \sum_{k=2}^{\infty} \frac{1}{k^{2}-1} = \frac{1}{2} (1 - \frac{1}{3} + \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{5} + ...) = \frac{1}{2}\ (1+\frac{1}{2}) = \frac{3}{4}$ (2) Kind regards $\chi$ $\sigma$ 4. Originally Posted by tonio $\frac{1}{k^2-1}=\frac{1}{(k-1)(k+1)}=\frac{1}{2}\left(\frac{1}{k-1}-\frac{1}{k+1}\right)$ Tonio How do you work out this part? How would you make that jump? I'm fine with the difference of two squares part, but I couldn't reach the next part myself. The reason I ask how is because I don't want to have to ask for help with each question; if I know the logic, I can do my remaining questions just fine. Thank you. 5. Also, is 1/2 sigma (then the sum expression) the same as sigma 1/2 (then the sum expression)? We were instructed to put the 1/2 to the left of the sigma; is this ok? 6. $\frac{1}{(k-1)(k+1)}=\frac{1}{2}\left(\frac{1}{k-1}-\frac{1}{k+1}\right)$ using Partial Fractions Expansion. And if c is a constant, then $\sum c \, a_k=c \, \sum a_k$. 7. I'm stuck on another one: it is sigma k=2 to infinity again, but the summand is 1/(k^3-k). The reason I am stuck is the partial fractions give three parts, and then I don't know what to do. I solved it as -1/k + (1/2)/(k+1) + (1/2)/(k-1); I am lost because that doesn't really help me at all. 8. Do not put a new problem in the same thread. Post a new thread for the new one. 9.
Originally Posted by Miss Do not put a new problem in the same thread. Post a new thread for the new one. It's the same question, just the next one. 10. These are the rules of this forum. If you post new problems in the same thread, it will be too hard to follow the replies. Anyway, as a student taking infinite series, you should be able to find the partial fraction expansion. $\dfrac{1}{k^3-k}=\dfrac{1}{k(k^2-1)}=\dfrac{1}{k(k-1)(k+1)}=\dfrac{A}{k}+\dfrac{B}{k-1}+\dfrac{C}{k+1}$ You do not know how to find A, B & C? 11. Originally Posted by Miss These are the rules of this forum. If you post new problems in the same thread, it will be too hard to follow the replies. Anyway, as a student taking infinite series, you should be able to find the partial fraction expansion. [...] Please do not tell me what I should or shouldn't be able to find; that is not yours to say. Yes, I can do partial fractions, but I can't seem to arrange it into a telescopic form. 12. What you have done there is exactly what I did, but that form does not cancel out. 13. Also, I am not "taking infinite series"; I did not say that. I did not even say I was a student. Do not make these assumptions. These are isolated problems that have not had any teaching hours dedicated to them. 14. What is the statement of the problem? You want to find its sum or just test its convergence? 15. Find its sum. It is stated to be telescopic, and as soon as I can see how the terms cancel out, I can find the sum; that's the easy part. But right now I can't see how to get them in a cancelling form.
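Both series in the thread telescope: in the shifted copies produced by the partial-fraction expansions, everything cancels except a few boundary terms. The closed-form partial sums below are my own derivation from those expansions (not stated in the thread); they can be verified exactly with `fractions`:

```python
from fractions import Fraction

def s1(N):
    # partial sum of 1/(k^2 - 1) for k = 2..N
    return sum(Fraction(1, k * k - 1) for k in range(2, N + 1))

def s2(N):
    # partial sum of 1/(k^3 - k) for k = 2..N
    return sum(Fraction(1, k**3 - k) for k in range(2, N + 1))

for N in (2, 5, 50, 500):
    # telescoped closed forms: only the boundary terms survive
    assert s1(N) == Fraction(3, 4) - Fraction(2 * N + 1, 2 * N * (N + 1))
    assert s2(N) == Fraction(1, 4) - Fraction(1, 2 * N * (N + 1))
# as N grows the tail terms vanish, so the sums are 3/4 and 1/4
```

The same pattern answers post 15: once the expansion of 1/(k^3-k) is grouped as shifted copies of 1/k, the cancelling form is visible and only the first few terms remain.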
http://mathhelpforum.com/calculus/23377-polynomial-roots.html
# Thread: 1. ## Polynomial Roots If a polynomial of degree greater than or equal to two with coefficients from the set of real numbers has all roots in the set of real numbers, then prove that all the roots of its derivatives are also real numbers. 2. Originally Posted by sehrishqau If a polynomial of degree greater than or equal to two with coefficients from the set of real numbers has all roots in the set of real numbers, then prove that all the roots of its derivatives are also real numbers. Hint: Rolle's theorem. Between each pair of roots of a real polynomial there must be a root of the derivative. 3. Originally Posted by sehrishqau If a polynomial of degree greater than or equal to two with coefficients from the set of real numbers has all roots in the set of real numbers, then prove that all the roots of its derivatives are also real numbers. A polynomial of degree n has n roots, and here they are all real. Rolle's theorem tells us that here there is a zero of the derivative between each pair of real roots. Hence for our polynomial we have n-1 zeros of the derivative. But the derivative of our polynomial is a polynomial of degree n-1, and we have shown that it has n-1 zeros. RonL 4. There is one more issue that needs to be addressed. CaptainBlank is assuming that f(x) has exactly n roots. Then by using Opalg's argument with CaptainBlank's commentary we see there are n-1 real roots for f'(x), and so these are its only roots, for deg f'(x) = n-1. But the situation changes if there are not exactly n distinct real roots. But that (this problem) is not really so bad. Say that f(x) = (x-a)^n*(x-b)^m where 'a' is a root of multiplicity n and 'b' of multiplicity m. Then $f'(x) = n(x-a)^{n-1}(x-b)^m + m(x-a)^{n}(x-b)^{m-1}$. Thus, 'a' is a zero (of f'(x)) of multiplicity n-1 and 'b' is a zero of multiplicity m-1. Now using Opalg's argument there is another zero between 'a' and 'b'.
So far we have (n-1)+(m-1)+1 = n+m-1 zeros (counting multiplicity), which is exactly the degree of f'(x). Thus, f'(x) has only real zeros. Now in general suppose we have $f(x)=(x-a_1)^{n_1}...(x-a_k)^{n_k}$; then $a_1,...,a_k$ are zeros of $f'(x)$ of multiplicity $(n_1-1),...,(n_k-1)$. Using Opalg's hint we also have zeros between $a_1,...,a_k$, and so there are $k-1$ more zeros. Thus in total with multiplicity we have $(n_1+...+n_k)-1$ zeros, which is exactly the degree of $f'(x)$. Thus $f'(x)$ has only real zeros. 5. Originally Posted by ThePerfectHacker There is one more issue that needs to be addressed. [...] Fundamental theorem of algebra and the stated condition that all the roots are real implies n real roots for the nth degree polynomials in question. RonL 6. Originally Posted by CaptainBlank Fundamental theorem of algebra and the stated condition that all the roots are real implies n real roots for the nth degree polynomials in question. What about $x^n$? It only has 1 root, not n. We need to show that $f'(x)$ has real roots. The situation can happen that $1+i,1-i$ are roots; in that case $f'(x)$ has real coefficients, but not real roots. Consider for example $f(x) = (x-1)^3(x+1)^4$; this polynomial has only the real roots $1,-1$.
We want to show $f'(x)$ has only real roots. We know that there is a root between -1 and 1, but since the degree is 6 it could still be that the other roots are complex conjugates; in that case f'(x) would have real coefficients but not real roots. So we need to use what I mentioned above. Now consider $f(x)=(x-1)(x-2)(x-3)$: it has 3 real roots, as many as its degree, so its derivative has zeros between 1 and 2, and between 2 and 3; hence it has two real zeros, as many as its degree, and we can stop right there. But this is just a special case, because this polynomial has as many distinct roots as its degree. In the above case this does not work, so we need to consider this possibility in the proof. 7. Originally Posted by ThePerfectHacker What about $x^n$? It only has 1 root, not n. [...] The problem as I see it is that roots are being counted with multiplicities. Which can of course be sorted out by perturbing the polynomials, then using a suitable limiting argument. But not a good idea. RonL 8. I agree that multiplicity must be taken into account, for the reasons that ThePerfectHacker gives. However, if a polynomial has a root of multiplicity n, its derivative will have that same root with multiplicity n–1: $\frac d{dx}\bigl((x-a)^nq(x)\bigr) = (x-a)^{n-1}\bigl(nq(x)+(x-a)q'(x)\bigr)$. So multiplicity ought not to create any problems. 9. Originally Posted by Opalg So multiplicity ought not to create any problems. It does not create any serious problem (post #4 shows it is easily correctable). What about the following problem? Let $\bar F$ be the algebraic closure of $F$ with $[\bar F : F] < \infty$ and $f(x)\in F[x]$ has roots in $F$; then does $f'(x)$ (the algebraic derivative) have roots in $F$ also?
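The worked example in the thread, f(x) = (x-1)^3(x+1)^4, can be checked directly. Factoring out (x-1)^2(x+1)^3 from f' leaves 3(x+1)+4(x-1) = 7x-1, so f' = (x-1)^2(x+1)^3(7x-1) and all six roots of f' (1, -1, and 1/7, with multiplicity) are real, as the argument predicts. That factorization is my own computation, not from the thread; it can be verified with exact integer coefficient arithmetic:

```python
def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_der(p):
    # formal derivative of a coefficient list
    return [i * a for i, a in enumerate(p)][1:]

def poly_pow(p, n):
    out = [1]
    for _ in range(n):
        out = poly_mul(out, p)
    return out

# f(x) = (x - 1)^3 (x + 1)^4
f = poly_mul(poly_pow([-1, 1], 3), poly_pow([1, 1], 4))
# claimed factorization of f': (x - 1)^2 (x + 1)^3 (7x - 1)
g = poly_mul(poly_mul(poly_pow([-1, 1], 2), poly_pow([1, 1], 3)), [-1, 7])
assert poly_der(f) == g  # so every root of f' is real
```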
http://math.stackexchange.com/questions/200689/how-to-simplify-log-base-2-and-log-base-4/200692
# how to simplify log base 2 and log base 4 How do I simplify the following expression: $$\log_2(2x+1) - 5\log_4(x^2) + 4\log_2(x)$$ - ## 2 Answers $\log_2(2x+1)-5\log_4x^2+4\log_2x$ $=\log_2(2x+1)+\log_2x^4-5\frac{\log_yx^2}{\log_y4}$ as $\log a+ \log b=\log ab,m\log a=\log a^m$ and $\log_yz=\frac{\log_xz}{\log_xy}$ where $x\neq 1$ as $\log_1y$ is not defined. $=\log_2(2x+1)x^4-5\frac{\log_yx^2}{\log_y2^2}$ $=\log_2(2x+1)x^4-5\frac{2\log_yx}{2\log_y2}$ $=\log_2(2x+1)x^4-5\log_2x$ $=\log_2(2x+1)x^4-\log_2x^5$ $=\log_2\frac{(2x+1)x^4}{x^5}$ $=\log_2\frac{(2x+1)}{x}$ - Suppose that $x>0$ is some number, and $\log_4x=y$. That means that $4^y=x$. Now $4=2^2$, so $x=4^y=\left(2^2\right)^y=2^{2y}$, and that means that $\log_2x=2y$. In other words, we’ve just demonstrated that for any $x>0$, $\log_2x=2\log_4x$. Now you have $\log_2(2x+1)-5\log_4x^2+4\log_2x$, which mixes logs base $2$ with logs base $4$; it would be much easier to simplify if all of the logs were to the same base. Use the result of the first paragraph to change $\log_2(2x+1)$ to $2\log_4(2x+1)$ and $\log_2x$ to $2\log_4x$; then you have $$2\log_4(2x+1)-5\log_4x^2+8\log_4x\;,$$ and you can use the usual properties of logs to express this as the log base $4$ of a single expression. Going back to $\log_2x=2\log_4x$, if you happen to notice that $2\log_4x=\log_4x^2$, you simply replace $5\log_4x^2$ by $5\log_2x$ to get $$\log_2(2x+1)-5\log_2x+4\log_2x\;,$$ which is even easier to simplify. The answers that you get by these two approaches won’t be identical, since one will be a log base $4$ and the other a log base $2$, but they’ll be equal, and you can use the relationship $\log_2x=2\log_4x$ to verify this. -
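Both answers arrive at the same simplification, $\log_2\frac{2x+1}{x}$, for $x>0$ (where every logarithm above is defined). A quick numerical sanity check, using a handful of arbitrary test values of my choosing:

```python
import math

def original(x):
    # log2(2x+1) - 5*log4(x^2) + 4*log2(x)
    return math.log2(2 * x + 1) - 5 * math.log(x**2, 4) + 4 * math.log2(x)

def simplified(x):
    # the claimed simplification: log2((2x+1)/x)
    return math.log2((2 * x + 1) / x)

for x in (0.3, 1.0, 2.5, 17.0, 420.0):
    assert math.isclose(original(x), simplified(x), rel_tol=0, abs_tol=1e-9)
```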
http://nrich.maths.org/7140
# Weekly Problem 46 - 2010 ##### Stage: 3 Short Challenge Level: How many ten-digit numbers are there which contain only the digits $1$, $2$ or $3$, and in which any pair of adjacent digits differs by $1$? If you liked this problem, here is an NRICH task which challenges you to use similar mathematical ideas. This problem is taken from the UKMT Mathematical Challenges.
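The count can be checked with a short dynamic program: a digit 1 or 3 must be followed by 2, and a 2 may be followed by 1 or 3, so it suffices to track how many valid strings end in each digit. This sketch is an illustration, not part of the original problem page, and it cross-checks the recurrence against brute force for small lengths:

```python
from itertools import product

def count_strings(n):
    # ways[d] = number of valid strings of the current length ending in digit d
    ways = {1: 1, 2: 1, 3: 1}
    for _ in range(n - 1):
        # adjacent digits must differ by exactly 1: 1 <-> 2 and 2 <-> 3
        ways = {1: ways[2], 2: ways[1] + ways[3], 3: ways[2]}
    return sum(ways.values())

def brute_force(n):
    return sum(1 for s in product((1, 2, 3), repeat=n)
               if all(abs(a - b) == 1 for a, b in zip(s, s[1:])))

assert all(count_strings(n) == brute_force(n) for n in range(1, 9))
answer = count_strings(10)
```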
http://mathhelpforum.com/pre-calculus/164319-proving-z-w-z-w.html
# Thread: 1. ## Proving ||z|-|w|| <= |z-w| Hi, I stumbled upon a simple demonstration but I can't solve it: ||z|-|w|| <= |z-w| Can someone help me? Pedro 2. Originally Posted by Pedro Hi, I stumbled upon a simple demonstration but I can't solve it: ||z|-|w|| <= |z-w| Can someone help me? Pedro This is the reverse triangle inequality. 3. ok, thank you! 4. This proof relies on this fact: $a \geqslant 0\;\& \, - a \leqslant b \leqslant a\, \Leftrightarrow \,\left| b \right| \leqslant \left| a \right|$. So $\left| u \right| \leqslant \left| {u - w} \right| + \left| w \right|\, \Rightarrow \,\left| u \right| - \left| w \right| \leqslant \left| {u - w} \right|$ Likewise $\left| w \right| - \left| u \right| \leqslant \left| {w - u} \right| = \left| {u - w} \right|$ Thus we now have $- \left( {\left| {u - w} \right|} \right) \leqslant \left( {\left| u \right| - \left| w \right|} \right) \leqslant \left( {\left| {u - w} \right|} \right)$ Can you use the fact to finish? 5. I understood what is discussed above, but I would like to know how to explain and justify that -|a| ≤ a ≤ |a| and -|b| ≤ b ≤ |b|. Can anybody help me to prove this? All I know is that a negative is of course less than a positive number. 6. Please start a new thread for any new question. You said that you understand that $a \geqslant 0\;\& \, - a \leqslant b \leqslant a\, \Leftrightarrow \,\left| b \right| \leqslant \left| a \right| .$ If you do, can we say that $|a|\le ||a||~?$ If so then is it not true that $-|a|\le a \le |a|~?$ 7. The fact is that any nonnegative number is equal to its absolute value. So if $\displaystyle a \geq 0$, then $\displaystyle a = |a|$. However, any negative number is less than its absolute value (since the absolute value is always nonnegative). So if $\displaystyle a < 0$, then $\displaystyle a < |a|$. So that means for any $\displaystyle a$ that $\displaystyle a \leq |a|$. A similar argument works the other way to show $\displaystyle -|a| \leq a$.
So that means $\displaystyle -|a| \leq a \leq |a|$. 8. Thank you! So the same holds for -|b| ≤ b ≤ |b|.
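The inequality is easy to stress-test numerically for complex numbers; this is a sanity check of the statement, not a proof (the small tolerance only absorbs floating-point rounding):

```python
import random

random.seed(0)
for _ in range(10_000):
    z = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    w = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    # reverse triangle inequality: | |z| - |w| | <= |z - w|
    assert abs(abs(z) - abs(w)) <= abs(z - w) + 1e-12
```

Equality occurs exactly when z and w point in the same direction, e.g. w a positive multiple of z, which matches the equality case of the ordinary triangle inequality used in post 4.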
http://math.stackexchange.com/questions/7324/transformation-of-random-variables/7326
# Transformation of Random Variables Suppose $f_{X}(x) = xe^{-x^2/2}$ for $x>0$ and $Y = \ln X$. Find the density function for $Y$. So we want to find $P(Y \leq y)$. This is the same thing as $P(\ln X \leq y)$ or $P(X \leq e^{y})$. Thus $f_{Y}(y) = f_{X}(e^y)$? Or does $f_{Y}(y) = F_{X}(e^y)$ since $P(X \leq x) = F_{X}(x)$? - ## 1 Answer You're correct up to the point where you have $P(Y \leq y) = P(\ln X \leq y) = P(X \leq e^y)$. The correct next step, though, is that $F_Y(y) = F_X(e^y)$. To obtain $f_Y(y)$, differentiate both sides of this equation with respect to $y$ (and don't forget to use Leibniz's rule on the right-hand side). (In case you don't remember, Leibniz's rule is the first part of the Fundamental Theorem of Calculus combined with the chain rule.) - Thus $f_{Y}(y) = f_{X}(e^y) \cdot e^y$. – PEV Oct 20 '10 at 18:12 That's correct. – Mike Spivey Oct 20 '10 at 18:13 I guess another way to do this is the following: $Y = \ln X \Rightarrow X = e^{Y}$. So $f_{Y}(y) = f_{X}(e^y) \cdot e^y$ (e.g. multiply by the derivative). But this method only works for bijective functions? – PEV Oct 20 '10 at 18:13
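The conclusion $F_Y(y)=F_X(e^y)$ can be sanity-checked by simulation. For this density, $F_X(x)=1-e^{-x^2/2}$ (the standard Rayleigh distribution), so inverse-transform sampling gives $X=\sqrt{-2\ln(1-U)}$; setting $Y=\ln X$, the empirical CDF of $Y$ should match $F_X(e^y)$. A rough sketch (the sample size and tolerance are my own choices):

```python
import math
import random

random.seed(42)

def sample_x():
    # inverse-transform sampling: F_X(x) = 1 - exp(-x^2/2)  =>  x = sqrt(-2 ln(1-u))
    return math.sqrt(-2.0 * math.log(1.0 - random.random()))

def F_Y(y):
    # F_Y(y) = P(Y <= y) = P(X <= e^y) = F_X(e^y)
    return 1.0 - math.exp(-math.exp(2.0 * y) / 2.0)

n = 100_000
samples = [math.log(sample_x()) for _ in range(n)]
for y0 in (-0.5, 0.0, 0.2, 0.5):
    empirical = sum(1 for y in samples if y <= y0) / n
    assert abs(empirical - F_Y(y0)) < 0.01
```

Differentiating `F_Y` reproduces the density from the comments, $f_Y(y)=f_X(e^y)\,e^y$, by the chain rule.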
http://mathoverflow.net/questions/1931/bimodules-in-geometry/1943
## Bimodules in geometry Grothendieck's approach to algebraic geometry in particular tells us to treat all rings as rings of functions on some sort of space. This can also be applied outside of scheme theory (e.g., Gelfand-Neumark theorem says that the category of measurable spaces is contravariantly equivalent to the category of commutative von Neumann algebras). Even though we do not have a complete geometric description for the noncommutative case, we can still use geometric intuition from the commutative case effectively. A generalization of this idea is given by Grothendieck's relative point of view, which says that a morphism of rings f: A → B should be regarded geometrically as a bundle of spaces with the total space Spec B fibered over the base space Spec A and all notions defined for individual spaces should be generalized to such bundles fiberwise. For example, for von Neumann algebras we have operator valued weights, relative L^p-spaces etc., which generalize the usual notions of weight, noncommutative L^p-space etc. In noncommutative geometry this point of view is further generalized to bimodules. A morphism f: A → B can be interpreted as an A-B-bimodule B, with the right action of B given by the multiplication and the left action of A given by f. Geometrically, an A-B-bimodule is like a vector bundle over the product of Spec A and Spec B. If a bimodule comes from a morphism f, then it looks like a trivial line bundle with the support being equal to the graph of f. In particular, the identity morphism corresponds to the trivial line bundle over the diagonal. For the case of commutative von Neumann algebras all of the above can be made fully rigorous using an appropriate monoidal category of von Neumann algebras. This bimodule point of view is extremely fruitful in noncommutative geometry (think of Jones' index, Connes' correspondences etc.)
However, I have never seen bimodules in other branches of geometry (scheme theory, smooth manifolds, holomorphic manifolds, topology etc.) used to the same extent as they are used in noncommutative geometry. Can anybody state some interesting theorems (or theories) involving bimodules in such a setting? Or just give some references to interesting papers? Or if the above sentences refer to the empty set, provide an explanation of this fact? - ## 8 Answers In "commutative geometry," I think bimodules tend to be a little concealed. People are more likely to talk about "correspondences" which are the space version of bimodules: A correspondence between spaces X and Y is a space Z with maps to X and Y. When you think in this langauge, there are lots of examples you're missing. For example, the right notion of a morphism between two symplectic manifolds is a Lagrangian subvariety of their product, or even a manifold mapping to their product with Lagrangian image (maybe not embedded). See, for example, Wehrheim and Woodward's functoriality for Lagrangian correspondences in Floer homology Similarly, correspondences are incredibly important in geometric representation theory. See, for example, the work of Nakajima on quiver varieties. The theory of stacks also is at least partially founded on taking correspondences seriously as objects, and in particular being able to quotients by any (flat) correspondence. This same philosophy also underlies groupoidification as studied by the Baez school (they tend to use the word "span" instead of "correspondence" but it's the same thing). - I think groupoidification has more to do with 2-categories of spans, which the 2-category of rings and bimodules is not as far as I know. Still, the ideas are similar as you say. – Reid Barton Oct 24 2009 at 16:26 Well, if you ask me groupoidification is just a hands-on language looking at the finite field (including F_1) points of correspondences of stacks, which makes it look a lot more bimodular. 
– Ben Webster♦ Oct 24 2009 at 17:00 Here's a theorem from derived algebraic geometry: if A and B are A∞ algebras (think associative algebras) then giving an A-B-bimodule is the same as giving a functor from {right A-modules} to {right B-modules} which preserves colimits (equivalently, has a right adjoint). The correspondence sends ${}_AM_B$ to the functor $-\otimes_A {}_AM_B$. Under this correspondence, the tensor product of bimodules over the middle algebra is realized by composition of functors. - 1 Most of the theories have some version of the Eilenberg-Watts theorem (all (homotopy) cocontinuous functors come from bimodules). They usually also have some version of Mitchell's theorem that characterizes categories of modules. Combined together they give Morita equivalence theory that establishes an equivalence between the bicategory of rings, modules, and intertwiners and the appropriate bicategory of categories, functors, and natural transformations. But this is a universal phenomenon. It would be more interesting to find concrete applications in specific cases (like Ben Webster's examples). – Dmitri Pavlov Oct 22 2009 at 21:18 1 Can you give a statement of Mitchell's theorem in the DAG case (or a reference)? – Reid Barton Oct 22 2009 at 22:34 I have a feeling that I might have seen something like this in one of Lurie's papers (DAG-I?), but I might be wrong on this issue. An abelian category is enriched over abelian groups, which correspond to spectra in the derived case, so our category should be enriched over spectra (looks like a stable (∞,1)-category?). It should also be (homotopy) cocomplete and have some sort of (homotopy) generator. I guess one should look at the proof of the usual algebraic theorem to guess exact conditions that guarantee the existence of an equivalence.
– Dmitri Pavlov Oct 26 2009 at 17:58 Thanks, this must be Theorem 4.4.9 of DAG II. (It's not quite what I was hoping for, since it refers to objects of the category. I would like a characterization which only refers to the category viewed as an object of the (∞,2)-category of stable presentable (∞,1)-categories. But maybe that's too much to ask.) – Reid Barton Oct 26 2009 at 18:51 The Fourier-Mukai transform comes from a bimodule: the Poincaré bundle. Let A be an abelian variety; the Poincaré bundle P is a vector bundle on A×Â coming from the fact that the points in the dual abelian variety Â parametrize line bundles on A (P is the universal family). In the Fourier-Mukai construction, P is used as an O_A-O_Â-bimodule to produce a functor between the derived categories of coherent sheaves on A and Â via a push-pull construction. - Bimodules are in the very nature of noncommutative algebraic geometry. There is some work on noncommutative algebraic geometry in the Grothendieck style, and bimodules come up naturally in this story. 1. Among the most important concepts in NCAG are monads and comonads, via the Barr-Beck theorem (the categorical version of Grothendieck flat descent) for noncommutative schemes. We have the following theorem: Let X be a quasi-compact and quasi-separated (noncommutative) scheme, let u[i]: U[i] ---> X be an affine cover, with U = coproduct of the U[i] ---> X and A[U] = product of O_U(U[j]); then Qcoh(U) = coproduct of the Qcoh(U[i]) = A[U]-mod. Then, according to Beck's theorem, we have Qcoh(X) = G[f]-Comod, where G[f] = (M[f] tensored over A[U], delta) is a comonad on Qcoh(U), with comonad structure maps (comultiplication and counit) M[f] tensor over A[U] M[f] <------ M[f] ----> A[U]. In particular, if the scheme X is semiseparated (say, an algebraic variety), M[f] is an A[U] tensor A[U]^op module (it is an A[U]-bimodule).
In other words, G[f] is a coalgebra in the monoidal category of A[U] tensor A[U]^op modules (A[U]-bimodules). The reference for Beck's theorem for noncommutative schemes (mentioned above) is Maxim Kontsevich and Alexander L. Rosenberg, Noncommutative spaces and flat descent. (This paper is in the Max Planck preprint series; it is online.) 2. Another reference is Maxim Kontsevich and A. L. Rosenberg, Noncommutative smooth spaces http://arxiv.org/PS_cache/math/pdf/9812/9812158v1.pdf where bimodules are used to define the covers for a noncommutative space. 3. If one needs to define differential operators on a general noncommutative space (such as an abelian category), in particular an affine scheme, and noncommutative D-modules (in particular, quantum D-modules), one needs differential bimodules. The reference is: V. A. Lunts and A. L. Rosenberg, Differential Calculus in Noncommutative Algebraic Geometry I and II. (These are also in the Max Planck preprint series.) 4. One needs bimodules to develop the machinery to treat noncommutative Grassmannian-type spaces and the Tannaka formalism in a noncommutative nonsymmetric monoidal category. Reference: M. Kontsevich and A. L. Rosenberg, Noncommutative Grassmannian and related constructions. (MPIM preprint series.) - The paper Adam Nyman, The Eilenberg-Watts theorem over schemes, available at the arXiv, studies the connection between cocontinuous functors $Qcoh(Y) \to Qcoh(X)$, which are there called bimodules, and $Qcoh(X \times Y)$ in detail. - In a paper from 1985, Raeburn and J. Taylor describe how to view all elements of H^2(X,Gm) (etale cohomology) as coming from non-unital Azumaya algebras. The construction relies on bimodule theory for these algebras. - Let S be a scheme of positive characteristic p and X an S-scheme. If F denotes the absolute Frobenius, then we can pull back O_X via F. In the affine case, say S = Spec k, X = Spec R, this corresponds to tensoring R over F with R, hence we get a bimodule: for $r, f \in R$ we get $fr = r^p f$.
This bimodule is the beginning of the theory of "unit F-crystals" and a positive characteristic version of the Riemann-Hilbert correspondence! See for example this survey by Emerton-Kisin. - Well, in commutative algebra, you have the fact that any left module is also a right module, so the notion of bimodule can be considered a bit redundant. Any time that an algebraic geometer (or other) uses a module at all, it's generally an A-A bimodule, but we don't think of that, because it's not different from a left A-module or a right A-module, like it is in the noncommutative case. - Well, this is also the case for noncommutative von Neumann algebras: A left M-module is the same thing as a right M^op-module. In particular, an M-N-bimodule is the same thing as a left M⊗N^op-module for an appropriate monoidal structure on von Neumann algebras. This does not imply that we cannot have an interesting theory of bimodules over von Neumann algebras, in fact we do have such a theory. – Dmitri Pavlov Oct 22 2009 at 20:17 I believe the point Charles is making is the fact that in commutative land giving some abelian group M the structure of a left A-module is equivalent to giving it the structure of a right A-module. So there is nothing really new happening there. However in noncommutative land giving M an A-A-bimodule structure is harder since you have to find two different module structures that interact nicely. But this is obviously just a partial answer since it only talks about A-A-bimodules and not A-B-bimodules in general. – Grétar Amazeen Oct 22 2009 at 21:48 Grétar, I do not quite follow your reasoning. In the commutative case we can also have A-A-bimodules where the left and right actions are different and so we get a nontrivial A-A-bimodule. – Dmitri Pavlov Oct 26 2009 at 17:50 That's true. It was written in haste. I guess I meant that they seem to pop up more often, and perhaps more naturally, when dealing with noncommutative things.
But as for the reason for that I do not know. – Grétar Amazeen Oct 26 2009 at 19:24
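The Frobenius bimodule in the positive-characteristic answer above works because $r \mapsto r^p$ is a ring homomorphism in characteristic $p$: multiplicativity is clear, and additivity is the "freshman's dream" $(r+s)^p = r^p + s^p$. Here is a quick numerical sanity check of that identity over $\mathbb{F}_5[x]$, with polynomials represented as coefficient lists; the helper functions are ad hoc, not from any library:

```python
# Freshman's dream in characteristic p: (r + s)^p = r^p + s^p,
# the additivity behind the twisted action r . f := r^p f on the
# Frobenius bimodule. Polynomials over F_p as dense coefficient lists.
p = 5

def add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def power(a, k):
    out = [1]
    for _ in range(k):
        out = mul(out, a)
    return out

r = [1, 2, 0, 3]   # 1 + 2x + 3x^3
s = [4, 0, 1]      # 4 + x^2
print(power(add(r, s), p) == add(power(r, p), power(s, p)))  # True
```

Multiplicativity $(rs)^p = r^p s^p$ similarly gives associativity of the twisted action, so the left and right actions genuinely differ while everything remains a module structure.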
http://math.stackexchange.com/questions/295959/is-this-sloppy-writing-for-limits?answertab=active
# Is this sloppy writing for limits? Please note that I am not asking you to compute or show me how to do this limit. I am asking how to write out a clean and formal solution that is free of any error, ambiguity, or sloppiness. Given $$\lim\limits_{(x,y) \to (0,0)} \dfrac{3x^2 y}{x^2 + y^2}$$, find its limit So find the limit along $y = mx$ and let $f(x,y) = \dfrac{3x^2 y}{x^2 + y^2}$. So we have $f(x,mx) = \dfrac{3x^2 mx}{x^2 + m^2 x^2} = \dfrac{3mx}{1 + m^2}$ Here is the part I am not so hot on. Can I write this? $\lim\limits_{(x,y) \to (0,0)} f(x,y) = \lim\limits_{(x,y) \to (0,0)} f(x,mx) = \lim\limits_{(x,y) \to (0,0)} \dfrac{3mx}{1 + m^2}= 0$ And conclude the limit is indeed $0$ through any line. (a formal justification involves epsilon-delta, but I omit it here because that is another question for another time). I am thinking that the first equality sign is wrong. Remark Most books I've read seem to do everything without the limit operator. Stewart for instance just argues the limit is this and this along this path and that path. I want to do my answers with the limit operators - 1 You're right: the first equality is premature. You've shown only that the limit is $0$ when you approach the origin along a straight line. To claim that the limit is actually $0$, you have to show that it's $0$ no matter how you approach the origin; there are functions that behave very nicely along straight lines but not along more complicated paths. – Brian M. Scott Feb 6 at 2:08 You're right. The expression in the middle does not make sense. – julien Feb 6 at 2:09 The second equality sign is justified though right? Provided I made my argument that "we are going to take the limit along this and that path" and then my conclusion should be followed cleanly without any loss in rigor or error? – sizz Feb 6 at 2:10 For the second and third terms to make sense, you need to replace $(x,y)\rightarrow(0,0)$ by $x\rightarrow 0$.
– julien Feb 6 at 2:11 1 Yes, but you should write $$\lim_{x\to 0}f(x,mx)=\lim_{x\to 0}\frac{3mx}{1+m^2}=0\;.$$ – Brian M. Scott Feb 6 at 2:12 show 7 more comments ## 2 Answers There are so many ways to approach (0; 0); approaching along the lines $y = mx$ is one way, other ways are approaching along the curves $y = x^2$, or $y = \sqrt{x}$, or $y = x^3$, or... any curve that passes through (0; 0). So, pointing out that as $(x; y) \rightarrow (0; 0)$ along the lines $y = mx$ the limit is 0 is definitely not enough to show that the limit does exist. Here's a counter-example. ## Example Evaluate $\lim\limits_{\substack{x \rightarrow 0\\y \rightarrow 0}} \dfrac{x^2 y}{x^4 + y^2}$ You can show that the limit is always 0 as $(x; y) \rightarrow (0; 0)$ along any line $y = mx$. I'll leave this to you; it should be as easy as a piece of cake. Let's try it. :) But if you let $(x; y) \rightarrow (0; 0)$ along the curve $y = x^2$, then you'll have: $\lim\limits_{\substack{x \rightarrow 0\\y \rightarrow 0}} \dfrac{x^2 y}{x^4 + y^2} = \lim\limits_{x \rightarrow 0} \dfrac{x^4}{x^4 + x^4} = \dfrac{1}{2}$. So that limit does not exist. You can change it into polar coordinates, like this: Let $\left\{ \begin{array}{l} x = r \cos \varphi \\ y = r \sin \varphi \end{array} \right.$; when $(x; y) \rightarrow (0; 0)$ it means that $r \rightarrow 0$, and $\varphi$ can vary freely. So, to show that the limit exists, all you must do is to show that as $r \rightarrow 0$, with $\varphi$ taking any value, the limit stays the same. Like this: $\lim\limits_{\substack{x \rightarrow 0 \\ y \rightarrow 0}}\dfrac{3x^2y}{x^2 + y^2} = \lim\limits_{r \rightarrow 0}\dfrac{3r^3\cos^2 \varphi \sin \varphi}{r^2} = \lim\limits_{r \rightarrow 0} 3r\cos^2 \varphi \sin \varphi = 0$. Of course the limit of the final expression does not depend on the value of $\varphi$; you can use the Squeeze Theorem to see this: $-3r \le 3r \cos^2 \varphi \sin \varphi \le 3r$.
Another way to prove this is to notice that: • $|f(x)| \rightarrow 0 \Leftrightarrow f(x) \rightarrow 0$. • $x^2 + y^2 \ge 2 |xy|$ So, we have: $0 \le \left| \dfrac{3x^2y}{x^2 + y^2} \right| \le \left| \dfrac{3x^2y}{2xy} \right| = \dfrac{3}{2} \left| x \right|$ Now, take the limit of the whole thing as $(x; y) \rightarrow (0; 0)$, and apply the Squeeze Theorem. :) - Just to see how bad things can be while being totally nice along straight lines. Consider the function $f:\mathbb R^2 \to \mathbb R$ that is defined like this. For any point $(x,y)$ consider the slope of the line through the origin on which it lies (take care of the case $(x,y)=(0,0)$ separately). Then if the slope is not of the form $1/n$, $n\in \mathbb N$ then set $f(x,y)=0$. Otherwise $(x,y)$ lies on a line through the origin of slope $1/n$ and then set $f(x,y)=0$ if the distance from $(x,y)$ to the origin is less than $1/n$, and set $f(x,y)=1$ otherwise. The limit at the origin along any straight line is $0$ though the global limit at the origin does not exist.
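The path dependence in the counter-example $x^2y/(x^4+y^2)$ is easy to see numerically as well. A small sketch in plain Python (the test values of $t$ are chosen arbitrarily): along the line $y = 3x$ the values drain to $0$, while along the parabola $y = x^2$ they sit at $1/2$.

```python
# f(x, y) = x^2 y / (x^4 + y^2): the limit at (0, 0) depends on the path.
def f(x, y):
    return x**2 * y / (x**4 + y**2)

for t in [1e-2, 1e-4, 1e-6]:
    # along the line y = 3x the values tend to 0;
    # along the parabola y = x^2 they hover at 1/2
    print(f(t, 3 * t), f(t, t**2))
```

Since the two columns approach different values, no single limit at the origin can exist, matching the argument above.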
http://math.stackexchange.com/questions/4406/difference-or-relation-between-inference-reasoning-deduction-and-induction/6536
# Difference or relation between Inference, Reasoning, Deduction, and Induction? What is the precise difference or relation between these terms in logic: Inference, Reasoning, Deduction, and Induction? - This does not appear to be a mathematical question. There are no formal definitions of "inference" and "reasoning" in mathematical logic in the sense that the question seems to intend. – Carl Mummert Sep 12 '10 at 12:49 1 Community wiki then? I see no reason why we should dismiss this question outright -- we're not held to MO's standards, and soft questions like this can be as important to people learning math as help with specific problems. And there have been questions about books, philosophy, notation, etc. recently. – Paul VanKoughnett Oct 11 '10 at 16:35 I've voted to close this as I don't consider it a question about mathematics. I'd advise, though, posting it somewhere else (I can't say where), perhaps in some logic forum...? – DonAntonio Sep 10 '12 at 18:13 ## 2 Answers Morteza, I'd guess you'll get different answers from everyone you ask. Here is the way I see it (which is, by necessity, somewhat circular): Inference, in the narrowest sense, is a single step in a deductive chain. If I know $P\rightarrow Q$ and $\neg Q$, I can infer $\neg P$ from modus tollens (which itself is sometimes called a "rule of inference"). More loosely, we can call any conclusion from premises an inference, even if it's not properly deductive (i.e. I look outside, see a clear sky, and infer that it's not raining). Reasoning is the mental process of logic -- what goes on inside my head when I use deduction. Though a lot of people would probably use it as a synonym of "deduction," I'd say the differences are far more important. When you reason, you skip steps, explore multiple pathways, and use your intuition, which are all things unavailable to, say, a computer. Deduction is the formal process of logic, and an inference is deductive when it follows from an axiom or logical rule.
This is the makeup of most mathematical proofs. Induction has two meanings. The first is some sort of mathematical induction (strong, weak, or transfinite), all of which are based on the idea of an infinite deductive chain that can be collapsed into a few steps. For instance, in normal (weak) induction, you prove the proposition $P(0)$, and you prove that $\forall n:P(n)\rightarrow P(n+1)$. Modus ponens then gives you $P(1)$, then $P(2)$ using $P(1)$, then $P(3)$ using $P(2)$, and so forth. In all but the most intuitionist systems of arithmetic/set theory, induction is either provable or taken as an axiom, so its use is properly a deductive inference! However, there is another, more abstract meaning of induction that contrasts properly with deduction. The logician Charles Sanders Peirce had an analogy with bags of beans that I find compelling. Deduction has you applying a generalization to a specific case to get a result. So you know that all the beans in my bag are white (the generalization), and you take a bean from my bag (the case): then that bean must be white (the result). Induction has you taking a case and a result and generalizing them. You take a bean from my bag (the case), and you see it is white (the result): therefore, all the beans in my bag are white (generalization). In this rudimentary form, induction looks like guesswork, and nobody would trust this logic. But if you took five or ten beans, and they were all white, your confidence in the generalization would increase. In fact, everybody uses this form of induction many times a day: you expect the sun to rise, your bed to be in the same place, your food to cook properly, etc. just because they've all happened many times before. This kind of inference is not deductive at all, yet we still trust it. Abduction was the third of Peirce's forms of inference. It is the most suspect of the three, and is best thought of as a sort of "reverse implication." 
The idea is to take a generalization and a result and observe that a case is probably true because it implies the result. So you have a white bean (the result), and you know that all the beans in my bag are white (the generalization): thus, this bean must be from my bag, for if it were, it would have to be white. This seems like the sort of reasoning you see a lot on detective shows. - If your question is about philosophical matters of logic, then I think that you may find very interesting the following paper authored by a leading logician (see also other papers in the same handbook). Wilfrid Hodges, The scope and limits of logic pp. 41-63 in Handbook of the Philosophy of Science: Philosophy of Logic, ed. Dale Jacquette, 2007, Elsevier, Amsterdam -
http://nrich.maths.org/1310
### Bendy Quad Four rods are hinged at their ends to form a convex quadrilateral. Investigate the different shapes that the quadrilateral can take. ### Long Short A quadrilateral inscribed in a unit circle has sides of lengths s1, s2, s3 and s4 where s1 ≤ s2 ≤ s3 ≤ s4. Find a quadrilateral of this type for which s1 = √2 and show s1 cannot be greater than √2. Find a quadrilateral of this type for which s2 is approximately √3 and show that s2 is always less than √3. Find a quadrilateral of this type for which s3 is approximately 2 and show that s3 is always less than 2. Find a quadrilateral of this type for which s4 = 2 and show that s4 cannot be greater than 2. ### Diagonals for Area Prove that the area of a quadrilateral is given by half the product of the lengths of the diagonals multiplied by the sine of the angle between the diagonals. # The Cyclic Quadrilateral ##### Stage: 3 and 4 Article by Toni Beardon Cyclic quadrilaterals are quadrilaterals with all four of their vertices on a circle. You can have cyclic polygons of any number of sides. Not all quadrilaterals are cyclic. Perhaps you can draw a quadrilateral that is not cyclic - how do you know it is not cyclic? All triangles are cyclic - how could you prove this? Cyclic quadrilaterals have some interesting features and in this brief article we invite you to look at some of them and we suggest ideas for looking further: Consider any cyclic quadrilateral $P Q R S$ with vertices on a circle centre $C$. Draw chord $Q S$ and radii $P C$, $Q C$ and $S C$. Let angle $Q P C=x$ degrees and angle $S P C=y$ degrees. This means that angle $Q P S=(x+y)$ degrees. The angles of the isosceles triangle $P Q C$ are $x$ degrees, $x$ degrees and $(180-2x)$ degrees. The angles of the isosceles triangle $P S C$ are $y$ degrees, $y$ degrees and $(180-2y)$ degrees. Angle $Q C S=360$ degrees $-$ angle $Q C P -$ angle $S C P=(2x+2y)$ degrees.
This shows that angle $Q C S$ is twice angle $Q P S$. We say that angle $Q P S$ is subtended by the arc $Q R S$ and this basic property leads to the following theorems. You might like to use the Geoboard environment and some of the problems that were published in July 2005 to help investigate these ideas practically before moving into the theory. Theorem 1. The angle at the centre of a circle is twice the angle at the circumference subtended by the same arc. Because angle $Q C S$ is the same for all positions of $P$, Theorem 1 shows angle $Q P S$ is the same regardless of where $P$ lies. See this problem for a practical demonstration of this theorem. Theorem 2. All angles in the same segment of a circle are equal (that is angles at the circumference subtended by the same arc). Theorem 3. The angle subtended by a semicircle (that is the angle standing on a diameter) is a right angle. See this problem for a practical demonstration of this theorem. Theorem 4. Opposite angles of a cyclic quadrilateral add up to 180 degrees. See this problem for a practical demonstration of this theorem. The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
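Theorem 4 is easy to check numerically: put four points in angular order around the unit circle and compare the pairs of opposite angles. A small sketch in Python (the function and variable names are ad hoc):

```python
import math
import random

def angle_deg(p, a, b):
    """Interior angle at vertex p between rays p->a and p->b, in degrees."""
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    c = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

random.seed(1)
# Four points in angular order around the unit circle give a
# convex cyclic quadrilateral PQRS.
ts = sorted(random.uniform(0, 2 * math.pi) for _ in range(4))
P, Q, R, S = [(math.cos(t), math.sin(t)) for t in ts]

# Theorem 4: each pair of opposite angles sums to 180 degrees.
print(angle_deg(P, S, Q) + angle_deg(R, Q, S))  # angle P + angle R, ~180
print(angle_deg(Q, P, R) + angle_deg(S, R, P))  # angle Q + angle S, ~180
```

Changing the seed moves the vertices around the circle, but both printed sums stay at 180 up to floating-point error, exactly as the derivation above predicts.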
http://physics.stackexchange.com/questions/53558/what-would-happen-if-large-hadron-collider-would-collide-electrons
What would happen if the Large Hadron Collider would collide electrons? After some reading about the Large Hadron Collider and its very impressive instruments to detect and investigate the collision results, there is a remaining question. What would happen if the scientists would use leptons instead of hadrons? Especially: What would happen if they would collide electrons? Isn't it intrinsic that all particles consist of smaller particles? With current technology, could we detect them? - 4 @EduardoGuerrasValera: that's a very strange idea. I don't know any working physicist who believes that leptons are composite. – Vibert Feb 10 at 22:33 1 I'm with Kostya in that answers not mentioning LEP are missing an important point, though I don't think "Do you know about LEP?" is enough to answer the question. – dmckee♦ Feb 10 at 22:53 1 @Vibert, as long as it is not needed for explaining something or at least simplifying the existing models, there is no need for that but... It seems sort of fishy that the electron carries exactly three times the electric charge of a quark. – Eduardo Guerras Valera Feb 11 at 0:27 2 Hi Mare Infinitus, and welcome to Physics Stack Exchange! Could you be more specific about what aspect of electron collisions you're curious about? For example, "If the LHC could collide electrons, would we be able to detect their constituents, if they have any?" is much better than "What would happen if...?" – David Zaslavsky♦ Feb 11 at 2:52 1 @EduardoGuerrasValera Well, at least within the context of grand unified theories the charges of the leptons and quarks are fixed by group theory without invoking any composite structure of leptons. Similarly you can argue for charge relationships within the standard model by anomaly cancellation, and anomaly matching conditions place very strong constraints on any theory of lepton or quark compositeness. Nevertheless such models have been proposed and model-independent searches performed but so far nothing has turned up.
– Michael Brown Feb 11 at 8:54 show 3 more comments ## 6 Answers First of all -- it wouldn't be called "the Large Hadron Collider", right? Looks like one would rather call it something like "Large Electron-Positron Collider". In that case one definitely would need another abbreviation for it. Something like "LEP" instead of "LHC"... Now, guess what was there in the same tunnel before? Edit: since my shenanigan got popular, I'll elaborate. • Yes, they actually were colliding electrons and positrons, not electrons with electrons. Mainly because of the richer physics of such collisions. (But from my theorist's point of view: a positron is just an electron going back in time.) • Why the same tunnel? Perhaps surprisingly, the tunnel takes a substantial part of the cost of an accelerator. Digging a new one for the LHC would have definitely burnt a large hole in CERN's pocket. • Given a fixed circular tunnel (its radius) you actually have a bound on the energy you can have for your particles. Due to synchrotron radiation -- see @emarti's answer for more. • 27 kilometers seems to be a reasonable limit on the size of a circular tunnel. (Actually people think about 233 km, but that sounds crazy to me.) So the next accelerator most probably will be linear and it will be electron-positron. P.S. Have you heard of a Photon Collider? - An electron-positron collision is rather different from an electron-electron collision – Martin Beckett Feb 10 at 22:21 @MartinBeckett: it seems reasonable to assume that OP meant e+e- and not e-e- collisions. Just try and draw some tree level diagrams for the latter - it's a fun exercise, but not so nice in terms of EW physics. – Vibert Feb 10 at 22:37 @Kostya, nice link!(+1) You fooled me, I already was about to downvote. Why did they dismantle LEP to build the LHC? Is it not a waste? Couldn't they simply build the LHC in another, nearby place?
– Eduardo Guerras Valera Feb 11 at 1:48 4 @EduardoGuerrasValera - there weren't many spare 27km long tunnels available! Colliders have a natural life, once they have generated a statistical number of events they are no further use, the data remains to be further analysed but new colliders will target new energies and new physics. – Martin Beckett Feb 11 at 3:12 @MartinBeckett: Nice answer! – Mare Infinitus Feb 11 at 17:33 show 1 more comment There are two points in answering this question: 1. Design: The design of the collider would have to be different. Electrons/positrons in a cyclotron radiate synchrotron radiation when they are accelerated (which itself is a useful device). To get above a few GeV, researchers use linear accelerators, such as SLAC. The proposed International Linear Collider is a design intended to reach TeV energies, close to what the LHC already achieves with protons. 2. Science: Yes, electron-electron or electron-positron collisions are very useful in studying particle physics. The signal is 'cleaner', since electrons are not composite particles, and it's easier to calculate the cross-sections. By contrast, it is very challenging to calculate how two colliding protons, with six quarks, will decay, to say the least. A classic story of how electron-positron and proton-proton colliders complement each other was the discovery of the J/psi meson. The general idea I've heard is that proton-proton colliders can reach higher energies, but electron-positron colliders tend to have better energy resolution and cleaner signals. - 1 Good answer; to improve it, you should mention LEP! – Vibert Feb 10 at 22:33 Note that there is currently a lot of interest in electron–ion collisions, though mostly on the intensity frontier rather than the energy frontier (to use the current vernacular). – dmckee♦ Feb 10 at 22:56 I don't get point one. They use protons and ions now, which are also charged particles.
So they also radiate synchrotron radiation; what's the difference besides the mass? – Noldig Feb 11 at 8:46 1 @Noldig The mass is the key difference. The radiated power is proportional to $m^{-4}$ (see the wiki page), so electrons radiate $10^{13}$ times more than protons! – Michael Brown Feb 11 at 8:59 That's ok, but for me point one suggests that there is no synchrotron radiation at all. Maybe one could include the m^-4 behaviour in the answer and say that they would radiate much more synchrotron radiation than protons – Noldig Feb 11 at 9:32 What would happen if the scientists would use leptons instead of hadrons? They would go in the opposite direction - electrons are the opposite charge to protons. The LHC doesn't use electrons because protons are 2000x heavier so you get a lot more energy in the collision. Especially: What would happen if they would collide electrons? Not a lot - they would bounce off each other. Electrons (as far as we know) don't break apart (and not at these low energies) Isn't it intrinsic that all particles consist of smaller particles? Probably not. We don't know of any components of an electron, but we also don't know why it should have the same charge as a proton when it's not made of the same stuff - so I wouldn't take any bets. With current technology, could we detect them? Then logically those would be the smallest particles we could detect ... and so on .... - Thank you for your fast answer! – Mare Infinitus Feb 10 at 21:23 @MareInfinitus - somebody who is more familiar with the LHC will be along to provide a better answer soon, but it's late on Sunday night at CERN – Martin Beckett Feb 10 at 21:27 – emarti Feb 10 at 21:55 If the electron has constituents it must break apart at "some" energy!
And if it doesn't - you need a good explanation of why it has the same charge as the quarks. – Martin Beckett Feb 10 at 22:20

The very basic reason LEP stopped going to higher energies (it reached over 200 GeV centre of mass at its last stage, LEP II) and the tunnel was reused for the LHC is synchrotron radiation. Note that the radiated power is proportional to $1/m^4$. It is not possible to feed a circular beam of electrons the energy needed to raise it to higher energies at the radius of LEP; it is a losing game - the energy would go into feeding synchrotron radiation. The reason the same radius can be used for much higher energy protons is the ratio of the masses of electron and proton. Synchrotron radiation from bending is not present in linear colliders, and that is why the next electron-positron accelerator will be a linear collider, the ILC.

Edit

What would happen if the scientists would use leptons instead of hadrons?

It has happened at LEP and LEP II with electrons on positrons. If the scattering is not elastic, a lot of hadrons appear, as well as leptons and Z bosons. The data from LEP confirmed the calculations of the standard model for elementary particles to great accuracy.

Especially: What would happen if they would collide electrons?

From the previous paragraph, the standard model predicts what would happen if electrons were scattered on electrons: all the variants of the Feynman diagram possibilities would appear also. -

This is an interesting question, for which some interesting answers have been given from various points of view. I am interested in what would happen in the case of electron-electron collisions at these high energies. It is true that at low energies the two electrons will bounce off each other, not much happening. At very high energies, however, I thought that the following outcome might be possible (it is possible to draw a Feynman diagram to calculate the probability amplitude for such an event.)
Sorry, I have not learnt how to bring in drawings, so I will just describe it: As the $e^-$ and $e^-$ approach each other they could interact via the weak force. Electron 1 would emit a $W^-$ and turn into a $\nu_e$, which would emerge as a free particle. The $W^-$ could interact electromagnetically with electron 2, and both particles would scatter in some other directions. The $W^-$ would then decay into $e^- + \bar{\nu}_e$. Therefore, the $e^-$-$e^-$ collision could serve as a $\nu_e$-$\bar{\nu}_e$ generating machine? $e^- + e^- \to e^- + e^- + \nu_e + \bar{\nu}_e$. Unfortunately, we don't know of any other mechanism to forecast entirely new physics. -

I nearly upvoted this for "Unfortunately, we don't know of any other mechanism to forecast entirely new physics." `:)` – Michael Kjörling Feb 11 at 8:43
At LHC energies many other processes would be possible. The Ws could decay into many other particles, or interact and produce a Higgs, or a photon-mediated exchange could produce quark/antiquark pairs.... many many possibilities, just scratching the surface here. I'm sure someone has done detailed calculations to see which processes would dominate, but I don't know offhand. – Michael Brown Feb 11 at 9:06
@John: Please leave a link if you find something. – Mare Infinitus Feb 11 at 17:31
@John Very interesting, thanks! The left-right model is on my list to consider carefully for my Ph.D. project, although I'm not working so much on the collider phenomenology side of things. Very interesting read nevertheless! – Michael Brown Feb 12 at 13:12
@John Not at all. Even the standard model breaks lepton number non-perturbatively. In fact, it's a good thing for leptogenesis. :) Of course you have to make sure the rate below the weak scale is low enough to evade detection... – Michael Brown Feb 13 at 0:57

Especially: What would happen if they would collide electrons?
In case you really meant electron on electron (and not electron on positron): for a 'discovery' machine it is useful to have an initial state which is 'neutral' in all respects: no net charge (electrons and positrons have opposite charges; protons not only contain quarks but also a significant amount of anti-quarks and gluons), no net lepton flavour, etc. So far, lepton flavour is almost entirely conserved, so an e-e- collider would predominantly produce final states with an electron flavour number of two, which would be quite unnatural for a new fundamental particle.

What would happen if the scientists would use leptons instead of hadrons?

Others have pointed out that this happened until the year 2000 in the LHC tunnel (LEP) and is limited by the synchrotron radiation losses (while the LHC is limited by the achievable fields in the bending magnets). There is also the concept of a muon collider, which would have features similar to an e+ e- collider (known initial-state four-momentum etc.), but this is technologically very challenging, mainly due to the lifetime of the muon of only about two microseconds. It would, however, for example allow one to measure the mass of the Higgs particle to keV precision, if I remember well (through scanning of the beam energy, similar to the determination of the mass and decay width of the Z particle at LEP). -
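The $m^{-4}$ synchrotron-power scaling quoted in the comments and answers above is easy to check numerically. The sketch below is my addition, not part of the thread: at fixed beam energy and bending radius, the radiated power scales as $1/m^4$, so the electron/proton ratio is just the fourth power of the mass ratio.

```python
# Ratio of synchrotron power radiated by electrons vs protons at the
# same beam energy and bending radius: P ~ E^4 / (m^4 r^2), so the
# ratio is (m_p / m_e)^4.

m_e = 0.51099895  # electron rest energy, MeV
m_p = 938.272     # proton rest energy, MeV

ratio = (m_p / m_e) ** 4
print(f"electrons radiate ~{ratio:.2e} times more power than protons")
```

The result is about $1.1 \times 10^{13}$, consistent with the "$10^{13}$ times more" figure in the comments.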
http://mathhelpforum.com/calculus/209818-forecast-cubic-spline.html
# Thread:

1. ## Forecast with cubic spline

Hi all, I have a simple question: is it possible to make a forecast with a cubic spline? And, if yes, how? Forecast, for me, means that I can obtain the value of my interpolation function OUTSIDE the data range that I have. Thank you, every help will be appreciated. Igor

2. ## Re: Forecast with cubic spline

I think the word you're looking for is extrapolate. It means (in one dimension) that you are given f(x) for various x in [a,b] and want to estimate f(x) for some x>b or some x<a. Of course, you can use any method, and the accuracy you get depends on the situation. If I understand correctly, a cubic spline would end up being just a cubic function approximation, and so would assume the third derivative of the function is approximately constant. Your accuracy will depend on how well this actually fits the situation. As an example, on the Wikipedia page for Taylor series there is a graph of various polynomial approximations to the function $\sin{x}$. You can see that the cubic approximation is pretty good for $-\frac{\pi}{2}<x<\frac{\pi}{2}$ but really gets bad as you go further from 0. - Hollywood
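To make the reply concrete, here is a minimal sketch (mine, not from the thread) of extrapolating beyond the data range with SciPy's `CubicSpline`, which by default extends the end polynomial pieces past the fitted interval; the choice of $\sin x$ as test function just mirrors the Taylor-series example above:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Fit a cubic spline to samples of sin(x) on [0, pi/2] ...
x = np.linspace(0.0, np.pi / 2, 10)
spline = CubicSpline(x, np.sin(x))  # extrapolate=True is the default

# ... then evaluate it OUTSIDE the data range (i.e. extrapolate).
for xq in (1.7, 2.5, 4.0):
    err = abs(spline(xq) - np.sin(xq))
    print(f"x = {xq}: spline = {spline(xq):.4f}, error = {err:.4f}")
```

The error grows quickly with distance from the fitted interval, which is exactly the warning in the reply: extrapolation with a spline is just a local polynomial guess.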
http://www.physicsforums.com/showthread.php?p=4229914
Physics Forums

## singular matrix theory

hey guys, given $Ax=B$ where A is a square matrix and x and B are vectors, can anyone tell me why a singular matrix (that is, one with determinant = 0) implies one of two situations: infinitely many solutions or zero solutions? a proof would be nice. i read through Paul's notes but there was no proof. thanks all!

Recognitions: Homework Help

If ##\mathbf{A}\vec{x}=\vec{y}## then ##\mathbf{A}^{-1}\vec{y}=\vec{x}##, provided the inverse exists. If the matrix ##\mathbf{A}## is singular, it does not have an inverse. Another name for it is "degenerate". What does that tell you about the solutions? (Think about it in terms of solving simultaneous equations.)

Recognitions: Gold Member Science Advisor Staff Emeritus

An n by n square matrix represents a linear transformation, A, from $\mathbb{R}^n$ to $\mathbb{R}^n$. If it is "non-singular", then it maps all of $\mathbb{R}^n$ onto all of $\mathbb{R}^n$. That is, it is a one-to-one correspondence: given any y in $\mathbb{R}^n$ there exists a unique x in $\mathbb{R}^n$ such that Ax = y. But we can show that, for any linear transformation, A, from one vector space, U, to another, V, the "image" of A, that is, the set of all vectors y of the form y = Ax for some x, is a subspace of V, and that the "null space" of A, the set of all vectors x in U such that Ax = 0, is a subspace of U. Further, we have the "dimension theorem": if m is the dimension of the image of A (called the "rank" of A) and k is the dimension of the nullspace of A (called the "nullity" of A), then m + k is equal to the dimension of U. In particular, if U and V both have dimension n, and the rank of A is m with m < n, then the nullity of A is n - m > 0. It is further true that if A(u) = v and u' is in the nullspace of A, then A(u + u') = A(u) + A(u') = v + 0 = v.
The result of all of that is this: If A is a singular linear transformation from vector space U to vector space V, then it maps U into some proper subspace of V. If y is NOT in that subspace then there is NO x such that Ax = y. If y is in that subspace then there exists x such that Ax = y; but also, for any v in the nullspace of A (which has non-zero dimension and so contains an infinite number of vectors), A(x + v) = y as well, so there exist an infinite number of such vectors.

thanks this makes tons of sense!
http://unapologetic.wordpress.com/2008/05/02/limits-superior-and-inferior/?like=1&source=post_flair&_wpnonce=2c354f29b9
# The Unapologetic Mathematician

## Limits Superior and Inferior

As we look at sequences (and nets) of real numbers (and more general ordered spaces) a little more closely, we’ll occasionally need the finer notion of a “limit superior” (“limit inferior”). This is essentially the largest (smallest) value that a sequence takes in its tail.

In general, let $x_\alpha$ be a net (indexed by $\alpha\in A$) in some ordered space $X$. Then we can consider the “tail” $A_\alpha=\{\beta\in A|\beta\geq\alpha\}$ of the index set, consisting of all indices above a given index $\alpha$. We then ask what the least upper bound of the net is on this tail: $\sup\limits_{\beta\geq\alpha}x_\beta$. Alternately, we consider the greatest lower bound on the tail: $\inf\limits_{\beta\geq\alpha}x_\beta$.

Now as we move to tails further and further out in the net, the least upper bound (greatest lower bound) may drop (rise) as we pass maxima (minima). That is, the supremum (infimum) of a set bounds the suprema (infima) of its subsets. So? So if we pass such a maximum it clearly doesn’t affect the long-run behavior of the net, and we want to forget it. So we’ll take the lowest of the suprema of tails (the highest of the infima of tails). Thus we finally come to defining the limit superior

$\displaystyle\limsup x_\alpha=\inf\limits_{\alpha\in A}\sup\limits_{\beta\geq\alpha}x_\beta$

and the limit inferior

$\displaystyle\liminf x_\alpha=\sup\limits_{\alpha\in A}\inf\limits_{\beta\geq\alpha}x_\beta$

Now these are related to our usual concept of a limit. First of all,

$\displaystyle\liminf x_\alpha\leq\limsup x_\alpha$

and the net converges if and only if these two are both finite and equal. In this case, the limit is this common finite value. If they both go to infinity, the net diverges to infinity, and similarly for negative infinity. If they’re not equal, then the net bounces around between the two values.
If we’re considering a sequence of real numbers, then we’re taking a bunch of infima and suprema, all of which are guaranteed to exist (in the extended reals). Thus the limits superior and inferior of any sequence must always exist.

As an illustrative example, work out the limits superior and inferior of the sequence $(-1)^n(1+\frac{1}{n})$. Show that this sequence diverges, but does so by oscillating rather than by blowing up.

Finally, note that we can consider a function $f(x)$ defined on a ray to be a net on that ray, considered as a directed subset of real numbers. Then we get limits superior and inferior as $x$ goes to infinity, just as for sequences.

Posted by John Armstrong | Analysis, Calculus

## 1 Comment »

1. [...] First is the ratio test. We take the ratio of one term in the series to the previous one and define the limits superior and inferior [...] Pingback | May 5, 2008
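As a quick numerical check of the exercise above (my sketch, not part of the post), we can approximate the tail suprema and infima of $(-1)^n(1+1/n)$ and watch them squeeze toward $1$ and $-1$:

```python
# Approximate limsup/liminf of x_n = (-1)^n (1 + 1/n) by taking
# the sup/inf over successively later tails of the sequence.
N = 100_000
x = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

for start in (10, 1000, 50_000):
    tail = x[start:]
    print(f"tail from n={start + 1}: sup={max(tail):.6f}, inf={min(tail):.6f}")
```

The tail suprema decrease toward $+1$ and the tail infima increase toward $-1$, so $\limsup = 1$ and $\liminf = -1$: they are unequal, and the sequence oscillates without converging, as claimed.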
http://mathoverflow.net/questions/27651?sort=newest
## Can SAT be solved in time n^k, for a specific k?

The P vs NP problem is open. How about the following question: Can SAT be done in $n^k$ time for some specific $k$?

Why do I ask this? Ben-David and Halevi's paper On the independence of P versus NP proves that if P = NP is independent of PA, then SAT can be solved in $n^{g(n)}$ time, where $g$ is a very slow, almost constant function. This means that if we can neither prove nor disprove SAT is in P, then SAT lies on the boundary of P. It's not in P and it's not outside P either. So there's a gray area near the boundary of P. Because of this possibility, I think the P vs NP problem is not a good formulation. I therefore propose to ask more precise questions, like whether SAT can be solved in linear/quadratic/cubic/etc. time. -

3 Your question is equivalent to the question "P=NP?". – Kevin Lin Jun 10 2010 at 6:39
3 There isn't a gray area. It's not that SAT is not in P and is not outside P either, as you say. Either it is in, or it's not. $n^{g(n)}$ is the latter, and only the latter. – Dror Speiser Jun 10 2010 at 7:25
6 It sounds like the question you mean to ask is for which k one can unconditionally prove that SAT is not solvable in time n^k. This question might be sensitive to your model of computation; I'm sure there are experts here who can comment on that. – David Speyer Jun 10 2010 at 7:29
2 @Wang Zirui: The phrase "the standard model, i.e. Turing machine" is misleading in this context. There are many different Turing machine models, depending on whether one allows for multiple tapes, multiple read/write heads, etc., along with other types of models such as register machines. See the Wikipedia page on Turing machine equivalents for examples.
Each can simulate the others in polynomial time, so the class P is well-defined and independent of the chosen model, but "computable in TIME(n^k)" depends on the model. – Noah Stein Jun 10 2010 at 12:55
1 I would strongly encourage someone who understands this better than me to rewrite the question so that it makes clear that it's asking the negative of the question it actually asks, that clarifies the models issue, and removes the errors. I tried to do so myself but concluded that I don't understand the issues well enough to do so confidently. – Noah Snyder Jun 10 2010 at 14:06

## 3 Answers

You might find this paper by Patrascu and Williams interesting. It surveys the state of the art for SAT, as well as discussing implications for improved bounds. -
I discussed "the impossibility of an improved SAT algorithm" (sort of) under the assumption that P = NP is independent of ZFC in this question: mathoverflow.net/questions/27959 – Zirui Wang Jun 12 2010 at 18:54
@Suresh: But thanks for the reference! It's certainly an interesting paper that has widened my thinking. – Zirui Wang Jun 16 2010 at 9:25

It seems you are looking for lower bounds on SAT, not upper bounds. In that case, see this question I asked here a while ago. In short, the best lower bounds we have for SAT are linear, so we can't even say that SAT cannot be solved in O(n) time. Secondly, I would just like to point out that Ben-David and Halevi's paper does not claim what you wrote. It says that if P vs NP is proved to be independent of PA (or ZFC) using currently known techniques, then NP is contained in DTIME($n^{g(n)}$) for infinitely many inputs, where g(n) is an extremely slow growing function. Note the "infinitely many inputs" part, and most importantly, the "using currently known techniques" part.
- Do you mean Wolfgang J. Paul's A 2.5n-lower bound on the combinational complexity of Boolean functions? Also please read Corollary 6 on page 10 in Ben-David and Halevi's paper; it doesn't have the quantifications you mentioned, and it's a theorem. – Zirui Wang Jun 10 2010 at 14:28
Corollary 6 also has the "any method known today" clause, which is the most important caveat. As for the best lower bound, it depends on the model -- boolean circuits, one-tape TMs, two-tape TMs, etc. See the question I linked to for some very good answers by people who understand this area much better than I do. – Rune Jun 11 2010 at 2:22
No, I'm not talking about lower bounds. Yes, a high lower bound implies that the problem can't be solved in any time less. But a lower bound is overkill, because it's possible that the best provable lower bound is low and yet the problem still can't be solved in time close to that bound. For example, the best lower bound for SAT could be linear, but it could still be possible to prove that no quadratic SAT solver exists. – Zirui Wang Jun 11 2010 at 8:18
But I found the answer to my question in Cook's article: claymath.org/millennium/P_vs_NP/pvsnp.pdf He says "It is consistent with present knowledge that not only could Satisfiability have a polynomial-time algorithm, it could have a linear time algorithm on a multitape Turing machine." Note that the multitape Turing machine is the standard model. – Zirui Wang Jun 11 2010 at 8:20

As far as I know, SAT is NP-complete. Therefore, if there were such a $k$ as you said, then SAT would be in $P$, because, you know, $n^k$ is a polynomial. Thus, finding such a $k$ would prove $P=NP$. -
Thanks, but what I want is a negative answer, i.e. SAT can't be solved in, for example, linear time. If it can, great -- P = NP. But if you can prove that linear time is not sufficient to solve SAT -- although this does not mean P $\ne$ NP -- I think it's a GREAT result still. – Zirui Wang Jun 10 2010 at 13:08
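For context (my own sketch, not from the thread): the trivial upper bound for SAT is exponential — just try all $2^n$ assignments. The whole discussion above is about how much, provably, this can or cannot be beaten.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """Naive SAT solver. Clauses are lists of non-zero ints in DIMACS
    style: +i means variable i, -i means its negation.
    Runs in O(2^n * total clause size) time."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits  # a satisfying assignment
    return None          # unsatisfiable

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat(3, [[1, 2], [-1, 2], [-2, 3]]))  # (False, True, True)
# x1 and (not x1): unsatisfiable
print(brute_force_sat(1, [[1], [-1]]))                 # None
```

The question "can SAT be solved in time $n^k$" asks whether this $2^n$ baseline can be replaced by a fixed polynomial; the answers above note that no superlinear lower bound is known that would rule it out.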
http://www.physicsforums.com/showthread.php?t=215220
Physics Forums

## Weak force

I encountered an argument that says that the weak mediators must be massive because the weak force is short range, by the uncertainty principle. But doesn't the uncertainty principle relate the uncertainty in mass, $$\delta m$$, and $$\delta t$$, and not the absolute value of the mass?

The uncertainty principle is a weak reed for this case. The range of an interaction, from the Fourier transform of the scattering amplitude, is proportional to the inverse of the intermediate boson mass.

They are very massive because they can only last a short amount of time; in the early stages of the theory, Fermi just gave them infinite mass. It also helps when you think that the electromagnetic force, which acts infinitely far, is mediated by the photon, which has no mass.

Blog Entries: 1

Maybe a little mathier way to look at it is that they are massive because their symmetry is broken. The color symmetry of QCD is not broken, so the gluons are massless, and the U(1) symmetry of electric charge in EM is not broken, so the photon is massless, but the SU(2)×U(1) symmetry of isospin and hypercharge is broken, so the $$W^+$$, $$W^-$$, $$Z^0$$ get a mass.

Mentor

Quote by touqra: I encountered an argument that says that the weak mediators must be massive because the weak force is short range

That argument is backwards anyway. It confuses cause and effect. The weak force is short range because its mediators are heavy. But other mechanisms - e.g. charge screening - can also shorten range.

Quote by touqra: I encountered an argument that says that the weak mediators must be massive because the weak force is short range, by the uncertainty principle. But doesn't the uncertainty principle relate the uncertainty in mass, $$\delta m$$, and $$\delta t$$, and not the absolute value of the mass?
There is only a spread on time! The energy is set equal to the rest energy of the particle! Because: 1) we just want to find out the range of a massive particle; 2) supposing you know the rest energy (which is measured by experiment), one can use that value to plug in for the $$\delta E$$! In the end, we just want to show that having mass means having finite range, and we want an estimate of that range, not an exact value! Check out: http://hyperphysics.phy-astr.gsu.edu.../exchg.html#c2

Quote by Vanadium 50: That argument is backwards anyway. It confuses cause and effect. The weak force is short range because its mediators are heavy. But other mechanisms - e.g. charge screening - can also shorten range.

Doesn't have to be the case. One can also state the opposite. Look at the "range formula" on the website above.

marlon

Mentor

Quote by marlon: Doesn't have to be the case. One can also state the opposite. Look at the "range formula" on the website above.

One can state whatever one wants, I suppose. That doesn't make it correct. Massive implies short range. Short range does not imply massive. The force carried by the gluon is short range, but that's because of confinement, not because the gluon is massive. In fact, it's massless.

Quote by Vanadium 50: One can state whatever one wants, I suppose. That doesn't make it correct.

True, but what I stated is backed up by a formula which is in accordance with experimental results, so....

Quote: Massive implies short range. Short range does not imply massive.

I never stated that short range implies massive. Besides, I never used the word "implication" here because we are dealing with an "equality", which is identical "in both directions", NOT an equivalence! So, I would say this is a semantics issue.
marlon

"Check out: http://hyperphysics.phy-astr.gsu.edu.../exchg.html#c2 Look at the "range formula" on the website above"

When something is determined on dimensional grounds, almost any derivation, no matter how wrong, will give the correct answer. The QM concept of range is usually discussed in the context of the Yukawa force, mediated by the exchange of a pion of mass m, which is the only dimensional object that can appear in the potential. The only reasonable modification of the Coulomb potential is the dimensionless factor exp(-mr). Thus ANY derivation will give the range as 1/m.

Quote by pam: When something is determined on dimensional grounds, almost any derivation, no matter how wrong, will give the correct answer. [...]

Is it me, or is this link not working? Besides, I cannot find the text that pam quoted.

marlon

I tried to just copy the website you gave in post #6. Maybe something got left out.
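The range formula being argued about is $R \approx \hbar c / (mc^2)$. A quick numerical sketch (my addition, not part of the thread) for the two mediators mentioned, using $\hbar c \approx 197.327\ \text{MeV}\cdot\text{fm}$:

```python
# Range estimate R ~ hbar*c / (m c^2) for a massive force mediator.
HBAR_C = 197.327  # MeV * fm

def range_fm(rest_energy_mev):
    """Approximate interaction range in femtometers."""
    return HBAR_C / rest_energy_mev

print(f"pion (~140 MeV): R ~ {range_fm(140.0):.2f} fm")
print(f"W boson (~80.4 GeV): R ~ {range_fm(80400.0):.1e} fm")
```

This gives roughly 1.4 fm for the pion (the nuclear-force range in Yukawa's picture) and a few times $10^{-3}$ fm for the W, which is why the weak force is so short-ranged.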
http://stochastix.wordpress.com/2011/07/18/linear-dynamical-systems-over-finite-rings/
Rod Carvalho

Linear dynamical systems over finite rings

Consider the following linear dynamical system (example 3.4 in [1])

$\left[\begin{array}{c} x_1^+\\ x_2^+\end{array}\right] = \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right] \left[\begin{array}{c} x_1 \\ x_2\\\end{array}\right]$

where the state vector $x := (x_1, x_2)$ travels in the finite state space $\mathbb{Z}_4^2$, where $\mathbb{Z}_4 := \{0, 1, 2, 3\}$. Since the state space is finite, let us henceforth call it the state set. Let $A \in \mathbb{Z}_4^{2 \times 2}$ be defined by

$A := \left[\begin{array}{cc} 2 & 3\\ 1 & 0\\\end{array}\right]$

so that we can write the state update equation in the more compressed form $x^+ = A x$. For simplicity, let us introduce the function $f : \mathbb{Z}_4^2 \to \mathbb{Z}_4^2$ defined by $f (x) := A x$. Hence, we have $x^+ = f (x)$. Do note that we are working with the finite ring $(\mathbb{Z}_4, +, \times)$, whose addition and multiplication tables are as follows

$\begin{array}{c|cccc} + & 0 & 1 & 2 & 3\\\hline 0 & 0 & 1 & 2 & 3\\ 1 & 1 & 2 & 3 & 0\\ 2 & 2 & 3 & 0 & 1\\ 3 & 3 & 0 & 1 & 2\end{array} \qquad \begin{array}{c|cccc} \times & 0 & 1 & 2 & 3\\\hline 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 1 & 2 & 3\\ 2 & 0 & 2 & 0 & 2\\ 3 & 0 & 3 & 2 & 1\end{array}$

and, therefore, when we update each state via $x_i^+ = a_{i1} x_1 + a_{i2} x_2$, we use addition and multiplication modulo 4.

When we study discrete-time dynamical systems over $\mathbb{R}$, we are usually interested in finding equilibrium points, fixed points and limit cycles. For dynamical systems over a finite ring, what are the properties of interest? And how can one study them?

__________

State transition graph

The state set, $\mathbb{Z}_4^2$, which has $4^2 = 16$ elements, is a subset of the 2-dimensional integer lattice $\mathbb{Z}^2$, as depicted below. Computing $x^+ = f(x)$ for each $x$ in the state set and forming ordered pairs $(x, f(x))$, we can build the state transition graph, whose vertex set is the state set $\mathbb{Z}_4^2$, and whose edge set is the set $\{(x, f(x))\}_{x \in \mathbb{Z}_4^2}$. A pictorial representation of this graph is shown below. Note that the out-degree of each vertex is 1. This is due to the fact that the function $f$ maps states to states, not states to sets of states.
In other words, we do not allow non-determinism. Visual inspection of the state transition diagram allows us to conclude that, interestingly, the in-degree of each vertex is also 1, i.e., the function $f$ is injective and surjective. Thus, $f$ is invertible, which means that the dynamical system we are studying is time-reversible! Switch the direction of each arrow in the diagram, and we obtain the state transition diagram of the inverse system. Hence, if we are given the state at a certain time step, we can determine the entire history of the system, both past and future!

A state $x$ is a fixed point if $f(x) = x$, i.e., if $x^+ = x$. Taking a look at the state transition diagram, we see that there are four vertices with self-loops, which means that the system has four fixed points, $\{ (0,0), (1,1), (2,2), (3,3)\}$. If the system starts at a fixed point $x$, it will remain there forever

$x \mapsto f(x) = x \mapsto f(x) = x \mapsto \dots$

and, thus, the set of all fixed points is an invariant set. Note that a fixed point is a periodic point of period equal to 1. Observing the state transition diagram again, we notice the existence of loops of various lengths. We have two loops of length equal to 2 and two loops of length equal to 4, depicted below in blue and magenta, respectively. Do note, however, that we also have four loops of length equal to 1, which are the self-loops corresponding to the fixed points, but we already mentioned those.

For the dynamical system under study, we could obtain qualitative information about its behavior by visual inspection of its state transition diagram. Suppose, however, that we want to study a dynamical system whose state set is $\mathbb{Z}_4^{16}$, which has over four billion states. It is evident that drawing diagrams and counting loops is an approach that does not scale! Can one use Algebra to obtain qualitative information about the dynamical system being studied?
__________ Finding periodic points If state $x$ is a fixed point, then $f (x) = x$ or, equivalently, $A x = x$, as we defined $f (x) := A x$. Hence, we can find fixed points by solving the equation $A x = x$. A state $x$ is a periodic point of period equal to $k$ if $f^{(k)} (x) = x$, where $f^{(k)} = f \circ f^{(k-1)}$ and $f^{(1)} = f$. Since $f (x) := A x$, we have that $f^{(k)} (x) = A^k x$. Thus, we can find periodic points of period equal to $k$ by solving the equation $A^k x = x$. However, do note that fixed points, i.e., periodic points of period equal to 1, are also periodic points of period equal to 2, equal to 3, equal to 4, etc. Therefore, let the fundamental period be the smallest natural number $k$ for which $A^k x = x$. For example, the set of all periodic points of fundamental period equal to 2 is $\{x \in \mathbb{Z}_4^2 \mid A^2 x = x \land A x \neq x\}$ and the set of all periodic points of fundamental period equal to 3 is $\{x \in \mathbb{Z}_4^2 \mid A^3 x = x \land A x \neq x\}$. However, the periodic points of period equal to 2 are also periodic points of period equal to 4. Hence, the set of periodic points of fundamental period equal to 4 is $\{x \in \mathbb{Z}_4^2 \mid A^4 x = x \land A^2 x \neq x \land A x \neq x\}$. But, how do we solve equations of the form $A^k x = x$ over the ring $(\mathbb{Z}_4, +, \times)$? A possibility is to search the whole state set and find the states for which the equations hold. 
That is precisely what the following Python script does: ```# define function f
def f(x):
    f1 = (2 * x[0] + 3 * x[1]) % 4
    f2 = x[0] % 4
    return (f1, f2)

# create state set
X = []
for x1 in [0, 1, 2, 3]:
    for x2 in [0, 1, 2, 3]:
        X.append((x1, x2))

print("State transitions:\n")
for x in X:
    x1, x2, f1, f2 = x[0], x[1], f(x)[0], f(x)[1]
    print("f(%d,%d) = (%d,%d)" % (x1, x2, f1, f2))

# find periodic points of fundamental periods 1, 2, 3, 4
PP1, PP2, PP3, PP4 = [], [], [], []
for x in X:
    if f(x) == x:
        PP1.append(x)
    elif f(f(x)) == x:
        PP2.append(x)
    elif f(f(f(x))) == x:
        PP3.append(x)
    elif f(f(f(f(x)))) == x:
        PP4.append(x)

print("\nLists of periodic points:\n")
print("\nFundamental period equal to 1:\n %s" % PP1)
print("\nFundamental period equal to 2:\n %s" % PP2)
print("\nFundamental period equal to 3:\n %s" % PP3)
print("\nFundamental period equal to 4:\n %s" % PP4)
``` The output of this script is the following: ```State transitions:

f(0,0) = (0,0)
f(0,1) = (3,0)
f(0,2) = (2,0)
f(0,3) = (1,0)
f(1,0) = (2,1)
f(1,1) = (1,1)
f(1,2) = (0,1)
f(1,3) = (3,1)
f(2,0) = (0,2)
f(2,1) = (3,2)
f(2,2) = (2,2)
f(2,3) = (1,2)
f(3,0) = (2,3)
f(3,1) = (1,3)
f(3,2) = (0,3)
f(3,3) = (3,3)

Lists of periodic points:

Fundamental period equal to 1:
 [(0, 0), (1, 1), (2, 2), (3, 3)]

Fundamental period equal to 2:
 [(0, 2), (1, 3), (2, 0), (3, 1)]

Fundamental period equal to 3:
 []

Fundamental period equal to 4:
 [(0, 1), (0, 3), (1, 0), (1, 2), (2, 1), (2, 3), (3, 0), (3, 2)]
``` Note that there are no periodic points of fundamental period equal to 3, as a quick glance at the state transition diagram will tell. __________ References [1] O. Colón-Reyes, A. Jarrah, R. Laubenbacher, B. Sturmfels, Monomial Dynamical Systems over Finite Fields, May 2006. Tags: Discrete Dynamical Systems, Finite Dynamical Systems, Finite Rings, Fixed Points, Periodic Points This entry was posted on July 18, 2011 at 20:15 and is filed under Algebra, Dynamical Systems. 
2 Responses to “Linear dynamical systems over finite rings” 1. SteveBrooklineMA Says: July 24, 2011 at 12:56 | Reply Nice! and I think you could do more math here. We know $f$ is bijective because $A$ has an inverse: $A^{-1}= \left[\begin{array}{cc} 0 & 1\\ 3 & 2\end{array}\right]$. Also, we can simply solve $A x = x$, i.e. $\left[\begin{array}{cc} 2 & 3\\ 1 & 0\end{array}\right] \left[\begin{array}{c} x_1\\ x_2 \end{array}\right] = \left[\begin{array}{c} x_1\\ x_2 \end{array}\right]$, i.e. $\left[\begin{array}{cc} 1 & 3\\ 1 & 3\end{array}\right] \left[\begin{array}{c} x_1\\ x_2 \end{array}\right] = \left[\begin{array}{c} 0\\ 0 \end{array}\right]$ The solution is just $x_1 = x_2$, giving $(0,0)$, $(1,1)$, $(2,2)$ and $(3,3)$. Similarly, $A^2x = x$ gives $2 x_1 = 2 x_2$, so we get $(0,2)$, $(2,0)$, $(1,3)$ and $(3,1)$, and also of course the 4 previous ordered pairs. I think an orbit of size 3 is impossible since 3 and 4 are relatively prime. Finally, since $A^4 = I$, all 16 ordered pairs satisfy $A^4 x = x$. • Rod Carvalho Says: July 24, 2011 at 17:53 | Reply Nice! and I think you could do more math here. Indeed! That will be part II. Thanks for your comment. I edited it slightly. I will take the liberty of sending you an email when the 2nd installment is posted.
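The matrix identities in this exchange are easy to machine-check; a quick sketch in pure Python, with all arithmetic mod 4 as in the post:

```python
# Check the identities from the comment: A * A_inv = I, A^4 = I,
# and the shape of A^2, all mod 4.
def matmul_mod(P, Q, m=4):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) % m
             for j in range(2)] for i in range(2)]

A     = [[2, 3], [1, 0]]
A_inv = [[0, 1], [3, 2]]     # inverse claimed in the comment above
I2    = [[1, 0], [0, 1]]

print(matmul_mod(A, A_inv) == I2)   # True: A_inv really is A^{-1} mod 4

A2 = matmul_mod(A, A)
A4 = matmul_mod(A2, A2)
print(A2)                           # [[3, 2], [2, 3]]
print(A4 == I2)                     # True: every state satisfies A^4 x = x
```

Note that $A^2 - I = \left[\begin{array}{cc} 2 & 2\\ 2 & 2\end{array}\right]$ mod 4, so $A^2 x = x$ indeed reduces to $2x_1 = 2x_2$ as stated.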
http://en.wikipedia.org/wiki/Peltier_effect
# Thermoelectric effect (Redirected from Peltier effect) This article is about the thermoelectric effect as a physical phenomenon. For applications of the thermoelectric effect, see thermoelectric materials and thermoelectric cooling. The thermoelectric effect is the direct conversion of temperature differences to electric voltage and vice versa. A thermoelectric device creates voltage when there is a different temperature on each side. Conversely, when a voltage is applied to it, it creates a temperature difference. At the atomic scale, an applied temperature gradient causes charge carriers in the material to diffuse from the hot side to the cold side. This effect can be used to generate electricity, measure temperature or change the temperature of objects. Because the direction of heating and cooling is determined by the polarity of the applied voltage, thermoelectric devices can be used as temperature controllers. The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect, Peltier effect and Thomson effect. Textbooks may refer to it as the Peltier–Seebeck effect. This separation derives from the independent discoveries of French physicist Jean Charles Athanase Peltier and Baltic German physicist Thomas Johann Seebeck. Joule heating, the heat that is generated whenever a voltage is applied across a resistive material, is related though it is not generally termed a thermoelectric effect. The Peltier–Seebeck and Thomson effects are thermodynamically reversible,[1] whereas Joule heating is not. ## Seebeck effect A thermoelectric circuit composed of materials of different Seebeck coefficient (p-doped and n-doped semiconductors), configured as a thermoelectric generator. 
If the load is removed then the current stops, and the circuit functions as a temperature-sensing thermocouple. The Seebeck effect is the conversion of temperature differences directly into electricity and is named after the Baltic German physicist Thomas Johann Seebeck, who, in 1821, discovered that a compass needle would be deflected by a closed loop formed by two metals joined in two places, with a temperature difference between the junctions. This was because the metals responded differently to the temperature difference, creating a current loop and a magnetic field. Seebeck did not recognize there was an electric current involved, so he called the phenomenon the thermomagnetic effect. Danish physicist Hans Christian Ørsted rectified the mistake and coined the term "thermoelectricity". The Seebeck effect is a classic example of an electromotive force (emf) and leads to measurable currents or voltages in the same way as any other emf. Electromotive forces modify Ohm's law by generating currents even in the absence of voltage differences (or vice versa); the local current density is given by $\mathbf J = \sigma (-\boldsymbol \nabla V + \mathbf E_{\rm emf}),$ where $V$ is the local voltage[2] and $\sigma$ is the local conductivity. In general the Seebeck effect is described locally by the creation of an electromotive field $\mathbf E_{\rm emf} = - S \boldsymbol\nabla T,$ where $S$ is the Seebeck coefficient (also known as thermopower), a property of the local material, and $\boldsymbol \nabla T$ is the gradient in temperature $T$. The Seebeck coefficients generally vary as a function of temperature, and depend strongly on the composition of the conductor. For ordinary materials at room temperature, the Seebeck coefficient may range in value from −100 μV/K to +1000 μV/K (see Thermoelectric materials). If the system reaches a steady state where $\mathbf J = 0$, then the voltage gradient is given simply by the emf: $-\boldsymbol \nabla V = S \boldsymbol\nabla T$. 
This simple relationship, which does not depend on conductivity, is used in the thermocouple to measure a temperature difference; an absolute temperature may be found by performing the voltage measurement at a known reference temperature. Conversely, a metal of unknown composition can be classified by its thermoelectric effect if a metallic probe of known composition, kept at a constant temperature, is held in contact with it (the unknown material is locally heated to the probe temperature). Industrial-quality control instruments use this as thermoelectric alloy sorting to identify metal alloys. Thermocouples in series form a thermopile, sometimes constructed in order to increase the output voltage, because the voltage induced over each individual couple is small. Thermoelectric generators are used for creating power from heat differentials and exploit this effect. ## Peltier effect The Seebeck circuit configured as a thermoelectric cooler The Peltier effect is the presence of heating or cooling at an electrified junction of two different conductors and is named for French physicist Jean Charles Athanase Peltier, who discovered it in 1834. When a current is made to flow through a junction between two conductors A and B, heat may be generated (or removed) at the junction. The Peltier heat generated at the junction per unit time, $\dot{Q}$, is equal to $\dot{Q} = \left( \Pi_\mathrm{A} - \Pi_\mathrm{B} \right) I,$ where $\Pi_A$ ($\Pi_B$) is the Peltier coefficient of conductor A (B), and $I$ is the electric current (from A to B). Note that the total heat generated at the junction is not determined by the Peltier effect alone, as it may also be influenced by Joule heating and thermal gradient effects (see below). The Peltier coefficients represent how much heat is carried per unit charge. Since charge current must be continuous across a junction, the associated heat flow will develop a discontinuity if $\Pi_A$ and $\Pi_B$ are different. 
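As a back-of-the-envelope illustration of the thermocouple voltage and Peltier heat formulas above (the Seebeck coefficients below are made-up round numbers for two hypothetical conductors, not data for any real alloy pair):

```python
# Illustrative only: assumed constant Seebeck coefficients (V/K)
# for two hypothetical conductors A and B.
S_A, S_B = 20e-6, -20e-6
T_hot, T_ref = 400.0, 300.0           # junction temperatures (K)

# Seebeck: open-circuit thermocouple voltage for the pair
V = (S_A - S_B) * (T_hot - T_ref)
print(V)                               # ~0.004 V, i.e. about 4 mV

# Peltier: heat rate at a junction carrying current I, using the
# relation Pi = T*S (the second Thomson relation, quoted below)
I = 2.0                                # A, flowing from A to B
T_j = 300.0                            # junction temperature (K)
Q_dot = (T_j * S_A - T_j * S_B) * I
print(Q_dot)                           # ~0.024 W exchanged at the junction
```

The millivolt-scale output is why thermopiles chain many couples in series to get a usable signal.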
The Peltier effect can be considered as the back-action counterpart to the Seebeck effect (analogous to the back-emf in magnetic induction): if a simple thermoelectric circuit is closed then the Seebeck effect will drive a current, which in turn (via the Peltier effect) will always transfer heat from the hot to the cold junction. The close relationship between Peltier and Seebeck effects can be seen in the direct connection between their coefficients: $\Pi = T S$ (see below). A typical Peltier heat pump device involves multiple junctions in series, through which a current is driven. Some of the junctions lose heat due to the Peltier effect, while others gain heat. Thermoelectric heat pumps exploit this phenomenon, as do thermoelectric cooling devices found in refrigerators. ## Thomson effect In many materials, the Seebeck coefficient is not constant in temperature, and so a spatial gradient in temperature can result in a gradient in the Seebeck coefficient. If a current is driven through this gradient then a continuous version of the Peltier effect will occur. This Thomson effect was predicted and subsequently observed by Lord Kelvin in 1851. It describes the heating or cooling of a current-carrying conductor with a temperature gradient. If a current density $\mathbf J$ is passed through a homogeneous conductor, the Thomson effect predicts a heat production rate $\dot q$ per unit volume of: $\dot q = - \mathcal K \mathbf J \cdot \boldsymbol \nabla T ,$ where $\boldsymbol \nabla T$ is the temperature gradient and $\mathcal K$ is the Thomson coefficient. The Thomson coefficient is related to the Seebeck coefficient as $\mathcal K = T\, dS/dT$ (see below). This equation however neglects Joule heating, and ordinary thermal conductivity (see full equations below). ## Full thermoelectric equations Often, more than one of the above effects is involved in the operation of a real thermoelectric device. 
The Seebeck effect, Peltier effect, and Thomson effect can be gathered together in a consistent and rigorous way, described here; the effects of Joule heating and ordinary heat conduction are included as well. As stated above, the Seebeck effect generates an electromotive force, leading to the current equation[3] $\mathbf J = \sigma (-\boldsymbol \nabla V - S \boldsymbol\nabla T).$ To describe the Peltier and Thomson effects we must consider the flow of energy. To start we can consider the dynamic case where both temperature and charge may be varying with time. The full thermoelectric equation for the energy accumulation, $\dot e$, is[3] $\dot e = \boldsymbol \nabla \cdot (\kappa \boldsymbol \nabla T) - \boldsymbol \nabla \cdot (V\mathbf J + \Pi\mathbf J) + \dot q_{\rm ext},$ where $\kappa$ is the thermal conductivity. The first term is Fourier's heat conduction law, and the second term shows the energy carried by currents. The third term, $\dot q_{\rm ext}$, is the heat added from an external source (if applicable). In the case where the material has reached a steady state, the charge and temperature distributions are stable so one must have both $\dot e = 0$ and $\boldsymbol \nabla \cdot \mathbf J = 0$. Using these facts and the second Thomson relation (see below), the heat equation can then be simplified to $-\dot q_{\rm ext} = \boldsymbol \nabla \cdot (\kappa \boldsymbol \nabla T) + \mathbf J \cdot(\sigma^{-1} \mathbf J) - T \mathbf J \cdot\boldsymbol \nabla S.$ The middle term is the Joule heating, and the last term includes both Peltier ($\boldsymbol \nabla S$ at junction) and Thomson ($\boldsymbol \nabla S$ in thermal gradient) effects. Combined with the Seebeck equation for $\mathbf J$, this can be used to solve for the steady state voltage and temperature profiles in a complicated system. 
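A minimal 1-D sketch of such a steady-state solve, under simplifying assumptions: constant $\kappa$, $\sigma$, and $S$ (so the $\boldsymbol\nabla S$ term drops out), no external heat, and made-up round-number constants rather than real material data. The heat equation then reduces to $\kappa T'' = -J^2/\sigma$, whose exact solution is a straight line plus a Joule-heating parabola, which a finite-difference solve reproduces:

```python
import numpy as np

# 1-D steady state with Joule heating only: kappa * T'' = -J^2 / sigma.
# All constants are illustrative, not real material data.
kappa, sigma = 1.5, 1.0e5        # W/(m K), S/m
J, L, n = 1.0e6, 0.01, 101       # current density (A/m^2), leg length (m), grid
T0, T1 = 300.0, 350.0            # boundary temperatures (K)

x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# interior rows: T[i-1] - 2 T[i] + T[i+1] = -(J^2/sigma) h^2 / kappa
A = np.zeros((n, n))
b = np.full(n, -(J**2 / sigma) * h**2 / kappa)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = T0, T1
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

T = np.linalg.solve(A, b)

# exact solution: linear profile plus a Joule-heating parabola
T_exact = T0 + (T1 - T0) * x / L + (J**2 / (2 * sigma * kappa)) * x * (L - x)
print(np.max(np.abs(T - T_exact)))   # tiny: central differences are exact
                                     # on quadratic solutions
```

With a nonconstant $S(T)$ the same grid would pick up the $-T\,\mathbf J\cdot\boldsymbol\nabla S$ term as an extra source, coupling the temperature and voltage solves.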
If the material is not in a steady state, a complete description will also need to include dynamic effects such as those relating to electrical capacitance, inductance, and heat capacity. ## Physical origin of the thermoelectric coefficients A material's temperature, crystal structure, and impurities influence the value of the thermoelectric coefficients. The Seebeck effect can be attributed to two things[citation needed]: charge-carrier diffusion and phonon drag. Typically metals have small Seebeck coefficients because of partially filled bands, with a conductivity that is relatively insensitive to small changes in energy. In contrast, semiconductors can be doped with impurities that donate excess electrons or electron holes, allowing the value of S to be varied over a large range (both negative and positive). The sign of the Seebeck coefficient can be used to determine whether the electrons or the holes dominate electric transport in a semiconductor or semimetal. ### Thomson relations In 1854, Lord Kelvin found relationships between the three coefficients, implying that the Thomson, Peltier, and Seebeck effects are different manifestations of one effect (uniquely characterized by the Seebeck coefficient). The first Thomson relation is[3] $\mathcal K \equiv {d\Pi \over dT} - S,$ where $T$ is the absolute temperature, $\mathcal K$ is the Thomson coefficient, $\Pi$ is the Peltier coefficient, and $S$ is the Seebeck coefficient. This relationship is easily shown given that the Thomson effect is a continuous version of the Peltier effect. The second Thomson relation is $\Pi = TS .$ This relation expresses a fundamental connection between the Peltier and Seebeck effects. It was not satisfactorily proven until the advent of the Onsager relations, and it is worth noting that this second Thomson relation is only valid in a time-reversal symmetric material (the material must be nonmagnetic, and the magnetic field must be zero). 
Using this relation, the first Thomson relation becomes $\mathcal K = T dS/dT$. The Thomson coefficient is unique among the three main thermoelectric coefficients because it is the only one directly measurable for individual materials. The Peltier and Seebeck coefficients can only be easily determined for pairs of materials; hence, it is difficult to find values of absolute Seebeck or Peltier coefficients for an individual material. If the Thomson coefficient of a material is measured over a wide temperature range, it can be integrated using the Thomson relations to determine the absolute values for the Peltier and Seebeck coefficients. This needs to be done only for one material, since the other values can be determined by measuring pairwise Seebeck coefficients in thermocouples containing the reference material and then adding back the absolute thermopower of the reference material. ### Charge-carrier diffusion Charge carriers in the materials will diffuse when one end of a conductor is at a different temperature from the other. Hot carriers diffuse from the hot end to the cold end, since there is a lower density of hot carriers at the cold end of the conductor, and vice versa. If the conductor were left to reach thermodynamic equilibrium, this process would result in heat being distributed evenly throughout the conductor (see heat transfer). The movement of heat (in the form of hot charge carriers) from one end to the other is a heat current and an electric current as charge carriers are moving. In a system where both ends are kept at a constant temperature difference, there is a constant diffusion of carriers. If the rate of diffusion of hot and cold carriers in opposite directions is equal, there is no net change in charge. The diffusing charges are scattered by impurities, imperfections, and lattice vibrations or phonons. 
If the scattering is energy dependent, the hot and cold carriers will diffuse at different rates, creating a higher density of carriers at one end of the material and an electrostatic voltage. This electronic contribution to the Seebeck coefficient is described by the Mott relation,[4] $S = \frac{k_{\rm B}}{-e}\frac{1}{\sigma} \int \frac{E - \mu}{k_{\rm B}T} \sigma(E) \left( -\frac{df(E)}{dE} \right) \, dE$ where $\sigma(E)$ is the conductivity of electrons at an energy $E$, $\sigma$ is the whole conductivity given by $\textstyle\sigma = \int \sigma(E) ( -\frac{df(E)}{dE} ) \, dE$, and the function $f(E)$ is the energy occupation function. The Fermi level $\mu$ is defined by $f(\mu) = \tfrac{1}{2}$. The fact that the Seebeck coefficient depends on the structure of $\sigma(E)$ near $\mu$ means that the thermopower of a material depends greatly on impurities, imperfections, and structural changes, all of which can vary with temperature and electric field. ### Phonon drag Main article: Phonon drag Phonons are not always in local thermal equilibrium; they move against the thermal gradient. They lose momentum by interacting with electrons (or other carriers) and imperfections in the crystal. If the phonon-electron interaction is predominant, the phonons will tend to push the electrons to one end of the material, hence losing momentum and contributing to the thermoelectric field. This contribution is most important in the temperature region where phonon-electron scattering is predominant. This happens for $T \approx {1 \over 5} \theta_\mathrm{D} ,$ where $\theta_D$ is the Debye temperature. At lower temperatures there are fewer phonons available for drag, and at higher temperatures they tend to lose momentum in phonon-phonon scattering instead of phonon-electron scattering. 
This region of the thermopower-versus-temperature function is highly variable under a magnetic field.[citation needed] ### Relationship with entropy The thermopower or Seebeck coefficient, represented by S, measures the magnitude of the thermoelectric voltage induced in response to a temperature difference across a material; it also corresponds to the entropy per charge carrier in the material.[5] S has units of V/K, though μV/K is more common. Superconductors have S = 0, since their charge carriers transport no entropy. If one leg of a thermocouple is superconducting, the measured thermopower of the couple is therefore entirely due to the other material, which allows a direct measurement of that material's absolute thermopower. ## Applications See also: Thermoelectric materials ### Thermoelectric generators Main article: Thermoelectric generator The Seebeck effect is used in thermoelectric generators, which function like heat engines, but are less bulky, have no moving parts, and are typically more expensive and less efficient. They have a use in power plants for converting waste heat into additional electrical power (a form of energy recycling), and in automobiles as automotive thermoelectric generators (ATGs) for increasing fuel efficiency. Space probes often use radioisotope thermoelectric generators with the same mechanism but using radioisotopes to generate the required heat difference. Commercially available examples can be found in self-powered fans and chargers designed for use on wood stoves.[6][7][8] ### Peltier effect Main article: Thermoelectric cooling The Peltier effect can be used to create a refrigerator which is compact and has no circulating fluid or moving parts; such refrigerators are useful in applications where their advantages outweigh the disadvantage of their very low efficiency. 
### Temperature measurement Main article: Thermocouple Thermocouples and thermopiles are devices that use the Seebeck effect to measure the temperature difference between two objects: one junction is held at the instrument (voltmeter) end and the other serves as the probe. The temperature at the instrument end is measured separately, and combining it with the thermocouple voltage yields the absolute temperature at the probe; this is known as cold junction compensation. ## See also • Joule heating • Pyroelectricity – the creation of an electric polarization in a crystal after heating/cooling. • Nernst and Ettingshausen effects – special thermoelectric effects in a magnetic field. • Thermoelectric materials • Thermoelectric cooling • Thermoelectric generator • Thermopile • Thermopower ## References 1. As the "figure of merit" approaches infinity, the Peltier–Seebeck effect can drive a heat engine or refrigerator at closer and closer to the Carnot efficiency. Disalvo, F. J. (1999). "Thermoelectric Cooling and Power Generation". Science 285 (5428): 703–6. doi:10.1126/science.285.5428.703. PMID 10426986.  Any device that works at the Carnot efficiency is thermodynamically reversible, a consequence of classical thermodynamics. 2. The voltage in this case does not refer to electric potential but rather the 'voltmeter' voltage $V = -\mu/e$, where $\mu$ is the Fermi level. 3. ^ a b c "A.11 Thermoelectric effects". Eng.fsu.edu. 2002-02-01. Retrieved 2013-04-22. 4. Cutler, Melvin; Mott, N. (1969). "Observation of Anderson Localization in an Electron Gas". Physical Review 181 (3): 1336. doi:10.1103/PhysRev.181.1336. 5. Rockwood, Alan L. (1984). "Relationship of thermoelectricity to electronic entropy". Phys. Rev. A 30 (5): 2843–4. Bibcode:1984PhRvA..30.2843R. doi:10.1103/PhysRevA.30.2843. 6. "Thermoelectric Generator For Sale". Thermoelectric-generator.com. Retrieved 2013-04-22. ## Further reading • Besançon, Robert M. (1985). The Encyclopedia of Physics, Third Edition. Van Nostrand Reinhold Company. ISBN 0-442-25778-3. • Rowe, D. M., ed. 
(2006). Thermoelectrics Handbook: Macro to Nano. Taylor & Francis. ISBN 0-8493-2264-2. • Ioffe, A.F. (1957). Semiconductor Thermoelements and Thermoelectric Cooling. Infosearch Limited. ISBN 0-85086-039-3. • Thomson, William (1851). "On a mechanical theory of thermoelectric currents". Proc. Roy. Soc. Edinburgh: 91–98.
http://mathhelpforum.com/differential-geometry/79245-principles-mathematical-analysis-ch2-basic-topology.html
# Thread: 1. ## Principles of Mathematical Analysis - Ch2 Basic Topology Hi, I am reading Rudin's book (3rd edition). I encountered some problems in Theorem 2.20, page 32. Theorem 2.20 If p is a limit point of a set E, then every neighborhood of p contains infinitely many points of E. By intuition or imagination, I can understand the theorem, but I just don't understand why it says, "The neighborhood Nr(p) contains no point q of E such that p=/=q (not equal), so that p is not a limit point of E." in the proof. I think it would be that Nr(p) contains some q when r is in some range, but for other values of r, there is no q of E such that p=/=q. In other words, not every neighborhood of p contains a q of E such that p=/=q. Is there anyone who has the book on hand? 2. So he has supposed there is only a finite number of points in a neighborhood around a limit point of the set, and basically taken a smaller neighborhood such that those points aren't there anymore. Since we don't know a priori whether or not p is an element of E, the statement you are questioning is simply stating we exhibit a neighborhood of p with no elements of E other than possibly p, which contradicts the definition of a limit point. 3. If there are only a finite number of points of E in a neighborhood of p, then there is a closest such point. Take $\epsilon$ to be half that distance. 4. Originally Posted by siclar So he has supposed there is only a finite number of points in a neighborhood around a limit point of the set, and basically taken a smaller neighborhood such that those points aren't there anymore. Since we don't know a priori whether or not p is an element of E, the statement you are questioning is simply stating we exhibit a neighborhood of p with no elements of E other than possibly p, which contradicts the definition of a limit point. So he took another r' smaller than the min(r)? Because by the text I couldn't tell. 
As for the statement, it obviously violates the definition of a limit point. I have no idea why I made it. I guess I was just so confused by the r. Thank you very much! Now I know what my problem was! 5. Originally Posted by HallsofIvy If there are only a finite number of points of E in a neighborhood of p, then there is a closest such point. Take $\epsilon$ to be half that distance. Thank you very much. I got it!
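For the record, the contradiction argument assembled in this thread can be written out in a few lines (a sketch following the hints above; the halving follows HallsofIvy's suggestion, though the minimum itself also works):

```latex
Suppose, for contradiction, that $p$ is a limit point of $E$ but some
neighborhood $N_r(p)$ contains only finitely many points of $E$.
If $N_r(p)$ contains no point of $E$ other than possibly $p$, we already
contradict the definition of a limit point. Otherwise, let
$q_1,\dots,q_n$ be the points of $N_r(p) \cap E$ distinct from $p$,
and set
\[
  r' := \tfrac{1}{2}\,\min_{1 \le m \le n} d(p, q_m) > 0 .
\]
Then the smaller neighborhood $N_{r'}(p)$ contains no point $q \in E$
with $q \neq p$, so $p$ is not a limit point of $E$, a contradiction.
Hence every neighborhood of a limit point of $E$ contains infinitely
many points of $E$.
```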
http://math.stackexchange.com/questions/tagged/complex-geometry+general-topology
# Tagged Questions 1 answer, 50 views ### Fundamental group of the following disc What is the fundamental group of the following space in $\mathbf C^n$? This is the topological space given by \{(x_1,\ldots,x_n)\in \mathbf C^n-\{0\} : \vert x_1\vert < 1, \ldots, \vert x_n\vert ... 2 answers, 126 views ### Why is $x^2+y^2=1$, where $x$ and $y$ are complex numbers, a sphere? I've heard $x^2 + y^2$ = 1, where $x$, $y$ are complex numbers, is supposed to be a sphere with two points removed, or also a cylinder. The problem is I've been trying to wrap my head around this for ... 1 answer, 159 views ### A consequence of Runge's theorem I'd like to have a reference for the proof of the following fact of complex analysis. I think it follows from Runge's theorem, but I don't know how to prove it. Fact. Let $U \subseteq V \subseteq$ ... 1 answer, 69 views ### The $0$-section of a sheaf I ran into a problem with a definition in Complex Analysis as follows: A sheaf $\mathscr S$ over a paracompact Hausdorff space $X$ with a map $f: \mathscr S \to X$ such that (1) $f$ is surjective and ... 0 answers, 97 views ### Definition of a complex analytic space A complex analytic space is a topological space (say, Hausdorff and second countable) such that each point has an open neighborhood homeomorphic to some zero set $V(f_1,\ldots,f_k)$ of finitely many ...
http://terrytao.wordpress.com/tag/coboundaries/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘coboundaries’ tag. ## Cohomology for dynamical systems 21 December, 2008 in expository, math.AT, math.DS | Tags: coboundaries, cocycles, cohomology, dynamical systems | by Terence Tao | 15 comments A dynamical system is a space X, together with an action $(g,x) \mapsto gx$ of some group $G = (G,\cdot)$.  [In practice, one often places topological or measure-theoretic structure on X or G, but this will not be relevant for the current discussion.  In most applications, G is an abelian (additive) group such as the integers ${\Bbb Z}$ or the reals ${\Bbb R}$, but I prefer to use multiplicative notation here.]  A useful notion in the subject is that of an (abelian) cocycle; this is a function $\rho: G \times X \to U$ taking values in an abelian group $U = (U,+)$ that obeys the cocycle equation $\rho(gh, x) = \rho(h,x) + \rho(g,hx)$ (1) for all $g,h \in G$ and $x \in X$.  [Again, if one is placing topological or measure-theoretic structure on the system, one would want $\rho$ to be continuous or measurable, but we will ignore these issues.] The significance of cocycles in the subject is that they allow one to construct (abelian) extensions or skew products $X \times_\rho U$ of the original dynamical system X, defined as the Cartesian product $\{ (x,u): x \in X, u \in U \}$ with the group action $g(x,u) := (gx,u + \rho(g,x))$.  (The cocycle equation (1) is needed to ensure that one indeed has a group action, and in particular that $(gh)(x,u) = g(h(x,u))$.)  This turns out to be a useful means to build complex dynamical systems out of simpler ones.  (For instance, one can build nilsystems by starting with a point and taking a finite number of abelian extensions of that point by a certain type of cocycle.) 
A special type of cocycle is a coboundary; this is a cocycle $\rho: G \times X \to U$ that takes the form $\rho(g,x) := F(gx) - F(x)$ for some function $F: X \to U$.  (Note that the cocycle equation (1) is automatically satisfied if $\rho$ is of this form.)  An extension $X \times_\rho U$ of a dynamical system by a coboundary $\rho(g,x) := F(gx) - F(x)$ can be conjugated to the trivial extension $X \times_0 U$ by the change of variables $(x,u) \mapsto (x, u - F(x))$.

While every coboundary is a cocycle, the converse is not always true.  (For instance, if X is a point, the only coboundary is the zero function, whereas a cocycle is essentially the same thing as a homomorphism from G to U, so in many cases there will be more cocycles than coboundaries.  For a contrasting example, if X and G are finite (for simplicity) and G acts freely on X, it is not difficult to see that every cocycle is a coboundary.)  One can measure the extent to which this converse fails by introducing the first cohomology group $H^1(G,X,U) := Z^1(G,X,U) / B^1(G,X,U)$, where $Z^1(G,X,U)$ is the space of cocycles $\rho: G \times X \to U$ and $B^1(G,X,U)$ is the space of coboundaries (note that both spaces are abelian groups).  In my forthcoming paper with Vitaly Bergelson and Tamar Ziegler on the ergodic inverse Gowers conjecture (which should be available shortly), we make substantial use of some basic facts about this cohomology group (in the category of measure-preserving systems) that were established in a paper of Host and Kra.

The above terminology of cocycles, coboundaries, and cohomology groups of course comes from the theory of cohomology in algebraic topology.  Comparing the formal definitions of cohomology groups in that theory with the ones given above, there is certainly quite a bit of similarity, but in the dynamical systems literature the precise connection does not seem to be heavily emphasised.
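The claim that the cocycle equation is automatic for coboundaries is easy to check mechanically as well as by hand. Here is a small numerical sanity check (my own sketch, with the group, space, and function F all chosen arbitrarily), taking $G = {\Bbb Z}$ acting on $X = {\Bbb Z}/5{\Bbb Z}$ by translation and $U = {\Bbb Z}$:

```python
# Toy example (all choices arbitrary): G = Z acts on X = Z/5Z by
# translation, U = Z, and F: X -> Z is an arbitrary function.  Verify
# that the coboundary rho(g, x) = F(gx) - F(x) satisfies the cocycle
# equation rho(gh, x) = rho(h, x) + rho(g, hx) (written additively in
# g and h here, since G = Z).

N = 5                       # X = Z/5Z
F = [3, 1, 4, 1, 5]         # arbitrary values of F on X

def act(g, x):
    """The translation action of g in Z on x in Z/5Z."""
    return (x + g) % N

def rho(g, x):
    """The coboundary rho(g, x) = F(gx) - F(x)."""
    return F[act(g, x)] - F[x]

for g in range(-10, 10):
    for h in range(-10, 10):
        for x in range(N):
            assert rho(g + h, x) == rho(h, x) + rho(g, act(h, x))
print("cocycle equation verified on all tested (g, h, x)")
```

No choice of $F$ can break the assertion, since both sides telescope to $F(ghx) - F(x)$.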
The purpose of this post is to record the precise fashion in which dynamical systems cohomology is a special case of cochain complex cohomology from algebraic topology, and more specifically is analogous to singular cohomology (and can also be viewed as the group cohomology of the space of scalar-valued functions on X, when viewed as a G-module); this is not particularly difficult, but I found it an instructive exercise (especially given that my algebraic topology is extremely rusty), though perhaps this post is more for my own benefit than for anyone else.
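One can also verify, on the same kind of toy example (again my own sketch, with all choices arbitrary), the earlier claim that an extension by a coboundary is conjugate to the trivial extension via $(x,u) \mapsto (x, u - F(x))$:

```python
# Toy example (all choices arbitrary): check that the change of
# variables phi(x, u) = (x, u - F(x)) conjugates the extension by the
# coboundary rho(g, x) = F(gx) - F(x) to the trivial extension, i.e.
# phi(g.(x, u)) = g.phi(x, u) with the skew action on the left and the
# trivial action on the right.  Here G = Z acts on X = Z/5Z by
# translation, and U = Z/7Z.

NX, NU = 5, 7
F = [2, 6, 1, 4, 0]                      # arbitrary F: X -> U

def act(g, x):
    return (x + g) % NX                  # the G-action on X

def rho(g, x):
    return (F[act(g, x)] - F[x]) % NU    # the coboundary

def skew(g, xu):                         # action on X x_rho U
    x, u = xu
    return (act(g, x), (u + rho(g, x)) % NU)

def trivial(g, xu):                      # action on X x_0 U
    x, u = xu
    return (act(g, x), u)

def phi(xu):                             # the change of variables
    x, u = xu
    return (x, (u - F[x]) % NU)

for g in range(-8, 8):
    for x in range(NX):
        for u in range(NU):
            assert phi(skew(g, (x, u))) == trivial(g, phi((x, u)))
print("the coboundary extension is conjugate to the trivial one")
```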
http://mathhelpforum.com/calculus/88735-exponential-differentiation.html
# Thread:

1. ## exponential differentiation

Hi, I need to show that $y=(Ax+B)e^{3x}$ is a solution to $y''-6y'+9y=0$. I'm getting

$y'=e^{3x}(3Ax+3B+A)$

$y''=3e^{3x}(2A+3Ax+3B)$

When I sub these values back into the condition above it's not working out. Thank you

2. Originally Posted by slaypullingcat
Hi, I need to show that $y=(Ax+B)e^{3x}$ is a solution to $y''-6y'+9y=0$. I'm getting $y'=e^{3x}(3Ax+3B+A)$, $y''=3e^{3x}(2A+3Ax+3B)$. When I sub these values back into the condition above it's not working out. Thank you

Show some details - I got it to work!

3. Your calculations for $y'$ and $y''$ are correct. I derived:

$\begin{aligned} y''-6y'+9y&=(9Ax+9B+6A)e^{3x}\\ &\quad-(18Ax+18B+6A)e^{3x}\\ &\quad+(9Ax+9B)e^{3x}\\ &=0. \end{aligned}$

4. Your derivatives are correct, and the proposed y is indeed a solution. Check your calculations again (hint: factor out $e^{3x}$).

5. Did it work with the first and second derivative values I gave? Or did I differentiate incorrectly?

6. Originally Posted by slaypullingcat
Did it work with the first and second derivative values I gave? Or did I differentiate incorrectly?

Like the others have said - the derivatives are correct. If you show some work, maybe we can show you where you messed up
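For what it's worth, the whole check can be done mechanically with a computer algebra system. Here is a short sympy script (not from the thread) that confirms both derivatives from post 1 and the cancellation from post 3:

```python
# Not from the thread: verify with sympy that y = (A*x + B)*exp(3*x)
# solves y'' - 6*y' + 9*y = 0 for arbitrary constants A and B.
from sympy import symbols, exp, diff, simplify

x, A, B = symbols('x A B')
y = (A*x + B) * exp(3*x)

yp = diff(y, x)        # should match e^{3x}(3Ax + 3B + A) from post 1
ypp = diff(y, x, 2)    # should match 3e^{3x}(2A + 3Ax + 3B) from post 1
assert simplify(yp - exp(3*x)*(3*A*x + 3*B + A)) == 0
assert simplify(ypp - 3*exp(3*x)*(2*A + 3*A*x + 3*B)) == 0

# The residual of the differential equation vanishes identically:
assert simplify(ypp - 6*yp + 9*y) == 0
print("y'' - 6y' + 9y simplifies to 0")
```

So the original poster's derivatives were right all along; the error had to be in the final substitution.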
http://mathhelpforum.com/advanced-algebra/87989-group-theory-sylow-theory-simple-groups.html
# Thread:

1. ## Group Theory - Sylow Theory and simple groups

I'm unsure on the following points:

(i) Prove that if a group G has order $p^nq$, where p and q are prime numbers and $p > q$, then G contains a unique Sylow p-subgroup. Hence conclude that there are no simple groups of orders 55, 57 or 58.

(ii) Prove that every group of order 59 is simple.

(iii) Prove that there are no simple groups of order 56.

(iv) Give an example of a simple group of order 60, as well as an example of a non-simple group of order 60.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For (i) I know that the number of Sylow p-subgroups of G divides the order of G and that it has the form $pk+1$ ($k \geq 0$). I figured $pk+1$ does not divide $p^nq$ for $k \geq 1$ (is this right? If so, why?) so there must be only one (unique) Sylow p-subgroup. $55=11\cdot 5$, $57=19\cdot 3$, $58=29\cdot 2$. If a group is not simple then it has a proper normal subgroup. So groups of these orders have a unique Sylow p-subgroup and for some reason this means that the Sylow p-subgroup in question is normal (proper).

For (ii) 59 is a prime so for some reason there is no unique Sylow subgroup (no subgroup by Lagrange). My question is this: If there is no Sylow subgroup then is there no subgroup in general? So therefore there is no proper subgroup of a group of order 59, so no proper normal subgroup, so it's simple?

For (iii) $56=2^3\cdot 7$, so there exists a subgroup in this group, say G, of order 2, 4, 8, and 7. I guess I need to prove that G contains a unique Sylow p-subgroup but I don't know how.

For (iv) A simple group of order 60 is $A_5$. Is a non-simple group of order 60 the dihedral group $D_{30}$, if so why is this non-simple?

Thanks for any help!

2. 2) A group of prime order is always cyclic. Lagrange's theorem suffices to show it is simple because the order of a subgroup must divide the order of the group, so the only subgroups are the trivial one and the group itself.
Sylow's theorem guarantees the existence of a Sylow p-subgroup for each prime dividing the order of the group. It is just that in this case the only prime dividing the group's order is 59. So it is the only subgroup.

4) $D_{30}$ has a cyclic subgroup of order 30 (the group of rotations), and any subgroup of index 2 is normal. So it has a normal subgroup.

1 and 3 are easily done if you actually understand what Sylow's theorem is.

3. Originally Posted by Jason Bourne
I'm unsure on the following points: (i) Prove that if a group G has order $p^nq$, where p and q are prime numbers and $p > q$, then G contains a unique Sylow p-subgroup. Hence conclude that there are no simple groups of orders 55, 57 or 58. (ii) Prove that every group of order 59 is simple. (iii) Prove that there are no simple groups of order 56. (iv) Give an example of a simple group of order 60, as well as an example of a non-simple group of order 60.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For (i) I know that the number of Sylow p-subgroups of G divides the order of G and that it has the form $pk+1$ ($k \geq 0$). I figured $pk+1$ does not divide $p^nq$ for $k \geq 1$ (is this right? If so, why?) so there must be only one (unique) Sylow p-subgroup. $55=11\cdot 5$, $57=19\cdot 3$, $58=29\cdot 2$. If a group is not simple then it has a proper normal subgroup. So groups of these orders have a unique Sylow p-subgroup and for some reason this means that the Sylow p-subgroup in question is normal (proper).

$kp + 1 \mid p^n q$ and $\gcd(kp + 1, p)=1,$ thus we must have $kp+1 \mid q,$ which is possible only if $kp+1=1,$ because $p > q.$

For (ii) 59 is a prime so for some reason there is no unique Sylow subgroup (no subgroup by Lagrange). My question is this: If there is no Sylow subgroup then is there no subgroup in general? So therefore there is no proper subgroup of a group of order 59, so no proper normal subgroup, so it's simple?
by Lagrange, a group of prime order has no proper non-trivial subgroup and hence no proper non-trivial normal subgroup. so it's simple.

For (iii) $56=2^3\cdot 7$, so there exists a subgroup in this group, say G, of order 2, 4, 8, and 7. I guess I need to prove that G contains a unique Sylow p-subgroup but I don't know how.

suppose the Sylow 7-subgroup is not normal in G. then by Sylow's theorem, we must have 8 Sylow 7-subgroups. since every two Sylow 7-subgroups intersect at the identity element only, the union of these 8 Sylow 7-subgroups contains exactly 49 elements. so we'll have 56 - 49 = 7 elements left and hence we can have only one Sylow 2-subgroup, i.e. the Sylow 2-subgroup is normal in G.

For (iv) A simple group of order 60 is $A_5$. Is a non-simple group of order 60 the dihedral group $D_{30}$, if so why is this non-simple? Thanks for any help!

$D_n$ has an element, say $a,$ of order $n$ (see the definition of a dihedral group again!). thus $\langle a \rangle$ is a subgroup of index 2 in $D_n,$ and so it's normal. hence $D_n$ is not simple for any $n > 1.$

4. Originally Posted by NonCommAlg
suppose the Sylow 7-subgroup is not normal in G. then by Sylow's theorem, we must have 8 Sylow 7-subgroups. since every two Sylow 7-subgroups intersect at the identity element only, the union of these 8 Sylow 7-subgroups contains exactly 49 elements. so we'll have 56 - 49 = 7 elements left and hence we can have only one Sylow 2-subgroup, i.e. the Sylow 2-subgroup is normal in G.

Thanks, this helps very much. The argument relies on supposing that either of the Sylow p-subgroups is not normal in G, can we know for certain which is actually the case?

5. Originally Posted by Jason Bourne
Thanks, this helps very much. The argument relies on supposing that either of the Sylow p-subgroups is not normal in G, can we know for certain which is actually the case?

either the Sylow 7- or the Sylow 2-subgroup can be normal. it depends on the group.
what i proved was that at least one of them is normal and this means the group cannot be simple.

6. Let $G$ be the general linear group $GL(2,7)$ consisting of all 2 $\times$ 2 matrices with entries from $\bold{Z}_{7}$ which have a non-zero determinant. I know that the order of $G$ is 2016. Define the sets:

$H = \{ \left(\begin{array}{cc}1&a\\0&1\end{array}\right) : a \in \bold{Z}_{7} \}$

$K = \{ \left(\begin{array}{cc}1&0\\a&1\end{array}\right) : a \in \bold{Z}_{7} \}$

(i) Prove that both H and K are Sylow 7-subgroups of G.

(ii) Find a matrix $A \in G$ such that $A^{-1}HA=K$.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For (i) should I just show that H and K are subgroups of G and that they have order 7, which divides 2016?

For (ii) I know that such a matrix A exists by one of Sylow's theorems but I don't know how to get there.

7. ## Partial solution

1) $2016=2\cdot 7\cdot 144=2\cdot 7\cdot 12^2=2^5\cdot 3^2\cdot 7$. So any subgroup of order $7^1$ is a Sylow 7-subgroup. Those look like they have order 7. I trust that you can show why they are in fact a subgroup.

2) I think you are just gonna have to play with this, I am not sure of a good way to find this. I know if you do the JCF (Jordan canonical form) of K you would get $\left(\begin{array}{cc}1&1\\0&1\end{array}\right)$, which gets you close, but I am afraid I do not know an elegant way of doing it other than some brute force.

8. Originally Posted by Gamma
1) $2016=2\cdot 7\cdot 144=2\cdot 7\cdot 12^2=2^5\cdot 3^2\cdot 7$. So any subgroup of order $7^1$ is a Sylow 7-subgroup. Those look like they have order 7. I trust that you can show why they are in fact a subgroup. 2) I think you are just gonna have to play with this, I am not sure of a good way to find this. I know if you do the JCF of K you would get $\left(\begin{array}{cc}1&1\\0&1\end{array}\right)$, which gets you close, but I am afraid I do not know an elegant way of doing it other than some brute force.

Yeah I guess so. I'm not familiar with "JCF".

9. Got it. multiply by

$\left(\begin{array}{cc}0&1\\1&0\end{array}\right)$

It is its own inverse.

10. Hi Jason Bourne.
Originally Posted by Jason Bourne
For (ii) 59 is a prime so for some reason there is no unique Sylow subgroup (no subgroup by Lagrange). My question is this: If there is no Sylow subgroup then is there no subgroup in general? So therefore there is no proper subgroup of a group of order 59, so no proper normal subgroup, so it's simple?

Of course there is a unique Sylow 59-subgroup, namely the whole group itself. It is just that this information is useless in showing that the group is simple, so for this question you need to use Lagrange’s theorem instead. As a general rule, every finite group of prime order is simple.

Originally Posted by Jason Bourne
For (iv) A simple group of order 60 is $A_5$. Is a non-simple group of order 60 the dihedral group $D_{30}$, if so why is this non-simple?

You could also have chosen the cyclic group of order 60 as an example of a non-simple group of order 60. That would have been a more simple non-simple group.

11. Originally Posted by Jason Bourne
(ii) Find a matrix $A \in G$ such that $A^{-1}HA=K$.

Hi Jason Bourne.

The elements in $H$ and $K$ have determinant 1; $\therefore\ \det(A)=\pm1.$

Let’s take $+1$ first. Let $A=\begin{pmatrix}g&h\\j&k\end{pmatrix}$ where $gk-hj=1$ and $\begin{pmatrix}1&x\\0&1\end{pmatrix}\in H.$ Then

$\begin{pmatrix}g&h\\j&k\end{pmatrix}\begin{pmatrix}1&x\\0&1\end{pmatrix}\begin{pmatrix}k&-h\\-j&g\end{pmatrix}$

$=\ \begin{pmatrix}g&h\\j&k\end{pmatrix}\begin{pmatrix}k-jx&gx-h\\-j&g\end{pmatrix}$

$=\ \begin{pmatrix}1-gjx&g^2x\\-j^2x&1+gjx\end{pmatrix}$

If this is to be in $K$ then $g=0.$ $\therefore\ gk-hj=1\ \implies\ j=-h^{-1}.$ Hence $A=\begin{pmatrix}0&h\\-h^{-1}&k\end{pmatrix}.$

If $\det(A)=-1,$ then we would have $A=\begin{pmatrix}0&h\\h^{-1}&k\end{pmatrix}.$

So any $A=\begin{pmatrix}0&h\\\pm h^{-1}&k\end{pmatrix}$ where $h,k\in\mathbb Z_7$ and $h\ne0$ will conjugate $H$ into $K.$

12. I have another question: Let p be a prime. Prove that every group of order $p^2$ is abelian.
~~~~~~~~~~~~~~~~~~~~

Is this done by saying something like: the Sylow p-subgroup is unique, then it is normal in the group, and if this group has a normal subgroup then it is abelian?

13. ## Class Equation

Use the class equation to see that the center is nontrivial. Then consider the various possibilities for the factor group $G/Z(G)$. By Lagrange, $|Z(G)|=1,p,p^2$.

1) Can't be 1 by the class equation.

$p^2$) This means the center is the whole group so it is abelian.

$p$) This is a really elementary proof and it is in pretty much every book, if you can't get this let us know and I am sure someone will write it up explicitly. If $G/Z(G)$ is cyclic, then G is abelian. $|G/Z(G)|=\frac{p^2}{p}=p$ so it is cyclic.

14. Originally Posted by Gamma
Use the class equation to see that the center is nontrivial. Then consider the various possibilities for the factor group $G/Z(G)$. By Lagrange, $|Z(G)|=1,p,p^2$. 1) Can't be 1 by the class equation. $p^2$) This means the center is the whole group so it is abelian. $p$) This is a really elementary proof and it is in pretty much every book, if you can't get this let us know and I am sure someone will write it up explicitly. If $G/Z(G)$ is cyclic, then G is abelian. $|G/Z(G)|=\frac{p^2}{p}=p$ so it is cyclic.

so the class equation is something like $|G| = |Z(G)| + \sum_{i=1}^{r} [G:N(x_i)]$. I don't think I understand the Class Equation properly. So I don't have this proof anywhere, $G/Z(G)$ is cyclic so this implies G is abelian, how?

15. ## Class Equation

Sorry I don't have much time to write it out myself for you, I have to get to a final exam. basically p divides |G| and each term of the sum, but if |Z(G)|=1 this would be impossible (move the sum over to the other side and p divides the left side, but not the right). Z(G) is a subgroup, so by Lagrange its order has to divide $p^2$ and we see it is not 1. Here is a proof of the G/Z(G) cyclic thing. http://crypto.stanford.edu/pbc/notes...quotient.xhtml Sorry again for the brevity.
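The conjugation claim from posts 9 and 11, that the swap matrix carries $H$ onto $K$ inside $GL(2,7)$, is easy to confirm by brute force. Here is a short script (not from the thread) doing exactly that:

```python
# Not from the thread: brute-force check, over Z_7, that the swap
# matrix A = [[0, 1], [1, 0]] from post 9 is its own inverse and that
# A^{-1} H A = K, where H (resp. K) is the set of upper (resp. lower)
# unitriangular matrices from post 6.

P = 7

def matmul(X, Y):
    """2x2 matrix product with entries reduced mod 7."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

A = [[0, 1], [1, 0]]
assert matmul(A, A) == [[1, 0], [0, 1]]       # A is its own inverse

H = [[[1, a], [0, 1]] for a in range(P)]
K = [[[1, 0], [a, 1]] for a in range(P)]

conjugates = [matmul(matmul(A, h), A) for h in H]   # A^{-1} h A, with A^{-1} = A
assert conjugates == K      # even elementwise: the parameter a is preserved
print("A^{-1} H A = K in GL(2, 7)")
```

Conjugating by the swap matrix simply exchanges the two off-diagonal entries here, which is why the correspondence is elementwise.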
http://mathoverflow.net/questions/96661/the-richardson-theorem-and-the-base-identities-problem/96801
## the Richardson theorem and the base identities problem

In the fields related to school mathematics there is some activity on proving (or disproving) deducibility/decidability for some classes of school identities. In particular,

1) In logic, the base identities problem was considered not long ago (this term is a translation from Russian; I am not sure that it is correct). The problem was the following. Let $N$ be the set of positive integers, and $\mathcal K$ the class of all functions from $N^k$ into $N$ ($k$ runs over $N$) which can be represented as compositions of the usual algebraic operations $x+y$, $x\cdot y$ and $x^y$. Let us call a base of identities in $\mathcal K$ a set $B$ of identities for functions in ${\mathcal K}$ such that any identity for functions in $\mathcal K$ can be deduced from $B$. The question was: does there exist a finite base of identities for ${\mathcal K}$? This question appeared when A. Wilkie gave a counterexample to the Tarski high school algebra problem (where a list of identities was suggested by Tarski, and the question was whether this list is a base). In the 1980s R. Gurevich proved that there is no finite base of identities, so the base identities problem is settled in the negative. At the same time, as far as I understand, R. Gurevich proved that instead of a finite base of identities there exists a recursive base of identities, and as far as I understand this is an example of what logicians call decidability.
2) In computer algebra there is the so-called Richardson theorem, which states that if $\mathcal R$ is a class of expressions generated by

-- the rational numbers and the two real numbers $\pi$ and $\ln 2$,

-- the variable $x$,

-- the operations of addition, multiplication, and composition, and

-- the sine, exponential, and absolute value functions,

then for $F\in {\mathcal R}$ the predicate $F=0$ is recursively undecidable.

My question is whether these two fields are related to each other. Is decidability for Richardson the same as decidability for logicians? If yes, then exactly which logical system does Richardson mean?

I am not a specialist here; I am interested in this because I am writing a textbook on mathematical analysis (I am sorry, this happens sometimes with mathematicians), and when describing elementary functions I faced a problem analogous to the base identities problem above, but the difference is that the list of operations (and elementary functions) is wider (for example, both $x-y$ and $x^y$ are included), and as a consequence the arising functions are not defined everywhere on $R$ (one can look at the details at page 197 in the draft of the first volume of my textbook -- unfortunately, it is in Russian). This is strange, but I can't find anyone who could explain this to me. I asked this question in sci.math.research some time ago, but the problem of overcoming the Kevin Buzzard resistance turned out to be undecidable for me there. So I would be much obliged to MO if my question could hang here for some time so that, perhaps, some specialists in logic could clarify something for me.

- Your formulation of Richardson's Theorem is wrong, since the set $E$ of constant functions given by rational numbers is a counterexample to your claim. After reading the Wikipedia page it seems clear to me that you have forgotten to write down some extra hypothesis on the set $E$.
– boumol May 11 2012 at 13:50 It is the same notion but note that in Richardson's theorem you need also the function sin(x) etc., see Wikipedia: en.m.wikipedia.org/wiki/Richardson's_theorem#section_1 – Bjørn Kjos-Hanssen May 11 2012 at 13:53 @boumol, @Bjørn Kjos-Hanssen: Yes, excuse me, I have just corrected the formulation. – Sergei Akbarov May 11 2012 at 15:34 I recommend you ask Mark Sapir a very specific question regarding your interest. I think there is an issue that you are trying to state but have not yet. My belief is that Mark's Russian is very good and his knowledge of decidability more than sufficient to resolve your stated and unstated issues. If you are able to pose a well formed question to him, you may get a very good answer very quickly. Gerhard "Ask Me About System Design" Paseman, 2012.05.11 – Gerhard Paseman May 12 2012 at 0:19 @Gerhard: If I were a specialist, I could of course formulate a question in such a way that it would be much easier for another specialist to give an answer, since there would not be necessity to explain elementary things to non-specialists. But in this case there would not be a necessity for me to ask elementry questions, since, being a specialist, I could understand everything independently, without other specialists. – Sergei Akbarov May 12 2012 at 8:03 ## 3 Answers I am not a professional logician, but I have studied mathematical logic, and in past work I used the rough notion (as have many before me) that if you can write a Pascal program to decide correctly the yes or no answer to a problem given the finite set of parameters as input, then the problem or issue is decidable. Otherwise it isn't. Taken at this level, I see both uses of decidability as the same. In one, there is a finite specification which can be used to test whether an identity is in the one set, in the other there is no such program to test whether an equation/identity is in the other set. 
(There are technical arguments to be made as to which machine model, complexity, degree of undecidability if one looks at e.g. Turing equivalent degrees, and so on. I am setting aside all these complexities and ways to distinguish the two uses of decidability, since they seem to me irrelevant to the basic intent of your question.)

I can see both problems as problems of clone theory. Again roughly, the first problem talks about whether there are a finite number of relations in the generators, in addition to the general relations for a clone, that can be used to describe the collection of equivalence classes of terms (there are not, but there is a recursive set of such relations). The second talks about whether the set of terms in the clone equivalent to the term 0 is describable by a computer program; according to Richardson, it is not. There are other ways to recast the problems to see some similarities and highlight the differences; it depends on just what you want to see.

EDIT: Another view of many issues of decidability is this one, borrowed and simplified from one used in complexity studies in computer science. If you have a decision or labelling problem, where you have a set S of instances and for each instance you want to say "yes, instance I has property P" or "no, I does not have P", you take a somewhat Platonist viewpoint and say "I will group those instances which have P into this subset R", and then you end up with two sets, S and a proper subset R. Then you shift to a constructivist mode and ask "Is there a way I can tell quickly, or even mechanically, when a member of S is also a member of R or not?" Then you switch to programmer/computer scientist mode and say "Let's see if I can either a) write a program to determine if an instance is a member of R, or b) translate the domain to one where I can encode the halting problem, so that determining membership in R solves the halting problem".
If the set R is recursive inside S, then a) is possible in theory, but may be difficult or impossible in practice, depending on the complexity of the set R. If the set R is not recursive in S, then b) may or may not be possible, but is usually the first step one tries. How does one show R recursive in S or not? One takes an encoding, which is an injective and computable map from S into the natural numbers (or computably functional equivalent), and then sees if the image of R under this map is a recursive subset of the natural numbers. So this and the previous paragraph are a long winded way of saying that most issues of decidability involve coding the problem up in a way as to move the question into the realm of subsets of natural numbers, and using recursion theory or diagonalization or something to determine the status of the image set. For me, I picture the set of identities or the set of terms as a set of numbers, each number colored with label or term or identity it represents, and I picture the subset with property P as a subset of integers which may or may not be a recursive subset. The set of identities satisfied by the real numbers with exponentiation , addition and multiplication is a set which has a logically equivalent, recursive, and non finite subset. The set of terms in the Richardson theorem which are equivalent to 0 is a nonrecursive subset of the set of all terms used in the context of the theorem. END EDIT - Gerhard, are you saying that these two questions are not connected with each other? – Sergei Akbarov May 11 2012 at 17:11 By the way, there are two ways to interpret "base" in this context: as in a basis or generating set of identities, and as in basic or simple as in high school knowledge is basic knowledge needed to help form knowledge coming from graduate studies. 
Gerhard "Ask Me About System Design" Paseman, 2012.05.11 – Gerhard Paseman May 11 2012 at 17:15

I think there is some commonality, but I do not think they are very closely related. In a broad study of decidability I would look at both problems, but I would not take the specific techniques of one problem and try to apply them to the other. For one, they have different outcomes: one has a Pascal program to describe it, the other does not. Gerhard "Ask Me About System Design" Paseman, 2012.05.11 – Gerhard Paseman May 11 2012 at 17:18

I thought I gave an accurate definition for base... – Sergei Akbarov May 11 2012 at 17:19

Which one has a Pascal program to describe it? – Sergei Akbarov May 11 2012 at 17:22

Note: I'm not actually familiar with either problem that you ask about, so I'm going by your description. Recursive base of identities means there is a computer program P such that given an identity I, running P will tell you in a finite amount of time whether I is in the base or not. P is called a "decision procedure". The Richardson problem being undecidable means something like: given an arbitrary program (Turing machine) P, you can encode the halting problem for P as an expression in $\mathcal R$. That is, you can write down a formula that is identically zero if and only if P halts. Since the halting problem is undecidable, there is no decision procedure for telling if such a formula in $\mathcal R$ is identically zero. That's sort of like Hilbert's tenth problem, where you can encode an arbitrary program P as a set of Diophantine equations that has a solution iff P halts. Again, since the halting problem is undecidable, there is no algorithm to tell whether an arbitrary Diophantine system has a solution.
I think the absolute value function being available in $\mathcal R$ may have something to do with the undecidability. In symbolic algebra, the Risch algorithm is a finite procedure for telling whether a given expression made from elementary functions and composition has a closed-form indefinite integral. But I seem to remember that if you add the absolute value function, the problem becomes undecidable. - I need somebody to talk about this. When reading texts on this topic I see a lot of unclear places. For example, according to definition - en.wikipedia.org/wiki/Recursive_set - recursive set must lie in the set of positive integers. When Richardson says that the set of true identities is not recursive, this means that the set of all identities is enumerated, I suppose... So there must be a standard procedure for numeration formulas, is it? What is this procedure? Or I don't understand something? – Sergei Akbarov May 12 2012 at 20:35 Sergei, this is a reply to your comment asking about enumerating formulas in $\mathcal R$. Sorry to post it as a separate answer but I no longer have the browser cookie to post it as a followup comment. You don't need a particular standardized enumeration, but just some computable mapping between formulas and natural numbers so that each formula gets a unique number. Such a numbering scheme is traditionally called a "Gödel numbering" and the numbers are called "Gödel numbers" because the idea was (I think) introduced in Gödel's landmark paper (1931) about the incompleteness theorem. A simple Gödel numbering scheme (similar to the one Gödel used) is like this: say the formulas are written in an "alphabet" whose "letters" are $\{\sigma_1,\sigma_2,\ldots\}$. Treat those as natural numbers the obvious way (i.e. $\sigma_k\mapsto k$). So a formula F might be written as $(F_1,F_2,\ldots F_n)$ where the $F_i$ are natural numbers. Then let $$N_F=2^{F_1}\cdot 3^{F_2} \cdot 5^{F_3} \cdots p_n^{F_n}$$ where $p_i$ is the $i$'th prime number. 
That is the Gödel number for F (under this particular scheme). It's pretty easy to see how to convert a formula to a number and back. Some numbers won't correspond to valid formulas, so treat them as identically zero, for example. Maybe you should read an introductory book on logic if you want more clarity about this stuff. There are some other threads suggesting them.

- Thank you, now I recall that at the university our lecturer said something similar. But I suppose it's too difficult to communicate with each other in this way. If you have time and forbearance, you can contact me by e-mail; it is indicated at the website that I gave on my page here on MathOverflow. Or you can just edit your answer and put the name of a book about all this there. – Sergei Akbarov May 12 2012 at 22:12
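The prime-power Gödel numbering described above is easy to implement. Here is a minimal sketch (my own, not from the answer) that encodes a sequence of positive symbol codes and decodes it back by reading off prime exponents:

```python
# Minimal sketch of the prime-power Goedel numbering above: a formula
# is a sequence of positive symbol codes F_1, ..., F_n, encoded as
# 2^{F_1} * 3^{F_2} * ... * p_n^{F_n}.  Decoding reads the exponents
# back off; it relies on every code being at least 1 (true for the
# numbering sigma_k -> k), so every prime up to p_n divides the result.

def primes():
    """Yield 2, 3, 5, ... by trial division (fine for short formulas)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(codes):
    """Goedel number of a sequence of positive integer symbol codes."""
    g, ps = 1, primes()
    for c in codes:
        g *= next(ps) ** c
    return g

def decode(g):
    """Recover the symbol sequence from the prime factorization of g."""
    out, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g, e = g // p, e + 1
        out.append(e)
    return out

formula = [3, 1, 4, 1, 5]              # hypothetical symbol codes
assert decode(encode(formula)) == formula
```

Numbers with a "gap" in their factorization (a skipped prime) simply fail to arise as codes of valid formulas, matching the remark that such numbers can be treated as identically zero.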
http://math.stackexchange.com/questions/181293/are-compacta-in-a-complete-infinite-dimensional-normed-space-nowhere-dense
# Are compacta in a complete infinite dimensional normed space nowhere dense?

Let $X$ be an infinite dimensional Banach space. I want to show that any compact subset $\varnothing\neq A\subset X$ is nowhere dense. I've been able to prove the statement for $X=(C[0,1],\|\cdot\|_\infty)$ by using an Arzelà-Ascoli argument. But this is not easy to generalize to an arbitrary infinite dimensional Banach space. -

## 1 Answer

1. In an infinite-dimensional normed space the (closed) unit ball is always non-compact: using Riesz's Lemma you can construct a sequence of unit vectors $(e_n)_{n=1}^\infty$ such that $\lVert e_n - e_m \rVert \geq 1/2$ for all $m \neq n$. This sequence can't have a convergent subsequence. It follows that no closed ball is compact. More details on the linked Wikipedia page.
2. Assuming the set $A \neq \emptyset$ is closed and not nowhere dense, it must contain an open ball $B_r(x_0) \subset A$ by definition of “nowhere dense”. Thus, if $X$ is infinite-dimensional, $A$ cannot be compact: otherwise the closed ball $\bar{B}_{r/2}(x_0)$ would have to be compact as a closed subset of $A$, contradicting 1.

- 2 I'm sure this came up in many other threads before, but it seemed easier to write an answer than to find a duplicate. – t.b. Aug 11 '12 at 11:21 Note also that completeness doesn't enter the argument. – t.b. Aug 11 '12 at 11:39
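The failure of compactness in point 1 is easiest to see in $\ell^2$, where the standard unit vectors are pairwise at distance $\sqrt 2$ — far more than the $1/2$ Riesz's Lemma guarantees in a general normed space. A small numpy sketch (my own illustration) checks this on finite truncations:

```python
import numpy as np

# In l^2 the standard basis vectors e_1, e_2, ... are unit vectors with
# ||e_n - e_m|| = sqrt(2) for n != m, so the sequence has no Cauchy
# (hence no convergent) subsequence: the closed unit ball is not compact.
N = 50
E = np.eye(N)  # row k is (a truncation of) the unit vector e_{k+1}

dists = [np.linalg.norm(E[n] - E[m]) for n in range(N) for m in range(n)]
assert all(abs(d - np.sqrt(2)) < 1e-12 for d in dists)
```

Of course no finite computation proves non-compactness; the point is just that the distances never shrink as $N$ grows, which is what kills every candidate convergent subsequence.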
http://mathoverflow.net/questions/118208?sort=votes
## Intuitive meaning of Double Commutant Theorem

Is there any intuitive explanation of the Double Commutant Theorem for Von Neumann Algebras? By intuitive I mean in terms of Quantum Mechanics. For example, duality of states and observables in the case of the Gelfand-Naimark Theorem. http://en.wikipedia.org/wiki/Von_Neumann_bicommutant_theorem -

Might be worth adding a link to the theorem or saying what it is about.... then you will get my +1 :) – Alexander Chervov Jan 6 at 16:33 @a.chernov done! – Koushik Jan 6 at 16:56 Thank you! +1:) – Alexander Chervov Jan 6 at 17:40 It seems that what you want (as supplied by Nik Weaver) is an intuitive description rather than an intuitive explanation. I mean, what is an intuitive explanation of something being WOT-closed? – Yemon Choi Jan 6 at 18:27

## 1 Answer

Okay, here's an explanation in terms of quantum mechanics. Let ${\cal A}$ be a family of observables, modeled as self-adjoint operators on some Hilbert space, and let ${\cal U}$ be the group of all unitary transformations that leave every observable in ${\cal A}$ invariant. You can consider ${\cal U}$ to be a kind of symmetry group. Mathematically it is the set of unitaries in the first commutant ${\cal A}'$ of ${\cal A}$, and the set of all observables left invariant by ${\cal U}$ is the double commutant of ${\cal A}$. So the double commutant theorem says that the set of all observables left invariant by every transformation that leaves every observable in ${\cal A}$ invariant, is the self-adjoint part of the von Neumann algebra generated by ${\cal A}$. -

1 This is a nice description, but I don't really see how it's an explanation... – Yemon Choi Jan 6 at 18:28 2 How about "an explanation of the intuitive meaning"? – Nik Weaver Jan 6 at 20:20
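In finite dimensions the bicommutant theorem reduces to linear algebra and can be checked numerically: the commutant $\{A\}'$ is the null space of $X \mapsto AX - XA$, and for a self-adjoint $A$ the bicommutant $\{A\}''$ is the algebra generated by $A$ and $I$. A numpy sketch (my own finite-dimensional illustration, not from the thread):

```python
import numpy as np

def commutant_basis(gens, n, tol=1e-9):
    """Basis of {X : AX = XA for every A in gens}, as n x n matrices.
    Row-major vec identity: (AX - XA).flatten() = (kron(A,I) - kron(I,A.T)) @ X.flatten()."""
    M = np.vstack([np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T) for A in gens])
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return [v.reshape(n, n) for v in vh[rank:]]  # null-space vectors

A = np.diag([1.0, 1.0, 2.0])       # a self-adjoint "observable" on a 3-dim space
comm = commutant_basis([A], 3)     # {A}': block matrices gl(2) + gl(1), dimension 5
bicomm = commutant_basis(comm, 3)  # {A}'': matrices diag(a, a, b), dimension 2

# The algebra generated by A and I is span{I, A} (A satisfies (A-1)(A-2) = 0),
# which matches the bicommutant's dimension 2: the bicommutant theorem in miniature.
```

Here the "symmetry group" of the answer is visible too: the unitaries commuting with $A$ are exactly those preserving the two eigenspaces, and the observables they all leave invariant are the ones constant on each eigenspace.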
http://mathoverflow.net/questions/12461/changing-coordinates-so-that-one-riemannian-metric-matches-another-up-to-second
Changing coordinates so that one Riemannian metric matches another, up to second derivatives

Let $g$ and $g'$ be two $C^2$-smooth Riemannian metrics defined on neighborhoods $U$ and $U'$ of $0$ in $\mathbb R^2$, respectively. Suppose furthermore that the scalar curvature at the origin is $K$ under both metrics. My question: Is there a coordinate transformation taking one metric to the other, such that they agree up to second derivatives at the origin? i.e., if $x : U \to U'$ is the transformation, we have $g_{ij}' = g_{ab} ~x_i^a x_j^b$, evaluating everything at $0$; there are similar equations for the first and second derivatives. Clearly this is false if the scalar curvatures aren't equal. I don't care what happens away from the origin. In the excellent thread When is a Riemannian metric equivalent to the flat metric on $\mathbb R^n$?, Greg Kuperberg says: If I remember correctly, there is a more general result due to somebody, that any two Riemannian manifolds are locally isometric if and only if their curvature tensors are locally the "same". If "local isometry" means that the metrics are equal on a neighborhood of the origin, then the metrics I have in mind are not locally isometric, since the only information I have is that their curvatures match at one point. Edit: I'm pretty sure that Deane answered my question, but let me clarify. Let $g_{ij}$ be some "reasonable" metric, e.g. a bump surface metric, and consider a point $p$ where the scalar curvature is $K$. Let $g_{ij}'$ be an arbitrary metric on a neighborhood $U$ of the origin in $\mathbb R^2,$ with scalar curvature $K$ at $0$. Then the question becomes: does there exist a coordinate change on the bump surface such that the equation $g_{ij}'(0) = g_{ab}(p) ~x_i^a x_j^b$ is satisfied, as well as the corresponding equations for the first and second derivatives?
That is, there are $18$ pieces of pertinent information $(*)~~~g_{11}', g_{12}', g_{22}'; g_{11,1}', g_{12,1}', g_{22,1}', g_{11,2}', g_{12,2}', g_{22,2}'; g_{11,11}', g_{12,11}', g_{22,11}', g_{11,12}', g_{12,12}', g_{22,12}', g_{11,22}', g_{12,22}', g_{22,22}'$. I want to change coordinates on my nice surface such that the metric and its derivatives line up with $(*)$. - When you say "taking one metric to the other" you do not mean that to happen on any open set, do you? – Mariano Suárez-Alvarez Jan 20 2010 at 23:57 Nope! I edited my question to clarify. – Tom LaGatta Jan 21 2010 at 0:16 3 Answers The answer is yes. Just use geodesic normal (also known as exponential) co-ordinates. If you have a book or two on Riemannian geometry, just look for that or a discussion of the exponential map. [ADDITIONAL COMMENT] For a 2-dimensional metric, it's a nice exercise to figure all of this out using Jacobi fields. In fact, in my opinion, the best way to work with and understand the exponential map (which is a natural parameterization of all radial geodesics emanating from a point) is via Jacobi fields and the Jacobi equation. For example, it leads to an easy proof of a standard result, namely that the coefficients of the Taylor series of the exponential map at the origin contain only the Riemann curvature tensor and its covariant derivatives. The answer to your question follows from this theorem. - I'm going to try to explain the following claim, which I imagine is a theorem of Riemann: if $g$ is a Riemannian metric on $V:=\mathbb{R}^n$, the diffeomorphism-invariant information contained in the 2-jet of $g$ at $0$ is precisely what the curvature tensor $Riem$ (at $0$) sees. The case $n=2$ is, as I understand it, just what you wanted to verify.
First, let's look at the $0$th order term $g(0)$, modulo 1-jets of diffeomorphisms fixing $0$. Linear algebra tells us that we can make a linear transformation so that $g(0)=I$ (moreover, we have an $O(V)$'s worth of choices). Now let's try to arrange that $g=I+O(|x|^2)$. We'll do this by acting by 2-jets of diffeomorphisms $\phi(x)_i=x_i+a_i^{jk}x_jx_k$ to kill the $O(|x|)$ term in $g$. Well, the first order term of $g$ is in $Sym^2(V^\ast)\otimes V$ (because you look at $\partial g_{ij}/\partial x_k$) while the second order term of $\phi$ is in $V\otimes Sym^2(V^\ast)$. The diffeos act changing the 1st order term (according to my calculation) by $2(a_j^{ik}+a_k^{ij})$. The formula $a_i^{jk}\mapsto 2(a_j^{ik}+a_k^{ij})$ defines a surjective linear map $V^\ast\otimes Sym^2(V)\to Sym^2(V)\otimes V^\ast$, and that does the job. The linear map is then also injective, so we had no choices in this step. The quadratic term of $g$ lies in $Sym^2(V^\ast)\otimes Sym^2(V)$, and we act by 3-jets of diffeos whose cubic term is in $V\otimes Sym^3(V^\ast)$ (because of the symmetry among the third-order partials). So the diffeomorphism-invariant part of the 2-jet of $g$ is the cokernel of a certain linear (symmetrization) map $$f\colon V \otimes Sym^3(V^\ast) \to Sym^2(V)\otimes Sym^2(V^\ast).$$ The class of $g$ in $coker(f)$ is essentially the curvature at $0$. Indeed, $f$ turns out to be injective, so $coker(f)$ has dimension $$\frac{1}{4}n^2(n+1)^2 - \frac{1}{6}n^2(n+1)(n+2) = \frac{1}{12}n^2(n^2-1):$$ precisely the dimension of the space of Riemann curvature tensors, as computed by looking at the standard symmetries. I've described this space as a representation of $O(V)$. I think this approach might be in Spivak's multi-volume book. I heard it from Donaldson, and I hope I haven't mangled it too badly. Edit: as Deane notes, the coordinates I've described are geodesic normal coordinates. 
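The dimension count at the end is elementary to verify: $\dim Sym^k(V) = \binom{n+k-1}{k}$ for $\dim V = n$, so both sides of the displayed identity can be checked directly. A small Python sketch of my own (for $n = 2$ the count is $1$, matching the single scalar curvature):

```python
from math import comb

def dim_sym(n, k):
    """Dimension of Sym^k of an n-dimensional space: C(n+k-1, k)."""
    return comb(n + k - 1, k)

for n in range(2, 8):
    source = n * dim_sym(n, 3)     # dim of V ⊗ Sym^3(V*)
    target = dim_sym(n, 2) ** 2    # dim of Sym^2(V) ⊗ Sym^2(V*)
    # cokernel dimension = dimension of the space of curvature tensors
    assert target - source == n * n * (n * n - 1) // 12

assert 2 * 2 * (2 * 2 - 1) // 12 == 1  # n = 2: curvature is a single number
```

For $n=2$ the map $f$ goes from an $8$-dimensional space to a $9$-dimensional one, leaving exactly one invariant — the scalar curvature of the question.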
I like the method I've sketched because it requires no technology; one doesn't even need to define a connection. The coordinate change (up to third order) is found by linear algebra, not by solving ODEs. - Spivak probably comments in passing something along the lines of "in the years of yore, when students were familiar with classical invariant theory, everybody found this obvious" or something like that :P – Mariano Suárez-Alvarez Jan 21 2010 at 2:00 Your argument is essentially correct (it might be totally correct but I haven't read every word). The act of changing from the original co-ordinates to geodesic normal co-ordinates performs the exact normalization that you are describing. Geometric arguments tell you exactly how to normalize. – Deane Yang Jan 21 2010 at 2:01 @Tim: Since the goal is just a normal form up to second order at one point, you don't need to solve the ODE completely. You can simply write down the equations and solve them formally up to second order. The geometry (i.e., the Jacobi equation) provides a nice guide on how to do what you describe in an explicit way. Still, the approach and perspective you describe above is also very important and useful in certain contexts (which presumably is why Donaldson mentioned it). – Deane Yang Jan 21 2010 at 15:21 As far as I understand, your question and its natural generalizations belong to a very developed theory (now it is a part of singularity theory, I think). This theory was started by Tresse: Tresse, A., Sur les Invariants Differentiels des Groupes Continus des Transformations, Acta Mathematica, 1894, vol.18, pp.1-88. See also the introduction and references of S. Dubrovskiy's paper "Moduli space of symmetric connections" http://arxiv.org/abs/math/0112291 - Thank you for the reference, Petya! – Tom LaGatta Mar 7 2010 at 21:40
http://math.stackexchange.com/questions/105668/why-are-modules-called-modules
Why are modules called modules? I know that a module is a generalization of a vector space, but I would like to know why modules are called modules. - 2 – Dylan Moreland Feb 4 '12 at 15:53 1 Answer The name "module" was introduced by Dedekind in his work on ideals and number fields. You can find both his paper (in translation) and his explanation in Stillwell's translation of Dedekind's third exposition of the theory of ideals, "Theory of Algebraic Integers", Cambridge University Press, Cambridge, 1996. It's a very nice read, and it is (perhaps) surprisingly modern. Except for the fact that it takes what today would be considered an analytical detour, you could use this as a textbook in a class in algebraic number theory with almost no change in nomenclature, notation, or arguments. Essentially, when working in number fields (finite extensions of $\mathbb{Q}$), and more specifically, in rings of integers of number fields (the collection of all elements in a finite extension $K$ of $\mathbb{Q}$ that satisfy a monic polynomial with integer coefficients), Dedekind isolated the necessary properties to be able to make "modular arguments": closure under differences and absorption of multiplication. The idea was to reify Kummer's notion of "ideal number". Instead of inventing a new, not-really-existing-number to rescue unique factorization, you consider the collection of all elements that "would be" multiples of that ideal number. (It's the same idea he used to define real numbers as Dedekind cuts; instead of defining the numbers into existence, you identify a real number with the set of all rationals that "would be" less than or equal to the real number.) Dedekind notes that the "collection of all multiples of $\alpha$" satisfies the conditions of being nonempty, closed under differences, and absorption of multiplication, and so this allows you to use "modular arguments" (as in, $a\equiv b\pmod{\alpha}$).
So he called them modules, because you could "mod out" by them and do modular arithmetic. -
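Dedekind's two closure properties are easy to see in a toy case: inside the Gaussian integers $\mathbb{Z}[i]$, the multiples of $\alpha = 1+2i$ are closed under differences and absorb multiplication by arbitrary Gaussian integers. A quick Python check (my own illustration, with $\alpha = 1+2i$ chosen arbitrarily):

```python
# Gaussian integers represented as Python complex numbers with integer parts.
alpha = 1 + 2j

def is_multiple(z, a=alpha):
    """Is z = a*w for some Gaussian integer w?  Divide and test integrality."""
    w = z / a
    return abs(w.real - round(w.real)) < 1e-9 and abs(w.imag - round(w.imag)) < 1e-9

m1, m2 = alpha * (3 - 1j), alpha * (-2 + 5j)  # two multiples of alpha
g = 7 + 4j                                    # an arbitrary Gaussian integer

assert is_multiple(m1 - m2)      # closed under differences
assert is_multiple(g * m1)       # absorbs multiplication (the "ideal" property)
assert not is_multiple(1 + 0j)   # 1 is not a multiple, so congruence mod alpha is nontrivial
```

These two properties are exactly what make $a \equiv b \pmod{\alpha}$ (meaning $a - b$ is a multiple of $\alpha$) behave like ordinary modular arithmetic.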
http://math.stackexchange.com/questions/279569/getting-a-feel-for-the-transformation-a-on-vector-x-which-lies-outside-of-any-ei?answertab=active
# Getting a feel for the transformation A on vector x which lies outside of any eigenspace

In one of his videos, after 13:25 Sal starts to talk about the interpretation of the eigenvectors and how they relate to a vector $x$ being transformed by the matrix $A$. He then goes through showing what happens when $x$ would be in one of the eigenspaces. My questions are:

1. what is the interpretation of the transformation being applied to a vector outside of any eigenspace, like any regular 3-dimensional $x$?
2. How could I "visualize" what happens to such a vector?
3. Is it related somehow to the characteristic polynomial? What is the meaning of the characteristic polynomial, besides giving us the eigenvalues at the intersections with the $x$-axis? -

## 2 Answers

So long as your matrix is real and the dimension is 2 or 3, some software aid is available, and in 2d you can do much of it by hand. You start with a unit circle in the $xy$ plane and ask what your matrix does to each point $(x,y)$ on the circle. Your circle typically maps into an ellipse. Apps such as this do the work for you. -

I didn't have a correct definition of eigenspaces. I'm not an expert on the last part, but I'll attempt to explain anyway.

1. If $A$ has a limiting matrix $A^\infty$ (powers can't flip back and forth, for example), then for every (column) vector $v$, $A^\infty v$ is an eigenvector of $A$.
2. So the vectors that are not in any eigenspace tend towards the eigenspaces asymptotically.
3. "Meaning is not in things, but in between them." - Norman O. Brown. Matrices and polynomials both form rings. Rings are often studied geometrically by analogy with the polynomial ring, and how the 0-sets of polynomials behave best as curves, like $x^2+y^2-1=0$.
Sending every matrix to its characteristic polynomial is a morphism between curves, called a regular map (as is sending it to that polynomial's discriminant, see this question), and the fact that it vanishes at every point of the n-by-n matrix algebra treated as the affine plane of dimension $n^2$ is proof of the Cayley-Hamilton theorem from a different perspective. -
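For question 1, a generic vector is just a mixture of eigenvectors: writing $x = c_1 v_1 + c_2 v_2$, the matrix $A$ scales each piece by its own eigenvalue, and repeated application tilts $x$ toward the dominant eigenspace — the "tend towards the eigenspaces" behaviour of point 2 above. A numpy sketch (my own illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])         # symmetric, eigenvalues 1 and 3
vals, vecs = np.linalg.eigh(A)     # columns of vecs are orthonormal eigenvectors

x = np.array([1.0, 0.0])           # lies in neither eigenspace
c = vecs.T @ x                     # coordinates of x in the eigenbasis

# A acts by scaling each eigen-coordinate separately: Ax = sum_i (lambda_i c_i) v_i
assert np.allclose(A @ x, vecs @ (vals * c))

# Iterating A inflates the lambda = 3 component fastest, so A^k x / ||A^k x||
# lines up (up to sign) with the dominant eigenvector -- power iteration:
y = x.copy()
for _ in range(50):
    y = A @ y
    y /= np.linalg.norm(y)
dominant = vecs[:, np.argmax(vals)]
assert min(np.linalg.norm(y - dominant), np.linalg.norm(y + dominant)) < 1e-10
```

The same decomposition explains the circle-to-ellipse picture of the first answer: the ellipse's axes are the eigenvector directions, stretched by the eigenvalues.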
http://mathoverflow.net/questions/78994?sort=oldest
## Where do the real analytic Eisenstein series live?

In obtaining the spectral decomposition of $L^2(\Gamma \backslash G)$ where $G=SL_2(\mathbb{R})$, and $\Gamma$ is an arithmetic subgroup (I am satisfied with $\Gamma = SL(2,\mathbb{Z})$) we have a basis of eigenfunctions of the hyperbolic Laplacian, and orthogonal to that we have the space spanned by the incomplete Eisenstein series, $$E(z,\psi) = \sum_{\Gamma_\infty \backslash \Gamma} \psi (\Im(\gamma z)) = \frac{1}{2\pi i}\int_{(\sigma)} E(z,s)\tilde{\psi}(s)\mathrm{d}s$$ where $\psi \in C_c^\infty(\mathbb{R}^+)$, $\tilde{\psi}$ is its Mellin transform, and $E(z,s) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} \Im(\gamma(z))^s$ is the usual Eisenstein series. My question is, where does $E(z,s)$ itself live with respect to the vector space $V = L^2(\Gamma \backslash G)$ which can be considered as the vector space of the right regular representation of $G$, and what is this parameter $s$? A similar question of course goes for $\mathbb{R}$, where does $e^{2\pi i x}$ live with respect to $(L^2(\mathbb{R}), \rho)$? I would appreciate a representation theoretic flavored answer, that is why I mentioned representations, but any other answer would also be an addition to my understanding of this. In general, is there an associated space to $(V,\pi)$, an automorphic representation, such that the elements of the vector space are of moderate or rapid growth, instead of decay. - I know very little about this kind of representation theory, but it seems that the concept of "rigged Hilbert spaces" is relevant here. See here: en.wikipedia.org/wiki/Rigged_Hilbert_space – Mark Schwarzmann Apr 9 2012 at 13:27 1 At their parents' place?
– Emilio Pisanty Apr 9 2012 at 16:22 ## 4 Answers Surely there is not a single good answer, since the question is about how to legitimize "generalized eigenvectors", and there is no single-best notion of "legitimize". As in other answers, one interpretation of Eisenstein series is as being in the dual to "rapidly decreasing" functions. This has various weaknesses. "Continuous, moderate growth" is a better space for many purposes, but, note, it does not contain $L^2$ (!), but does contain suitable positively-indexed Sobolev spaces [sic]. There are interesting difficulties in understanding what "moderate growth" (of a given exponent) might mean, if/when one wants these spaces to be representation spaces for $G$ on $\Gamma\backslash G$. The most naive-and-appealing definitions of topological vector space structures are not $G$-stable, for elementary reasons, but sensible adaptations are easy, when one allows (by now 60-year-old) topological vector space notions. In a different direction, note that the Plancherel theorem for afms does not depend upon knowing a space in which $E_{1/2+it}$ lies, any more than the usual Plancherel for Fourier transform on $\mathbb R$ depends on knowing "where $e^{i\xi x}$ lies". - Thank you for your answer. I did not understand what you meant by "the most naive and appealing definitions of topological vector space structures". The way I know to incorporate that some function on G is moderately growing is that we take its NAK decomposition, our functions will be periodic and bounded with respect to N, and K is compact anyway, and we have some ordering on A to talk about growth. And given a representation and a vector (and another vector in the dual space) we may talk about the matrix coefficient to pass to a function on G. – Eren Mehmet Kiral Oct 29 2011 at 20:03 @Eren Mehmet Kiral, yes, as you say, there are several devices to topologize functions on $G$ (or on Siegel sets, or...) which have moderate growth. 
In my experience, one often must be alert to an implicit question, of whether one wants/needs the resulting space to be a (continuous!) representation space for $G$. Sometimes not, but sometimes yes. In brief, it is easy to tell needless falsehoods. This is already clear in the action of $G=\mathbb R$ on itself: the space of continuous, bounded functions on $\mathbb R$ is not a repn space, but continuous and going to 0 at $\infty$ is. Thus, ... – paul garrett Oct 31 2011 at 13:32 ... ascending unions (filtered colimits) or nested intersections (filtered limits) of spaces of various exponents of moderate growth (on $\mathbb R$ or on $SL(2,\mathbb R)$) can readily be demonstrated to be (continuous!) repn spaces, while the "limitands" require slightly more delicate treatment. Often that delicacy is irrelevant to the problem at hand, of course. Also, notions of moderate growth or rapid decay for afms really best should refer to Siegel sets. E.g., as $z=x+iy$ approaches the real axis in a fashion cutting across infinitely-many distinct fundamental regions, ... – paul garrett Oct 31 2011 at 13:38 ... waveforms' sups of absolute values do not go to zero: "nothing is of rapid decay"??? But this misunderstanding is needless, too. – paul garrett Oct 31 2011 at 13:39 This is kind of a complicated question, since there isn't really a single good answer. We begin with a simple Lie group $G$ (for simplicity!). On the one hand, we hopefully have a description of the unitary representations of $G$. On the other hand, we may want to understand how spaces such as $L^2(H\backslash G)$, where $H$ may be trivial or discrete or maximal compact, etc., decompose into unitary representations of $G$ (that it will decompose is known on general (highly nontrivial) principles). At least two issues arise.
First, what does it mean for a representation to "appear" in the decomposition of $L^2(H\backslash G)$? We'd like it to mean that there exists an $f\in L^2(H\backslash G)$ such that $f$ generates the representation. This can't possibly work in general, and it already fails for $L^2(\mathbb R)$. Basically, whenever $H\backslash G$ is not compact, there will be a "continuous" part to the decomposition made up of unitary representations that can't be found as subrepresentations of $L^2(H\backslash G)$. Personally, a priori, it is surprising to me that you can integrate a bunch of stuff not in $L^2$ and wind up with something in $L^2$. But then, I think about Fourier inversion and Paley-Wiener theorems, and it's not so surprising. (In fact, if you believe your future will contain a nontrivial amount of harmonic analysis, you should try to become well-acquainted with Fourier theory.) Now, there are functions on $H\backslash G$ that generate these representations and they usually aren't very far from being in $L^2(H\backslash G)$ (like $e^{ix}$ on $\mathbb R$ and Eisenstein series on $\Gamma\backslash\mathfrak H$), but there really is no way to force them in there. A person might wonder why a benevolent God would allow this to happen, but that is outside of my expertise. Second, which representations will appear in $L^2(H\backslash G)$? For example, the trivial representation appears in $L^2(H\backslash G)$ if and only if $H\backslash G$ has finite volume. And complementary series representations don't seem to appear at all (usually)! (This is Selberg's Conjecture.) On a hopefully more helpful note, with certain definitions of a Schwartz space on $H\backslash G$, you can realize these functions as tempered distributions (meaning continuous linear functionals on the Schwartz space). 
In fact, the space of functions with uniform moderate growth on $\Gamma\backslash \mathfrak H$ contains Eisenstein series and is contained in the dual of the Schwartz space for $\Gamma\backslash\mathfrak H$. See some of Casselman's work, here and here. In a different direction, there is Schmid and Miller's work on automorphic distributions, e.g. here. - The dual of the Schwartz space is good. But what about functions that grow exponentially, such as the I-Bessel function. Do we need a space of functions which decay even faster, and look at their dual space to fit exponentially growing functions in. – Eren Mehmet Kiral Oct 29 2011 at 20:23 You could try something like that, though perhaps it would be simpler to consider the dual space to smooth functions with compact support. I'm not sure you will gain much from this perspective. – BR Oct 31 2011 at 20:03 For any $s\in\mathbb{C}$, the Eisenstein series $E(z,s)$ does not live in $L^2(\Gamma\backslash\mathcal{H})$, where $\mathcal{H}$ is the upper half-plane. It is "smallest" when $\Re(s)=1/2$ in which case it almost lives there, namely it is in $L^{2-\epsilon}(\Gamma\backslash\mathcal{H})$. In this case we can also regard $E(z,s)$ as a function $E(g,s)$ in $L^{2-\epsilon}(\Gamma\backslash G)$ which is right invariant by $K=SO_2(\mathbb{R})$. Applying the Maass raising and lowering operators on $E(g,s)$ results in functions in $L^{2-\epsilon}(\Gamma\backslash G)$ which transform by a character of $K$ under the right $K$-action. The vector space generated by these Maass shifts is the automorphic representation associated with $E(g,s)$: it is an infinite dimensional subspace $V\subset L^{2-\epsilon}(\Gamma\backslash G)$ consisting of $K$-finite functions. It is also useful to consider the closure of $V$ in $L^{2-\epsilon}(\Gamma\backslash G)$ or natural spaces in between these two extremes: e.g. the set of functions whose partial derivatives all lie in $L^{2-\epsilon}(\Gamma\backslash G)$. - Thank you for your answer. 
Why would we take the smallest possible and take $\Re(s) = 1/2$ in order to form the representation you have described? It seems like any $s$ in the critical strip should produce a representation $V$ lying in some $L^p(\Gamma\backslash G)$ for $1<p<2$. Why do only the $\Re(s) = 1/2$ Eisenstein series show up in the decomposition of $L^2(\Gamma \backslash G)$ into unitary representations? Or maybe the other Eisenstein series show up in the decomposition of non-$L^2$ spaces? – Eren Mehmet Kiral Oct 29 2011 at 20:41 I don't think $L^p(\Gamma\backslash G)$ for $1<p<2$ has a decomposition similar to the one for $p=2$. The proof for the case $p=2$ makes great use of the extra structure available: $L^2(\Gamma\backslash G)$ is a Hilbert space and the Laplace-Beltrami operator is self-adjoint on this space. The decomposition we are talking about is really the spectral decomposition of a self-adjoint operator on a Hilbert space. This is by no means an easy result, I could not give a quick reason why it is true. (To be continued.) – GH Oct 29 2011 at 21:47 (Continued.) At any rate, self-adjointness and positivity imply (or at least indicate) that only Laplacian eigenvalues $\lambda>0$ are present. For $E(z,s)$ we have $\lambda=s(1-s)$ which forces $\Re(s)=1/2$ or $0<s<1$. That the values $0<s<1$ are missing is not so obvious; it is part of the proof. In fact we conjecture that in the full decomposition we don't see Laplacian eigenvalues $0<\lambda<1/4$ at all (this is the Selberg conjecture), but we don't have a proof yet. – GH Oct 29 2011 at 21:53 I can give an additional point of view, coming from the theory of parabolic induction. Parabolic induction plays a prominent role in representation theory and gives you a better intuition for the higher rank situation. This point of view is often better stressed in the adelic theory than in the classical picture. Let $G =PSL_2(\mathbb{R})$ with standard parabolic $B$ and $\Gamma$ a cofinite lattice.
My intuition is that the analytic Eisenstein series on $\Gamma \backslash G$ are vectors of the induced representation $$Ind_{\Gamma N}^G 1,$$ but there is one major issue with this, namely that $\Gamma N$ is not a group. Rigorously, they are given as $P$-series, i.e. $$E: C_c^\infty(N \backslash G) \rightarrow L^2(\Gamma \backslash G),$$ defined by the $B$-series $$E(f) (g)= \sum\limits_{\gamma \in B \cap \Gamma \backslash \Gamma} f(\gamma g).$$ The image of $E$ generates a dense subspace of the orthogonal complement of the cuspidal forms. Now, one notices that $C_c^\infty(N \backslash G)$ is a dense subspace of $Ind_N^G 1$. Induction in stages gives a decomposition $$Ind_N^G 1 \cong Ind_B^G Ind_N^B 1.$$ Now $Ind_N^B 1 \cong L^2(B/N) = L^2(M)$, where $M$ is the group of diagonal matrices. Pontryagin duality gives you a direct integral decomposition of $L^2(M)$, and as a result we have $$Ind_N^G 1 \cong \int\limits_{\Re s = 0}^{\oplus} Ind_B^G | \cdotp |^s.$$ Certainly one hopes that $E$ extends to $Ind_B^G | \cdotp |^s$, but convergence only happens for $\Re s >1/2$, and the operator has to be defined by analytic continuation to make sense on $\Re s = 0$. Perhaps it is useful to give at least one definition here: functions $f \in Ind_B^G | \cdotp |^s$ satisfy $f(bg) = |b_{1,1} / b_{2,2}|^{s+1/2} f(g)$ for $b \in B$, with $f|_K\in L^2(K)$ for $K= PSO(2)$. -
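For orientation, recall the classical picture behind both answers (for a cofinite $\Gamma$ with a cusp at $\infty$, e.g. $\Gamma = PSL_2(\mathbb{Z})$): the real-analytic Eisenstein series $$E(z,s)=\sum_{\gamma\in\Gamma_\infty\backslash\Gamma}\operatorname{Im}(\gamma z)^{s},\qquad \Re(s)>1,$$ satisfies $\Delta E(\cdot,s)=s(1-s)E(\cdot,s)$ for the hyperbolic Laplacian, which is the eigenvalue $\lambda=s(1-s)$ quoted in the comments, and it continues meromorphically in $s$; this continuation is what makes sense of the map on the line $\Re s = 0$ in the induced-representation normalization above.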
http://unapologetic.wordpress.com/2008/07/18/unsolvable-inhomogenous-systems/
The Unapologetic Mathematician

Unsolvable Inhomogeneous Systems

We know that when an inhomogeneous system has a solution, it has a whole family of them. Given a particular solution, it defines a coset of the subspace of the solutions to the associated homogeneous system. And that subspace is the kernel of a certain linear map. But must there always be a particular solution to begin with? Clearly not. When we first talked about linear systems we mentioned the example

$3x^1+4x^2=12$

$x^1-x^2=11$

$x^1+x^2=10$

In our matrix notation, this reads

$\displaystyle\begin{pmatrix}3&4\\1&-1\\1&1\end{pmatrix}\begin{pmatrix}x^1\\x^2\end{pmatrix}=\begin{pmatrix}12\\11\\10\end{pmatrix}$

or $Ax=b$ in purely algebraic notation. We saw then that this system has no solutions at all. What’s the problem? Well, we’ve got a linear map $A:\mathbb{F}^2\rightarrow\mathbb{F}^3$. The rank-nullity theorem tells us that the dimension of the image (the rank) plus the dimension of the kernel (the nullity) must equal the dimension of the source. But here this dimension is $2$, and so the rank can be at most $2$, which means there must be some vectors $b\in\mathbb{F}^3$ which can’t be written as $b=Ax$ no matter what vector $x$ we pick. And the vector in the example is just such a vector outside the image of $A$. The upshot is that we can only solve the system $Ax=b$ if the vector $b$ lies in the image of the linear map $A$, and it might be less than obvious which vectors satisfy this requirement. Notice that this is more complicated than the old situation for single equations of single variables. In that case, the target only has one dimension, and the linear transformation “multiply by the number $A$” only misses this dimension if $A=0$, which is easy to recognize.

Posted by John Armstrong | Algebra, Linear Algebra
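The rank argument above can be checked numerically. Here is a small sketch using NumPy (not part of the original post): the system $Ax=b$ is solvable exactly when $b$ lies in the image of $A$, i.e. when appending $b$ as an extra column does not increase the rank.

```python
import numpy as np

# The system from the post: A maps F^2 -> F^3
A = np.array([[3.0, 4.0],
              [1.0, -1.0],
              [1.0, 1.0]])
b = np.array([12.0, 11.0, 10.0])

rank_A = np.linalg.matrix_rank(A)                         # at most 2 by rank-nullity
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # rank of [A | b]

# b lies in the image of A exactly when the two ranks agree
solvable = rank_A == rank_Ab
print(rank_A, rank_Ab, solvable)  # 2 3 False
```

Here the augmented rank jumps to 3, confirming that $b$ sits outside the two-dimensional image of $A$, so the system has no solution.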
http://math.stackexchange.com/questions/249181/is-kappa-lambda-2-lambda2-le-kappa-lambda-lambda-infinite-valid-i?answertab=active
# Is $\kappa^\lambda=2^\lambda$ ($2 \le \kappa<\lambda$, $\lambda$ infinite) valid in set models of ZF? Let $2 \le \kappa<\lambda$ (both cardinal numbers), where $\lambda$ is infinite. Then the following formulas hold in ZFC: 1. $\lambda+\kappa=\lambda$ 2. $\lambda\cdot\kappa=\lambda$ 3. $\kappa^\lambda=2^\lambda$ However, if $\lambda$ is not Dedekind-infinite, then 1 and 2 fail. But for 3, it's not quite clear. Attempting to prove it: obviously $2^\lambda\le\kappa^\lambda$; for the other direction, I only got $\kappa^\lambda\le 2^{\kappa \cdot \lambda}$, but $2^{\kappa \cdot \lambda}=2^{\lambda}$ seems not to be valid. So my question: is 3 also valid in all set models of ZF? - Note that in ZFC $\lambda^\kappa\neq\lambda$ for many pairs of $\kappa<\lambda$. The third, if so, does not need AC to fail in order to fail. It just fails a lot. One example would be $\kappa=\aleph_0$ and $\lambda=\aleph_\omega$. Another would be the failure of CH and taking $\aleph_1^{\aleph_0}$. – Asaf Karagila Dec 2 '12 at 14:04 @AsafKaragila Okay, I'm going to correct it. – Popopo Dec 2 '12 at 14:13 ## 1 Answer Assuming that $\lambda\cdot\kappa=\lambda$, yes -- the proof then goes through immediately. In particular, when the two are ordinals it is true. However for general cardinals this may be false. For example, if $\lambda$ is the cardinal of an amorphous set then $2^\lambda$ is Dedekind-finite. It follows that $3^\lambda$ is strictly larger than $2^\lambda$, since otherwise we could have omitted some of the functions and retained the same cardinality, which would imply that $2^\lambda$ is Dedekind-infinite. - If a set is D-finite, is its powerset also D-finite? – Popopo Dec 2 '12 at 14:43 Not always.
But if a set is amorphous then its power set is D-finite. – Asaf Karagila Dec 2 '12 at 14:50 I don't have enough inspiration... Could you please give me a hint? – Popopo Dec 2 '12 at 15:41 @Popopo: There are several ways. The simplest in this case is the direct method. Show that if there were a countable collection of subsets then you may assume they are all finite (note that the finite and infinite subsets of an amorphous set stand in bijection), and therefore the union is infinite. Without loss of generality you can therefore assume that the union is the entire amorphous set; fix an enumeration of these sets, $X_n$. Let $A_e$ be the elements which appear for the first time in an even-index set, and $A_o$ those which first appear in an odd-index set. Show that these are disjoint infinite sets. – Asaf Karagila Dec 2 '12 at 15:46 Why are $A_e$ and $A_o$ disjoint? – Popopo Dec 2 '12 at 15:55
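For reference, the standard computation behind item 3, assuming $\kappa\cdot\lambda=\lambda$ (which is where choice enters): $$2^{\lambda}\;\le\;\kappa^{\lambda}\;\le\;\left(2^{\kappa}\right)^{\lambda}\;=\;2^{\kappa\cdot\lambda}\;=\;2^{\lambda},$$ so Cantor–Schröder–Bernstein (a theorem of ZF) gives $\kappa^{\lambda}=2^{\lambda}$. This is why the answer's hypothesis $\lambda\cdot\kappa=\lambda$ makes the proof go through immediately.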
http://mathoverflow.net/questions/102964/convergence-of-moments-implies-convergence-to-normal-distribution
## Convergence of moments implies convergence to normal distribution I have a sequence $\{X_n\}$ of random variables supported on the real line, as well as a normally distributed random variable $X$ (whose mean and variance are known but irrelevant). I know that the moments of the $X_n$ converge to the corresponding moments of $X$, that is, for every $k\ge1$, $$\lim_{n\to\infty} \mu_k'(X_n) = \mu_k(X).$$ I need to conclude that the $X_n$ converge to $X$ in distribution. I believe that this is a standard fact in probability, and I would like an excellent source (including a clear statement and proof) for this fact, to cite in a paper I'm writing. (The application is to number theory, which is why I added the probabilistic-number-theory tag.) I also believe that this conclusion holds for many, but not all, random variables $X$ and not just a normally distributed one; I'd be happy for a general statement or one that applies only to a normal variable. Nominations for a good citing source, anyone? - 2 en.wikipedia.org/wiki/…) ? – Qiaochu Yuan Jul 24 at 0:05 1 @Qiaouchu's link has a typo. Here is the right link: en.wikipedia.org/wiki/…) – Igor Rivin Jul 24 at 14:31 @Igor's link has the same typo. Third time's a [charm](en.wikipedia.org/wiki/…)? – Erick Wong Jul 31 at 4:09 ## 2 Answers It is theorem 30.2 in Billingsley's Probability and Measure (I own a second Polish edition, so numbering may differ a little). It's quite easy to prove once you establish Prokhorov's theorem: namely, use boundedness of some moments to conclude that your sequence of distributions is tight, and then it suffices to show that every convergent subsequence of $(X_n)$ converges to $X$ (because convergence in distribution is metrizable), which is easy, because the limit is characterized by its moments.
Before that, one needs a lemma stating that convergence in distribution, combined with suitable boundedness of moments, implies that the moments converge to the moments of the limit. - The English editions only have 24 chapters, so I'm not sure what result this is. It seems to me that the difficult part is showing that the limit is characterised by its moments (which is false for certain distributions). – Ian Morris Jul 26 at 10:53 @Ian: The English edition I own (third edition, 1995) has 38 sections grouped into 7 chapters. The theorem Mateusz refers to is also Theorem 30.2 on p. 390 of my copy. – Mark Meckes Jul 26 at 13:37 1 @Ian: it is an assumption of this theorem that the distribution of the limit is characterized by its moments. Without this, it is clearly false, because we can take two random variables $X$ and $Y$ which have different distributions and equal moments, and then just take $X_{n} \equiv Y$. – Mateusz Wasilewski Jul 26 at 14:57 Oops, I was looking at the wrong book by Billingsley! The result required to show that the normal distribution is characterised by its moments is also in the book Mateusz suggests, as Theorem 30.1. – Ian Morris Jul 26 at 15:04 This is a famous problem known as the Hamburger moment problem. It is possible, though, to get the same result for the normal distribution with a much smaller number of assumptions than requiring convergence for all moments with $k\geq1$. -
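As a side note, a standard sufficient condition for a distribution to be determined by its moments, covering the normal case needed here, is Carleman's condition on the even moments: $$\sum_{k=1}^{\infty}\mu_{2k}^{-1/(2k)}=\infty.$$ For the centered normal distribution $\mu_{2k}=\sigma^{2k}(2k-1)!!$, so $\mu_{2k}^{1/(2k)}$ grows only like $\sqrt{k}$ and the series diverges; hence the normal law is moment-determinate and the moment method applies.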
http://stats.stackexchange.com/questions/5690/how-to-compare-median-survival-between-groups
# How to compare median survival between groups? I'm looking into median survival using Kaplan-Meier in different states for a type of cancer. There are quite big differences between the states. How can I compare the median survival between all the states and determine which ones are significantly different from the mean median survival all across the country? - Could you please give some indication about sample sizes, time frame, % survival, etc. so that we get a better idea of the design of your study? – chl♦ Dec 23 '10 at 21:40 are there censored values in the data - other than for the largest values? – ronaf Dec 25 '10 at 4:31 There are indeed censored values in the data and the total population is approx 1500, median overall survival is 18 months (range 300-600 days)... the time frame is the period 2000-2007. – Misha Dec 26 '10 at 17:10 ## 3 Answers One thing to keep in mind with the Kaplan-Meier survival curve is that it is basically descriptive and not inferential. It is just a function of the data, with an incredibly flexible model that lies behind it. This is a strength because it means there are virtually no assumptions that might be broken, but a weakness because it is hard to generalise from it, and it fits "noise" as well as "signal". If you want to make an inference, then you basically have to introduce something that is unknown that you wish to know. Now one way to compare the median survival times is to make the following assumptions: 1. I have an estimate of the median survival time $t_{i}$ for each of the $i$ states, given by the Kaplan-Meier curve. 2. I expect the true median survival time, $T_{i}$, to be equal to this estimate. $E(T_{i}|t_{i})=t_{i}$ 3. I am 100% certain that the true median survival time is positive.
$Pr(T_{i}>0)=1$ Now the "most conservative" way to use these assumptions is the principle of maximum entropy, so you get: $$p(T_{i}|t_{i})= K exp(-\lambda T_{i})$$ Where $K$ and $\lambda$ are chosen such that the PDF is normalised, and the expected value is $t_{i}$. Now we have: $$1=\int_{0}^{\infty}p(T_{i}|t_{i})dT_{i} =K \int_{0}^{\infty}exp(-\lambda T_{i})dT_{i}$$ $$=K \left[-\frac{exp(-\lambda T_{i})}{\lambda}\right]_{T_{i}=0}^{T_{i}=\infty}=\frac{K}{\lambda}\implies K=\lambda$$ and now we have $E(T_{i})=\frac{1}{\lambda}\implies \lambda=t_{i}^{-1}$ And so you have a set of probability distributions for each state. $$p(T_{i}|t_{i})= \frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)\;\;\;\;\;(i=1,\dots,N)$$ Which give a joint probability distribution of: $$p(T_{1},T_{2},\dots,T_{N}|t_{1},t_{2},\dots,t_{N})= \prod_{i=1}^{N}\frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)$$ Now it sounds like you want to test the hypothesis $H_{0}:T_{1}=T_{2}=\dots=T_{N}=\overline{t}$, where $\overline{t}=\frac{1}{N}\sum_{i=1}^{N}t_{i}$ is the mean median survival time. The severe alternative hypothesis to test against is the "every state is a unique and beautiful snowflake" hypothesis $H_{A}:T_{1}=t_{1},\dots,T_{N}=t_{N}$, because this is the most likely alternative, and thus represents the information lost in moving to the simpler hypothesis (a "minimax" test).
The measure of the evidence against the simpler hypothesis is given by the odds ratio: $$O(H_{A}|H_{0})=\frac{p(T_{1}=t_{1},T_{2}=t_{2},\dots,T_{N}=t_{N}|t_{1},t_{2},\dots,t_{N})}{ p(T_{1}=\overline{t},T_{2}=\overline{t},\dots,T_{N}=\overline{t}|t_{1},t_{2},\dots,t_{N})}$$ $$=\frac{ \left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{t_{i}}{t_{i}}\right) }{ \left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{\overline{t}}{t_{i}}\right) } =exp\left(N\left[\frac{\overline{t}}{t_{harm}}-1\right]\right)$$ Where $$t_{harm}=\left[\frac{1}{N}\sum_{i=1}^{N}t_{i}^{-1}\right]^{-1}\leq \overline{t}$$ is the harmonic mean. Note that the odds will always favour the perfect fit, but not by much if the median survival times are reasonably close. Further, this gives you a direct way to state the evidence from this particular hypothesis test: assumptions 1-3 give maximum odds of $O(H_{A}|H_{0}):1$ against equal median survival times across all states. Combine this with a decision rule, loss function, utility function, etc. which says how advantageous it is to accept the simpler hypothesis, and you've got your conclusion! There is no limit to the number of hypotheses you can test, and you can give similar odds for each. Just change $H_{0}$ to specify a different set of possible "true values". You could do "significance testing" by choosing the hypothesis: $$H_{S,i}:T_{i}=t_{i},T_{j}=T=\overline{t}_{(i)}=\frac{1}{N-1}\sum_{j\neq i}t_{j}$$ So this hypothesis says, verbally, "state $i$ has a different median survival rate, but all other states are the same". And then re-do the odds ratio calculation I did above. You should be careful about what the alternative hypothesis is, though. Any one of the choices below is "reasonable", in the sense that they might be questions you are interested in answering (and they will generally have different answers): • my $H_{A}$ defined above - how much worse is $H_{S,i}$ compared to the perfect fit?
• my $H_{0}$ defined above - how much better is $H_{S,i}$ compared to the average fit? • a different $H_{S,k}$ - how much is state $k$ "more different" compared to state $i$? Now one thing which has been overlooked here is correlations between states - this structure assumes that knowing the median survival rate in one state tells you nothing about the median survival rate in another state. While this may seem "bad", it is not too difficult to improve on, and the above calculations are good initial results which are easy to calculate. Adding connections between states will change the probability models, and you will effectively see some "pooling" of the median survival times. One way to incorporate correlations into the analysis is to separate the true survival times into two components, a "common part" or "trend" and an "individual part": $$T_{i}=T+U_{i}$$ And then constrain the individual part $U_{i}$ to have average zero over all units and unknown variance $\sigma$, to be integrated out using a prior describing what knowledge you have of the individual variability prior to observing the data (or a Jeffreys prior if you know nothing, and a half-Cauchy if Jeffreys causes problems). - (+1) Very interesting. Your post also made me insert a comment in my answer. – GaBorgulya Apr 2 '11 at 18:08 Perhaps I have missed it, but where is $M_1$ defined? – cardinal Apr 2 '11 at 21:06 @cardinal, my apologies - it's a typo and will be removed – probabilityislogic Apr 2 '11 at 23:44 no apology necessary. Just wasn't sure if I had skipped over it while reading or was simply missing something obvious. – cardinal Apr 3 '11 at 0:43 First I would visualize the data: calculate confidence intervals and standard errors for the median survivals in each state and show CIs on a forest plot, medians and their SEs using a funnel plot.
The “mean median survival all across the country” is a quantity that is estimated from the data and thus has uncertainty, so you cannot take it as a sharp reference value during significance testing. Another difficulty with the mean-of-all approach is that when you compare a state median to it you are comparing the median to a quantity that already includes that median as a component. So it is easier to compare each state to all other states combined. This can be done by performing a log rank test (or one of its alternatives) for each state. (Edit after reading the answer of probabilityislogic: the log rank test does compare survival in two (or more) groups, but it is not strictly the median that it is comparing. If you are sure it is the median that you want to compare, you may rely on his equations or use resampling here, too.) You labelled your question [multiple comparisons], so I assume you also want to adjust (increase) your p values in such a way that if you see at least one adjusted p value less than 5% you could conclude that “median survival across states is not equal” at the 5% significance level. You may use generic and overly conservative methods like Bonferroni, but the optimal correction scheme will take the correlations of the p values into consideration. I assume that you don't want to build any a priori knowledge into the correction scheme, so I will discuss a scheme where the adjustment is multiplying each p value by the same constant C. As I don't know how to derive the formula for the optimal C multiplier, I would use resampling. Under the null hypothesis the survival characteristics are the same across all states, so you can permute the state labels of the cancer cases and recalculate medians. After obtaining many resampled vectors of state p values I would numerically find the C multiplier below which less than 95% of the vectors include no significant p values and above which more than 95% do.
If the range looks wide, I would repeatedly increase the number of resamples by an order of magnitude. - Good advice about visualising the data. (+1) – probabilityislogic Apr 3 '11 at 0:56 @probabilityislogic Thanks! I also welcome criticism, particularly if constructive. – GaBorgulya Apr 3 '11 at 1:16 Thought I'd just add to this topic that you might be interested in quantile regression with censoring. Bottai & Zhang 2010 proposed a "Laplace regression" that can do just this task; you can find a PDF on it here. There is a package for Stata for this; it has not yet been translated to R, although the quantreg package in R has a function for censored quantile regression, crq, that could be an option. I think the approach is very interesting and might be much more intuitive to patients than hazard ratios. Knowing for instance that 50% on the drug survive 2 more months than those who don't take the drug, and that the side effects force you to stay 1-2 months at the hospital, might make the choice of treatment much easier. - I don't know "Laplace regression", but regarding your 2nd paragraph I wonder if I'm understanding it correctly. Usually in survival analysis (thinking in terms of accelerated failure time), we would say something like 'the 50th percentile for the drug group comes 2 months later than the 50th % for the control group'. Is that what you mean, or does the output of LR afford a different interpretation? – gung Apr 29 '12 at 17:53 @gung: I think you're right in your interpretation - changed the text, better? I haven't used the regression models myself although I've encountered them recently in a course. It's an interesting alternative to regular Cox models that I've used a lot. Although I probably need to spend more time digesting the idea, I feel that it's probably easier for me to explain to my patients, since I frequently use KM curves when explaining to my patients.
HR demands that you really understand the difference between relative and absolute risks - a concept that can take some time to explain... – Max Gordon Apr 29 '12 at 18:14 That makes it clearer, thanks. +1 – gung Apr 29 '12 at 18:39
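The permutation scheme sketched in the second answer can be written down in a few lines. The following is a minimal illustration on synthetic, uncensored data (real survival data would need the censoring indicators and Kaplan-Meier medians rather than raw sample medians; all group sizes and scales here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survival times (days) for three hypothetical states;
# state 2 is given genuinely longer survival.
sizes = [400, 500, 600]
scales = [500.0, 550.0, 900.0]
times = np.concatenate([rng.exponential(s, n) for s, n in zip(scales, sizes)])
states = np.repeat([0, 1, 2], sizes)

def max_median_gap(t, s):
    """Largest |state median - overall median| across states."""
    overall = np.median(t)
    return max(abs(np.median(t[s == k]) - overall) for k in np.unique(s))

observed = max_median_gap(times, states)

# Permute state labels under H0: survival is identical across states.
n_perm = 2000
null = np.array([max_median_gap(times, rng.permutation(states))
                 for _ in range(n_perm)])
p_value = (1 + np.sum(null >= observed)) / (1 + n_perm)
print(f"observed gap {observed:.0f} days, permutation p = {p_value:.4f}")
```

The same label-permutation idea extends to the per-state tests and the C-multiplier calibration described above, by recording the whole vector of state p values for each permuted data set.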
http://physics.stackexchange.com/questions/34573/turning-jupiter-into-a-star?answertab=active
Turning Jupiter into a star In various works of science fiction I've heard about the prospect of turning Jupiter into a star. From what I know about the physics of such a task, it would require somehow condensing Jupiter into something quite a bit smaller than it currently is, such that the pressure is sufficient to create spontaneous fusion. So my questions all assume that somehow this is possible: what would be the results? Specifically, I'm curious as to what kind of light there would be, what size Jupiter would have to shrink to in order to manage fusion, and something about the temperature, light output, longevity, and other properties that such a body would have. Specifically, I'm going to assume that it is possible to somehow force Jupiter to shrink to the minimum size required to spontaneously fuse its current atmospheric composition, however this might be able to happen. Also, I know that fusion tends to exert an expansion force; let's assume that this can be managed such that the size would remain constant. - 2 The problem with so many of these "what if" questions is that no answer is better than any other. If you say simply "there is some mechanism" for shrinking Jupiter and for keeping it small after fusion starts, you might as well just make up the rest as well. – AdamRedwine Aug 20 '12 at 12:57 1 – Ron Maimon Aug 21 '12 at 3:54 2 Answers While you can't turn Jupiter into a star, it is not ruled out that you could turn Jupiter into a catastrophic thermonuclear bomb. The limitations on this were calculated at Lawrence Livermore in the 1970s, as a continuation of the work done to check that Earth's oceans wouldn't ignite due to the deuterium content of water.
Necessary Conditions for the Initiation and Propagation of Nuclear Detonation Waves in Plane Atmospheres, by Weaver and Wood, couldn't rule out a self-sustaining ignition shock-wave in a planetary atmosphere at a deuterium concentration of more than 1 percent at ordinary liquid densities. Although this makes the oceans safe, Jupiter is big, and it might have segregated a deuterium layer deep inside which has a high enough concentration to allow a self-sustaining nuclear ignition. Then if you drop a configuration of plutonium designed to detonate the deuterium by a nuclear explosion at the appropriate depth, you could get a detonation wave that ignites the entire deuterium layer within a very short time, the time it takes a shock wave to encircle Jupiter. The energy output could convert a non-negligible fraction of the deuterium in Jupiter to He3/tritium, and release an enormous amount of energy. If 1 Earth mass of deuterium is ignited by the ignition shock wave, the energy release is $10^{38} J$, over a very short time, perhaps an hour or two, and this is already 10,000 times the energy output of the sun in a full year. The resulting explosion would destroy that part of the world facing Jupiter, and probably bake the rest. I don't lose sleep over this, though. If there is a natural trigger for such an explosion, perhaps the collision of a rocky planet with a gas giant, one might experimentally observe such planetary mini-supernovas somewhere. This was suggested in section VIII of Weaver and Wood's paper. - "Jupiter is big, and Jupiter is cold" -- Big, yes. Cold, not so much. The interior is believed to be very hot, about 36,000 Kelvins at the core boundary. – Keith Thompson Aug 21 '12 at 5:37 Actually, rereading, I see your point. Is it possible to segregate a dense deuterium layer?
One must keep in mind that as the material becomes denser, you should get ignition at smaller and smaller deuterium fractions, although all of this is from a lower-bound estimate, which was wildly optimistic about the parameters, to be on the safe side. – Ron Maimon Aug 21 '12 at 7:25 You need not only high enough densities but also high temperature to start fusion. There isn't cold fusion, due to the Coulomb barrier, right? I don't believe that bombing a gas giant with rocky planets will create the millions of degrees that are needed to ignite fusion, but I also don't believe that you will be able to send a nuclear weapon inside Jupiter, to its "sensitive guts", because it would still be melted before it gets there. – Luboš Motl Aug 21 '12 at 7:48 1 – mmc Aug 21 '12 at 12:03 1 @LubošMotl That would surely happen with a normal device. But maybe you would be able to arrange an implosion using only the external pressure, without requiring high explosives. Needless to say, this is extremely speculative and predetonation would be a big problem... The paper discusses ignition via impact of high velocity astronomical objects (v > 300 km/s). – mmc Aug 21 '12 at 12:18 An object of (relatively low) Jupiter's mass cannot become a star because the minimum mass for which the density becomes high enough to turn the object into a star is 60 times higher. Jupiter's actual mass is $2\times 10^{27}$ kilograms, i.e. 320 times the Earth's mass or 1/1,000 of the solar mass. For 60 times higher masses, any addition of mass would be enough to make the planet collapse, instead of grow, and fusion would ultimately get ignited. Even if you compressed the current Jupiter's mass into a small volume so that fusion would start, it wouldn't be sustainable because the gravity wouldn't be sufficient to sustain the high density. The intense pressure from the radiation created by the fusion would make the object explode, anyway.
If your plan is to design some mechanism to mechanically keep the volume of Jupiter tiny – such a mechanism is not only infeasible for a civilization that doesn't even fully control the Earth but may perhaps be prohibited by general principles of physics (because the huge pressure around the "confined Jupiter" could only be sustained by matter in the state of fusion or even more extreme forms of matter, so effectively, the "whole" star would be much heavier than Jupiter, anyway) – that's fine, but then the question becomes meaningless. Yes, if someone is equipped with divine powers not allowed by the laws of Nature as they are currently understood, He may turn Jupiter into a star, bread into gold, and water into wine, among millions of other wonderful things. But these prospects are not a topic in physics; it's debatable whether theologians would agree that they belong at least to theology: I don't think so. Physics studies Nature as it actually works and if it finds out that something doesn't work in a certain way, it must take this insight seriously rather than overlook it. -
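The orders of magnitude quoted in the answer above are easy to sanity-check. A minimal sketch, assuming a D-D fusion yield of about 3.65 MeV per pair of deuterons consumed (my figure, not one from the paper) and standard values for the Earth's mass and the solar luminosity:

```python
# Back-of-the-envelope check of the energy figures quoted above.
MEV = 1.602e-13          # joules per MeV
M_DEUTERON = 3.344e-27   # mass of a deuteron, kg
E_PER_KG = 3.65 * MEV / (2 * M_DEUTERON)   # assumed D-D yield, ~9e13 J/kg

M_EARTH = 5.97e24        # kg
L_SUN = 3.828e26         # solar luminosity, W
YEAR = 3.156e7           # seconds

full_burn = M_EARTH * E_PER_KG   # complete burn of one Earth mass of deuterium
sun_year = L_SUN * YEAR          # total solar output over one year

# full_burn lands within an order of magnitude of the quoted 1e38 J,
# and its ratio to a solar year is indeed of order 10^4.
print(f"full burn: {full_burn:.1e} J, solar year: {sun_year:.1e} J")
print(f"ratio: {full_burn / sun_year:.0f}")
```

This only checks that the arithmetic is self-consistent; it says nothing about whether such an ignition is physically possible.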
http://physics.stackexchange.com/questions/24380/centripetal-force-of-a-rotating-rigid-body/24383
# Centripetal force of a rotating rigid body? Consider someone pushing a roundabout in a playground. Initially the roundabout is stationary, but when it is pushed, it rotates with increasing rotational speed. The force of the push is balanced by the reaction force exerted by the support at the centre of the roundabout. The forces are equal in magnitude and opposite in direction, so the roundabout is in translational equilibrium. But they have different lines of action, so there is a resultant torque, causing the roundabout to rotate and have angular momentum. Okay, my question is, how about the centripetal force that exists whenever there is circular motion? Where does/would it come from? - – Qmechanic♦ Apr 25 '12 at 9:50 The 2 threads linked above are discussing how centripetal force is a resultant force and not a real force in the sense that tension, weight, etc. are. My above question on the other hand is about the difference between a particle in revolution versus a body in rotation, or how exactly to relate the centripetal force in the former to the context of the latter. So the 2 linked threads are not directly related to this discussion. Just to clarify. :-) – Ryan Apr 25 '12 at 16:22 – Ryan Apr 25 '12 at 17:58 ## 3 Answers Consider the simpler system of a mass in 2D, connected to the origin by a massless rod that is free to rotate about the origin. When you exert a force perpendicular to the rod on the mass, the mass exhibits circular motion. In this case, the centripetal force needed comes from the tension exerted on the mass by the rod. A similar situation happens for the roundabout: tensions in different parts of the roundabout act on each other to give the necessary centripetal force. - Thanks! Since the roundabout is rotating, there should be a net force towards the rotational axis, right? I know I'm missing something fundamental...
– Ryan Apr 25 '12 at 10:41 1 When you sum the forces up, the result describes what the center of mass is doing, not what the individual parts of the roundabout are doing. In this case, the net force on the roundabout is zero, since its center of mass is stationary. To see the circular motion, we find the net force on individual parts of the roundabout. – leongz Apr 25 '12 at 10:53 1 Thank you, Leongz. I've finally wrapped my mind around it. So in summary, the net force towards the rotational axis exists for each "particle" of the roundabout, but not for the roundabout as a whole body. Cheers. – Ryan Apr 25 '12 at 16:29 @Ryan Exactly. This is the general idea. – Pygmalion Apr 25 '12 at 16:30 Stackexchange is the best invention since water. – Ryan Apr 25 '12 at 16:35 In order to have a centripetal force, you must have mass that rotates around a certain point. You should be more specific with your question, that is, you must tell us which mass is rotating and then we can tell you which centripetal force is responsible for that rotation. Here is a more complete explanation of where the centripetal force comes from: Let's suppose we are standing in an inertial frame of reference. As Newton's first law states: if no force is exerted on a body, the velocity of the body remains constant. Now what about rotation? In rotation, velocity is not constant! OK, velocity is a vector, and it is possible that the body rotates in a way that the magnitude of the vector is constant. However, if the body is rotating, the direction of the vector of the velocity is changing! Therefore, some force must act on the body, must force the body to rotate. It turns out that the force that changes the direction of the vector of the velocity is directed toward the center of the rotation, and therefore we call that force the centripetal force (petere in Latin means: to make for, tend to get to), i.e. a force that tends toward the center. For a specific explanation, provide a specific case.
- Oh okay, that makes perfect sense. But yet, when considering the body as a whole (instead of an individual particular part of the body), shouldn't there be a net force towards the axis of rotational motion? – Ryan Apr 25 '12 at 10:39 1 No. Take a look at two equal parts of the roundabout on the opposite sides of the axis. There are two centripetal forces of equal magnitudes. But since they are both pointing toward the center, they have opposite directions and they cancel out when you look at the roundabout as a whole body. – Pygmalion Apr 25 '12 at 11:23 If the roundabout is not symmetrical and you have a larger mass on one side of the axis than on the other side, one of the two centripetal forces will be greater than the other and then there will be a net force acting on the axis. The axis of the roundabout will wobble. – Pygmalion Apr 25 '12 at 11:29 Thanks, Pyg, got it. :) – Ryan Apr 25 '12 at 16:06 The centripetal force isn't a "new" force that comes out of nowhere. It's made up of normal forces. Allow me to explain: For any sort of acceleration, via $\vec F=m\vec a$, you need a force, right? So basically, if you were accelerating a rock with some rope, the corresponding force is the rope tension. If a ball is falling on Earth, the corresponding force is gravity. You can even try pulling a ball down on Earth, and you get a mixture of forces. Now, note that velocity is a vector. In UCM (uniform circular motion), you have a body with constant speed, but the direction of its velocity varies. In the above diagram, the green arrow is the initial velocity vector. The blue arrow is the velocity after a split second. The red arrow is the acceleration vector required to change the velocity thus. Now, by $\vec F=m\vec a$, we need a force to make this happen. So, which force is it? It depends. If you are whirling the stone in a horizontal plane via a rope, this force is the tension force. If you are whirling it in a vertical plane, it is the combination of gravity and rope tension.
If the ball is rolling in a bowl, it is the combination of gravity and the reaction (normal) force. If this is a planet-satellite system, the force is just gravity. A general term for this force is "centripetal force". CPF is a mixture of pre-existing forces, depending on the situation, as explained above. Using some calculus, we can prove that $\mathrm{CPF}=\frac{mv^2}{R}$, where $m$ is the mass of the particle, $v$ is the instantaneous velocity, and $R$ is the radius of curvature (just the radius in the case of circular motion). So it doesn't "come from" anywhere--it's not a new force. It's just a name for pre-existing forces when they create circular motion. That's all. - I mean, shouldn't there be some sort of net force in the centripetal direction? In my roundabout example, aren't both forces tangential (in opposite directions) to the circular motion? – Ryan Apr 25 '12 at 10:35 1 @Ryan. There is. The tangential pushing accelerates the roundabout and puts it in UCM in the first place. Then, the metal of the roundabout does the same thing as a rope - it keeps it in UCM. – Manishearth♦ Apr 25 '12 at 10:40 1 @Ryan: There is a net CPF on every individual small element of the roundabout. Overall, the CPFs add up to zero. CPF only applies to small elements -- for a larger body the internal forces come into the picture (these can be calculated easily, though). Also note: whenever a constant-magnitude force is centripetal, i.e. it changes its direction to point at a particular point, then there will be no net translational motion. – Manishearth♦ Apr 25 '12 at 10:50 With reference to the yellow paragraph in my question, there are two obvious forces (neither of which points toward or away from the rotational axis), and their sum is also the final resultant force on the body. So the metal of the roundabout is like an imaginary string keeping each particle on the circumference of the body in circular motion. So why isn't there a resultant force pointing towards the rotational axis?
I'm so confused. – Ryan Apr 25 '12 at 10:56 Okay, let me try to digest your explanation above that the CPF's add up to zero. Need to be away from the computer for a few hours. Thanks Manis! – Ryan Apr 25 '12 at 11:00
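The point the answers and comments above converge on can be illustrated numerically (the function and parameter values here are my own, purely for illustration): each small element of a symmetric rotating body needs an inward force of magnitude $m\omega^2 r$, yet the vector sum of those element forces over the whole body is zero.

```python
# Each element of a rigidly rotating body needs an inward (centripetal)
# force of magnitude m*omega**2*r, but for a symmetric body these element
# forces sum to zero, so there is no net force on the body as a whole.
import math

def centripetal_forces(n=8, m=1.0, r=2.0, omega=3.0):
    """Return the (Fx, Fy) centripetal forces on n equal masses spaced
    evenly around a circle of radius r, rotating at angular speed omega."""
    forces = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        f = m * omega ** 2 * r        # magnitude: m*v**2/r = m*omega**2*r
        # each force points from the element toward the centre
        forces.append((-f * math.cos(theta), -f * math.sin(theta)))
    return forces

forces = centripetal_forces()
fx = sum(f[0] for f in forces)
fy = sum(f[1] for f in forces)
print(fx, fy)   # both ~0: the element forces cancel for the whole body
```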
http://terrytao.wordpress.com/tag/lebesgue-dominated-convergence-theorem/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘Lebesgue dominated convergence theorem’ tag. ## 245A, Notes 3: Integration on abstract measure spaces, and the convergence theorems 25 September, 2010 in 245A - Real analysis, math.CA | Tags: absolutely integrable functions, Fatou's lemma, Lebesgue dominated convergence theorem, measurable space, measure space, monotone convergence theorem, sigma-algebra | by Terence Tao | 50 comments Thus far, we have only focused on measure and integration theory in the context of Euclidean spaces ${{\bf R}^d}$. Now, we will work in a more abstract and general setting, in which the Euclidean space ${{\bf R}^d}$ is replaced by a more general space ${X}$. It turns out that in order to properly define measure and integration on a general space ${X}$, it is not enough to just specify the set ${X}$. One also needs to specify two additional pieces of data: 1. A collection ${{\mathcal B}}$ of subsets of ${X}$ that one is allowed to measure; and 2. The measure ${\mu(E) \in [0,+\infty]}$ one assigns to each measurable set ${E \in {\mathcal B}}$. For instance, Lebesgue measure theory covers the case when ${X}$ is a Euclidean space ${{\bf R}^d}$, ${{\mathcal B}}$ is the collection ${{\mathcal B} = {\mathcal L}[{\bf R}^d]}$ of all Lebesgue measurable subsets of ${{\bf R}^d}$, and ${\mu(E)}$ is the Lebesgue measure ${\mu(E)=m(E)}$ of ${E}$. The collection ${{\mathcal B}}$ has to obey a number of axioms (e.g. being closed with respect to countable unions) that make it a ${\sigma}$-algebra, which is a stronger variant of the more well-known concept of a boolean algebra. Similarly, the measure ${\mu}$ has to obey a number of axioms (most notably, a countable additivity axiom) in order to obtain a measure and integration theory comparable to the Lebesgue theory on Euclidean spaces. 
When all these axioms are satisfied, the triple ${(X, {\mathcal B}, \mu)}$ is known as a measure space. These play much the same role in abstract measure theory that metric spaces or topological spaces play in abstract point-set topology, or that vector spaces play in abstract linear algebra. On any measure space, one can set up the unsigned and absolutely convergent integrals in almost exactly the same way as was done in the previous notes for the Lebesgue integral on Euclidean spaces, although the approximation theorems are largely unavailable at this level of generality due to the lack of such concepts as “elementary set” or “continuous function” for an abstract measure space. On the other hand, one does have the fundamental convergence theorems for the subject, namely Fatou’s lemma, the monotone convergence theorem and the dominated convergence theorem, and we present these results here. One question that will not be addressed much in this current set of notes is how one actually constructs interesting examples of measures. We will discuss this issue more in later notes (although one of the most powerful tools for such constructions, namely the Riesz representation theorem, will not be covered until 245B). Read the rest of this entry »
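For reference, here is one standard statement of the dominated convergence theorem in this abstract setting (my paraphrase; the full notes give the version actually proved there):

```latex
% Dominated convergence theorem on an abstract measure space.
Let $(X, \mathcal{B}, \mu)$ be a measure space, and let
$f_1, f_2, \ldots : X \to \mathbf{C}$ be measurable functions that converge
pointwise $\mu$-almost everywhere to some limit $f$. Suppose there is an
unsigned absolutely integrable function $G : X \to [0, +\infty]$ such that
$|f_n(x)| \le G(x)$ for $\mu$-almost every $x \in X$ and every $n$. Then
\[
  \lim_{n \to \infty} \int_X f_n \, d\mu = \int_X f \, d\mu .
\]
```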
http://alanrendall.wordpress.com/2012/01/
# Hydrobates A mathematician thinks aloud ## Archive for January, 2012 ### Modelling Dictyostelium aggregation, yet again January 29, 2012 In the last post I discussed the second chapter of the book ‘La Vie Oscillatoire’ which is concerned with glycolytic oscillations. The third chapter is on calcium oscillations, a theme which I have written about more than once in recent posts. Here I will say something about the subject of the fourth chapter, signalling by the cellular slime mould Dictyostelium. I have written some things about Dictyostelium in previous posts. It has fascinated many scientists by its ability to go from a state where the cells are independent to a state which looks like a multicellular organism as a reaction to a scarcity of food. The cells gather by means of chemotaxis. It is usual for mathematical talks about chemotaxis to start with nice pictures portraying the life cycle of Dictyostelium discoideum, the most famous organism of this type. It gave rise to the formulation of the Keller-Segel model which is rather popular among mathematicians. As I have mentioned in previous posts it is not so clear to what extent the Keller-Segel model, as attractive as it is, is really relevant to the life of D. discoideum. In his book Goldbeter seems to share this sceptical point of view while leaving open the possibility that the Keller-Segel model might be a reasonable model for aggregation in a less famous relative of D. discoideum, Dictyostelium minutum. Returning to D. discoideum, it is a known fact that the cells of this organism signal to their neighbours by producing cAMP (cyclic adenosine monophosphate). This process can be modelled by a system of ODE (or a similar system with diffusion) called the Martiel-Goldbeter model. Experimentally it is seen that cultures of starving D. discoideum develop circular or spiral waves centered on certain pacemaker cells. 
In the book a description is given of how this process can be understood on the basis of the Martiel-Goldbeter model. It is useful to draw a diagram of the dynamical properties of this model as a function of the activities of two key enzymes. There is a region where solutions of the MG model converge to a stationary solution, a region where they show excitable behaviour and a region where there is a limit cycle. As a cell develops following the beginning of a period of hunger it moves around in this parameter space. It starts from constant production (1), becomes excitable (2), produces pulsations (3) and then becomes excitable again (4). This is a statement which relates to the ODE system but when diffusion is added the idea is that the pacemaker cells are in stage (3) of their development while cells in stage (4) can then lead to waves which propagate away from the pacemakers. Because of the variability of the cells (of their intrinsic properties or their life histories) the different stages of development can be present at the same time. I now want to say some more about the MG model itself. The variables in the system are the fraction $\rho_T$ of receptor in the active state and the intracellular ($\beta$) and extracellular ($\gamma$) concentrations of cAMP. The evolution equation for $\rho_T$ is of the form $\frac{d\rho_T}{dt}=-f_1(\gamma)\rho_T+f_2(\gamma)(1-\rho_T)$ where $f_1$ and $f_2$ are ratios of linear functions. The evolution equation for $\gamma$ is $\frac{d\gamma}{dt}=\frac{k_t\beta}{h}-k_e\gamma$ where the quantities other than the unknowns are constants. The most complicated evolution equation is that for $\beta$ which is $\frac{d\beta}{dt}=q\sigma\Phi(\rho_T,\gamma,\alpha)-(k_i+k_t)\beta$. Here $\alpha$ is the concentration of ATP, taken to be constant in this model, and all other quantities except the unknowns are constants. The function $\Phi$ is complicated and will not be written down here.
It is obtained from a system with more equations by a quasi-steady state assumption and in its dependence on $\gamma$ it is a ratio of two quadratic functions. A further quasi-steady state assumption leads to a simplified system for $\rho_T$ and $\gamma$ alone which is more tractable for analytical considerations. ### Albert Goldbeter and glycolytic oscillations January 21, 2012 This Christmas, at my own suggestion, I was given the book ‘La Vie Oscillatoire’ by Albert Goldbeter as a present. This book is concerned with oscillatory phenomena in biological systems and how they can be explained and modelled mathematically. After the introduction the second chapter is concerned with glycolytic oscillations. I had a vague acquaintance with this subject but the book has given me a much better picture. The chapter treats both the theoretical and experimental aspects of this subject. If yeast cells are fed with glucose they convert it into alcohol. Those of us who appreciate alcoholic beverages can be grateful to them for that. In the presence of a supply of glucose with a small constant rate alcohol is produced at a constant rate. When the supply rate is increased something more interesting happens. The output starts to undergo periodic oscillations although the input is constant. It is not that the yeast cells are using some kind of complicated machine to produce these. If the cells are broken down to make yeast extract the effect persists. In fact for yeast extract the oscillations go away again for very high concentrations of glucose, an effect not seen for intact cells. This difference is not important for the basic mechanism of production of oscillations. The breakdown of sugar in living organisms takes place via a process called glycolysis consisting of a sequence of chemical reactions. By replacing the input of glucose by an input of each of the intermediate products it was possible to track down the place where the oscillations are generated. 
The enzyme responsible is phosphofructokinase (PFK), which converts fructose-6-phosphate into fructose-1,6-bisphosphate while converting ATP to ADP to obtain energy. Now ADP itself increases the activity of PFK, thus giving a positive feedback loop. This is what leads to the oscillations. The process can be modelled by a two-dimensional dynamical system called the Higgins-Selkov oscillator. Let $S$ and $P$ denote the concentrations of substrate and product respectively. The substrate concentration satisfies an equation of the form $\dot S=k_0-k_1SP^2$. The substrate is supplied at a constant rate and used up at a rate which increases with the concentration of the product. (Here we are thinking of ADP as the product and ignoring other possible effects.) The product concentration correspondingly satisfies $\dot P=k_1 SP^2-k_2 P$. The Higgins-Selkov oscillator gives rise to a limit cycle by means of a Hopf bifurcation. The ODE system is similar to the Brusselator. There are two clear differences. The substance which is being supplied from outside occurs linearly in the nonlinear term in the Higgins-Selkov system and quadratically in the Brusselator. In the Higgins-Selkov system the nonlinear term occurs with a negative sign in the evolution equation for the substance being supplied from outside while in the Brusselator it occurs with a positive sign. In the book of Goldbeter the Higgins-Selkov oscillator seems to play the role of a basic example to illustrate the nature of biological oscillations.
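The Higgins-Selkov system above is easy to explore numerically. A minimal sketch with classical Runge-Kutta; the parameter and initial values are illustrative choices of mine, not taken from Goldbeter's book:

```python
# Sketch of the Higgins-Selkov oscillator described above:
#   dS/dt = k0 - k1*S*P^2,   dP/dt = k1*S*P^2 - k2*P
# integrated with classical fourth-order Runge-Kutta.

def higgins_selkov(k0=1.0, k1=1.0, k2=0.9, S0=2.0, P0=1.0,
                   dt=0.01, steps=5000):
    """Return the trajectory [(S, P), ...] of the Higgins-Selkov ODEs."""
    def f(s, p):
        rate = k1 * s * p * p          # the nonlinear conversion term
        return k0 - rate, rate - k2 * p

    s, p = S0, P0
    traj = [(s, p)]
    for _ in range(steps):
        a1, b1 = f(s, p)
        a2, b2 = f(s + 0.5 * dt * a1, p + 0.5 * dt * b1)
        a3, b3 = f(s + 0.5 * dt * a2, p + 0.5 * dt * b2)
        a4, b4 = f(s + dt * a3, p + dt * b3)
        s += dt * (a1 + 2 * a2 + 2 * a3 + a4) / 6
        p += dt * (b1 + 2 * b2 + 2 * b3 + b4) / 6
        traj.append((s, p))
    return traj

def steady_state(k0=1.0, k1=1.0, k2=0.9):
    """The unique positive steady state: S* = k2^2/(k1*k0), P* = k0/k2."""
    return k2 ** 2 / (k1 * k0), k0 / k2

print(higgins_selkov()[-1])   # final (S, P); both stay positive
```

With $k_0=k_1=1$ the trace of the Jacobian at the steady state is $k_2 - k_1k_0^2/k_2^2$, which changes sign at $k_2=1$, so $k_2=0.9$ as above gives damped oscillations into the steady state, while values just above the bifurcation destabilize it.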
http://nrich.maths.org/81/index?nomenu=1
## 'Bean Bags for Bernard's Bag' printed from http://nrich.maths.org/ ### Show menu Some years ago I suddenly had to do some maths with some boys who were a bit turned off about it. If it had been today in England they would have said, "It's not cool!" There were two small PE hoops nearby and some small bean bags. I put down the hoops as you see: I collected eight of the bean bags. "Do they really have beans in?" I asked. They did not know and neither did I. Never mind. I suggested that we put them in the hoops. Four ended up being in the blue hoop, six in the red hoop so that two were in the overlap. We went on to talk about how many were in the blue and how many were in the red and how the ones in the middle seemed to be counted twice. Try this for yourself. We tried putting the bean bags in the hoops in a different way and each time we counted how many were in each of the two hoops. Well it was time to use the yellow hoop that had been around: I suggested we made sure that there were four in the blue, five in the red and six in the yellow. So we all tried and then ...? Well have a go at this one. Now the investigation is to take this much further. Try to find as many ways as you can for having those numbers $4$, $5$ and $6$ using just eight objects. I guess you'll need to record your results somehow so that you do not do the same ones twice! Have you found yourself using some kind of 'system' or 'method' for going from one arrangement to the next? Try to explain it if you have. When you're pretty sure you cannot find any more, check yours with a friend and see if there are any new ones! As always we then have to ask "I wonder what would happen if ...?" This month it's very easy to invent new ideas, for example, "I wonder what would happen if I used a different number of objects?" You could go about this in order and try six objects and then seven, you've done eight so move on to nine ... Any other ideas?
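If you want to check your list of arrangements by brute force, a short script can enumerate every way of splitting the bags among the seven regions of the three-hoop diagram. This assumes every bag lies inside at least one hoop, as in the story:

```python
# Enumerate every way of placing the bags in the seven regions of the
# three-hoop diagram so that blue holds 4, red holds 5 and yellow holds 6.
from itertools import product

def arrangements(blue=4, red=5, yellow=6, total=8):
    """Region counts are (b, r, y, br, by, ry, bry): bags in blue only,
    red only, yellow only, each pairwise overlap, and all three hoops."""
    found = []
    for b, r, y, br, by, ry, bry in product(
            range(blue + 1), range(red + 1), range(yellow + 1),
            range(min(blue, red) + 1), range(min(blue, yellow) + 1),
            range(min(red, yellow) + 1), range(min(blue, red, yellow) + 1)):
        if (b + r + y + br + by + ry + bry == total
                and b + br + by + bry == blue
                and r + br + ry + bry == red
                and y + by + ry + bry == yellow):
            found.append((b, r, y, br, by, ry, bry))
    return found

print(len(arrangements()))   # number of distinct region-count arrangements
```

With eight bags and hoop totals of 4, 5 and 6, the script finds 18 distinct arrangements of region counts, so you can tell when your hand search is complete.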
http://physics.stackexchange.com/tags/potential/new
# Tag Info ## New answers tagged potential 0 ### The potentiality of the electric field Ok, let's consider $\vec{r} \cdot \vec{dr}$; it is equal to $|\vec{r}||\vec{dr}|_\vec{r}$ where $|\vec{dr}|_\vec{r}$ is the projection of $\vec{dr}$ on $\vec{r}$. If you draw $\vec{r}$ and a small (remember, you need infinitesimal!) $\vec{dr}$ you will notice that this projection is actually equal to $|\vec{dr}|_\vec{r} = d|\vec{r}|$, so $\vec{r} \cdot \vec{dr} = ... 3 ### Electron in an infinite potential well Within the superposition of the ground and the first excited state, the wavefunction oscillates between "hump at left" and "hump at right". Maybe you are asked to find the half-period of these oscillations? 2 ### Electron in an infinite potential well Yes, I believe you have to think of it as if it were a semiclassical problem; you evaluate with QM the mean square velocity $\left< v^2 \right>$ of the particle, then calculate its square root; this should give you an estimate of the typical velocity of the particle. Once you have it, you divide the length of the well by it and find the time it takes ... 0 ### Convolution kernel of poisson equation by FFT In general, what you're trying to do is called the "spectral method" for solving PDEs. Wikipedia has a little on it, including some useful references, and a solution of the Poisson equation. http://en.wikipedia.org/wiki/Spectral_method As Peter Kravchuck says, the kernel will always be $k^{-2}$ for the Poisson equation. In the linked PDF, ... 0 ### Convolution kernel of poisson equation by FFT In Fourier space the Poisson equation is $k^2\phi=\rho$ (up to a convention-dependent constant factor). So in every dimension the kernel is $1/k^2$. As for the real space, it is, up to a constant, $|r|,\,\log|r|,\,1/r$ in 1D, 2D, 3D respectively. 11 ### In the Lennard-Jones potential, why does the attractive part (dispersion) have an $r^{-6}$ dependence?
The "simplest" classical explanation I know is the van der Waals interaction described by Keesom between two permanent dipoles. Let us consider two permanent dipoles $\vec{p}_1$ (located at $O_1$) and $\vec{p}_2$ located at $O_2$. Their potential energy of interaction is: $$U(\vec{p}_1,\vec{p}_2,\vec{O_1 O_2}) = -\vec{p}_1\cdot \vec{E}_2 = \ldots$$ 0 ### Electric potential vs potential difference The amount of work done in bringing a unit positive charge from infinity to a given point in an electric field is called the electric potential at that point; mathematically, $$\phi=\frac{W}{Q}$$ for work W and charge Q. The work done in moving a unit positive charge from a point of lower potential to a point of higher potential is called the electric potential ... 0 ### Electrostatic Potential Energy Derivation I think I've understood it now. $ds=dr$, but $dr<0$ and $|dr|=-dr$, because $dr$ is a small position vector and the position vector is directed along the field. Now why I can't use $ds$ directly is because the limits in the integral (the upper and lower limit in integral notation) are in terms of the position vector and not the displacement. Had they been in ... 2 ### Electrostatic Potential Energy Derivation New version The problem in your demonstration is when you write down $\vec{A}\cdot\vec{B} = ||\vec{A}||\,||\vec{B}||\,\cos\theta$. More exactly, in your case $||d\vec{r}||\neq dr$ because $dr<0$ when you go from $\infty$ to $r$ and a norm is positive by definition. So the sign error is introduced from the 3rd to the 4th line. Old version The demonstration on ... 1 ### Electrostatic Potential Energy Derivation Just to be clear, the potential energy of a particle of charge $q_2$ at a distance $r$ from a source of potential (supposedly at zero) of charge $q_1$ is the work that an external operator has to provide to bring the particle from infinity to $r$ at constant velocity. This then reads: $\int_{\infty}^r \vec{F}_{op}\cdot \vec{ds}$ As people have said, the ...
2 ### Electrostatic Potential Energy Derivation When you calculate work, you do so along a given path. Here, that path has tangent vector $d\mathbf s$. This is a vector with direction; the minus sign will ultimately come from choosing the path's orientation--inward or outward. Edit: Aha, I think I've found the unintuitive part. The key is in the use of the coordinate $r$ to parameterize the path, in ... -1 ### Electrostatic Potential Energy Derivation $$\mbox{d}\vec s = \mbox{d}r$$ Therefore, $$\vec F\cdot \mbox{d}\vec s= F\mbox{ d}r\mbox{ }\cos\theta=F\mbox{ d}r\mbox{ }\cos\pi=-F\mbox{ d}r$$ Edit: sorry for the error where I forgot to put the magnitude sign. I did mean the magnitude sign. $$\left|\left|\mbox{d}\vec s\right|\right| = \mbox{d}r$$ 1 ### Lorenz gauge fixing Yes. If you define $f=-\partial_\mu A^\mu$ then you can write the equation in the form $$\partial_\mu\partial^\mu\psi = f$$ This is the Klein-Gordon equation with a nonzero source ($f$) and can be solved via Green's function methods. Once you have the Klein-Gordon propagator* $G(x)$ (this is derived in e.g. any quantum field theory textbook) appropriate to ... 1 ### Electrostatic Potential Definition Potential is the negative of the work done per unit charge by the electrostatic force. 1 ### Electrostatic Potential Energy Derivation $\mathbf{r}$ is a position vector and $\mathbf{s}$ is a displacement vector between two points, let us say A and B. In the general case they are not equal, but they can be if we properly choose the origin of the coordinate system: A = {0,0,0} or B = {0,0,0}. The sign depends on at which point, A or B, the origin is placed. 2 ### Electrostatic Potential Definition The electric field is a conservative vector field, which implies that there exists a function $V$ for which $$\mathbf E = -\nabla V$$ We call this function $V$ the electric potential. There is no mathematical need to first define potential energy.
One can then physically interpret $V$ in terms of a "potential landscape" to get intuition for what it ...

0

### Potential of a Body

In a conductor at equilibrium the electric potential is equal everywhere, for, if it were not, then the electrons would experience a force proportional to the gradient of the potential, causing them to redistribute themselves until the potential became homogeneous.

Top 50 recent answers are included
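As a sanity check on the sign conventions discussed in these answers, the work the external operator supplies, $W=\int_{\infty}^{r}\vec F_{op}\cdot d\vec s$, can be evaluated numerically for the Coulomb force and compared with the textbook result $kq_1q_2/r$. This is my own sketch, not from the posts; the charge values, cutoff, and grid are arbitrary illustrative choices:

```python
import math

k = 8.9875517873681764e9   # Coulomb constant, N m^2 / C^2
q1, q2 = 1e-6, 2e-6        # arbitrary like charges (C) -- illustrative values
r, R = 0.5, 1e9            # final separation and the "infinity" cutoff (m)

def coulomb(s):
    """Magnitude of the repulsive Coulomb force at separation s."""
    return k * q1 * q2 / s**2

# Discretize the inward path on a geometric grid (it resolves the 1/s^2
# decay), evaluating the force at the geometric midpoint of each segment.
N = 100_000
grid = [r * (R / r) ** (i / N) for i in range(N + 1)]
W = sum(coulomb(math.sqrt(a * b)) * (b - a) for a, b in zip(grid, grid[1:]))

U = k * q1 * q2 / r        # potential energy with the zero at infinity
assert abs(W - U) / U < 1e-6
```

Evaluating a $1/s^2$ integrand at the geometric mean of each segment integrates every segment exactly, so the only discrepancy left is the finite cutoff $R$ standing in for infinity.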
http://www.leancrew.com/all-this/2011/12/omnigraphsketcher-not-yet/
# And now it’s all this

## I just said what I said and it was wrong. Or was taken wrong.

# OmniGraphSketcher - not yet

December 11th, 2011 at 8:50 pm by Dr. Drang

The other day I was listening to the latest Mac Power Users podcast, the one with Horace Dediu as guest,1 and I heard David ask Horace if he used OmniGraphSketcher. I tried out OmniGraphSketcher back in 2009 and decided it wasn’t for me, but I got curious as to whether the things I didn’t like had been reworked in the 2-3 years since then. So I downloaded a trial copy and started playing with it again. It still isn’t for me.

The first problem is that OmniGraphSketcher still connects data points from left to right instead of in the order you give them, just as I found when I first looked into it. So if you generate a set of points to plot the hyperbola $x^2 - y^2 = 1$, you’ll find that the plot it makes looks like this instead of this. There is, I found, a way to work around this problem, but it’s easier to just use another charting program. You may think this is no big deal because you don’t plot hyperbolas, but the strict left-to-right connectivity that OmniGraphSketcher imposes will screw you up eventually.

The second problem is that OmniGraphSketcher doesn’t understand dates. If you paste in a time series from a spreadsheet, OmniGraphSketcher will plot the dates along the horizontal axis as if it knows what it’s doing, but if you study the spacing, you’ll see that it’s all messed up. It’s interpreting the dates as text labels with no numeric value and spacing them uniformly. In the example, the 9 days between January 1 and January 10 take up the same horizontal distance as the 17 days between January 10 and January 27. So OmniGraphSketcher is no substitute for a real charting program like Gnuplot.
But if you’re not trying to plot real data, but only sketch out—as the name of the app implies—some charts to show generic behavior and trends, OmniGraphSketcher may be worth checking out. I still prefer OmniGraffle for quick plot sketches, like this generic stress-strain curve for mild steel. It’s probably not as efficient as OmniGraphSketcher for this particular use, but I’ve worked with it enough to get reasonably fast. For me, OmniGraphSketcher would be a waste.

Update (12/12/11)

This update may end up being longer than the original post. Dave from Omni mentioned in the comments that OmniGraphSketcher will connect points in the order you give them (without going through the rigamarole I described here). He’s right, but it’s still easy to get the wrong connection order—easier, I think, than it is to get the right order. Let me explain how it works.

Say you have two columns of numbers you want to plot. We’ll use the right branch of the $x^2 - y^2 = 1$ hyperbola for our example.

| x     | y     |
|-------|-------|
| 2.236 | -2.00 |
| 2.059 | -1.80 |
| 1.887 | -1.60 |
| 1.720 | -1.40 |
| 1.562 | -1.20 |
| 1.414 | -1.00 |
| 1.281 | -0.80 |
| 1.166 | -0.60 |
| 1.077 | -0.40 |
| 1.020 | -0.20 |
| 1.000 | 0.00  |
| 1.020 | 0.20  |
| 1.077 | 0.40  |
| 1.166 | 0.60  |
| 1.281 | 0.80  |
| 1.414 | 1.00  |
| 1.562 | 1.20  |
| 1.720 | 1.40  |
| 1.887 | 1.60  |
| 2.059 | 1.80  |
| 2.236 | 2.00  |

You copy them from whatever application they’re in, open a new OmniGraphSketcher document, and paste. You’ll get a bunch of points, like this:

The Style Inspector is where you get to choose the look of the plotted points. Open it up and choose the curvy Line Type and you’ll get this, which is disappointing. (The blue halo is there because the plot was selected when I took the screenshot. It’s not part of the plot’s style.)

How to get the right connection? Don’t use the Style Inspector to tell it to connect the points. Instead, right after you paste in the points, go to the Data Inspector, which has a table of all the points you pasted in their original order.
Click the Connect Points button at the bottom, and lines will be drawn between the points in the order in which the points are given.

Do you find this intuitive? I don’t. To me, what I did to get the wrong connections was the natural way to use the program. The Style Inspector is where you make all the choices about lines and points—all except this one. More important, why does OmniGraphSketcher ever connect the points in an order other than the one you give? If I want my data sorted, I’ll do that before plotting, thankyouverymuch. I can’t think of any other plotting programs that work this way. It’s like a word processor automatically alphabetizing your sentences.

That said, at least OmniGraphSketcher does have a way to connect the points in the order you give, which is better than what I thought when I wrote the post yesterday. I still think it’s not for me, but it’s worth a trial if you don’t need it for time series.

1. Overall, I’d say it was a pretty easy show for David and Katie. I’d be surprised if either of them spoke for more than two minutes, total. ↩

Tags: mac, omnigraffle, omnigraphsketcher, software

Post #1702 | 2 Comments

## 2 Responses to “OmniGraphSketcher - not yet”

1. Dave @ Omni says:

I just took another look at this and I can’t recall (and can’t look up at the moment) whether we changed the behavior post-1.0, or I didn’t think of the solution when replying to your tweet back in ’09, but the Connect Points button in the bottom of the Data inspector will connect the points in creation order. We’d still like to add real date support; thanks for giving OmniGraphSketcher another look!

2. Dr. Drang says:

Thanks for the heads-up, Dave. You’re right (naturally—it’s your product), but whether you get the correct connectivity or not depends on how you add lines to the chart.
Doing it as you suggested gives the right connectivity; doing it by clicking one of the Line Type buttons in the Style Inspector—which is what I did because it seemed natural to me—gives the wrong connectivity. This is much better than the workaround I had to use back in ’09, and I’ll update the post to reflect this new behavior, but I have to say I still think it’s wrong to ever connect data points in an order other than that given by the user.
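For reference, the hyperbola data pasted above can be regenerated in plotting order with a short script (mine, not the author’s; the step of 0.2 in $y$ matches the table):

```python
import math

# Right branch of x^2 - y^2 = 1, traced from y = -2.0 to y = 2.0 in steps
# of 0.2 -- the drawing order, which is not left-to-right in x.
points = [(round(math.sqrt(1 + (k / 5) ** 2), 3), round(k / 5, 2))
          for k in range(-10, 11)]

assert points[0] == (2.236, -2.0)
assert points[10] == (1.0, 0.0)
# Sorting by x scrambles the trace: this is why a strict left-to-right
# connection policy draws the wrong curve.
assert points != sorted(points)
```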
http://mathoverflow.net/questions/30742/limit-probability/30753
## Limit probability

You start with a bag of N recognizable balls. You pick them one by one and replace them until they have all been picked up at least once. So when you stop, the ball you pick has not been picked before but all the others have been picked once or more. Let $P_N$ be the probability that all the others were actually picked twice or more.

Questions: does $P_N$ have a limit as $N\to \infty$, is this limit 0, or can you compute it?

Note: This is a question that was asked at the Yahoo answers forum, by gianlino. Since no one has found an answer in that forum, I forwarded it here. Here is the original link. http://answers.yahoo.com/question/index;_ylt=ApZ_Q6sS137DnEcNoUBqn.vty6IX;_ylv=3?qid=20100704050924AAfLYYJ

## 3 Answers

One way to think about this sort of problem is to embed it in continuous time. Take $N$ independent Poisson processes of rate 1. (Think of $N$ independent Geiger counters, each going off at rate 1, if you like.) A point in the $i$th process corresponds to picking the $i$th ball. Since the processes are independent and all have the same rate, the sequence of ball selections is just a sequence of independent uniform choices, as we desire.

Let $M_i(x)$ be the number of points in the $i$th Poisson process up to time $x$. Then the distribution of $M_i(x)$ is Poisson($x$). In particular, $P(M_i(x)\geq 2)= 1-(1+x)e^{-x}$. The time of the first point in such a process has exponential(1) distribution, so its probability density function is $e^{-x}$.

So fix one ball, say ball 1. Consider the event that when ball 1 is first chosen, all the other $N-1$ balls have each been chosen at least twice. To get the probability of this event, integrate over the time that ball 1 is first chosen (i.e. the time of the first event in process 1):

$\int_0^\infty e^{-x} P\big(M_i(x)\geq 2 \text{ for } i=2,3,\dots,N\big) dx =\int_0^\infty e^{-x} \big(1-(1+x)e^{-x}\big)^{N-1} dx$.

The same applies for any ball, and the events are disjoint, so an exact answer to your question is $N\int_0^\infty e^{-x} \big(1-(1+x)e^{-x}\big)^{N-1} dx$.

I don't know if it's possible to get an exact expression for this integral, but it's easy enough to bound it. For any $K$,

$N\int_0^\infty e^{-x} \big(1-(1+x)e^{-x}\big)^{N-1} dx \leq N \int_0^K e^{-x} \big(1-e^{-x}\big)^{N-1} dx + N\int_K^\infty e^{-x} \big(1-Ke^{-x}\big)^{N-1} dx$.

Take $K=\frac12 \log N$, for example; it's easy to evaluate both integrals exactly (substitute $u=e^{-x}$) and to show that they both tend to 0 as $N\to\infty$.

I'm assuming, since you don't say, that the balls are drawn with uniform probability. Number and order the balls; then, for fixed i, the probability of picking the i-th ball last, after picking the other N-1 balls at least twice each, is no greater than $1/N^2 \times 1/N$. Likewise for the other N-1 balls, so the probability you're after is no greater than $N \times 1/N^2 \times 1/N = 1/N^2$ and no less than zero, so $\lim_{N \rightarrow \infty} P_{N} = 0$ by the squeeze rule.

Also, here's a heuristic argument to show that the probability behaves like $1/N$ as $N$ becomes large. The last ball to be chosen is chosen for the first time after roughly $N\log N$ steps. (It's the maximum of $N$ random variables which are geometric with mean $N$.) By this stage, the number of times that any particular other ball has been selected is Poisson($\log N$). So the probability that a particular other ball has been selected only once is roughly $\log N \, e^{-\log N}$, which is $\log N/N$.
Hence the probability that no other ball has been selected only once is roughly $(1-\log N/N)^{N-1}$, which is $\exp[-\log N + O((\log N)^2/N)] \sim 1/N$. Of course the previous paragraph ignores various dependencies, but as $N$ becomes large these will be negligible and no doubt one could formalise the argument if desired.
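The exact expression can be sanity-checked by simulating the drawing process directly. The sketch below is mine, not from the thread; $N=10$ and the trial count are arbitrary choices. It compares a Monte Carlo estimate of $P_N$ with numerical quadrature of $N\int_0^\infty e^{-x}\big(1-(1+x)e^{-x}\big)^{N-1}dx$:

```python
import math
import random

def trial(N, rng):
    """One run: draw uniformly with replacement until every ball has
    appeared; report whether, at that moment, every ball other than the
    last new one had already been drawn at least twice."""
    counts = [0] * N
    unseen = N
    while True:
        i = rng.randrange(N)
        counts[i] += 1
        if counts[i] == 1:
            unseen -= 1
            if unseen == 0:
                return all(c >= 2 for j, c in enumerate(counts) if j != i)

N, trials = 10, 50_000
rng = random.Random(0)
mc = sum(trial(N, rng) for _ in range(trials)) / trials

# Composite Simpson quadrature of the exact formula on [0, 60]; the
# integrand is below e^-60 past the cutoff, so the truncation is harmless.
f = lambda x: math.exp(-x) * (1 - (1 + x) * math.exp(-x)) ** (N - 1)
h, M = 0.001, 60_000
exact = N * h / 3 * sum((1 if k in (0, M) else 4 if k % 2 else 2) * f(k * h)
                        for k in range(M + 1))

assert 0 < exact < 1
assert abs(mc - exact) < 0.01   # simulation agrees with the integral
```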
http://math.stackexchange.com/questions/208075/extracting-an-almost-independent-large-subset-from-a-pairwise-independent-set?answertab=votes
# Extracting an (almost) independent large subset from a pairwise independent set of Bernoulli variables

Let $n>1$, and let $X_1,X_2, \ldots ,X_n$ be non-constant random variables with values in $\lbrace 0,1 \rbrace$. Let us say that a subset of variables $X_{i_1},X_{i_2}, \ldots,X_{i_d}$ is complete if the vector $\overrightarrow{X}=(X_{i_1},\ldots,X_{i_d})$ satisfies $P(\overrightarrow{X}=\overrightarrow{c})>0$ for any $\overrightarrow{c}\in \lbrace 0,1 \rbrace^d$.

Prove or find a counterexample: if $X_1,X_2, \ldots ,X_n$ are pairwise independent Bernoulli variables, then we may extract a complete subset of cardinality at least $t+1$, where $t$ is the largest integer satisfying $2^{t} \leq n$.

This is true for $n=3$ (and hence also true for $n$ between $3$ and $7$), as is shown in the main answer to that MathOverflow question. (That other MathOverflow question is also related, and provides several links.)

If true, this result is sharp, as can be seen by the classical example of taking all arbitrary sums modulo 2 of an initial set of fully independent $t+1$ Bernoulli variables. This produces a set of pairwise independent $2^{t+1}-1$ variables, and where the maximal cardinality of a complete subset is $t+1$.

Update 10/10/2012: By induction, it would suffice to show the following: if $X_1, \ldots ,X_t$ is a fully independent set of $t$ Bernoulli variables and $X$ is another Bernoulli variable, such that the pair $(X_i,X)$ is independent for each $i$, then there are coefficients $\varepsilon_0,\varepsilon_1, \ldots ,\varepsilon_t$ in $\lbrace 0,1 \rbrace$ such that, if we put $$H=\Bigg\lbrace (x_1,\ldots,x_t,x) \in \lbrace 0,1 \rbrace ^{t+1} \Bigg| x=\varepsilon_0+\sum_{k=1}^{t}\varepsilon_kx_k \ {\sf mod} \ 2\Bigg\rbrace, \ \overrightarrow{X}=(X_{1},\ldots,X_{t},X)$$ then $P(\overrightarrow{X}=h)>0$ for any $h\in H$.
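The sharpness example can be checked exhaustively for small $t$. A sketch for $t+1=3$ (the encoding of variables as XOR masks over the base bits is my own):

```python
from itertools import combinations, product

T = 3  # t + 1: number of fully independent fair bits
# Each nonzero mask encodes one variable: the sum mod 2 (XOR) of the base
# bits it selects.  That gives the 2^T - 1 = 7 variables of the example.
masks = list(range(1, 2 ** T))
omegas = list(product((0, 1), repeat=T))  # 8 equally likely outcomes

def val(mask, w):
    return sum(b for k, b in enumerate(w) if mask >> k & 1) % 2

# Pairwise independence: every pair of distinct variables is uniform on {0,1}^2.
for m1, m2 in combinations(masks, 2):
    for a, b in product((0, 1), repeat=2):
        hits = sum(val(m1, w) == a and val(m2, w) == b for w in omegas)
        assert hits / len(omegas) == 0.25

def complete(subset):
    """True iff every 0/1 pattern of the subset has positive probability."""
    return len({tuple(val(m, w) for m in subset) for w in omegas}) == 2 ** len(subset)

assert complete([1, 2, 4])  # the three base bits form a complete subset
# ...but no subset of size 4 is complete: any 4 masks are linearly
# dependent over GF(2), so some joint pattern has probability zero.
assert not any(complete(s) for s in combinations(masks, 4))
```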
http://mathhelpforum.com/pre-calculus/130855-geometrical-properties-ellipses.html
# Thread:

1. ## Geometrical Properties of Ellipses

Hi.

1. If M and M' are the points where the tangent at P on the ellipse meets the tangents at the ends of the major axis, then MM' subtends a right angle at either focus. Show that the line y = mx + c touches the ellipse x^2/a^2 + y^2/b^2 = 1 when c = +/- sqrt[a^2m^2+b^2]

2. The normal to the ellipse x^2/a^2 + y^2/b^2 = 1 at P(x1, y1) meets the x-axis in N and the y-axis in G. Prove PN/NG = (1-e^2)/e^2.

3. Show that the semi-latus rectum of the ellipse $(a\cos\theta, b\sin\theta)$ is of length b^2/a.

4. The extremities of any diameter of an ellipse are L, L' and P is any point on the ellipse. Show that the product of the gradients of the chords LP and L'P is constant.

2. Hi

The line $y = mx+c$ touches the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ when $\frac{x^2}{a^2} + \frac{\left(mx+c\right)^2}{b^2} = 1$ has exactly one solution, i.e. when

$\left(\frac{1}{a^2}+\frac{m^2}{b^2}\right)x^2 + \frac{2mc}{b^2} x + \frac{c^2}{b^2} - 1 = 0$

has one solution, therefore when the discriminant is equal to 0, which gives $c^2 = a^2m^2 + b^2$ after simplification.
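The tangency condition can be sanity-checked numerically; the sample values of $a$, $b$, $m$ below are arbitrary choices of mine:

```python
import math

a, b, m = 3.0, 2.0, 0.5           # sample semi-axes and slope
c = math.sqrt(a * a * m * m + b * b)  # the tangency condition c^2 = a^2 m^2 + b^2

# Quadratic from substituting y = m x + c into x^2/a^2 + y^2/b^2 = 1
A = 1 / a**2 + m**2 / b**2
B = 2 * m * c / b**2
C = c**2 / b**2 - 1

disc = B * B - 4 * A * C
assert abs(disc) < 1e-12          # double root: the line touches the ellipse

x0 = -B / (2 * A)                 # the point of tangency
y0 = m * x0 + c
assert abs(x0**2 / a**2 + y0**2 / b**2 - 1) < 1e-12  # it lies on the ellipse
```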
http://physics.stackexchange.com/questions/2175/is-it-possible-for-information-to-be-transmitted-faster-than-light-by-using-a-ri/13299
# Is it possible for information to be transmitted faster than light by using a rigid pole?

Is it possible for information (like 1s and 0s) to be transmitted faster than light? For instance, take a rigid pole of several AU in length. Now say you have a person on each end, and one of them starts pulling and pushing on his/her end. The person on the opposite end should receive the pushes and pulls instantaneously, as no particle is making the full journey. Would this actually work?

9 No. The pole will never "instantaneously" move at both ends. There will always be a delay. – muntoo Dec 23 '10 at 4:35

1 This is potentially a great (and frequent) question, could you edit the contents to improve it a little? Suggestion: Change your second and third sentences to something like my comment below this one, insert a couple line breaks before the fourth sentence, and erase the last sentence... – Bruce Connor Dec 27 '10 at 6:50

"For instance, take a rigid pole of several AU (astronomical units) in length. Now say you have a person on each end, and one of them starts pulling and pushing on his/her end. The person on the opposite end should receive the pushes and pulls instantaneously as no particle is making the full journey." – Bruce Connor Dec 27 '10 at 6:55

Ooops, did not find that. Good catch. – Randalfien Aug 8 '11 at 23:39

1 No, information can't move faster than light through space. However, space can be bent. There's the concept of wormholes, and information might take a short-cut. – Mike Dunlavey Nov 30 '11 at 19:01

## 12 Answers

The answer is no. The pole would bend/wobble and the effect at the other end would still be delayed. The reason is that the force which binds the atoms of the pole together - the Electro-Magnetic force - needs to be transmitted from one end of the pole to the other.
The transmitter of the EM-force is light, and thus the signal cannot travel faster than the speed of light; instead the pole will bend, because the close end will have moved, and the far end will not yet have received intelligence of the move.

EDIT: A simpler reason. In order to move the whole pole, you need to move every atom of the pole. You might like to think of atoms as next door neighbours. If one of them decides to move, he sends out a messenger to all his closest neighbours telling them he is moving. Then they all decide to move as well, so they each send out messengers to their closest neighbours to let them know they are moving; and so it continues, until the message to move has travelled all the way to the end. No atom will move until he has received the message to do so, and the message won't travel any faster than all the messengers can run; and the messengers can't run faster than the speed of light. /B2S

3 unfortunately still not. Pulling and pushing is the same thing, the only difference is the direction. – Born2Smile Dec 23 '10 at 2:10

35 The message will travel at the speed of sound in the pole - much slower than light! – Mark Eichenlaub Dec 23 '10 at 3:50

4 I agree with Mark, this part of the answer is wrong. Pulling and pushing has to do with the crystal structure of the material and its deformation. The messengers that visit "neighbors" are phonons here, not photons. But other than that, the answer is pretty nice. – Marek Dec 23 '10 at 9:55

2 Interestingly enough the classical rigidity of a solid bar was used in Einstein's first proof for $E=mc^2$. Therefore this first proof was actually invalid from a strict point of view. – Gerard Feb 7 '11 at 15:31

7 @Gerard: This is not interesting, it is wrong. Why make a comment on a paper you never read? There are two equally ignorant people who agree with you, so it is best to delete the comment.
– Ron Maimon Jan 3 '12 at 13:03

The information about the pushes will be received on the other end with the speed of sound in the substance of the pole. For any real material it is much slower than the speed of light (for a steel rod it would be about 5000 m/s).

5 This answer is short and precise, whereas the others are either wrong or inconcise. +1 – Georg Feb 7 '11 at 11:02

1 @Georg: It is not an answer to the real question about information transport. It is more like a statement (although true) about the rod example. There is no explanation of why. – Hans-Peter E. Kristiansen Dec 27 '11 at 1:58

The signal will propagate at the speed of sound in steel. I happen to know the speed of sound in aluminum, because my students measure it in lab; it's about 5000 m/s. This is many orders of magnitude less than the speed of light.

No. In relativity you cannot consider extended objects to be infinitely "stiff" - they must bend and stretch, as real objects do. When you move one end of the steel rod, it makes part of it bend and stretch, which exerts a force on the next section, which makes that move and which makes a new part bend and stretch, and so on and so on until you reach Alpha Centauri. This moves along at some speed which is characteristic of the metal, which is fast enough that we don't really notice in day-to-day life. All relativity tells us is that that characteristic speed is less than the speed of light - it turns out for real metal it's much less than the speed of light.

Essentially the problem with this idea is that there is no such thing as a perfectly rigid body. So as you push, it sends a little compression wave through the material, which travels at the speed of sound in the material, as sound is just a type of propagating compression.

1 What about an electron, which as far as we know has no smaller components? If you could push one side of a single electron, the opposite side would move at exactly the same time? – Jonathan. May 29 '11 at 21:12

@Jonathan: Since an electron is pointlike, and has no spatial extension, it does not have 2 sides. It is NOT a small hard sphere. – Frédéric Grosshans Feb 17 '12 at 13:48

Is it possible for information (like 1s and 0s) to be transmitted in any way faster than light? No. Born2Smile said the same thing (which I +1'd) but I figured it's worth repeating for emphasis. It'd be a violation of causality. For some more details on why this is not allowed, in addition to Born2Smile's answer, see What are some scenarios where FTL information transfer would violate causality?

1 Hm, if you are taking it from a purely theoretical perspective (which I don't think the question is asking for, but okay), you don't even need causality. FTL transmission of information already violates SR because it can only be done by means of exchanging elementary particles. All of the (known) particles are either massless or massive with a future-oriented time component of the velocity. So it's not possible to make any single particle move faster than the speed of light. – Marek Dec 23 '10 at 10:00

@Marek: true, but I referenced causality to provide a reason that doesn't have anything to do with SR. – David Zaslavsky♦ Dec 23 '10 at 21:20

@David: ah, so it should have been an independent argument; sorry for not realizing that. – Marek Dec 23 '10 at 21:22

@Marek: can certain massless particles interact between time so-to-speak? Or independent of time anyway? – sova Jan 6 '11 at 6:28

@sova: I am not sure what you mean. But at least in the context of quantum field theory, all the interactions are purely local (both in space and time), so there the answer would be no. – Marek Jan 6 '11 at 10:30

A simple explanation why the speed of sound can never be faster than the speed of light: Consider two atoms $A$ and $B$. Give the nucleus of $A$ a slight push. As we know, this push will carry over to $B$, but why? It's due to their electrostatic repulsion.
So for $B$ to even react, you first need at least an electromagnetic wave/photon to travel from $A$ to $B$. This can of course not get there faster than the speed of light. The nucleus of $A$ itself can obviously not be faster, either, so even with brute force it's not possible to get a sonic speed $\,>\!\!c$.

Relativity says that different inertial reference frames will have different time measurements, but causality is respected in all reference frames. That is, unrelated events A and B may appear to some observers to happen simultaneously; others may see A before B, or B before A. But if A causes B, A will be seen to precede B by all observers (though different observers may disagree on the amount of time between A and B). If any information traveled faster than the speed of light, there would be an inertial reference frame from which it would appear that the signal got to its destination before it left its source. So far, there is no evidence that the universe is non-causal. (Another reason to strongly doubt faster-than-light neutrino speeds.)

My answer is YES. Consider the EPR paradox experiment (pi0 --> gamma gamma decay). Measuring the polarization of one of the emitted photons (event A) instantaneously tells you that the polarization of the other photon is opposite to the one measured (event B). This information is obtained instantaneously, i.e. at infinite velocity. To summarize, entangled states allow for information to travel faster than light, virtually at infinite velocity. The paradox is solved by saying that, in order for the information to reach us, it is necessary to know that the measured gamma belongs to a specific entangled state (decay of the pi0). And this information comes from an event O in the past. The two photons at the time of measurement are inside the light cone of event O. There exists a world line that connects events A and B inside the light cone of O.
The fact that the world line is traveled backwards in time is irrelevant, since QFT is invariant under time reversal. This argument leads to an extended concept of causality. One consequence is that it is not possible to send information between two events outside the light cone of O.

1 This is a point of view. There is another point of view that you got no information at all about the distant photon, only about the past light cone of the current photon. Anyway, this is irrelevant to the question, which is about transmitting information, which doesn't happen here by your own last sentence. – Ron Maimon Jan 3 '12 at 13:08

Suppose I have a particle and an anti-particle pair near the event horizon of a black hole. They are so placed that the anti-particle moves into the black hole and the other particle moves away. Suppose we can track down this particle and know its properties. Then we can say that the other anti-particle also has the same properties. So we can know the information of the anti-particle even though we have no track of it (it went into the black hole). So information must travel faster than light.

What you describe is not a flow of information. Look e.g. at this lazy car analogy: If you start two cars of different colors at a point in opposite directions and after a while look at one car, you immediately know the color of the other. This is what you describe. If information was traveling, it would mean that you could somehow change the color of one car by manipulating the other. – user9886 Jul 31 '12 at 12:52

I guess you mean the light in vacuum, because it is quite easy to transmit information faster than the speed of light in a medium. For the fast thinkers: imagine a lighthouse; what is the speed of the light spot created by the beam on a hypothetical wall located at a distance R? Classical kinematics gives v = R*omega, so for large enough R, the spot travels at a superluminal speed.
1 While "it is quite easy to transmit information faster than the speed of light in a medium" (you use nearly luminal particles, look up Cerenkov radiation) is true, it is uninteresting, and the rest of the post is misinformed. No thing (and no information) moves faster than the speed of light as the spotlight sweeps. – dmckee♦ Jul 30 '12 at 22:36

Cerenkov radiation uninteresting: funny you. For the rest of the post, it is a classical paradox I was giving to readers with physics insight to solve. Obviously I have overestimated some of them. I was expecting the math to show it is impossible/possible. If you had used your time thinking about the problem, you would have realized special relativity forbids speeds greater than c for particles, not for spots... – Shaktyai Jul 31 '12 at 6:25

You have misunderstood both parts of that comment. Exceeding the speed of light in a medium is uninteresting because "the speed of light in a medium" is not a fundamental quantity. Nor can a person at point $A$ signal to a person at point $B$ with a superluminal spot, because the decision to move the spotlight has to be made at $C$ and has to occur before the person at $A$ decides what to send. Any signalling goes from $C$ to $B$, and it moves no faster than $c$. Track the light to prove it (photons or fields). – dmckee♦ Jul 31 '12 at 13:28

I have perfectly understood your comments and I have never written about a message between A and B. I was just giving the readers of stackexchange an interesting problem to think about. I have obviously failed, since as far as I can judge by the grade you give me, I have only managed to have you showing us your skills at counting with negative numbers. Keep counting, 7 more fingers to go. By the way, you should try to read this book: Relativity in Rotating Frames: Relativistic Physics in Rotating Reference Frames (ML Ruggiero). As usual with paradoxes, the answer needs a fine analysis. – Shaktyai Jul 31 '12 at 16:31

There is no paradox.
Everything that happens in this situation happens at or below the speed of light. There are some interesting effects in terms of the time that various points are illuminated, but they are not paradoxical unless you insist on thinking of the "spot" as a thing in its own right (which it isn't). – dmckee♦ Jul 31 '12 at 16:37

Quantum tunneling has been shown to be superluminal. The problem with it is that it's not reliable. In order to put in the redundancy needed to make it reliable, you're effectively communicating subluminally.

3 -1 this is completely wrong – user2963 Jul 30 '12 at 22:07

1 There is some kernel of truth to this answer. But as it is formulated here it is very superficial and highly misleading. – user9886 Jul 31 '12 at 12:45

## protected by Qmechanic♦ Jan 5 at 0:52

This question is protected to prevent "thanks!", "me too!", or spam answers by new users. To answer it, you must have earned at least 10 reputation on this site.
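The numbers quoted in these answers are easy to reproduce. The thin-rod longitudinal sound speed is $v=\sqrt{E/\rho}$; the sketch below uses typical textbook values for steel (my choice of constants, not from the thread) and compares the signal delay over a 1 AU pole with a light signal:

```python
import math

E = 200e9              # Young's modulus of steel, Pa (typical textbook value)
rho = 7850.0           # density of steel, kg/m^3
c = 299_792_458.0      # speed of light, m/s
AU = 1.495978707e11    # one astronomical unit, m

v = math.sqrt(E / rho)         # longitudinal wave speed in a thin steel rod
assert 4500 < v < 5500         # about 5 km/s, nowhere near c

# A "push" on a 1 AU steel pole arrives after roughly a year, not instantly.
t_rod, t_light = AU / v, AU / c
assert t_rod / t_light > 50_000   # the mechanical signal is ~60,000x slower
assert t_rod > 300 * 86400        # more than 300 days of travel time
```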
http://mathoverflow.net/questions/73785?sort=newest
Drawing natural numbers without replacement.

Suppose we start with an initial probability distribution on $\mathbb{N}$ that gives positive probability to each $n$. Let's call this random variable $X_1$, so we have $P(X_1=n)=p_{1,n}>0$ for all $n\in\mathbb{N}$. $X_1$ will be the first draw from $\mathbb{N}$. For the next draw $X_2$ we define a new distribution on $\mathbb{N}\setminus\{ X_1 \}$ by rescaling the remaining probabilities so they add up to 1. So $p_{2,X_1}=0$ and $p_{2,n}=\frac{p_{1,n}}{1-p_{1,X_1}}$ for $n\neq X_1$. Continuing in this manner we get a stochastic process (certainly not Markov) that corresponds to drawing from $\mathbb{N}$ without replacement.

My question is whether this process has ever been studied in the literature. In particular, I'm wondering if a clever choice of the initial distribution could result in tractable expressions for the distributions of $X_n$ for large $n$.

- 1 I have not read Feller's classic text on probability. The reputation of the text, however, I have heard. Thus I wager a small amount of money that this subject is discussed, perhaps in a different form, in that text. Gerhard "Say A Nickel, Any Takers?" Paseman, 2011.08.26 – Gerhard Paseman Aug 26 2011 at 17:36

1 Do you want the (unconditional) distribution of $X_n$? That is the same as the distribution of $X_1$. – Robert Israel Aug 26 2011 at 20:33

3 @Robert, X_1 and X_2 are never equidistributed. – Didier Piau Aug 26 2011 at 21:17

5 This model of drawing without replacement is used by professional poker players to estimate the probability of finishing in $n$th place in a tournament given a particular distribution of chips. It is called the Independent Chip Model or ICM. I proved that in heads-up pots in tournaments with nondecreasing prizes, the ICM always recommends a nonnegative amount of risk-aversion.
For example, according to the ICM it is not worth it to spend some chips on average to try to knock someone out. – Douglas Zare Aug 26 2011 at 21:39

4 @Michael Hardy: If $Pr(X_1=1)=0.9$, then $Pr(X_2=1)$ can't be $0.9$ since the events are disjoint. – Douglas Zare Aug 26 2011 at 21:56

## 2 Answers

Here are some preliminary computations. Assume the reference distribution is $(p(n))$. For every finite subset $I$ of $\mathbb N$, introduce the finite number $r(I)\ge1$ such that $$\frac1{r(I)}=1-\sum_{k\in I}p(k).$$ Obviously, $P(X_1=n)=p(n)$ for every $n$. Likewise, $P(X_2=n)=E(p(n)r(X_1);X_1\ne n)$, hence $$P(X_2=n)=p(n)(\alpha-p(n)r(n)),\qquad \alpha=\sum\limits_kp(k)r(k).$$ This shows that $X_1$ and $X_2$ are not equidistributed (if they were, $\alpha-p(n)r(n)$ would not depend on $n$, hence $p(n)$ would not either, but this is impossible since $(p(n))$ is a measure with finite mass on an infinite set). One can also compute the joint distribution of $(X_1,X_2)$ as $$P(X_1=n,X_2=k)=p(n)r(n)p(k)[k\ne n],$$ and this allows us to expand $$P(X_3=n)=E(p(n)r(X_1,X_2);X_1\ne n,X_2\ne n)$$ as the double sum $$P(X_3=n)=p(n)\sum_{k\ne n}\sum_{i\ne n}[k\ne i]r(k,i)p(k)r(k)p(i),$$ but no simpler or really illuminating expression seems to emerge.

Googling "sampling without replacement" produces more information than I could ever hope to summarize here (note that in sampling theory, the population is usually assumed to be so large as to be infinite, and the distribution is whatever you feel like, certainly not usually uniform).

- 1 Saying the population is infinite is often little more than a roundabout way of saying that the sampling is being done with replacement, i.e., i.i.d. As the population size grows, "without replacement" approaches "with replacement".
– Michael Hardy Aug 26 2011 at 20:49

For a finite population, the multivariate hypergeometric distribution is the general answer. en.wikipedia.org/wiki/Hypergeometric_distribution – Carlo Beenakker Aug 26 2011 at 21:12

@Carlo, for a finite population, the experiment collapses after a finite time and the distribution of X_n for large n (the object the OP is interested in) does not exist. – Didier Piau Aug 26 2011 at 23:48

@Didier --- you're absolutely right; I was thinking of a double limit in which both the system size and "time" n are sent to infinity. – Carlo Beenakker Aug 27 2011 at 7:04
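Didier's formula for $P(X_2=n)$ is easy to check by simulation, since conditioning the reference distribution on avoiding the values already drawn is exactly rejection sampling (redraw from $p$ until a fresh value appears). A minimal sketch, using a geometric reference distribution $p(n)=2^{-(n+1)}$; all helper names are my own, not from the thread:

```python
import random

def geometric(rng, q=0.5):
    # Sample from p(n) = (1 - q) * q**n, n = 0, 1, 2, ...
    n = 0
    while rng.random() < q:
        n += 1
    return n

def draw_without_replacement(n_draws, rng):
    """X_1, ..., X_n: each draw follows the reference distribution conditioned
    on avoiding earlier values, realized here by rejection sampling."""
    seen, out = set(), []
    for _ in range(n_draws):
        x = geometric(rng)
        while x in seen:
            x = geometric(rng)
        seen.add(x)
        out.append(x)
    return out

def p(n, q=0.5):
    return (1 - q) * q ** n

# Exact value from Didier's formula: P(X_2=n) = p(n) * (alpha - p(n) r(n)),
# with r(n) = 1/(1 - p(n)) and alpha = sum_k p(k) r(k) (truncated here).
alpha = sum(p(k) / (1 - p(k)) for k in range(200))
exact = p(0) * (alpha - p(0) / (1 - p(0)))

rng = random.Random(0)
trials = 200_000
estimate = sum(draw_without_replacement(2, rng)[1] == 0 for _ in range(trials)) / trials
print(exact, estimate)  # both close to 0.303
```

With this choice $P(X_1=0)=0.5$ while $P(X_2=0)\approx 0.303$, illustrating Didier Piau's comment that $X_1$ and $X_2$ are never equidistributed.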
http://mathhelpforum.com/calculus/189517-finding-region-positive-negative-regions.html
# Thread:

1. ## Finding region positive/negative regions.

Hi there. I am given these functions and I want to find out in which areas they are positive or negative.

x^y - 1

|x+y| - |x-y|

So the way I tried the first one is I set the function x^y - 1 = c and take c = 0; simplifying, I get y log(x) = 0. What do I do now?

For the second one I did the same and ended up with |x+y| = |x-y|.

2. ## Re: Finding region positive/negative regions.

Originally Posted by Kuma

Hi there. I am given these functions and I want to find out in which areas they are positive or negative. x^y - 1, |x+y| - |x-y|. So the way I tried the first one is I set the function x^y - 1 = c and take c = 0; simplifying, I get y log(x) = 0. What do I do now? For the second one I did the same and ended up with |x+y| = |x-y|.

Let's analyze the function $f(x)= x^{y}-1$. First we have to exclude the half plane where $x<0$, because there f(x) is complex, i.e. neither positive nor negative. The 'border' which divides the region where f(x) is positive from the region where f(x) is negative is given by the equation...

$x^{y}= 1 \implies y\ \ln x=0$ (1)

... which is satisfied for $x=1$ or $y=0$, so that...

$f(x)>0\ \text{for}\ 0<x<1\ ,\ y<0\ \text{or}\ x>1\ ,\ y>0$

$f(x)<0\ \text{for}\ 0<x<1\ ,\ y>0\ \text{or}\ x>1\ ,\ y<0$

Kind regards

$\chi$ $\sigma$
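The sign chart for $f(x,y)=x^{y}-1$ is easy to spot-check numerically, and the same test settles the second function: since $|x+y|^2-|x-y|^2=4xy$ and $|x+y|+|x-y|\ge 0$, the sign of $|x+y|-|x-y|$ equals the sign of $xy$. A quick sketch (helper names my own):

```python
def sign_f(x, y):
    # sign of x**y - 1, for x > 0
    v = x ** y - 1
    return (v > 0) - (v < 0)

def sign_g(x, y):
    # sign of |x+y| - |x-y|; equals sign(x*y) because
    # (|x+y| - |x-y|) * (|x+y| + |x-y|) = 4*x*y
    v = abs(x + y) - abs(x - y)
    return (v > 0) - (v < 0)

# f > 0 on {0<x<1, y<0} and {x>1, y>0}; f < 0 on the other two regions
print(sign_f(0.5, -1.0), sign_f(2.0, 1.0))   # 1 1
print(sign_f(0.5, 1.0), sign_f(2.0, -1.0))   # -1 -1
# g > 0 exactly where x and y have the same sign
print(sign_g(1.0, 2.0), sign_g(-1.0, -2.0))  # 1 1
print(sign_g(1.0, -2.0), sign_g(-1.0, 2.0))  # -1 -1
```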
http://mathhelpforum.com/calculus/212103-asymptotic-analysis-wkb-approx.html
# Thread:

1. ## Asymptotic analysis of the WKB approximation

Hi, I have the WKB approximation

$u_{+} = \sqrt{1-\frac{bm}{f}}e^{i\int f_{k} dt} + \sqrt{1+\frac{bm}{f}}e^{-i\int f_{k} dt}$

to the differential equation

$\frac{d^{2}u_{+}}{dt^2} + [f_{k}^{2} + i(\frac{d(bm)}{dt})]u_{+} =0$

This equation can be written as

$\frac{d^{2}u_{+}}{dN^2} + [p^2 - i + M^2]u_{+} =0$

by using $p=\frac{k}{\sqrt{\frac{\partial x}{\partial t}Cb_{*}(t_{*1})}}$ and $M=\sqrt{\frac{\partial x}{\partial t}Cb_{*}(t_{*1})}(t-t_{*})^{2}$

How do I find the asymptotic solutions as $M\rightarrow \pm \infty$? Do I need to substitute in using

$f^{2}_{k}=k^2 + b^2(t_{*1})(\frac{\partial x}{\partial t}|_{*1})^2 (t-t_{*})^2$

and the above, and then take the limit in $M$? I am a bit confused, though, because the broken-down solutions

$\sqrt{1+\frac{bm}{f}}\rightarrow \sqrt{2}$

$\sqrt{1-\frac{bm}{f}}\rightarrow \frac{p}{-\sqrt{2}M}$

$e^{+i\int f_{k}dt}\rightarrow (\frac{p}{-2M})^{\frac{ip}{2}}e^{-\frac{iM^2}{2}}e^{\frac{-ip^2}{4}}$

contain $M$, but surely that is in the limit $M\rightarrow \pm \infty$, so why is it there?
http://math.stackexchange.com/questions/tagged/exponentiation+roots
Tagged Questions 1answer 194 views Why use radical notation instead of rational exponents? I'm helping my younger sister for her math class. She has recently been taught integer exponents, and has started studying radicals (mainly square roots). The next topic will be rational exponents, ... 3answers 324 views How do I find if $\frac{e^x}{x^3} = 2x + 1$ has an algebraic solution? Is there some way of solving $$\frac{e^x}{x^3} = 2x + 1$$ non-numerically? How would I go about proving if there exists a closed form solution? Similarly how would I go about proving if there exists ... 2answers 140 views How many positive roots does the equation $a^x=x^a$ have? Let $a\in (1,e)\cup(e,\infty).$ I'd like to show that the equation $a^x=x^a$ has exactly two positive solutions, and one is larger and one smaller than $e.$ Is it even possible to show? I think I've ... 3answers 474 views What we can say about $-\sqrt{2}^{-\sqrt{2}^{-\sqrt{2}^\ldots}}$? Problem: How we can strictly prove $-\sqrt{2}^{-\sqrt{2}^{-\sqrt{2}^\ldots}}$ can't be 2? Can $-\sqrt{2}^{-\sqrt{2}^{-\sqrt{2}^\ldots}}$ have the value expressed by complex numbers? (See below, in ... 4answers 2k views Are the solutions of $x^{x^{x^{x^{\cdot^{{\cdot}^{\cdot}}}}}}=2$ correct? Problem: Find $x$ in $$\large x^{x^{x^{x^{ \cdot^{{\cdot}^{\cdot}} }}}}=2$$ Trick: $x^{x^{x^{x^{\cdot^{{\cdot}^{\cdot}}}}}}=2$, so, $x^{(x^{x^{x^{\cdot^{{\cdot}^{\cdot}}}}})}=x^2=2$, and, ... 1answer 103 views How do I find p for equations of the form $\sum \limits_i \frac{a_i}{b_i^p} = 1$ The problem I'm facing is solving the following equation for $p$ given the constants $a_i$ and $b_i$: $$\sum_i \frac{a_i}{b_i^p} = 1$$ Is there a general technique that would allow me to find a ... 3answers 168 views How do we solve $a \le b^{r}-r$ for $r$? Given two values $a$ and $b$, how should one go about solving the following inequality for $r$: $$a \le b^r -r .$$ Applying $\log_b$ on both sides of the inequality doesn't help me much since that ...
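For the power-tower question above ($x^{x^{x^{\cdots}}}=2$), the standard caveat is that the trick $x^2=2$, i.e. $x=\sqrt2$, must be checked against convergence of the tower: the same trick applied to $\cdots=4$ gives the same $x$, yet the iteration $t_{n+1}=x^{t_n}$ converges to 2, never to 4. A quick numerical sketch:

```python
x = 2 ** 0.5  # sqrt(2), the candidate from x**2 = 2 (and also from x**4 = 4!)
t = x
for _ in range(200):
    t = x ** t  # iterate t -> x**t; the attracting fixed point is 2
print(t)  # ~2.0, not 4
```

The fixed point 2 is attracting because the derivative of $t\mapsto x^t$ there is $2\ln\sqrt2 = \ln 2 < 1$, while 4 is a repelling fixed point of the same map.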
http://mathoverflow.net/questions/7650/generalizations-of-planar-graphs/57849
## Generalizations of Planar Graphs

This is a follow-up to Harrison's question: why planar graphs are so exceptional. I would like to ask about (and collect answers to) various notions, in graph theory and beyond graph theory (topology; algebra), that generalize the notion of planar graphs, and how properties of planar graphs extend in these wider contexts.

- Multiple answers/posts are welcome. I hope we can have a useful source. – Gil Kalai Dec 3 2009 at 17:51

More answers, remarks, links are most welcome. In particular: links to connections between planar graphs and commutative algebra (and other algebra), some info on the simplicial complex of (edges of) planar graphs on n vertices, some interesting extensions of planar graphs related to the 4CT. – Gil Kalai Dec 19 2009 at 14:58

## 20 Answers

I guess one possible generalization could be: an $m$-dimensional stratified space (i.e. "manifold with singularities") which is embeddable in $2m$-dimensional Euclidean space. Every smooth manifold can be so embedded (by Whitney's theorem), but singularities may force the ambient dimension higher, as witnessed by the simple case $m=1$, in which "stratified space" is just a graph and the embeddability condition is just planarity. (This was just invented on the spot - I have no idea if this is actually an interesting definition...)

- Dear Alon, this is a very important and interesting generalization. – Gil Kalai Dec 3 2009 at 10:05

Already in the $m=2$ case this is a non-trivial question. There are some positive results -- such complexes $M$ embed if $H^2(M;\mathbb Z)$ is cyclic. M. Kranjc, "Embedding a 2-complex K in R^4 when H^2(K) is a cyclic group," Pac. J. Math. 150 (1991), 329-339. There are also some known obstructions to embedding: A. Shapiro, "Obstructions to the imbedding of a complex in Euclidean space, I. The first obstruction," Ann. of Math., 66 No.
2 (1957), 256--269. – Ryan Budney Dec 3 2009 at 15:35

I think that embeddability of 2-dimensional spaces into R^4 is the most difficult case. – Gil Kalai Dec 3 2009 at 19:20

(This is because of the wonderful Pandora's box opened by Mike Freedman :) ) – Gil Kalai Dec 4 2009 at 10:50

If you can't construct tame topological embeddings (that aren't smooth) then you stick to smooth embeddings and your life is easier. :) – Ryan Budney Dec 4 2009 at 16:20

There are many generalizations, but one of my favorites is "neighborhood systems": intersection graphs of systems of balls in a Euclidean space of bounded dimension, with the property that any point of the space is covered by a bounded number of balls. If the dimension is two and the number of balls covering any point is at most two, these are exactly the planar graphs (Koebe-Thurston-Andreev). They have at most a linear number of edges in any dimension, and, more importantly from the point of view of divide-and-conquer algorithms, they have separator theorems in any dimension (Shang-Hua Teng and others).

The plane can be generalized to surfaces of higher genus. Also, although any graph can be embedded in a higher dimensional space, perhaps a hypergraph could not. I think this could be generalized further to embedding hypergraphs in various higher dimensional manifolds.

- 1 Maybe "hypergraph" = "simplicial complex," here? I have to confess, I've spent a lot of time trying to figure out exactly what the deal is with graphs in R^4... – Harrison Brown Dec 3 2009 at 20:13

One generalization that Kristal points out is graphs that can be embedded in a given surface. This is perhaps the most studied generalization of planar graphs.
Going to higher dimensional objects was suggested by Alon (embeddability of k-dimensional spaces into R^2k). Kristal's suggestion to replace the ambient space by other spaces is certainly interesting. If I remember correctly, embeddability of k-manifolds in Eulerian 2k-manifolds (2k-manifolds with Euler characteristic 2) is equivalent to embeddability in R^{2k}. (OK, maybe the condition is vanishing middle homology and not being Eulerian.) – Gil Kalai Dec 4 2009 at 9:27

Some of the broadest generalizations generalize the family of planar graphs. Possibly the most important generalization along these lines is the notion of a minor-closed graph family, among which the planar graphs are apparently again quite exceptional in a way I don't pretend to understand.

- One way that planar graphs are exceptional among minor-closed families is that a minor-closed family includes all planar graphs iff its graphs do not have bounded treewidth. Relatedly, a minor-closed family includes all apex graphs (planar + 1 vertex) iff its graphs do not have bounded local treewidth. – David Eppstein Dec 3 2009 at 23:59

@David: Your first statement is what I was referring to; I just don't have any idea why this should be the case! – Harrison Brown Dec 4 2009 at 0:48

1 One way of thinking about the relation between including all planar graphs and not having bounded treewidth is this: unbounded treewidth means that the graph family contains large grids (or walls). A grid (or wall) can be converted into an arbitrary planar graph by deletion/contraction operations. So a family that excludes even one planar graph can't have a large grid, and therefore must have bounded treewidth. – Suresh Venkat Feb 23 2010 at 4:26

Planar graphs have a number of edges that is linear in the number of vertices. One can attempt to find "geometric conditions" on a graph so that it has a linear number of edges in terms of the number of its vertices.
One such result is: Quasi-planar graphs have a linear number of edges. -Joe Malkevitch

- Dear Joe, this refers to an important extension of planar graphs: graphs that can be drawn in the plane by Jordan curves so that there are no r edges that pairwise cross. For r=2 these are planar graphs, and for r=3 these are the quasi-planar graphs in the linked paper. It is expected but not known that for larger r there is a linear bound on the number of edges. – Gil Kalai Dec 4 2009 at 10:20

One interesting generalization is to "small classes of graphs." A class of graphs is small if the number of isomorphism classes of such graphs with n vertices is (only) exp(O(n)). Forests, planar graphs, minor-closed families (which avoid some graph), and graphs of simple d-polytopes for a fixed d are all known to be small. (I am not sure about the latter case if you also include all subgraphs of these graphs; also it is not known if the class of dual graphs of triangulated d-spheres is small, for d >= 3.) The class of cubic graphs is not small.

One way to prove that a hereditary class of graphs (namely a class of graphs closed under taking subgraphs) is small is via a separator theorem. Suppose that every graph in the class with n vertices can be separated by a set of g(n) vertices into connected components of size at most 2n/3. Then g(n) smaller than n/(log n)^2 is sufficient to guarantee that the class is small. (There are examples with g(n) = n/(log n)^0.99 where the class is not small, and it is an open problem to find the right exponent of log n that guarantees that the class is small.) For graphs with a forbidden minor it is known that g(n) = C sqrt(n) is enough. (I do not know if graphs of simple d-polytopes have a separator theorem.)

Ooh, this is another "off the top of my head" one, but another generalization of the family of planar graphs is matroid families closed under taking duals.

- That's a very nice idea!
– Gil Kalai Dec 4 2009 at 9:20

Since the class of matroids representable over any fixed F is also closed under duality, I'm not sure this is a meaningful generalization. For example, the class of matroids representable over the reals is somehow not very planar (or even graph-like). – Tony Huynh Feb 22 2010 at 23:05

@Tony: Actually, that's one of the reasons why I think this is interesting! The idea being that the class of planar graphs has some "nicer" algebraic properties than the class of all graphs... – Harrison Brown Mar 11 2010 at 8:13

Combining this with your other idea, one can consider minor-closed families of matroids. – Timothy Chow Mar 8 2011 at 17:06

Planar graphs can be characterized in terms of various minor-monotone graph invariants, such as $\mu(G)$ of Colin de Verdière or the recent $\sigma(G)$ of Van der Holst and Pendavingh. A graph $G$ is planar if and only if $\mu(G)\leq 3$ (equivalently, $\sigma(G)\leq 3$). You can relax this to $\leq 4$, which turns out to give the flat graphs $G$: those that are linklessly embeddable in 3-space. A linkless embedding can be found in polynomial time (Van der Holst); checking planarity is linear (Hopcroft and Tarjan). There are many connections to linear algebra, topology, and combinatorial geometry. Also, since $\mu(G)\leq 2$ if and only if $\sigma(G)\leq 2$ if and only if $G$ is outerplanar, outerplanarity can be considered to be a natural strengthening of planarity (which goes in the opposite direction from that asked by the question). Note: There is also a $\lambda(G)$ of Van der Holst, Laurent and Schrijver (paper) which does not characterize planarity. Instead, $\lambda(G)\leq 3$ iff $G$ does not have $K_5$ or a certain graph $V_8$ as a minor.

- Dear Konrad, this is indeed an important generalization of planar graphs. (I am not so sure lambda<=3 gives planarity rather than the very related "not having a K_5-minor".) – Gil Kalai Dec 7 2009 at 16:11

Dear Gil, Indeed! I'll correct my answer.
– Konrad Swanepoel Dec 8 2009 at 15:58

Just to restate: If $p$ is an integer graph parameter that is minor-monotone (i.e. $p(G) \le p(H)$ whenever $G$ is a minor of $H$) then by Robertson-Seymour for every $k$ the set of graphs $G$ with $p(G) \le k$ is characterized by a finite list of forbidden minors. If $p$ arises "naturally" in some way and it happens that for some $k$ the list of forbidden minors is exactly $K_5$ and $K_{3,3}$, then different values of $k$ provide an infinite family of "natural" generalizations (or specializations) of the class of planar graphs. The two parameters mentioned arguably qualify. – Tracy Hall Sep 4 2010 at 20:21

Another possibility is a family of concepts related to thickness: the minimum number of colors one needs to color the edges in a drawing of the graph in the plane such that edges of the same color do not cross. Planar graphs are graphs with thickness one, and the natural generalizations are to graphs of bounded thickness. For thickness, the drawing is allowed to have curved edges; if the edges are straight ("geometric thickness") one gets a somewhat more restricted class of graphs, and "book thickness" or "pagenumber" (the vertices are on a line and the edges are curves within a single halfplane bounded by that line) is more restrictive still.

Graphs which are stress-free for a generic embedding into space form an interesting class of graphs that includes all planar graphs. One thing to notice is that the edges of maximal planar graphs with n vertices do not form the set of bases of a matroid. (Still, maximal planar graphs have one pleasant property of matroids: they all have the same number of edges, 3n-6.) There are related matroids defined on edge sets of complete graphs on n vertices: the best known is the matroid described by generic rigidity of spatial embeddings.
Gluck proved that planar graphs are generically 3-rigid, and this result is based on the Dehn-Alexandrov theorem asserting that (embedded) graphs of simplicial 3-polytopes are infinitesimally rigid.

There are two interesting classes of directed planar graphs (where the undirected graph is 3-connected). One class consists of planar directed graphs with an acyclic unique-sink property: these are acyclic orientations such that every cycle corresponding to a 2-face has a unique sink. A more restricted class corresponds to orientations that arise from some linear functional on R^3 for some realization of the graph as the graph of a 3-polytope. These (by a theorem of Klee and Mihalisin) require the additional property that between the unique source and unique sink of the graph there are three vertex-disjoint directed paths.

- 1 st-planar graphs (directed acyclic plane graphs in which there is only one source and one sink, both on the outer face) are another class of directed planar graphs, important in graph drawing. – David Eppstein Dec 7 2009 at 7:16

Resembling the "quasiplanar graphs" that Joe Malkevitch mentioned, you have the class of graphs with crossing number (in the plane) at most k, for any $k \geq 0$. For $k = 0$ these are exactly the planar graphs. The crossing number gives an upper bound on the genus, although the bound isn't close to tight in general. By the crossing number inequality, sufficiently large graphs with crossing number at most k are always sparse (with "sufficiently large" depending on k, of course).

Here is another generalization of planar graphs. Start with a d-dimensional polytope P with n vertices. For every 2-dimensional face F, triangulate F by non-crossing diagonals. So if F has k sides you add (k-3) edges. It is known that the total number of edges you get (including the original edges of the polytope) is at least $dn - {{d+1} \choose {2}}$. A polytope is called "elementary" if equality holds.
We can consider the following classes of graphs:

1) E_d = graphs of elementary d-polytopes and all their subgraphs

2) F_d = graphs obtained from elementary d-polytopes by triangulating all 2-faces by non-crossing diagonals, and all their subgraphs.

For d=3 both classes are the class of planar graphs. Some properties of planar graphs are known or conjectured to extend.

1) (robustness; conjectured) We can start, instead of with polytopes, with arbitrary polyhedral (d-1)-dimensional pseudomanifolds. But it is conjectured that we will get precisely the same class of graphs.

2) (duality; known) If P is elementary so is its dual P*.

3) (coloring; conjectured) Graphs in E_d (and perhaps even in F_d) are (d+1)-colorable.

There is a nice generalization of all finite graphs via the notion of a graph-like space, introduced by Thomassen and Vella. A graph-like space is a compact metric space $G$ with a subset $V$ satisfying:

1. $V$ is totally disconnected,
2. $G-V$ consists of disjoint open sets of $G$,
3. each component of $G-V$ is homeomorphic to $\mathbb{R}$, and has exactly two limit points in $V$.

Notice that the definition is purely topological, so it makes sense to define a planar graph-like space as a graph-like space which is homeomorphic to a subset of the sphere. In this context, there is the following deep generalization of Kuratowski's theorem due to Thomassen.

Theorem. Let $G$ be a 2-connected, compact, and locally connected metric space. Then $G$ is homeomorphic to a subset of the sphere if and only if $G$ does not contain a subspace homeomorphic to $K_{3,3}$ or $K_5$.

Here, 2-connected means that $G$ is connected and $G-x$ is connected for all $x \in G$. The thumbtack space consists of a disk together with a closed interval glued to the centre of the disk at one endpoint. Notice that the thumbtack space is not planar, and yet does not contain a subspace homeomorphic to $K_{3,3}$ or $K_5$. Thus, 2-connectedness is a necessary hypothesis in the above theorem.
In addition to generalizing all finite graphs, graph-like spaces also generalize various compactifications of infinite graphs.

- 2 That's very interesting! I heard about real trees (R-trees), which play an important role, but not about these spaces. – Gil Kalai Feb 23 2010 at 21:08

I myself did not know about real trees, but after passing through a few children websites I found them. en.wikipedia.org/wiki/Real_tree Much thanks. The nice thing about graph-like spaces is that they really are 'graph-like'. For example, they satisfy a form of Menger's theorem. Also, MacLane's theorem and Whitney's theorem hold for the planar ones. – Tony Huynh Feb 24 2010 at 16:13

A group having a planar Cayley graph is sometimes called planar. Finite planar groups are well understood. The situation with infinite planar groups and their Cayley graphs is much more complicated, in particular if the number of ends is infinite.

Edit: A flavor of the infinite-ended case can be obtained from the following example: Take the truncated cube as a Cayley graph for the group $G$ generated by an element $a$ of order 3 and an involution $b$. If you amalgamate $G$ with itself over the cyclic subgroup generated by $a$, the resulting Cayley graph is planar, but it has infinitely many ends. David Eppstein gave examples of two groups having the truncated cube as their Cayley graph. Hence this construction may use either of them or their amalgamated product. The resulting infinite planar graph is a Cayley graph for three distinct groups.

- Tomaž, that's very tantalising! Who understands it? Can you provide links or references? – L Spice Mar 11 2010 at 2:10

Loren, a good starting point is the chapter "The genus of a group" by Tom Tucker that appeared in "Topics in Topological Graph Theory" (L.W. Beineke, R.J. Wilson, eds.), Encyclopedia of Mathematics and Its Applications 128, Cambridge Univ. Press, 2009. – Tomaž Pisanski Mar 11 2010 at 6:15

Tomaž, thanks! I'll have a look.
– L Spice Mar 13 2010 at 23:16

Jarik Nesetril and Patrice Ossona de Mendez developed the notion of "nowhere dense graphs", which extends the class of planar graphs. The class of nowhere dense graphs includes all graphs with a forbidden minor and all graphs of bounded degree, but not all $d$-degenerate graphs. They are important in graph theory and in logic.

I think it is worth noting that circle graphs can be seen as a generalization of planar graphs. Circle graphs are intersection graphs of chords in a chord diagram. In 1981, de Fraysseix showed that a bipartite graph is a circle graph if and only if it is a fundamental graph of a PLANAR graph. (The fundamental graph of a graph G with respect to its spanning tree T is the bipartite graph on E(G) such that an edge e in T and another edge f not in T are adjacent if and only if T-f+e is a tree.) There is even a Kuratowski-type theorem for characterizing circle graphs, which actually implies the Kuratowski theorem for planar graphs. http://dx.doi.org/10.1002/jgt.v61:1

- 1 +1. Welcome to MO Sang-il. – Tony Huynh Feb 15 2011 at 14:51

I am surprised that no one has yet mentioned apex planar graphs. These are graphs $G$ such that there exists a vertex $v \in V(G)$ so that $G - v$ is planar. Apex planar graphs form a minor-closed family. Indeed, more generally, if we start with any minor-closed family and 'apex' it, then we get another minor-closed family. Apex vertices are also one of the ingredients in Robertson and Seymour's Graph Minors Structure Theorem, which describes the class of graphs with no $K_n$-minor. Probably the most famous open problem along these lines is Jorgensen's Conjecture, which asserts that every 6-connected graph with no $K_6$-minor is apex planar.

- Dear Tony, thanks for the answer. Let me mention that the class E_d of graphs of elementary 4-polytopes and their subgraphs contains the class of apex graphs. – Gil Kalai Mar 8 2011 at 5:56

Dear Gil. Thanks for the info!
– Tony Huynh Mar 8 2011 at 13:08

Alon Amit has already mentioned above the generalization where you ask whether a $d$-dimensional simplicial complex can be embedded continuously into a $2d$-dimensional space. The case $d = 1$ gives planar graphs. Jiří Matoušek's *Using the Borsuk–Ulam Theorem*, however, notes that you get a different generalization if you ask for an embedding in which every simplex of the original complex is embedded linearly. (This is thus not a topological invariant of the simplicial complex.) This too is a true generalization of the class of planar graphs, for every planar graph can be drawn with straight edges (Fáry's theorem). -

A well ordering $\leq$ on a set $S$ is a WELL-QUASI-ORDERING if and only if for every sequence $x_i\in S$ there exist natural numbers $i < j$ with $x_i\leq x_j$. (See the wikipedia article at the bottom.) Robertson-Seymour Theorem: The set $S=\text{Graphs}/\text{isomorphism}$ is well-quasi-ordered under contraction. The corollary of this theorem is that any property $P$ of graphs which is closed under the relation of contraction (meaning if $P(G_2)$ and $G_1\leq G_2$ then $P(G_1)$) is characterized by a finite set of excluded minors (which is explained below). Examples of such a $P$ are planarity and linkless embeddability of a graph into $\mathbb{R}^3$; e.g., every contraction of a planar graph is planar. Suppose $P$ is a property closed under $\leq$. *** If $B\leq G$ and not $P(B)$, then not $P(G)$. The idea is to characterize $P$ by a collection of bad $B$'s. The finiteness of the set of excluded minors comes from the well-quasi-ordering and doesn't use the idea of a graph: assume we have a well-quasi-ordered set $(S,\leq)$. One can prove that every property $P$ which is closed under the relation is characterized by a finite set of excluded minors. That is, there exists some $X=\lbrace x_1,\ldots,x_n\rbrace \subset S$ such that for all $s\in S$, $$\mbox{not } P(s) \iff \exists i,\ x_i \leq s.$$
The existence of a finite set $X$ is implicit in 12.5 of Diestel (link at the bottom; see the corollary of the Graph Minor Theorem in Diestel). First convince yourself that there exists a set of $B$'s (not necessarily finite) as in *** that characterizes property $P$. Then consider the smallest such set of $B$'s and, using the property of well-quasi-ordering, show that it is finite. Note that as in the second wikipedia article, any set of elements $A\subset S$ such that for all $a,b\in A$ we have $a \nleq b$ must be a finite set (provided $\leq$ is a well-quasi-ordering). Actual work in a topological direction has been done by Eran Nevo http://www.math.cornell.edu/~eranevo/ I suspect that matroids have a well-quasi-ordering and that there is work being done toward proving an analogous theorem for them. I have a limit on links: en.wikipedia.org/wiki/Well-quasi-ordering en.wikipedia.org/wiki/Robertson-Seymour_theorem diestel-graph-theory.com/GrTh.html -

2 This was already mentioned by Harrison (see above). There are a couple of typos. In the first sentence, 'well-ordering' should be replaced by 'quasi-ordering.' With quasi-orderings it is unnecessary to mod out by isomorphism classes. You should also consider edge and vertex deletions when defining minors. For matroid minors, Geelen, Gerards and Whittle recently proved that binary matroids are well-quasi-ordered under taking minors. homepages.cwi.nl/~bgerards They hope to extend this to all finite fields soon. – Tony Huynh Feb 23 2010 at 15:55 Rad. Thanks for the corrections. Nice to hear about the matroids. Sorry about not reading the post above. :-) – Taylor Dupuy Feb 23 2010 at 22:05
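The excluded-minor mechanism above is easy to see in a toy well-quasi-order (an illustrative example of mine, not the graph-minor order itself): take divisibility on $\lbrace 1,\ldots,30\rbrace$ and the downward-closed property "square-free". The minimal bad elements then play exactly the role of the excluded minors:

```python
# Toy quasi-order: divisibility on {1,...,30}. "Square-free" is closed
# downward (every divisor of a square-free number is square-free), so it
# should be characterized by its minimal bad elements, the "excluded minors".

N = 30
leq = lambda a, b: b % a == 0            # a <= b  iff  a divides b

def square_free(n):
    return all(n % (d * d) for d in range(2, n))

bad = [n for n in range(1, N + 1) if not square_free(n)]
# minimal bad elements: no strictly smaller bad element lies below them
excluded = [b for b in bad
            if not any(leq(c, b) and c != b for c in bad)]
print(excluded)                          # [4, 9, 25]

# Membership test via excluded minors, as in the *** characterization above:
has_property = lambda n: not any(leq(x, n) for x in excluded)
assert all(has_property(n) == square_free(n) for n in range(1, N + 1))
```

Here the excluded set $\lbrace 4, 9, 25\rbrace$ (the squares of the primes up to $\sqrt{30}$) is finite for elementary reasons; the content of well-quasi-ordering is that the analogous minimal set is finite for graphs under minors too, which is the shape of the Kuratowski/Wagner-style theorems.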
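As a concrete addendum to the circle-graph answer above, the fundamental-graph construction can be sketched in a few lines (my own illustrative code, shown on $K_4$ with a star spanning tree): a tree edge $e$ and a non-tree edge $f$ are joined exactly when $e$ lies on the tree path between the endpoints of $f$, i.e. when $T - e + f$ is again a spanning tree.

```python
from collections import deque

def tree_path_edges(tree_adj, u, v):
    """Edges on the unique u-v path in the tree (via BFS parent pointers)."""
    parent = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        for y in tree_adj[x]:
            if y not in parent:
                parent[y] = x
                q.append(y)
    path = []
    while parent[v] is not None:
        path.append(frozenset((v, parent[v])))
        v = parent[v]
    return set(path)

def fundamental_graph(edges, tree_edges):
    """Bipartite adjacencies between tree edges and non-tree edges."""
    tree_adj = {}
    for e in tree_edges:
        a, b = tuple(e)
        tree_adj.setdefault(a, []).append(b)
        tree_adj.setdefault(b, []).append(a)
    non_tree = [f for f in edges if f not in tree_edges]
    return {(tuple(sorted(e)), tuple(sorted(f)))
            for f in non_tree
            for e in tree_path_edges(tree_adj, *tuple(f))}

# K4 with the star spanning tree centred at vertex 0:
edges = [frozenset(p) for p in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]]
tree = set(edges[:3])
print(sorted(fundamental_graph(edges, tree)))  # six adjacencies
```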
http://physics.stackexchange.com/questions/916/imagine-a-long-bar-floating-in-space-what-force-does-it-exert-on-itself-in-the/960
# Problem If you had a long bar floating in space, what would be the compressive force at the centre of the bar, due to the self-weight of both ends? Diagram - what is the force at point X in the middle of the bar?: ````<----------------------L--------------------->, total mass M =======================X====================== <- the bar F---> X <---F ```` # Summary You should be able to simplify by cutting the bar into pieces, but that gives a different answer depending on how many pieces you use (see below). So the simplification must be wrong - but why? # My approach ### Split bar in two So, one approximation would be to cut the bar in half - two pieces of length L/2, mass M/2: ```` (M/2)<-------L/2------->(M/2) #1 X #2 <- bar approximated as blobs #1 and #2 ```` Force at X is G(M1.M2)/(R^2) = G (M/2)^2 / (L/2)^2 = G M^2 / L^2 Or Fx / (G. M^2 / L^2) = 1 But is that really valid? If so, shouldn't you get the same answer if you split the bar into four pieces? ### Split bar into four ```` (M/4)<-L/4->(M/4)<-L/4->(M/4)<-L/4->(M/4) #1 #2 X #3 #4 ```` My assumption is that the force at X is the sum of the attractions of each blob on the left to every blob on the right. Force at X = #1<>#3 + #1<>#4 + #2<>#3 + #2<>#4 ('<>' being force between blobs #x and #y). Fx / (G.M^2 / L^2) = (2/4)^-2 + (3/4)^-2 + (1/4)^-2 + (2/4)^-2 = 1.61 This is bigger than the previous result (1.61 vs 1). ### Split bar into six Similarly, if you split into 6 blobs, the total force comes out as: Fx / (G.M^2 / L^2) = (3/6)^-2+(4/6)^-2+(5/6)^-2 + (2/6)^-2+(3/6)^-2+(4/6)^-2 + (1/6)^-2+(2/6)^-2+(3/6)^-2 Fx / (G.M^2 / L^2) = 2.00 ### So what's wrong with my approach? And what is the real answer? So it seems the more pieces we split the bar into, the larger the result gets. There's clearly something wrong with my assumptions! - but what? I'd be very glad if someone here could explain this. Thanks! 
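The sums above are quick to script (a small sketch, in units of $G M^2/L^2$): a pair of blobs at index distance $j-i$ contributes $(M/n)^2 \big/ \big((j-i)L/n\big)^2$, i.e. $1/(j-i)^2$ in these units, so the whole computation is a double sum.

```python
# Point-blob approximation from above: split the bar into n equal blobs
# (n even) and sum the pairwise attractions between left and right halves.
# Working in units of G*M^2/L^2, each left-right pair at index distance
# (j - i) contributes 1/(j - i)**2.

def force_at_centre(n):
    half = n // 2
    return sum(1.0 / (j - i) ** 2
               for i in range(half)       # blobs left of X
               for j in range(half, n))   # blobs right of X

for n in (2, 4, 6, 100, 1000):
    print(n, round(force_at_centre(n), 2))
# first three values: 1.0, 1.61, 2.0 -- and it keeps growing with n
```

The first three values reproduce the 1, 1.61 and 2.00 computed above, and the value keeps growing as the bar is cut into more pieces.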
EDIT As Peter Shor pointed out, my calculations had some dodgy algebra and I'd calculated $L^2/M^2$ values rather than $M^2/L^2$. I've now corrected that - the value still increases as you divide into more masses. I'll do a bit more work with more divisions and see if this leads to convergence or not. -

2 just surround your equations with two \$ ;) – Pratik Deoghare Nov 16 '10 at 14:55

1 When you separate the two pieces in the first example, each individual piece experiences the gravitational attraction from the other piece, but also a repulsion force originating from the material itself. If that wasn't the case, the material would collapse on itself. You forget to include that force in every one of your calculations, it seems to me. The total force in X is zero. – Raskolnikov Nov 16 '10 at 15:12

Clarification: as TheMachineCharmer correctly points out, the net force in the middle of the bar is zero. But what I'm trying to find is the pressure exerted on you at the middle of the bar, due to the weight of both halves of the bar. Or to ask another way, if you cut the bar in half and put a spring between the two halves, how much would the spring contract? – Sam Davies Nov 16 '10 at 15:16

Your mistake is in assuming that the center of gravity of the body will be the center of mass. That is not true. – Raskolnikov Nov 16 '10 at 15:36

1 @David Center of mass and center of gravity are the same in a uniform gravitational field. Center of mass is a point such that if you apply a force there, a rigid body will not rotate. Center of gravity is a point such that the torque about that point due to gravity is zero. If the gravitational field is not uniform, these could be different. In that case, there would be net torque and the rigid body would start rotating.
– Mark Eichenlaub Nov 16 '10 at 21:09

## 3 Answers

The reason you encountered higher and higher pressure at the center of the rod as you cut it into more pieces is that you were essentially approximating an integral, but the integral diverges ("is infinity" colloquially). When the rod has zero thickness, but still has mass, the density of the matter is infinite, and this leads to infinitely strong gravitational forces.

To answer this question, we'll imagine the cylinder has some small, finite radius $R$. We want to find the force between the two halves of the cylinder. We'll let one half just sit stationary in space. It will create a gravitational potential. Then we'll grab the other half and pull it away to some distance $d$. The gravitational potential energy is a function of $d$. The force between the two halves of the cylinder is the derivative of the gravitational potential energy with respect to $d$ when $d=0$.

The problem described above is too hard. It is quite difficult to calculate the gravitational potential of a cylinder at an arbitrary point. The gravitational potential of a point mass is just $-Gm/r$, but for a cylinder that extends out in three dimensions, we need to replace $m$ with the density $\rho$ and then integrate over the mass of the entire cylinder. The expression for $r$, the distance from an arbitrary point outside the cylinder to a point inside it, is not very tractable. However, at a point on the axis of the cylinder, the gravitational potential is more accessible due to the extra symmetry.
If we set up cylindrical coordinates with the axis of the cylinder along the z-axis, and then integrate over the bottom half of the cylinder, we get $V(z) = \int_{z'=0}^{-L/2}\int_{r=0}^{R}\int_{\theta=0}^{2\pi} \frac{G\rho}{\sqrt{(z-z')^2+r^2}} r\textrm{d}\theta\textrm{d}r\textrm{d}z'$ and doing the integral over $\theta$, it becomes $V(z) = 2\pi G\rho\int_{z'=0}^{-L/2}\int_{r=0}^{R} \frac{1}{\sqrt{(z-z')^2+r^2}} r\textrm{d}r\textrm{d}z'$.

This allows us to make an approximation. Although the half of the cylinder we use to calculate the potential must have finite width, we can calculate the potential energy by assuming that the other half of the cylinder is located perfectly along the axis. As long as the radius of the cylinder is very small compared to the length, this is a valid approximation. So the potential energy comes from integrating the previous expression for $V$ along the $z$-axis for the length of the cylinder.

We don't actually want the potential energy, but the derivative of the potential energy. So we imagine moving the top half of the cylinder up a little bit $dz$, and ask how the potential energy changes. Moving the entire top half of the cylinder up by $dz$ is equivalent to taking a piece of thickness $dz$ and slicing it off the bottom and moving it to the top. So we really just need to find the difference in the potential between the top and bottom of the top half of the cylinder and multiply by the mass-per-unit-length of the cylinder. The force between the two halves of the cylinder is $\frac{M}{L}[V(L/2) - V(0)]$.

That still leaves two integrals to evaluate. $V(L/2)$ is easy, because it's far away from the half of the cylinder providing the gravitational potential (compared to $R$). That lets us approximate $V(L/2) = \frac{-GM}{L} \int_{-L/2}^0 \frac{1}{L/2-x}dx = -\frac{GM}{L}\ln 2$. The integral for $V(0)$ is trickier, so I put it in Mathematica and got $V(0) = -\frac{GM}{L}\textrm{arcsinh}\left(\frac{L}{2R}\right)$.
In the regime we are interested in ($R$ small compared to $L$), $\sinh(x)$ is just $e^x/2$, so this simplifies to $V(0) = -\frac{GM}{L} \ln\left(\frac{L}{R}\right)$. This gives a final answer for the force: $F = \frac{M^2G}{L^2}\ln\left(\frac{L}{2R}\right)$ -

That's a great answer, thanks. I'm still surprised to see R in the final answer but glad to see it matches what you'd intuitively expect - i.e. for R << L small changes in R make little difference to the final answer (ln(x) is fairly flat for large x). – Sam Davies Nov 17 '10 at 10:09

[cont] My previous approach using integrals assumed R << L, hence r << z, so your expression for V(z) simplifies to mG int{ (1/(z-z'))dz} (where m = M/L = pi.R^2.rho = mass/unit length). This fails though because you end up with V(0) = [-ln(z')]{0,L/2} = ln(L/2) + infinity. Still not entirely sure why this doesn't work - I suspect because r << z is not true when z -> 0. – Sam Davies Nov 17 '10 at 10:19

I think that's basically the source of the problem, yeah. Glad you got something from the answer. – Mark Eichenlaub Nov 17 '10 at 10:34

As other people answered, the force is a vector quantity, so the net force at the center is indeed zero. But it seems what you have in mind is actually the pressure (stress) at that particular point. First, regarding your question: what you are computing is $$\sum_{\substack{i\in\text{first part}\\j\in\text{second part}}} \mathbf F_{ij}$$ the total force acting on the particles of the first part due to the second part. This is not equivalent to "force at X". But anyway, why do you get infinity? Because this really is the answer. If we compute the gravitational field due to the 2nd part ($\lambda=M/L$): $$g(h) = G\lambda \int_0^{L/2} \frac{1}{(z+h)^2}dz = \frac{G\lambda L}{h(2h+L)},$$ we find that the gravitational force scales as $\frac1h$. Now suppose you split the first part into $N$ blobs.
The position of the $i$-th blob closest to the center will be $h_i=\frac{Li}{2N}$, its mass will be $\frac{\lambda L}{2N}$, so the force contribution of each blob will be $F_i = m_ig(h_i) \sim \frac{1/N}{i/N} = \frac1i$, and the total force is $\sim\sum_{i=1}^N\frac1i\sim\ln N$, which diverges logarithmically to $\infty$.

This sounds unphysical, but that is because the setting itself is unphysical: you can't have a bar with no "width" but finite mass; this makes the density infinite, and this causes the infinite force as the two parts are right next to each other. So let's make the setting physical by giving the bar some size. Assume the bar is a cylinder of radius $\epsilon \to 0$. Also assume the bar's density is $\rho = M/(\pi\epsilon^2L)$. The gravitational field along the $z$ direction is then $$g(h) = G\rho \int_0^{L/2} \int_0^{2\pi} \int_0^{\epsilon} \frac{h+z}{((h+z)^2+r^2)^{3/2}} rdrd\phi dz = G\rho\pi \left(2 \sqrt{h^2+\epsilon ^2}-\sqrt{(2 h+L)^2+4 \epsilon ^2}+L\right),$$ and the force is now bounded above by a constant $G\rho\pi(2\epsilon+L-\sqrt{4\epsilon^2+L^2})$ for varying $h$, and the total force by partitioning the first part would be $\sim \sum_{i=1}^N\frac{\text{constant}}N = \text{constant}$ as you would expect. (Nevertheless, this constant will diverge with $1/\epsilon$ as $\epsilon\to0$.)

But the result above has nothing to do with the pressure. At precisely the center, there is no force, and the magnitude of the force scales as $r$ in its immediate neighborhood. Let's mark a small square membrane of size $a\times a\times dz$ with negligible mass $a^2\rho dz$ at position $z\to0$ above the center of the bar to record the pressure. The membrane will experience a net push from above. Since the membrane is small, we can assume it is a uniform force.
The total force that pushes the membrane downwards is $$\begin{aligned} F &= a^2\rho dz g_{\text{total}}(z) \\ &= a^2\rho dz\cdot G\rho\pi \left(\sqrt{(L+2z)^2+4\epsilon^2}-\sqrt{(L-2z)^2+4\epsilon^2}\right) \\ &\sim \frac{4a^2z\rho dz\cdot G\rho L\pi }{\sqrt{L^2+4\epsilon^2}} \end{aligned}$$ and thus the pressure is $$dP = -\frac F{a^2} = -\frac{4zG\rho^2 L\pi dz }{\sqrt{L^2+4\epsilon^2}} \sim -4z\,dz \frac{GM^2}{L^2\epsilon^4}$$ $$\implies P \sim -z^2 \frac{GM^2}{2L^2\epsilon^4}$$ note that the $z$ is still here. We see that the pressure is finite and maximal when we measure it precisely at the center (and decreases away from it). But this will diverge if we let $\epsilon\to0$. -

beautiful...!!! – Pratik Deoghare Nov 16 '10 at 17:03

Thanks. Clearly the pressure does tend to infinity as the area on which it acts tends to zero. But what I have in mind is a "long, thin" bar - ie where the thickness of the bar is small compared to the length. What happens to your formula if you ignore the bar thickness (assume e << L), and try to calculate instead (stress * area) at the centre? – Sam Davies Nov 16 '10 at 17:06

@Kenny I think there's a mistake. Should depend on $M^2$, not $M$. I think you have calculated the force to mass ratio of a test particle at one end of the bar, not the force between the bars. – Mark Eichenlaub Nov 16 '10 at 17:47

@Mark: Actually the mistake is in the wording. I do mean the gravity force and pressure per unit mass, so it does depend only on M. – KennyTM Nov 16 '10 at 17:49

@Kenny I was editing my comment while you commented. Anyway, what is pressure per unit mass? See earlier comment. – Mark Eichenlaub Nov 16 '10 at 17:52

For one thing, you've made a careless algebra mistake. You're computing $(L/M)^2$ instead of $(M/L)^2$. -

You're right - thanks. I've corrected this in the original question now.
– Sam Davies Nov 19 '10 at 18:06

1 With the fix, it still won't converge, but it diverges only logarithmically, so it will diverge a lot slower now (and in fact, it will pretty much look like it's converging if you go up to a moderately large number of pieces). I think Mark Eichenlaub's answer is dead on. – Peter Shor Nov 20 '10 at 2:12
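The logarithmic behaviour in these answers can be cross-checked numerically (a sketch of my own combining the two answers' approximations: each half of the bar is treated as a line of mass sitting in the $\epsilon$-regularised on-axis field $g(h)$ derived above; units $G = M = L = 1$):

```python
import math

# Numerical check of the log divergence. g(h) below is the regularised
# on-axis field of one half of a cylinder of radius eps (from the second
# answer); the other half is treated as a line with mass per length M/L.

def half_bar_force(eps, n=200_000):
    L = 1.0
    lam = 1.0                           # mass per unit length, M/L
    rho_pi = 1.0 / (eps * eps * L)      # rho*pi = M/(eps^2*L), with G = 1
    dh = (L / 2) / n
    total = 0.0
    for i in range(n):
        h = (i + 0.5) * dh              # midpoint rule over the top half
        g = rho_pi * (2 * math.sqrt(h * h + eps * eps)
                      - math.sqrt((2 * h + L) ** 2 + 4 * eps * eps) + L)
        total += lam * g * dh
    return total

# The force tracks ln(L/(2*eps)); a tenfold smaller radius adds about ln(10):
print(half_bar_force(1e-4) - half_bar_force(1e-3))  # about 2.3
```

In these units the result stays close to $\ln(L/2\epsilon)$, consistent with the $F = \frac{M^2G}{L^2}\ln\left(\frac{L}{2R}\right)$ of the first answer and with Peter Shor's remark that the divergence is only logarithmic.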
http://crypto.stackexchange.com/questions/1247/101-advanced-access-content-system-aacs-and-subset-difference-techniques-for/1256
# 101: Advanced Access Content System (AACS) and Subset Difference techniques for Broadcast Encryption

I'm trying to get a grasp on AACS and Subset Difference for a project I'm working on and am having a hard time coming up with a technically valid layman explanation, let alone implementation. Is it fair to say that the content is encrypted once per public key (device key), resulting in several copies of the movie on the same DVD?... given that a DVD player can only read one (singular) copy of each of those encrypted streams? What is a better explanation? -

## 2 Answers

There is no "public" key in AACS (at least, not in the part which uses SDR). Each movie is symmetrically encrypted with a movie-specific key, and stored as-is on the disc. All copies of a given movie use the same key; indeed, all copies of a disc are bit-to-bit identical (it would not be economically sound to do otherwise). Two distinct movies, however, use distinct keys. From that point, the problem of giving access to the movies to all "allowed" players (and only to them) is reduced to the problem of giving them access to the movie encryption key. A movie encryption key is small (a few dozen bits), an encrypted movie is huge (several gigabytes); presumably, keys will be easier to handle. On the disc, physically, there is only one instance of the movie itself.

Each player has its own key. So one model is the following: the disc contains the movie, encrypted with the movie key $M$, and a complete set of $E_P(M)$ (movie key $M$ encrypted with player key $P$). Each player would decrypt its $E_P(M)$ to retrieve $M$ and decrypt the movie. If, at some point, a given player key $P$ is detected to have been leaked (e.g. there is some downloadable software which uses that key to decrypt movies), then the movie editors just have to stop including $E_P(M)$ for that key $P$ in the discs they press (we would say that the offending key is revoked).
Now, the problem with that model is that there are billions of player keys (for all the current and future players, because we want discs sold now to be playable on systems which will be built next year), so the total size of all these $E_P(M)$ would exceed the disc capacity (it would need several dozen gigabytes, more than the movie itself). So the tree-like structures come into play. The player keys all come from a giant tree structure. The needed size on each disc is in $O(\log n) + O(r)$ where $n$ is the number of player keys in circulation (including future keys) and $r$ is the number of revoked keys. This is small enough for existing discs (in particular Blu-ray discs, which use AACS) as long as there are not too many revoked keys (the hope is that revocation will remain a rare occurrence). -

Take a look at Jerry Sui's Master thesis. It contains probably the most complete and most readable description of AACS. -

Welcome to Cryptography Stack Exchange. We actually prefer answers which contain the information itself, not only a web link. Could you add a summary of the information to the answer? Otherwise we will convert your answer to a comment. – Paŭlo Ebermann♦ Nov 23 '11 at 22:02
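The naive "one $E_P(M)$ per player" model described in the first answer can be sketched in a few lines (toy code of mine; XOR with a random 16-byte pad stands in for real symmetric encryption, and the player names are made up):

```python
import os

# Naive broadcast model: one movie key M, one header entry E_P(M) per
# non-revoked player key P. (Toy XOR "cipher" -- not real cryptography.)

def xor(key, data):
    return bytes(a ^ b for a, b in zip(key, data))

player_keys = {f"player{i}": os.urandom(16) for i in range(8)}
revoked = {"player3"}                   # a leaked key, dropped from new discs

movie_key = os.urandom(16)
header = {pid: xor(pk, movie_key)
          for pid, pk in player_keys.items() if pid not in revoked}

# A legitimate player recovers M from its own header entry:
assert xor(player_keys["player0"], header["player0"]) == movie_key
# The revoked player simply has no entry on newly pressed discs:
assert "player3" not in header
```

The header here has one entry per player, which is exactly the $O(n)$ blow-up the answer describes; the tree-based subset-difference scheme exists to shrink that header to $O(\log n) + O(r)$.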
http://mathhelpforum.com/calculators/86272-should-i-upgrade-83-89-next-semester-print.html
Should I upgrade from an 83 to an 89 for next semester?

• April 28th 2009, 12:37 PM sinewave85 Should I upgrade from an 83 to an 89 for next semester? Right now I have a Ti 83 that I have had since eighth grade. At this point (wrapping up my first semester of Calculus) I use it infrequently -- mostly to find decimal values and to double check my arithmetic before I submit work. My university "recommends, but does not require, a calculator capable of advanced functions (such as a Ti 89) for use as a learning tool to aid students in familiarizing themselves with the material and concepts presented. However, no student will be allowed access to a calculator of any kind during any test or assignment which contributes to the student's final grade." Thus far I have put off upgrading in part because of the expense, and in part because I was worried about becoming dependent on the calculator's capabilities. However, at this point I am starting to wonder if not having one is going to put me at a disadvantage as the material becomes harder. What is your opinion -- is a Ti 89 a valuable learning tool or an expensive crutch?

• April 29th 2009, 12:37 AM The Second Solution Quote: Originally Posted by sinewave85 Right now I have a Ti 83 that I have had since eighth grade. At this point (wrapping up my first semester of Calculus) I use it infrequently -- mostly to find decimal values and to double check my arithmetic before I submit work. My university "recommends, but does not require, a calculator capable of advanced functions (such as a Ti 89) for use as a learning tool to aid students in familiarizing themselves with the material and concepts presented. However, no student will be allowed access to a calculator of any kind during any test or assignment which contributes to the student's final grade." Thus far I have put off upgrading in part because of the expense, and in part because I was worried about becoming dependent on the calculator's capabilities.
However, at this point I am starting to wonder if not having one is going to put me at a disadvantage as the material becomes harder. What is your opinion -- is a Ti 89 a valuable learning tool or an expensive crutch? If the calculator is not used in the exam then you're not disadvantaged by not having one. Most things you might like to do (checking assignment work etc.) can be done using on-line freeware.

• April 29th 2009, 10:32 AM arbolis Quote: Originally Posted by sinewave85 Right now I have a Ti 83 that I have had since eighth grade. At this point (wrapping up my first semester of Calculus) I use it infrequently -- mostly to find decimal values and to double check my arithmetic before I submit work. My university "recommends, but does not require, a calculator capable of advanced functions (such as a Ti 89) for use as a learning tool to aid students in familiarizing themselves with the material and concepts presented. However, no student will be allowed access to a calculator of any kind during any test or assignment which contributes to the student's final grade." Thus far I have put off upgrading in part because of the expense, and in part because I was worried about becoming dependent on the calculator's capabilities. However, at this point I am starting to wonder if not having one is going to put me at a disadvantage as the material becomes harder. What is your opinion -- is a Ti 89 a valuable learning tool or an expensive crutch? Definitely not. I have used a TI-82 since high school (I'm now a sophomore) and it has been more than enough so far. Furthermore, we are not allowed to use any calculator in maths classes, though we are in physics.

• April 29th 2009, 11:06 AM Jameson Hello sinewave85! This is a very good question. I was in a similar boat, as I had a TI-83 for 9-11th grade and got my TI-89 my senior year of high school. The TI-89 definitely has some amazing features that can be useful, if applied the right way. If not, like you said, it becomes a crutch.
Whether that would happen or not would be up to you. Like others have said, I agree that you would not be at a disadvantage to other students if you didn't buy one. You would just need to find other ways of achieving the same things, which isn't too difficult. Cliffnotes:
1) I would buy one if it is affordable for you
2) If you don't want to or don't have the money now, it won't hurt you
3) It will only become a crutch if you allow it to
Jameson

• April 29th 2009, 01:12 PM sinewave85 Quote: Originally Posted by The Second Solution If the calculator is not used in the exam then you're not disadvantaged by not having one. Most things you might like to do (checking assignment work etc.) can be done using on-line freeware. Thanks for the advice! I should look into what kind of freeware is out there.

• April 29th 2009, 01:41 PM sinewave85 Quote: Originally Posted by Jameson Hello sinewave85! This is a very good question. I was in a similar boat, as I had a TI-83 for 9-11th grade and got my TI-89 my senior year of high school. The TI-89 definitely has some amazing features that can be useful, if applied the right way. If not, like you said it becomes a crutch. Whether that would happen or not would be up to you. Like others have said, I agree that you would not be at a disadvantage to other students if you didn't buy one. You would just need to find other ways of achieving the same things, which isn't too difficult. Cliffnotes: 1) I would buy one if it is affordable for you 2) If you don't want to or don't have the money now, it won't hurt you 3) It will only become a crutch if you allow it to Jameson Thanks so much for all the advice! My main interest in one is, as The Second Solution said, the ability to check over my work -- both major steps of complex problems and finished solutions. The cost is significant for me, but so is the time.
I find myself spending almost as much time going back over my work line by line as I do working the problems in the first place, and it adds up to a lot of time each week spent on my one math course. With relatively low-intensity freshman courses, I have found the time, but I worry that may change as I get into higher-level work. If I could, for instance, enter something like this $\frac{d}{dx}\left(\sin^{-1}(xy) + \frac{\pi}{2} = \cos^{-1}(y)\right)$ and see quickly if I had the right answer, that would be great. It is not so much that I worry about abusing it (putting questions through the calculator without first doing the work on paper) but rather becoming so attached to it psychologically that I feel lost without it. Since I am a distance student, about 95% of my final grade is one four-hour test at the end of the semester -- a bad time to be switching habits. As I said, thanks to all for the advice. I felt kind of silly asking this question, and I appreciate the thoughtful responses! Now, I suppose, it is just a matter of weighing my priorities.

• April 29th 2009, 02:23 PM Jameson I agree this comes down to your priorities. I don't know your financial situation, but I think that in the end the cost of the calculator shouldn't be the determining factor. You could work part time or live meekly for a while to earn the \$100-something it costs. Remember your time spent learning is an investment and tools like this can pay out many times over in the future. That isn't to tell you to buy it no matter what, but just something to think about. Finally, I think you should think about this. After Calculus III (multi-variable) and Differential Equations I (introductory class), computations are less and less a part of math classes. Higher level math classes deal with generalities and proofs and stop asking questions where the answer is a number.
I think that it would be easily possible that you would stop using any calculator for the last year or so of undergraduate math study. That's about every possible thing I can think of on this topic, so good luck :)

• April 29th 2009, 04:05 PM mr fantastic Quote: Originally Posted by sinewave85 Thanks for the advice! I should look into what kind of freeware is out there. An on-line integrator: Wolfram Mathematica Online Integrator An on-line derivative calculator: Step-by-Step Derivatives

• April 30th 2009, 09:32 AM sinewave85 Quote: Originally Posted by Jameson I agree this comes down to your priorities. I don't know your financial situation, but I think that in the end the cost of the calculator shouldn't be the determining factor. You could work part time or live meekly for a while to earn the \$100-something it costs. Remember your time spent learning is an investment and tools like this can pay out many times over in the future. That isn't to tell you to buy it no matter what, but just something to think about. Finally, I think you should think about this. After Calculus III (multi-variable) and Differential Equations I (introductory class), computations are less and less a part of math classes. Higher level math classes deal with generalities and proofs and stop asking questions where the answer is a number. I think that it would be easily possible that you would stop using any calculator for the last year or so of undergraduate math study. That's about every possible thing I can think of on this topic, so good luck :) Ok, you've convinced me. (Clapping) I will budget for one over the summer. I have no illusions about growing up to be a mathematician; my goal is just to make it through enough math to keep my degree options open. Thanks for all of the good advice, Jameson!
• April 30th 2009, 10:05 AM sinewave85 Quote: Originally Posted by mr fantastic An on-line integrator: Wolfram Mathematica Online Integrator An on-line derivative calculator: Step-by-Step Derivatives Thanks so much for the links. Those will help a lot! I really appreciate all of the input. • April 30th 2009, 01:23 PM gammaman I happen to own an 89. In my calculus II class the exams are given in two parts. Anything involving integration of any type is on the non-calculator portion of the test. So I spent all that money on it and don't even get to take advantage of any of its features. I mean I would just like to be able to use it for things like factoring or polynomial long division, or any other areas where I am likely to make a careless mistake, but no I am not allowed a calculator at all. Do yourself a favor, save your money. • May 2nd 2009, 07:37 AM sinewave85 Quote: Originally Posted by gammaman I happen to own an 89. In my calculus II class the exams are given in two parts. Anything involving integration of any type is on the non-calculator portion of the test. So I spent all that money on it and don't even get to take advantage of any of its features. I mean I would just like to be able to use it for things like factoring or polynomial long division, or any other areas where I am likely to make a careless mistake, but no I am not allowed a calculator at all. Do yourself a favor, save your money. Thanks for the input, gammaman. Not being allowed to use the calculator on tests is on the cons list for me. I wonder, however, if you find the calculator helpful or enlightening on daily work. Does it make it easier to work through the material efficiently or to understand the concepts presented? Or do you not use it at all, given its exclusion from tests? Copyright © 2005-2013 Math Help Forum. All rights reserved.
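Editor's note: the kind of answer-checking discussed in this thread does not strictly require a \$100 calculator or an online integrator; a few lines of plain Python will spot-check a hand-computed derivative numerically. This is a sketch, and the function and its "claimed" derivative below are an illustrative homework-style example, not one taken from the thread.

```python
import math

def numeric_deriv(f, x, h=1e-6):
    # central difference approximation, accurate to O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Suppose a hand computation claims d/dx [x^2 sin(x)] = 2x sin(x) + x^2 cos(x).
f = lambda x: x * x * math.sin(x)
claimed = lambda x: 2 * x * math.sin(x) + x * x * math.cos(x)

# Agreement at several sample points is strong evidence the answer is right.
for x in (0.3, 1.0, 2.5):
    assert abs(numeric_deriv(f, x) - claimed(x)) < 1e-5

print("derivative checks out")
```

A free computer algebra system such as SymPy can do the same check symbolically, which covers the "freeware" suggestion made earlier in the thread.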
http://math.stackexchange.com/questions/174779/if-phi-in-c1-c-mathbb-r-then-lim-n-int-mathbb-r-frac-sinnxx-p
# If $\phi \in C^1_c(\mathbb R)$ then $\lim_n \int_\mathbb R \frac{\sin(nx)}{x}\phi(x)\,dx = \pi\phi(0)$. Let $\phi \in C^1_c(\mathbb R)$. Prove that $$\lim_{n \to +\infty} \int_\mathbb R \frac{\sin(nx)}{x}\phi(x) \, dx = \pi\phi(0).$$ Unfortunately, I didn't manage to give a complete proof. First of all, I fixed $\varepsilon>0$. Then there exists a $\delta >0$ s.t. $$\vert x \vert < \delta \Rightarrow \vert \phi(x)-\phi(0) \vert < \frac{\varepsilon}{\pi}.$$ Now, I would use the well-known fact that $$\int_\mathbb R \frac{\sin x}{x} \, dx = \pi.$$ On the other hand, by substitution rule, we have also $$\int_\mathbb R \frac{\sin(nx)}{x} \, dx = \int_\mathbb R \frac{\sin x}{x} \, dx = \pi.$$ Indeed, I would like to estimate the quantity $$\begin{split} & \left\vert \int_\mathbb R \frac{\sin(nx)}{x}\phi(x) \, dx - \pi \phi(0) \right\vert = \\ & = \left\vert \int_\mathbb R \frac{\sin(nx)}{x}\phi(x) \, dx - \phi(0)\int_\mathbb R \frac{\sin{(nx)}}{x}dx \right\vert \le \\ & \le \int_\mathbb R \left\vert \frac{\sin(nx)}{x}\right\vert \cdot \left\vert \phi(x)-\phi(0) \right\vert dx \end{split}$$ but the problem is that $x \mapsto \frac{\sin(nx)}{x}$ is not absolutely integrable over $\mathbb R$. Another big problem is that I don't see how to use the hypothesis $\phi$ has compact support. - ## 4 Answers Note that $$\small \int\limits_{\mathbb{R}}\frac{\sin(nx)}{x}\phi(x)dx-\pi\phi(0)= \int\limits_{\mathbb{R}}\frac{\sin(nx)}{x}\phi(x)dx-\phi(0)\int\limits_{\mathbb{R}}\frac{\sin(nx)}{x}dx= \int\limits_{\mathbb{R}}\frac{\sin(nx)}{x}\left(\phi(x)-\phi(0)\right)dx$$ Denote $$\small I_{m}(n):=\int\limits_{[-\pi m,\pi m]}\frac{\sin(nx)}{x}\left(\phi(x)-\phi(0)\right)dx\qquad I(n):=\int\limits_{\mathbb{R}}\frac{\sin(nx)}{x}\left(\phi(x)-\phi(0)\right)dx$$ We claim that $I_m(n)$ converges to $I(n)$ uniformly by $n\in\mathbb{N}$ when $m\to\infty$. 
Indeed, since $\phi$ is compactly supported $$\small \lim\limits_{m\to\infty}\sup\limits_{n\in\mathbb{N}}\left|I_m(n)-I(n)\right|= \lim\limits_{m\to\infty}\sup\limits_{n\in\mathbb{N}}\left|\;\int\limits_{\mathbb{R}\setminus[-\pi m,\pi m]}\frac{\sin(nx)}{x}\left(\phi(x)-\phi(0)\right)dx\right|=$$ $$\small \lim\limits_{m\to\infty}\sup\limits_{n\in\mathbb{N}}\left|\;\int\limits_{\mathbb{R}\setminus[-\pi m,\pi m]}\frac{\sin(nx)}{x}(-\phi(0))dx\right|= \lim\limits_{m\to\infty}\sup\limits_{n\in\mathbb{N}}\left|\phi(0)\int\limits_{\mathbb{R}\setminus[-\pi mn,\pi mn]}\frac{\sin(y)}{y}dy\right|=$$ $$\small |\phi(0)|\lim\limits_{m\to\infty}\left|\;\int\limits_{\mathbb{R}\setminus[-\pi m,\pi m]}\frac{\sin(y)}{y}dy\right|=0$$ Since the convergence is uniform in $n\in\mathbb{N}$ we can say $$\small \lim\limits_{n\to\infty} I(n)= \lim\limits_{n\to\infty} \lim\limits_{m\to\infty} I_m(n)= \lim\limits_{m\to\infty}\lim\limits_{n\to\infty} I_m(n)= \lim\limits_{m\to\infty}\lim\limits_{n\to\infty}\int\limits_{[-\pi m,\pi m]}\frac{\sin(nx)}{x}\left(\phi(x)-\phi(0)\right)dx$$ Since $\varphi\in C_c^1(\mathbb{R})$, the function $x^{-1}(\varphi(x)-\varphi(0))$ is in $L^1([-\pi m,\pi m])$ for all $m\in\mathbb{N}$. Then by the Riemann–Lebesgue lemma $$\small \lim\limits_{n\to\infty}\int\limits_{[-\pi m,\pi m]}\frac{\sin(nx)}{x}\left(\phi(x)-\phi(0)\right)dx=0$$ so $\small\lim\limits_{n\to\infty}I(n)=0$. This is exactly what we wanted to prove. - What a simple and excellent proof! Thank you very much for your kindness. – Romeo Jul 24 '12 at 21:19 @Romeo, not at all – Norbert Jul 24 '12 at 21:20 I saw the edit, but not how to get the bound. – Davide Giraudo Jul 24 '12 at 21:40 Davide, it is more likely that I made a mistake than that you failed to see the bound – Norbert Jul 24 '12 at 21:41 Note that $\varphi$ is compactly supported, and so is its derivative. So we can integrate over the support of $\phi$.
The measure of the support is finite (since the support is bounded) and the derivative is bounded since it is continuous – Norbert Jul 24 '12 at 21:44 show 12 more comments A quick proof of this can be given using the Riemann-Lebesgue lemma, which is covered in Rudin and a number of other texts. Write your limit as $$\lim_{n \rightarrow \infty} \int_\mathbb R \sin(nx)\frac{\phi(x) - \phi(0)\chi_{[-1,1]}(x)}{x}\, dx + \lim_{n \rightarrow \infty} \int_\mathbb R \sin(nx)\frac{\phi(0)\chi_{[-1,1]}(x)}{x}\, dx$$ Here $\chi_{[-1,1]}(x)$ denotes the characteristic function of $[-1,1]$. Since $\phi(x) \in C_c^1({\mathbb R})$, the function ${\displaystyle \frac{\phi(x) - \phi(0)\chi_{[-1,1]}(x)}{x}}$ is a bounded function with compact support; the only place you have to worry about is $x = 0$ and you can use the mean value theorem for example to show it's bounded near $x = 0$. Since the function is a bounded function with compact support it is in $L^1$, which is enough to apply the Riemann-Lebesgue lemma and say the first term goes to zero. As for the second term, after changing variables to $y = nx$ we may rewrite it as $$\lim_{n \rightarrow \infty} \int_\mathbb R \sin(y)\frac{\phi(0)\chi_{[-n,n]}(y)}{y}\, dy$$ $$= \phi(0)\lim_{n \rightarrow \infty} \int_{-n}^n \frac{\sin(y)}{y}\, dy$$ $$= \phi(0)\int_\mathbb R \frac{\sin(y)}{y}\, dy$$ $$= \pi \phi(0)$$ So this will be the overall limit. - Nice, and doesn't use further derivatives than the first one. – Davide Giraudo Jul 26 '12 at 15:56 Only need that $\phi(x)$ is a compactly supported Lipschitz function... – Zarrax Jul 26 '12 at 17:49 If you assume it, then the argument for the convergence on $0$ may not follow by the mean value theorem, but just using boundedness of the ratio $\frac{\phi(x)-\phi(0)}x$. – Davide Giraudo Jul 26 '12 at 18:33 We can assume WLOG that $\phi\in C^3_c(\Bbb R)$, since this subset is dense for the supremum norm and $\frac{\sin x}x$ is integrable over compact subsets.
We have $$\phi(x)-\phi(0)=x\phi'(0)+x^2\int_0^1(1-s)\phi''(sx)\,ds,$$ hence, if the support of $\phi$ is contained in $[-R,R]$, \begin{align} \small \int_{-\infty}^{+\infty}\frac{\sin(nx)}x\phi(x)dx&=\small\phi(0)\int_{-R}^R\frac{\sin(nx)}xdx+\phi'(0)\int_{-R}^R\sin(nx)dx+\int_{-R}^R x\sin(nx)\int_0^1(1-s)\phi''(sx)\,ds\,dx\\ &=\small\phi(0)\int_{-nR}^{nR}\frac{\sin t}tdt-\left[\frac{\cos(nx)}n\,x\int_0^1(1-s)\phi''(sx)\,ds\right]_{-R}^R\\ &\small\quad+\int_{-R}^R\frac{\cos(nx)}n\left(\int_0^1(1-s)\phi''(sx)\,ds+x\int_0^1 s(1-s)\phi'''(sx)\,ds\right)dx \end{align} (the $\phi'(0)$ term vanishes since $\sin(nx)$ is odd). We have $$\lim_{n\to+\infty}\int_{-nR}^{nR}\frac{\sin t}tdt=\int_{-\infty}^{+\infty}\frac{\sin t}tdt;$$ $$\left|\left[\frac{\cos(nx)}n\,x\int_0^1(1-s)\phi''(sx)\,ds\right]_{-R}^R\right|\leq \frac{2R}n\sup_{t\in \Bbb R}|\phi''(t)|,$$ and $$\left|\int_{-R}^R\frac{\cos(nx)}n\left(\int_0^1(1-s)\phi''(sx)\,ds+x\int_0^1 s(1-s)\phi'''(sx)\,ds\right)dx\right|\leq \frac{2R}n\sup_{t\in \Bbb R}|\phi''(t)|+\frac{R^2}n\sup_{t\in \Bbb R}|\phi'''(t)|,$$ so the last two terms tend to $0$ and the limit is $\pi\phi(0)$. - Assume that $\phi(x)$ is supported in $|x|< L$. Since $\phi$ is differentiable, $\frac{\phi(x)-\phi(0)}{x}$ is bounded and therefore integrable on $|x|<L$. $$\begin{align} \int_{-\infty}^\infty\frac{\sin(nx)}{x}\phi(x)\,\mathrm{d}x &=\pi\,\phi(0)+\int_{-\infty}^\infty\sin(nx)\frac{\phi(x)-\phi(0)}{x}\,\mathrm{d}x\\ &=\pi\,\phi(0)+\color{#C00000}{\int_{-L}^L\sin(nx)\frac{\phi(x)-\phi(0)}{x}\,\mathrm{d}x}\\ &-2\,\phi(0)\color{#00A000}{\int_{nL}^\infty\frac{\sin(x)}{x}\,\mathrm{d}x}\\ \end{align}$$ As $n\to\infty$, the red integral vanishes by the Riemann-Lebesgue Lemma and the green integral vanishes because the Dirichlet integral converges. This leaves us with $$\lim_{n\to\infty}\int_{-\infty}^\infty\frac{\sin(nx)}{x}\phi(x)\,\mathrm{d}x=\pi\,\phi(0)$$ -
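Editor's note: the theorem is easy to test numerically for a concrete test function. The classic smooth bump $\phi(x)=e^{-1/(1-x^2)}$ on $(-1,1)$ is an illustrative choice (not taken from the answers above); since it is supported in $[-1,1]$, the integral over $\mathbb R$ reduces to $[-1,1]$, and the limit should be $\pi\phi(0)=\pi/e\approx 1.1557$.

```python
import math

def phi(x):
    # smooth bump supported on [-1, 1]; phi(0) = e^{-1}
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def integrand(x, n):
    # sin(nx)/x * phi(x), with the removable singularity at x = 0 filled in
    return n * phi(0.0) if x == 0.0 else math.sin(n * x) / x * phi(x)

def simpson(f, a, b, m):
    # composite Simpson rule on 2*m subintervals
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k + 1) * h) for k in range(m))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, m))
    return s * h / 3.0

target = math.pi * phi(0.0)  # pi * e^{-1}
for n in (10, 100, 1000):
    # 50*n panels are more than enough to resolve the oscillation of sin(nx)
    approx = simpson(lambda x: integrand(x, n), -1.0, 1.0, 50 * n)
    print(n, approx, abs(approx - target))
```

The printed errors shrink as $n$ grows, consistent with the $O(1/n)$ bounds in the answers above.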
http://physics.stackexchange.com/tags/subatomic/hot?filter=all
# Tag Info ## Hot answers tagged subatomic 39 ### What does it mean for two objects to “touch”? Wow, this one has been over-answered already, I know... but it is such a fun question! So, here's an answer that hasn't been, um, "touched" on yet... :) You sir, whatever your age may be (anyone with kids will know what I mean), have asked for an answer to one of the deepest questions of quantum mechanics. In the quantum physics dialect of High Nerdese, ... 29 ### Are all electrons identical? One good piece of evidence that all particles of a given type are identical is the exchange interaction. The exchange symmetry (that one can exchange any two electrons and leave the Hamiltonian unchanged) results in the Pauli exclusion principle for fermions. It also is responsible for all sorts of particle statistics effects (particles following the ... 12 ### Why are electrons and quarks 0-dimensional? I think the best answer to your question is simply "because that's all we can see when we do experiments." That is, no matter how hard anyone tries or how much energy they toss into the processes, electrons and quarks show no signs of any appendages, surfaces, hair-like structures, bumps, volume, whatever. When you model them mathematically as points, the ... 10 ### What is in the space between a nucleus of an atom and its electrons? Short answer: The space between the nucleus and the electron is not empty space, it is filled with an electron cloud. (You will understand this answer better if you read the long answer) Long answer: Firstly, physics is a description of what we can observe. Depending on the scale of which you are describing, physicists, over the years, have different ... 7 ### What does it mean for two objects to “touch”? 
As a useable heuristic I would go with something along the lines of the intermolecular forces between the surface molecules of the bodies are comparable to the scale of one-to-one intermolecular forces between nearby{*} molecules due to other components of the same body You could make it a little more strict by replacing "comparable to" with ... 7 ### Anti-Matter for Neutrons The anti-particle corresponding to a neutron is an anti neutron! The neutron is made up of one up quark and two down quarks. The anti-neutron is made up of an anti-up quark and two anti-down quarks. Both have zero charge because the charges of the quarks within them balance out. You are correct that elementary particles with no charge are often their own ... 6 ### Why are there 3 quarks in proton? Why three quarks? In very simple terms bound states of quarks (hadrons) have to be color neutral so that means either color quark + anticolor antiquark (mesons) or three quarks carrying R, G and B color charge respectively (baryons). (Note: There should also exist exotic particles like tetraquarks and pentaquarks but these haven't been observed yet and ... 6 ### Intrinsic structure of electron That's a great question! Unfortunately, the only honest answer is "that's what we see in nature, with great precision and complete reproducibility." There is no deep theoretical understanding. The more exotic form of your question is phrased in terms the self-energy of an electron, and it's a question that plagued Nobel Laureate Richard Feynman his entire ... 5 ### What would be likely to completely stop a subatomic particle assuming it was possible? This is just a misunderstanding--- "no motion" in quantum mechanics is a different concept than "no motion" in classical mechanics. At zero temperature, nothing stops. Spherical uncharged black holes don't stop particles at the singularity, they absorb particles and time just ends at the singularity for the infalling matter. The wavefunctions are not made to ... 
5 ### What did Marie Curie do for atomic theory? From: NobelPrize.org "Her continued systematic studies of the various chemical compounds gave the surprising result that the strength of the radiation did not depend on the compound that was being studied. It depended only on the amount of uranium or thorium. Chemical compounds of the same element generally have very different chemical and physical ... 4 ### What is in the space between a nucleus of an atom and its electrons? Maybe one should add to the analysis of @QEntanglement and the nice electron probability clouds in the illustrations in the other answers, that also the space between the nucleus and the electrons is teeming with the exchange of virtual particles between the electrons and the nucleus, necessary to create the potential which determines the energy levels of ... 4 ### Why does amount of protons define how matter is? Your day to day experience of the material world is governed by chemistry. This is at some level the science of atoms and groups of atoms. Things like hardness, colour, toxicity and others are all largely determined by the interaction of atoms. In particular the outer coating of atoms, the electrons. Obviously the details of why element or compound A is ... 3 ### How can Sub-Atomic Particles be Visualized? You can see a nucleus and the nucleus of a hydrogen atom is a proton which is the same. You can't see below that at least with a source of neutrons that ISIS produce, but you can see down to the level of the proton. 3 ### Explanation on Atomic Orbitals and Molecular Orbitals Your teacher is referring to the LCAO approximation as a way of calculating molecular orbitals. Suppose you bring two hydrogen atoms together i.e. create a hydrogen molecule. To calculate the electronic structure you need to solve the Schrodinger equation, but even for something as simple as the hydrogen molecule the Schrodinger equation is too complex to ...
3 ### Paramagnetism what about Paraweakism or Parastrongism? First, there is no universal inequality that would say that materials have to be "paramagnets". The opposite effects imply that materials may also be "diamagnets" which means that they react oppositely to the magnetic field. I think that atoms and molecules are the smallest objects whose response to the magnetic field may be viewed as the microscopic cause ... 2 ### How can particles being closed strings in String Theory create solidity in objects? You say: I understand how particles with certain masses can form to make atoms, which create rigidity in objects due to Pauli's Exclusion Principle and what have you. These particles actually have mass and to a certain extent clearly would produce rigidity in objects. Do you understand what rigidity is ? I would define it as the resistance of a solid ... 2 ### What does it mean for two objects to “touch”? This is very legitimate question for something we usually take for granted. I think it would be possible to define macroscopically touching as the situation, in which the total force between two electrically neutral rigid bodies is larger than pure gravitational (for some measurable value). The difference is of course the normal component of the surface ... 2 ### Looking for a list of possible subatomic particle collisions Oh goodness... that is an immensely complicated topic. Many thousands of people have put in decades of work figuring out exactly what happens when two subatomic particles collide. The calculations are all done using quantum field theory, so I would say if you want to learn about the process involved in describing the outcome of a collision, read up on QFT - ... 2 ### How can Sub-Atomic Particles be Visualized? We can image the sub-structure of nucleons by a number of different techniques involving high energy scattering. The results are generally presented in terms of "parton distribution functions" or "structure functions". 
One such experiment that I had some small relationship with (though not enough to be an author) was NuSea (E866) at Fermilab in the mid ... 2 ### The Nucleus of an Atom Short answer: the strong nuclear force. The strong nuclear force binds nucleons (protons and neutrons) together. It is a very short-range force, which is why it only acts over distances on the scale of atomic nuclei. There is repulsion between the protons, which is why, as the number of protons goes up, more and more neutrons are required to stabilize the ... 2 ### Energy source of electrons? However where did the electron get its energy from in the first place(during the creation of the universe"Big bang"). All energy, and remember energy and mass are related by E=m*c^2, that exists in the universe existed after the first minutes of the Big Bang . For t=0 plus an interval after it where gravitational forces predominate, i.e the realm of ... 2 ### The Value of Newton's Gravitational Constant $G$ within an Atom If you're asking whether we can measure the effect on atomic structure of gravitational forces between the nucleus and the electrons, then the answer is that not only have we never measured such effects but it's unlikely we'll ever be able to measure them as they would be many orders of magnitude below the electrostatic forces that hold the atom together. ... 2 ### What would be the effect of an excess of up quarks on stellar formation? It's really not clear what hypothetical limits you're imposing. I take your question to mean that in the process of baryogenesis the various baryons like protons and neutrons highly favored up quarks (lots more protons than neutrons). Remember, quarks are subject to confinement so other than a quark-gluon plasma, quarks are confined to baryons. Since ... 2 ### Alpha particle and helium nucleus They are exactly the same, with the different notations arising in different contexts. 
You could start with a bunch of helium gas and heat it up or shine UV light on it to turn it into a plasma, and then you'd probably say you have $\mathrm{He}^{2+}$ (or $\mathrm{He}\ \mathrm{III}$ if you are an astronomer). The symbol $\alpha$ is more often reserved for ... 1 ### Energy source of electrons? You have a lot of questions here, and they show you really need to read up on some basic physics, but here goes with some simple answers: Where did the electron get its energy from in the first place? What energy do you mean? Why doesn't everything fall apart when we sit on a chair or grab a pencil,why wont the electrons fall from trajectory and get caught ... 1 ### What would be likely to completely stop a subatomic particle assuming it was possible? Your question is interesting, and gets specifically to the kinds of questions that quantum mechanics was intended to answer in the first place. It helps to understand the motivation behind the original Bohr model of the atom, and how that led to QM in the first place. The problem Bohr was trying to address can be paraphrased as, "If an electron orbits a ... 1 ### What would be likely to completely stop a subatomic particle assuming it was possible? There exists a huge number of experimental evidence that in the subatomic world nature is using quantum mechanics. In quantum mechanics bound states always have the first energy level above 0 energy. Your magnetic field thought experiment creates a bound stated in the collective potential. Free states have to obey the HUP and therefore cannot have 0 fixed ... 1 ### A question on the property of proton and neutron? [closed] I will assume you have heard something about proton decay. proton decay is a hypothetical form of radioactive decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron.1 There is currently no experimental evidence that proton decay occurs. There are a number of experiments which test whether such decays ... 
1 ### Looking for a list of possible subatomic particle collisions In a sense these "games" exist, need large computing power and are called high energy physics monte carlos. These are very complicated simulations of the reality of the experiment and include all the detector effects. At the first level of the core of these HEP monte carlos there exist tables of "complete" possibilities of scattering products: all ... 1 ### Intrinsic structure of electron It's a very good question. The electron is described by a wave field which resembles a charge distribution, so it is natural to wonder why it doesn't repel itself and spread out all over. However, the wave is not a classical wave but is quantized, i.e. the energy in a given vibration mode has to come in discrete bundles. One can count how many excitations ... Only top voted, non community-wiki answers of a minimum length are eligible
http://mathhelpforum.com/algebra/9994-show-inequality-x-2-a.html
# Thread: 1. ## Show Inequality X^2 > a If a > 0, show that the solution set of the inequality x^2 > a consists of all numbers x for which -(sqrt{a}) < a (sqrt{a}). Okay...what exactly is the question asking in simple math terms? I don't follow the logic behind this sort of reasoning. 2. Originally Posted by symmetry If a > 0, show that the solution set of the inequality x^2 > a consists of all numbers x for which -(sqrt{a}) < a (sqrt{a}). Okay...what exactly is the question asking in simple math terms? I don't follow the logic behind this sort of reasoning. First there is a typo in your post, the phrase: "consists of all numbers x for which -(sqrt{a}) < a (sqrt{a})" is probably mistyped as it is always true, there should be an x in there somewhere. If you sketch the graph of x^2-a (with a>0) you will see that this is >0 everywhere except between the roots of x^2-a, and when this is >0 we have x^2>a. So x^2>a when x<-sqrt(a), or when x>sqrt(a). RonL 3. Originally Posted by symmetry If a > 0, show that the solution set of the inequality x^2 > a consists of all numbers x for which -(sqrt{a}) < a (sqrt{a}). You have, for $a>0$. $x^2 > a$ Express as, $x^2 - a >0$ $x^2 - (\sqrt{a})^2 > 0$ $(x-\sqrt{a})(x+\sqrt{a})>0$ To be positive we require both factors to be positive or negative. 1)Both positive. That means, $x-\sqrt{a}>0$ $x+\sqrt{a}>0$ Thus, $x>\sqrt{a}$ $x>-\sqrt{a}$ Another way of writing this is, $x>\sqrt{a}$ Because if the top inequality is true then certainly so is the bottom one, for it is contained in the top one. 2)Both negative. That means, $x-\sqrt{a}<0$ $x+\sqrt{a}<0$ Thus, $x<\sqrt{a}$ $x<-\sqrt{a}$ Another way of writing this is, $x<-\sqrt{a}$ Because if the bottom inequality is true then certainly so is the top one, for it is contained in the bottom one. Thus, $x>\sqrt{a} \mbox{ or }x<-\sqrt{a}$ 4. ## ok I want to thank both for your replies. Captainblack, You are right in making your statement. There is a typing error.
I will now send the correct questions. Thanks.
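Editor's note: the solution set derived above ($x > \sqrt{a}$ or $x < -\sqrt{a}$) can be sanity-checked numerically; the value a = 9 here is an arbitrary positive choice.

```python
import math

a = 9.0
root = math.sqrt(a)

# Sample x on a grid covering both sides of +/- sqrt(a); at every point,
# x^2 > a must hold exactly when x < -sqrt(a) or x > sqrt(a).
for k in range(-100, 101):
    x = k / 10.0
    assert (x * x > a) == (x < -root or x > root)

print("solution set verified for a =", a)
```

Note that the boundary points x = ±3 correctly fall outside the solution set, since the inequality is strict.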
http://medlibrary.org/medwiki/Gelfand%E2%80%93Naimark%E2%80%93Segal_construction
Gelfand–Naimark–Segal construction In functional analysis, a discipline within mathematics, given a C*-algebra A, the Gelfand–Naimark–Segal construction establishes a correspondence between cyclic *-representations of A and certain linear functionals on A (called states). The correspondence is shown by an explicit construction of the *-representation from the state. The content of the GNS construction is contained in the second theorem below. It is named for Israel Gelfand, Mark Naimark, and Irving Segal. States and representations A *-representation of a C*-algebra A on a Hilbert space H is a mapping π from A into the algebra of bounded operators on H such that • π is a ring homomorphism which carries involution on A into involution on operators • π is nondegenerate, that is the space of vectors π(x) ξ is dense as x ranges through A and ξ ranges through H. Note that if A has an identity, nondegeneracy means exactly π is unit-preserving, i.e. π maps the identity of A to the identity operator on H. A state on C*-algebra A is a positive linear functional f of norm 1. If A has a multiplicative unit element this condition is equivalent to f(1) = 1. For a representation π of a C*-algebra A on a Hilbert space H, an element ξ is called a cyclic vector if the set of vectors $\{\pi(x)\xi:x\in A\}$ is norm dense in H, in which case π is called a cyclic representation. Any non-zero vector of an irreducible representation is cyclic. However, non-zero vectors in a cyclic representation may fail to be cyclic.
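To make these definitions concrete, here is a minimal numerical sketch (an editorial illustration, not part of the original article): take A = M₂(ℂ) and the state ρ(x) = Tr(Dx), where the density matrix D is an arbitrary faithful choice. Since ρ is faithful, in the construction described below the null ideal is {0}, the GNS Hilbert space is A itself with ⟨x, y⟩ = ρ(x*y), π acts by left multiplication, and the class of the unit is a cyclic vector whose vector state recovers ρ.

```python
import numpy as np

# A = M_2(C) with the state rho(x) = Tr(D x),
# where D is positive definite with Tr D = 1, so rho is faithful.
D = np.diag([0.7, 0.3]).astype(complex)

def rho(x):
    return np.trace(D @ x)

def inner(x, y):
    # GNS inner product <x, y> = rho(x* y); faithfulness of rho means the
    # null ideal {x : rho(x* x) = 0} is {0}, so H is A itself (dimension 4).
    return rho(x.conj().T @ y)

def pi(x, v):
    # the *-representation: left multiplication of A on H = A
    return x @ v

xi = np.eye(2, dtype=complex)  # cyclic vector: the class of the unit

rng = np.random.default_rng(7)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# the state is recovered as a vector state: rho(x) = <xi, pi(x) xi>
assert np.isclose(rho(x), inner(xi, pi(x, xi)))

# pi is multiplicative: pi(xy)v = pi(x)(pi(y)v)
assert np.allclose(pi(x @ y, xi), pi(x, pi(y, xi)))

# the sesquilinear form is positive: <x, x> = rho(x* x) >= 0
assert inner(x, x).real >= -1e-12 and abs(inner(x, x).imag) < 1e-12

print("GNS identities verified")
```

In infinite dimensions the quotient and completion steps below are essential; in this finite-dimensional faithful case they are trivial, which is what makes the example short.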
Note to reader: In our definition of inner product, the conjugate linear argument is the first argument and the linear argument is the second argument. This is done for reasons of compatibility with the physics literature. Thus the order of arguments in some of the constructions below is exactly the opposite from those in many mathematics textbooks. Let π be a *-representation of a C*-algebra A on the Hilbert space H with cyclic vector ξ having norm 1. Then $x \mapsto \langle \xi, \pi(x)\xi\rangle$ is a state of A. Given *-representations π, π', each with unit norm cyclic vectors ξ ∈ H, ξ' ∈ K such that their respective associated states coincide, π and π' are unitarily equivalent representations. The operator U that maps π(a)ξ to π'(a)ξ' implements the unitary equivalence. The converse is also true. Every state on a C*-algebra is of the above type. This is the GNS construction: Theorem. Given a state ρ of A, there is a *-representation π of A with distinguished cyclic vector ξ such that its associated state is ρ, i.e. $\rho(x)=\langle \xi, \pi(x) \xi \rangle$ for every x in A. The construction proceeds as follows: The algebra A acts on itself by left multiplication. Via ρ, one can introduce a Hilbert space structure on A compatible with this action. Define on A a, possibly singular, inner product $\langle x, y \rangle =\rho(x^*y).$ Here singular means that the sesquilinear form may fail to satisfy the non-degeneracy property of an inner product. By the Cauchy–Schwarz inequality, the degenerate elements, x in A satisfying ρ(x* x)= 0, form a vector subspace I of A. By a C*-algebraic argument, one can show that I is a left ideal of A. The quotient space of A by the vector subspace I is an inner product space. The Cauchy completion of A/I in the quotient norm is a Hilbert space H. One needs to check that the action π(x)y = xy of A on itself passes through the above construction. As I is a left ideal of A, π descends to the quotient space A/I.
The same argument showing I is a left ideal also implies that π(x) is a bounded operator on A/I and therefore can be extended uniquely to the completion. This proves the existence of a *-representation π. If A has a multiplicative identity 1, then it is immediate that the equivalence class ξ in the GNS Hilbert space H containing 1 is a cyclic vector for the above representation. If A is non-unital, take an approximate identity {eλ} for A. Since positive linear functionals are bounded, the equivalence classes of the net {eλ} converge to some vector ξ in H, which is a cyclic vector for π. It is clear that the state ρ can be recovered as a vector state on the GNS Hilbert space. This proves the theorem. The above shows that there is a bijective correspondence between positive linear functionals and cyclic representations. Two cyclic representations πφ and πψ with corresponding positive functionals φ and ψ are unitarily equivalent if and only if φ = α ψ for some positive number α. If ω, φ, and ψ are positive linear functionals with ω = φ + ψ, then πω is unitarily equivalent to a subrepresentation of πφ ⊕ πψ. The embedding map is given by $\pi_{\omega}(x) \xi_{\omega} \mapsto \pi_{\phi}(x) \xi_{\phi} \oplus \pi_{\psi}(x) \xi_{\psi}.$ The GNS construction is at the heart of the proof of the Gelfand–Naimark theorem characterizing C*-algebras as algebras of operators. A C*-algebra has sufficiently many pure states (see below) so that the direct sum of corresponding irreducible GNS representations is faithful. The direct sum of the corresponding GNS representations of all positive linear functionals is called the universal representation of A. Since every nondegenerate representation is a direct sum of cyclic representations, any other representation is a *-homomorphic image of π. If π is the universal representation of a C*-algebra A, the closure of π(A) in the weak operator topology is called the enveloping von Neumann algebra of A.
It can be identified with the double dual A**.

Irreducibility

Also of significance is the relation between irreducible *-representations and extreme points of the convex set of states. A representation π on H is irreducible if and only if there are no closed subspaces of H invariant under all the operators π(x) other than H itself and the trivial subspace {0}.

Theorem. The set of states of a C*-algebra A with a unit element is a compact convex set under the weak-* topology. In general (regardless of whether or not A has a unit element), the set of positive functionals of norm ≤ 1 is a compact convex set.

Both of these results follow immediately from the Banach–Alaoglu theorem. In the unital commutative case, for the C*-algebra C(X) of continuous functions on some compact X, the Riesz representation theorem says that the positive functionals of norm ≤ 1 are precisely the positive Borel measures on X with total mass ≤ 1. It follows from the Krein–Milman theorem that the extremal states are the Dirac point-mass measures. On the other hand, a representation of C(X) is irreducible if and only if it is one-dimensional. Therefore the GNS representation of C(X) corresponding to a measure μ is irreducible if and only if μ is an extremal state. This is in fact true for C*-algebras in general.

Theorem. Let A be a C*-algebra. If π is a *-representation of A on the Hilbert space H with unit-norm cyclic vector ξ, then π is irreducible if and only if the corresponding state f is an extreme point of the convex set of positive linear functionals on A of norm ≤ 1.

To prove this result one notes first that a representation is irreducible if and only if the commutant of π(A), denoted π(A)', consists of scalar multiples of the identity. Any positive linear functional g on A dominated by f is of the form $g(x^*x) = \langle \pi(x) \xi, \pi(x) T_g \, \xi \rangle$ for some positive operator Tg in π(A)' with 0 ≤ Tg ≤ 1 in the operator order.
This is a version of the Radon–Nikodym theorem. For such g, one can write f as a sum of positive linear functionals f = g + g', so π is unitarily equivalent to a subrepresentation of πg ⊕ πg'. This shows that π is irreducible if and only if any such πg is unitarily equivalent to π, i.e. g is a scalar multiple of f, which proves the theorem.

Extremal states are usually called pure states. Note that a state is a pure state if and only if it is extremal in the convex set of states. The theorems above for C*-algebras are valid more generally in the context of B*-algebras with approximate identity.

Generalizations

The Stinespring factorization theorem characterizing completely positive maps is an important generalization of the GNS construction.

History

Gelfand and Naimark's paper on the Gelfand–Naimark theorem was published in 1943.[1] Segal recognized the construction that was implicit in this work and presented it in sharpened form.[2] In his paper of 1947, Segal showed that it is sufficient, for any physical system that can be described by an algebra of operators on a Hilbert space, to consider the irreducible representations of a C*-algebra. In quantum theory this means that the C*-algebra is generated by the observables. This, as Segal pointed out, had been shown earlier by John von Neumann only for the specific case of the non-relativistic Schrödinger–Heisenberg theory.[3]

References

• William Arveson, An Invitation to C*-Algebras, Springer-Verlag, 1981.
• Jacques Dixmier, Les C*-algèbres et leurs Représentations, Gauthier-Villars, 1969. English translation: Dixmier, Jacques (1982). C*-algebras. North-Holland. ISBN 0-444-86391-5.
• Thomas Timmermann, An Invitation to Quantum Groups and Duality: From Hopf Algebras to Multiplicative Unitaries and Beyond, European Mathematical Society, 2008, ISBN 978-3-03719-043-2 – Appendix 12.1, section: GNS construction (p.
371)
• Stefan Waldmann: On the representation theory of deformation quantization. In: Deformation Quantization: Proceedings of the Meeting of Theoretical Physicists and Mathematicians, Strasbourg, May 31 – June 2, 2001, Gruyter, 2002, ISBN 978-3-11-017247-8, pp. 107–134 – section 4: The GNS construction (p. 113).

Inline references

1. I. M. Gelfand, M. A. Naimark (1943). "On the imbedding of normed rings into the ring of operators on a Hilbert space". Math. Sbornik 12 (2): 197–217.
2. Richard V. Kadison: Notes on the Gelfand–Neimark theorem. In: Robert S. Doran (ed.): C*-Algebras: 1943–1993. A Fifty Year Celebration, AMS special session commemorating the first fifty years of C*-algebra theory, January 13–14, 1993, San Antonio, Texas, American Mathematical Society, pp. 21–54, ISBN 0-8218-5175-6.
3. I. E. Segal (1947). "Irreducible representations of operator algebras". Bull. Am. Math. Soc. 53: 73–88.

Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Gelfand–Naimark–Segal construction", available in its original form here: http://en.wikipedia.org/w/index.php?title=Gelfand%E2%80%93Naimark%E2%80%93Segal_construction
http://physicspages.com/2011/07/07/doppler-effect/
Notes on topics in science

## Doppler effect

Required math: vectors, algebra
Required physics: basics of relativity

The world line of a photon is a null line, so any vector tangent to a photon's world line is a null vector. This means in particular that a four-velocity cannot be defined for a photon, since all four-velocities have to satisfy the condition

$\displaystyle \vec{U}\cdot\vec{U}=-1$

Another consequence of this is that, if we require a photon's momentum to be parallel to its world line, then the momentum must also be a null vector. Since the four-momentum is defined as the vector ${\vec{p}}$ with ${p^{0}}$ equal to the energy and the other three components equal to the three-momentum, this means that for photons

$\displaystyle E^{2}=\mathbf{p}\cdot\mathbf{p}$

In particular, if the photon is moving in the ${x}$ direction, then ${p^{1}=E}$ and ${p^{2}=p^{3}=0}$. From quantum mechanics, we know (well, it's one of the postulates of quantum mechanics anyway) that

$\displaystyle E=h\nu$

where ${\nu}$ is the frequency of the photon and ${h}$ is Planck's constant. We can combine this with the Lorentz transformation of the photon's four-momentum to get a formula for the Doppler shift.

The Doppler effect occurs because the observer is moving relative to a light source. If light is being emitted by a source such as a star, then the light will have a particular frequency (or in general a mixture of frequencies, but we'll concentrate on monochromatic light), which can be measured as the number of peaks in the wave that pass a fixed point in one second. If the observer moves towards the light source, then in that second he will pass a greater number of peaks in the wave, and thus the frequency of the light appears higher, or blue-shifted, since for visible light the colour appears shifted towards the blue end of the spectrum. Similarly, if the observer moves away from the light source, the frequency appears lower and the light is red-shifted.
Note that this effect does not violate the postulate of the constancy of the speed of light, which is fundamental to relativity. The light itself still moves at the same speed relative to the moving observers; what changes is the frequency, and hence the energy, of the light that is observed.

If the photon is moving at angle ${\theta}$ relative to the ${x}$ axis, then assuming it is moving in the ${x}$-${y}$ plane, its momentum is

$\displaystyle \vec{p} = (E,E\cos\theta,E\sin\theta,0) = (h\nu,h\nu\cos\theta,h\nu\sin\theta,0)$

Since ${E=h\nu}$, we use the Lorentz transformation for an observer moving at speed ${v}$ along the ${x}$ axis to get, for ${p^{\bar{0}}=h\bar{\nu}}$ (be careful not to confuse the symbol ${\nu}$ (Greek lowercase nu) with ${v}$):

$\displaystyle h\bar{\nu} = h\nu(\gamma-\gamma v\cos\theta) = \frac{h\nu}{\sqrt{1-v^{2}}}(1-v\cos\theta)$

$\displaystyle \frac{\bar{\nu}}{\nu} = \frac{1-v\cos\theta}{\sqrt{1-v^{2}}}$

In the special case where the photon is moving along the ${x}$ axis, ${\theta=0}$ and the formula becomes

$\displaystyle \frac{\bar{\nu}}{\nu}=\sqrt{\frac{1-v}{1+v}}$

If ${v>0}$, this formula gives a red-shift, since ${\bar{\nu}<\nu}$. If ${v<0}$, the direction of motion of the observer relative to the light is reversed and we get a blue-shift. Note that it is impossible for the Doppler shift to reduce a photon's frequency to zero, since this would require ${v=1}$, and relativity forbids anything except massless particles (photons, mainly) from moving at the speed of light.
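The general formula derived above is easy to check numerically; here is a minimal sketch (the function name is an invented example, with v in units of c):

```python
import math

def doppler_ratio(v, theta):
    """Observed/emitted frequency ratio nu_bar/nu for an observer
    moving at speed v (units of c) along the x axis, with the photon
    at angle theta to the x axis."""
    return (1.0 - v * math.cos(theta)) / math.sqrt(1.0 - v ** 2)
```

For θ = 0 this reduces to √((1 − v)/(1 + v)) as above, and for v > 0 the ratio is less than 1, i.e. a red-shift; θ = π/2 gives the purely transverse (time-dilation) shift 1/√(1 − v²).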
http://mathoverflow.net/revisions/73658/list
I was trying to write this as a comment, but the place is too small. According to the intro of http://www.jstor.org/stable/1428226, the perimeter of the typical cell in $\mathbb{R}^2$ is $4/\sqrt{\lambda}$ (where $\lambda$ is the intensity of the Poisson point process). It means that if you take a large ball, compute the sum of perimeters of all cells, and divide by the number of cells, you converge to this value. So using ergodicity the average perimeter should be the average number of cells (i.e. of points) multiplied by the perimeter of the typical cell, and divided by $2$ (because each cell is counted twice), which makes $2\sqrt{\lambda}$ (asymptotically). All this is quite vague, but in the same intro there are very good references, which also allow for arbitrary dimension.
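The two mean values above can be packaged as a small sketch (illustrative only; the function names are invented and the formulas are just the ones quoted):

```python
import math

def typical_cell_perimeter(lam):
    # expected perimeter of the typical Poisson-Voronoi cell in R^2,
    # for a Poisson point process of intensity lam (points per unit area)
    return 4.0 / math.sqrt(lam)

def edge_length_per_unit_area(lam):
    # lam cells per unit area, each cell boundary counted twice,
    # so the total edge length per unit area is 2*sqrt(lam)
    return lam * typical_cell_perimeter(lam) / 2.0
```

For instance, at intensity λ = 4 the typical cell has perimeter 2 and the tessellation carries edge length 4 per unit area.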
http://math.stackexchange.com/questions/97309/legendre-symbol-and-from-mathbbz-p-mathbbz?answertab=active
# Legendre symbol and a map from $\mathbb{Z}/p\mathbb{Z}$

Let's define a map $$\Phi:\mathbb{Z}/p\mathbb{Z}\to\mathbb{Z}/p\mathbb{Z}$$ as $$\Phi(x) = x^{(p-1)/2}.$$ It's easy to prove that $$\Phi(x) = 1,~~~\text{iff}~~~x = b^2~~~\text{for some nonzero}~~~b\in\mathbb{Z}/p\mathbb{Z}$$ and $$\Phi(0) = 0.$$ And consider the Legendre symbol $$\left(\frac{n}{p}\right): \mathbb{Z}/p\mathbb{Z}\to \{\pm1, 0\}.$$ Why is it false that $$\left(\frac{n}{p}\right) = \Phi(n)?$$ More exactly, that $$\left(\frac{n}{p}\right) = 1\in\mathbb{Z},~~~\text{if}~~~\Phi(n) = 1\in\mathbb{Z}/p\mathbb{Z},$$ $$\left(\frac{n}{p}\right) = -1\in\mathbb{Z},~~~\text{if}~~~\Phi(n) = - 1\in\mathbb{Z}/p\mathbb{Z}$$ and $$\left(\frac{n}{p}\right) = 0,~~~\text{if}~~~\Phi(n) = 0.$$ Thanks.

- 4 It is not false. I don't understand what you are asking; what you write for $\Phi(n)$ can be taken to be the definition of the Legendre symbol. If you proved that $\Phi(n)$ is only 1 if $n$ is a square, I am left wondering: what is your definition of the Legendre symbol? – Eric♦ Jan 8 '12 at 6:06
– platinumtucan Jan 8 '12 at 6:16
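A quick computational sketch (not part of the original thread; both function names are invented) comparing $\Phi$, computed via Euler's criterion, with the Legendre symbol defined directly through quadratic residues, for small odd primes:

```python
def phi(n, p):
    """Euler's criterion for an odd prime p: n^((p-1)/2) mod p,
    with the residue p-1 reported as -1."""
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def legendre_brute(n, p):
    """Legendre symbol from the definition: 1 for a nonzero square
    mod p, -1 for a non-square, 0 for n divisible by p."""
    n %= p
    if n == 0:
        return 0
    return 1 if any((b * b) % p == n for b in range(1, p)) else -1
```

The two agree on every residue class for every odd prime checked, which is exactly the point of the accepted comment: $\Phi$ can be taken as the definition of the Legendre symbol.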
http://mathhelpforum.com/algebra/191018-reciprocal-quadratic-function.html
# Thread:

1. ## Reciprocal of a Quadratic Function

Say you have a function f(x)=1/(x^2+6x+11). Would you have to complete the square to figure out the vertex of the parabola in the denominator (then sub the x-value into the function to figure out the vertex in the rational function), or is there an easier way to draw the graph?

2. ## Re: Reciprocal of a Quadratic Function

Personally, I've never heard of the vertex of a denominator? ... If you have to find the vertex of a function (in this case a rational one), then the first derivative can be a good option.

3. ## Re: Reciprocal of a Quadratic Function

Originally Posted by Dragon08
Say you have a function f(x)=1/(x^2+6x+11). Would you have to complete the square to figure out the vertex of the parabola in the denominator (then sub the x-value into the function to figure out the vertex in the rational function), or is there an easier way to draw the graph?

The quadratic function in the denominator is positive for all x. The vertex of the quadratic function is an absolute minimum for that function. The minimum of the quadratic function in the denominator corresponds to the absolute maximum of the rational function, in that they both have the same x coordinate. The maximum of the rational function is equal to the reciprocal of the minimum of the quadratic function.

4. ## Re: Reciprocal of a Quadratic Function

Originally Posted by SammyS
The quadratic function in the denominator is positive for all x. The vertex of the quadratic function is an absolute minimum for that function. The minimum of the quadratic function in the denominator corresponds to the absolute maximum of the rational function, in that they both have the same x coordinate. The maximum of the rational function is equal to the reciprocal of the minimum of the quadratic function.

So you still have to figure out the minimum value of the quadratic function by completing the square, right?

5.
## Re: Reciprocal of a Quadratic Function

Yes, and for this problem that is very simple. 6/3 = 3 and $3^2 = 9$. 11 = 9 + 3

6. ## Re: Reciprocal of a Quadratic Function

Originally Posted by HallsofIvy
Yes, and for this problem that is very simple. 6/3 = 3 and $3^2 = 9$. 11 = 9 + 3

Someone needs to check their work!

7. ## Re: Reciprocal of a Quadratic Function

See the graph. The vertex is at ( $-3,2$). This could also have been calculated using differentiation. Since the parabola is of the form $X^2=4aY$, the slope at the vertex must be equal to zero. Therefore $y=x^2+6x+11$ gives $\frac{dy}{dx}=2x+6$, and setting $0=2x+6$ yields $x=-3$.
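The completing-the-square route discussed in this thread can be sketched in a few lines (illustrative only, not from the original posts; the function name is invented):

```python
def vertex_of_quadratic(a, b, c):
    # completing the square: y = a(x + b/(2a))^2 + (c - b^2/(4a)),
    # so the vertex is at (h, k) = (-b/(2a), c - b^2/(4a))
    h = -b / (2.0 * a)
    k = c - b * b / (4.0 * a)
    return h, k

# the denominator of f(x) = 1/(x^2 + 6x + 11):
# x^2 + 6x + 11 = (x + 3)^2 + 2
h, k = vertex_of_quadratic(1.0, 6.0, 11.0)
# the denominator's minimum is k = 2 at x = h = -3, so the rational
# function's maximum is 1/k = 0.5, at the same x coordinate
```

So the graph of f is a single hump with peak (-3, 1/2) and horizontal asymptote y = 0, matching the attached plot in post #7.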
http://mathematica.stackexchange.com/questions/tagged/diophantine-equations?sort=votes&pagesize=15
# Tagged Questions

Questions on the use of Mathematica to find integer/rational solutions to equations.

### How to find lattice points on a line segment? (6 answers, 395 views)
How do I find points on the line segment joining {-4, 11} and {16, -1} whose coordinates are positive integers?

### Is there a way to use functions like Prime[n] within Solve[]? (2 answers, 227 views)
I'm trying to see if a number can be written as the sum of two prime numbers. Ideally, I would like to use Solve[Prime[n] + Prime[m] == 100, {n, m}] But that ...

### Efficient way to solve equal sums $x_1^k+x_2^k+\dots+x_5^k=y_1^k+y_2^k+\dots+y_5^k$ with Mathematica? (1 answer, 158 views)
I need to solve the system of equations, call it $S_1$, in the integers $$x_1x_2x_3x_4x_5 = y_1y_2y_3y_4y_5$$ $$x_1^k+x_2^k+\dots+x_5^k=y_1^k+y_2^k+\dots+y_5^k,\;\; k= 2,4,6$$ I used a very ...

### Finding the number of solutions to a diophantine equation (3 answers, 530 views)
I want to count the total number of natural solutions (different from 0) of the equation $2x + 3y + z = 100$, but don't know how. How can I calculate it using Mathematica? I tried: ...

### Defining a Unique Domain for Solving Diophantine Equations (4 answers, 261 views)
I am working on a research problem in discrete geometry to do with sphere packings, and believe it or not, I have been able to reduce it to finding the solutions to the Diophantine equation, n = ...

### Solving/Reducing equations in $\mathbb{Z}/p\mathbb{Z}$ (2 answers, 323 views)
I was trying to find all the numbers $n$ for which $2^n=n\mod 10^k$ using Mathematica. My first try: Reduce[2^n == n, n, Modulus -> 100] However, I receive ...

### How can I solve the equation with integers as a solution? (2 answers, 290 views)
I want to solve the equation $$(x-1)^2 + (y-1)^2 + (z-1)^2 = 49$$ where $x$, $y$, $z$ are integers and $x \neq 1$, $y \neq 1$, $z \neq 1$. How do I tell Mathematica to do that?
### Computing Ehrhart's polynomial for a convex polytope (0 answers, 90 views)
Is there a Mathematica implementation for computing the Ehrhart polynomial of a convex polytope which is specified either by its vertices or by a set of inequalities? I am interested in knowing this ...

### How to solve this equation with positive integers as a solution? (1 answer, 278 views)
This is a problem of the United Kingdom Mathematical Olympiad. Find all triples $(x,y,z)$ of positive integers such that \biggl(1+\dfrac{1}{x}\biggr)\cdot \biggl(1+\dfrac{1}{y}\biggr)\cdot ...

### Extracting Reduce results (2 answers, 81 views)
I'm solving a Diophantine equation inside of a function using Reduce but I'm having trouble extracting the necessary parts of the answer. For example, if my input ...

### How to solve this equation with integers as a solution? (1 answer, 140 views)
I want to solve the equation $$x^y + y = y^x + x$$ where $x$, $y$ are integers. I tried Solve[x^y + y == y^x + x, {x, y}, Integers] How to solve the ...

### How do I generate a set of n-tuples containing integral solutions to a linear equation provided certain constraints? (2 answers, 59 views)
Let $m,k,p$ be fixed positive integers. I want to create a table of k-tuples $(x_1,x_2,\ldots,x_k)$ comprised of solutions in positive integers to the equation below: ...

### Using 'Reduce' to solve a set of inequalities, specified by a list (1 answer, 126 views)
I have two lists $LHS$ and $RHS$, both of size $n$. I want to solve a system of inequalities of the form: $$LHS[1] \leq RHS[1]$$ $$LHS[2] \leq RHS[2]$$ $$LHS[3] \leq RHS[3]$$ $$\vdots$$ ...
http://mathoverflow.net/questions/17803?sort=oldest
## What is the infinite-dimensional-manifold structure on the space of smooth paths mod thin homotopy?

This question is motivated by the recent paper An invitation to higher gauge theory by Baez and Huerta, and the 2007 paper Parallel Transport and Functors by Schreiber and Waldorf.

Let $M$ be a smooth, finite-dimensional manifold. A lazy path in $M$ is a smooth function $\gamma: [0,1]\to M$ such that all derivatives of $\gamma$ vanish at $0,1$. A homotopy of lazy paths $\gamma_0,\gamma_1: [0,1] \to M$ is a smooth function $\Gamma: [0,1]^2 \to M$ such that all derivatives of $\Gamma(s,t)$ vanish near $t=0,1$, and such that $\Gamma(s,t) = \gamma_s(t)$ for $s = 0,1$. A homotopy of lazy paths is lazy if additionally, for each $t$, all the $s$-derivatives of $\Gamma(s,t)$ vanish near $s = 0,1$. A homotopy (of possibly non-lazy paths) is thin if $\text{rank}\, d\Gamma < 2$ everywhere.

Note that (non-lazy) thin homotopies include all reparameterizations, and so any (possibly non-lazy) piecewise-smooth path is thinly homotopic to a lazy path. Note also that lazy paths concatenate smoothly, and the concatenation of lazy paths is associative up to thin homotopy. Note also that if $\gamma^{-1}(t) = \gamma(1-t)$, then the concatenation $\gamma^{-1}\gamma$ is thinly homotopic to a constant path. Note also that lazy thin homotopies concatenate, and so define an equivalence relation, and if two paths are thinly homotopic, then they are lazily-thinly homotopic.

So define $\mathcal P^1(M)$ to be the set of all lazy thin homotopy equivalence classes of lazy paths in $M$. It is a groupoid with base $M$. The idea is to consider $\mathcal P^1(M) \rightrightarrows M$ as not just a groupoid but an infinite-dimensional Lie groupoid.
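As a concrete illustration of the thinness condition (a numerical sketch, not part of the original question; the paths and function names are invented), a reparameterization homotopy $\Gamma(s,t) = \gamma(\psi(s,t))$ has $d\Gamma = \gamma'(\psi)\,d\psi$, so its Jacobian has rank at most 1 everywhere, while a homotopy that genuinely sweeps out area does not:

```python
import numpy as np

def gamma(t):
    # a smooth plane path (half of the unit circle)
    return np.array([np.cos(np.pi * t), np.sin(np.pi * t)])

def Gamma(s, t):
    # reparameterization homotopy Gamma(s, t) = gamma(psi(s, t))
    # with psi(s, t) = t + s*t*(1 - t); such a homotopy is thin
    return gamma(t + s * t * (1 - t))

def Gamma_fat(s, t):
    # a translation homotopy, which sweeps out area (not thin)
    return gamma(t) + np.array([s, 0.0])

def is_thin_at(hom, s, t, h=1e-5, tol=1e-6):
    # For a plane-valued homotopy, rank d(hom) < 2 iff the 2x2
    # Jacobian is singular; approximate partials by central differences.
    dds = (hom(s + h, t) - hom(s - h, t)) / (2 * h)
    ddt = (hom(s, t + h) - hom(s, t - h)) / (2 * h)
    return abs(np.linalg.det(np.column_stack([dds, ddt]))) < tol
```

(The sample paths are not lazy, so this checks only the rank condition, not the vanishing-derivative boundary conditions.)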
I think I understand the smooth structure on $\mathcal P^1(M)$: a curve in $\mathcal P^1(M)$ should be precisely a (non-thin) homotopy of lazy thin paths, up to thin homotopy. It's not entirely clear to me that this defines a smooth structure. But it probably works in some formalism. But if I really want to think of $\mathcal P^1(M) \rightrightarrows M$ as a Lie groupoid, then I should treat $\mathcal P^1(M)$ not just as a smooth space, but actually as an (infinite-dimensional) manifold, and there are various things to check about the maps (the source and target maps should be surjective submersions, etc.). And it's not clear to me how to write down a smooth manifold structure on $\mathcal P^1(M)$. Here's what I'd like. Given a point in $\mathcal P^1(M)$, I'd like a neighborhood of it and a "diffeomorphism" between that neighborhood and some (Fréchet, maybe?) vector space, and I'd like it to be clear that the gluings are smooth. I can make a start: it's clear that the space of lazy paths in a finite-dimensional vector space is a vector space, and that thin homotopies respect addition, so that $\mathcal P^1(\mathbb R^n)$ is a vector space. It's not clear to me how to put a topology on it, and it's not clear that I can approximate $\mathcal P^1(M)$ by chopping $M$ into trivializable pieces, take $\mathcal P^1$ of each piece, and try to glue back together — thin homotopies can take a path in one trivializable patch and make it wrap around $M$ in a complicated way, providing it wraps back, for example. Hence the question: What is the manifold structure on $\mathcal P^1(M)$? - You know, you could probably also send an e-mail to Urs. He's quite friendly. – Harry Gindi Mar 11 2010 at 6:07 3 @fpqc: Yes, of course. But he's sometimes here, and Konrad and John Baez and so on are as well --- I mean, I could also walk down the hall and talk to Konrad. But then I wouldn't get, I don't know, Andrew Stacey's answer, and other folks wouldn't get to listen in. 
– Theo Johnson-Freyd Mar 11 2010 at 6:19

+2 for linking to my paper, -1 for not linking to any nLab pages! – Andrew Stacey Mar 11 2010 at 9:03

+1 for your reasons for asking this here, rather than email. If everyone starts doing this (especially asking the questions so well!), the world will be a better place. – Scott Morrison♦ Mar 11 2010 at 16:55

I'm commenting here just to ensure that Theo gets notified that I've added a link to my answer. – Andrew Stacey Mar 14 2010 at 15:27

## 2 Answers
Topologising these spaces can lead to quite strange behaviour. You want a LCTVS structure, else you haven't a hope of even starting, and that can distort the topology from what you expect. For example, if you take piecewise-smooth paths (with no quotient) then the LCTVS topology on that is the $C^0$-topology! Indeed, simply taking so-called "lazy paths" could be fraught with difficulties (I notice that you define "lazy" slightly differently to how I've seen it done before with sitting instances). Is that space a manifold? (I know the answer to this one, but if you don't then you should start with that one as it is a much easier question and will hone your skills a little.) If you really want a manifold, the solution is to go one step further. Rather than quotienting out by thin homotopies, make your "thing" into a 2-structure and put the thin homotopies in at the 2-level. Keep all paths at the 1-level. Then each level has a manifold structure and by mapping into a 1-structure you effectively quotient out by the 2-structure but never actually have to consider the quotient itself. To coin a phrase: Quotients are horrible, it's a shame so many people think otherwise. Lastly, that's not to say that there is no way of making $P^1(M)$ into a manifold. There may well be. But if there is, it'll be so convoluted and contrived that it won't look anything like the quotient of $P(M)$. A cautionary tale here is the case of all paths in a manifold, $C^\infty(\mathbb{R},M)$. That can be made into a manifold, but it has uncountably many components, for example, so looks absolutely horrid. Okay, not quite lastly. There's lots of details here that have been glossed over. If you are really interested in working out the smooth space structure of this particular space then I (and I suspect Urs and Konrad) would be very interested in seeing it done and helping out. But MO isn't the place for that. 
Hop on over to the nLab, create a spin-off of http://ncatlab.org/nlab/show/path+groupoid, and start working. ### Further Reading 1. Constructing smooth manifolds of loop spaces. canonical page. The point of this is to figure out exactly when the "standard method" (alluded to by Tim) works. The distinction between "loop" and "path" is irrelevant. 2. The Smooth Structure of the Space of Piecewise-Smooth Loops. canonical page. Why you should be very, very nervous whenever anyone says "consider piecewise-smooth maps"; and take as a cautionary tale as to the inadvisability of going beyond smooth maps in general. 3. Work of David Roberts on the nLab. This is where I got the 2-idea that I mentioned above. 4. Other relevant nLab pages: http://ncatlab.org/nlab/show/generalized+smooth+space, http://ncatlab.org/nlab/show/smooth+loop+space and further. 5. Of course, the magnificent book by Kriegl and Michor. (I'm going to create a separate MO account for that book; its role will be to post an answer on relevant questions simply saying "Read Me".) In response to Konrad's comment below, I've started an nlab page to work out the smooth structure of this space. The initial content considers the linear structure of the space of paths in some Euclidean space modded out by thin homotopy. The page is http://ncatlab.org/nlab/show/smooth+structure+of+the+path+groupoid. - Great answer, Andrew! But can we be more specific? Given any assumption on a possible manifold structure on $P_1(M)$ (e.g. that the projections to $M$ are submersions), can one prove that there is no such manifold structure? – Konrad Waldorf Mar 11 2010 at 16:58 That would be worth settling once and for all, I've wondered off-and-on about this. However, I think that that's an nlab project rather than an MO one, and I'll only start it if I know that others (you and Theo?) will join in! 
– Andrew Stacey Mar 11 2010 at 21:23 Expanding Andrew's comment above about 2-structure, one may be able to describe a Lie groupoid which presents a smooth stack of paths up to thin homotopy. I echo the above sentiment, this would be a good nLab project (not necessarily linked with mine and Andrew's) – David Roberts Mar 12 2010 at 0:42 If you want to prove something is a smooth manifold, a good way to begin is to decide what its tangent spaces ought to be. So let $\gamma_s$ be a (smooth) homotopy of lazy paths, say for $s$ in $(-\epsilon,\epsilon)$. Its derivative at $s=0$ is a vector field $\xi$ along $\gamma:=\gamma_0$. This is a section of $\gamma^\ast TM$, not necessarily a "lazy" one. The vector field is to count for nothing if $\gamma_s$ is a lazy thin homotopy. So we should take the quotient of $C^\infty(\gamma^* TM)$ by the subspace $L$ of those $\xi$ which have vanishing (higher) derivatives at $0$ and $1$ and such that $\dot{\gamma}(t)$ and $\xi(t)$ are linearly dependent in $T_{\gamma(t)}M$ for all $t$. A standard method to produce smooth charts (on path spaces in particular) is to exponentiate tangent vectors. This requires an auxiliary choice, say of a metric $g$ on $M$, so the manifold structure won't be absolutely canonical; but it may well be canonical up to diffeomorphism (strategy: define a smooth structure on the family of manifolds parametrized by the contractible manifold $Met(M)$, and show it's a smooth fibre bundle). Well, $g$ induces an $L^2$-metric on $C^\infty(\gamma^{\ast}TM)$, so we could take the orthogonal complement $L^{\perp}$ (isn't that the vector fields pointwise-orthogonal to $\dot{\gamma}$, vanishing where $\dot{\gamma}$ does?) and view that as our tangent space. 
That makes it a little clearer that it's a Fréchet space (consider the $C^k$ norms on $L^\perp$...). Let $L^{\perp}_\epsilon$ be the vector fields in $L^\perp$ which, pointwise, have length $<\epsilon$. Assume $\epsilon$ is smaller than the injectivity radius of $g$ along $\gamma$. Then one has $Exp_g \colon L^\perp_\epsilon\to \mathcal{P}^1 M$ (since it defines a diffeo from $(T_{\gamma(0)}M)_{<\epsilon}$ onto its image, $\exp_g$ preserves laziness). This map is injective, and it's a reasonable candidate for a coordinate chart. Declare such charts to be our atlas, defining, as a by-product, a topology - the coarsest that makes the charts continuous. Now you have several things to check. (I haven't - maybe it doesn't work...) One of those is that the topology is Hausdorff, so you might even want to make this into a metric space, perhaps via a Riemannian metric. -
http://math.stackexchange.com/questions/257946/how-to-solve-x1-2-x1-3-0/257948
# How to solve $x^{1/2}-x^{1/3} = 0$ How can I solve the following equation? I really can't figure out how to solve it: $x^{1/2}-x^{1/3} = 0$ Thank you. - Please, try to make the title of your question more informative. E.g., Why does $a<b$ imply $a+c<b+c$? is much more useful for other users than A question about inequality. From How can I ask a good question?: Make your title as descriptive as possible. In many cases one can actually phrase the title as the question, at least in such a way so as to be comprehensible to an expert reader. – Julian Kuelshammer Dec 13 '12 at 15:34 ## 7 Answers $$\begin{eqnarray*}x^{1/2}-x^{1/3} &=& 0 \\ \\ \iff x^{3/6} - x^{2/6} &=& 0 \\ \\ \iff x^{2/6}(x^{1/6} - 1)&=& 0 \\ \\ \iff [x^{1/6} = 1\text{ or}\;x^{2/6} &=& 0] \\ \\ \iff [x = 1\text{ or}\;x &=& 0]\end{eqnarray*}$$ - Another thumbs-up from me! +1 – Amzoti yesterday Putting $x=y^6, y^3=y^2,y^2(y-1)=0\implies y=0$ or $1\implies x=0$ or $1$ - Observe that $6$ is chosen as lcm$(2,3)=6$ – lab bhattacharjee Dec 13 '12 at 15:16 You might have heard this a thousand times, but the equation really does speak here! What it says is that when you raise a number ($x$) to two different powers ($\frac 12$ and $\frac 13$) they turn out to be equal (their difference is $0$). The only two numbers that come to mind are $0$ and $1$. $\Big[ \forall n \in \mathbb R, \;\; 0^n=0$ and $1^n=1 \Big]$ So the solution should be $x=0\ \text{ or } \ 1$ - +1 for the very nice first two paragraphs… but I almost took it back for the disclaimer! If an intuitive approach isn't mathematically sound, the response shouldn't be "ah well, accept that it's not so good, and use an alternative solution". The response should be "OK, so which bit isn't mathematically sound, and how can we make it so?" With a little work in the third paragraph, your answer can become a solution that's both mathematically sound and more intuitively clear than some of the other answers here. 
– Peter LeFanu Lumsdaine Dec 14 '12 at 3:35 I think that this can be trivially made "mathematically sound" and it's a nice way to show how carefully reading and understanding what the problem asks one to show is important before one starts writing. – Nik Bougalis Dec 14 '12 at 4:50 (Henceforth, $x$ is a variable denoting a non-negative real number) $$x^{1/2} - x^{1/3} = 0$$ if and only if $$x^{1/2} = x^{1/3}$$ if and only if $$\frac{x^{1/2}}{x^{1/3}} = 1 \qquad \text{or} \qquad x^{1/3} = 0$$ if and only if $$x^{\frac{1}{2} - \frac{1}{3}} = 1 \qquad \text{or} \qquad x = 0$$ if and only if $$x^{1/6} = 1 \qquad \text{or} \qquad x = 0$$ if and only if $$x = 1 \qquad \text{or} \qquad x = 0$$ $$x^{1/2} - x^{1/3} = 0$$ if and only if $$x^{1/2} = x^{1/3}$$ if and only if $$x = x^{2/3}$$ if and only if $$x^3 = x^2$$ if and only if $$x = 1 \qquad \text{or} \qquad x^2 = 0$$ if and only if $$x = 1 \qquad \text{or} \qquad x = 0$$ - $x^{1/2}-x^{1/3} = 0$ $x^{1/2}=x^{1/3}$ $(x^{1/2})^6=(x^{1/3})^6$ $x^3=x^2$ $x^3-x^2=0$ $x^2(x-1)=0$ $x^2 = 0$ or $x-1=0$ $x = 0$ or $x = 1$ - Similar to the first answer. $$x^{\frac{1}{2}}-x^{\frac{1}{3}}=0$$ Rewrite $x^{\frac{1}{2}}$ as something clever: $$x^{\frac{3}{6}}-x^{\frac{1}{3}}=0$$ (In case that's illegible, that's x^(3/6)) Now factor out $x^{\frac{1}{3}}$: $$x^{\frac{1}{3}}(x^{\frac{2}{6}}-1)=0$$ Now divide both sides by $x^{\frac{1}{3}}$. Note that this assumes $x \neq 0$; we will have to go back and check for that case later. $$x^{\frac{2}{6}}-1=0$$ $$x^{\frac{2}{6}}=1$$ $$x^{\frac{1}{3}}=1$$ $$(x^{\frac{1}{3}})^3=1^3$$ $$x=1$$ Now let's go back and check if it works for $x=0$: $$(0)^{\frac{1}{2}}-(0)^{\frac{1}{3}}=0$$ Yep, that equation holds! So either $x=1$ or $x=0$. - This is late, but for problems like these, I use Wolfram Alpha if I'm stuck. If you log in, it lets you see 3 step-by-step solutions daily. Here's the page for this problem. -
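The substitution used in several of the answers above, $y = x^{1/6}$ (so the equation becomes $y^3 - y^2 = y^2(y-1) = 0$), is easy to sanity-check numerically. A minimal Python sketch (the helper name `f` is just for illustration):

```python
# candidates from the substitution y = x**(1/6):  y**3 - y**2 = y**2 * (y - 1) = 0
roots_y = [0.0, 1.0]
roots_x = [y**6 for y in roots_y]   # back-substitute x = y**6

def f(x):
    """Left-hand side of the original equation, for x >= 0."""
    return x**0.5 - x**(1.0 / 3.0)

# both candidates solve the original equation (up to floating-point error) ...
assert all(abs(f(x)) < 1e-12 for x in roots_x)
# ... while a generic point such as x = 0.5 does not satisfy it
```

This only confirms the two roots; the algebra in the answers above is what shows there are no others.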
http://mathoverflow.net/questions/10123?sort=newest
## presentation for GL(n,K) Let $K$ be a field, $n \geq 1$. Denote by $E_{i,j}$ the elementary matrix having $1$ on the diagonal and in the entry $(i,j)$, and by $E_i(a)$ the elementary matrix $diag(1,...,a,...,1)$. You know that $GL_n(K)$ is generated by these matrices, but what relations do we need in order to get a presentation for $GL_n(K)$? Here are some relations, which correspond to simple relations about row operations:

• $E_i(1)=1$
• $E_i(ab) = E_i(a) E_i(b)$
• $E_i(a) E_j(b) = E_j(b) E_i(a)$
• $(E_j(-1) E_{ij})^2=1$
• $E_j(a+b)^{-1} E_{ij} E_j(a+b) = E_j(a)^{-1} E_{ij} E_j(b) E_i(a)^{-1} E_{ij} E_j(a)$
• $(E_{ji} E_{ij} E_{ji} E_j(-1))^2=1$

Are these all the relations? How can we prove that? EDIT: Mariano has given a counterexample when $K = \mathbb{F}_2$. Well, how can we fix this? Add more relations? Incorporate the structure of $K$ as a ring? What about concrete examples such as $K=\mathbb{Q}$? - One way of getting a presentation for a Lie group is exponentiating a presentation from the Lie algebra. For the Lie algebra, the presentation will not be hard to do (just take the Cartan subalgebra, $e_{i,i+1}$, and $e_{i+1,i}$), and you can work out the relations for them. It is not necessary to use the matrices corresponding to all $(i,j)$-s - that might overcomplicate things. – Vinoth Dec 30 2009 at 12:10 It seems to me that this should be a presentation of the simply connected form of the Lie group. But GL_2(R) is not simply connected: the universal cover is basically the metaplectic group. I could imagine much worse things could happen with nonarchimedean fields, but I don't know. – David Speyer Dec 30 2009 at 21:40 Heh. Is $\mathbb F_2$ not concrete? :) – Mariano Suárez-Alvarez Jan 4 2010 at 2:04 Can you give a presentation for $GL_n(\mathbb{F}_2)$? – Martin Brandenburg Jan 4 2010 at 9:42 See Steinberg, Lectures on Chevalley groups. 
For $n=2$ extra relations are needed. This is related to David's comment. – Victor Protsak Jun 6 2010 at 6:03 ## 4 Answers You might want to look at Cohn's paper "On the structure of the ${\rm GL}_{2}$ of a ring", MR0207856. - I suggest you give Mariano a check mark. MathOverflow does not function properly when you change a question substantially after it has been correctly answered. Regarding your revised question, we can fix the presentation by throwing it away and using Steinberg's presentation, which works without problems. I couldn't find an online copy of his paper, but this 1971 article describes how to present the R-points of any semisimple simply connected Chevalley-Demazure group, for R any commutative ring. I have heard that the presentation in the case of the general linear group is due to Schur, but I don't know a reference. - Suppose $R=\mathbb F_2$ is the field with two elements and $n=2$. Then we need not consider the matrices $E_i(a)$, for the only possible $a$ is $1$, so your group is generated by $\alpha=E_{12}$ and $\beta=E_{21}$. Your fourth relation implies that $$\alpha^2=\beta^2=1.$$ Your fifth equation is empty in this case (for it is only meaningful when $a$ and $b$ are non-zero elements of the field which add up to a non-zero element of the field!) Finally, your sixth relation in this case tells us that $(\alpha\beta\alpha)^2=(\beta\alpha\beta)^2=1$, but these two equalities follow from the previous equation. We thus see that the group generated by the $E_i(a)$'s and the $E_{ij}$ subject to your relations is, in this case, $$\langle\alpha,\beta:\alpha^2=\beta^2=1\rangle.$$ This is an infinite group, so it is not $\mathrm{GL}(2,\mathbb F_2)$. 
Exactly the same reasoning shows that the same happens for all $n\geq 2$: you get free products of cyclic groups of order $2$. NB: It wouldn't be the first time that $\mathbb F_2$ behaves differently from other fields... I doubt that is the case, and surely someone with enough determination will be able to use GAP to check whether the group given by your generators and relations is $\mathrm{GL}$ or not, at least for other small fields... -
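Mariano's counting is easy to confirm by brute force; here is a quick Python sketch (with NumPy, in place of the GAP computation suggested above). It also exhibits a concrete relation, $(\alpha\beta)^3 = 1$ in $\mathrm{GL}(2,\mathbb F_2)$, which does not follow from $\alpha^2=\beta^2=1$ alone — those two relations only present the infinite dihedral group:

```python
from itertools import product

import numpy as np

# |GL(2, F_2)| by exhaustion: 2x2 matrices over F_2 with nonzero determinant mod 2
units = [m for m in product((0, 1), repeat=4) if (m[0] * m[3] - m[1] * m[2]) % 2 == 1]
assert len(units) == 6               # GL(2, F_2) is isomorphic to S_3, of order 6

alpha = np.array([[1, 1], [0, 1]])   # E_12 over F_2
beta = np.array([[1, 0], [1, 1]])    # E_21 over F_2
I = np.eye(2, dtype=int)

# alpha and beta are involutions mod 2, consistent with the fourth relation ...
assert np.array_equal((alpha @ alpha) % 2, I)
assert np.array_equal((beta @ beta) % 2, I)

# ... but alpha*beta has order 3, an extra relation the proposed list misses
ab = (alpha @ beta) % 2
assert np.array_equal((ab @ ab @ ab) % 2, I)
```

Adding $(\alpha\beta)^3=1$ to $\alpha^2=\beta^2=1$ presents $S_3$, which is exactly $\mathrm{GL}(2,\mathbb F_2)$.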
A short intuitive description for $K_2(A)$ is: it measures how much more information is there in the elementary matrices of a ring which does not follow formally from the Steinberg relations. - 3 The kernel of the stabilized map $St(\infty,A)\to E(\infty,A)$ is $K_2(A)$, not $K_3(A)$. Milnor showed that $K_2(\mathbb Z)$ is cyclic of order 2. – Allen Hatcher Dec 30 2009 at 16:37
http://pediaview.com/openpedia/Ising_model
# Ising model The Ising model, named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. The model allows the identification of phase transitions, as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition.[1] The Ising model was invented by the physicist Wilhelm Lenz (1920), who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model has no phase transition and was solved by Ising (1925) himself in his 1924 thesis.[2] The two-dimensional square-lattice Ising model is much harder, and was given an analytic description much later, by Lars Onsager (1944). It is usually solved by a transfer-matrix method, although there exist different approaches, more related to quantum field theory. In dimensions greater than four, the phase transition of the Ising model is described by mean field theory. ## Definition Consider a set of lattice sites Λ, each with a set of adjacent sites (e.g. a graph) forming a d-dimensional lattice. For each lattice site j ∈ Λ there is a discrete variable σj such that σj ∈ {+1, −1}. A spin configuration, σ = (σj)j∈Λ, is an assignment of a spin value to each lattice site. For any two adjacent sites i, j ∈ Λ one has an interaction Jij, and a site i ∈ Λ has an external magnetic field hi.
The energy of a configuration σ is given by the Hamiltonian function $H(\sigma) = - \sum_{<i~j>} J_{ij} \sigma_i \sigma_j -\mu \sum_{j} h_j\sigma_j$ where the first sum is over pairs of adjacent spins (every pair is counted once). <ij> indicates that sites i and j are nearest neighbors. The magnetic moment is given by µ. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally.[3] The configuration probability is given by the Boltzmann distribution with inverse temperature β ≥ 0: $P_\beta(\sigma) ={e^{-\beta H(\sigma)} \over Z_\beta},$ where $\beta = (k_B T)^{-1}$ and the normalization constant $Z_\beta = \sum_\sigma e^{-\beta H(\sigma)}$ is the partition function. For a function f of the spins ("observable"), one denotes by $\langle f \rangle_\beta = \sum_\sigma f(\sigma) P_\beta(\sigma) \,$ the expectation (mean value) of f. The configuration probabilities Pβ(σ) represent the probability of being in a state with configuration σ in equilibrium. ### Discussion The minus sign on each term of the Hamiltonian function H(σ) is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if, for all pairs i, j,

• Jij > 0, the interaction is called ferromagnetic;
• Jij < 0, the interaction is called antiferromagnetic;
• Jij = 0, the spins are noninteracting;

otherwise the system is called nonferromagnetic. In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs. The sign convention of H(σ) also explains how a spin site j interacts with the external field. Namely, the spin site wants to line up with the external field.
If

• hj > 0, the spin site j desires to line up in the positive direction;
• hj < 0, the spin site j desires to line up in the negative direction;
• hj = 0, there is no external influence on the spin site.

Ising models are often examined without an external field interacting with the lattice: $H(\sigma) = - \sum_{<i~j>} J_{ij} \sigma_i \sigma_j.$ Another common simplification is to assume that all of the nearest neighbors <ij> have the same interaction strength. Then we can set Jij = J for all pairs i, j in Λ such that: $H(\sigma) = -J\sum_{<i~j>}\sigma_i \sigma_j.$ When the external field is everywhere zero, h = 0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a nonzero field breaks this symmetry. The interesting statistical questions to ask are all in the limit of large numbers of spins:

1. In a typical configuration, are most of the spins +1 or −1, or are they split equally?
2. If a spin at any given position i is 1, what is the probability that the spin at position j is also 1?
3. If β is changed, is there a phase transition?
4. On a lattice, what is the fractal dimension of the shape of a large cluster of +1 spins?

## Basic properties and history (Figure: Visualization of the translation-invariant probability measure of the one-dimensional Ising model.) The most studied case of the Ising model is the translation-invariant ferromagnetic zero-field model on a d-dimensional lattice, namely, $\Lambda = \mathbb{Z}^d$, Jij = 1, h = 0. In his 1924 PhD thesis, Ising solved the model for the 1D case, which can be thought of as a linear horizontal lattice where each site only interacts with its left and right neighbor. In one dimension, the solution admits no phase transition.[4] Namely, for any positive β, the correlations $\langle\sigma_i\sigma_j\rangle$ decay exponentially in |i−j|: $\langle \sigma_i \sigma_j \rangle_\beta \leq C \exp(-c(\beta) |i-j|),\,$ and the system is disordered.
On the basis of this result, he incorrectly concluded that this model does not exhibit phase behaviour in any dimension. The Ising model undergoes a phase transition between an ordered and a disordered phase in 2 dimensions or more. Namely, the system is disordered for small β, whereas for large β the system exhibits ferromagnetic order: $\langle \sigma_i \sigma_j \rangle_\beta \geq c(\beta) > 0.\,$ This was first proven by Rudolf Peierls in 1936[5], using what is now called a Peierls argument. The Ising model on a two-dimensional square lattice with no magnetic field was analytically solved by Lars Onsager (1944). Onsager showed that the correlation functions and free energy of the Ising model are determined by a noninteracting lattice fermion. Onsager announced the formula for the spontaneous magnetization for the 2-dimensional model in 1949 but did not give a derivation. Yang (1952) gave the first published proof of this formula, using a limit formula for Fredholm determinants, proved in 1951 by Szegő in direct response to Onsager's work.[6] ## Historical significance One of Democritus' arguments in support of atomism was that atoms naturally explain the sharp phase boundaries observed in materials, as when ice melts to water or water turns to steam. His idea was that small changes in atomic-scale properties would lead to big changes in the aggregate behavior. Others believed that matter is inherently continuous, not atomic, and that the large-scale properties of matter are not reducible to basic atomic properties. While the laws of chemical binding made it clear to nineteenth century chemists that atoms were real, among physicists the debate continued well into the early twentieth century. Atomists, notably James Clerk Maxwell and Ludwig Boltzmann, applied Hamilton's formulation of Newton's laws to large systems, and found that the statistical behavior of the atoms correctly describes room temperature gases. 
But classical statistical mechanics did not account for all of the properties of liquids and solids, nor of gases at low temperature. Once modern quantum mechanics was formulated, atomism was no longer in conflict with experiment, but this did not lead to a universal acceptance of statistical mechanics, which went beyond atomism. Josiah Willard Gibbs had given a complete formalism to reproduce the laws of thermodynamics from the laws of mechanics. But many faulty arguments survived from the 19th century, when statistical mechanics was considered dubious. The lapses in intuition mostly stemmed from the fact that the limit of an infinite statistical system has many zero-one laws which are absent in finite systems: an infinitesimal change in a parameter can lead to big differences in the overall, aggregate behavior, as Democritus expected. ### No phase transitions in finite volume In the early part of the twentieth century, some believed that the partition function could never describe a phase transition, based on the following argument:

1. The partition function is a sum of $e^{-\beta E}$ over all configurations.
2. The exponential function is everywhere analytic as a function of β.
3. The sum of analytic functions is analytic.

But the logarithm of the partition function is not analytic as a function of the temperature near a phase transition, so the theory doesn't work. This argument works for a finite sum of exponentials, and correctly establishes that there are no singularities in the free energy of a system of a finite size. For systems which are in the thermodynamic limit (that is, for infinite systems) the infinite sum can lead to singularities. The convergence to the thermodynamic limit is fast, so that the phase behavior is apparent already on a relatively small lattice, even though the singularities are smoothed out by the system's finite size. This was first established by Rudolf Peierls in the Ising model. 
### Peierls droplets Shortly after Lenz and Ising constructed the Ising model, Peierls was able to explicitly show that a phase transition occurs in two dimensions. To do this, he compared the high-temperature and low-temperature limits. At infinite temperature, β = 0, all configurations have equal probability. Each spin is completely independent of any other, and if typical configurations at infinite temperature are plotted so that plus/minus are represented by black and white, they look like television snow. For high, but not infinite, temperature, there are small correlations between neighboring positions, the snow tends to clump a little bit, but the screen stays random looking, and there is no net excess of black or white. A quantitative measure of the excess is the magnetization, which is the average value of the spin: $M= {1\over N} \sum_{i=1}^{N} S_i.$ A bogus argument analogous to the argument in the last section now establishes that the magnetization in the Ising model is always zero.

1. Every configuration of spins has equal energy to the configuration with all spins flipped.
2. So for every configuration with magnetization M there is a configuration with magnetization −M with equal probability.
3. So the magnetization is zero.

As before, this only proves that the magnetization is zero at any finite volume. For an infinite system, fluctuations might not be able to push the system from a mostly-plus state to a mostly-minus state with any nonzero probability. For very high temperatures, the magnetization is zero, as it is at infinite temperature. To see this, note that if spin A has only a small correlation ε with spin B, and B is only weakly correlated with C, but C is otherwise independent of A, the amount of correlation of A and C goes like $\varepsilon^2$. For two spins separated by distance L, the amount of correlation goes as $\varepsilon^L$, but if there is more than one path by which the correlations can travel, this amount is enhanced by the number of paths. 
The number of paths of length L on a square lattice in d dimensions is $N(L) = (2d)^L$ since there are 2d choices for where to go at each step. A bound on the total correlation is given by the contribution to the correlation by summing over all paths linking two points, which is bounded above by the number of paths of length L times the correlation along each path, summed over all lengths: $\sum_L (2d)^L \varepsilon^L,$ which goes to zero when ε is small. At low temperatures, β ≫ 1, the configurations are near the lowest energy configuration, the one where all the spins are plus or all the spins are minus. Peierls asked whether it is statistically possible at low temperature, starting with all the spins minus, to fluctuate to a state where most of the spins are plus. For this to happen, droplets of plus spin must be able to congeal to make the plus state. The energy of a droplet of plus spins in a minus background is proportional to the perimeter of the droplet L, where plus spins and minus spins neighbor each other. For a droplet with perimeter L, the area is somewhere between (L − 2)/2 (the straight line) and $(L/4)^2$ (the square box). The probability cost for introducing a droplet has the factor $e^{-\beta L}$, but this contributes to the partition function multiplied by the total number of droplets with perimeter L, which is less than the total number of paths of length L: $N(L)< 4^{2L}.$ So that the total spin contribution from droplets, even overcounting by allowing each site to have a separate droplet, is bounded above by $\sum_L L^2 4^{-2L} e^{-4\beta L}$ which goes to zero at large β. For β sufficiently large, this exponentially suppresses long loops, so that they cannot occur, and the magnetization never fluctuates too far from −1. So Peierls established that the magnetization in the Ising model eventually defines superselection sectors, separated domains which are not linked by finite fluctuations. 
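Taking the droplet bound exactly as displayed above, a short Python sketch shows how small it is and how quickly it decays with β (the cutoff `L_max` and the β values are illustrative; the series converges fast enough that the cutoff is harmless):

```python
import math

def droplet_bound(beta, L_max=200):
    """Evaluate sum_L L^2 * 4^(-2L) * exp(-4*beta*L), the bound quoted above."""
    return sum(L**2 * 4.0**(-2 * L) * math.exp(-4.0 * beta * L)
               for L in range(1, L_max + 1))

print(droplet_bound(1.0))   # already of order 1e-3 at beta = 1
print(droplet_bound(2.0))   # and smaller still as beta grows
```

The first term of the series dominates, which is why the bound shrinks roughly like $e^{-4\beta}/16$.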
### Kramers–Wannier duality Main article: Kramers–Wannier duality Kramers and Wannier were able to show that the high-temperature expansion and the low-temperature expansion of the model are equal up to an overall rescaling of the free energy. This allowed the phase transition point in the two-dimensional model to be determined exactly (under the assumption that there is a unique critical point). ### Yang–Lee zeros Main article: Lee–Yang theorem After Onsager's solution, Yang and Lee investigated the way in which the partition function becomes singular as the temperature approaches the critical temperature. ## Monte Carlo Methods for Numerical Simulation ### Definitions The Ising model can often be difficult to evaluate numerically if there are many states in the system. Consider an Ising model with L lattice sites. Let:

• L = |Λ|: the total number of sites on the lattice,
• σj ∈ {−1, +1}: an individual spin site on the lattice, j = 1, ..., L,
• S ∈ {−1, +1}^L: the state of the system.

Since every spin site has ±1 spin, there are $2^L$ different states that are possible.[7] This motivates simulating the Ising model using Monte Carlo methods.[7] The Hamiltonian that is commonly used to represent the energy of the model when using Monte Carlo methods is: $H(\sigma) = - J\sum_{<ij>}\sigma_i \sigma_j -h\sum_{j}\sigma_j.$ Furthermore, the Hamiltonian is further simplified by assuming a zero external field h, since many questions that are posed to be solved using the model can be answered in the absence of an external field. 
This leads us to the following energy equation for state σ:

$H(\sigma) = - J\sum_{<ij>}\sigma_i \sigma_j.$

Two examples of estimates of interest are the specific heat and the magnetization of the magnet at a given temperature.[7]

### The Metropolis Algorithm

#### Overview of Algorithm

The Metropolis–Hastings algorithm is the most commonly used Monte Carlo algorithm for computing Ising model estimates.[7] The algorithm first chooses selection probabilities g(μ, ν), which represent the probability that state ν is selected by the algorithm out of all states, given that we are in state μ. It then uses acceptance probabilities A(μ, ν) so that detailed balance is satisfied. If the new state ν is accepted, then we move to that state and repeat with selecting a new state and deciding whether to accept it. If ν is not accepted, we stay in μ. This process is repeated until some stopping criterion is met, which for the Ising model is often when the lattice becomes ferromagnetic, meaning all of the sites point in the same direction.[7] When implementing the algorithm, we must ensure that g(μ, ν) is selected such that ergodicity is met. In thermal equilibrium a system's energy only fluctuates within a small range.[7] This is the motivation behind the concept of single-spin-flip dynamics, which states that in each transition we will only change one of the spin sites on the lattice.[7] Furthermore, by using single-spin-flip dynamics, we can get from any state to any other state by flipping each site that differs between the two states one at a time. The maximum change between the energy of the present state, Hμ, and any possible new state's energy Hν (using single-spin-flip dynamics) is 2J per bond, between the spin we choose to "flip" and that spin's neighbor.[7] Thus, in a 1D Ising model, where each site has 2 neighbors (left and right), the maximum difference in energy would be 4J.
Let c represent the lattice coordination number: the number of nearest neighbors that any lattice site has. We assume that all sites have the same number of neighbors due to periodic boundary conditions.[7]

#### Algorithm Specification

Specifically for the Ising model and using single-spin-flip dynamics, we can establish the following. Since there are L total sites on the lattice, using single-spin-flip as the only way we transition to another state, we can see that there are a total of L new states ν reachable from our present state μ. The algorithm assumes that the selection probabilities are equal for the L states: g(μ, ν) = 1/L. Detailed balance tells us that the following equation must hold:

$\frac{P(\mu,\nu)}{P(\nu,\mu)}=\frac{g(\mu,\nu)A(\mu,\nu)}{g(\nu,\mu)A(\nu,\mu)}= \frac{A(\mu,\nu)}{A(\nu,\mu)}=\frac{P_\beta(\nu)}{P_\beta(\mu)}=\frac{e^{-\beta(H_\nu)} /Z}{e^{-\beta(H_\mu)} /Z}=e^{-\beta(H_\nu-H_\mu)}.$

Thus, we want to select the acceptance probability for our algorithm to satisfy

$\frac{A(\mu,\nu)}{A(\nu,\mu)}=e^{-\beta(H_\nu-H_\mu)}.$

If Hν > Hμ, then A(ν, μ) > A(μ, ν). Metropolis sets the larger of A(μ, ν) and A(ν, μ) to be 1. By this reasoning the acceptance algorithm is:[7]

$A(\mu,\nu)=\begin{cases} e^{-\beta(H_\nu-H_\mu)}, & \text{if }H_\nu-H_\mu>0 \\ 1, & \text{otherwise}. \end{cases}$

The basic form of the algorithm is as follows:

1. Pick a spin site using selection probability g(μ, ν) and calculate the contribution to the energy involving this spin.
2. Flip the value of the spin and calculate the new contribution.
3. If the new energy is less, keep the flipped value.
4. If the new energy is more, only keep with probability $e^{-\beta(H_\nu-H_\mu)}.$
5. Repeat.

The change in energy Hν − Hμ only depends on the value of the spin and its nearest graph neighbors. So if the graph is not too connected, the algorithm is fast. This process will eventually produce a pick from the distribution.
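The steps above can be sketched in a short Python routine. This is a minimal illustration (assuming J = 1, h = 0, and a small periodic square lattice), not an optimized simulation:

```python
import math
import random

def metropolis_ising(n=16, beta=1.0, steps=20000, seed=0):
    """Single-spin-flip Metropolis sampling of the 2D Ising model (J=1, h=0).
    Returns the magnetization per site of the final configuration."""
    rng = random.Random(seed)
    # start from a random configuration
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        # sum of the four nearest neighbors (periodic boundary conditions)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2.0 * spins[i][j] * nb   # energy change H_nu - H_mu for flipping (i, j)
        # accept with probability min(1, exp(-beta * dE))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]
    return sum(sum(row) for row in spins) / n ** 2

print(abs(metropolis_ising(beta=1.0)))
```

Well below the critical temperature (β above β_c ≈ 0.44 for J = 1) the sampled magnetization per site typically settles near ±1 with enough steps, while for small β it stays near 0.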
### Viewing the Ising Model as a Markov Chain

It is easy to view the Ising model as a Markov chain, since the transition probability Pβ(ν) to an immediate future state ν depends only on the present state μ. The Metropolis algorithm is in fact a version of Markov chain Monte Carlo simulation, and since we use single-spin-flip dynamics in the Metropolis algorithm, every state can be viewed as having links to exactly L other states, where each transition corresponds to flipping a single spin site to the opposite value. Furthermore, since the change in the energy Hσ depends only on the nearest-neighbor interaction strength J, the Ising model can be viewed as a form of voter model.[8]

## One dimension

The thermodynamic limit exists as soon as the interaction decay is $J_{ij} \sim |i-j|^{-\alpha}$ with α > 1.[9]

• In the case of ferromagnetic interaction $J_{ij} \sim |i-j|^{-\alpha}$ with 1 < α < 2, Dyson proved, by comparison with the hierarchical case, that there is a phase transition at small enough temperature.[10]
• In the case of ferromagnetic interaction $J_{ij} \sim |i-j|^{-2}$, Fröhlich and Spencer proved that there is a phase transition at small enough temperature (in contrast with the hierarchical case).[11]
• In the case of interaction $J_{ij} \sim |i-j|^{-\alpha}$ with α > 2 (which includes the case of finite-range interactions), there is no phase transition at any positive temperature (i.e. finite β), since the free energy is analytic in the thermodynamic parameters.[9]
• In the case of nearest-neighbor interactions, E. Ising provided an exact solution of the model. At any positive temperature (i.e. finite β) the free energy is analytic in the thermodynamic parameters and the truncated two-point spin correlation decays exponentially fast. At zero temperature (i.e. infinite β), there is a second-order phase transition: the free energy is infinite and the truncated two-point spin correlation does not decay (it remains constant).
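For a small ring the claim that this chain has the Boltzmann distribution as its stationary distribution can be checked exactly, by building the full transition matrix over all 2^L states and testing detailed balance. A sketch with L = 3, J = 1, h = 0:

```python
import math
from itertools import product

def energy(state, J=1.0):
    """Nearest-neighbor energy of a periodic 1D Ising ring."""
    L = len(state)
    return -J * sum(state[i] * state[(i + 1) % L] for i in range(L))

def transition_matrix(L=3, beta=0.7):
    """Metropolis single-spin-flip chain on the 2^L states of the ring."""
    states = list(product((-1, 1), repeat=L))
    idx = {s: k for k, s in enumerate(states)}
    P = [[0.0] * len(states) for _ in states]
    for s in states:
        for j in range(L):
            t = list(s); t[j] = -t[j]; t = tuple(t)
            A = min(1.0, math.exp(-beta * (energy(t) - energy(s))))
            P[idx[s]][idx[t]] = (1.0 / L) * A      # g = 1/L, accept with prob A
        P[idx[s]][idx[s]] = 1.0 - sum(P[idx[s]])   # rejected moves stay put
    pi = [math.exp(-beta * energy(s)) for s in states]
    Z = sum(pi)
    return P, [w / Z for w in pi]

P, pi = transition_matrix()
# detailed balance: pi_mu P(mu,nu) == pi_nu P(nu,mu) for every pair of states
ok = all(abs(pi[a] * P[a][b] - pi[b] * P[b][a]) < 1e-12
         for a in range(len(pi)) for b in range(len(pi)))
print(ok)  # True (detailed balance holds exactly, up to rounding)
```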
Therefore T = 0 is the critical temperature of this case. Scaling formulas are satisfied.[12]

### Ising's exact solution

In the nearest-neighbor case (with periodic or free boundary conditions) an exact solution is available. The energy of the one-dimensional Ising model on a lattice of L sites with periodic boundary conditions is

$H(\sigma)=-J\sum_{i=1,\ldots,L} \sigma_i \sigma_{i+1} - h \sum_i \sigma_i$

where J and h can be any number; in this simplified case J is a constant representing the interaction strength between nearest neighbors and h is the constant external magnetic field applied to lattice sites. Then the free energy is

$f(\beta, h)=-\lim_{L\to \infty} \frac{1}{\beta L} \ln (Z(\beta))=-\frac{1}{\beta} \ln\left(e^{\beta J} \cosh \beta h+\sqrt{e^{2\beta J}(\sinh\beta h)^2+e^{-2\beta J}}\right)$

and the spin-spin correlation is

$\langle \sigma_i \sigma_j\rangle-\langle \sigma_i \rangle\langle\sigma_j\rangle=C(\beta)e^{-c(\beta)|i-j|}$

where C(β) and c(β) are positive functions for T > 0. For T → 0, though, the inverse correlation length c(β) vanishes.

#### Proof

The proof of this result is a simple computation. If h = 0, it is very easy to obtain the free energy in the case of free boundary conditions, i.e. when

$H(\sigma)=-J(\sigma_1\sigma_2+\cdots+\sigma_{L-1}\sigma_L).$

Then the model factorizes under the change of variables

$\sigma'_j=\sigma_j\sigma_{j-1} \qquad j\ge 2.$

That gives

$Z(\beta) =\sum_{\sigma_1,\ldots, \sigma_L} e^{\beta J\sigma_1\sigma_2}\; e^{\beta J\sigma_2\sigma_3}\; \cdots e^{\beta J\sigma_{L-1}\sigma_L}= 2\prod_{j=2}^L \sum_{\sigma'_j} e^{\beta J\sigma'_j} =2\left[ e^{\beta J}+e^{-\beta J}\right]^{L-1},$

hence

$f(\beta,0)=-\frac{1}{\beta } \ln\left[e^{\beta J}+ e^{-\beta J}\right].$

With the same change of variables,

$\langle \sigma_{j}\sigma_{j+N}\rangle=\left[\frac{e^{\beta J}- e^{-\beta J}}{e^{\beta J}+ e^{-\beta J}}\right]^N,$

hence it decays exponentially as soon as T ≠ 0; but for T = 0, i.e.
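The factorized partition function and the correlation formula can be checked by brute-force enumeration over all 2^L states of a small free-boundary chain (an illustrative sketch, not part of the proof):

```python
import math
from itertools import product

def Z_brute(L, beta, J=1.0):
    """Free-boundary 1D Ising partition function by direct summation."""
    return sum(math.exp(beta * J * sum(s[i] * s[i + 1] for i in range(L - 1)))
               for s in product((-1, 1), repeat=L))

def Z_exact(L, beta, J=1.0):
    # Z = 2 [e^{beta J} + e^{-beta J}]^{L-1}, from the change of variables above
    return 2.0 * (math.exp(beta * J) + math.exp(-beta * J)) ** (L - 1)

def corr_brute(L, beta, i, j, J=1.0):
    """<sigma_i sigma_j> by direct enumeration; equals tanh(beta J)^|i-j|."""
    num = den = 0.0
    for s in product((-1, 1), repeat=L):
        w = math.exp(beta * J * sum(s[k] * s[k + 1] for k in range(L - 1)))
        num += s[i] * s[j] * w
        den += w
    return num / den

print(Z_brute(8, 0.5), Z_exact(8, 0.5))
print(corr_brute(8, 0.5, 1, 5), math.tanh(0.5) ** 4)
```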
in the limit β → ∞ there is no decay. If h ≠ 0 we need the transfer matrix method. For periodic boundary conditions the computation is the following. The partition function is

$Z(\beta)=\sum_{\sigma_1,\ldots, \sigma_L} e^{\beta h \sigma_1}e^{\beta J\sigma_1\sigma_2}\; e^{\beta h \sigma_2}e^{\beta J\sigma_2\sigma_3}\; \cdots e^{\beta h \sigma_L}e^{\beta J\sigma_L\sigma_1}= \sum_{\sigma_1,\ldots, \sigma_L} V_{\sigma_1,\sigma_2}V_{\sigma_2,\sigma_3}\cdots V_{\sigma_L,\sigma_1}.$

The coefficients $V_{\sigma, \sigma'}$ can be seen as the entries of a matrix. There are different possible choices: a convenient one (because the matrix is symmetric) is

$V_{\sigma, \sigma'} = e^{\frac{\beta h}{2} \sigma} e^{\beta J\sigma\sigma'} e^{\frac{\beta h}{2} \sigma'}$

or

$V=\begin{bmatrix} e^{\beta(h+J)}&e^{-\beta J}\\ e^{-\beta J}&e^{-\beta(h-J)} \end{bmatrix}.$

In the matrix formalism

$Z(\beta)={\rm Tr} V^L= \lambda_1^L + \lambda_2^L= \lambda_1^L\left[1+ \left(\frac{\lambda_2}{\lambda_1}\right)^L\right]$

where λ1 is the highest eigenvalue of V, while λ2 is the other eigenvalue:

$\lambda_1=e^{\beta J} \cosh \beta h+ \sqrt{e^{2\beta J} (\sinh\beta h)^2 +e^{-2\beta J}}$

and |λ2| < λ1. This gives the formula for the free energy.

#### Comments

The energy of the lowest state is −L, when all the spins are the same (taking J = 1). For any other configuration, the extra energy is proportional to the number of sign changes as you scan the configuration from left to right. If we designate the number of sign changes in a configuration as k, the difference in energy from the lowest energy state is 2k. Since the energy is additive in the number of flips, the probability p of having a spin-flip at each position is independent. The ratio of the probability of finding a flip to the probability of not finding one is the Boltzmann factor:

${p \over 1-p} = e^{-2\beta}.$

The problem is reduced to independent biased coin tosses. This essentially completes the mathematical description.
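The transfer-matrix result can likewise be verified against direct enumeration for a small periodic chain with a nonzero field (a sketch; λ1 and λ2 are the two eigenvalues written above):

```python
import math
from itertools import product

def Z_brute(L, beta, J, h):
    """Periodic 1D Ising partition function with field h, by direct summation."""
    total = 0.0
    for s in product((-1, 1), repeat=L):
        E = -J * sum(s[i] * s[(i + 1) % L] for i in range(L)) - h * sum(s)
        total += math.exp(-beta * E)
    return total

def Z_transfer(L, beta, J, h):
    """Z = Tr V^L = lambda_1^L + lambda_2^L for the symmetric 2x2 transfer matrix."""
    a = math.exp(beta * J) * math.cosh(beta * h)
    d = math.sqrt(math.exp(2 * beta * J) * math.sinh(beta * h) ** 2
                  + math.exp(-2 * beta * J))
    return (a + d) ** L + (a - d) ** L

print(Z_brute(6, 0.3, 1.0, 0.2), Z_transfer(6, 0.3, 1.0, 0.2))
```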
From the description in terms of independent tosses, the statistics of the model for long lines can be understood. The line splits into domains. Each domain is of average length exp(2β). The length of a domain is distributed exponentially, since there is a constant probability at any step of encountering a flip. The domains never become infinite, so a long system is never magnetized. Each step reduces the correlation between a spin and its neighbor by an amount proportional to p, so the correlations fall off exponentially:

$\langle S_i S_j \rangle \,\propto\, e^{-p|i-j|}.$

The partition function is the volume of configurations, each configuration weighted by its Boltzmann weight. Since each configuration is described by the sign changes, the partition function factorizes:

$Z = \sum_{\mathrm{configs}} e^{\sum_k S_k} = \prod_k (1 + p ) = (1+p)^L.$

The logarithm divided by L is the free energy density:

$\beta f = \log(1+p) = \log\left( 1 + {e^{-2\beta}\over 1+e^{-2\beta}} \right),$

which is analytic away from β = ∞. A sign of a phase transition is a non-analytic free energy, so the one-dimensional model does not have a phase transition.

## Two dimensions

• In the ferromagnetic case there is a phase transition: at low temperature, the Peierls argument proves positive magnetization for the nearest-neighbor case and then, by the Griffiths inequality, also when longer-range interactions are added; while at high temperature the cluster expansion gives analyticity of the thermodynamic functions.
• In the nearest-neighbor case, the free energy was exactly computed by Onsager, through the equivalence of the model with free fermions on the lattice. The spin-spin correlation functions have been computed by McCoy and Wu.

### Onsager's exact solution

Main article: square-lattice Ising model

The partition function of the Ising model in two dimensions on a square lattice can be mapped to a two-dimensional free fermion. This allows the specific heat to be calculated exactly.
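The independent-toss picture can be checked numerically: sampling each bond as an independent flip with probability p, the observed mean domain length should match 1/p = e^{2β} + 1, which is approximately e^{2β} at low temperature. A sketch taking J = 1:

```python
import math
import random

def mean_domain_length(beta, n_bonds=1_000_000, seed=1):
    """Sample the independent bond-flip picture of the 1D Ising chain (J=1):
    each bond carries a spin flip with probability p, where p/(1-p) = exp(-2*beta).
    Domains are the runs between flips, so the mean length is n_bonds / (#flips)."""
    p = math.exp(-2 * beta) / (1 + math.exp(-2 * beta))
    rng = random.Random(seed)
    flips = sum(1 for _ in range(n_bonds) if rng.random() < p)
    return n_bonds / flips

beta = 1.0
print(mean_domain_length(beta), math.exp(2 * beta) + 1)
```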
Onsager obtained the following analytical expression for the magnetization as a function of temperature:

$M = \left(1-\left[\sinh\left(\log(1+\sqrt{2})\frac{T_c}{T}\right)\right]^{-4}\right)^{\frac{1}{8}}$

where

$T_c = \frac{2J}{k_B\log(1+\sqrt{2})}.$

#### Transfer matrix

Start with an analogy with quantum mechanics. The Ising model on a long periodic lattice has a partition function

$\sum_S \exp\biggl(\sum_{ij} S_{i,j} S_{i,j+1} + S_{i,j} S_{i+1,j}\biggr).$

Think of the i direction as space, and the j direction as time. This is an independent sum over all the values that the spins can take at each time slice. This is a type of path integral; it is the sum over all spin histories. A path integral can be rewritten as a Hamiltonian evolution. The Hamiltonian steps through time by performing a unitary rotation between time t and time t + Δt:

$U = e^{i H \Delta t}$

The product of the U matrices, one after the other, is the total time evolution operator, which is the path integral we started with:

$U^N = (e^{i H \Delta t})^N = \int DX e^{iL}$

where N is the number of time slices. The sum over all paths is given by a product of matrices; each matrix element is the transition probability from one slice to the next. Similarly, one can divide the sum over all partition function configurations into slices, where each slice is the one-dimensional configuration at a single time. This defines the transfer matrix:

$T_{C_1 C_2}.$

The configuration in each slice is a one-dimensional collection of spins. At each time slice, T has matrix elements between two configurations of spins, one in the immediate future and one in the immediate past. These two configurations are C1 and C2, and they are both one-dimensional spin configurations. We can think of the vector space that T acts on as all complex linear combinations of these. Using quantum mechanical notation:

$|A\rangle = \sum_S A(S) |S\rangle$

where each basis vector $|S\rangle$ is a spin configuration of a one-dimensional Ising model.
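Onsager's expression is easy to evaluate numerically. A sketch, taking J = k_B = 1 and using the equivalent form sinh(2J/(k_B T)) of the argument (the sinh argument log(1+√2)·T_c/T reduces to 2J/(k_B T)):

```python
import math

def onsager_magnetization(T, J=1.0, kB=1.0):
    """Onsager's spontaneous magnetization for the square-lattice Ising model.
    Vanishes above the critical temperature T_c = 2J / (kB * ln(1 + sqrt(2)))."""
    Tc = 2 * J / (kB * math.log(1 + math.sqrt(2)))
    if T >= Tc:
        return 0.0
    s = math.sinh(2 * J / (kB * T))   # = sinh(log(1+sqrt(2)) * Tc / T)
    return (1 - s ** -4) ** 0.125

# near-saturated well below T_c, zero above T_c (Tc ~ 2.269 for J = kB = 1)
print(onsager_magnetization(0.5), onsager_magnetization(2.0), onsager_magnetization(3.0))
```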
Like the Hamiltonian, the transfer matrix acts on all linear combinations of states. The partition function is a matrix function of T, which is defined by the sum over all histories which come back to the original configuration after N steps:

$Z= \mathrm{tr}(T^N).$

Since this is a matrix equation, it can be evaluated in any basis. So if we can diagonalize the matrix T, we can find Z.

#### T in terms of Pauli matrices

The contribution to the partition function for each past/future pair of configurations on a slice is the sum of two terms. There is the number of spin flips in the past slice and there is the number of spin flips between the past and future slices. Define an operator on configurations which flips the spin at site i:

$\sigma^x_i.$

In the usual Ising basis, acting on any linear combination of past configurations, it produces the same linear combination but with the spin at position i of each basis vector flipped. Define a second operator which multiplies the basis vector by +1 or −1 according to the spin at position i:

$\sigma^z_i.$

T can be written in terms of these:

$\sum_i A \sigma^x_i + B \sigma^z_i \sigma^z_{i+1}$

where A and B are constants which are to be determined so as to reproduce the partition function. The interpretation is that the statistical configuration at this slice contributes according to both the number of spin flips in the slice, and whether or not the spin at position i has flipped.

#### Spin flip creation and annihilation operators

Just as in the one-dimensional case, we will shift attention from the spins to the spin-flips. The σz term in T counts the number of spin flips, which we can write in terms of spin-flip creation and annihilation operators:

$\sum_i C \psi^\dagger_i \psi_i.$

The first term flips a spin, so depending on the basis state it either:

1. moves a spin-flip one unit to the right
2. moves a spin-flip one unit to the left
3. produces two spin-flips on neighboring sites
4.
destroys two spin-flips on neighboring sites.

Writing this out in terms of creation and annihilation operators:

$\sigma^x_i = D {\psi^\dagger}_i \psi_{i+1} + D^* {\psi^\dagger}_i \psi_{i-1} + C\psi_i \psi_{i+1} + C^* {\psi^\dagger}_i {\psi^\dagger}_{i+1}.$

Ignore the constant coefficients, and focus attention on the form. They are all quadratic. Since the coefficients are constant, this means that the T matrix can be diagonalized by Fourier transforms. Carrying out the diagonalization produces the Onsager free energy.

#### Onsager's formula for spontaneous magnetization

Onsager (1949) obtained the following formula for the spontaneous magnetization M of a two-dimensional Ising ferromagnet:

$M=(1- (\sinh 2\beta E_1 \sinh 2\beta E_2)^{-2})^{\frac{1}{8}}.$

A complete derivation was later given by Yang (1952), using Szegő's limit formula for Toeplitz determinants, proved in 1951 in response to Onsager's work.[13][6] In this formula the total energy of Onsager's lattice model is given by

$E=-E_1 \sum_{j,k} S_{j,k}S_{j,k+1} - E_2\sum_{j,k} S_{j,k} S_{j+1,k}$

and β^{−1} = kT, where k is Boltzmann's constant and T is the absolute temperature.

## Three and four dimensions

In three dimensions, the Ising model was shown to have a representation in terms of non-interacting fermionic lattice strings by Alexander Polyakov. In dimensions near four, the critical behavior of the model is understood to correspond to the renormalization behavior of the scalar phi-4 theory (see Kenneth Wilson).

## More than four dimensions

In any dimension, the Ising model can be productively described by a locally varying mean field. The field is defined as the average spin value over a large region, but not so large as to include the entire system. The field still has slow variations from point to point, as the averaging volume moves. These fluctuations in the field are described by a continuum field theory in the infinite system limit.
### Local field

The field H is defined as the long-wavelength Fourier components of the spin variable, in the limit that the wavelengths are long. There are many ways to take the long-wavelength average, depending on the details of how high wavenumbers are cut off. The details are not too important, since the goal is to find the statistics of H and not the spins. Once the correlations in H are known, the long-distance correlations between the spins will be proportional to the long-distance correlations in H. For any value of the slowly varying field H, the free energy (log-probability) is a local analytic function of H and its gradients. The free energy F(H) is defined to be the sum over all Ising configurations which are consistent with the long-wavelength field. Since H is a coarse description, there are many Ising configurations consistent with each value of H, so long as not too much exactness is required for the match. Since the allowed range of values of the spin in any region only depends on the values of H within one averaging volume of that region, the free energy contribution from each region only depends on the value of H there and in the neighboring regions. So F is a sum over all regions of a local contribution, which only depends on H and its derivatives. By symmetry in H, only even powers contribute. By reflection symmetry on a square lattice, only even powers of gradients contribute. Writing out the first few terms in the free energy:

$\beta F = \int d^dx \left[ A H^2 + \sum_{i=1}^{d} Z_i (\partial_i H)^2 + \lambda H^4 + \cdots \right].$

On a square lattice, symmetries guarantee that the coefficients Zi of the derivative terms are all equal. But even for an anisotropic Ising model, where the Zi's in different directions are different, the fluctuations in H are isotropic in a coordinate system where the different directions of space are rescaled.
On any lattice, the derivative term

$Z_{ij} \partial_i H \partial_j H$

is a positive definite quadratic form, and can be used to define the metric for space. So any translationally invariant Ising model is rotationally invariant at long distances, in coordinates that make Zij = δij. Rotational symmetry emerges spontaneously at large distances just because there aren't very many low-order terms. At higher-order multicritical points, this accidental symmetry is lost. Since βF is a function of a slowly spatially varying field, the probability of any field configuration is:

$P(H) \propto e^{ - \int d^dx \left[ AH^2 + Z |\nabla H|^2 + \lambda H^4 \right]}.$

The statistical average of any product of H's is equal to:

$\langle H(x_1) H(x_2)\cdots H(x_n) \rangle = { \int DH P(H) H(x_1) H(x_2) \cdots H(x_n) \over \int DH P(H) }.$

The denominator in this expression is called the partition function, and the integral over all possible values of H is a statistical path integral. It integrates exp(−βF) over all values of H, over all the long-wavelength Fourier components of the spins. F is a Euclidean Lagrangian for the field H; the only difference between this and the quantum field theory of a scalar field is that all the derivative terms enter with a positive sign, and there is no overall factor of i.

$Z = \int DH e^{ -\int d^dx \left[ A H^2 + Z |\nabla H|^2 + \lambda H^4 \right]}$

### Dimensional analysis

The form of F can be used to predict which terms are most important by dimensional analysis. Dimensional analysis is not completely straightforward, because the scaling of H needs to be determined. In the generic case, choosing the scaling law for H is easy, since the only term that contributes is the first one:

$F = \int d^dx A H^2.$

This term is the most significant, but it gives trivial behavior. This form of the free energy is ultralocal, meaning that it is a sum of an independent contribution from each point. This is like the spin-flips in the one-dimensional Ising model.
Every value of H at any point fluctuates completely independently of the value at any other point. The scale of the field can be redefined to absorb the coefficient A, and then it is clear that A only determines the overall scale of fluctuations. The ultralocal model describes the long-wavelength high-temperature behavior of the Ising model, since in this limit the fluctuation averages are independent from point to point. To find the critical point, lower the temperature. As the temperature goes down, the fluctuations in H go up because the fluctuations are more correlated. This means that the average of a large number of spins does not become small as quickly as if they were uncorrelated, because they tend to be the same. This corresponds to decreasing A in the system of units where H does not absorb A. The phase transition can only happen when the subleading terms in F can contribute, but since the first term dominates at long distances, the coefficient A must be tuned to zero. This is the location of the critical point:

$F= \int d^dx \left[ t H^2 + \lambda H^4 + Z (\nabla H)^2 \right],$

where t is a parameter which goes through zero at the transition. Since t is vanishing, fixing the scale of the field using this term makes the other terms blow up. Once t is small, the scale of the field can be set to fix either the coefficient of the $H^4$ term or the coefficient of the $(\nabla H)^2$ term to 1.

### Magnetization

To find the magnetization, fix the scaling of H so that λ is one. Now the field H has dimension −d/4, so that $H^4 d^dx$ is dimensionless, and Z has dimension 2 − d/2. In this scaling, the gradient term is only important at long distances for d ≤ 4. Above four dimensions, at long wavelengths, the overall magnetization is only affected by the ultralocal terms. There is one subtle point. The field H is fluctuating statistically, and the fluctuations can shift the zero point of t.
To see how, consider $H^4$ split in the following way:

$H(x)^4 = \langle H(x)^2\rangle^2 + 2\langle H(x)^2\rangle H(x)^2 + ( H(x)^2-\langle H(x)^2\rangle)^2$

The first term is a constant contribution to the free energy, and can be ignored. The second term is a finite shift in t. The third term is a quantity that scales to zero at long distances. This means that when analyzing the scaling of t by dimensional analysis, it is the shifted t that is important. This was historically very confusing, because the shift in t at any finite λ is finite, but near the transition t is very small. The fractional change in t is very large, and in units where t is fixed the shift looks infinite. The magnetization is at the minimum of the free energy, and this is an analytic equation. In terms of the shifted t,

${\partial \over \partial H } \left( t H^2 + \lambda H^4 \right ) = 2t H + 4\lambda H^3 = 0$

For t < 0, the minima are at H proportional to the square root of −t. So Landau's catastrophe argument is correct in dimensions larger than 5. The magnetization exponent in dimensions higher than 5 is equal to the mean-field value. When t is negative, the fluctuations about the new minimum are described by a new positive quadratic coefficient. Since this term always dominates, at temperatures below the transition the fluctuations again become ultralocal at long distances.

### Fluctuations

To find the behavior of fluctuations, rescale the field to fix the gradient term. Then the length scaling dimension of the field is 1 − d/2. Now the field has constant quadratic spatial fluctuations at all temperatures. The scale dimension of the $H^2$ term is 2, while the scale dimension of the $H^4$ term is 4 − d. For d < 4, the $H^4$ term has positive scale dimension. In dimensions higher than 4 it has negative scale dimension. This is an essential difference.
In dimensions higher than 4, fixing the scale of the gradient term means that the coefficient of the $H^4$ term is less and less important at longer and longer wavelengths. The dimension at which nonquadratic contributions begin to contribute is known as the critical dimension. In the Ising model, the critical dimension is 4. In dimensions above 4, the critical fluctuations are described by a purely quadratic free energy at long wavelengths. This means that the correlation functions are all computable as Gaussian averages:

$\langle S(x)S(y)\rangle \propto \langle H(x)H(y)\rangle = G(x-y) = \int {d^dk \over (2\pi)^d} { e^{ik(x-y)}\over k^2 + t }$

valid when x − y is large. The function G(x − y) is the analytic continuation to imaginary time of the Feynman propagator, since the free energy is the analytic continuation of the quantum field action for a free scalar field. For dimensions 5 and higher, all the other correlation functions at long distances are then determined by Wick's theorem. All the odd moments are zero, by +/− symmetry. The even moments are the sum over all partitions into pairs of the product of G(x − y) for each pair:

$\langle S(x_1) S(x_2) \ldots S(x_{2n})\rangle = C^n \sum G(x_{i_1},x_{j_1}) G(x_{i_2},x_{j_2}) \ldots G(x_{i_n},x_{j_n})$

where C is the proportionality constant. So knowing G is enough. It determines all the multipoint correlations of the field.

### The critical two-point function

To determine the form of G, consider that the fields in a path integral obey the classical equations of motion derived by varying the free energy:

$(-\nabla_x^2 + t) \langle H(x)H(y) \rangle = 0 \rightarrow \nabla^2 G(x) - t G(x) = 0$

This is valid at noncoincident points only, since the correlations of H are singular when points collide. H obeys classical equations of motion for the same reason that quantum mechanical operators obey them: its fluctuations are defined by a path integral.
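The claimed form of G can be illustrated on a one-dimensional lattice, where $k^2$ is replaced by its lattice version $2 - 2\cos k$; the numerically computed propagator is positive and decays exponentially at a rate fixed by t. An illustrative sketch, not part of the derivation:

```python
import math

def lattice_propagator(N=256, t=0.1):
    """1D lattice analogue of G(k) = 1/(k^2 + t): the Fourier sum of
    1/(2 - 2 cos k + t), since 2 - 2 cos k ~ k^2 for small k."""
    G = []
    for n in range(N):
        s = sum(math.cos(2 * math.pi * m * n / N)
                / (2 - 2 * math.cos(2 * math.pi * m / N) + t)
                for m in range(N))
        G.append(s / N)
    return G

G = lattice_propagator()
ratios = [G[n + 1] / G[n] for n in range(5, 20)]
# positive, decaying at the exact lattice rate exp(-mu) with cosh(mu) = 1 + t/2
print(all(g > 0 for g in G[:40]), max(ratios) - min(ratios) < 1e-6)
```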
At the critical point t = 0, this is Laplace's equation, which can be solved by Gauss's method from electrostatics. Define an electric field analog by

$E = \nabla G.$

Away from the origin:

$\nabla \cdot E = 0$

and since G is spherically symmetric in d dimensions, E is the radial gradient of G. Integrating over a large d − 1 dimensional sphere,

$\int d^{d-1}S\, E_r = \mathrm{constant}.$

This gives:

$E = {C \over r^{d-1} }$

and G can be found by integrating with respect to r:

$G(r) = {C \over r^{d-2} }.$

The constant C fixes the overall normalization of the field.

### G(r) away from the critical point

When t does not equal zero, so that H is fluctuating at a temperature slightly away from critical, the two-point function decays at long distances. The equation it obeys is altered:

$\nabla^2 G - t G = 0 \to {1\over r^{d-1}} {d\over dr} \left( r^{d-1} {dG\over dr} \right) - t G(r) =0$

For r small compared with $1/\sqrt{t}$, the solution diverges exactly the same way as in the critical case, but the long-distance behavior is modified. To see how, it is convenient to represent the two-point function as an integral, introduced by Schwinger in the quantum field theory context:

$G(x) = \int d\tau {1\over (\sqrt{2\pi\tau})^d} e^{- {x^2\over 4\tau} -t \tau}$

This is G, since the Fourier transform of this integral is easy. Each fixed-τ contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k:

$G(k) = \int d\tau\, e^{- (k^2 + t)\tau} = {1\over k^2 + t}$

This is the inverse of the operator $-\nabla^2 + t$ in k space, acting on the unit function in k space, which is the Fourier transform of a delta function source localized at the origin. So it satisfies the same equation as G with the same boundary conditions that determine the strength of the divergence at 0. The interpretation of the integral representation over the proper time τ is that the two-point function is the sum over all random walk paths that link position 0 to position x over time τ.
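The proper-time representation can be checked numerically in d = 1, where the integral has the closed form $e^{-\sqrt t |x|}/(2\sqrt t)$. The sketch below uses the standard heat-kernel normalization $(4\pi\tau)^{-d/2}$ (the text drops such constants), which does not affect the decay rate; the substitution τ = u² removes the endpoint singularity:

```python
import math

def schwinger_G(x, t, du=0.001, umax=20.0):
    """d = 1 proper-time representation of the massive propagator,
    G(x) = int_0^inf dtau (4 pi tau)^{-1/2} exp(-x^2/(4 tau) - t tau),
    evaluated by midpoint quadrature after the substitution tau = u^2."""
    total = 0.0
    for i in range(int(umax / du)):
        u = (i + 0.5) * du
        total += math.exp(-x * x / (4 * u * u) - t * u * u)
    return total * du / math.sqrt(math.pi)

x, t = 1.0, 0.25
print(schwinger_G(x, t), math.exp(-math.sqrt(t) * x) / (2 * math.sqrt(t)))
```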
The density of these paths at time τ at position x is Gaussian, but the random walkers disappear at a steady rate proportional to t, so that the Gaussian at time τ is diminished in height by a factor that decreases steadily exponentially. In the quantum field theory context, these are the paths of relativistically localized quanta in a formalism that follows the paths of individual particles. In the pure statistical context, these paths still appear by the mathematical correspondence with quantum fields, but their interpretation is less directly physical. The integral representation immediately shows that G(r) is positive, since it is represented as a weighted sum of positive Gaussians. It also gives the rate of decay at large r, since the proper time for a random walk to reach position r is r², and in this time the Gaussian height has decayed by $e^{-t\tau}=e^{-tr^2}$. The decay factor appropriate for position r is therefore $e^{-\sqrt t\, r}$. A heuristic approximation for G(r) is:

$G(r) \approx { e^{-\sqrt t\, r} \over r^{d-2}}$

This is not an exact form, except in three dimensions, where interactions between paths become important. The exact forms in high dimensions are variants of Bessel functions.

### Symanzik polymer interpretation

The interpretation of the correlations as fixed-size quanta travelling along random walks gives a way of understanding why the critical dimension of the $H^4$ interaction is 4. The term $H^4$ can be thought of as the square of the density of the random walkers at any point. In order for such a term to alter the finite-order correlation functions, which only introduce a few new random walks into the fluctuating environment, the new paths must intersect. Otherwise, the square of the density is just proportional to the density and only shifts the $H^2$ coefficient by a constant. But the intersection probability of random walks depends on the dimension, and random walks in dimension higher than 4 don't intersect.
The fractal dimension of an ordinary random walk is 2. The number of balls of size ε required to cover the path increases as ε^{−2}. Two objects of fractal dimension 2 will intersect with reasonable probability only in a space of dimension 4 or less, the same condition as for a generic pair of planes. Kurt Symanzik argued that this implies that the critical Ising fluctuations in dimensions higher than 4 should be described by a free field. This argument eventually became a mathematical proof.

### 4−ε dimensions – renormalization group

The Ising model in four dimensions is described by a fluctuating field, but now the fluctuations are interacting. In the polymer representation, intersections of random walks are marginally possible. In the quantum field continuation, the quanta interact. The negative logarithm of the probability of any field configuration H is the free energy function

$F= \int d^4 x \left[ {Z \over 2} |\nabla H|^2 + {t\over 2} H^2 + {\lambda \over 4!} H^4 \right] \,$

The numerical factors are there to simplify the equations of motion. The goal is to understand the statistical fluctuations. Like any other non-quadratic path integral, the correlation functions have a Feynman expansion as particles travelling along random walks, splitting and rejoining at vertices. The interaction strength is parametrized by the classically dimensionless quantity λ. Although dimensional analysis shows that both λ and Z are dimensionless, this is misleading. The long-wavelength statistical fluctuations are not exactly scale invariant, and only become scale invariant when the interaction strength vanishes. The reason is that there is a cutoff used to define H, and the cutoff defines the shortest wavelength. Fluctuations of H at wavelengths near the cutoff can affect the longer-wavelength fluctuations.
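The dimension dependence of the intersection probability is easy to see in a rough simulation: two independent simple random walks started at the origin share a site far more often in d = 2 than in d = 5. An illustrative sketch (walk lengths and trial counts are arbitrary choices):

```python
import random

def walks_intersect(d, n_steps, rng):
    """Do two independent simple random walks from the origin share a visited site?"""
    def walk():
        pos = [0] * d
        sites = set()
        for _ in range(n_steps):
            axis = rng.randrange(d)
            pos[axis] += rng.choice((-1, 1))
            sites.add(tuple(pos))
        return sites
    return bool(walk() & walk())

def intersection_rate(d, n_steps=400, trials=200, seed=7):
    rng = random.Random(seed)
    return sum(walks_intersect(d, n_steps, rng) for _ in range(trials)) / trials

# typically much higher in d=2 than in d=5
print(intersection_rate(2), intersection_rate(5))
```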
If the system is scaled along with the cutoff, the parameters will scale by dimensional analysis, but then comparing parameters doesn't compare behavior, because the rescaled system has more modes. If the system is rescaled in such a way that the short wavelength cutoff remains fixed, the long-wavelength fluctuations are modified.

#### Wilson renormalization

A quick heuristic way of studying the scaling is to cut off the H wavenumbers at a point Λ: Fourier modes of H with wavenumbers larger than Λ are not allowed to fluctuate. A rescaling of length that makes the whole system smaller increases all wavenumbers, and moves some fluctuations above the cutoff. To restore the old cutoff, perform a partial integration over all the wavenumbers which used to be forbidden, but are now fluctuating. In Feynman diagrams, integrating over a fluctuating mode at wavenumber k links up lines carrying momentum k in a correlation function in pairs, with a factor of the propagator.

Under rescaling, when the system is shrunk by a factor of (1+b), the t coefficient scales up by a factor $(1+b)^2$ by dimensional analysis. The change in t for infinitesimal b is 2bt. The other two coefficients are dimensionless and don't change at all.

The lowest order effect of integrating out can be calculated from the equations of motion:

$\nabla^2 H + t H = - {\lambda \over 6} H^3.$

This equation is an identity inside any correlation function away from other insertions. After integrating out the modes with Λ < k < (1+b)Λ, it will be a slightly different identity. Since the form of the equation will be preserved, to find the change in coefficients it is sufficient to analyze the change in the $H^3$ term. In a Feynman diagram expansion, the $H^3$ term inserted in a correlation function has three dangling lines.
Joining two of them at large wavenumber k gives a change in $H^3$ with one dangling line, so proportional to H:

$\delta H^3 = 3H \int_{\Lambda<|k|<(1+b)\Lambda} {d^4k \over (2\pi)^4} {1\over (k^2 + t)}$

The factor of 3 comes from the fact that the loop can be closed in three different ways. The integral should be split into two parts:

$\int dk {1\over k^2} - t \int dk { 1\over k^2( k^2 + t)} = A\Lambda^2 b + B b t$

The first part is not proportional to t; it comes from the fact that the $H^3$ term has a linear part, and since it is independent of the value of t it can be absorbed in the equation of motion by a constant shift in t. Only the second term, which is proportional to t, contributes to the critical scaling. This new linear term adds to the first term on the left hand side, changing t by an amount proportional to t. The total change in t is the sum of the term from dimensional analysis and this second term from operator products:

$\delta t = \left( 2 - {B\lambda \over 2} \right)b t$

So t is rescaled, but its dimension is anomalous: it is changed by an amount proportional to the value of λ.

But λ also changes. The change in λ requires considering the lines splitting and then quickly rejoining. The lowest order process is one where one of the three lines from $H^3$ splits into three, which quickly rejoin with one of the other lines from the same vertex. The correction to the vertex is

$\delta \lambda = - {3 \lambda^2 \over 2} \int_k {d^4k \over (2\pi)^4} {1 \over (k^2 + t)^2} = -{3 B \lambda^2 \over 2} b$

The numerical factor is three times bigger because there is an extra factor of three in choosing which of the three new lines to contract.
So

$\delta \lambda = - {3 B \lambda^2 \over 2} b$

These two equations together define the renormalization group equations in four dimensions:

${dt \over t} = \left(2 - {B\lambda \over 2}\right) b$

${d\lambda \over \lambda} = -{3 B \lambda \over 2}\, b$

The coefficient B is determined by the formula

$B b = \int_{\Lambda<|k|<(1+b)\Lambda} {d^4k\over (2\pi)^4} {1 \over k^4}$

and is proportional to the area of a three dimensional sphere of radius Λ, times the width of the integration region bΛ, divided by $(2\pi)^4 \Lambda^4$:

$B= (2 \pi^2 \Lambda^3)\, {b \Lambda \over (2\pi)^4\, b\Lambda^4} = {1\over 8\pi^2}$

In other dimensions the constant B changes, but the same constant appears both in the t flow and in the coupling flow. The reason is that the derivative with respect to t of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and of t is the combinatorial factors from joining and splitting.

#### Wilson–Fisher point

It should be possible to investigate three dimensions starting from the four dimensional theory, because the intersection probabilities of random walks depend continuously on the dimensionality of the space. In the language of Feynman graphs, the coupling doesn't change very much when the dimension is changed. The process of continuing away from dimension 4 is not completely well defined without a prescription for how to do it; the prescription is only well defined on diagrams. It replaces the Schwinger representation in dimension 4 with the Schwinger representation in dimension 4−ε, defined by:

$G(x-y) = \int_0^\infty d\tau \,{1 \over \tau^{d\over 2}}\, e^{-{(x-y)^2 \over 2\tau} - t \tau}$

In dimension 4−ε, the coupling λ has positive scale dimension ε, and this must be added to the flow:

${d\lambda \over \lambda} = \varepsilon - {3 B \lambda \over 2}$

${dt \over t} = 2 - {\lambda B \over 2}$

The coefficient B is dimension dependent, but it will cancel.
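The approach to the fixed point can be illustrated by integrating the one-loop flow of the coupling. Writing it generically as dλ/dl = ελ − cλ², with c a positive constant built from B and the combinatorial factors (its precise value depends on normalization conventions), every positive starting coupling is driven to λ* = ε/c. A sketch with arbitrarily chosen numbers:

```python
def flow(lam, eps, c, dl=1e-3, steps=20000):
    """Euler-integrate dλ/dl = ελ − cλ² and return the endpoint."""
    for _ in range(steps):
        lam += dl * (eps * lam - c * lam * lam)
    return lam

# e.g. c = 3/(16π²), one common normalization of the one-loop coefficient
eps = 1.0
c = 3.0 / (16.0 * 3.14159 ** 2)
fixed_point = eps / c

# Flows starting below and above λ* both converge to it.
from_below = flow(0.1 * fixed_point, eps, c)
from_above = flow(3.0 * fixed_point, eps, c)
```

Couplings below λ* grow under the flow and couplings above it shrink, which is what makes the Wilson–Fisher point an attractor for the critical theory.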
The fixed point for λ is no longer zero, but at

$\lambda = {2\varepsilon \over 3B}$

where the scale dimension of t is altered by an amount $\lambda B/2 = \varepsilon/3$. The magnetization exponent is altered proportionately to

$\tfrac{1}{2} \left( 1 - {\varepsilon \over 3}\right)$

which is .333 in 3 dimensions (ε = 1) and .166 in 2 dimensions (ε = 2). This is not so far off from the measured exponent .308 and the Onsager two dimensional exponent .125.

### Infinite dimensions – mean field

Main article: Mean field theory

The behavior of an Ising model on a fully connected graph may be completely understood by mean field theory. This type of description is appropriate to very high dimensional square lattices, because then each site has a very large number of neighbors. The idea is that if each spin is connected to a large number of spins, only the average ratio of + spins to − spins is important, since the fluctuations about this mean will be small. The mean field H is the average fraction of spins which are + minus the average fraction of spins which are −. The energy cost of flipping a single spin in the mean field H is ±2JNH. It is convenient to redefine J to absorb the factor N, so that the limit N → ∞ is smooth. In terms of the new J, the energy cost for flipping a spin is ±2JH.

This energy cost gives the ratio of the probability p that the spin is + to the probability 1−p that it is −; this ratio is the Boltzmann factor:

${p\over 1-p} = e^{2\beta JH}$

so that

$p = {1 \over 1 + e^{-2\beta JH} }$

The mean value of the spin is given by averaging 1 and −1 with the weights p and 1−p, so the mean value is 2p−1. But this average is the same for all spins, and is therefore equal to H itself:

$H = 2p - 1 = { 1 - e^{-2\beta JH} \over 1 + e^{-2\beta JH}} = \tanh (\beta JH)$

The solutions to this equation are the possible consistent mean fields. For βJ < 1 there is only the one solution, at H = 0. For bigger values of β there are three solutions, and the solution at H = 0 is unstable.
The instability means that increasing the mean field slightly above zero produces a statistical fraction of + spins which is bigger than the value of the mean field. So a mean field which fluctuates above zero will produce an even greater mean field, and will eventually settle at a stable solution. This means that for temperatures below the critical value βJ = 1, the mean field Ising model undergoes a phase transition in the limit of large N.

Above the critical temperature, fluctuations in H are damped because the mean field restores the fluctuation to zero field. Below the critical temperature, the mean field is driven to a new equilibrium value, which is either the positive H or negative H solution to the equation.

For βJ = 1 + ε, just below the critical temperature, the value of H can be calculated from the Taylor expansion of the hyperbolic tangent:

$H = \tanh(\beta J H) = (1+\varepsilon)H - {(1+\varepsilon)^3H^3\over 3}$

Dividing by H to discard the unstable solution at H = 0, the stable solutions are:

$H = \pm\sqrt{3\varepsilon}$

The spontaneous magnetization H grows near the critical point as the square root of the change in temperature. This is true whenever H can be calculated from the solution of an analytic equation which is symmetric between positive and negative values, which led Landau to suspect that all Ising type phase transitions in all dimensions should follow this law.

The mean field exponent is universal because changes in the character of solutions of analytic equations are always described by catastrophes in the Taylor series, which is a polynomial equation. By symmetry, the equation for H must only have odd powers of H on the right hand side. Changing β should only smoothly change the coefficients. The transition happens when the coefficient of H on the right hand side is 1.
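The self-consistency condition H = tanh(βJH) is easy to solve by fixed-point iteration, and just below the transition the stable solution indeed follows the √(3ε) law to leading order. A minimal sketch (the starting point, iteration count, and value of ε are arbitrary):

```python
import math

def mean_field_H(beta_J, H0=0.5, iters=10000):
    """Iterate H ← tanh(βJ·H); converges to the stable mean-field solution."""
    H = H0
    for _ in range(iters):
        H = math.tanh(beta_J * H)
    return H

eps = 0.05
H = mean_field_H(1.0 + eps)        # just below the critical temperature
landau = math.sqrt(3.0 * eps)      # leading-order prediction √(3ε)
H_sym = mean_field_H(0.9)          # above the transition: only H = 0 survives
```

For βJ = 1.05 the iteration settles near H ≈ 0.37, a few percent below √(3ε) ≈ 0.387 because of the higher-order terms dropped in the cubic truncation.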
Near the transition:

$H = {\partial (\beta F) \over \partial h} = (1+A\varepsilon) H + B H^3 + \cdots$

Whatever A and B are, so long as neither of them is tuned to zero, the spontaneous magnetization will grow as the square root of ε. This argument can only fail if the free energy βF is either non-analytic or non-generic at the exact β where the transition occurs.

But the spontaneous magnetization in magnetic systems and the density in gases near the critical point are measured very accurately. The density and the magnetization in three dimensions have the same power-law dependence on the temperature near the critical point, but the behavior from experiments is:

$H \propto \varepsilon^{0.308}$

The exponent is also universal: it is the same in the Ising model as in the experimental magnet and gas, but it is not equal to the mean field value. This was a great surprise. This is also true in two dimensions, where

$H \propto \varepsilon^{0.125}$

But there it was not a surprise, because it was predicted by Onsager.

### Low dimensions – block spins

In three dimensions, the perturbative series from the field theory is an expansion in a coupling constant λ which is not particularly small. The effective size of the coupling at the fixed point is one over the branching factor of the particle paths, so the expansion parameter is about 1/3. In two dimensions, the perturbative expansion parameter is 2/3.

But renormalization can also be productively applied to the spins directly, without passing to an average field. Historically, this approach is due to Leo Kadanoff and predated the perturbative ε expansion. The idea is to integrate out lattice spins iteratively, generating a flow in couplings. But now the couplings are lattice energy coefficients. The fact that a continuum description exists guarantees that this iteration will converge to a fixed point when the temperature is tuned to criticality.
#### Migdal-Kadanoff renormalization

Write the two dimensional Ising model with an infinite number of possible higher order interactions. To keep spin reflection symmetry, only even powers contribute:

$E = \sum_{ij} J_{ij} S_i S_j + \sum J_{ijkl} S_i S_j S_k S_l \ldots.$

By translation invariance, $J_{ij}$ is only a function of i−j. By the accidental rotational symmetry, at large i and j its size only depends on the magnitude of the two dimensional vector i−j. The higher order coefficients are also similarly restricted.

The renormalization iteration divides the lattice into two parts: even spins and odd spins. The odd spins live on the odd-checkerboard lattice positions, and the even ones on the even-checkerboard. When the spins are indexed by the position (i,j), the odd sites are those with i+j odd and the even sites those with i+j even, and even sites are only connected to odd sites. The two possible values of the odd spins will be integrated out, by summing over both possible values. This will produce a new free energy function for the remaining even spins, with new adjusted couplings. The even spins are again in a lattice, with axes tilted at 45 degrees to the old ones. Unrotating the system restores the old configuration, but with new parameters. These parameters describe the interaction between spins at distances $\scriptstyle \sqrt{2}$ larger.

Starting from the Ising model and repeating this iteration eventually changes all the couplings. When the temperature is higher than critical, the couplings will converge to zero, since the spins at large distances are uncorrelated. But when the temperature is critical, there will be nonzero coefficients linking spins at all orders. The flow can be approximated by only considering the first few terms. This truncated flow will produce better and better approximations to the critical exponents as more terms are included. The simplest approximation is to keep only the usual J term and discard everything else.
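The decimation step is easiest to see in one dimension, where summing over every second spin can be done in closed form: tracing out the middle spin of a three-spin segment gives $2\cosh(J(s_1+s_3)) = A\,e^{J's_1s_3}$ with $\tanh J' = \tanh^2 J$. This one-dimensional analogue is not the checkerboard calculation described above, but it shows the mechanism exactly; a quick numerical check (the value of J is arbitrary):

```python
import math

def traced(s1, s3, J):
    """Sum over the middle spin s2 = ±1 of exp(J·s1·s2 + J·s2·s3)."""
    return sum(math.exp(J * s1 * s2 + J * s2 * s3) for s2 in (+1, -1))

J = 0.8
J_prime = 0.5 * math.log(math.cosh(2.0 * J))  # renormalized coupling
A = 2.0 * math.exp(J_prime)                    # spin-independent prefactor

# The decimation is exact in 1D: for every (s1, s3) the traced weight
# equals A·exp(J'·s1·s3), so no higher-order couplings are generated.
residuals = [abs(traced(s1, s3, J) - A * math.exp(J_prime * s1 * s3))
             for s1 in (+1, -1) for s3 in (+1, -1)]
```

In two dimensions the analogous trace over an odd site generates next-nearest-neighbor and four-spin couplings as well, which is why the flow below has to be truncated.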
This will generate a flow in J, analogous to the flow in t at the fixed point of λ in the ε expansion. To find the change in J, consider the four neighbors of an odd site. These are the only spins which interact with it. The multiplicative contribution to the partition function from the sum over the two values of the spin at the odd site is:

$e^{J (N_+ - N_-)} + e^{J (N_- - N_+)} = 2 \cosh(J (N_+ - N_-))$

where $N_\pm$ is the number of neighbors which are ±. Ignoring the factor of 2, the free energy contribution from this odd site is:

$F = \log(\cosh(J (N_+ - N_-))).$

This includes nearest neighbor and next-nearest neighbor interactions, as expected, but also a four-spin interaction which is to be discarded. To truncate to nearest neighbor interactions, consider that the difference in energy between all spins the same and equal numbers of + and − is:

$\Delta F = \ln(\cosh(4J)).$

From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8J. The difference in energy between all spins equal and nonstaggered but net zero spin is 4J. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies, or 6J. Since each link will contribute to two odd spins, the right value to compare with the previous one is half that:

$3J' = \ln(\cosh(4J)).$

For small J, this quickly flows to zero coupling. Large J's flow to large couplings. The magnetization exponent is determined from the slope of the equation at the fixed point.

Variants of this method produce good numerical approximations for the critical exponents when many terms are included, in both two and three dimensions.

## Applications

### Magnetism

The original motivation for the model was the phenomenon of ferromagnetism. Iron is magnetic; once it is magnetized it stays magnetized for a long time compared to any atomic time.
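The truncated recursion 3J′ = ln cosh(4J) can be analyzed numerically: bisection locates the nontrivial fixed point J*, and the slope dJ′/dJ = (4/3)tanh(4J*) there exceeds 1, confirming that the critical point is unstable under the flow (couplings below J* run to 0, couplings above it to ∞). A sketch; the estimate of ν from the slope uses the √2 length rescaling per step and is only as good as the truncation:

```python
import math

def J_next(J):
    """One step of the truncated recursion 3J' = ln cosh(4J)."""
    return math.log(math.cosh(4.0 * J)) / 3.0

# Bisection for the nontrivial fixed point J* of J' = J.
lo, hi = 0.3, 1.0          # J_next(0.3) < 0.3 and J_next(1.0) > 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if J_next(mid) < mid:
        lo = mid
    else:
        hi = mid
J_star = 0.5 * (lo + hi)

slope = (4.0 / 3.0) * math.tanh(4.0 * J_star)    # dJ'/dJ at the fixed point
nu = math.log(math.sqrt(2.0)) / math.log(slope)  # correlation-length exponent
```

The resulting J* ≈ 0.69 and ν ≈ 1.24 are crude (the exact two-dimensional value is ν = 1), which reflects the severity of keeping only the nearest-neighbor coupling.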
In the 19th century, it was thought that magnetic fields are due to currents in matter, and Ampère postulated that permanent magnets are caused by permanent atomic currents. The motion of classical charged particles could not explain permanent currents though, as shown by Larmor. In order to have ferromagnetism, the atoms must have permanent magnetic moments which are not due to the motion of classical charges.

Once the electron's spin was discovered, it was clear that the magnetism should be due to a large number of electrons spinning in the same direction. It was natural to ask how the electrons all know which direction to spin, because the electrons on one side of a magnet don't directly interact with the electrons on the other side. They can only influence their neighbors. The Ising model was designed to investigate whether a large fraction of the electrons could be made to spin in the same direction using only local forces.

### Lattice gas

The Ising model can be reinterpreted as a statistical model for the motion of atoms. Since the kinetic energy depends only on momentum and not on position, the statistics of the positions depends only on the potential energy, so the thermodynamics of the gas depends only on the potential energy of each configuration of atoms.

A coarse model is to make space a lattice and imagine that each position either contains an atom or it doesn't. The space of configurations is that of independent bits $B_i$, where each bit is either 0 or 1 depending on whether the position is occupied or not. An attractive interaction reduces the energy of two nearby atoms. If the attraction is only between nearest neighbors, the energy is reduced by $4JB_iB_j$ for each occupied neighboring pair.

The density of the atoms can be controlled by adding a chemical potential, which is a multiplicative probability cost for adding one more atom. A multiplicative factor in probability can be reinterpreted as an additive term in the logarithm of the probability, that is, in the energy.
The extra energy of a configuration with N atoms is changed by μN; the probability cost of one more atom is a factor of exp(−βμ). So the energy of the lattice gas is:

$E = - \frac{1}{2} \sum_{\langle i,j \rangle} 4 J B_i B_j + \sum_i \mu B_i$

Rewriting the bits in terms of spins, $B_i = (S_i + 1)/2$:

$E = - \frac{1}{2} \sum_{\langle i,j \rangle} J S_i S_j - \frac{1}{2} \sum_i (4 J - \mu) S_i$

For lattices where every site has an equal number of neighbors, this is the Ising model with a magnetic field h = (zJ−μ)/2, where z is the number of neighbors.

### Application to neuroscience

The activity of neurons in the brain can be modelled statistically. Each neuron at any time is either active + or inactive −. The active neurons are those that send an action potential down the axon in any given time window, and the inactive ones are those that do not. Because the neural activity at any one time is modelled by independent bits, Hopfield suggested that a dynamical Ising model would provide a first approximation to a neural network which is capable of learning.[14]

Following the general approach of Jaynes,[15][16] a recent interpretation of Schneidman, Berry, Segev and Bialek[17] is that the Ising model is useful for any model of neural function, because a statistical model for neural activity should be chosen using the principle of maximum entropy. Given a collection of neurons, a statistical model which can reproduce the average firing rate for each neuron introduces a Lagrange multiplier for each neuron:

$E = - \sum_i h_i S_i$

But the activity of each neuron in this model is statistically independent. To allow for pair correlations, when one neuron tends to fire (or not to fire) along with another, introduce pair-wise Lagrange multipliers:

$E= - \tfrac{1}{2} \sum_{ij} J_{ij} S_i S_j - \sum_i h_i S_i$

This energy function only introduces probability biases for a spin having a value and for a pair of spins having the same value.
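The change of variables $B_i = (S_i + 1)/2$ can be verified by brute force on a small ring. Reading the ½ above as accompanying a sum over unordered neighbor pairs (so each bond carries −2J B_iB_j, and correspondingly −J/2 S_iS_j on the spin side), the lattice-gas energy and the Ising energy with h = (zJ − μ)/2 differ by the same constant for every configuration. A sketch on a 6-site ring; the values of J and μ are arbitrary:

```python
from itertools import product

N, J, mu = 6, 1.0, 0.7
bonds = [(i, (i + 1) % N) for i in range(N)]  # ring: z = 2 neighbors per site
h = (2 * J - mu) / 2.0                         # h = (zJ − μ)/2 with z = 2

def E_gas(B):
    # −(1/2)·4J per unordered bond = −2J·B_i·B_j per bond, plus μ per atom
    return -2.0 * J * sum(B[i] * B[j] for i, j in bonds) + mu * sum(B)

def E_spin(S):
    # matching convention: −(1/2)·J per unordered bond = −J/2·S_i·S_j
    return -0.5 * J * sum(S[i] * S[j] for i, j in bonds) - h * sum(S)

diffs = set()
for B in product((0, 1), repeat=N):
    S = tuple(2 * b - 1 for b in B)            # B = (S+1)/2  ⇔  S = 2B − 1
    diffs.add(round(E_gas(B) - E_spin(S), 9))
```

Since an additive constant drops out of the Boltzmann weights, the two models have identical statistics.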
Higher order correlations are unconstrained by the multipliers. An activity pattern sampled from this distribution requires the largest number of bits to store in a computer, in the most efficient coding scheme imaginable, as compared with any other distribution with the same average activity and pairwise correlations. This means that Ising models are relevant to any system which is described by bits which are as random as possible, with constraints on the pairwise correlations and the average number of 1s, which frequently occurs in both the physical and social sciences.

### Spin Glasses

With the Ising model the so-called spin glasses can also be described, by the usual Hamiltonian

$\hat H=-\frac{1}{2}\,\sum J_{i,k}\,S_i\,S_k,$

where the S-variables describe the Ising spins, while the $J_{i,k}$ are taken from a random distribution. For spin glasses a typical distribution chooses antiferromagnetic bonds with probability p and ferromagnetic bonds with probability 1−p. These bonds stay fixed or "quenched" even in the presence of thermal fluctuations. When p = 0 we have the original Ising model. This system deserves interest in its own right; in particular it has "non-ergodic" properties leading to strange relaxation behaviour. Much attention has also been attracted by the related bond-diluted and site-diluted Ising model, especially in two dimensions, leading to intriguing critical behavior.[18]

## Footnotes

2. Newman MEJ, Barkema GT, Monte Carlo Methods in Statistical Physics, Clarendon Press, 1999
4. Ruelle (1969). Statistical Mechanics: Rigorous Results. New York: W.A. Benjamin Inc.
5. Dyson, F.J. (1969). "Existence of a phase-transition in a one-dimensional Ising ferromagnet". Comm. Math. Phys. 12: 91–107. Bibcode:1969CMaPh..12...91D. doi:10.1007/BF01645907.
6. Fröhlich, J.; Spencer, T. (1982). "The phase transition in the one-dimensional Ising model with 1/r² interaction energy". Comm. Math. Phys. 84.
7. Baxter, Rodney J.
(1982), Exactly solved models in statistical mechanics, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 690578
14. J. J. Hopfield (1982), "Neural networks and physical systems with emergent collective computational abilities", Proceedings of the National Academy of Sciences of the USA 79 (8): 2554–2558, Bibcode:1982PNAS...79.2554H, doi:10.1073/pnas.79.8.2554, PMC 346238, PMID 6953413.
15. Jaynes, E. T. (1957), "Information Theory and Statistical Mechanics", Physical Review 106 (4): 620, Bibcode:1957PhRv..106..620J, doi:10.1103/PhysRev.106.620.
16. Jaynes, Edwin T. (1957), "Information Theory and Statistical Mechanics II", Physical Review 108 (2): 171, Bibcode:1957PhRv..108..171J, doi:10.1103/PhysRev.108.171.
17. Elad Schneidman, Michael J. Berry, Ronen Segev and William Bialek (2006), "Weak pairwise correlations imply strongly correlated network states in a neural population", Nature 440 (7087): 1007–1012, arXiv:q-bio/0512013, Bibcode:2006Natur.440.1007S, doi:10.1038/nature04701, PMC 1785327, PMID 16625187.
18. J-S Wang, W Selke, VB Andreichenko, and VS Dotsenko (1990), "The critical behaviour of the two-dimensional dilute Ising model", Physica A 164: 221–239, doi:10.1016/0378-4371(90)90196-Y

## References

• Baxter, Rodney J. (1982), Exactly solved models in statistical mechanics, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-083180-7, MR 690578
• K. Binder (2001), "Ising model", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• Stephen G. Brush (1967), History of the Lenz-Ising Model. Reviews of Modern Physics (American Physical Society) vol. 39, pp 883–893. (doi:10.1103/RevModPhys.39.883)
• Baierlein, R. (1999), Thermal Physics, Cambridge: Cambridge University Press, ISBN 0-521-59082-5
• Gallavotti, G.
(1999), Statistical mechanics, Texts and Monographs in Physics, Berlin: Springer-Verlag, ISBN 3-540-64883-6, MR 1707309
• Huang, Kerson (1987), Statistical mechanics (2nd edition), Wiley, ISBN 0-471-81518-7
• Ising, E. (1925), "Beitrag zur Theorie des Ferromagnetismus", Z. Phys. 31: 253–258, Bibcode:1925ZPhy...31..253I, doi:10.1007/BF02980577
• Itzykson, Claude; Drouffe, Jean-Michel (1989), Théorie statistique des champs, Volume 1, Savoirs actuels (CNRS), EDP Sciences Editions, ISBN 2-86883-360-8
• Itzykson, Claude; Drouffe, Jean-Michel (1989), Statistical field theory, Volume 1: From Brownian motion to renormalization and lattice gauge theory, Cambridge University Press, ISBN 0-521-40805-9
• Ross Kindermann and J. Laurie Snell (1980), Markov Random Fields and Their Applications. American Mathematical Society. ISBN 0-8218-3381-2.
• Kleinert, H (1989), Gauge Fields in Condensed Matter, Vol. I, "Superflow and Vortex Lines", pp. 1–742, Vol. II, "Stresses and Defects", pp. 743–1456, World Scientific (Singapore); Paperback ISBN 9971-5-0210-0
• Kleinert, H and Schulte-Frohlinde, V (2001), Critical Properties of φ⁴-Theories, World Scientific (Singapore); Paperback ISBN 981-02-4658-7
• Lenz, W. (1920), "Beiträge zum Verständnis der magnetischen Eigenschaften in festen Körpern", Physikalische Zeitschrift 21: 613–615.
• Barry M. McCoy and Tai Tsun Wu (1973), The Two-Dimensional Ising Model. Harvard University Press, Cambridge Massachusetts, ISBN 0-674-91440-6
• Montroll, Elliott W.; Potts, Renfrey B.; Ward, John C.
(1963), "Correlations and spontaneous magnetization of the two-dimensional Ising model", Journal of Mathematical Physics 4 (2): 308–322, Bibcode:1963JMP.....4..308M, doi:10.1063/1.1703955, ISSN 0022-2488, MR 0148406
• Onsager, Lars (1944), "Crystal statistics. I. A two-dimensional model with an order-disorder transition", Phys. Rev. (2) 65 (3–4): 117–149, Bibcode:1944PhRv...65..117O, doi:10.1103/PhysRev.65.117, MR 0010315
• Onsager, Lars (1949), "Discussion", Nuovo Cimento (suppl.) 6: 261
• John Palmer (2007), Planar Ising Correlations. Birkhäuser, Boston, ISBN 978-0-8176-4248-8.
• Istrail, Sorin (2000), "Statistical mechanics, three-dimensionality and NP-completeness. I. Universality of intractability for the partition function of the Ising model across non-planar surfaces (extended abstract)", Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, ACM, pp. 87–96, MR 2114521
• Yang, C. N. (1952), "The spontaneous magnetization of a two-dimensional Ising model", Physical Rev. (2) 85 (5): 808–816, Bibcode:1952PhRv...85..808Y, doi:10.1103/PhysRev.85.808, MR 0051740

## Source

Content is authored by an open community of volunteers and is not produced by, affiliated with, or reviewed by PediaView.com. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Ising model", which is available in its original form here: http://en.wikipedia.org/w/index.php?title=Ising_model
http://mathoverflow.net/questions/41416?sort=newest
## Glueing triangulated categories

Hello! Given a triangulated category, one can look for semiorthogonal decompositions into (simpler?) triangulated subcategories. I'd like to know if there's a way to attack the opposite problem, i.e. to classify the ways two given triangulated categories can be composed to give a big triangulated category which decomposes semiorthogonally into the given ones. Does anybody know about this? I hope this question isn't too vague. Thank you!

## 3 Answers

If $T = \langle A ,B\rangle$ is a semiorthogonal decomposition and $\alpha_\ast:A \to T$, $\beta_\ast:B \to T$ are the embedding functors then one can consider the functor $\beta^\ast\circ\alpha_\ast:A \to B$ (or its right adjoint $\alpha^!\circ\beta_\ast:B \to A$). This is called the gluing functor. Morally, the ways of gluing $A$ to $B$ are classified by gluing functors --- to each functor one should be able to associate a triangulated category $T$ with a s.o.d. into $A$ and $B$ for which the gluing functor is isomorphic to the given one. Because of the well-known problems with nonfunctoriality of the cone, one cannot hope to prove this precise statement. However, if everything has a DG-enhancement, one can. Indeed, assume that $A = Hot(R)$, $B = Hot(S)$, where $R$ and $S$ are pretriangulated DG-algebras and assume that the functor $A \to B$ is realized by a $R-S$-bimodule $M$. Then one can consider a DG-algebra of the form $$U = \left(\begin{array}{cc} R & 0 \cr M & S \end{array}\right).$$ Then $T := Hot(U)$ should give what you want.

- Sasha, that looks great, thanks alot. Is there a reference for the details? – Hanno Becker Oct 7 2010 at 20:35
- I am afraid there is no reference. This is a folklore. – Sasha Oct 8 2010 at 5:30
[Lidia Angeleri Hügel, Steffen Koenig, Qunhua Liu: On the uniqueness of stratifications of derived module categories] at http://arxiv.org/abs/1006.5301 (and other recent work by Koenig) should be relevant: this is Jordan-Hölder for (appropriate) triangulated categories.

- This is very interesting, thank you! – Hanno Becker Oct 7 2010 at 20:34

I'm not sure but Proposition 1.16 in the paper: http://arxiv.org/pdf/0911.0172 by Iyama-Kato-Miyachi might be related to your question.

- Thank you, Yann! – Hanno Becker Oct 7 2010 at 20:34
http://mathoverflow.net/questions/56378/semisimplicity-of-etale-cohomology-representations
## Semisimplicity of étale cohomology representations

Let $K$ be a number field and $G=Gal(\overline{K}/K)$ the absolute Galois group of $K$. Let $\ell$ be a prime number. Let $A/K$ be an abelian variety. Then the representation of $G$ on $V_\ell(A)$ is semisimple. This is the famous theorem of Faltings (Invent. Math. 73). Now let $X/K$ be a smooth projective variety and $0\le q\le 2\dim(X)$, and define $\overline{X}=X_{\overline{K}}$.

Question. Is it known that the representation of $G$ on $H^q(\overline{X}, \mathbb{Q}_\ell)$ is semisimple?

Remark. The answer is yes for $q=1$, because $H^1(\overline{X}, \mathbb{Q}_\ell)$ is dual to $V_\ell(A)$ where $A$ is the Albanese variety of $X$. I would also be interested in the case where the number field $K$ is replaced by a global function field (say), and $\ell$ is assumed to be coprime to the characteristic.

- I strongly suspect that the answer is "no" in the number field case, and it is surely "no" over finite fields (already). – Mikhail Bondarko Feb 23 2011 at 10:54
- Thx for your comment! I somehow expected a "not known" in the number field case as well. Why is the answer a definite "no" over finite fields? I do not know how to prove this. Can you give me a view details on that? – Sebastian Petersen Feb 23 2011 at 12:09
- Sorry, just to be sure: Do you mean "not known" or "false" in the case of a finite ground field? – Sebastian Petersen Feb 23 2011 at 12:21
- One more comment: My question is exactly conjecture $SS^i(X)$ in Tate's article "Conjectures on algebraic cycles on l-adic cohomology", Proceedings of Symposia in Pure Mathematics 55 (the motives volume I). So my question is, whether there has been progress on this conjecture since this article was written.
– Sebastian Petersen Feb 23 2011 at 12:32 Joel Bellaiche's Hawaii notes people.brandeis.edu/~jbellaic/BKHawaii4.pdf (page 5) say: "This is sometimes called “conjecture of Grothendieck-Serre”. This is known for abelian varieties, by a theorem that Faltings proved at the same times he proved the Mordell’s conjecture, and in a few other cases (some Shimura varieties, for example)." (But that's all he says.) – fherzig Feb 23 2011 at 15:56 ## 1 Answer This semi-simplicity is a part of what is called the Tate conjecture. It is generally believed to be true, but little is known about it outside the case of $H^1$, in either the finite field or global field case. Searching on mathscinet for "Tate conjecture" (or googling) should turn up the relevant literature. - This conjecture is discussed in the recent interview with John Tate (ams.org/notices/201103/rtx110300444p.pdf) in the Notices of the AMS. – Chandan Singh Dalawat Feb 24 2011 at 4:19
http://mathoverflow.net/questions/76126/lifting-the-isomorphisms-between-abelian-schemes-over-pd-thickenings/76132
## lifting the isomorphisms between abelian schemes over PD thickenings Assume that X and Y are abelian schemes (or even abelian varieties) over a base T. Suppose $S \to T$ is a PD nilpotent thickening (i.e. the ideal of $S$ in $T$ is a nilpotent divided power ideal) and S is of characteristic $p>0$. Let $X_{0}$ and $Y_{0}$ be the reductions of $X$ and $Y$ to $S$. If $f_{0} : X_{0} \to Y_{0}$ is an isomorphism over S, is it true that this isomorphism (if it can be lifted!) lifts to an isomorphism $f : X \to Y$? One may also assume that $H^1_{\mathrm{cris}}(f_{0})$ preserves the Hodge filtration. - I deleted my answer since I think I had misunderstood the question. Are you asking whether any lift of $f_0$ to a morphism $f:X \to Y$ is necessarily an isomorphism? – ulrich Sep 23 2011 at 11:31 Yes! In fact this is what I ask and I think your answer was correct. And if I remember properly your counterexample also satisfied the condition on the Hodge filtration. So it perfectly answered my question. Thank you very much. – Jack Sep 23 2011 at 18:00 Now I am confused since in my example there was no lift. In fact, I think that if a lift exists then it must indeed be an isomorphism. – ulrich Sep 24 2011 at 14:47 As far as I remember, in your answer you took $S = Spec(\mathbb{Z}/p)$ and $T = Spec(\mathbb{Z}/p^2)$ and then, using the fact that the versal deformation space of an elliptic curve $E$ over $S$ is isomorphic to $Spec(\mathbb{Z}_p[[x]])$, you argued that there are lifts of $E$ that are not isomorphisms over $T$. So in particular the identity map of $E$ does not lift to an isomorphism. – Jack Sep 24 2011 at 19:44 Yes, that's right. Since that's what you wanted I will undelete the answer. – ulrich Sep 25 2011 at 6:51 ## 1 Answer No, this is very far from being true. For a counterexample, let $S = Spec(\mathbb{Z}/p)$ and $T = Spec(\mathbb{Z}/p^2)$. 
The versal deformation space of an elliptic curve $E$ over $S$ is isomorphic to `$Spec(\mathbb{Z}_p[[x]])$`, so lifts of $E$ to $T$ are parametrized by the set of homomorphisms of local algebras $Hom(\mathbb{Z}_p[[x]], \mathbb{Z}/p^2) = p\mathbb{Z}/p^2$, which has $p$ elements. Since the lifts are pairwise distinct, there do exist lifts for which the identity map of $E$ does not lift to an isomorphism. - Won't the condition on the Hodge filtration sort this out though? More generally, I thought the statement the OP wants follows from Serre-Tate theory...? – anon Sep 22 2011 at 19:08 I don't understand the relevance of Serre-Tate theory to the question. In any case, according to the OP what I wrote does answer his question. – ulrich Sep 25 2011 at 6:53
http://physics.stackexchange.com/questions/56239/when-i-ride-my-bike-does-half-of-the-energy-go-into-the-earth/56244
# When I ride my bike, does half of the energy go into the earth? When I ride my bike, does half of the energy go into the earth because of Newton's third law? Does the energy transferred to the earth turn into heat in my brakes when I use them? If the earth were 2000 times less massive than it is, and I applied the same energy to my pedal, would my bike move (relative to an observer not connected to earth) at the same speed as it would if the earth were normal? I think if you think about it these two questions are the same. - Hi Velox. Your bike won't waste any fuel on the earth, because the contact force is provided by the weight itself. And what do you mean by the phrase "energy in the earth"? – Ϛѓăʑɏ βµԂԃϔ Mar 8 at 3:04 – anna v Mar 8 at 7:37 ## 2 Answers No, because even though the force that you exert on the earth is equal and opposite to the force it exerts back on you, you're not doing the same amount of work on the earth as the earth does on you. Your kinetic energy increases due to the work done by the earth on you. Remember that $W = F \cdot d$; your bicycle moves a lot due to this force, but the earth doesn't really move much at all. Another way to think about this is in terms of kinetic energy. $\mathrm{KE} = \frac{1}{2} mv^2$, so if your velocity is high, so is your kinetic energy. The earth's velocity is low, and so is its kinetic energy. So the forces are equal and opposite, and the impulse, or change in momentum, is too, but the kinetic energy stays mostly with you. - – Chris White Mar 8 at 4:58 Thanks! I was wondering about that. I was needing it for another answer too. – krs013 Mar 8 at 5:25 I think you are confusing momentum and energy. 
If you ignore air friction then it would take almost no energy to cycle in a vacuum, just the energy to overcome friction in the wheel bearings (well, that and the inability to breathe). Cycling at reasonably quick speeds, most of the energy goes into aerodynamic drag, that is, moving the air out of the way, and so ultimately into heat in the air. Yes - when you brake, all the kinetic energy goes into the brake pads and is then dissipated into the air as the pads heat up. - You are ignoring friction of the bike wheels on the ground, without which no transport can happen, and also ignoring the angular momentum induced on the earth, extremely small but calculable. ni.com/white-paper/13020/en . See also the connected answer by Joshphysics physics.stackexchange.com/questions/56245/… – anna v Mar 8 at 5:20 Is there any work done by the friction of the wheels on the ground? - There is a force but no movement. There are friction losses in the tyres due to compression and expansion of the rubber but there shouldn't be any in the road contact unless you are skidding. – Martin Beckett Mar 8 at 5:31 – anna v Mar 8 at 5:35 Think of a large turntable, motionless. Ride a bicycle close to the periphery. What will happen? – anna v Mar 8 at 6:13 @annav - good point, I was thinking there was a momentum transfer but not energy – Martin Beckett Mar 9 at 4:21
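The mass asymmetry in the first answer can be made quantitative: with equal and opposite impulses, kinetic energy splits in inverse proportion to mass, so the Earth's share is vanishingly small. A quick sketch (the masses and impulse below are illustrative assumptions, not numbers from the thread):

```python
# Equal-and-opposite impulse shared by rider and Earth; since KE = p^2/(2m),
# the kinetic energy splits in inverse proportion to mass.
m_rider = 80.0      # kg, rider + bike (assumed)
M_earth = 5.97e24   # kg

p = 400.0           # kg*m/s, magnitude of the shared impulse (assumed)

ke_rider = p**2 / (2 * m_rider)
ke_earth = p**2 / (2 * M_earth)

print(ke_rider)                 # 1000 J for the rider
print(ke_earth / ke_rider)      # ~1.3e-23: the Earth's share is negligible
```

The ratio is exactly `m_rider / M_earth`, which is why essentially all of the pedalling energy ends up as the rider's kinetic energy (and later as heat in the brake pads and the air).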
http://mathoverflow.net/questions/39960/geodesic-metrics-that-admit-dilatation-at-each-point
## Geodesic metrics that admit dilatation at each point Consider the class of geodesic metrics $g$ on manifolds that have the following property: for each point $x$ there exists a neighbourhood $U_x$ and a smooth vector field $v_x$ in $U_x$ that vanishes at $x$ and whose flow (for small time) dilatates $g$ by a constant factor. Let us call such metrics dilatatable. An obvious example is provided by Euclidean $\mathbb R^n$: the flow of the field $\sum_i x_i \frac{\partial}{\partial x_i}$ dilatates the Euclidean metric by a constant factor. More generally one can take any Banach space. I would like to make a guess about the structure of such metrics in general. Guess. Suppose $g$ on $M^n$ is dilatatable. Then there exists a triangulation of $M^n$ such that the restriction of the metric $g$ to each simplex is flat with respect to the flat structure on the simplex, and $g$ is flat on the complement of the union of all codimension-$2$ simplexes. The first question is the following: was such a class of metrics considered somewhere, and is this guess correct? Are there obvious counterexamples? The second part of the question is about examples. It is not hard to construct an example of such a metric if we don't require $M^n$ to be a smooth manifold. Namely, we can take any polyhedral metric on $M^n$, i.e. glue $M^n$ from a union of Euclidean simplexes (glue the boundaries by isometries). Then for each point there is a conical neighbourhood, and obviously we can always scale this neighbourhood by the radial field emanating from $x$. So now comes the Second question. Take a topological manifold $M^n$ of dimension $n<7$ with such a polyhedral metric. It is known then that such a manifold has a smooth structure (because a PL structure in dimension up to $6$ always defines a unique smooth structure). 
Is it possible to choose this smooth structure in such a way that the polyhedral metric is dilatatable for the smooth structure? The answer to this question is positive for $n=2$, but I don't know what happens already for $n=3$. At the same time, there are non-trivial examples in higher dimensions, coming from complex geometry. For example one can quotient some complex tori $\mathbb T^n$ by a finite group of isometries to get $\mathbb CP^n$; the obtained polyhedral metric on $\mathbb CP^n$ is dilatatable with respect to the canonical complex (and hence smooth) structure on $\mathbb CP^n$. - I can't figure out your first sentence. As $v_x$ is said to fix $x,$ this suggests an action or infinitesimal action on $U_x,$ but then $v_x$ acts on the set of metrics. Could you please expand on this a bit, maybe say more about how $\mathbf R^n$ is an example? – Will Jagy Sep 25 2010 at 18:24 @Will, thanks for your remark, that was sloppy indeed, I meant that the flow generated by $v_x$ should dilatate the metric constantly. – Dmitri Sep 25 2010 at 18:55 Full marks for using the word "dilatation" but I think "dilatatable" is one syllable too far. – gowers Sep 26 2010 at 8:17 V. N. Berestovskii, Similarly homogeneous locally complete spaces with an intrinsic metric, Izvestiya VUZov, Matematika, 2004, no. 11(510), pp. 3-22. – Anton Petrunin Sep 26 2010 at 16:19 ## 2 Answers Concerning the first question: your description is incomplete, even in the homogeneous case. There are homogeneous geodesic metrics that admit smooth families of dilatations but are not made of flat Banach metrics. In particular, some Carnot-Caratheodory metrics are such examples. 
For example, consider the Heisenberg group $H$, which can be thought of as $\mathbb R^3$ equipped with the following group law: $$(x,y,z)\cdot(x',y',z') = (x+x',y+y',z+z'+x'y) .$$ Observe that for every $t\in\mathbb R$, the map $\phi_t:(x,y,z)\mapsto (e^tx,e^ty,e^{2t}z)$ is a group homomorphism, and these maps form a smooth 1-parameter group of diffeomorphisms (and hence a flow generated by a smooth vector field). Consider a left-invariant two-dimensional distribution $V\subset TH$ spanned by left-invariant vector fields $X$ and $Y$ whose values at $(0,0,0)$ equal $\partial/\partial x$ and $\partial/\partial y$, respectively. Equip this distribution with a left-invariant Euclidean metric. The distribution is completely non-integrable, so we get a Carnot-Caratheodory metric on $H$. Observe that $\phi_t$ maps $X$ to $e^tX$ and $Y$ to $e^tY$, hence it is an $e^t$-dilatation of the Carnot-Caratheodory metric. The Carnot-Caratheodory metric is very different from Banach metrics. For example, its Hausdorff dimension equals 4. - Sergei, thanks a lot for the answer! It makes clear that Banach metrics are not enough, and also that the existence of a polyhedral stratification is too optimistic. Still I wonder if some kind of decent stratified structure always appears for these dilatatable metrics... – Dmitri Sep 25 2010 at 21:01 I believe there is one. Let's say that two points are connected if one is an image of the other under one of these local dilatations. Two points are equivalent if one can get from one to the other via a chain of such connections. It seems that equivalence classes form a stratification into smooth submanifolds, but I did not try to check the details. – Sergei Ivanov Sep 25 2010 at 21:24 Yes, this sounds plausible. I wonder, if you impose the condition that the metric is complete, $M^n$ is simply connected, and the size of $U_x$ for all $x$ is at least $\varepsilon$ -- can there be a classification of all dilatatable metrics in this case? 
(So this should include flat ones and Carnot-Caratheodory.) Do you think this question was studied? Finally, I am not sure that I interpret your second phrase correctly -- that "the description is not complete even in the homogeneous case". What does "homogeneous" mean here? The transitivity of the isometry group? – Dmitri Sep 25 2010 at 21:49 Yes, I meant transitivity of the isometry group. If sizes of neighborhoods are bounded away from zero, then all points are equivalent in the sense of my previous comment, hence all these neighborhoods are isometric, so the space is locally homogeneous. I vaguely remember that Berestovskij proved that every homogeneous geodesic metric is a Finsler Carnot-Caratheodory metric. I don't know whether anyone studied which of those admit dilatations but this should not be hard. – Sergei Ivanov Sep 25 2010 at 22:15 I think homogeneous examples come from nilpotent Lie groups. The low dimensional ones tend to have expanding automorphisms, but most high-dimensional ones do not. I think the list is complicated. The possible structure near lower-dimensional strata, where the size of neighborhoods goes to 0, seems interesting and intricate. But maybe it's worth first settling it assuming the Hausdorff dimension is normal. – Bill Thurston Sep 26 2010 at 7:16 Relative to comments by Sergei Ivanov and Bill Thurston, maybe this line of research concerning "metric spaces with dilations" or "dilation structures" provides a precise answer, more general than Berestovskii's result. See this introduction and dig into the biblio. Concerning examples related to Carnot-Caratheodory geometry and nilpotent groups (precisely: "Carnot groups"), they appear naturally as models of the (metric) tangent space to a point in a space with dilations. 
If you can stand a more algebraic account, see emergent algebras, where it is proven that this is not really a metric-induced phenomenon. - Marius, thanks a lot! I will have a look. – Dmitri Nov 9 2010 at 23:15
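The homomorphism property of the dilations $\phi_t$ in the Heisenberg example above is a one-line computation; here is a quick numerical sanity check (not from the thread) of $\phi_t(a\cdot b)=\phi_t(a)\cdot\phi_t(b)$ at random points:

```python
# Numerical check that phi_t: (x,y,z) -> (e^t x, e^t y, e^{2t} z) is an
# automorphism of the Heisenberg group law (x,y,z)*(x',y',z') =
# (x+x', y+y', z+z'+x'y).
import math
import random

def mul(a, b):
    # Heisenberg group law
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2] + b[0] * a[1])

def phi(t, a):
    # the dilation phi_t
    return (math.exp(t) * a[0], math.exp(t) * a[1], math.exp(2 * t) * a[2])

random.seed(0)
for _ in range(1000):
    a = tuple(random.uniform(-2, 2) for _ in range(3))
    b = tuple(random.uniform(-2, 2) for _ in range(3))
    t = random.uniform(-1, 1)
    lhs = phi(t, mul(a, b))
    rhs = mul(phi(t, a), phi(t, b))
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
print("phi_t(a*b) == phi_t(a)*phi_t(b) on all samples")
```

The third coordinate is where the scaling $e^{2t}$ matters: the cross term $x'y$ picks up a factor $e^t\cdot e^t = e^{2t}$, matching the scaling of $z$.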
http://cms.math.ca/Reunions/hiver12/abs/mpr
CMS Winter Meeting 2012, Fairmont Le Reine Elizabeth (Montréal), December 7 - 10, 2012 Mathematical Physics - Random Matrices and Integrable Systems Org: Marco Bertola and Dmitry Korotkin (Concordia) [PDF] MARCO BERTOLA, Concordia University Universality in unitary random matrix models  [PDF] It is very well known that for Hermitian matrices the scaling limit of the eigenvalue statistics is a determinantal point field with kernel given by the sine kernel in the bulk and the Airy kernel (generally) at the edge. We consider the normal matrix model with external field $U=\mathrm{Tr}(M M^\dagger) - \mathrm{Tr}(\mathrm{Harm}(M))$, where $\mathrm{Harm}$ stands for a (locally) harmonic function. We give a conjectural form for the strong asymptotics of the corresponding orthogonal polynomials. This form has been verified in a few outstanding cases. We then show how to use this conjectural form to prove universality in the bulk and on the boundary of the support region for the asymptotic location of eigenvalues, where the limiting kernels are simple expressions in terms of exponentials and the complementary error function. Thus we prove universality in all cases where the conjecture has been, or will be, verified. The proof does not require anything more than some general features. PAVEL BLEHER, Indiana University-Purdue University Indianapolis Exact solution of the six-vertex model with DWBC. Critical line between disordered and antiferroelectric phases  [PDF] We obtain the large $N$ asymptotics of the partition function $Z_N$ of the six-vertex model with domain wall boundary conditions on the critical line between the disordered and antiferroelectric phases. Using the weights $a=1-x$, $b=1+x$, $c=2$, $|x|<1$, we prove that, as $N\rightarrow\infty$, $Z_N=CF^{N^2}N^{1/12}\left(1+O(N^{-1})\right)$, where $F$ is given by an explicit expression in $x$ and the $x$-dependency in $C$ is determined. 
Our result gives a complete proof and substantially strengthens the one given in the physics literature by Bogoliubov, Kitaev and Zvonarev. Furthermore, we prove that the free energy exhibits an infinite order phase transition between the disordered and antiferroelectric phases. Our proofs are based on the large $N$ asymptotics for the underlying orthogonal polynomials, which involve a non-analytic weight function; the Deift-Zhou nonlinear steepest descent method applied to the corresponding Riemann-Hilbert problem; and the Toda equation for the tau-function. This is joint work with Thomas Bothner. ANTON DZHAMAY, University of Northern Colorado Discrete Hamiltonian Structure of Schlesinger Transformations  [PDF] Schlesinger transformations are algebraic transformations of a Fuchsian system that preserve its monodromy representation and act on the characteristic indices of the system by integral shifts. One of the main reasons for studying these transformations is the relationship between Schlesinger transformations and discrete Painlevé equations; this is also the main motivation behind our work. In this talk we show how to write an elementary Schlesinger transformation as a discrete Hamiltonian system w.r.t. the standard symplectic structure on the space of Fuchsian systems. We also show how such transformations reduce to discrete Painlevé equations by computing two explicit examples, d-$P\big(D_{4}^{(1)}\big)$ (or difference Painlevé V) and d-$P\big(A_{2}^{(1)*}\big)$. In considering these examples we also illustrate the role played by the geometric approach to Painlevé equations not only in determining the type of the equation, but also in studying the relationship between different explicit forms of equations of the same type. This is joint work with Tomoyuki Takenawa (Tokyo University of Marine Science and Technology) and Hidetaka Sakai (The University of Tokyo). 
JOHN HARNAD, Concordia University and CRM Finite Dimensional Tau Functions  [PDF] We show how a microcosm of the $\tau$-function approach to the KP hierarchy developed by the Kyoto group, consisting of solutions having a finite number of degrees of freedom, may be studied within the setting of finite dimensional Grassmannians. This gives both a Grassmannian and fermionic interpretation of the determinantal formula of Gekhtman and Kasman, and makes evident the origin of the "rank-$1$'' condition characterizing finite dimensional reductions. In particular, this includes the well-known cases of polynomial $\tau$-functions, those associated to Calogero-Moser pole dynamics, multisolitons and their degenerations. It also sheds light on the recently introduced notion of "convolution flows". (Based on joint work with F. Balogh and T. Dinis da Fonseca) DMITRY KOROTKIN, Concordia University Baker-Akhiezer spinor and Bergman tau-function on moduli spaces of meromorphic differentials  [PDF] We derive variational formulas of Rauch-Ahlfors type on moduli spaces of meromorphic differentials on Riemann surfaces. In particular, we show that the derivatives of the Szeg\"o kernel with respect to homological coordinates on these spaces are expressed via the Hirota derivative of that kernel. This formula is used to derive variational formulas for the Baker-Akhiezer kernel, which in particular encode the KP-type hierarchies, as well as the dependence of the Baker-Akhiezer kernel on the moduli of the Riemann surface. We also define the Bergman tau-function on these spaces, compute it in several important special cases and describe it as a section of an appropriate line bundle; this allows one to express the Hodge class on these moduli spaces in terms of the tautological class. ANDREW MCINTIRE, Bennington College Chern-Simons invariant of infinite volume hyperbolic 3-manifolds  [PDF] We define a Chern-Simons invariant for a certain class of infinite volume hyperbolic 3-manifolds. 
We then prove an expression relating the Bergman tau function on a cover of the Hurwitz space to the lifting of the function $F$ defined by Zograf on Teichm\"uller space, and another holomorphic function on the cover of the Hurwitz space which we introduce. If the point in the cover of the Hurwitz space corresponds to a Riemann surface $X$, then this function is constructed from the renormalized volume and our Chern-Simons invariant for the bounding 3-manifold of $X$ given by Schottky uniformization, together with a regularized Polyakov integral relating determinants of Laplacians on $X$ in the hyperbolic and singular flat metrics. Combining this with a result of Kokotov and Korotkin, we obtain a similar expression for the isomonodromic tau function of Dubrovin. We also obtain a relation between the Chern-Simons invariant and the eta invariant of the bounding 3-manifold, with defect given by the phase of the Bergman tau function of $X$. ANTHONY METCALFE, KTH, Royal Institute of Technology, Sweden Universality classes of lozenge tilings of a polyhedron  [PDF] A regular hexagon can be tiled with lozenges of three different orientations. Letting the hexagon have sides of length $n$, and the lozenges have sides of length $1$, we can consider the asymptotic behaviour of a typical tiling as $n$ increases. Typically, near the corners of the hexagon there are regions of "frozen" tiles, and there is a "disordered" region in the center which is approximately circular. More generally one can consider lozenge tilings of polyhedra with more complex boundary conditions. The local asymptotic behaviour of tiles near the boundary of the equivalent "frozen" and "disordered" regions is of particular interest. In this talk, we shall discuss work in progress in which we classify necessary conditions under which such tiles behave asymptotically like a determinantal random point field with the Airy kernel, and also with the Pearcey kernel. 
We do this by considering an equivalent interlaced discrete particle system. CHRISTOPHER SINCLAIR, University of Oregon Kernel Asymptotics for the Mahler Ensemble of Real Polynomials  [PDF] The Mahler measure of a polynomial is the absolute value of the lead coefficient times the product of the absolute values of the roots outside the unit circle. The set of degree $N$ polynomials with Mahler measure at most 1 forms a bounded subset of $\mathbb{R}^{N+1}$. The roots of polynomials chosen uniformly from this region yield a Pfaffian point process on the complex plane similar to that of Ginibre's real ensemble but with a different (sub-exponential) weight. The limiting density of roots is uniform measure on the unit circle, and we discuss the scaling limits for the matrix kernel in a neighborhood of a point on the unit circle. New phenomena appear in a neighborhood of 1, since the spectrum consists of both real roots and complex conjugate pairs. Relationships with the related determinantal ensemble (of roots of complex polynomials) will be discussed as well as an electrostatic and matrix model for the ensemble. JACEK SZMIGIELSKI, University of Saskatchewan Two applications of Cauchy biorthogonal polynomials  [PDF] Cauchy biorthogonal polynomials were introduced by M. Bertola, M. Gekhtman and the speaker. They appeared originally in the computations of peakon solutions to the Degasperis-Procesi equation. In this talk I will review basic properties of this class of polynomials and describe the highlights of two recent applications: one, to the solution of a certain inverse problem associated with a system of equations put forward by Geng and Xue; the other, to the computation of the correlation kernels for the Cauchy two-matrix model with Laguerre type one-body interactions. BALINT VIRAG, University of Toronto Random operators at the edge  [PDF] The density of states of a random GUE matrix at the edge behaves like $x^{1/2}$. 
The large-$n$ limit of this matrix is the Stochastic Airy Operator, whose ground state has Tracy-Widom distribution. The Painlevé II formulas for this distribution can be derived using the random operator. The density of states of some natural classes of matrix models has a different power law at the edge. I will describe the conjectured limiting operators and state some open problems about their behavior. ## Sponsors © Société mathématique du Canada : http://www.smc.math.ca/
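As a side note on Sinclair's abstract: the Mahler measure it defines is easy to compute numerically for a concrete polynomial. A minimal sketch (assuming numpy is available; note that roots found numerically very close to the unit circle may be misclassified by rounding):

```python
# Mahler measure: |lead coefficient| times the product of |root| over
# all roots lying outside the unit circle.
import numpy as np

def mahler_measure(coeffs):
    """coeffs: polynomial coefficients, highest degree first."""
    roots = np.roots(coeffs)
    m = abs(coeffs[0])
    for r in roots:
        if abs(r) > 1:
            m *= abs(r)
    return m

# x^2 - x - 1 has roots (1 +/- sqrt(5))/2; only the golden ratio lies
# outside the unit circle, so the Mahler measure is (1+sqrt(5))/2.
print(mahler_measure([1, -1, -1]))  # ~1.618
```

For example, `mahler_measure([1, -5, 6])` gives 6, since both roots (2 and 3) of $x^2-5x+6$ lie outside the unit circle.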
http://physics.stackexchange.com/questions/tagged/waves+wavelength
# Tagged Questions 2answers 135 views ### De Broglie wavelength, frequency and velocity - interpretation Two fundamental equations regarding wave-particle duality are: $$\lambda = \frac{h}{p}, \\ \nu = E/h .$$ We talk about de Broglie wavelength, is it meaningful to talk about de Broglie frequency ... 1answer 212 views ### De Broglie equation What is the de Broglie wavelength? Also, does the $\lambda$ sign in the de Broglie equation stand for the normal wavelength or the de Broglie wavelength? If $\lambda$ is the normal wavelength of a ... 2answers 867 views ### Frequency of the sound when blowing in a bottle I'm sure you have tried sometime to make a sound by blowing in an empty bottle. Of course, the tone/frequency of the sound modifies if the bottle changes its shape, volume, etc. I am interested in ... 1answer 151 views ### How to reconstruct information from a graph of an oscillation? [closed] We are given a graph of the position of a wave (amplitude). How can we calculate the wavelength, frequency and the maximum speed of a particle attached to that wave? We have Speed = wave length ... 0answers 36 views ### How do you superimpose two or more signals to occupy a fixed area of space with the resultant summed wave? Is it possible to superimpose two or more signals all sent from different directions as a standing wave with the resulting summed wave occupying a fixed area of space that is also a complex area? Do ... 2answers 3k views ### Can the equation $v=\lambda f$ be made true even for non sinusoidal waves? The known relation between the speed of a propagating wave, the wave length of the wave, and its frequency is $$v=\lambda f$$ which is always true for any periodic sinusoidal waves. Now consider: ... 3answers 790 views ### Why is it necessary for an object to have a bigger size than the wavelength of light in order for us to see it? 
I keep hearing this rule that an object must have a bigger size than the wavelength of light in order for us to see it, and though I don't have any professional relationship with physics, I want to ...
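As a quick numerical illustration of the de Broglie relation $\lambda = h/p$ quoted in the first question above (the electron speed below is an arbitrary illustrative choice):

```python
# de Broglie wavelength lambda = h / p for an electron at 1% of the
# speed of light (slow enough that p = m*v is a good approximation).
h = 6.626e-34      # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

v = 0.01 * c
wavelength = h / (m_e * v)
print(wavelength)  # ~2.4e-10 m, roughly the size of an atom
```

This also illustrates the last question in the list: visible light (wavelength ~5e-7 m) is thousands of times longer than atomic scales, which is why individual atoms cannot be resolved optically.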