url: string, 17-172 characters
text: string, 44-1.14M characters
metadata: string, 820-832 characters
http://physics.stackexchange.com/questions/20432/gauss-law-changes-in-the-magnitude-of-e-field-inside-the-closed-surface
# Gauss' law - changes in the magnitude of E field inside the closed surface Gauss's law says that the flux through a closed surface which contains neither a sink nor a source will be zero. It's quite clear that all field lines will have to exit somehow, but the strength of the E-field is also proportional to the inverse of the distance squared. So if, for example, we have a cube, and the E field is perpendicular to one of the sides, the electric flux through that one side will be $A * E$ = $A$ * $kq \over r^2$. But on the opposite side, the distance from the source of the E field will be larger, so the magnitude of the E field should be smaller. Where is my misconception? Thank you. EDIT: Okay, the point charge was just an example. All the proofs I've seen of this concept state that "all field lines that enter the closed surface must also leave the closed surface, hence the total flux will be zero". But how does this account for the differences in the distances of the sides of the closed surface from the source of the charge? Can someone refer me to a proof or give an explanation of why the differences in distances always balance out with the differences in area in order to give you a zero result? - 1 If the electric field is perpendicular to one of the sides of the cube, the charge configuration can't actually be a point charge, so $E\neq kq/r^2$. – David Zaslavsky♦ Feb 2 '12 at 19:18 Okay, I've edited the question – fiftyeight Feb 2 '12 at 22:07 ## 3 Answers The OP wants an intuitive answer to an intuitive obstacle to seeing its truth. Well, the intensity of the flux is like how many lines we draw per unit area. No one line « loses strength » so to speak. (There is no dissipation, no friction.) If it is a point source, the lines are not parallel, they diverge, and the greater distance between the lines leads to their lesser density, and so less field strength. But each line keeps its strength... So all the lines enter one face, and most but not all leave at the parallel, far wall....which shows that the field strength there is a little smaller. But the diverging lines leave the other walls after all...and there, too, their density is less, but there is more area, so it all adds up. - You should take into account the flux through all six faces of the cube. If the field is perpendicular to one face (for example, if it is homogeneous), then the field is not "relative to the inverse of the distance squared", as it is for a field of a point charge. - Okay, I've edited the question to be more general. – fiftyeight Feb 2 '12 at 22:08 This is a simple result of the differential form of Gauss' law $\nabla \cdot \mathbf{E} = \rho$ and the divergence theorem $\iiint_V \nabla\cdot \mathbf{F} dV = \iint_S \mathbf{F}\cdot\mathbf{n} dS$ If there is no charge in the region, then the LHS is zero so the total flux must be zero as well. If you want a proof of the divergence theorem, there is a fairly straightforward one here: http://www.proofwiki.org/wiki/Divergence_Theorem - This post is true, but doesn't figure out where the misconception was. – joseph f. johnson Feb 4 '12 at 17:33
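A numerical illustration of the balancing the question asks about (not part of the original thread; the cube coordinates, charge positions, and the unit choice $k = 1$, $q = 1$, so that $\mathbf E = \hat r/r^2$, are assumptions made for this sketch): integrating $\mathbf E\cdot\mathbf n$ over all six faces of a cube gives essentially zero when the point charge sits outside the cube, even though the near face sees a much stronger field than the far face, and gives $4\pi q$ when the charge is inside.

```python
import numpy as np

def flux_through_cube(charge_pos, cube_min, cube_max, n=200):
    """Numerically integrate E . n over the six faces of an axis-aligned cube.

    Uses E = q * r_hat / r**2 with q = 1 and k = 1, so a charge inside the
    cube should give total flux ~ 4*pi and a charge outside should give ~ 0.
    """
    q = 1.0
    lo, hi = np.asarray(cube_min, float), np.asarray(cube_max, float)
    total = 0.0
    for axis in range(3):                                # x, y, z pairs of faces
        u_ax, v_ax = [a for a in range(3) if a != axis]
        hu = (hi[u_ax] - lo[u_ax]) / n
        hv = (hi[v_ax] - lo[v_ax]) / n
        us = lo[u_ax] + hu * (np.arange(n) + 0.5)        # midpoint rule on each face
        vs = lo[v_ax] + hv * (np.arange(n) + 0.5)
        U, V = np.meshgrid(us, vs)
        for side, sign in ((lo[axis], -1.0), (hi[axis], +1.0)):
            P = np.empty(U.shape + (3,))
            P[..., axis] = side
            P[..., u_ax] = U
            P[..., v_ax] = V
            R = P - np.asarray(charge_pos, float)
            r = np.linalg.norm(R, axis=-1)
            E_dot_n = sign * q * R[..., axis] / r**3     # outward normal is +/- e_axis
            total += E_dot_n.sum() * hu * hv
    return total

# Charge outside the cube: near faces see a stronger field, yet the total flux is ~ 0.
print(flux_through_cube((-2.0, 0.3, 0.1), (0, 0, 0), (1, 1, 1)))
# Charge inside the cube: total flux is ~ 4*pi.
print(flux_through_cube((0.5, 0.5, 0.5), (0, 0, 0), (1, 1, 1)), 4 * np.pi)
```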
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381910562515259, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/202799-frobenius-equation-w-second-order-root.html
# Thread: 1. ## Frobenius equation w/ second-order root I am given this: $y'' + \frac{2}{x}y' + y = 0$ Using Frobenius' Theorem, I was able to get the equation into this: $r(r+1)a_{0} + (r+1)(r+2)a_{1} + \sum\limits_{k = 1}^\infty[(k+r+3)(k+r+2)a_{k+2} + a_{k}] = 0$ Obviously, for the $a_{0}$ term, I have to find the roots, which are -1 and 0. But what about the $a_{1}$ term? Do I need the -1 and -2 roots from there as well? Will I have 3 separate solutions by the end, or just 2?
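(Not part of the original thread.) A sanity check that may help with the last question: substituting $u = xy$ turns the equation into $u'' + u = 0$, so the general solution is $y = (A\sin x + B\cos x)/x$, i.e. two independent solutions, whose leading powers $x^0$ and $x^{-1}$ match the indicial roots $0$ and $-1$. A short sympy verification (sympy is just one convenient choice here):

```python
import sympy as sp

x = sp.symbols('x')

# Substituting u = x*y turns y'' + (2/x) y' + y = 0 into u'' + u = 0,
# so x*y = A*sin(x) + B*cos(x).  Check both pieces satisfy the ODE:
for sol in (sp.sin(x)/x, sp.cos(x)/x):
    residual = sp.simplify(sp.diff(sol, x, 2) + (2/x)*sp.diff(sol, x) + sol)
    print(sol, '->', residual)               # both residuals simplify to 0

# Their expansions start at the powers given by the indicial roots r(r+1) = 0:
print(sp.series(sp.sin(x)/x, x, 0, 6))       # starts at x**0   (root r = 0)
print(sp.series(sp.cos(x)/x, x, 0, 5))       # starts at 1/x    (root r = -1)
```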
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9575961232185364, "perplexity_flag": "middle"}
http://gambasdoc.org/help/comp/gb.opengl/gl/copypixels?pt_BR&v3
Gl.CopyPixels (gb.opengl) `Static Sub CopyPixels ( X As Integer, Y As Integer, Width As Integer, Height As Integer, Buffer As Integer )` Copy pixels in the frame buffer. ### Parameters x, y Specify the window coordinates of the lower left corner of the rectangular region of pixels to be copied. width, height Specify the dimensions of the rectangular region of pixels to be copied. Both must be nonnegative. type Specifies whether color values, depth values, or stencil values are to be copied. Symbolic constants Gl.COLOR, Gl.DEPTH, and Gl.STENCIL are accepted. ### Description Gl.CopyPixels copies a screen-aligned rectangle of pixels from the specified frame buffer location to a region relative to the current raster position. Its operation is well defined only if the entire pixel source region is within the exposed portion of the window. Results of copies from outside the window, or from regions of the window that are not exposed, are hardware dependent and undefined. x and y specify the window coordinates of the lower left corner of the rectangular region to be copied. width and height specify the dimensions of the rectangular region to be copied. Both width and height must not be negative. Several parameters control the processing of the pixel data while it is being copied. These parameters are set with three commands: Gl.PixelTransfer, Gl.PixelMap, and Gl.PixelZoom. This reference page describes the effects on Gl.CopyPixels of most, but not all, of the parameters specified by these three commands. Gl.CopyPixels copies values from each pixel with the lower left-hand corner at $\left(\mathit{x}+\mathit{i},\mathit{y}+\mathit{j}\right)$ for $0<=\mathit{i}<\mathit{width}$ and $0<=\mathit{j}<\mathit{height}$. This pixel is said to be the $\mathit{i}$th pixel in the $\mathit{j}$th row. Pixels are copied in row order from the lowest to the highest row, left to right in each row. type specifies whether color, depth, or stencil data is to be copied. The details of the transfer for each data type are as follows: Gl.COLOR Indices or RGBA colors are read from the buffer currently specified as the read source buffer (see Gl.ReadBuffer). If the GL is in color index mode, each index that is read from this buffer is converted to a fixed-point format with an unspecified number of bits to the right of the binary point. Each index is then shifted left by Gl.INDEX_SHIFT bits, and added to Gl.INDEX_OFFSET. If Gl.INDEX_SHIFT is negative, the shift is to the right. In either case, zero bits fill otherwise unspecified bit locations in the result. If Gl.MAP_COLOR is true, the index is replaced with the value that it references in lookup table Gl.PIXEL_MAP_I_TO_I. Whether the lookup replacement of the index is done or not, the integer part of the index is then ANDed with ${2}^{\mathit{b}}-1$, where $\mathit{b}$ is the number of bits in a color index buffer. If the GL is in RGBA mode, the red, green, blue, and alpha components of each pixel that is read are converted to an internal floating-point format with unspecified precision. The conversion maps the largest representable component value to 1.0, and component value 0 to 0.0. The resulting floating-point color values are then multiplied by Gl.c_SCALE and added to Gl.c_BIAS, where c is RED, GREEN, BLUE, and ALPHA for the respective color components.
The results are clamped to the range 0,1. If Gl.MAP_COLOR is true, each color component is scaled by the size of lookup table Gl.PIXEL_MAP_c_TO_c, then replaced by the value that it references in that table. c is R, G, B, or A. If the ARB_imaging extension is supported, the color values may be additionally processed by color-table lookups, color-matrix transformations, and convolution filters. The GL then converts the resulting indices or RGBA colors to fragments by attaching the current raster position z coordinate and texture coordinates to each pixel, then assigning window coordinates $\left({\mathit{x}}_{\mathit{r}}+\mathit{i},{\mathit{y}}_{\mathit{r}}+\mathit{j}\right)$, where $\left({\mathit{x}}_{\mathit{r}},{\mathit{y}}_{\mathit{r}}\right)$ is the current raster position, and the pixel was the $\mathit{i}$th pixel in the $\mathit{j}$th row. These pixel fragments are then treated just like the fragments generated by rasterizing points, lines, or polygons. Texture mapping, fog, and all the fragment operations are applied before the fragments are written to the frame buffer. Gl.DEPTH Depth values are read from the depth buffer and converted directly to an internal floating-point format with unspecified precision. The resulting floating-point depth value is then multiplied by Gl.DEPTH_SCALE and added to Gl.DEPTH_BIAS. The result is clamped to the range 0,1. The GL then converts the resulting depth components to fragments by attaching the current raster position color or color index and texture coordinates to each pixel, then assigning window coordinates $\left({\mathit{x}}_{\mathit{r}}+\mathit{i},{\mathit{y}}_{\mathit{r}}+\mathit{j}\right)$, where $\left({\mathit{x}}_{\mathit{r}},{\mathit{y}}_{\mathit{r}}\right)$ is the current raster position, and the pixel was the $\mathit{i}$th pixel in the $\mathit{j}$th row. These pixel fragments are then treated just like the fragments generated by rasterizing points, lines, or polygons. Texture mapping, fog, and all the fragment operations are applied before the fragments are written to the frame buffer. Gl.STENCIL Stencil indices are read from the stencil buffer and converted to an internal fixed-point format with an unspecified number of bits to the right of the binary point. Each fixed-point index is then shifted left by Gl.INDEX_SHIFT bits, and added to Gl.INDEX_OFFSET. If Gl.INDEX_SHIFT is negative, the shift is to the right. In either case, zero bits fill otherwise unspecified bit locations in the result. If Gl.MAP_STENCIL is true, the index is replaced with the value that it references in lookup table Gl.PIXEL_MAP_S_TO_S. Whether the lookup replacement of the index is done or not, the integer part of the index is then ANDed with ${2}^{\mathit{b}}-1$, where $\mathit{b}$ is the number of bits in the stencil buffer. The resulting stencil indices are then written to the stencil buffer such that the index read from the $\mathit{i}$th location of the $\mathit{j}$th row is written to location $\left({\mathit{x}}_{\mathit{r}}+\mathit{i},{\mathit{y}}_{\mathit{r}}+\mathit{j}\right)$, where $\left({\mathit{x}}_{\mathit{r}},{\mathit{y}}_{\mathit{r}}\right)$ is the current raster position. Only the pixel ownership test, the scissor test, and the stencil writemask affect these write operations. The rasterization described thus far assumes pixel zoom factors of 1.0. If Gl.PixelZoom is used to change the $\mathit{x}$ and $\mathit{y}$ pixel zoom factors, pixels are converted to fragments as follows. 
If $\left({\mathit{x}}_{\mathit{r}},{\mathit{y}}_{\mathit{r}}\right)$ is the current raster position, and a given pixel is in the $\mathit{i}$th location in the $\mathit{j}$th row of the source pixel rectangle, then fragments are generated for pixels whose centers are in the rectangle with corners at $\left({\mathit{x}}_{\mathit{r}}+{\mathit{zoom}}_{\mathit{x}}\mathit{i},{\mathit{y}}_{\mathit{r}}+{\mathit{zoom}}_{\mathit{y}}\mathit{j}\right)$ and $\left({\mathit{x}}_{\mathit{r}}+{\mathit{zoom}}_{\mathit{x}}\left(\mathit{i}+1\right),{\mathit{y}}_{\mathit{r}}+{\mathit{zoom}}_{\mathit{y}}\left(\mathit{j}+1\right)\right)$ where ${\mathit{zoom}}_{\mathit{x}}$ is the value of Gl.ZOOM_X and ${\mathit{zoom}}_{\mathit{y}}$ is the value of Gl.ZOOM_Y. ### Examples To copy the color pixel in the lower left corner of the window to the current raster position, use ```glCopyPixels(0, 0, 1, 1, Gl.COLOR); ``` ### Notes Modes specified by Gl.PixelStore have no effect on the operation of Gl.CopyPixels. ### Errors Gl.INVALID_ENUM is generated if type is not an accepted value. Gl.INVALID_VALUE is generated if either width or height is negative. Gl.INVALID_OPERATION is generated if type is Gl.DEPTH and there is no depth buffer. Gl.INVALID_OPERATION is generated if type is Gl.STENCIL and there is no stencil buffer. Gl.INVALID_OPERATION is generated if Gl.CopyPixels is executed between the execution of Gl.Begin and the corresponding execution of Gl.End. ### Associated Gets Gl.Get with argument Gl.CURRENT_RASTER_POSITION Gl.Get with argument Gl.CURRENT_RASTER_POSITION_VALID ### See also Gl.ColorTable, Gl.ConvolutionFilter1D, Gl.ConvolutionFilter2D, Gl.DepthFunc, Gl.DrawBuffer, Gl.DrawPixels, Gl.MatrixMode, Gl.PixelMap, Gl.PixelTransfer, Gl.PixelZoom, Gl.RasterPos, Gl.ReadBuffer, Gl.ReadPixels, Gl.SeparableFilter2D, Gl.StencilFunc, Gl.WindowPos See original documentation on OpenGL website
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8141686916351318, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/97992/lie-algebras-and-integral-curves?answertab=active
# Lie algebras and integral curves I am trying to understand the proof of the following which comes from "Matrix Groups for Undergraduates" by Kristopher Tapp. Let $G \subset GL_n(\mathbb K)$ be a matrix group with Lie algebra $\mathfrak g \subset gl_n(\mathbb K)$. Then for all $X \in \mathfrak g$, $e^X \in G$. It starts with let $\{ X_1,\ldots,X_k \}$ be a basis of $\mathfrak g$. For each $i=1,\ldots,k$ choose a differentiable path $\alpha_i: (-\epsilon,\epsilon)\to G$ with $\alpha_i(0)=I$ and $\alpha'_i(0)=X_i$. Define $F_\mathfrak{g}:(\text{neighborhood of 0 in } \mathfrak{g}) \to G$ as follows: $F_\mathfrak{g}(c_1X_1+\cdots+c_kX_k)=\alpha_1(c_1)\cdot\alpha_2(c_2)\cdots\alpha_k(c_k)$. Notice that $F_\mathfrak{g}(0)=I$, and $d(F_\mathfrak{g})_0$ is the identity function: $d(F_\mathfrak{g})_0(X)=X$ for all $X\in\mathfrak{g}$ as is easily verified on basis elements. EDIT to include use of inverse function theorem in response to Bill Choose a subspace $\mathfrak p\subset M_n(\mathbb K)$ which is complementary to $\mathfrak g$, which means completing the set $\{X_1,\ldots,X_k\}$ to a basis of all of $M_n(\mathbb K)$ and defining $\mathfrak p$ as the span of the added basis elements. So $M_n(\mathbb K)=\mathfrak g\times \mathfrak p$. Choose a function $F_{\mathfrak p}: \mathfrak p \to M_n(\mathbb K)$ with $F_{\mathfrak p}(0) = I$ and with $d(F_{\mathfrak p})_0(V)=V$ for all $V\in \mathfrak p$. For example, $F_{\mathfrak p}(V)=I+V$ works. Next define the function $F:(\text{neighborhood of 0 in }\mathfrak g \times \mathfrak p = M_n(\mathbb K)) \to M_n(\mathbb K)$ by the rule $F(X+Y)=F_{\mathfrak g}(X)\cdot F_{\mathfrak p}(Y)$ for all $X\in \mathfrak g$ and $Y\in \mathfrak p$. Notice that $F(0)=I$ and $dF_0$ is the identity function: $dF_0(X+Y)=X+Y$. By the inverse function theorem, $F$ has an inverse function defined on the neighborhood of $I$ in $M_n(\mathbb K)$. My question is how does one see that $d(F_\mathfrak{g})_0(X)=X$ given that $F_\mathfrak{g}$ is a function from matrices to matrices and normally the jacobian is defined for functions of the type similar to $f:\mathbb R^n \to \mathbb R^m$. And how would one go about computing efficiently that $d(F_\mathfrak{g})_0(X)=X$? - Matrices are just $\mathbb R^{n^2}$, written in a different way (a square matrix rather than a row or column). So you have a function from $\mathbb R^{n^2}$ (or an open subset of $\mathbb R^{n^2}$) to itself, and you can compute its Jacobian. – Matt E Jan 11 '12 at 3:15 ## 1 Answer You can view the map $F_{\mathfrak{g}}$ as a map from a neighborhood of $0$ in $\mathbb{R}^n$: $(x_1,x_2,\dots,x_k) \mapsto \alpha_1(x_1)\cdots \alpha_k(x_k)$. Then $D_{x_i}[F_{\mathfrak{g}}](x_1,\dots,x_k)=\alpha_1(x_1)\cdots \alpha_{i-1}(x_i) \alpha_i'(x_i) \alpha_{i+1}(x_{i+1})\cdots \alpha_k(x_k)$, so the $i^{th}$ partial at $0$ is $\alpha_1(0)\cdots \alpha_{i-1}(0) \alpha_i'(0) \alpha_{i+1}(0)\cdots \alpha_k(0) = I\cdots I\cdot X_i \cdot I\cdots I=X_i$. Thus the Jacobian is $[X_1 \; X_2\; \cdots \; X_k]$. So to get the derivative at $X=c_1X_1+\cdots+c_kX_k$ multiply this by the Jacobian by $[c_1\;c_2\;\cdots\;c_k]^T$ and get $c_1X_1+\cdots+c_kX_k=X$ (as desired). - But how does one reconcile the notion of the jacobian for $f:\mathbb R^n \to R^n$ and $h:\mathbb R^n \to GL_n(\mathbb K)$ in the context of the inverse function theorem. Roughly, if $\det Df \neq 0$ then the inverse function theorem says there is a local inverse. The inverse function theorem (proper) applies to functions like $f:\mathbb R^n \to R^n$. 
How does one make it work for functions like $h:\mathbb R^n \to GL_n(\mathbb K)$? – user782220 Jan 10 '12 at 23:51 I'm a little unclear as to what you are asking. If you want to use a Jacobian matrix to see $\mathrm{det}(Df)\not=0$, you need to choose coordinate patches on the domain and codomain, then write your map in terms of coordinates, finally you have a map between open sets in $\mathbb{R}^n$ so regular methods apply. This is basic manifold theory. – Bill Cook Jan 11 '12 at 0:17 I edited my original post to include the usage of the inverse function theorem which I am unclear about. – user782220 Jan 11 '12 at 4:18
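(Not from the thread.) The Jacobian computation in the answer can be checked numerically on a concrete example. The sketch below is an illustration under stated assumptions: it picks $G = SO(3)$ with a standard basis of $\mathfrak{so}(3)$, uses the paths $\alpha_i(t) = e^{tX_i}$ (one valid choice; any paths with $\alpha_i(0)=I$ and $\alpha_i'(0)=X_i$ would do), and verifies by finite differences that $d(F_{\mathfrak g})_0(X) = X$:

```python
import numpy as np
from scipy.linalg import expm

# A concrete stand-in for the abstract setup: G = SO(3), with this basis of
# its Lie algebra so(3) (any matrix group would do).
X1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
X2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
X3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
basis = [X1, X2, X3]

def F(c):
    """F_g(c1*X1 + c2*X2 + c3*X3) = alpha_1(c1) alpha_2(c2) alpha_3(c3),
    using the paths alpha_i(t) = expm(t*X_i), which satisfy alpha_i(0) = I
    and alpha_i'(0) = X_i."""
    out = np.eye(3)
    for ci, Xi in zip(c, basis):
        out = out @ expm(ci * Xi)
    return out

# Finite-difference approximation of d(F_g)_0 applied to X = sum_i c_i X_i:
c = np.array([0.3, -0.7, 0.5])
t = 1e-6
dF0_X = (F(t * c) - np.eye(3)) / t
X = sum(ci * Xi for ci, Xi in zip(c, basis))

print(np.allclose(dF0_X, X, atol=1e-5))      # True: d(F_g)_0 acts as the identity
```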
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9410955309867859, "perplexity_flag": "head"}
http://divisbyzero.com/2011/10/28/applet-to-illustrate-the-epsilon-delta-definition-of-limit/
# Division by Zero A blog about math, puzzles, teaching, and academic technology Posted by: Dave Richeson | October 28, 2011 ## Applet to illustrate the epsilon-delta definition of limit Here’s a GeoGebra applet that I made for my Real Analysis class. It can be used to explore the definition of limit: Definition. The limit of $f(x)$ as $x$ approaches $c$ is $L$, or equivalently $\displaystyle \lim_{x\to c}f(x)=L,$ if for any $\varepsilon>0$ there exists $\delta>0$ such that whenever $0<|x-c|<\delta$, it follows that $|f(x)-L|<\varepsilon$.
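(Not part of the original post.) A concrete instance of the definition, for readers who want something to compute with: for $f(x) = x^2$, $c = 2$, $L = 4$, the choice $\delta = \min(1, \varepsilon/5)$ works, since $|x-2|<1$ forces $|x+2|<5$ and hence $|x^2-4| < 5|x-2|$. The small script below only spot-checks this numerically by sampling; it is not a proof:

```python
import random

def f(x):
    return x * x

c, L = 2.0, 4.0

def delta_for(eps):
    # If |x - 2| < 1 then |x + 2| < 5, so |x^2 - 4| = |x - 2|*|x + 2| < 5*|x - 2|.
    return min(1.0, eps / 5.0)

for eps in (1.0, 0.1, 0.001):
    d = delta_for(eps)
    xs = (c + random.uniform(-d, d) for _ in range(100_000))
    ok = all(abs(f(x) - L) < eps for x in xs if 0 < abs(x - c) < d)
    print(eps, d, ok)                        # ok is True for every sampled x
```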
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9141522645950317, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/optics?page=2&sort=newest&pagesize=50
# Tagged Questions Optics is the study of light, and its interaction with matter. It includes topics such as imaging systems, fiber optics, lasers, quantum optics, and more. 2answers 43 views ### How do you calculate heat flux (Kw/m2) at the focal point of a mirror? [duplicate] can anyone help me to determine the heat flux (Kw/m2) on a focal point of a parabolic dish having a diameter of 1.5 meter and a focal length 60 cm ??? please awaiting your soonest reply for my senior ... 2answers 97 views ### Why don't you see multiple images of an object? Consider the ray model of light. Let's say an object such as a pencil is illuminated, and consider one point on that pencil. Since there could be many rays of light bouncing off the same point on the ... 4answers 143 views ### Effects of surface roughness on specularity Say you have a piece of glass, which looks specular if propery cut/polished. But if you sand the surface using say sand paper, it will look hazy and glossy. I'm wondering how much surface roughness ... 1answer 244 views ### Why does the sky look black in pictures taken from the summit of everest? In pictures taken from the summit of Mount Everest (such as this one), the colour of the sky is a very dark blue or even black in some pictures. I remember from my own experiences of hiking in the ... 1answer 55 views ### Fraunhofer diffraction simulation for a hexagonal aperture, what are the typical units? Kostya answered a question that was asking what the diffraction pattern looks like for a hexagonal aperture in front of a lens. He lists an equation which was derived using a Heaviside function to ... 2answers 135 views ### Why does the index of refraction change the direction of light I've been studying in optics the macroscopic maxwell's equations, and how electromagnetic fields propagate through different mediums. Over there, the index of refraction appears, as a complex function ... 1answer 63 views ### Which fraction of light is refracted from a source of light under a lake? I was trying to solve this problem: "A punctiform source of light is standing inside a lake, at a height h of the surface. f is the fraction of the total of energy emitted that escapes directly from ... 1answer 50 views ### double slit experiment with two opposite quarter waveplates Consider the usual double slit experiment involving laser and a double slit and a screen. Now place in front of the left slit a quarter waveplate (let's call it QWP1) that changes a certain linear ... 0answers 163 views ### Why a person with a further near point experience a larger magnification with a magnifier Two people, Micah and Lyra, with different near points are equally close to an object. Both inspect the object through the same magnifier by holding the lens close to the eye. Micah's near ... 1answer 68 views ### What determines the sign of an image distance? A lens placed at the origin with its axis pointing along the x axis produces a real inverted image at $x = - 24 cm$ that is twice as tall as the object. What is the image distance? Why ... 2answers 189 views ### How do you calculate power at the focal point of a mirror? I'm a Mechanical Engineering student and I'm working on my senior project, so I need help. My project is about designing a solar dish having a diameter of 1.5 meters and a focal length of 60cm. so at ... 
1answer 140 views ### Polarizability and the Clausius-Mossotti Relation There seems to be a fairly large inconsistency in various textbooks (and some assorted papers that I went through) about how to define the Clausius-Mossotti relationship (also called the ... 0answers 51 views ### cgs Gauss' system of units I had never seen this system until today, and I'm really confused. I've read the wikipedia article about it but I still don't know how to change between this and the international system. For example, ... 4answers 228 views ### Eye sensitivity & Danger signal Why are danger signal in red, when the eye is most sensitive to yellow-green? You can check luminosity function for more details... 1answer 141 views ### Is a holographic recorder able to capture a large full color picture? [closed] Is it practical to attempt to build a 3D hologram generator that is full color and big enough to recreate a watermelon full size? If so, is real-time control feasible? 0answers 59 views ### Are EM waves scattered the most when the wavelength and the obstacle have a similar size? I heard that when the wavelength and obstacle are similar in size, the scattering is the greatest. Is this true? 1answer 79 views ### Funny classroom experiments [closed] I'd like to perform some weekly classroom experiments to keep my students interested and curious about everyday physics. Those experiments have to be quite easy to set up and not too easy to ... 0answers 33 views ### Smaller Airy disk with another lens? Is it possible to reduce the airy disk size produced by one lens with another lens placed after the previous one? For example, parallel ray incident on first lens L1 (without aberration), then there ... 2answers 67 views ### Why is $\vec j\cdot \vec e$ the joule dissipation? I always see $\vec j\cdot \vec e$ as Joule's dissipation and I don't understand why. For example, if we have a uniform electric field $\vec e=e_o\vec u_x$ and we release an electron in it, it will ... 2answers 70 views ### the possibilty of that detuned laser can excite an atom? if yes how? I am not sure about the possibility if detuned laser can excite an atom? if yes what is the explanation? 3answers 38 views ### Trapping EM radiation Is there a material which can allow light (or any other EM radiation) to pass through from one side as if it is transparent but its other side reflects light like a mirror? 0answers 107 views ### How does a Fresnel rhomb work (half and quarter wave plate)? How does a Fresnel rhomb work (half and quarter wave plate)? I am aware of birefringence, which creates a phase shift of $\Delta\phi=\dfrac{2\pi\Delta nL}{\lambda_0}$. But this doesn't explain how a ... 0answers 9 views ### Image formation [duplicate] What is the real cause behind the formation of an image? It is explained as" when rays of light focus at a point image is formed." So here we have two events, one focusing of light and another ... 1answer 76 views ### Is this mental picture of photon correct? What is exactly meant by a statement like "there are about 400 photons per cubic cm in certain region"? Should I mentally picture this as 400 discrete photons enclosed in that volume, each moving at ... 1answer 43 views ### Circular polarisation If we have a planar and harmonic EM wave, with $B$ field: $$B=A\left(\begin{array}{c} 1\\ i\\0 \end{array} \right)e^{-i(\omega t-\vec k\cdot\vec r)}$$ and with it's corresponding $E$ field. This is ... 
0answers 41 views ### EigenMode expansion for beam propagation I want to understand how to apply EigenMode expansion method (http://www.photond.com/files/docs/PW03_eme_paper.pdf) for beam propagation on a system of lenses. The interface between two mediums of ... 2answers 89 views ### Gravitational distortion of an object's diameter, at a distance, Does the curvature of space-time cause objects to look smaller than they really are? What is the relationship between the optical distortion and the mass of the objects? 1answer 66 views ### Is there a formula for determining the focal point of a sphere? I guess this is the same as for cylinders, when light is shone through parallel to the cross-section, but Google-ing this only turns up lenses like the ones used in glasses. I'm looking for something ... 1answer 74 views ### Seeing a mirage through mirror? Okay, I am not really good in physics (rather terrible), but nonetheless. So, I was just wondering if you can see a mirage, is there something special in our eyes that we can see it or what? I mean, ... 1answer 49 views ### Where did this equation come from ∠I+ ∠E = ∠A+ ∠D? ∠I+ ∠E= ∠A + ∠D Angle of incidence + angle of emergence = angle of prism (Normally 60°) + angle of deviation. If their sum is not equal,we made personal error in doing an experiment with prism. ... 1answer 76 views ### Photon in a weighted superposition of states Consider an experiment that produces photons in an entangled state such as $1/\sqrt{2}(|{H,H}\rangle+|{V,V}\rangle)$. The photons are in a superposition of horizontal and vertical polarization, and ... 2answers 132 views ### What is the minimum optical power detectable by human eye? If one is in complete darkness, what is the minimum optical power that the eye can "see" (let's say in 500-600 nm range). I found that for 510 nm, 90 photons can be detected ... 0answers 80 views ### How can some optical microscopes measure height differences of different sample planes with nanometer accuracy? I could use last week an optical microscope, didn't seem special in any way, 50x magnification, image viewable per a CCD camera on a computer screen besides through the ocular. But the software of ... 2answers 74 views ### All mirrors always shrink to 50% scale? I have this geometric optics exercise here, in which a man is looking at himself in a mirror. Determine the minimum height at which the bottom of the mirror must be placed so the man can see his ... 1answer 142 views ### Maximum theoretical bandwidth of fibre-optics Ignoring hardware at either end and their technological limitations, what is the maximum theoretical bandwidth of fibre optic cables currently in use / being deployed in a FTTH type situations? I ... 2answers 156 views ### Why does light not refract when incidented perpendicularly? I had read that light does slow down in glass because photons interact with atoms in glass. They are absorbed and re-emitted and during this phenomenon it's speed decreases. See also this and this ... 1answer 92 views ### Liouville's theorem and gravitationally deflected lightpaths It is customary in gravitational lensing problems, to project both the background source and the deflecting mass (e.g. a background quasar, and a foreground galaxy acting as a lens) in a plane. Then, ... 1answer 119 views ### What is the effect of refractive index of an object for imaging? My Question is as follows. 
What is the effect of refractive index of an object for imaging (Photographs by high speed camera) on its size and shape information incurred from image? Lets say , I ... 0answers 39 views ### Weakly Guiding Approximation I was reading a chapter on Fiber Optics and I encountered Weakly Guiding Approximation. I am reading John M. Senior and it says ... 3answers 1k views ### What causes insects to cast large shadows from where their feet are? I recently stumbled upon this interesting image of a wasp, floating on water: Assuming this isn't photoshopped, I have a couple of questions: Why do you see its image like that (what's the ... 2answers 111 views ### How to calculate the height and length of a reflected ray? I barely know anything about optics, so I could use some help about how to go about solving this problem. If I have a ray of light at a certain height from the optical axis, propagating at an angle, ... 3answers 216 views ### Why is visible light used in Optical fibers (instead of other EM waves)? Why aren't other electromagnetic waves used in optical fibres instead of visible light? Is it because the wavelength of light fits the internal reflection/refractive index of the material used for the ... 2answers 266 views ### Where does energy go in destructive interference? [duplicate] I have read that when two light waves interfere destructively, the energy contained within is transferred to other parts of the wave which have interfered constructively. However, I am having some ... 2answers 184 views ### What is a two-photon process? I am reading some introductory materials on modern optics, in which they mention two-photon processes everywhere. I know fundamental optics and a bit on quantum mechanics. Can anyone explain in a ... 1answer 80 views ### Eikonal approximation for wave optics. Why follow the unit vector parallel to the Pointing vector? The description of the passage from wave optics to geometrical optics claims that light rays are the integral curves of a certain vector field (the Pointing vector direction, normalized to 1). Here ... 1answer 43 views ### Why is spectral sensivity of a photodiode expressed in A/W Can someone explain me the meaning of the A/W unit of the photosensivity when reading a spectral response function of the wavelength characteristic of a photodiode? 2answers 57 views ### Optical trapping problem Can we make light slower by applying optical trapping (I mean applying laser beam to lower the speed of light)? 1answer 163 views ### calculating focal length of meniscus lens As I read about telescope, distance between objective lens and eyepiece must be equal to addition of their focal lengths. D = F1 + F2 I used one of the eyepiece ... 0answers 11 views ### Speed of Light in a Medium [duplicate] For light travelling in a medium with refractive index greater than one: The "average" speed of light is slower than the speed of light in a vacuum. As far as I know, the instantaneous speed of light ... 1answer 78 views ### Factors that make beam divergence worse after refocusing A beam of light of width $W$ and wavelength $\lambda$ with divergence that is diffraction-limited is refocused with an optical element placed at a distance $D$ from the beam source. Will the refocused ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9294529557228088, "perplexity_flag": "middle"}
http://mathhelpforum.com/business-math/196442-college-savings-plan.html
# Thread: 1. ## College Savings Plan Hey Guys, I am trying to figure out this problem dealing with a college savings plan for my kids. It's currently September.....my kids are 14, 13, and twins at 11. They are all planning on going to college in September of the year they turn 18. We estimate that tuition will cost 5642 per year for each of their four years of college assuming 0% inflation. We want to calculate our savings plan with a 3% annual interest rate compounded monthly. How much should I save each month in my savings plan? Thanks. 2. ## Re: College Savings Plan If you put S dollars in the bank each month then in n months it will have increased to $S(1+ .03/12)^n= S(1.0025)^n$. Of course, each month's deposit has one less month than the previous one to accrue interest. That is, the money put in the bank the first month will have grown to $S(1+ .03/12)^n$, the money put in the second month will have grown to $S(1.0025)^{n-1}$. That means that if you have a total of n months to save, and put S dollars in the account each month, at the end of those n months you will have $S+ S(1.0025)+ S(1.0025)^2+ \cdot\cdot\cdot+ S(1.0025)^{n-1}+ S(1.0025)^n$. That is a "geometric" series, of the form $\sum Ar^i$ with A= S and r= 1.0025. It is easy to show that the sum of such a geometric series is $A\frac{1- r^{n+1}}{1- r}$. For your sum that is $S\frac{1.0025^{n+1}- 1}{.0025}$. Put in the number of months you will have to save for each one, set it equal to 5642, and solve for S. For given n, $\frac{1.0025^{n+1}- 1}{.0025}$ is a specific number and solving for S means just dividing 5642 by that number. 3. ## Re: College Savings Plan Thanks for the help!
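(Not part of the thread.) A small script applying the reply's formula. The months-to-save figures are assumptions for illustration (the 14-year-old starting college in 4 years, the 13-year-old in 5, the twins in 7), and it targets a single year's tuition, as the reply suggests:

```python
def monthly_deposit(target, months, annual_rate=0.03):
    """Monthly deposit S such that deposits made at the start of each of
    `months` months, compounded monthly at `annual_rate`, grow to `target`.
    Rearranges the geometric-series sum from the reply:
        target = S * (r**(months + 1) - 1) / (r - 1),  with r = 1 + annual_rate / 12.
    """
    r = 1.0 + annual_rate / 12.0
    return target * (r - 1.0) / (r ** (months + 1) - 1.0)

# One year of tuition, per the thread; the months-to-save values are assumptions
# (the 14-year-old starts college in 4 years, the 13-year-old in 5, the twins in 7).
tuition = 5642.0
for label, months in [("age 14", 48), ("age 13", 60), ("twin", 84), ("twin", 84)]:
    print(f"{label}: about ${monthly_deposit(tuition, months):.2f} per month")
```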
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578996896743774, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2434/question-about-hash-collisions?answertab=oldest
# Question about hash collisions Suppose we have a hash function $h(x)$ and then a hash function $H(X) = h(h(X_0) || h(X_1))$, where $X_0$ is the first half of $X$, $X_1$ is the second half of $X$, and $||$ is concatenation. Then, assuming we can easily find a collision for $H$, it would be easy to find a collision for $h$ as well - therefore finding a collision for $H$ is at least as hard as finding one for $h$. Why is this? I can to some extent understand why that might be the case, but I can't logically connect the dots. Can anyone help me with some logic or math behind it, or link to some resources where it is explained? I have tried Google, but without the precise correct terminology I'm having a hard time finding the right pages. Thanks - If you extend this scheme directly to a tree-hash it will be trivially vulnerable to collisions with a different length. That's why tree-hashes typically use a different hash function for inner hashes and leaf hashes. – CodesInChaos Apr 23 '12 at 18:27 ## 1 Answer The general idea is that either one of the inner hashes, or the combining hash must collide, since there is no other place to introduce the collision. Assume we found a collision for `H`. This means we have X, Y with $X \neq Y$ such that: $h(h(X_0)||h(X_1)) = h(h(Y_0)||h(Y_1))$ Now we define: $A = h(X_0)||h(X_1)$ and $B = h(Y_0)||h(Y_1)$ This gives us a new equation $h(A) = h(B)$. • If $A \neq B$, we found a collision, and are done. • If $A=B$ we know that $h(X_0)||h(X_1) = h(Y_0)||h(Y_1)$, which we can split into $h(X_0)=h(Y_0) \land h(X_1)=h(Y_1)$. From $X \neq Y$ follows $X_0 \neq Y_0 \lor X_1 \neq Y_1$. Thus at least one of $h(X_0)=h(Y_0)$ and $h(X_1)=h(Y_1)$ has different inputs on both sides of the equation, and thus represents a collision. - Don't you mean h(A) = h(B) instead of h(A) = A(B) or am I missing something? – Mads Apr 23 '12 at 19:09 @Mads yes of course. – CodesInChaos Apr 23 '12 at 19:26 Thank you for your time :) Appreciate it – Mads Apr 23 '12 at 19:28
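(Not part of the thread.) A sketch of the construction and of the answer's collision-extraction argument, with SHA-256 standing in for $h$ (an arbitrary choice; no actual collision is exhibited, since none is known for SHA-256, so the extraction function only encodes the case analysis):

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 standing in for the abstract hash h from the question.
    return hashlib.sha256(data).digest()

def H(X: bytes) -> bytes:
    # H(X) = h( h(X_0) || h(X_1) ), splitting X into halves X_0 and X_1.
    mid = len(X) // 2
    return h(h(X[:mid]) + h(X[mid:]))

def h_collision_from_H_collision(X: bytes, Y: bytes):
    """Given X != Y with H(X) == H(Y), return (a, b) with a != b and h(a) == h(b),
    following the case analysis in the answer above."""
    assert X != Y and H(X) == H(Y)
    x0, x1 = X[:len(X) // 2], X[len(X) // 2:]
    y0, y1 = Y[:len(Y) // 2], Y[len(Y) // 2:]
    A = h(x0) + h(x1)
    B = h(y0) + h(y1)
    if A != B:
        return A, B          # the outer application of h collides
    if x0 != y0:
        return x0, y0        # left halves differ but hash to the same value
    return x1, y1            # otherwise the right halves must differ

print(H(b"example input").hex())   # the construction itself is easy to run
```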
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9336927533149719, "perplexity_flag": "head"}
http://mathoverflow.net/questions/4582/upper-bound-on-the-area-of-a-midpoint-pentagon/5668
## Upper bound on the area of a midpoint pentagon? Starting with a convex pentagon P, we define the "middle polygon" Q, whose vertices are the middle points of the sides of the initial pentagon. The ratio between the areas of these polygons seems to always satisfy: 1/2 < area(Q)/area(P) < 3/4 The lower bound is easy to obtain. I don't see how to get the upper bound. This problem is equivalent to the following one. Just forget about the middle polygon for a moment. Start with a convex pentagon and consider also all its 5 diagonals. You will obtain a central pentagon. Prove that the area of the new central pentagon is less than the sum of the areas of the five small triangles which have a side adjacent to the sides of this central polygon. - 5 Your "solution" deals with the regular pentagon case. The question asked is for any convex pentagon. – Manuel Silva Nov 8 2009 at 3:50 13 Can someone explain to me why the geometry of plane pentagons is not a fit subject for mathematical research? And how does your argument apply to papers such as jstor.org/pss/3647745 ? – David Eppstein Nov 8 2009 at 4:17 4 I think there is a danger of overdoing the "not appropriate here" smackdowns. The present problem is probably easily answered by some arguments from plane geometry; one past problem could have easily been answered by "Schanuel's lemma does the trick". Not convinced that we should be too dogmatic/hasty about depth – Yemon Choi Nov 8 2009 at 8:12 5 I would like to add that there is very interesting differential geometry/dynamics surrounding the study of precisely this sort of problem as a dynamical system. For instance, if you do this projectively, you can think that you are not drawing a new smaller pentagon, (so the limiting pentagon as you iterate becomes smaller and smaller) but replacing the original. The limit of such systems is studied by people like Serge Tabachnikov at Penn State, and is very interesting. – David Jordan Nov 9 2009 at 14:56 4 The inequality is stated on page 412 of the book "Recent Advances in Geometric Inequalities" by Mitrinovic, Pecaric, and Volenec (which is available online). They state it without a proof as a theorem of Skljarskiy, Cencov, and Jaglom. The reference they give is a Russian(?) book or article which I unfortunately cannot find. – Philipp Lampe Nov 16 2009 at 18:18 ## 7 Answers I used qepcad to compute that the intersection of the set of possible area ratios with the interval [1/2, 3/4] is (1/2, 3/4). Since the set of possible area ratios is the image of a connected space under a continuous function, and we know the set contains (1/2, 3/4), but not 1/2 or 3/4, it must equal (1/2, 3/4). Here is a log of the qepcad session. ```======================================================= Quantifier Elimination in Elementary Algebra and Geometry by Partial Cylindrical Algebraic Decomposition Version B 1.53, 16 Jul 2009 by Hoon Hong ([email protected]) With contributions by: Christopher W. Brown, George E. Collins, Mark J. Encarnacion, Jeremy R.
Johnson Werner Krandick, Richard Liska, Scott McCallum, Nicolas Robidoux, and Stanly Steinberg ======================================================= Enter an informal description between '[' and ']': [ area of middle pentagon ] Enter a variable list: (a,x1,y1,x2,y2) Enter the number of free variables: 1 Enter a prenex formula: (E x1)(E y1)(E x2)(E y2)[ a >= 1/2 /\ a <= 3/4 /\ x1 > 0 /\ y1 > 0 /\ 1 - x1 - y1 < 0 /\ x2 > 0 /\ x2 y1 + y2 - x1 y2 - y1 < 0 /\ x1 + x2 y1 - x2 - x1 y2 < 0 /\ a (1/2)(y1 + x1 y2 - x2 y1 + x2) = (1/8)(0 - 1 + x1 + 2 x2 + 2 y1 + y2 + 2 x1 y2 - 2 x2 y1) ]. ======================================================= Before Normalization > finish An equivalent quantifier-free formula: 2 a - 1 > 0 /\ 4 a - 3 < 0 ===================== The End ======================= ----------------------------------------------------------------------------- 12 Garbage collections, 473385670 Cells and 0 Arrays reclaimed, in 8158 milliseconds. 1345504 Cells in AVAIL, 40000000 Cells in SPACE. System time: 79624 milliseconds. System time after the initialization: 79028 milliseconds. ----------------------------------------------------------------------------- ``` - I mentioned this problem to a friend of mine and he also solved it by computer. He mentioned that he could also prove that for convex polygons with more than 5 sides, the range was (1/2,1) [independent of the number of sides]. He also made the cryptic remark that the 3/4 bound for n=5 (the bound we are struggling with here), could be obtained by hand using coordinates (and this would remove all computers from the proof of the question at hand). His only hint was "WLOG A=(0, 0), B = (1, 0) and D = (0, 1)" (which I guess one achieves by a linear transformation). Ouch! – Kevin Buzzard Nov 15 2009 at 21:25 Reid: if you change 1/2 to 1/2+epsilon does "FALSE" become "TRUE"? That would be a check to see if you'd done the translation to coordinates correctly. – Kevin Buzzard Nov 15 2009 at 21:28 I improved my query so that it checks that ratios strictly between 1/2 and 3/4 are attainable. It would be nice to remove the conditions on a altogether, but unfortunately that causes qepcad to run for longer than I was willing to wait (> 1 hour). – Reid Barton Nov 15 2009 at 22:03 Reid, it should be possible to prove by explicit constructions like the ones I mentioned that ratios strictly between 1/2 and 3/4 are attainable. – Michael Lugo Nov 16 2009 at 0:35 This was originally a comment on Kristal Cantwell's answer; it is now a complete solution of its own. As Kristal and others have pointed out, we can assume that the vertices of the pentagon are, in cyclic order, $(1,0)$, $(0,0)$, $(0,1)$, $(x_1, y_1)$, $(x_2, y_2)$. As she describes, we compute $$\mathrm{Area}(P) = (1/2) \left( x_1 + [x_2 y_1 - x_1 y_2] + y_2 \right)$$ and $$\mathrm{Area}(Q) = (1/8) \left( x_1 + [(x_1+x_2)(1+y_1) - x_1(y_1 + y_2)] \right.$$ $$\phantom{\mathrm{Area}(Q)=} \left.+[(1+x_2)(y_1+y_2) - y_2 (x_1 + x_2)] + y_2 -1 \right).$$ This simplifies to $$\mathrm{Area}(Q) = (1/8) \left( 2 x_1 + x_2 + y_1 + 2 y_2 + 2 [ x_2 y_1 - x_1 y_2] -1 \right).$$ In a previous version of her comment, Kristal and I had different formulas at this point, and I had some discussion of that here. We now seem to both agree on the formulas above.
We want to show that $$(3/4) \mathrm{Area}(P) - \mathrm{Area}(Q) \geq 0$$. Plugging in the above formulas, clearing out the $8$ in the denominator, and simplifiying, we want to show that $$x_1 - x_2 - y_1 + y_2 + [x_2 y_1 - x_1 y_2] +1 \geq 0.$$ Rearranging, the left hand side is $$[(x_2 -1)(y_1 -1) - (x_1-1)(y_2-1)] + 1$$ The quantity in square brackets is twice the signed area of the triangle $(x_1, y_1)$, $(1,1)$, $(x_2, y_2)$. (Signed area means positive if the triangle is oriented counter-clockwise, negative if clockwise.) If this is positive, we are done. By the convexity hypothesis, the triangle $(x_1, y_1)$, $(0,0)$, $(x_2, y_2)$ is positively oriented. So the only way for the term in square brackets to be negative is if $(0,0)$ and $(1,1)$ are on opposite sides of the line $(x_1, y_1)$, $(x_2, y_2)$. Assume from now on that this is the case. If we slide $(x_1, y_1)$ and $(x_2, y_2)$ apart, keeping them on the same line, then the oriented area of $(x_1, y_1)$, $(1,1)$, $(x_2, y_2)$ will only grow more negative. So we may assume that we have slid them as far apart as possible. That is to say, that $x_1 = 0$ and $y _2 =0$. Since $x_1+y_1$ and $x_2 + y_2$ are $\geq 1$, this means that $x_2$, $y_1 \geq 1$. Then $$[(x_2 -1)(y_1 -1) - (x_1-1)(y_2-1)] + 1 \geq$$ $$0 \cdot 0 - (-1)(-1) + 1 =0.$$ - Where you diverged from me I was wrong. I tried to correct that in a rewrite. – Kristal Cantwell Nov 16 2009 at 19:03 Ah, good. I'll edit. – David Speyer Nov 16 2009 at 19:10 1 Those are also the formulas in my input to qepcad, except with (x1, y)1) and (x2, y2) swapped. – Reid Barton Nov 16 2009 at 20:21 I think I can prove this. I start by getting three consecutive points of the pentagon ABC equal to (0, 1),(0, 0) (1, 0) and the remainder with coordinates (x_1,y_1) and (x_2,y_2) both with sum of coordinates greater than one and both with both coordinates positive using area preserving transformations as mentioned in a comment to another post. I expand one dimension and contract an orthogonal direction by the same amount. Doing this I first get the angle to be right then after this I use the same type of transformation contracting along one ray of the now right angle and expanding along the other ray. Doing this I can any ratio I want. Then I translate the middle point to the origin and rotate till I have the three desired points in the desired position and the other conditions on the remaining points come from the convexity of the polygon which is preserved by the transformations. I think I also have to note that the following proof will not be changed by an expansion of both coordinates by the same amount because while I can get an equilateral right triangle by area preserving transformations The area of the original triangle may not be 1/2. Then I use the fact that the area in the plane of the triangle formed by any two points with coordinates (x_1,y_1) and (x_2,y_2) and the origin is the one half of the determinant of the matrix containing the two points. I think you have to be careful the points are going in clockwise order or else you will get a negative number. The points being in clockwise order means that y_1/x_1 > y_2/x_2. I apply this to the pentagon and get the area the 1/2 of determinant of (x_1,y_1) and (x_2,y_2) plus 1/2(x_1) + 1/2(y_2). I sweep clockwise from the origin and count the areas of the five coordinates formed by the midpoints this overestimates by 1/8 which I will have to subtract. I look at the determinants. 
For the second and fourth of these triangles it becomes clearer if I subtract one row from another. In any case the result of all this is the area of the midpoint polygon is 1/4 of the determinant of (x_1,y_1) and (x_2,y_2) plus 1/4(x_1) + 1/4(y_2) +1/8(x_1)+1/8(y_2)-1/8. To get this is less than 3/4 of the area of the polygon I need 1/8 the determinant of (x_1,y_1) and (x_2,y_2) + 1/8(x_1) + 1/8(y_2) + 1/8 is greater than 1/8(x_2)+1/8(y_1) or or 1/8 the determinant of (x_1,y_1) and (x_2,y_2) + 1/8(x_1) + 1/8(y_2) + 1/8-1/8(x_2)-1/8(y_1) +1/8 The above expression is 1/4 of the area of the triangle with vertices (x_1,y_1),(1,1),(x_2.y_2) if the point (1,1) is on the same side of line connecting (x_1,y_1) and (x_2.y_2) plus 1/8. If this holds then the expression is positive and we are done. So we can assume that the point (1,1) is on the same side of the line connecting (x_1,y_1) and (x_2.y_2) is on the other side of the line and the above expression is the negative of the area of the triangle mentioned above. Now the farther apart the points are the greater the area of the triangle so if we assume maximum separation we get x_1 and y_2 = zero and substituting this into the above we have 1/8(-x_2-y_1 + (x_1)(y_1) -1 is less than zero or 1/8(y_1-1)(x_2-1) is less than zero but since x_1+y_1 is greater than one and x_1 is zero we have y_1 is greater than one similarly we have x_2 is greater than one and the product is positive. This follows Speyer's so far proof, I thought I could get the 1/8 as the difference between the midpoint polygon and 3/4 the original polygon but in some cases it is less. So we have the following the area of the midpoint polygon is less than 3/4 of the original polygon furthermore the difference between 3/4 of the original polygon and the area of the midpoint polygon is 1/8-plus or minus the area of the triangle formed by (x_1,y_1),(1,1),(x_2.y_2) depending on which side of the line connecting (x_1,y_1) and (x_2.y_2) is (1,1).I want to find the difference between 3/4 of the area and the area of the midpoint polygon in terms of the areas of geometric figures in the original polygon. Tracing this back to the original polygon the area of the midpoint polygon is 3/4 of the original polygon -1/4 of the area of triangle ABC plus another triangle which has a point N I need to construct. Here is the construction take the midpoint of CA, M connect it to B then extend the line BM to twice its length to get point N then if N is on the same side of DE as A subtract 1/4 of the area of END otherwise add it from the above the result will always be less than 3/4 of the area of the original polygon or less from the above. This should hold for any three consecutive vertices of any polygon. - 2 Holding three corners of the pentagon fixed and multiplying the other two by a large constant, the area of the middle pentagon should tend towards 1/2 -- the pentagon is almost a triangle, two quarters of which are filled by the middle pentagon. I'm also worried about your calculation of the area of the original pentagon -- it seems to me that it need not be more than 1/2 the determinant of (x_1,y_1) and (x_2,y_2): for example, if (x_1,y_1) is close to the x-axis and (x_2,y_2) is close to the y axis. – Hugh Thomas Nov 16 2009 at 3:51 There were mistakes I think I have fixed them. – Kristal Cantwell Nov 16 2009 at 19:01 I think I see the problem with what I did. The triangle NDE appears with another term in Speyer's proof. I think that term corresponds to triangle MBN. 
I think I could eventually get a geometric equivalent to the difference between the 3/4 area and the area of the half point pentagon but there is a complication that it involves the area of two triangles at least one of which may have to be subtracted in some cases. – Kristal Cantwell Nov 17 2009 at 1:17 I finally got this to work and got the difference between 3/4 of the area of the original polygon and the area of the midpoint polygon as an expression of geometrical figures in the original polygon everything before this didn't work because I was trying to get too big a difference between areas. – Kristal Cantwell Nov 19 2009 at 21:35 For pentagons, area(Q)/area(P) can take any value in the open interval (1/2, 3/4). Let the vertices of P be A, B, C, D, E in cyclic order. We can get arbitrarily close to 3/4 from below if we let P degenerate to a triangle by letting the pair (A, B) approach one vertex of a triangle, (C, D) approach a second vertex of the triangle, and E approach a third. (It doesn't seem to matter which triangle we take.) The interval is open because I am not allowing the case where the vertices actually coincide. Similar constructions show that if pentagons are replaced with an n-gon for n ≥ 6, we can get area(Q)/area(P) arbitrarily close to 1. If instead we let (A, B, C) degenerate to a single vertex of the triangle, then area(Q)/area(P) can be made arbitrarily close to 1/2; this construction actually works for n ≥ 4. It's obvious that area(Q)/area(P) = 1/4 if P is a triangle. I think area(Q)/area(P) = 1/2 if P is a quadrilateral. - 1 It seems like this shows that you can get any value in (1/2,3/4), but why can't you get values outside of that interval? – Anton Geraschenko♦ Nov 10 2009 at 15:34 Anton, I don't know. I was hoping someone else would. In particular, maybe there's an argument that extrema of this ratio are at the degenerate polygons. – Michael Lugo Nov 10 2009 at 15:59 It's called a "midpoint polygon". The problem seems to be addressed (with proofs for triangles and quadrilaterals but only a conjecture for the pentagon) here: http://techhouse.brown.edu/~mdp/midpoint/pentagons.php (The other construction mentioned is a {5/2} star polygon inscribed in a pentagon.) - 1 I doubt that the problem is solved over there. They prove that the area of the midpoint pentagon is equal to P/2 plus one fourth of the area of the star pentagon. But for the area of the star polygon they count the area of the middle pentagon twice (for some strange reason). In my opinion the arguments presented in the link show the equivalence between the two formulations of the problem given by Manuel. – Philipp Lampe Nov 10 2009 at 16:21 1 Ah yes, they switch from proof to conjecture for the upper bound once they get to pentagons. – Jason Dyer Nov 10 2009 at 16:28 I asked this problem to a friend of mine, and he gave a proof of both the lower and the upper bounds that, to me, look cleaner than any offered so far. People's opinions may differ on this. I am now going to cut and paste from his email. Note that both arguments are rather short but it's easy to check the details and they look fine to me. The argument establishes the correct bounds for an n-gon for any n>=5. 
The area lost in the process of joining midpoints is 1/4 of the sum of the areas of ABC, BCD, etc., and it's easy to see that the only pairs of those triangles that intersect are consecutive pairs, from which you get that the total of those areas is at most twice the area of the original polygon for 4 or more sides (and it's easy to see it's a strict inequality for 5 or more sides). So the lower bound is easy, geometrically (and for 6 or more sides, cluster two or more vertices near each of the vertices of a triangle to see that 1 is the right upper bound). As for 3/4 ... you want to prove the total of the areas of ABC, BCD etc. is greater than the area of the pentagon, or equivalently, cutting two triangles out of the pentagon, that area(EAB) + area(ABC) + area(CDE) > area(ABD). Putting E at (x_1, y_1) and C at (x_2, y_2), convexity means y_1, y_2 > 0, x_1 < 0, x_2 + y_2 > 1; EAB alone is big enough if y_1 > 1 and ABC is big enough if y_2 > 1 (while if either = 1 you have one of the areas equal to area(ABD) while the others are positive). So you can assume 0 < y_1, y_2 < 1 and want to prove y_1 + y_2 + x_1(y_2 - 1) - x_2(y_1 - 1) > 1, and the LHS is minimised for x_1 as large as possible and x_2 as small as possible (we know the signs of y_2 - 1 and y_1 - 1), which minimum turns out to be 1 + y_1y_2 > 1 (taking x_1 → 0 and x_2 → 1 - y_2 gives y_1 + y_2 + (1 - y_1)(1 - y_2) = 1 + y_1y_2). - I think we lose no generality if we take the vertices (0, 1), (0, 0), (1, 0), (1, y), (x, 1), in counterclockwise order, with $0 < x < 1$, $0 < y < 1$. In this case the pentagon is still convex, and $Area(P)=\frac{1}{2}+\frac{x+y-xy}{2}$ and $Area(Q)=\frac{Area(P)}{2}+\frac{1}{8}$. Then the ratio $R=\frac{Area(Q)}{Area(P)}=\frac{1}{2}+\frac{1}{4}\frac{1}{1+x+y-xy}$. From $0 < x < 1$, $0 < y < 1$ we get $1/x + 1/y > 2$, hence $1+x+y-xy > 1+xy > 1$, and so $R < \frac{3}{4}$. Also, we can take $x=1-\alpha$, $y=1-\beta$ with $0 < \alpha < 1$, $0<\beta<1$, which gives $1+x+y-xy=2-\alpha\beta<2$. Then $R > \frac{1}{2}+\frac{1}{8}=\frac{5}{8}$, a new lower bound. - I think there is a problem with this. For one thing there are examples where the ratio is 3/4. Your maximum might hold for all pentagons with parallel sides though. – Kristal Cantwell Nov 19 2009 at 19:50
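Not part of the thread above, but a quick way to get a feel for the ratio area(Q)/area(P) discussed here is to compute it numerically with the shoelace formula. The sketch below (the pentagon coordinates are arbitrary test data, and the helper name is mine) builds the midpoint pentagon Q from a convex pentagon P and prints the ratio; randomizing the vertices is an easy way to probe the conjectured interval (1/2, 3/4).

```cpp
#include <cstdio>

// Signed area of a polygon via the shoelace formula.
double polygon_area( const double *x, const double *y, int n )
{
    double a = 0.0;
    for( int i = 0; i < n; i++ )
    {
        int j = (i + 1) % n;
        a += x[i] * y[j] - x[j] * y[i];
    }
    return 0.5 * a;
}

int main()
{
    // A convex pentagon P (counterclockwise); coordinates are arbitrary test data.
    double px[5] = { 0.0, 1.0, 1.4, 0.7, -0.3 };
    double py[5] = { 0.0, 0.0, 0.8, 1.5,  0.9 };

    // Midpoint pentagon Q: its vertices are the midpoints of P's sides.
    double qx[5], qy[5];
    for( int i = 0; i < 5; i++ )
    {
        int j = (i + 1) % 5;
        qx[i] = 0.5 * (px[i] + px[j]);
        qy[i] = 0.5 * (py[i] + py[j]);
    }

    double aP = polygon_area( px, py, 5 );
    double aQ = polygon_area( qx, qy, 5 );
    printf( "area(P) = %f, area(Q) = %f, ratio = %f\n", aP, aQ, aQ / aP );
    return 0;
}
```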
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381542801856995, "perplexity_flag": "head"}
http://vec3.ca/gjk/
From the artist, to your screen; this is everything between.

# GJK

By request, an accessible explanation of the GJK (that's short for Gilbert-Johnson-Keerthi) intersection algorithm. I'll note in advance for the uninitiated – this algorithm only works for convex objects. Arbitrary convex objects, yes, but they must be convex. That's important! Another note: the full GJK algorithm is more than just an intersection test. It returns data about the actual separation between the objects (if they are non-intersecting). I'm presenting a simple boolean intersection query, since doing so simplifies things greatly without rendering this whole page useless. The fundamentals are the same for the full algorithm, and an understanding of this version will go a long way towards an understanding of the real deal. This is a high-level description of the algorithm. The implementation notes are here, for those that already understand how the algorithm works and just want to code it.

# Intersection Queries: A Primer

The hardest part of explaining GJK is explaining exactly what it is that it's doing and why that actually works. Next to that, the implementation is fairly straight-forward. So let's start by looking at one of the simplest intersection queries there is to try and work up a bit of intuition on exactly what it is that we want to accomplish. Don't pay attention to the particular geometry of this problem, which doesn't generalize well to arbitrary shapes at all. Pay attention to the way I'm going to transform the problem into a slightly different one, which I'll then generalize to arbitrary convex forms. And so, behold! A circle-circle intersection:

[Interactive diagram: two circles, R (red) and B (blue), on a grid; they fill in when they intersect.]

The diagram is interactive: you can drag each circle by its center or change its radius by dragging a point on its edge. The circles will fill in if they intersect. Anyway, what we have here are two circles in 2D. Each circle is defined by its center point (we'll call that $C$) and its radius $R$. For our two circles, we have two pairs of these variables, which we'll call $C_r$, $C_b$, $R_r$, and $R_b$ (the subscripts stand for "red" and "blue", in case it wasn't painfully obvious). The circle-circle intersection test is absurdly simple: a pair of circles intersects if and only if the distance between their centers is less than the sum of their radii. That is, the circles intersect if the following is true: $$distance( C_r, C_b ) < R_r + R_b$$ We're going to rearrange that formula a little. Let's start with the left side. $$distance( C_r, C_b ) = length( C_r - C_b )$$ Remember: a point minus a point is a vector. A point plus a vector is a point. The $0$ in this next bit represents the point at the origin. $$length( C_r - C_b ) = length( 0 + (C_r - C_b) - 0 )$$ We haven't really changed anything. We added the vector $C_r - C_b$ to the origin to get some point and then subtracted the origin from that to get our original vector back. Why'd we do that?
Because the above is one set of parentheses away from looking like another distance operation: $$length( 0 + (C_r - C_b) - 0 ) = length( (0 + (C_r - C_b)) - 0 ) = distance( 0 + (C_r - C_b), 0 )$$ Plugging that back into our original formula gives us: $$distance( 0 + (C_r - C_b), 0 ) < R_r + R_b$$ And then we clean things up a little by substituting the following: $C_g = 0 + (C_r - C_b)$, $R_g = R_r + R_b$. $$distance( C_g, 0 ) < R_g$$ And that's it. What we've done above is change the problem into an equivalent one. Instead of asking whether two circles intersect, we now ask whether a third circle, constructed out of the two, contains the origin.

[Interactive diagram: the red and blue circles together with the constructed green circle G, centered at $C_r - C_b$ with radius $R_r + R_b$; G fills in when it contains the origin.]

That's a fairly silly derivation for a pair of circles, but it's a very simple illustration of the general transform we're going to apply to intersection problems in order to handle arbitrary convex forms.

# Enter The Minkowski Sum

So how exactly did we construct the green circle above? In essence, what we've done is subtract area off of one circle (the blue one) and add it to the other. That's all well and good with a pair of circles, but real objects don't always have handy things like centers and radii for us to work with. What we need is a more general approach. That approach is based on something called the Minkowski sum (represented by the symbol $\oplus$). You'll get a lot more detail out of the Wikipedia article linked, but in short, the Minkowski sum operates by treating each point within the second shape (yes, all infinity of them) as a vector and adding each vector to each point in the first shape, producing a whole new set of (infinitely many) points which define a shape. (I'm going to become less and less careful about the distinction between points and vectors as this goes on. If you see a point where a vector should be, subtract the origin off of it. If you see the reverse, add it to the origin.) However, we're not interested in the plain Minkowski sum here. We're interested in a slightly different operation, commonly referred to as the Minkowski difference, which works just like the sum, except that each of the vectors derived from the second shape is subtracted from the first shape's points. In geometric terms, the Minkowski difference of two shapes is the Minkowski sum of the first shape and the second shape's reflection through the origin (drawn below in the darker blue).
[Interactive diagram: the red circle R, the blue circle B, B's reflection -B through the origin (darker blue), and the green circle G built from R and -B; G fills in when it contains the origin.]

Our silly algebra up above with the pair of circles is such that the formulas for the green circle's center and radius are equivalent to taking the Minkowski difference of the red and blue circles and simplifying the formulas as far as they'll go. The nice thing about the Minkowski difference is that, because it isn't simplified, it will work with any pair of shapes, and the composite shape (which I'll keep calling green) it produces contains the origin if and only if the two source shapes intersect. Unfortunately, using the Minkowski sum also leaves us with a problem. The Minkowski sum works with arbitrary shapes because it reduces them to one of the most basic representations imaginable: a set of points. The problem is that this set is infinite, so we're clearly not going to be computing anything with it directly. What we need to do is derive another representation.

# Describing the Green Shape

Let's start with what we've got. We've got our two shapes, a red one and a blue one, which we'll represent as the point sets $R$ and $B$. Now we want to perform a Minkowski difference, so instead of $B$ we'll need to operate on its mirror through the origin. Let's call that $-B$, since negation is descriptive of how the set is built to begin with. And then we've got our green shape represented as the set $G = R \oplus -B$. Now, remember how I made a big deal about these shapes all being convex? This is where that starts to be important. We know (well, we demand) that $R$ and $B$ represent convex shapes. Reflection through the origin doesn't change whether a figure is convex or concave, so we also know that $-B$ is convex. And the Minkowski sum of convex shapes is also convex (this is somewhat intuitive, though I'm sure there's a proper proof out there somewhere), which covers $G$. So what does that mean? Well, one of the defining properties of convex forms is that no straight line can possibly intersect the shape more than twice. And if the hypothetical line is directed, then it will (obviously) have a maximum of one entry and one exit point. As these intersection points are all on the surface of the shape, we can actually describe our shape by throwing enough (again, infinitely many in the general case) directed lines at it and taking just the exit points. In fact, we don't even need the full lines. We can define the shape just as well using pure vectors (directions) by taking the exit points of all possible lines parallel to the vector and keeping only the one farthest along said vector. Wrap that last idea up into a function, and you have what's called a support mapping.
## The Support Mapping

The formal definition of the support mapping of a shape $A$ (where $A$ denotes the set of all points within or on the surface of the shape) looks something like the following: $$S_A(\vec{v})\in A,\;\vec{v}\cdot S_A(\vec{v})=\max\left\{\vec{v}\cdot x : x \in A\right\}$$ So a support mapping is a function (mathematical or, as you'll see shortly, otherwise) which is associated with a given convex shape, and which takes a vector ($\vec{v}$) as input and returns a point on the shape's surface which is maximally extreme with respect to $\vec{v}$. To make things somewhat simpler, if there are many such points (say $\vec{v}$ is perpendicular to a face), the support mapping is allowed to return any one of those points. Here are a few examples of support mappings:

vec3 sphere_support( const sphere &s, const vec3 &v )
{
    //remove the division if v is known to be normalized elsewhere
    return s.center + v * (s.radius / length( v ));
}

//aabb = axis-aligned bounding box
vec3 aabb_support( const aabb &bb, const vec3 &v )
{
    vec3 ret;

    ret.x = v.x >= 0 ? bb.max.x : bb.min.x;
    ret.y = v.y >= 0 ? bb.max.y : bb.min.y;
    ret.z = v.z >= 0 ? bb.max.z : bb.min.z;

    return ret;
}

Here's another one. It operates on the convex hull encasing an arbitrary set of points (for instance, the corners on a frustum):

vec3 point_cloud_support( const vec3 points[], unsigned int n_points, const vec3 &v )
{
    unsigned int best = 0;
    float best_dot = dot( points[0], v );

    for( unsigned int i = 1; i < n_points; i++ )
    {
        float d = dot( points[i], v );
        if( d > best_dot )
        {
            best = i;
            best_dot = d;
        }
    }

    return points[best];
}

Simple, and no infinities in sight. One thing remains.

## Once More, In Green

We have our red and blue shapes, so we know what $R$ and $B$ represent, and that allows us to write simple little support mappings like the above (call those $S_R(\vec{v})$ and $S_B(\vec{v})$). But what about $G$'s support mapping? Well, let's go back to the definition of the Minkowski sum. The farthest point on the surface of the green shape along $\vec{v}$ is going to be the sum of the farthest points along $\vec{v}$ in its source shapes, $R$ and $-B$: $$S_G(\vec{v})=S_R(\vec{v})+S_{-B}(\vec{v})$$ The only wrinkle is that the above requires $S_{-B}(\vec{v})$, but we only have $S_B(\vec{v})$. Not to worry, $-B$ is just a reflection of $B$, so we can just mirror the results of its support mapping. There's one additional step, though: mirroring the whole mapping mirrors not just the result, but also the meaning of $\vec{v}$, so we'll negate that as well, in order to compensate: $$S_{-B}(\vec{v})=-S_B(-\vec{v})$$ Which we can plug into our formula for $G$'s support mapping: $$S_G(\vec{v})=S_R(\vec{v})-S_B(-\vec{v})$$ And that's that. With that formula and our hand-crafted support mappings for each type of shape, we're free to test for intersections between any pair of shapes. All we need now is the algorithm which is going to make use of $S_G(\vec{v})$.

# Searching for the Origin

And now we're at the core of the GJK intersection algorithm. This is the part where we search for the origin in $G$. Now, our search space is potentially huge, so we want an algorithm that's going to head straight for the origin, cutting the available search space down as quickly as possible. We also want something relatively simple, so that we don't go insane trying to write and debug the thing. So we're going to accomplish that by jumping around the figure in an orderly manner, trying to construct a simplex that contains the origin.
The way we're going to do this is we're going to pick points one at a time from the support mapping and add them to a set of points. We're then going to look at the figure described by that set (two points is a line, three is a triangle, four is a tetrahedron, so on if you're working in four or more dimensions) and figure out which of its aspects (vertices, edges, faces) the origin is closest to (that is, which aspect's Voronoi region contains the origin). We're then going to throw away all of the points that aren't part of that aspect and pick a sane search direction to find our next point, such that we converge on the origin. That sane direction has two defining properties – it's perpendicular to the part of the set we're keeping and it points towards the origin. We stop when we manage to build a simplex (a triangle in 2D, tetrahedron in 3D, so on) that encompasses the origin or when we find that it's impossible to do so. So let's look at some examples. The cases differ based on the number of points in our current simplex, so I'll go over a few of them. The cases are labelled by the number of points in the current simplex after a new point has been selected. Each case is responsible for deciding which of the points to keep and then computing the search direction used to get the next point.

## No Points

This is the case when we start the algorithm. As we have no points, we really haven't anything to restrict our selection of the next point. We select an arbitrary starting search vector $\vec{v}$ and go to the next iteration.

## One Point

This is where we land after our first iteration. We've got one point to work with, so all we can do is look for another. We search along a vector pointing from our point towards the origin.

## Two Points

[Interactive diagram: draggable points A and B joined by a segment, with the perpendicular lines through A and B dividing the plane into three regions.]

So two points ($A$ and $B$) divide our space into three regions (this is an edge-on view of 3D space – the middle region wraps all the way around the line segment). Our dividing planes are the two planes perpendicular to the line segment and passing through the endpoints. Great, what do we do with it? Well, the question is which aspect of the line is the origin closest to? There are the end points and then there's the line segment itself. The Voronoi diagram above is the answer. If the origin lies in the region nearest to $A$, then we throw away point $B$. If it lies in the region on the far side nearest $B$ then we keep $B$ and throw $A$ out. Otherwise it must lie closest to the line segment itself (in that center region), and we keep both $A$ and $B$. Figuring out which region we're in is remarkably simple, too: it's just a few dot products. (Actually, that's a lie. In this case the origin will always be in region $2$, the middle region, but that's an optimization I'll discuss later.) Once we reduce the simplex to those parts nearest the origin, we need to come up with a search direction for the next point. That's easy enough if we've reduced the simplex down to a single point – we just pick the vector from that point towards the origin (as we did in the previous one-point case).
If we have a line segment, then we need to do a few cross products to come up with a vector that's oriented towards the origin and is perpendicular to the line.

## Three Points

[Interactive diagram: a draggable triangle ABC with the boundaries of the Voronoi regions of its vertices and edges drawn around it.]

So again, we have some regions. If the origin is in the inner region, then we keep all three points and take a direction perpendicular to the plane the triangle lies in and in the direction of the origin. Otherwise, depending on which region it falls into we'll keep either a vertex (well, no, but again I'll talk about that later) or an edge's line segment. The only subtlety here is that the central region is actually two regions. This is 3D space, remember? There's the volume "above" the triangle and the volume "below". That's a distinction which will help us make an awesome optimization when it's time to write the code.

## Four Points

I'm not even going to try to make a diagram of a tetrahedron. I spent hours drawing them on paper when I was working my way through this the first time, and I've come to the conclusion that it's pointless to even try to get all the nuances of it down in 2D. In the end I built a little model, which I highly recommend to anyone that's trying to wrap their brain around this. The important case with the tetrahedron is the volume inside of it. If the origin is found to be there, then we're done. We've built a shape that contains the origin, thus there's nothing more to do. One note though – don't be daunted by the apparent number of cases that a tetrahedron creates! A great many of them are impossible, and the rest are largely duplicates of each other.

# Knowing When to Stop

But what if the objects don't intersect? How do we figure that out? That's rather easy. See all those spots where we get our next point from the support mapping? If that next point is ever closer to our existing figure than the origin itself is, then we know we can stop. The support mapping gives us the farthest point in a given direction, and our search direction is always pointed at the origin, so if we go as far as we can straight towards the origin and fail to reach it, then we know we never will, and we can abort the search knowing that the objects do not, in fact, intersect.

# That Stuff I Said I'd Get to Later

Those little parenthetical notes I left above stating that certain cases weren't going to be a problem for us – the explanation is on the implementation page.
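For readers who want to see how the pieces above fit together before heading to the implementation page, here is a self-contained sketch (entirely my own, not the author's code) of the boolean query in 2D, where the terminating simplex is a triangle rather than a tetrahedron. The support mapping of the Minkowski difference is computed exactly as derived above, $S_G(\vec{v})=S_R(\vec{v})-S_B(-\vec{v})$, and the line and triangle cases follow the Voronoi-region logic described in the last few sections (including the optimization of ignoring the newest vertex's region). The 3D version adds the tetrahedron case but uses the same skeleton; degenerate touching cases are not handled here.

```cpp
#include <cstdio>
#include <vector>

struct vec2 { float x, y; };

static vec2 operator-( vec2 a, vec2 b ) { vec2 r = { a.x - b.x, a.y - b.y }; return r; }
static vec2 operator-( vec2 a )         { vec2 r = { -a.x, -a.y }; return r; }
static float dot( vec2 a, vec2 b )      { return a.x * b.x + a.y * b.y; }

//(a x b) x c expanded via the vector triple product: b(a.c) - a(b.c)
static vec2 triple( vec2 a, vec2 b, vec2 c )
{
    float ac = dot( a, c ), bc = dot( b, c );
    vec2 r = { b.x * ac - a.x * bc, b.y * ac - a.y * bc };
    return r;
}

//support mapping of a convex point cloud: the farthest point along v
static vec2 support( const std::vector<vec2> &pts, vec2 v )
{
    vec2 best = pts[0];
    float best_dot = dot( pts[0], v );
    for( size_t i = 1; i < pts.size(); i++ )
        if( dot( pts[i], v ) > best_dot ) { best = pts[i]; best_dot = dot( pts[i], v ); }
    return best;
}

//support mapping of the Minkowski difference G: S_G(v) = S_R(v) - S_B(-v)
static vec2 minkowski_support( const std::vector<vec2> &r, const std::vector<vec2> &b, vec2 v )
{
    return support( r, v ) - support( b, -v );
}

static bool gjk_intersect( const std::vector<vec2> &r, const std::vector<vec2> &b )
{
    vec2 d = { 1, 0 };                      //arbitrary starting direction
    vec2 s[3];
    int n = 0;

    s[n++] = minkowski_support( r, b, d );
    d = -s[0];                              //head for the origin

    for( int iter = 0; iter < 64; iter++ )
    {
        vec2 a = minkowski_support( r, b, d );
        if( dot( a, d ) < 0 )
            return false;                   //couldn't reach the origin along d
        s[n++] = a;

        vec2 ao = -a;
        if( n == 2 )
        {
            //line case: search perpendicular to the segment, toward the origin
            vec2 ab = s[0] - a;
            d = triple( ab, ao, ab );
        }
        else
        {
            //triangle case: s[2] = a is the newest point
            vec2 ab = s[1] - a, ac = s[0] - a;
            vec2 ab_perp = triple( ac, ab, ab );    //perpendicular to AB, away from C
            vec2 ac_perp = triple( ab, ac, ac );    //perpendicular to AC, away from B
            if( dot( ab_perp, ao ) > 0 )      { s[0] = s[1]; s[1] = a; n = 2; d = ab_perp; }
            else if( dot( ac_perp, ao ) > 0 ) { s[1] = a; n = 2; d = ac_perp; }
            else return true;               //origin is inside the triangle
        }
    }
    return false;                           //no convergence (degenerate input)
}

int main()
{
    std::vector<vec2> box_a = { { 0, 0 }, { 2, 0 }, { 2, 2 }, { 0, 2 } };
    std::vector<vec2> box_b = { { 1, 1 }, { 3, 1 }, { 3, 3 }, { 1, 3 } };
    std::vector<vec2> box_c = { { 5, 5 }, { 6, 5 }, { 6, 6 }, { 5, 6 } };
    printf( "a vs b: %d (expect 1)\n", (int)gjk_intersect( box_a, box_b ) );
    printf( "a vs c: %d (expect 0)\n", (int)gjk_intersect( box_a, box_c ) );
    return 0;
}
```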
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9106007814407349, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/84212-trig-id.html
# Thread: 1. ## trig id Hey all, I was wondering if someone can walk me through this one: [cot^2(B) - cos^2(B)]/[csc^2(B) -1] My solution is as follows: [cot^2(B)/-sec^2(B)] - cos^2(B)/-sec^2(B) -cos^4(B)/sin^2(B) +1 and now i am a bit stuck. the correct answer is cos^2(B). Can someone please finish this off or tell me a better way to get to the right answer? Thanks, 2. Originally Posted by pberardi Hey all, I was wondering if someone can walk me through this one: [cot^2(B) - cos^2(B)]/[csc^2(B) -1] My solution is as follows: [cot^2(B)/-sec^2(B)] - cos^2(B)/-sec^2(B) -cos^4(B)/sin^2(B) +1 and now i am a bit stuck. the correct answer is cos^2(B). Can someone please finish this off or tell me a better way to get to the right answer? Thanks, From where you got to: Distribute the minus sign to the denominator: $\frac{cos^4(B)}{-(sin^2(B)+1)} = \frac{cos^4(B)}{1-sin^2(B)}$ and what does 1-sin^2(B) equal 3. Originally Posted by pberardi Hey all, I was wondering if someone can walk me through this one: [cot^2(B) - cos^2(B)]/[csc^2(B) -1] My solution is as follows: [cot^2(B)/-sec^2(B)] - cos^2(B)/-sec^2(B) -cos^4(B)/sin^2(B) +1 and now i am a bit stuck. the correct answer is cos^2(B). Can someone please finish this off or tell me a better way to get to the right answer? Thanks, Hi! I think you did wrong on the signs. Because you should get -cos^4(B)/sin^2(B) - 1. If you get this, you get the exercice correct: If you want to see my resolution, here it is: 4. Originally Posted by e^(i*pi) From where you got to: Distribute the minus sign to the denominator: $\frac{cos^4(B)}{-(sin^2(B)+1)} = \frac{cos^4(B)}{1-sin^2(B)}$ and what does 1-sin^2(B) equal But $\frac{cos^4(B)}{-(sin^2(B)+1)} =\frac{cos^4(B)}{-sin^2(B)-1}$ 5. Originally Posted by pberardi Hey all, I was wondering if someone can walk me through this one: [cot^2(B) - cos^2(B)]/[csc^2(B) -1] My solution is as follows: [cot^2(B)/-sec^2(B)] - cos^2(B)/-sec^2(B) -cos^4(B)/sin^2(B) +1 and now i am a bit stuck. the correct answer is cos^2(B). Can someone please finish this off or tell me a better way to get to the right answer? Thanks, Here is a better way, see, $\frac{\cot^2 B-\cos^2 B}{\csc^2 B-1}$ $=\frac{\cot^2 B-\cos^2 B}{\cot^2 B}$ $=\frac{\cot^2 B}{\cot^2 B}-\frac{\cos^2 B}{\cot^2 B}$ $=1-\frac{\cos^2 B}{\cot^2 B}$ $=1-\frac{\cos^2 B}{\frac{\cos^2 B}{\sin^2 B}}$ $=1-\cos^2 B\times \frac{\sin^2 B}{\cos^2 B}$ $=1-\sin^2 B$ $=\cos^2 B$
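Not part of the original thread: when a sign gets lost in this kind of manipulation, a quick numerical spot-check can settle which answer is right. The sketch below (the function name is just for illustration) evaluates the left-hand side at a few angles and compares it with cos^2(B), matching the result derived above.

```cpp
#include <cstdio>
#include <cmath>

// Left-hand side of the identity: (cot^2 B - cos^2 B) / (csc^2 B - 1)
static double lhs( double B )
{
    double s = std::sin( B ), c = std::cos( B );
    double cot2 = ( c * c ) / ( s * s );
    double csc2 = 1.0 / ( s * s );
    return ( cot2 - c * c ) / ( csc2 - 1.0 );
}

int main()
{
    // Sample a few angles, avoiding multiples of pi (csc undefined) and pi/2 (cot = 0).
    double angles[] = { 0.3, 0.7, 1.1, 2.0, 2.8 };
    for( double B : angles )
        printf( "B = %.2f   lhs = %.12f   cos^2(B) = %.12f\n",
                B, lhs( B ), std::cos( B ) * std::cos( B ) );
    return 0;
}
```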
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9231739044189453, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/200853/does-apparent-retrograde-motion-of-planets-begin-and-end-at-quadrature?answertab=votes
# Does apparent retrograde motion of planets begin and end at quadrature? I've read in several places that the apparent retrograde motion of planets (during which they seem, as viewed from Earth, to move in the opposite sense of their normal "direct" orbital motion against background stars at infinite distance) occurs between the two quadrature points (at which the planet-Earth-Sun angle is 90°). I have always assumed that "between" is an approximation, since, at quadrature, though the Earth's motion is contributing nothing to the planet's apparent motion (since it is moving directly along the line of sight to the planet), the planet's true motion is providing apparent "direct motion", so that retrograde motion, though bounded by the quadrature points, does not begin or end there. I've never been able to prove this to my satisfaction, and often come across descriptions that make me wonder whether I've had it wrong all along. Some of these sources would seem to be quite authoritative, such as a translator's footnote to Copernicus, in which it is stated that the angular extent of a superior planet's retrograde motion observed from Earth is defined by the tangents to the Earth's orbit that pass through the planet. While these tangents certainly bound the angular extent of retrograde motion — in fact, they define the retrograde (and direct) motion of an unmoving planet (since they correspond to the maximum parallax for the planet seen from Earth) — isn't the actual extent of retrograde motion smaller, for the reasons stated above? How can it be demonstrated geometrically (assuming circular orbits with common centers and uniform angular velocities, and given periods and radii for those orbits for Earth and the planet) what the rate of change of the apparent angular position of an orbiting planet is as a function of the planet-Earth-Sun angle? - 1 As a practical astronomical matter, it should be possible to do this given only the orbital periods ($p_E$ and $p_P$), orbital radii ($r_E$ and $r_P$), and the planet-Earth-Sun angle, or "elongation" ($\varepsilon$). – raxacoricofallapatorius Sep 22 '12 at 20:41 ## 1 Answer The tangents do not bound the retrograde motion, but the range is close. You are right that they define the range for an unmoving outer planet. The boundaries of retrograde motion are found when the velocity of each planet projected on the perpendicular to the line joining them is the same. For a first approximation, you can consider the velocity of the outer planet to be perpendicular to the joining line (valid if the outer planet is much farther from the sun than Earth). In this approximation, if $\theta$ is the angle at the Earth from the Sun vector to the planet vector, the condition at the boundary is $v_P=v_E \cos \theta$. This supports the tangent for an unmoving outer planet: if we set $v_P=0$ we want $\cos \theta =0,$ so $\theta = \frac \pi 2.$ The boundary of retrograde motion for a moving outer planet will be beyond the tangent when Earth is leading, and behind it when Earth is lagging, but the range will be about the same. Added: the following assumes circular orbits. Draw the triangle from the planet to the earth and sun. Let $\theta$ be the angle at the earth and $\phi$ the angle at the planet.
The projection of the planet velocity perpendicular to the planet-earth line is $v_P \cos \phi$. The projection of the earth velocity perpendicular to the planet-earth line is $v_E \cos \theta$. At the boundaries of the retrograde motion these are equal, so $v_P \cos \phi=v_E \cos \theta$. $\phi$ will be very close to the half-width of the Earth's orbit as seen from the planet, so you can solve for $\theta=\arccos \frac {v_P \cos \phi}{v_E}$. You can iterate the solution (improve $\phi$ from the $\theta$ you find) if you need more accuracy. - Yes, it's clearly close, and the tangent lines are bounding, but what are the actual beginning and end values of $\varepsilon$? (Also, I'm not sure I understand the last sentences.) – raxacoricofallapatorius Sep 26 '12 at 22:22
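To make the suggested iteration concrete, here is a small numerical sketch (my own, not from the answer). It uses rough circular-orbit values for Earth and Mars, writes $\varepsilon$ for the Sun-Earth-planet elongation (so the answer's $\cos\theta$ corresponds to $|\cos\varepsilon|$, since $\varepsilon$ is obtuse at the stationary points), starts $\phi$ at the half-width of Earth's orbit as seen from the planet, and then alternates between the stationarity condition and the law of sines in the Sun-Earth-planet triangle. The orbital numbers and the iteration count are my own assumptions.

```cpp
#include <cstdio>
#include <cmath>

int main()
{
    const double PI = 3.14159265358979323846;

    // Rough circular-orbit values (assumed): Earth and Mars.
    const double rE = 1.0,   vE = 29.79;   // AU, km/s
    const double rP = 1.524, vP = 24.13;   // AU, km/s

    // phi: angle at the planet; start at the half-width of Earth's orbit
    // as seen from the planet, as suggested in the answer.
    double phi = std::asin( rE / rP );
    double eps = 0.0;                      // elongation (Sun-Earth-planet angle)

    for( int i = 0; i < 20; i++ )
    {
        // Stationarity condition: vP*cos(phi) = vE*|cos(eps)|, with eps obtuse.
        eps = std::acos( -vP * std::cos( phi ) / vE );
        // Law of sines in the Sun-Earth-planet triangle updates phi.
        phi = std::asin( ( rE / rP ) * std::sin( eps ) );
    }

    printf( "stationary elongation ~ %.1f deg, angle at planet ~ %.1f deg\n",
            eps * 180.0 / PI, phi * 180.0 / PI );
    return 0;
}
```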
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9156044125556946, "perplexity_flag": "middle"}
http://polymathprojects.org/2012/06/09/polymath7-discussion-thread/?like=1&_wpnonce=e3a0b2f395
# The polymath blog ## June 9, 2012 ### Polymath7 discussion thread Filed under: discussion,hot spots — Terence Tao @ 5:50 am The "Hot spots conjecture" proposal has taken off, with 42 comments as of this time of writing.  As such, it is time to take the proposal to the next level, by starting a discussion thread (this one) to hold all the meta-mathematical discussion about the proposal (e.g. organisational issues, feedback, etc.), and also starting a wiki page to hold the various facts, strategies, and bibliography around the polymath project (which now is "officially" the Polymath7 project). I've seeded the wiki with the links and references culled from the original discussion, but it was a bit of a rush job and any editing would be greatly appreciated.  From past polymath experience, these projects can get difficult to follow from the research threads alone once the discussion takes off, so the wiki becomes a crucial component of the project as it can be used to collate all the progress made so far and make it easier for people to catch up.  (If the wiki page gets more complicated, we can start shunting off some stuff into sub-pages, but I think it is at a reasonable size for now.) One thing I see is that not everybody who has participated knows how to make latex formatting such as $\Delta u = \lambda u$ appear in their comments.  The instructions for that (as well as a "sandbox" to try out the code) are at this link. Once the research thread gets long enough, we usually start off a new thread (with some summaries of the preceding discussion) to make it easier to keep the discussion at a manageable level of complexity; traditionally we do this at about the 100-comment mark, but of course we can alter this depending on how people are able to keep up with the thread. ## 41 Comments » 1. [...] 7 is officially underway. It is here. It has a wiki here. There is a discussion page here. [...] Pingback by — June 9, 2012 @ 5:22 pm 2. My ignorance is embarrassing, but I'll ask anyway. Could someone point me to directions on how to edit a comment I make on the research thread after I've posted it? I'd like to save moderators and readers the irritation of fixing my typographical/LaTeX errors. Comment by — June 9, 2012 @ 7:38 pm • Unfortunately, the hosting company for this blog doesn't allow editing of comments by users :-(. So I'll be editing comments manually, which has worked well enough in the past. For really lengthy computations, though, it may be a good idea to put the details on the wiki (maybe creating a subpage if necessary) and just put a link and a summary on the blog, since the wiki is easier to edit and format. Comment by — June 9, 2012 @ 8:29 pm • OK, thanks! Comment by — June 10, 2012 @ 1:44 am • It took me a while to find something that worked… the trick seems to be to type \$ latex [YOUR LATEX CODE] \$ but without a space between the first \$ and the word 'latex'. Comment by — June 11, 2012 @ 3:03 am 3. As there has been a lot of talk about approaching the conjecture using Bessel functions I decided to finally learn about them. I wrote up a summary (basically for my own benefit to understand the material better) which can be seen at http://www.math.missouri.edu/~evanslc/Polymath/BasicBessel The case of a sector is considered at the end, which ties back to the discussion in the research thread. Comment by — June 10, 2012 @ 9:20 pm 4.
The research thread is getting rather lengthy, so I will probably roll it over tomorrow by starting a new research thread that tries to summarise the progress so far, and then direct all discussion to the new thread. This is just a heads-up, though; in the meantime, keep using the current research thread. :-) Comment by — June 12, 2012 @ 2:45 am • Hmm, it’s only been three days and I think I may have to roll over the thread again, maybe sometime tomorrow. (From past experience with polymath projects, the first week or two are quite hectic and chaotic, with lots of people pursuing lots of possible angles of attack but after that things settle down, focusing on a core group of people pursuing a core set of strategies; the progress becomes less exciting, but more steady.) Comment by — June 15, 2012 @ 2:04 am 5. Continuing the comments of Hung (and moving the discussion to the discussion thread): I met with Hung today to discuss analytic proof of Isosceles Triangle – Special Case – Corollary 4, so maybe I can clarify some of our confusion. Hung suggested that an argument could be made via the scalar maximum principle by instead of considering the vector-valued gradient of $u$, considering the directional derivative $u_{\xi}$ in a direction $\xi$ which “points within the cone” (i.e. $0\leq\arg(\xi)\leq \Theta$, where $\Theta$ is the angle ABD). This seems reasonable: $u_{\xi}$ itself solves the heat equation. By considering, say $u_0\equiv 1$, we have that $u_{\xi}\leq 0$ at the bottom of the parabolic boundary. Along the Dirichlet boundary $u_{\xi}\leq 0$ for all time (because $u$ is non-negative in the interior). But we couldn’t figure out why $u_{\xi}\leq 0$ for all time on the Neumann part of the parabolic boundary. If we only had that we would be done by the scalar maximum principle… Maybe there is a simple reason we missed My understanding of the vector-valued (weak) maximum principle is that if you have a vector valued function which solves the heat equation, and its parabolic boundary data lies in some convex set $K$, then so too will the function on the entire parabolic domain. Switching between this and the scalar valued maximum principle is only a matter of projecting onto an axis $\xi$ or equivalently considering your convex set $K$ to be the space between two hyperplanes (Things get messier for, say, reaction-diffusion equations but for the heat equation I believe considering the gradient and directional derivatives are equivalent?) So in arguing via the vector-valued maximum principle we are taking as our convex set the infinite sector $S$ it seems. But then I wasn’t sure what the importance of $\epsilon$ is. Is an argument with $\epsilon$ necessary if we take for granted the weak maximum principle in the previous paragraph? Or is it that the argument is going beyond the basic weak maximum principle (in either the scalar of vector-valued case)? Part of the confusion for me I think is in parsing things as both the domain and range of $u$ were in the sector $S$. Comment by — June 16, 2012 @ 7:32 am • (Thanks, by the way, for moving these sorts of discussions to the discussion thread rather than the research thread – I should have mentioned earlier that this thread is intended in part to help explain and clarify all the hectic stuff that goes on on the research thread.) 
It is a bit odd that one can't use an "off the shelf" maximum principle, either in the scalar or vector setting, to establish this result, but has instead to "roll one's own" principle by adapting the proof (and this is why the epsilons have to come in; usually they are hidden in the proof of the textbook maximum principle). I'm not sure exactly why this is the case, but I think it is because the Neumann boundary condition does not directly place the gradient of u inside S on the boundary, but instead offers the option of either lying in the interior of S or in the exterior, and one then has to rule out the second possibility by an additional argument (taking advantage of either the Neumann boundary conditions or a reflection argument) to conclude. I think considering just a single derivative $u_\xi$ doesn't work because when one bounces off of a boundary, this derivative somehow gets mixed up with other derivatives, so one has to control the whole gradient at once or else one can't predict what happens on a boundary. Comment by — June 16, 2012 @ 4:35 pm 6. Ok, I see your point about losing information when only considering one directional derivative; it seems that part of the argument is to take advantage of the fact that $\nabla u$ points parallel to the boundary for points on the Neumann boundary (something which cannot be reflected in a property of a single derivative $u_{\xi}$). It seems you take advantage of this by expanding the sector to the set $S_{\epsilon(t+1)}$, which then allows you to identify a unique point $v$ that $\nabla u(x,t)$ is equal to for $x\in$ BD. But then I don't see where you take advantage of knowing the particular location of $v$ (besides knowing that it is on the boundary). Is it correct that the idea of reflecting across BD is just to justify that the equality $\partial_t\left(\nabla u\right)=\Delta\left(\nabla u\right)$ holds on the boundary BD as well? Another point I don't get is why you consider the set $S_{\epsilon(t+1)}$ and not just $S_{\epsilon}$. Is the idea that "as the set $S_{\epsilon(t+1)}$ is expanding in time, the only way for $\nabla u$ to catch it is if it is actively moving towards the (receding) boundary", which prevents it from touching the boundary with "zero derivative"? P.S. Looking at the argument I don't see any place that the acuteness of the triangle is used. Is it correct to say the same argument would work for an obtuse mixed Dirichlet-Neumann triangle? Comment by — June 16, 2012 @ 6:39 pm • The location of v is important (as pointed out by Hung) because it is both on the boundary of $S_{\varepsilon(t+1)}$ and on its reflection, which allows one to keep the direction of $\Delta \nabla u$ pointing inwards or tangentially. As you say, the receding nature of the boundary is to make sure that the time derivative $\partial_t \nabla u$ points strictly outwards, rather than tangentially, since otherwise one doesn't quite get a contradiction. (One could of course use some other increasing function of t than $\varepsilon(t+1)$, e.g. $\varepsilon e^t$, if desired.) I think the argument works fine for obtuse triangles, though one has to be a little careful with regularity. Once one has an obtuse Neumann angle, one no longer has C^2 regularity at the vertices, but only $C^{1,\alpha}$ for some $\alpha$ (basically, one has $\pi/\theta$ degrees of regularity at a vertex of angle $\theta$, except when $\theta$ divides evenly into $\pi$ when one has smoothness instead). But this may still be enough regularity to run the argument.
Comment by — June 16, 2012 @ 7:24 pm • Ok, I think I follow the argument now: The idea is that if $(x_0,t_0)$ is the first point with $v=\nabla u(x_0,t_0)$ on $\partial S_{\epsilon(t+1)}$ and $x_0\in$ BD, then (considering reflections) points of the form $(x,t_0)$ in the reflected domain have $\nabla u(x,t_0)$ lying in the union of $S_{\epsilon(t+1)}$ and its reflection, which is a *convex* set. Thus the “pull” from these nearby $x$, i.e. $\Delta \nabla u$, is into/tangent to this convex set and in particular doesn’t point in the direction of $\partial_t\nabla u$. I don’t think this argument would work for the obtuse case then as the union of $S_{\epsilon(t+1)}$ and its reflection wouldn’t be convex (and so while $\Delta \nabla u$ might be horizontal, it might still point in the same direction as $\partial_t\nabla u$). Also, whether my understanding above is correct or not, because the level curves of $u$ are roughly concentric circles at the corner, in the obtuse case even if we could show that $\nabla u$ stayed within certain angle bounds, it would not be the case that the *directional derivative* in the expected directions would have constant sign. So, it definitely seems considering $\nabla u$ has its advantages! Comment by — June 17, 2012 @ 4:29 am • This may be somewhat tangential to the original aim of this project, but it may be of interest to try to explore the vector maximum principle/coupled Brownian motion connection further. Two obvious directions to pursue are (a) finding a maximum principle version of mirror coupling arguments and (b) finding analogues of both coupled Brownian motion and maximum principle arguments in the discrete graph setting. (There must presumably be some literature on this sort of thing already – I would find it hard to believe that the two most fundamental tools in parabolic PDE have not been previously linked together!) Comment by — June 17, 2012 @ 7:21 am • Well actually my introduction to the hot spots problem came from David Jerison who suggested that his graduate student, Nikola Kamburov, and I try to come up with a purely analytic replacement for the coupling arguments used in hotspots problems (specifically those in the paper on Lip domains by Atar and Burdzy). We approached it by trying to consider maximum principles etc on the product domain, but we never formalized anything concrete… it seems that we should have been looking at the gradient vector (although as your argument shows some finesse is required even there) Comment by — June 17, 2012 @ 7:33 am 7. Here is another tool that may be useful. As I don’t see a direct application to solving the main conjecture, I will just give a brief explanation here in the discussion thread. The paper “Scaling coupling of Reflecting Brownian Motion and the Hot Spots Conjecture” by Mihai Pascu (http://www.ams.org/journals/tran/2002-354-11/S0002-9947-02-03020-9/home.html) introduces an exotic coupling known as the “Scaling Coupling” and uses it to identify the extremum of the first eigenfunction for certain mixed Dirichelet-Laplacian domains. The idea of a “Scaling Coupling” is as follows (see also http://www.diaspora-stiintifica.ro/prezentari/wks04/Pascu.pdf for pictures/further explanation): Consider the unit disk $U$. Then if $X_t$ is (free) Brownian motion started at $x$ it turns out that $Y_t=\frac{X_t}{M_t}$, where $M_t =a\vee\sup_{s\leq t}\{\vert X_t\vert\}$ for $a\leq1$, is (after a time-scaling) a reflected Brownian motion started at $y=\frac{x}{a}$. 
In fact, this still holds if $X_t$ itself was a reflected Brownian motion to begin with! Suppose now we consider the upper half of the unit disk, $U^+$ and assign Dirichlet boundary conditions to its flat bottom and Neumann boundary conditions to its arc-boundary. Considering the paths of $X_t$ and $Y_t$, we see that the *paths* will meet up when $X_t$ first hits the arc-boundary of $U^+$, or otherwise both *paths* will terminate at the same time when they hit the Dirichlet boundary. But now recall that for $Y_t$ to be a true Brownian motion, we need to scale its speed; to be a Brownian motion, $Y_t$ must travel slower than $X_t$ until they meet. But this implies that $X_t$ will always hit the Dirichlet boundary before $Y_t$ (since $Y_t$ is further behind on its path). By the usual adjoint/duality between Brownian motion and the heat equation it therefore follows that the first eigenfunction for $U^+$ is monotone radially out from the origin! Then, by conformal mapping, Pascu extends this scaling coupling to more general domains. The basic idea is that, as conformal mappings preserve angles, reflected Brownian motion gets mapped to reflected Brownian motion up to a time-scaling (the idea is that the “reflection angles” at the boundary are preserved, and in the interior “angles of direction of motion” are not “squished” so the Brownian motion still is equally likely to head in any direction). Therefore, if we consider a $C^{1,\alpha}$ (I think this matters for conformal mappings?) domain $D$ with an axis of symmetry (whence we can consider the domain $D^+$ with mixed boundary), we can identify it with $U$ via conformal mapping, and define a scaling coupling on $D^+$ via the scaling coupling on $U^+$. The crucial issue however, is that while the *paths* of $X_t$ and $Y_t$ in $D^+$ are nicely coupled, we need also control on the time-scaling of $X_t$ and $Y_t$ to ensure that $Y_t$ moves slower along its path than $X_t$ does along its path until the paths meet. This can be ensured provided that $D$ is a *convex* domain (This is the content of Proposition 2.13). Under this additional assumption, we can argue as before to show that the extremum of the first eigenfunction of $D^+$ lies on its Neumann boundary. Points of Interest: 1) It may be useful to keep in mind the scaling coupling as a tool (at least at the heuristic level… i.e. we can define a scaling coupling and move it to other domains via conformal maps provided we are wary of time-scaling effects) especially as this is a somewhat exotic coupling which I don’t believe has a direct analytic counterpart (though I would of course be happy to see one). 2) I don’t really see where in the argument we need to consider all of $U$ and $D$. It seems that we only need the convexity of $D^+$ (for time-scaling purposes) so we could consider just it provided it is conformally equivalent to $U^+$. This then would give information on the location of the extremum of the first eigenfunction of such a convex mixed-boundary domain. However, the paper http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CE4QFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.9.1765%26rep%3Drep1%26type%3Dpdf&ei=YK7eT_79C4iW2gXCn5XSDQ&usg=AFQjCNG1m-y4m9oCljJN3noohV9WE2pFnQ&sig2=G9PyOZ8KWH5Hnnx10MsfLA of Banuelos, Pang, and Pascu seems to argue such an extension and requires that the Dirichlet boundary and Neumann boundary meet at acute angles, so there is probably something I am overlooking. Comment by — June 18, 2012 @ 5:43 am 8. 
I seem to be unable to leave comments on the research thread. Is there an FAQ section I can look up to fix this? Comment by — June 22, 2012 @ 4:51 am • seems to be working again. [Huh, for some reason your comment got trapped in the spam filter, but I managed to retrieve it - T.] Comment by — June 22, 2012 @ 4:52 am • to clarify: I was able to post anonymously, but not after signing in. Comment by — June 22, 2012 @ 5:07 am 9. Since it may get lost in the shuffle: I’d made an error in the data I reported earlier today. Bartlomiej Siudeja very kindly identified the problem. I’ve subsequently updated the data file http://www.math.sfu.ca/~nigam/polymath-figures/dump-data.odt Apologies for the error and any confusion caused, and thanks to Bartlomiej for catching it. Comment by — June 23, 2012 @ 6:13 am 10. For some reason I cannot post anything to research thread. Comment by Bartlomiej Siudeja — June 25, 2012 @ 5:39 pm • Hmm, your comments got caught by the spam filter (presumably because of the link). Sometimes it has false positives. Nilam had a similar issue; I will try to upgrade both of your user statuses to try to get past the filter. Comment by — June 25, 2012 @ 5:46 pm 11. I am a bit behind in all that’s been done with regards to the rigorous numerics argument, but I am confused on one point: As I understand it, the goal is to get C^0 continuity of the eigenfunction with respect to the domain in order to get control over where the extrema of the eigenfunction go as we perturb the domain. And for the moment, all we have is continuity with respect to the L^2 and H^1 norms. But the paper of Banuelos and Pang gives a C^0 continuity result. So why can’t we apply their result as it stands? Comment by — June 29, 2012 @ 11:55 pm • The Banuelos-Pang paper does in principle give explicit C^0 bounds, but they depend on bounds on the heat kernel on the triangles, which are presumably in the literature but it would take a fair amount of effort to make all the constants explicit (and I would imagine that the final constants would be terrible, making it much harder to use them for numerics as one would have to use an enormous number of reference triangles). My belief is that by working with the explicit nature of the triangular domains one can get better constants. Also, there is a possibility that we may also get explicit C^1 or even C^2 bounds as well, which would also be helpful in locating extrema, though it is probably going to be simplest to try for C^0 bounds first. Comment by — June 30, 2012 @ 12:09 am • I see, thanks! Comment by — June 30, 2012 @ 4:02 am 12. Terry, I have looked over your recent notes “Stability Theory for Neumann Eigenfunctions” and had a few comments/questions: 1) (Typo) In Lemma 1.2 you talk about P and Q but then call them X and Y. 2) (Typo?) On page 6 in the equation after the line “From the orthonormality (2.5) and the Bessel inequality, we conclude that” shouldn’t it be $e^{-4\omega}$ in the term on the left? 3) (Question) At the bottom of page 6, why is it that when we differentiate $\frac{d^2}{dt^2}\int_H e^{2\omega}$ we don’t have an extra term of the form $\int_H 2 \omega'' e^{2\omega}$? 4) (General Question) It seems that getting a bound on $|\dot{u_2}|_{L^\infty}$ will control how much the values of $u_2$ change… but I don’t see how that would immediately give control of the *location* of the extrema. Is the argument to appeal to the fact that “if the extrema is near the corner it must be at the corner”? 
Comment by — July 15, 2012 @ 8:20 pm • (1) thanks for the correction, it will appear in the next revision of the notes. (2) I think the factor is $e^{-2\omega} = (e^{-2\omega})^2 \times e^{2\omega}$, the extra $e^{2\omega}$ factor coming from the weight in (2.5). (3) $\omega$ depends linearly on time (assuming $\alpha,\beta$ vary linearly in time) and so the second derivative is zero. (4) Yes, one also needs to separately exclude extrema occuring near the corner in addition to L^infty variation bounds to completely control all extrema, this is the rationale behind my previous comment at Comment 11. Unfortunately I am beginning to be a bit worried that the bounds there are a bit weak and will lead to requiring the mesh density to be huge… Comment by — July 15, 2012 @ 9:52 pm • Is your concern about the mesh spacing required in parameter space, or that required for the computation on any given triangle? In other words, are you concerned about the bounds on the variation, or on the location of the extrema near the corners? I should be ready to post some numerical results by Wednesday on computing the bounds you calculated on the variation, as well as the (numerically computed) variation. Based on the preliminary results, no surprises so far. Comment by — July 16, 2012 @ 2:20 am • I guess both, because the net error in our L^infty control on an eigenfunction on a triangle will depend on both (a) the distance in parameter space to the nearest reference triangle, and (b) the accuracy of our eigenfunction approximation in the reference triangle, as well as (c) the spectral gap bounds. Then this has to be compared against (d) our numerical bounds on how far away from the extrema the numerical eigenfunctions are far away from the extremal vertices, and (e) the neighbourhoods around the extremal vertices for which we may rigorously exclude extrema. The hope is that (c), (d), (e) are strong enough that we can use numerically feasible mesh sizes for (a) and (b). Comment by — July 16, 2012 @ 3:44 am • Agreed. A while ago I was trying to get a handle on (d) numerically. My approach was to use an overlapping Schwarz iteration. The idea is to iterate on eigenvalue problems on subdomains of the triangle – I partitioned the triangle into regions which are wedges and a full circle. My hope was the tools of Siudeja and Banuelos-Pang would help in establishing the method converged. Unfortunately, I was unable to rigorously prove this. But I think the approach could work, using tools from PDE analysis. http://www.math.sfu.ca/~nigam/polymath-figures/Schwarz.pdf Comment by — July 16, 2012 @ 7:51 pm • Well, perhaps we don’t need a rigorous guarantee that the numerical algorithm converges, but instead go with a numerical recipe that in practice gives a numerical eigenfunction $\tilde u_2$ and numerical eigenvalue $\tilde \lambda_2$ with very good residual $\| -\Delta \tilde u_2 - \tilde \lambda_2 \tilde u_2 \|_{L^2}$, and then do some a posteriori analysis to rigorously conclude that the error is small. 
Indeed, if one has a demonstrable gap between the numerical eigenvalue $\tilde \lambda_2$ and the true third eigenvalue $\lambda_3$, then some simple playing around with eigenvalue decomposition (computing the inner product of $-\Delta \tilde u_2 - \tilde \lambda_2 \tilde u_2$ against other true eigenfunctions $u_k$ via integration by parts) shows that the residual controls the error $\| \tilde u_2 - u_2 \|_{H^2}$ in H^2 norm (and hence in L^infty norm, by the Sobolev inequality in my notes), at least if one can ensure that $\tilde u_2$ obeys the Neumann condition exactly. Comment by — July 16, 2012 @ 9:53 pm • The challenge with this is in how I compute the residual. Numerically, my strategy was to approximate $u_i$ (in the notes) by finite linear combinations of Fourier-Bessel functions. The trace of the approximations on the arcs can be written down readily; the application of the Laplacian on the sub-domains is also OK. However, to compute the L2 inner products, I used a quadrature. This is how I assemble the matrices to get the approximate eigenfunctions. Also, the conditioning of the eigenvalue problems wasn't great. Since one is looking at minimizing the residual in $L^2$, the all-critical traces of $u_i^n \frac{\partial u_0^n}{\partial \nu}$ on the common interfaces play a role, but not as important a role as one may want. While I want to believe this method gives a good approximation by looking at the numerical residual, I am not 100% convinced. Once I got the approximate eigenfunction by this method, I still have to locate the extrema. I do this by interpolating the function by piecewise linears onto a mesh of the triangle, and then doing a search. This can be improved. Let me add in some of the details of the implementation in the notes. Perhaps some collective trouble-shooting will help. Using the finite element method, the quadratures are exact (since I use piecewise polynomials). The search proceeds as above. Since I'm using a quasi-regular discretization, both the Galerkin errors and the Lanczos errors are well-understood and the methods are provably convergent. This is a reliable, if not super-fast, work-horse. Comment by — July 16, 2012 @ 10:13 pm • I've updated http://www.math.sfu.ca/~nigam/polymath-figures/Schwarz.pdf to include the implementation details. As a numerical method, this is OK (not great because of conditioning issues!) Comment by — July 16, 2012 @ 11:21 pm • Here's one possibility. You're dividing the triangle into three sectors and a disk, and on each of these regions one can create an exact eigenfunction with Neumann conditions on the original boundary (and some garbage on the new boundaries). Now with some explicit C^2 partition of unity, one can splice together these exact eigenfunctions on the subregions into an approximate eigenfunction on the whole triangle, and the residual will be controlled by the H^1 error between the exact eigenfunctions on the intersection between the subregions. To illustrate what I mean by this, let us for simplicity assume that the triangle $\Omega$ is covered by just two subregions $\Omega_1, \Omega_2$ instead of four. Let $u_1, u_2$ be exact eigenfunctions on $\Omega_1,\Omega_2$ respectively with the same eigenvalue $\lambda$, and obey the Neumann condition exactly on $\partial \Omega \cap \Omega_1$ and $\partial \Omega \cap \Omega_2$ respectively.
We then glue these together to create a function $u := \eta u_1 + (1-\eta) u_2$ on the entire triangle $\Omega$, where $\eta$ is a C^2 bump function that equals 1 outside of $\Omega_2$ and equals 0 outside of $\Omega_1$. Then we may compute $-\Delta u = \lambda u + 2 \nabla \eta \cdot \nabla (u_1-u_2) + \Delta \eta \cdot (u_1-u_2)$. Also u obeys the Neumann conditions exactly. Thus if u_1 and u_2 are close in H^1 norm on the common domain $\Omega_1 \cap \Omega_2$, the global residual $\| -\Delta u - \lambda u \|_{L^2}$ will be small. One advantage of this approach is that we don’t need to care too much about the boundary traces of u_1,u_2. But one does need a certain margin of overlap between the subregions so that the cutoffs $\eta$ lie in C^2 with reasonable bounds, it’s not enough for them to be adjacent. Comment by — July 18, 2012 @ 7:25 pm • Yes, this is certainly one way to analyze the overlapping strategy: the partitions of unity will assure convergence of the Schwartz iteration in one step. In the set-up I tried using numerically, the domains have non-trivial overlap. Solving boundary value problems this way would ensure nice convergence of the iteration. My misgiving came from the conditioning of the eigenvalue problems on the sub-domains; since the computations were in floating-point arithmetic, poor conditioning is worrying. My thinking was that since the actual eigenfunction is C^2 in the interior, the non-standard eigenvalue problem for the disk will have smooth coefficients. My rationale for not using the partition of unity was that the approximation functions I used in each region satisfy \$-\Delta u = \Lambda u\$ exactly (but potentially not the boundary data). However, for the purpose of an analytical treatment, the partition of unity strategy may be easier to work with. Comment by — July 18, 2012 @ 7:42 pm • Thanks for the clarifications! Comment by — July 16, 2012 @ 3:20 am 13. I’ve posted something twice on the research thread- my first attempt did not show up after refreshing the page. Would it be possible to remove the duplicate post? I’m not sure how to do this. [Done. - T.] Comment by — July 21, 2012 @ 4:44 am 14. Just a short note to say that I’m still interested in this problem, but am preparing for a two-week vacation starting on Saturday and so unfortunately have had to prioritise my time. But I will definitely return to this project afterwards… Comment by — August 9, 2012 @ 3:07 am • Apologies about the delay from my end- I’ve been writing up some notes to summarize the numerical strategy, include some validation experiments, and discuss the results so far. The conjecture has been (numerically) examined and (numerically) verified on a fine, non-uniform, grid in parameter space away from the equilateral triangle. The grid spacing is chosen so that the variation of the eigenfunctions is controlled to 0.001. At each of these points, we have numerical upper and lower bounds on the second eigenvalue; these bounds provide an interval of width 1e-7 around the true eigenvalue. The eigenfunctions are computed so the Ritz residual is under 1e-11. I have *something* coded up which uses the bounds near the equilateral triangle, but am not confident enough about these yet to present them. Comment by — August 9, 2012 @ 4:30 am 15. 
I just wanted to say that Bartlomiej and I are at a stochastic analysis conference at the moment and we are discussing ideas for the problem (the discussion has been restricted to analytic approaches though) along with some other interested people (Mihai Pascu, Rodrigo Banuelos, Chris Burdzy, etc) at the conference. My internet access is limited (I am on a public computer at the moment) but I/we will try to write a summary of our discussion after the conference! Comment by — September 12, 2012 @ 9:18 am • Nice! An analytic approach would be great. Comment by — September 12, 2012 @ 4:54 pm
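As an illustration of the a posteriori idea discussed in the comments above (a small residual plus a spectral gap controls the eigenfunction error), here is a minimal numerical sketch. The function name and the numbers are hypothetical placeholders, not values from the project, and the bound shown is only the basic L^2 estimate, not the full H^2/L^infty chain described in the notes.

```python
def l2_error_bound(residual_l2, lam_tilde, other_eigenvalues):
    """Basic a posteriori bound: expand the normalized numerical eigenfunction
    u~ in the true orthonormal eigenbasis, u~ = sum_k c_k u_k.  The residual
    r = -Lap(u~) - lam~ * u~ then satisfies ||r||^2 = sum_k c_k^2 (lam_k - lam~)^2,
    so the mass of u~ orthogonal to the target eigenspace is at most
    ||r|| / gap, where gap is the distance from lam~ to every OTHER eigenvalue."""
    gap = min(abs(lam - lam_tilde) for lam in other_eigenvalues)
    return residual_l2 / gap

# Hypothetical numbers in the spirit of the thread: residual ~1e-11, and
# rigorous enclosures keeping lambda_1 = 0 and lambda_3 away from lambda~_2.
print(l2_error_bound(1e-11, lam_tilde=5.16, other_eigenvalues=[0.0, 9.87]))
```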
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 133, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9448615908622742, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/81631-prove-set-real-numbers-not-countable-set-print.html
# prove that the set of real numbers is not a countable set • March 31st 2009, 08:05 AM rosebud prove that the set of real numbers is not a countable set 1. Prove that R is equivalent to (0,1) and (0,1) is equivalent to [0,1]. Conclude that R is equivalent to R. Now prove that [0,1] is uncountable. Consider an arbitrary function f: J--> [0,1] and prove that im f is not equal to [0,1]. Thus, there is no function from J onto [0,1], and so [0,1] is uncountable. Suppose that T is a function from J to [0,1]. 2. Show that there are sequences {a_n}from infinity when n=1 and {b_n} from infinity when n=1 such that [a_1, b_1] is a subset [0,1], and for each n E J, [a_n+1 , b_n+1] is a subset [a_n, b_n] and T(n) is not an element of [a_n, b_n] 3. Show that {a_n} from infinity when n=1 converges; call the limit A. 4. Prove that A E [a_n, b_n] for each n E J. Conclude that A is not an element im T. 5. Finish the proof that [0,1] is uncountable. • March 31st 2009, 04:01 PM HallsofIvy Quote: Originally Posted by rosebud 1. Prove that R is equivalent to (0,1) and (0,1) is equivalent to [0,1]. Conclude that R is equivalent to R. surely that is not what the problem says! That R is equivalent to [0,1] would follow from this but "R is equivalent to R" is trivial. To prove that R is equivalent to (0,1), look at the function f(x)= $\frac{1}{x(x-1)}$. To prove that (0,1) is equivalent to [0,1], write the rational numbers in (0, 1) in an ordering $\{r_1, r_2, r_3, ...\}$ (which is possible because the rational numbers are countable) and define f(x)= x if x is irrational, $f(r_1)= 0, f(r_2)= 1, f(r_n)= r_{n-1}$ for n>2. Quote: Now prove that [0,1] is uncountable. Consider an arbitrary function f: J--> [0,1] and prove that im f is not equal to [0,1]. Thus, there is no function from J onto [0,1], and so [0,1] is uncountable. Suppose that T is a function from J to [0,1]. 2. Show that there are sequences {a_n}from infinity when n=1 and {b_n} from infinity when n=1 such that [a_1, b_1] is a subset [0,1], and for each n E J, [a_n+1 , b_n+1] is a subset [a_n, b_n] and T(n) is not an element of [a_n, b_n] 3. Show that {a_n} from infinity when n=1 converges; call the limit A. 4. Prove that A E [a_n, b_n] for each n E J. Conclude that A is not an element im T. I have no idea what "from infinity" means here. Quote: 5. Finish the proof that [0,1] is uncountable.
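A sketch of how step 2 in the exercise above is usually carried out, using the "closed thirds" trick; this is one standard construction and not necessarily the one intended by the textbook.

```latex
% One standard way to build the nested intervals in step 2: split the current
% interval into its three closed thirds; the single point T(n+1) meets at most
% two of them, so at least one third avoids it.
Choose $[a_1,b_1]$ to be a closed third of $[0,1]$ with $T(1)\notin[a_1,b_1]$.
Given $[a_n,b_n]$ with $T(n)\notin[a_n,b_n]$, cover it by its three closed
thirds and let $[a_{n+1},b_{n+1}]$ be one that does not contain $T(n+1)$.
Then $(a_n)$ is nondecreasing and bounded above by $b_1$, so it converges
(step 3); its limit $A$ lies in every $[a_n,b_n]$, hence $A\neq T(n)$ for all
$n$ (step 4), so $T$ is not onto $[0,1]$ and $[0,1]$ is uncountable (step 5).
```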
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9281986951828003, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/3160-auditory-signal.html
# Thread: 1. ## auditory signal The ability to recognize an auditory "signal" presented against a background of auditory "noise" is important to a particular job. A company tests for this ability by presenting 100 trials of auditory noise. During each trial the subject must state whether or not a signal has also been presented. A training program has been used to try to improve performance in this test. A random sample of 50 persons, who have had the training program, are administered the auditory test and they make an average of 32.2 errors. a) The company's records indicated that in the past the mean number of errors has been 32.6 with a variance of 2.25. Does the training program improve test performance? (Use alpha less than or equal to 0.05) b) With alpha less than or equal to 0.02, form a confidence interval to estimate the mean number of errors that would be made by all subjects who undergo the training program. [31.71 to 32.69] 2. Originally Posted by aptiva The ability to recognize an auditory "signal" presented against a background of auditory "noise" is important to a particular job. A company tests for this ability by presenting 100 trials of auditory noise. During each trial the subject must state whether or not a signal has also been presented. A training program has been used to try to improve performance in this test. A random sample of 50 persons, who have had the training program, are administered the auditory test and they make an average of 32.2 errors. a) The company's records indicated that in the past the mean number of errors has been 32.6 with a variance of 2.25. Does the training program improve test performance? (Use alpha less than or equal to 0.05) The null hypothesis $H_0$ is that there is no change in the distribution of the number of errors. The critical z-score that a normally distributed RV exceeds with probability 0.95 is -1.64521. (a test with this critical value will reject the null hypothesis if it were true 0.05 of the time, which is an alpha of 0.05) The mean number of errors in a sample of size 50 persons under $H_0$ is 32.6, and the SD of this is $\sqrt{2.25/50}\approx 0.212132$. Therefore we would reject the null hypothesis if the experiment had a mean number of errors $< 32.6-0.212132*1.64521=32.251$ But the results gave a mean number of errors of 32.2 which is less than 32.251, so we reject the null hypothesis using this test. RonL 3. Originally Posted by aptiva The ability to recognize an auditory "signal" presented against a background of auditory "noise" is important to a particular job. A company tests for this ability by presenting 100 trials of auditory noise. During each trial the subject must state whether or not a signal has also been presented. A training program has been used to try to improve performance in this test. A random sample of 50 persons, who have had the training program, are administered the auditory test and they make an average of 32.2 errors. b) With alpha less than or equal to 0.02, form a confidence interval to estimate the mean number of errors that would be made by all subjects who undergo the training program. [31.71 to 32.69] For the given level of alpha the confidence interval in z-score will be +/- 2.327. Therefore the interval will be [32.2-2.327*0.2121, 32.2+2.327*0.2121]=[31.71,32.69] RonL
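A short computational check of the two calculations above; this is just a sketch and assumes scipy is available for the normal quantiles.

```python
from math import sqrt
from scipy.stats import norm

mu0, var, n, xbar = 32.6, 2.25, 50, 32.2
se = sqrt(var / n)                       # ~0.2121

# (a) one-sided z-test of H0: mu = 32.6 against H1: mu < 32.6 at alpha = 0.05
z_crit = norm.ppf(0.05)                  # ~ -1.645
cutoff = mu0 + z_crit * se               # ~ 32.251
print("reject H0:", xbar < cutoff)       # True, matching the reply above

# (b) 98% confidence interval (alpha = 0.02), centred on the sample mean
z = norm.ppf(1 - 0.02 / 2)               # ~ 2.326
print(xbar - z * se, xbar + z * se)      # ~ (31.71, 32.69)
```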
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951018214225769, "perplexity_flag": "middle"}
http://www.cfd-online.com/W/index.php?title=Turbulence_length_scale&diff=9084&oldid=9083
# Turbulence length scale The turbulence length scale, $l$, is a physical quantity describing the size of the large energy containing eddies in a turbulent flow. The turbulent length scale is often used to estimate the turbulent properties on the inlets of a CFD simulation. Since the turbulent length scale is a quantity which is intuitively easy to relate to the physical size of the problem it is easy to guess a reasonable value of the turbulent length scale. The turbulent length scale should normally not be larger than the dimension of the problem, since that would mean that the turbulent eddies are larger than the problem size. In the k-epsilon model the turbulent length scale can be computed as: $l = C_\mu \, \frac{k^\frac{3}{2}}{\epsilon}$ $C_\mu$ is a model constant which in the standard version of the k-epsilon model has a value of 0.09. ## Estimating the turbulence length scale It is common to set the turbulence length scale to a certain percentage of a typical dimension of the problem. For example, at the inlet to a turbine stage a typical turbulence length scale could be say 5% of the channel height. In grid-generated turbulence the turbulence length scale is often set to something close to the size of the grid bars. ### Fully developed pipe flow In pipe flows the turbulence length scale can be estimated from the hydraulic diameter. In fully developed pipe flow the turbulence length scale is 7% of the hydraulic diameter (in the case of a circular pipe the hydraulic diameter is the same as the diameter of the pipe). Hence: $l = 0.07 \; d_h$ Where $d_h$ is the hydraulic diameter. ### Wall-bounded inlet flows When the inlet flow is bounded by walls with turbulent boundary layers, the turbulence length scale can be estimated (approximately) from the inlet boundary layer thickness. Set $l$ to half the inlet boundary layer thickness.
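The estimates above are straightforward to code; the following sketch simply restates the formulas on this page (the 0.09 and 0.07 factors come from the article, while the function names are my own).

```python
def length_scale_k_epsilon(k, epsilon, c_mu=0.09):
    """l = C_mu * k^(3/2) / epsilon, the k-epsilon model relation given above."""
    return c_mu * k ** 1.5 / epsilon

def length_scale_pipe(d_h):
    """Fully developed pipe flow estimate: l = 0.07 * hydraulic diameter."""
    return 0.07 * d_h

def length_scale_boundary_layer(delta):
    """Wall-bounded inlet estimate: half the inlet boundary layer thickness."""
    return 0.5 * delta

print(length_scale_pipe(0.1))  # e.g. a 0.1 m diameter pipe gives l = 0.007 m
```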
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.876385509967804, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/143776-normal-equations.html
# Thread: 1. ## normal equations Hi, for this model, $y_{ijkl} = \mu + T_i + B_j + S_k + TB_{ij} + e_{ijkl},$ $i = 1,2,3; j = 1..3; k = 1..2; l=1..5$ am i supposed to have 1 + 3 + 3 + 2 + 5 = 14 normal equations? or am i just meant to have 1 + 3 + 3 + 2 = 9 normal equations? 2. I use matrices $\hat\beta=\left(X^t X\right)^{-1}X^tY$ 3. Originally Posted by matheagle I use matrices $\hat\beta=\left(X^t X\right)^{-1}X^tY$ Thanks but im wondering actually how many normal equations am i supposed to find? Because the way we do it, is we find the error sum of squares of the model and then differentiate it wrt each of the terms i.e. $\mu, T_1, T_2, T_3, B_1$ etc. so so far i have 9 normal equations, but I'm wondering if for the interaction term $TB_{ij}$ which now i realise i didnt put into my question, (and i will after this post) do i differentiate it with respect to $TB_{11}, TB_{12}, TB_{13}, TB_{21}$ etc. ?? 4. You should have one equation per parameter that you're differentiating with respect to. 5. Originally Posted by matheagle You should have one equation per parameter that you're differentiating with respect to. Thanks but like I'm really stupid as in when i just read your post, i dont get it.. so like when you say parameter are you refering to the i, j, k and l's? or are you referring to T, B, S, TBs? 6. Originally Posted by Dgphru Thanks but like I'm really stupid as in when i just read your post, i dont get it.. so like when you say parameter are you refering to the i, j, k and l's? or are you referring to T, B, S, TBs? The mu's tau's beta's... are the unknown parameters and you need to differentiate (yuk, the projection matrix is so much easier) wrt each of these and solve all those equations. 7. Originally Posted by matheagle The mu's tau's beta's... are the unknown parameters and you need to differentiate (yuk, the projection matrix is so much easier) wrt each of these and solve all those equations. Thank you again. So just confirming, this means that I should have 5 normal equations?
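One way to see the count matheagle is pointing at is to write down the over-parameterized design matrix for the model and note that the normal equations $X^tX\hat\beta = X^tY$ contain one equation per column of $X$, i.e. one per parameter: $\mu$, the three $T_i$, the three $B_j$, the two $S_k$, and the nine interactions $TB_{ij}$. The sketch below uses my own dummy-variable coding, not anything from the thread.

```python
import numpy as np
from itertools import product

# One column per parameter: mu, T_1..T_3, B_1..B_3, S_1..S_2, TB_11..TB_33.
cells = list(product(range(3), range(3), range(2), range(5)))  # (i, j, k, l)
cols = 1 + 3 + 3 + 2 + 9
X = np.zeros((len(cells), cols))
for row, (i, j, k, _) in enumerate(cells):
    X[row, 0] = 1                # mu
    X[row, 1 + i] = 1            # T_i
    X[row, 4 + j] = 1            # B_j
    X[row, 7 + k] = 1            # S_k
    X[row, 9 + 3 * i + j] = 1    # TB_ij

# The normal equations are X'X b = X'Y: one equation per column of X.
print(X.shape[1], "normal equations")   # 18
```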
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9296230673789978, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/65378-inequality-prime-counting-function-print.html
# Inequality with the prime-counting function • December 17th 2008, 09:13 AM HTale Inequality with the prime-counting function Hi there guys, I hope I've posted this in the right section; I'm going through a paper, and there is one statement which is: $n^{\pi(n)} < 3^n$ for sufficiently large $n$, where $\pi(n)$ is the prime-counting function defined as the function counting the number of prime numbers less than or equal to $n$. I'm having real difficulty justifying this claim, and I'd be really grateful if someone could help me out. Thanks in advance, HTale • December 17th 2008, 11:07 AM Opalg Quote: Originally Posted by HTale I'm going through a paper, and there is one statement which is: $n^{\pi(n)} < 3^n$ for sufficiently large $n$, where $\pi(n)$ is the prime-counting function defined as the function counting the number of prime numbers less than or equal to $n$. Take logs, and it says $\pi(n)<\frac n{\ln n}\ln3$. But ln(3)>1, so the result will follow from the prime number theorem.
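Spelling out the last step of the answer above (this is just an expansion of the same argument, not a new one):

```latex
% From the prime number theorem, \pi(n)\ln n / n \to 1.  Fix any c with
% 1 < c < \ln 3.  Then for all sufficiently large n,
\pi(n)\,\ln n \;\le\; c\,n \;<\; n\ln 3,
\qquad\text{so}\qquad
n^{\pi(n)} \;=\; e^{\pi(n)\ln n} \;<\; e^{\,n\ln 3} \;=\; 3^{\,n}.
```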
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419031143188477, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/43958/cyclic-groups-whose-every-non-identity-member-is-a-generator
# Cyclic groups whose every non-identity member is a generator Let G be a cyclic group. There's a theorem which states that if |G| is a prime, then every non-identity member of G is a generator. What about a cyclic group whose order is not prime: Is there such a group whose every non-identity member is a generator? Are there other necessary/sufficient conditions regarding groups whose every non-identity member is a generator? (Beyond primality of |G|.) - 5 The statement should be that every non-identity element is a generator, since the identity element of a group is a generator if and only if the group is trivial. – Zev Chonoles♦ Jun 7 '11 at 21:39 @Zev: Thanks, corrected. – Sadeq Dousti Jun 8 '11 at 1:54 ## 1 Answer To answer your first question: no, a cyclic group whose order is not prime must contain non-identity (thanks Zev!) elements that are not generators. Let $G$ be a cyclic group, let $g$ be any generator of $G$, and let $n$ be the order of $G$. Then for any $d$ that divides $n$, the subgroup generated by $g^d$ is not all of $G$ (this subgroup has $n/d$ elements, but $G$ has $n$ elements). To answer your second question: for every non-identity element of $G$ to be a generator, $G$ must be a cyclic group with prime order. If $G$ weren't a cyclic group, then $G$ wouldn't have any generators at all (the definition of "cyclic group" is "a group that can be generated by a single element"), and the answer to your first question shows that the order of $G$ must be prime. - 4 – t.b. Jun 7 '11 at 21:46 1 +1. Great answer, simple and to the point! – Sadeq Dousti Jun 8 '11 at 1:56
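A quick computational illustration of the accepted answer for the additive groups $\mathbb{Z}/n\mathbb{Z}$ (a small sketch, not part of the original thread): an element $a$ generates $\mathbb{Z}/n\mathbb{Z}$ exactly when $\gcd(a,n)=1$, so only prime $n$ make every non-identity element a generator.

```python
from math import gcd

def generators(n):
    """Generators of the additive cyclic group Z/nZ."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

print(generators(7))   # [1, 2, 3, 4, 5, 6] -- prime order: every non-identity element
print(generators(8))   # [1, 3, 5, 7]       -- composite order: 2, 4 and 6 fail
```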
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285479187965393, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1577/is-it-possible-to-create-an-easy-to-use-encryption-decryption-method-that-will-n?answertab=votes
# Is it possible to create an easy to use encryption/decryption method that will never be comprimised? In the comments of the question "Why programming languages don't provide simple encryption methods?" the following statement was made: A well thought out, tested and understood standard that has undergone extensive review by the crypto community has a much better [chance] of avoiding compromise than a system designed by a single engineer using a fairly low level library. to me such a system would have the following requirements: • Encryption would require nothing more than a string of text to encrypt and an easily programmatic producible "key". • Decryption would require nothing more than an easily programmatic producible "key". • The result would not ever be able to be determined with out access to the key even given a reasonably huge finite (IE more than we can ever expect to have available) amount of computing power. • No method of attack would ever trivialize the determining the key or the source text used for the encryption. My opinion is that the nature of encryption is that it is impossible for a standard like this to work given an infinite amount of computing power. We may be able to do this for the computing power of today but eventually given enough power it will be trivial to decrypt any scheme if we know how the scheme works and it only requires a single key. Is such a scheme possible? - I agree with your analysis that given infinite computational resources such a scheme, outside of a OTP, would likely be compromised. Generally success of a crypto-system is defined by showing that the best possible attack is brute forcing the key-space. I think the more interesting question is if such a scheme is possible given key-space bounded compute time (a more typical definition of security). The most interesting question, to me, is how can we increase our trust in crypto-systems given that non-cryptographically trained engineers will be (mis)using them (fool proof security). – Ethan Heilman Jan 6 '12 at 20:10 @EthanHeilman - The problem is computing power is continually increasing and new processor types like GPU's create new functionality that basically trivializes some encryption cracking. So any standard and secure encryption that could be built in would need to be able to withstand brute force. – Chad Jan 6 '12 at 20:54 I think we can set the bar pretty high in terms of brute force. No one is concerned that AES will be broken due to brute forcing the key. In fact it is quite easy to create a crypto-system with a key so large a computer the size of the universe couldn't brute force it. For instance using all the atoms in the universe ($10^82$ atoms) as computers capable of computing 1 trillion keys a second, one could brute force roughly 2^314 keys a second. To brute force a 512-bit key would take roughly $10^{52}$ years (far far longer than the lifetime of the universe). – Ethan Heilman Jan 6 '12 at 21:23 @EthanHeilman - I am confused you indicated that the Rijndael was too complex to be considered simple in the other thread. – Chad Jan 6 '12 at 21:31 I have no problem with Rijndael (AES) per se, what I have a problem with is that the library is expecting the engineer to turn a secure block-cipher (AES) into a secure crypto-system (AES is fine but it needs all the other stuff: padding, authentication, IV generation, a chaining mode). The default crypto libraries that an engineer encounters should operate at the level of crypto-systems not at the level of primitives. 
For example bcrypt does a decent job of this with hashing passwords (with some reservations). The default interface is BCrypt::Engine.hash_secret(password, password_salt). – Ethan Heilman Jan 6 '12 at 21:42 ## 2 Answers If the key is: • generated with an unpredictable truly random uniform generator (not a pseudo-random generator); • as long as the data to encrypt; • used for only one message ever; then this is the One-Time Pad model, and you can encrypt data by a simple bitwise XOR (no need for an explicit function, just XOR). Otherwise, there is no solution which resists attackers with infinite computational abilities. Shannon's thesis is all about that. - Basically, if there is sufficient information in the ciphertext, then an attacker can break it. If there is insufficient information, then no. For example, I picked a number from 1 to 10 and encrypted it by adding a key (from 0 to 100) to it. I got 41. You have infinite computing power. What number did I pick? – David Schwartz Jan 9 '12 at 23:37 @DavidSchwartz - Maybe not but I know your key was between 0 and 41... and I know that the encrypted number was <= 42. I have reduced the pool of potentials by ~59% – Chad Jan 10 '12 at 13:53 Two words: Vernam Cipher – user1364 Jan 11 '12 at 20:09 Is such a scheme possible? Theoretically "Yes", but only under a certain, single condition... It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is 1. truly random, 2. never reused, 3. kept secret from all possible attackers, 4. and of equal or greater length than the message. Most ciphers, apart from the "one-time-pad", can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to make use of the cipher. In such cases, effective security could be achieved if it is proven that the effort required (example: the "work factor", as Claude Shannon defines it) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. As no such proof has been found to date related to a "one-time-pad", the "one-time-pad" remains the only theoretically unbreakable cipher. Read again: ...theoretically unbreakable... UPDATE As this came up in the comments... when it comes to creating a one-time pad, there are several hardware solutions and software implementations that would satisfy the "truly random one-time pad" definition. Some initial info can be found at http://en.wikipedia.org/wiki/One-time_pad#True_randomness , but if you really want to dive into this a bit more, you'll probably want to check on "Randomness Recommendations for Security", which is available at http://www.ietf.org/rfc/rfc1750.txt Oh, and while I'm updating my answer: the (as OP calls it) "scheme" we're talking about is commonly known as Vernam Cipher, just in case you want to cross-check my answer using search engines. ;) RC4 is an example of a Vernam cipher that is widely used on the Internet. More information about the Vernam Cipher, it's history, it's inventor and related patents can be found at http://en.wikipedia.org/wiki/Gilbert_Vernam - Actually, there's an infinite set of ciphers that are cannot be broken, but they all share the 4 points you list. 
Trivially, for any bijective function f, `f(OTP(x))` is such a cipher. – MSalters Jan 9 '12 at 14:07 I agree with your answer but am not able to accept it because we do not have a method to generate a true random one time pad in a computer program. – Chad Jan 9 '12 at 14:25 @Chad : to accept an answer, you need to click the "checkmark" next to the answer. ;) – user1364 Jan 10 '12 at 0:13 @e-sushi I know how to accept it. I am saying that this does not really meet the criteria since we can not programmatically generate a truly random one time pad. – Chad Jan 10 '12 at 13:43 @Chad : of course you can. But you're correct that "ye average programming language" will not satisfy as they mostly use pseudo-randomizers. Yet, using the correct means, it can be done. Remember that a random pad can be generated from any piece of random data. In fact, you could even grab random data by fetching cloud movements from satellite images. On a more "regular" level, there are hardware random number generators and other things. Check the update in my answer. – user1364 Jan 11 '12 at 20:00
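To make the one-time-pad discussion concrete, here is a minimal sketch of Vernam-style XOR encryption. As the comments point out, `os.urandom` is a cryptographically strong pseudorandom source rather than the information-theoretically random key material Shannon's proof requires, so this only illustrates the mechanics, not the unconditional security claim.

```python
import os

def otp_encrypt(plaintext: bytes):
    key = os.urandom(len(plaintext))  # key as long as the message, used once
    ct = bytes(p ^ k for p, k in zip(plaintext, key))
    return ct, key

def otp_decrypt(ct: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, key))

ct, key = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, key) == b"attack at dawn"
```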
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313540458679199, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/80586-funtion-3-uniform-continuous-random-variables.html
# Thread: 1. ## A Funtion of 3 uniform, continuous random variables The question is: Given X,Y,Z are uniformly distributed on (0,1), and are independant, prove that (XY)^Z is also uniform on (0,1) I know the PDF of XY. My main question is, how do I work out the PDF of W^Z, where W has pdf -ln(t) and Z has pdf of 1? (Or, even better, how do I do it with general functions?). Thanks for any help. 2. I'd probably do a 2-2 transformation and then integrate out the dummy variable. 3. But... how? How do I know what I'm integrating, how do I know how to combine the PDFs in the right way, how do I know what the limits are? 4. I've been busy with grading exams, but here's a thought. Try the CDF of your animal. Now, I didn't verify your density of W nor did I draw what I'm proposing. But here it is... $P(W^Z\le a)=P(W\le a^{1/z} )=\int_0^1\int_0^{a^{1/z}} f(w,z)dwdz$ $=\int_0^1\int_0^{a^{1/z}} f(w)f(z)dwdz =\int_0^1\int_0^{a^{1/z}} f(w)dwdz$. BUT you should draw this and make sure the bounds of integration are correct. Back to my grading... 5. I've just tried that. I consistently get as part of the answer the integral of e^(1/x). I even tried the question a different way using the same approach (Taking the ln of (xy)^z and considering zlnx as a random variable, lnx being exponentially distributed) and I still got the integral of e^(1/x) in my answer. We have most definitely never been taught anything about that function in our course, and nor have we ever integrated an infinite series. Something is either fundamentally wrong with what I'm doing or it's one of -those- questions which we're not actually supposed to be able to answer. Either way, I don't think I'll get anywhere with it, so I'll leave it. I now know how to attempt these types of questions though - so thanks for that 6. I think I did it. I was worried about a negative sign, but that's not a problem. I also have the density of W=XY as $-\ln w$ on (0,1). I then let $A=W^Z$ and $B=W$. Hence $Z={\ln A\over \ln B}$ and $W=B$. My Jacobian is $\biggl|{1\over A\ln B}\biggr|$. BUT 0<B<1, SO $\biggl| {1\over A\ln B} \biggr|= {-1\over A\ln B}$ or if you wish ${1\over A\ln B^{-1}}$. So, the $\ln B$'s cancel and I get ${1\over A}$ for the joint density. NOW, our region is 0<B<1 and $0<{\ln A\over \ln B}<1$. That translates to 0<B<A<1, since both numbers are in (0,1). Thus the density of A is $f_A(a)=\int_0^A {dB\over A} =1$ on 0<A<1. 7. That helps a lot - I have the question done now, thanks.
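A quick Monte Carlo sanity check of the result proved above; this sketch only gives numerical evidence, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x, y, z = rng.random(n), rng.random(n), rng.random(n)
a = (x * y) ** z

# If (XY)^Z is Uniform(0,1), the empirical CDF at t should be close to t.
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(t, round((a <= t).mean(), 4))
```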
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9653587937355042, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/236046/is-every-above-the-second-level-of-the-arithmetical-hierarchy-independent-of-p/236065
# Is every φ above the second level of the arithmetical hierarchy independent of PA? If I am not wrong, every $\Sigma_n$ (or $\Pi_n$ ) statement $\phi$ is equivalent to a statement that says that a given Turing machine halts (or doesn't halt) on input $C$ using a $\Sigma_{n-1}$-oracle. Then, because we do not have oracles, we shouldn't be able to prove any such statement with $n\geq 2$ within PA, even if $\phi$ is true in the standard N. If the answer to the question is yes, does that mean that any theory stronger than PA that proves a $\phi$ true in N (but independent of PA) equivalent to introducing axioms about oracles? Are then math theories (or formal systems) stronger than PA ontologically equivalent to educated guesses about oracles? Bonus question: Is the CH or any other axiom of set theory and beyond equivalent to some kind of transfinite halting question using some kind of "super" oracle? More distilled question: I agree that even if ϕ is independent from PA, ϕ∨¬ϕ is not, because it is a tautology. Then, for instance, if ϕ∨¬ϕ is Σ3, then ϕ∨¬ϕ is equivalent to state that a given Turing machine halts on input C using a Σ2-oracle (and that will be true). Now, does the fact that we were able to prove ϕ∨¬ϕ mean that the oracle is not necessary? or stating it in a different way, doest it mean that, even if ϕ∨¬ϕ is Σ3, and because it can be proved, it is also equivalent to state that the same Turing machine will halt on input C even without an oracle? - ## 1 Answer The statement about oracles is known as Post's theorem. You can certainly prove statements in PA of arbitrarily high quantifier complexity. For example, every instance of $\phi \lor \lnot \phi$ is provable in PA, regardless of the quantifier structure of $\phi$. Similarly, the induction scheme in PA contains formulas that are arbitrarily high in the arithmetical hierarchy. The continuum hypothesis is (equivalent to) a $\Sigma^2_1$ formula. Thus you can indeed view it as solved by the "super oracle" that is able to decide this kind of type 2 quantification, namely existential quantifiers over functions from $\omega^\omega$ to $\omega$ applied to formulas of second-order arithmetic with variables for such functions. But at that level we are very, very far from "computability" in the usual sense. - I am confused now (I mean, more than before): if we can prove $\phi \lor \lnot \phi$ for any $\phi$ (which I agree) what is the meaning Post's theorem? I mean, does it mean that there are oracles that we can "know", I mean solve a non-computable halting problem? – julian fernandez Nov 13 '12 at 19:27 Just because we can prove $\phi \lor \lnot \phi$ does not mean in general that we can prove $\phi$ nor that we can prove $\lnot \phi$. Post's theorem shows, among other things, that if we just look at the formulas that are $\Sigma^0_n$ for some $n$ then the oracle $\emptyset^{(n)}$ is able to compute the set of such formulas that are true in the standard model. But there is no way to compute that oracle by just searching through proofs. – Carl Mummert Nov 13 '12 at 20:16 Indeed for an independent $\Sigma^0_1$ sentence $\psi$, we will be able to prove $\psi \lor \lnot \psi$ but not able to prove $\psi$ itself nor prove its negation. So whether this sentence is true in the standard model - which is what the oracle $\emptyset'$ can tell us - we cannot discover it by searching through all the proofs. 
– Carl Mummert Nov 13 '12 at 20:19 I think I understand what you said, but I have a more specific question, which is too long for a comment, so I added it at the end of the original question. – julian fernandez Nov 13 '12 at 23:08 2 An example of a nontrivial statement provable in PA is this finite version of Ramsey's theorem: for every $k$ and $M$ there is an $N$ such that for every $k$-coloring of the natural numbers from $1$ to $N$ there are at least $M$ numbers in that range which receive the same color. That statement takes some work to express as a formula in the language of arithmetic, but just looking at the quantifiers in English shows it is at least $\Pi^0_4$. – Carl Mummert Nov 15 '12 at 16:18
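For reference, the form of Post's theorem invoked at the start of the answer above (a standard statement, quoted from memory rather than from a particular source):

```latex
% Post's theorem, in the form used above: for every n \ge 0 and B \subseteq \mathbb{N},
B \in \Sigma^0_{n+1} \iff B \text{ is recursively enumerable in } \emptyset^{(n)},
\qquad
B \in \Delta^0_{n+1} \iff B \le_T \emptyset^{(n)} .
```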
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414408802986145, "perplexity_flag": "head"}
http://medlibrary.org/medwiki/Boiling_point
Boiling point Welcome to MedLibrary.org. For best results, we recommend beginning with the navigation links at the top of the page, which can guide you through our collection of over 14,000 medication labels and package inserts. For additional information on other topics which are not covered by our database of medications, just enter your topic in the search box below: Boiling water The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid[1][2] and the liquid changes into a vapor. A liquid in a vacuum has a lower boiling point than when that liquid is at atmospheric pressure. A liquid at high-pressure has a higher boiling point than when that liquid is at atmospheric pressure. In other words, the boiling point of a liquid varies depending upon the surrounding environmental pressure. For a given pressure, different liquids boil at different temperatures. The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, 1 atmosphere.[3][4] At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point is now (as of 1982) defined by IUPAC as the temperature at which boiling occurs under a pressure of 1 bar.[5] The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure). Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. Saturation temperature and pressure A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing). Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition. If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied. The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Usually, boiling points are published with respect to atmospheric pressure (101.325 kilopascals or 1 atm). At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. 
The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point. If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the normal boiling point can be calculated by using the Clausius-Clapeyron equation thus: $T_B = \Bigg(\frac{\,R\,\ln(P_0)}{\Delta H_{vap}}+\frac{1}{T_0}\Bigg)^{-1}$ where:
$T_B$ = the normal boiling point, K
$R$ = the ideal gas constant, 8.314 J · K$^{-1}$ · mol$^{-1}$
$P_0$ = the vapor pressure at a given temperature, atm
$\Delta H_{vap}$ = the heat of vaporization of the liquid, J/mol
$T_0$ = the given temperature, K
$\ln$ = the natural logarithm to the base e
Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased so is saturation temperature. If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased. The boiling point of water is 100 °C (212 °F) at standard pressure. On top of Mount Everest, at 8,848 m (29,029 ft) elevation, the pressure is about 252 Torr (33.597 kPa)[6] and the boiling point of water is 71 °C (159.8 °F). The boiling point decreases 1 °C every 285 m of elevation, or 1 °F every 500 ft. There are two conventions regarding the standard boiling point of water: The normal boiling point is 99.97 degrees Celsius at a pressure of 1 atm (i.e., 101.325 kPa). Until 1982 this was also the standard boiling point of water, but the IUPAC now recommends a standard pressure of 1 bar (100 kPa).[7] At this slightly reduced pressure, the standard boiling point of water is 99.61 degrees Celsius.[8] Relation between the normal boiling point and the vapor pressure of liquids (Figure: A typical vapor pressure chart for various liquids.) The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid. The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids.[9] As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (-24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. Properties of the elements Further information: List of elements by boiling point The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point.[10] Boiling point as a reference property of a pure compound As can be seen from the above plot of the logarithm of the vapor pressure vs.
the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points. In general, compounds with ionic bonds have high normal boiling points, if they do not not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally—with other factors being equal—in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule. Making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area. Binary boiling point diagram of two hypothetical only weakly interacting components without an azeotrope Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase to eventually transform to a vapor phase. By comparison to boiling, a sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases such as with carbon dioxide at atmospheric pressure. For such compounds, a sublimation point is a temperature at which a solid turning directly into vapor has a vapor pressure equal to the external pressure. 
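As a worked example of the Clausius-Clapeyron rearrangement given earlier, the sketch below estimates the normal boiling point of water from its room-temperature vapor pressure. The input values are approximate textbook numbers chosen here for illustration (not taken from this article), and the constant-$\Delta H_{vap}$ assumption behind the formula explains why the result overshoots the experimental 373.15 K somewhat.

```python
from math import log

R = 8.314            # J/(mol K), ideal gas constant
dH_vap = 40_660.0    # J/mol, approximate enthalpy of vaporization of water
T0 = 298.15          # K (25 °C)
P0 = 23.76 / 760.0   # approximate vapor pressure of water at 25 °C, in atm

T_B = 1.0 / (R * log(P0) / dH_vap + 1.0 / T0)
print(T_B)           # roughly 378 K; the experimental value is 373.15 K
```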
Impurities and mixtures In the preceding section, boiling points of pure compounds were covered. Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds. The presence of non-volatile impurities such as salts or compounds of a volatility far lower than the main component compound decreases its mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water. In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures and thus boiling points and dew points of all the components in the mixture. The dew point is a temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is different from the composition of the liquid in most such cases. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases. See also • Boiling-point elevation • Critical point (thermodynamics) • Ebulliometer • Joback method (Estimation of normal boiling points from molecular structure) • Subcooling • Superheating • Trouton's constant References 1. David E. Goldberg (1988). 3,000 Solved Problems in Chemistry (1st ed.). McGraw-Hill. ISBN 0-07-023684-4. Section 17.43, page 321. 2. Louis Theodore, R. Ryan Dupont and Kumar Ganesan (Editors) (1999). Pollution Prevention: The Waste Management Approach to the 21st Century. CRC Press. ISBN 1-56670-495-2. Section 27, page 15. 3. General Chemistry Glossary, Purdue University website page. 4. Kevin R. Reel, R. M. Fikar, P. E. Dumas, Jay M. Templin, and Patricia Van Arnum (2006). AP Chemistry (REA) - The Best Test Prep for the Advanced Placement Exam (9th ed.). Research & Education Association. ISBN 0-7386-0221-3. Section 71, page 224. 5. Standard Pressure. IUPAC defines the "standard pressure" as being $10^5$ Pa (which amounts to 1 bar). 6. Perry, R.H. and Green, D.W. (Editors) (1997). Perry's Chemical Engineers' Handbook (7th ed.). McGraw-Hill. ISBN 0-07-049841-5. 7. Howard DeVoe (2000). Thermodynamics and Chemistry (1st ed.). Prentice-Hall. ISBN 0-02-328741-1. Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Boiling point", available in its original form here: http://en.wikipedia.org/w/index.php?title=Boiling_point
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8833723068237305, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/133891/kernel-of-a-homomorphism-from-a-free-group-into-mathbbz
# Kernel of a homomorphism from a free group into $\mathbb{Z}$.

Let $F$ be a non-abelian free group, that is, a free group of rank at least $2$, and let $\phi: F \rightarrow \mathbb{Z}$ be a nontrivial group homomorphism. How to prove that the kernel of $\phi$ is not finitely generated?

-

## 3 Answers

This follows since the rank of any nontrivial normal subgroup of infinite index in a nonabelian free group is infinite, and the kernel here is such a subgroup: it is normal, and it has infinite index because $F/\ker\phi$ is an infinite subgroup of $\mathbb{Z}$. One way to prove this is by covering space theory. An infinite-index normal subgroup corresponds to a regular cover of a wedge of circles in which every point has infinitely many preimages. Then, if the cover is not a tree, there is at least one loop, $L$. Acting on $L$ by the infinite group of deck transformations gives infinitely many copies of $L$, but some of them may intersect or even coincide. However, since each loop has finitely many edges, only finitely many copies of $L$ can intersect $L$. So we can produce infinitely many disjoint loops. Thus the graph has infinite rank.

- I think there might be a simpler proof, using elementary concepts. Anyway, thanks Jim Conant for the interesting answer. I am thinking about it. – rla Apr 19 '12 at 13:05

There is a useful result you could use to prove this. Let $F$ be a free group and $E \leq F$ with $|F:E| = \infty$. Suppose that $\exists \: \{1\} \neq N \unlhd F$ with $N \leq E$. Then the rank of $E$ is infinite. The proof is as follows and assumes familiarity with the theory of Schreier generators of subgroups of free groups. Let $F$ be free on $X$, let $U$ be a Schreier transversal of $E$ in $F$ and, for $g \in F$, denote the element in $U \cap Eg$ by $\overline{g}$. Let $1 \neq w = a_1 \cdots a_l \in N \leq E$, with $a_i \in X^{\pm 1}$. For $u \in U$, $Euw = Euwu^{-1}u = Eu$, since $uwu^{-1} \in N \leq E$. So $\overline{uw} = u$, and $uw \not\in U$, so there is a least $k$ such that $ua_1 \cdots a_k \notin U$. Let $u_k := ua_1 \cdots a_{k-1}$. Then $u_k \in U$ and $u_ka_k \notin U$, so $u_ka_k\overline{u_ka_k}^{-1}$ is not trivial. Since $U$ is infinite and $l$ is fixed, there is an infinite subset $V$ of $U$ and a fixed $k$ with $1 \le k \le l$, such that $k$ is minimal with $u_ka_k \notin U$ for all $u \in V$. Then $\left\{ u_ka_k\overline{u_ka_k}^{-1} : u \in V \right\}$ is an infinite subset of the set of Schreier generators of $E$, and hence $E$ has infinite rank.

-

For a completely different proof, that uses only the universal property of free groups, you could factor your map $\phi:F \to {\mathbb Z}$ as $\phi:F \to H \to {\mathbb Z}$, where $H$ is the wreath product ${\mathbb Z} \wr {\mathbb Z}$, and observe that the kernel of $H \to {\mathbb Z}$ is infinitely generated.

-
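A worked special case, added here purely to illustrate the Schreier-generator mechanism above (it is not part of the original thread): take $F$ free on $\{a,b\}$ and $\phi(a)=1$, $\phi(b)=0$. The set $U=\{a^n : n\in\mathbb{Z}\}$ is a Schreier transversal of $\ker\phi$ in $F$, and the Reidemeister–Schreier procedure yields the generators $x_n = a^n b a^{-n}$ for $n \in \mathbb{Z}$ (the generators coming from $a$ are all trivial). Thus $\ker\phi$ is free on $\{x_n : n\in\mathbb{Z}\}$, so its abelianization is free abelian of infinite rank, and in particular no finite subset can generate it.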
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492442607879639, "perplexity_flag": "head"}
http://mathoverflow.net/questions/116734/publishing-a-bad-paper/117769
## Publishing a bad paper?

First, I apologize if mathoverflow is a bad fit for this question, but it is the only place where I can think to get advice from professionals given my circumstance. I'm also sorry about any vagueness in my post, since I need to make sure I maintain anonymity - anonymity is also why I can't get advice from folks in my department. The short story is that I am a graduate student in a math-related field, and a faculty member has reached out to me about writing up a paper, based on something we discussed, to be submitted to a low-ranking mathematics journal. The material is very basic, coming very close to being trivial observations, and happens to be of, I suspect, no interest to anyone anywhere. Anyone who cared to prove what we proved would probably be able to do so within a day at most, if the results aren't already well known. At best, I think the results make good homework problems. Despite this, the faculty member seems excited about it. He works in a field that another faculty member has described as "toxic" to getting a job in academics, which is my ultimate goal, and advised me that I would probably be smart to leave papers in this research topic off my C.V. unless it is accepted to a top-tier journal. I'm in the strange position of working on this paper because I don't want to alienate him or offend him, but I'm hoping that the paper is rejected just so that my name isn't attached to the paper. This will be my first article submitted for publication, and I'm a little uneasy about even having random editors - who I conceivably could run into in the future - viewing the work. So, my question to you: is this sort of thing worth getting worked up about? Should I just go along with it, figuring that it is highly unlikely that it will negatively impact my career, or is there some legitimate concern? Is there a chance that publishing trash might help me just because it increases my publication rate?

- 31 Strange situation. I'd say to tell your "co-author" that you're busy with other stuff and are not really that interested in the material, so he should go ahead and finish the paper himself. Ask that he include an acknowledgement saying: "I thank XXX for his assistance with parts of this paper," but not list you as a co-author. To answer your question, I think that a paper that is essentially a homework exercise on a topic of little interest published in a not-very-good journal is not going to help you get a job, and especially if it is the only publication on your CV, it might hurt. – Joe Silverman Dec 18 at 20:26

17 Maybe slightly more diplomatically: you could say that you don't feel that your contribution was enough (did your co-author propose the problem for instance - maybe this could be a justification) to have your name on the paper; and you'd prefer just to be acknowledged in the paper? – Anthony Quas Dec 18 at 20:41

9 Should this be CW like usual advice questions? – Benjamin Steinberg Dec 18 at 20:42

17 I am puzzled by the notion of a "toxic" field. Do such things really exist? – Felix Goldberg Dec 18 at 21:47

8 Something you should do is get an opinion from someone in the paper's area regarding the value of the paper. Often things that are routine in one area of mathematics are relatively unknown in another area, and publication in the language of the second area can have an impact.
If the opinion you get is that the paper is substandard, don't allow your name on it (Anthony's suggestion is good). Otherwise, make sure the paper is worded modestly. – Brendan McKay Dec 18 at 23:16

## 8 Answers

When all else fails, try honesty. I don't mean that you have to tell the professor to his face that you think his field is toxic and that the paper is garbage. But if your honest opinion is that the paper is too trivial to be worth publishing and that you're worried that it might hurt your career, then I would tell the professor that. If you can suppress your name from the paper more easily, just by declining to work on it, then by all means do that. But it sounds like you've already worked on the paper and can't extricate yourself that easily at this point. In that case I'd recommend just telling the professor that you have had second thoughts and would like to remove your name from the paper, and explain why. The fact that you're a student and he's a professor makes this a scarier prospect, but I don't think your difference in social status should stop you from giving your honest professional opinion on the quality of the work. Intellectual honesty is what we are all striving for in our profession, after all. What's the point in being a scholar if you have to sacrifice honesty? And maybe you're wrong after all and the paper is more interesting than you think. You won't find this out unless you give the professor an opportunity to openly defend the paper against honest criticism. Honesty is so rare that it tends to confuse people, who are more accustomed to dealing with lies and excuses than with the straight dope. I know from experience that being honest does risk being misunderstood by people who assume that I can't possibly be telling the truth, so there is some risk of misunderstanding. In the long run, though, I believe that developing a reputation for honesty pays off handsomely, in terms of inner peace if nothing else.

- 1 I really like this +ve answer. – S. Sra Dec 19 at 16:44

2 I think in principle this answer is laudable, but geographical/political/cultural factors may make it imprudent... "what's the worst that can happen?" is a question that should not always be asked rhetorically. – Yemon Choi Dec 19 at 17:54

2 @Yemon: I agree. There are risks, and one should do one's best to assess the risks. But I do want to emphasize that there are long-term payoffs to honesty that are often overlooked. – Timothy Chow Dec 19 at 19:06

3 There are three caveats though. One is that there are no such things as a universal truth or an unambiguous sentence. The other one is that you always pay some price for your words unless you just "go with the flow" and you should balance your checkbooks. The third one is that for many people the form of the sentence means and matters more than the content of the sentence. But overall, I agree with Timothy, so I'm upvoting. :) – fedja Dec 20 at 7:07

@fedja: I agree. Honesty is a mode of personal interaction and not a sentential predicate. This would go without saying among any non-mathematical audience... – Timothy Chow Dec 20 at 17:03

I agree with what Rodrigo says: you don't have to include a paper which you do not like in your CV. In general, people are usually evaluated by their best papers, not by the worst.
From my own experience, I can conclude that my own opinion about my papers does not always coincide with the public opinion. There were several papers I hesitated to publish, but they are more appreciated than some papers I am proud of. There were also results I did not publish, considering them too light-weight, and later had regrets. And finally, a "toxic subject" is in my opinion nonsense.

-

• First, to the issue of "toxic" fields. To everyone out there who wrote that the idea is nonsense, there are people who are still working on elementary-ish approaches to special cases of Fermat's Last Theorem. There could be good math there, but if all I knew about somebody was that he submitted an elementary proof that $x^{17}+y^{17}\not=z^{17}$, I would be unimpressed, and suspect crank-hood.
• There are also several comments about not putting it on your CV. If you make it onto a shortlist, you will be searched for at least through Google, the arXiv, and MathSciNet. If it's out there, it will be attached to you. But also, if you have a couple of serious papers, and also one coauthored on a light topic, that would speak well of you (as a job candidate).
• Have you considered the possibility that this professor knows his field better than you? Perhaps the result is interesting, but the question hadn't arisen before. There are, after all, problems that are easy to solve once they are stated in the right way. It's also possible that it's one of those problems that isn't too hard if it is in a book, and you know which chapter/section is relevant, but perhaps people in his field don't usually have your exact background. Perhaps this was the warm-up question, and for "the paper" he has some nice generalization or application in mind that still needs to be worked out.
• The bottom line is that in math your name is your brand, and you need to nurture and develop and protect that brand. I know mathematicians that have pseudonyms that they use for exactly this class of publication: good enough to publish, not good enough to lump in with my "real" work.

- 2 Very nice answer. +1 – Felix Goldberg Dec 19 at 17:59

While it is true that if you have 5-6 good research papers after that, nobody will care about what you wrote in that paper (assuming it is as bad as you present it), if it is one of your only 3 official publications by the time of the job search (whether you include it in your CV or not doesn't matter, because once you reach the short list, MathSciNet and the grapevine become more important sources of information about you than your own presentation of yourself) and the other two papers are not readily available, you are toast. One thing you can do is to check quickly whether what you've done is, indeed, well known in some form. If it is, you are off the hook, because knowingly publishing a published result as new is a no-no, so you may just regretfully say that somebody has "undercut" you there and you need to work on another project now. Whether there is any other clean exit depends on many details you haven't provided, and many people gave you good advice already, so I'll stop here.

- 9 Fedja's advice gives you a rather Machiavellian way out of your situation. Quickly write up a second (perhaps very short and badly written) paper containing the result, put it under another name on the arXiv, go to your colleague and say: "Sorry, I have just seen this on the arXiv, we have been scooped." – Roland Bacher Dec 19 at 8:14

4 I think Roland must be kidding --- the price of such deception can be huge!
(for vote=0;; vote--) – S. Sra Dec 19 at 12:23

3 That solution Roland suggested is kinda crossing the line to me, for many reasons... Anyway, I feel like the professor might get upset when he sees the published date on the arXiv. He might think someone beat him to it because the lazy grad student was slacking off, not writing up the result quickly? – Yuichiro Fujiwara Dec 19 at 12:35

8 Yeah, if it is discovered, you are out of the game for good. I wouldn't recommend anything that risky. In principle, there is another way (a hard one though): try to turn what you have into something worthy by your standards. There are no such things as "toxic fields" or "top tier journals"; there are just people who are incapable of doing math, and people with snobbish attitudes. So, you can always make lemonade out of a lemon, but sometimes it requires a lot of strength to squeeze the juice, and I have no idea if you are up to the task or not... – fedja Dec 19 at 13:42

1 I'm never kidding, but I lie always. – Roland Bacher Dec 19 at 17:19

I would like to add my own two cents here as somebody who works in a "toxic" area.

1. In essentially all areas of mathematics there is both good work and bad work being done. Mainstream areas, however, tend to have a larger number of people doing good work (because, after all, few people are willing to spend 6-10 years attacking the hardest problem in an unpopular field when almost nobody will pat you on the back when you are done). Because of that, nobody will simply dismiss you out of hand for working in, say, Number Theory, despite the fact that there are also plenty of bad papers published in this area. On the other hand, unfashionable areas also have good people trying to do good work who publish in very good journals. If you want to go that route you must be very dedicated and be prepared to have papers rejected by good journals without being sent to a referee, and to not get job interviews because you work in area X. So you should only work in such an area if you love the mathematics. On the other hand, publishing a paper on a toxic area in a top journal can sometimes have the opposite effect: people realize that publishing in a top journal in certain areas is more difficult than in others, and will say "wow, he/she published in ((insert journal name)) on ((insert unfashionable area)), he/she must be doing good work". So the point is that good work in a toxic area can still end up receiving recognition if you work hard enough to get it recognized.

2. However, you seem to indicate that it is not such good work in a toxic area that is at issue. Here the question becomes more complicated because there is a scientific issue and a human issue. Suppose that at a conference I spend an evening discussing mathematics with a colleague and we make progress on some problem he/she is interested in, and he/she wants to make a paper out of it. Then there is a human question. If I tell this person that I only want to be thanked and not be an author, they might be offended. Maybe this person is a friend, or for some other reason I don't want to offend them. The scientific vs. human issue can be complex. The more senior you are, the less effect having a paper in a weak journal has, because people won't really notice it. Nonetheless, at any point in one's career people's feelings have importance, too. I don't have any good answer on how to balance this aspect.

3. Grad students often underestimate the difficulty of a problem they have solved without seemingly having done much.
For instance, there is the following famous story in computer science. It was long an open problem whether non-deterministic space complexity classes were closed under complement, because the obvious approach to recognizing the complement of a language does not preserve space complexity. People in the area generally believed that space complexity wasn't preserved by complementation. The story goes that a certain grad student (somebody at MO probably knows who and can edit this) arrived late to class, thought that the prof was asking a homework problem rather than presenting an open problem, and came up with a clever but elementary proof that space complexity classes were in fact closed under complement. Sometimes people coming with a fresh viewpoint and with training in other areas of math see things that nobody in the area saw, and although the arguments seem like elementary observations, they may be important.

- 2 The grad student who proved the closure under complement of nondeterministic space classes was Róbert Szelepcsényi (though I don't vouch for the homework story). The theorem was independently proved at about the same time by Neil Immerman. – Emil Jeřábek Dec 19 at 21:04

Besides the effect of bad papers in a job search, I maintain a kind of scientific dignity for myself, and I try to do the right thing in similar situations. So, I suggest you do the same. For example, I have experienced at least two similar situations:

1. When I was a graduate student, and even at the present time, I knew that if I shared my papers with some of the old professors by putting their names on as coauthors, they would help me in my career and hopefully not cause official difficulties for me. But I was not able to convince myself to do this, first because it is a lie, and secondly because doing this kind of thing makes me hate myself, and that's the worst thing that can happen to someone.

2. One of my colleagues once suggested that we share our papers with each other, so that by writing one paper, it would count as two publications for each of us. I know this suggestion sounds juvenile and stupid, but it really happened. Because of scientific dignity, again, I rejected this suggestion.

- 7 Concerning point 2: but, if you collaborated with your colleague for real, wouldn't the time spent in obtaining each one of those results be about one half of the time needed working alone (and, still, both of you would have his/her name on all the papers)? So, collaborating for real would give results at the same speed, plus you would learn more (and you would not cheat!). – Qfwfq Dec 18 at 21:42
As for people judging you from having read the paper, it seems there is little chance of that given that you are not going to work in that 'toxic" field later on. Also, your referee will probably not be a Fields medalist, so there should be no big harm from that quarter either. - I am tempted to attack some bits of the responses you have gotten so far! Instead I will tell you that I believe the answer is very simple. 1. Put yourself in the professors shoes. (I have to assume you have the ability to do that, even if it requires great effort.) 2. Now, do what you would want a student to do if the tables were turned. As simple (and as old hat) as this sounds, it is a cure for many ills that is far too rarely practiced. (Another relevant fact: scientists - mathematicians included - almost always take themselves and what they do far too seriously!) - +1 for last sentence! – Alexander Chervov Jan 1 at 10:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9767971634864807, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/190237/does-the-identity-principle-hold-on-meromorphic-functions
# Does the identity principle hold on meromorphic functions?

Does the identity principle hold for meromorphic functions? (The identity principle states that if $Q$ is a connected set and $f(z)= g(z)$ for all $z$ in some subset $A$ of $Q$ which has limit points in $Q$, then $f(z) = g(z)$ for all $z$ in $Q$.)

- Use the identity principle on the complement of the poles of both functions. – Qiaochu Yuan Sep 2 '12 at 22:19

2 So the set is still connected, but what if in removing the poles you remove the limit point of the subset, or if at a point one function has a removable singularity and the other has a pole? – confused Sep 2 '12 at 22:30

1 Work with $f - g$. If $f$ and $g$ agree on a set with a limit point, then the limit point is a removable singularity of $f - g$. ("Poles of both functions" should have been "poles of either function" above; I meant for "both" to be applied to "poles" and not to "functions.") Also, you want $Q$ to be open. – Qiaochu Yuan Sep 2 '12 at 22:41
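Editorial sketch assembling the comments above into a proof, assuming $Q$ is an open connected subset of $\mathbb{C}$ and $f$, $g$ are meromorphic on $Q$: let $h = f - g$ and let $P$ be its set of poles, which is discrete and closed in $Q$. Removing a discrete closed set from a connected open subset of $\mathbb{C}$ leaves it open and connected, so $h$ is holomorphic on $Q \setminus P$ and vanishes on $A \setminus P$; since $P$ is discrete, $A \setminus P$ still accumulates at the limit point $z_0$ of $A$. If $z_0 \in Q \setminus P$, the ordinary identity principle gives $h \equiv 0$ on $Q \setminus P$. If instead $z_0 \in P$, then $h$ cannot have a pole at $z_0$, since $|h| \to \infty$ there would contradict $h = 0$ on points accumulating at $z_0$; so the singularity is removable, the extension vanishes at $z_0$ and on a sequence converging to it, hence $h \equiv 0$ near $z_0$ and, again by the identity principle, on all of $Q \setminus P$. Either way $f = g$ wherever both are finite, and therefore as meromorphic functions on $Q$.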
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9180945158004761, "perplexity_flag": "head"}
http://mathoverflow.net/questions/13016/generic-fiber-of-morphism-between-non-singular-curves/13037
## Generic fiber of morphism between non-singular curves

This is prop 2.6b on p.28 of Silverman's The Arithmetic of Elliptic Curves. It says: let $\phi: C_1 \rightarrow C_2$ be a non-constant map of projective non-singular irreducible curves (probably over an algebraically closed field, but I am not too sure). Then for all but finitely many $Q \in C_2$, #$\phi^{-1} (Q) = deg_s (\phi)$, where the RHS is the separable degree of the corresponding extension of function fields. I don't understand Silverman's proof. The proof just says that it is Hartshorne II.6.8, and I don't understand how it is related to this proposition at all. Hartshorne II.6.8 roughly states that if $f: X \rightarrow Y$ is a morphism where $X$ is a complete nonsingular curve over an algebraically closed field $k$, and $Y$ is any curve over $k$, then either $f(X) = pt$ or $Y$, and in the latter case $f$ is a finite morphism and $[K(X):K(Y)] < \infty$. Can anyone show a proof of the proposition? I failed to show that the set of all such $Q$ is open myself; can anyone shed some light on this? Thanks!

- You need an algebraically closed base field in the statement. Otherwise there can be points $Q\in C_1$ such that the degree $[k(Q):k(\phi (Q))]>1$, in which case the number of points in the fibre $\phi^{-1}(\phi (Q))$ is less than the degree. – Hagen Knaf Jan 26 2010 at 10:04

Can you give an example? I have very bad intuition about this stuff. – Ho Chung Siu Jan 26 2010 at 11:15

## 3 Answers

Here is a complete proof: as remarked in the answer by Norondion, we can reduce to the case when $C_1 \rightarrow C_2$ is generically separable, i.e. $k(C_1)$ is separable over $k(C_2)$. Let $A \subset k(C_1)$ be a finite type $k$-algebra consisting of the regular functions on some non-empty affine open subset $U$ of $C_2$ (it doesn't matter which one you choose), so that $k(C_2)$ is the fraction field of $A$. By the primitive element theorem, we may write $k(C_1) = k(C_2)[\alpha]$, where $\alpha$ satisfies some polynomial $f(\alpha) = \alpha^n + a_{n-1}\alpha^{n-1} + \cdots + a_1 \alpha + a_0 = 0,$ for some $a_i$ in $k(C_2)$. Now the $a_i$ can be written as fractions involving elements of $A$, i.e. each $a_i = b_i/c_i$ for some $b_i,c_i \in A$ (with $c_i$ non-zero). We may replace $A$ by $A[c_0^{-1},\ldots,c_{n-1}^{-1}]$ (this corresponds to puncturing $U$ at the zeroes of the $c_i$), and thus assume that in fact the $a_i$ lie in $A$. The ring $A[\alpha]$ is now integral over $A$, and of course has fraction field equal to $k(C_2)[\alpha] = k(C_1)$. It need not be that $A[\alpha]$ is integrally closed, though. We are going to shrink $U$ further so we can be sure of this. By separability of $k(C_1)$ over $k(C_2)$, we know that the discriminant $\Delta$ of $f$ is non-zero, and so replacing $A$ by $A[\Delta^{-1}]$ (i.e. shrinking $U$ some more) we may assume that $\Delta$ is invertible in $A$ as well. It's now not hard to prove that $A[\alpha]$ is integrally closed. Thus $\text{Spec }A[\alpha]$ is the preimage of $U$ in $C_1$ (in a map of smooth curves, taking preimages of an affine open precisely corresponds to taking the integral closure of the corresponding ring). In other words, restricted to $U \subset C_2$, the map has the form $\text{Spec }A[\alpha] \rightarrow \text{Spec }A,$ or, what is the same, $\text{Spec }A[x]/(f(x)) \rightarrow \text{Spec }A$.
Now if you fix a closed point $\mathfrak m \in \text{Spec }A,$ the fibre over this point is equal to $\text{Spec }(A/\mathfrak m)[x]/(\overline{f}(x)) = k[x]/(\overline{f}(x)),$ where here $\overline{f}$ denotes the reduction of $f$ mod $\mathfrak m$. (Here is where we use that $k$ is algebraically closed, to deduce that $A/\mathfrak m = k,$ and not some finite extension of $k$.) Now we arranged for $\Delta$ to be in $A^{\times}$, and so $\bar{\Delta}$ (the reduction of $\Delta$ mod $\mathfrak m$, or equivalently, the discriminant of $\bar{f}$) is non-zero, and so $k[x]/(\bar{f}(x))$ is just a product of copies of $k$, as many as the degree of $f$, which equals the degree of $k(C_1)$ over $k(C_2)$. Thus $\text{Spec }k[x]/(\bar{f}(x))$ is a union of that many points, which is what we wanted to show.

-

Do you have questions about Hartshorne's proof, or just about how to deduce Silverman's result from it? You can factor the extension of function fields into a separable and a purely inseparable extension, so WLOG $\phi$ is separable, as a purely inseparable morphism is a universal homeomorphism. As $f$ is finite, it is affine, so it looks locally like $\mathrm{Spec}(B) \to \mathrm{Spec}(A)$. As $\phi$ is separable, the discriminant of $B/A$ is $\neq 0$, which gives us that $f$ is unramified outside a finite set of points (the primes which don't divide the discriminant).

- Thanks for your answer. I don't know how to deduce Silverman's result from the proposition in Hartshorne. In fact I don't even see how they are related. As for your proof, the last line "f is unramified outside a finite set of points ... " is not something I was aware of. I know that for a number field $K$, $p$ ramifies in $K$ iff $p$ divides disc $K$. So are you saying that the relative version is also true? If so, where may I find a reference? Thanks! Also, is "f is finite" what Silverman's referring to when he points to that proposition in Hartshorne? – Ho Chung Siu Jan 26 2010 at 10:00

It is true for Dedekind schemes (which curves are). Check out these lecture notes of Szamuely: renyi.hu/~szamuely/gal6-7.pdf Yes, I use that $f$ is finite. – norondion Jan 26 2010 at 15:23

Example of a finite morphism with inert points: Define $C_1 := \mathrm{Spec}(\mathbb{R}[x,y])$, where $\mathbb{R}[x,y]:=\mathbb{R}[X,Y]/(X^2+Y^2+1)$ and $C_2 := \mathbb{A}^1_\mathbb{R}$. Let the morphism $\phi$ be given by the ring extension $\mathbb{R}[x,y] / \mathbb{R}[x]$. Then the fibre above every rational point of $C_2$ consists of one element only, because the equation $X^2+Y^2+1=0$ has no real solutions. However $\phi$ has degree $2$.

-
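An illustrative special case, added here for concreteness (not part of the original thread): let $k$ be algebraically closed of characteristic $\neq 2$ and let $\phi:\mathbb{P}^1 \to \mathbb{P}^1$ be $x \mapsto x^2$. Then $[k(x):k(x^2)] = 2 = \deg_s\phi$, and for every closed point $c \neq 0,\infty$ the fibre $\phi^{-1}(c)$ consists of the two square roots of $c$, while the fibres over $0$ and $\infty$ each contain a single (ramified) point. So #$\phi^{-1}(Q) = \deg_s(\phi)$ for all but finitely many $Q$, and in the affine chart the exceptional point is exactly where the discriminant of $x^2 - c$, namely $4c$, vanishes — matching the discriminant argument in the answers above.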
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 105, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9477291107177734, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/107302-problems-cubes.html
# Thread:

1. I have some additional problems which I also have difficulty solving:

3. Find the distance in inches between two vertices of a cube that are farthest from each other if an edge measures 10 inches.

4. If an edge of a cube is increased by 33%, by how much is the total surface area increased?

2. Originally Posted by FailCalculus

I have some additional problems which I also have difficulty solving:

3. Find the distance in inches between two vertices of a cube that are farthest from each other if an edge measures 10 inches.

$\textcolor{red}{\sqrt{10^2 + 10^2 + 10^2}}$

4. If an edge of a cube is increased by 33%, by how much is the total surface area increased?

$\textcolor{red}{A = 6s^2}$ $\textcolor{red}{A_{new} = 6\left(\frac{4}{3} s\right)^2}$ ...
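A quick numerical check of the two answers sketched above, added as a hedged illustration (it treats the "33%" increase as the factor $\frac{4}{3}$ used in the red solution):

```python
# Verify the cube answers numerically.
from math import sqrt

edge = 10
space_diagonal = sqrt(3) * edge            # farthest pair of vertices: sqrt(10^2 + 10^2 + 10^2)
print(space_diagonal)                      # ~17.32 inches

area_old = 6 * edge ** 2
area_new = 6 * (4 / 3 * edge) ** 2         # edge increased by one third
print((area_new / area_old - 1) * 100)     # ~77.8 percent increase in surface area
```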
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9745752811431885, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/58567/spectral-gap-of-a-product-of-markov-processes
## Spectral gap of a product of Markov processes

For `$m \in [N] \equiv \{1,\dots, N\}$`, let $Q^{(m)}$ be the generator of a (well-behaved) continuous-time Markov process on a finite state space $[n_m]$. Write $J \equiv (j_1,\dots,j_N) \in \prod_m [n_m]$ with $j_m \in [n_m]$. The composite Markov generator corresponding to running each of the $N$ processes independently has off-diagonal entries given by `$Q^\otimes_{JJ'}=\lim_{\Delta \downarrow 0} \mathbb{P}(J \overset{\Delta}{\rightarrow} J')/\Delta$` for $J \ne J'$, where the probability of a transition from $J$ to $J'$ in a time interval of duration $\Delta$ is indicated. In this limit a.s. at most one of the component processes can execute a transition, so that the only nonzero off-diagonal terms are of the form `$Q^\otimes_{JJ'} = Q^{(M)}_{j_M j'_M}$` with $j'_m = j_m$ for $m \ne M$. What can be said about the spectral gap of $Q^\otimes$ or the mixing time of the underlying process? Of particular interest is the case $Q^{(m)} \equiv Q$. I've looked in the most obvious place (Levin, Peres, and Wilmer) and in some less obvious places, and I haven't seen this anywhere (perhaps I'm making a silly oversight). In the literature "product chains" usually mean something rather different than the discrete-time analogue of the above, and in general I expect that any terminology is overloaded. With that in mind, specific references would be most helpful.

- hmm, could you maybe translate it to a purely linear-algebraic question on eigenvalues of Q-matrices? I think you would find more people able to help you then. – Federico Poloni Mar 15 2011 at 22:10

I have also found recent papers by Ycart and coworkers that touch on this sort of thing. Diaconis and Saloff-Coste have also written a relevant paper. – Steve Huntsman Mar 31 2011 at 20:47

## 2 Answers

The spectral gap is just the smallest of the spectral gaps of the component chains. The matrix $Q$ can be simply written as the sum of $Q^{(1)}\otimes I\otimes\ldots\otimes I$, $I\otimes Q^{(2)}\otimes I\otimes\cdots\otimes I$ etc. If the matrix $Q^{(i)}$ has eigenvalues `$(\lambda^{(i)}_j)_{j=1}^{n_i}$`, then the matrix $Q$ has eigenvalues `$\lambda^{(1)}_{i_1}+\lambda^{(2)}_{i_2}+\ldots+\lambda^{(N)}_{i_N}$` (with eigenvectors `$e^{(1)}_{i_1}\otimes \cdots \otimes e^{(N)}_{i_N}$`).

- I checked the sum formula for $Q$ and the eigenvalues on a small random case and they appear to be correct. I still need to understand how to prove it but it will be instructive to do that myself. Thanks! – Steve Huntsman Mar 16 2011 at 21:27

I think the only thing to check is that $Q$ is what I said it was. Once you've got that you can just plug in the eigenvectors and see that they have the claimed eigenvalues. Since the number of eigenvectors matches the dimension, this must give the complete spectral decomposition of $Q$. For checking that $Q$ is as claimed, of course it's sufficient just to pick one component. – Anthony Quas Mar 16 2011 at 21:41

I gather from the literature that this is well known. The key phrase is either "tensor sum" or "Kronecker sum" in conjunction with "Markov". – Steve Huntsman Mar 29 2011 at 22:35

Hi Steve, Does section 12.4 of Levin, Peres, and Wilmer address your question?
My first impression is that your problem corresponds to taking the weight $w_i$ in (12.19) of their book as $1/N$. Then the gap according to corollary 12.12 should be $\min_i w_i \gamma_i$ where $\gamma_i$ is the spectral gap of $Q^{(i)}$. Continuous time or discrete time shouldn't make a big difference to the spectral gap, as the former is just the exponentiation of the latter.

- A closer analogue would be exercise 12.7, which they mention. The spectral gap of this discrete-time case is very easily dealt with. But I don't see how it adapts to the continuous-time case. – Steve Huntsman Mar 15 2011 at 22:03

Actually, since you suggested that the off-diagonal block entries of the generator are all zero, due to the fact that jumps cannot happen simultaneously in different components, I think your product chain is closer to the independent jump version as in corollary 12.12. Exercise 12.7 describes a chain which, when translated into the continuous setting, would have simultaneous action in each component. Thus the generator would have nonzero entries everywhere. Is that right? – John Jiang Mar 16 2011 at 1:02

I guess it depends on what the key property being related is. In my case the archetypal situation is independent copies of the same process, hence my remark about 12.7. But I see your point. – Steve Huntsman Mar 16 2011 at 3:21
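A quick numerical illustration of Anthony Quas's answer above — a hedged sketch with small random generators (the sizes and seed are arbitrary): the composite generator is the Kronecker (tensor) sum of the component generators, its eigenvalues are all pairwise sums of component eigenvalues, and its spectral gap is the smallest of the component gaps.

```python
# Numerical check: eigenvalues and spectral gap of a Kronecker sum of CTMC generators.
import numpy as np

rng = np.random.default_rng(0)

def random_generator(n):
    """Random CTMC generator: nonnegative off-diagonal entries, rows summing to zero."""
    Q = rng.random((n, n))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def spectral_gap(M):
    """Smallest |Re(lambda)| over the nonzero eigenvalues of a generator M."""
    ev = np.linalg.eigvals(M)
    return (-ev.real[ev.real < -1e-9]).min()

Q1, Q2 = random_generator(3), random_generator(4)
Q = np.kron(Q1, np.eye(4)) + np.kron(np.eye(3), Q2)   # Kronecker (tensor) sum

pairwise_sums = np.add.outer(np.linalg.eigvals(Q1), np.linalg.eigvals(Q2)).ravel()
assert np.allclose(np.sort_complex(np.linalg.eigvals(Q)),
                   np.sort_complex(pairwise_sums))     # agree up to numerical error

print(spectral_gap(Q), min(spectral_gap(Q1), spectral_gap(Q2)))   # should coincide
```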
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427358508110046, "perplexity_flag": "head"}
http://www.citizendia.org/Harmonic_mean
In mathematics, the harmonic mean (formerly sometimes called the subcontrary mean) is one of several kinds of average. Typically, it is appropriate for situations when the average of rates is desired. The harmonic mean is the number of variables divided by the sum of the reciprocals of the variables.

The harmonic mean $H$ of the positive real numbers $a_1, a_2, \ldots, a_n$ is defined to be

$H = \frac{n}{\frac{1}{a_1} + \frac{1}{a_2} + \cdots + \frac{1}{a_n}} = \frac{n}{\sum_{i=1}^n \frac{1}{a_i}}$

That is, the harmonic mean of a group of terms is the reciprocal of the arithmetic mean of the reciprocals.

## Examples

In certain situations, especially many situations involving rates and ratios, the harmonic mean provides the truest average. For instance, if a vehicle travels a certain distance at a speed x (e.g. 60 kilometres per hour) and then the same distance again at a speed y (e.g. 40 kilometres per hour), then its average speed is the harmonic mean of x and y (48 kilometres per hour), and its total travel time is the same as if it had traveled the whole distance at that average speed. However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 50 kilometres per hour. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds, and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed.)

Similarly, if one connects two electrical resistors in parallel, one having resistance x (e.g. 60 Ω) and one having resistance y (e.g. 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of x and y (48 Ω): the equivalent resistance in either case is 24 Ω (one-half of the harmonic mean). However, if one connects the resistors in series, then the average resistance is the arithmetic mean of x and y (with total resistance equal to the sum of x and y). As with the previous example, the same principle applies when more than two resistors are connected, provided that all are in parallel or all are in series.

In finance, the harmonic mean is used to calculate the average cost of shares purchased over a period of time. For example, if an investor purchases \$1000 worth of stock every month for three months and the prices paid per share each month were \$8, \$9, and \$10, then the average price the investor paid is \$8.926 per share. However, if the investor purchased 1000 shares per month, the arithmetic mean (which turns out to be \$9.00) would be used. Note that in this example, the investor buying \$1000 worth of the stock each month means buying 125 shares at \$8 the first month, 111.11 shares at \$9 the second month, and 100 shares at \$10 in the third month. Fewer shares are purchased at higher prices while more shares are purchased at lower prices. Thus more weight is given to the lower prices than to the higher prices in the calculation of the average cost per share (\$8.926). If the investor had instead purchased 1000 shares each month, then equal weight would be given to high and low purchase prices, leading to an average cost per share of \$9.00. This explains why the harmonic mean is less than the arithmetic mean.

An interesting consequence arises from basic algebra in problems of working together. As an example, if a gas-powered pump can drain a pool in 4 hours and a battery-powered pump can drain the same pool in 6 hours, then it will take both pumps

$\frac {{6} \cdot {4}} {{6} + {4}},$

which is equal to 2.4 hours, to drain the pool together. Interestingly, this is one-half of the harmonic mean of 6 and 4. This consequence arises in any work problem of n people. Shown here in simplified form,

$\frac{H}{n} = \frac{1}{\frac{1}{a_1} + \frac{1}{a_2} + \cdots + \frac{1}{a_n}} = \frac{1}{\sum_{i=1}^n \frac{1}{a_i}}$

## Harmonic mean of two numbers

For just two numbers, the harmonic mean can be written as

$H = \frac {{2} {a_1} {a_2}} {{a_1} + {a_2}}.$

In this case, their harmonic mean is related to their arithmetic mean,

$A = \frac {{a_1} + {a_2}} {2},$

and their geometric mean,

$G = \sqrt {{a_1} \cdot {a_2}},$

by

$H = \frac {G^2} {A},$

so $G = \sqrt {{A} {H}}$, i.e. the geometric mean is the geometric mean of the arithmetic mean and the harmonic mean. Note that this result holds only in the case of just two numbers.

## Relationship with other means

The harmonic mean is one of the three Pythagorean means, the other two being the arithmetic mean and the geometric mean. For a given data set of positive numbers, the harmonic mean is always the least of the three and the arithmetic mean is always the greatest, with the geometric mean in between (for two numbers, the geometric mean is the geometric mean of the other two means, as shown above). When all the variables are equal, the three means coincide: for example, if both variables are 2, the harmonic mean, geometric mean and arithmetic mean are all equal to 2. The harmonic mean is the special case $M_{-1}$ of the power mean, and it is equivalent to a weighted arithmetic mean in which each value's weight is the reciprocal of the value.

Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones.

The arithmetic mean is often incorrectly used in places calling for the harmonic mean. [1] In the speed example above, for instance, the arithmetic mean 50 is incorrect, and too big. Such an error was apparently made in a calculation of the transport capacity of American ships during World War I: the arithmetic mean of the various ships' speeds was used, resulting in a total capacity estimate which proved unattainable.

## References

1. Ya-lun Chou, Statistical Analysis, Holt International, 1969. ISBN 0030730953
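To make the worked examples above concrete, here is a small computational sketch (added editorially; the numbers simply re-check the speed, resistor, and share-purchase figures quoted in the article):

```python
# Re-check the harmonic mean examples from the article.
def harmonic_mean(values):
    """Harmonic mean of a sequence of positive numbers."""
    return len(values) / sum(1.0 / v for v in values)

print(harmonic_mean([60, 40]))       # 48.0  -> average speed over two equal distances
print(harmonic_mean([60, 40]) / 2)   # 24.0  -> equivalent resistance of 60 ohm and 40 ohm in parallel
print(harmonic_mean([8, 9, 10]))     # ~8.926 -> average price per share when spending
                                     #           the same dollar amount at each price
```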
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306718707084656, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/5591/why-do-people-use-p-values/5592
Why do people use p-values?

Roughly speaking, a p-value gives the probability of the observed outcome of an experiment given the hypothesis (model). Having this probability (p-value), we want to judge our hypothesis (how likely it is). But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome?

In more detail: we have a coin. We throw it 20 times and we get 14 heads (14 out of 20 is what I call the "outcome of the experiment"). Now, our hypothesis is that the coin is fair (the probabilities of head and tail are equal to each other). Now we calculate the p-value, which is equal to the probability of getting 14 or more heads in 20 flips of the coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how likely is it that we have a fair coin). But if we want to estimate the probability of the model, why don't we calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?

- You would still have to model your experiment somehow to be able to compute the likelihood-function. – Raskolnikov Dec 17 '10 at 11:06

10 Pete Dixon wrote an article back in 1998 called "Why scientists value p-values" (psychonomic.org/backissues/1631/R382.pdf) that might be an informative read. A good follow-up would be Glover & Dixon's 2004 paper on the likelihood ratio as a replacement metric (pbr.psychonomic-journals.org/content/11/5/791.full.pdf). – Mike Lawrence Dec 17 '10 at 14:18

2 Mike, that looks suspiciously like a good answer to me. What's it doing in the comments? – Matt Parker Dec 17 '10 at 16:20

People don't use p-values, statisticians do. (Couldn't resist a pithy saying that's also true. Of course, once you start properly qualifying each noun, it loses its pithiness.) – Wayne Dec 30 '11 at 20:00

7 Answers

Computing the probability that the hypothesis is correct doesn't fit well within the frequentist definition of a probability (a long run frequency), which was adopted to avoid the supposed subjectivity of the Bayesian definition of a probability. The truth of a particular hypothesis is not a random variable; it is either true or it isn't, and has no long run frequency. It is indeed more natural to be interested in the probability of the truth of the hypothesis, which is IMHO why p-values are often misinterpreted as the probability that the null hypothesis is true. Part of the difficulty is that, from Bayes' rule, we know that to compute the posterior probability that a hypothesis is true, you need to start with a prior probability that the hypothesis is true. A Bayesian would compute the probability that the hypothesis is true, given the data (and his/her prior belief). Essentially, deciding between the frequentist and Bayesian approaches is a choice of whether the supposed subjectivity of the Bayesian approach is more abhorrent than the fact that the frequentist approach generally does not give a direct answer to the question you actually want to ask - but there is room for both. In the case of asking whether a coin is fair, i.e. whether the probability of a head is equal to the probability of a tail, we also have an example of a hypothesis that we know in the real world is almost certainly false right from the outset.
The two sides of the coin are non-symmetric, so we should expect a slight asymmetry in the probabilities of heads and tails, so if the coin "passes" the test, it just means we don't have enough observations to be able to conclude what we already know to be true - that the coin is very slightly biased!

- 6 Being very close to fair is not the same thing as being exactly fair, which is the null hypothesis. I was pointing out one of the idiosyncrasies of hypothesis testing, namely that we often know that the null hypothesis is false, but use it anyway. A more practical test would aim to detect whether there is evidence that the coin is significantly biased, rather than significant evidence that the coin is biased. – Dikran Marsupial Dec 17 '10 at 15:13

1 Hi there, maybe I am mistaken, but I thought in science you can never say that the alternative hypothesis is true; you can only say that the null hypothesis is rejected and you accept the alternative hypothesis. To me the p value reflects the chance you will make a type 1 error, i.e. that you will reject the alternative hypothesis and accept the null hypothesis (say p=.05, or 5% of the time). It is important to distinguish between type 1 error and type 2 error, and the role that power plays in your modelling of events. – user2238 Dec 21 '10 at 7:06

3 For frequentist tests, I would use an even weaker statement, which is that you either "reject the null hypothesis" or you "fail to reject the null hypothesis", and don't accept anything. The key point being that (as in the case of the biased coin) sometimes you know a-priori that the null hypothesis is not true, you just don't have enough data to demonstrate that it isn't true; in which case it would be odd to "accept" it. Frequentist tests have type-I and type-II error rates, but that doesn't mean that they can talk of the probability of a particular hypothesis being true, as in the OP. – Dikran Marsupial Dec 22 '10 at 8:50

1 @user2238 The p-value is the chance of a type I error only when the null hypothesis is "simple" (not composite) and it happens to be true. For example, in a one-sided test of whether a coin is biased towards tails ($H_0: p\lt 0.5$), using a two-headed coin guarantees the chance of a type-I error is zero even though the p-value from any finite sample will be nonzero. – whuber♦ Dec 30 '11 at 19:45

Nothing like answering a really old question, but here goes.... p-values are almost valid hypothesis tests. This is a slightly adapted excerpt taken from Jaynes's 2003 probability theory book (Repetitive experiments: probability and frequency). Suppose we have a null hypothesis $H_0$ that we wish to test. We have data $D$ and prior information $I$. Suppose that there is some unspecified hypothesis $H_A$ that we will test $H_0$ against. The posterior odds ratio for $H_A$ against $H_0$ is then given by: $$\frac{P(H_A|DI)}{P(H_0|DI)}=\frac{P(H_A|I)}{P(H_0|I)}\times\frac{P(D|H_AI)}{P(D|H_0I)}$$ Now the first term on the right hand side is independent of the data, so the data can only influence the result via the second term. Now, we can always invent an alternative hypothesis $H_A$ such that $P(D|H_AI)=1$ - a "perfect fit" hypothesis. Thus we can use $\frac{1}{P(D|H_0I)}$ as a measure of how well the data could support any alternative hypothesis over the null. There is no alternative hypothesis that the data could support over $H_0$ by greater than $\frac{1}{P(D|H_0I)}$.
We can also restrict the class of alternatives, and the change is that the $1$ is replaced by the maximised likelihood (including normalising constants) within that class. If $P(D|H_0I)$ starts to become too small, then we begin to doubt the null, because the number of alternatives between $H_0$ and $H_A$ grows (including some with non-negligible prior probabilities). This is very nearly what is done with p-values, but with one exception: we don't calculate the probability for $t(D)>t_0$ for some statistic $t(D)$ and some "bad" region of the statistic. We calculate the probability for $D$ - the information we actually have, rather than some subset of it, $t(D)$. Another reason people use p-values is that they often amount to a "proper" hypothesis test, but may be easier to calculate. We can show this with the very simple example of testing the normal mean with known variance. We have data $D\equiv\{x_1,\dots,x_N\}$ with an assumed model $x_i\sim Normal(\mu,\sigma^2)$ (part of the prior information $I$). We want to test $H_0:\mu=\mu_0$. Then we have, after a little calculation: $$P(D|H_0I)=(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{N\left[s^2+(\overline{x}-\mu_0)^2\right]}{2\sigma^2}\right)$$ Where $\overline{x}=\frac{1}{N}\sum_{i=1}^{N}x_i$ and $s^2=\frac{1}{N}\sum_{i=1}^{N}(x_i-\overline{x})^2$. This shows that the maximum value of $P(D|H_0I)$ will be achieved when $\mu_0=\overline{x}$. The maximised value is: $$P(D|H_AI)=(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2}{2\sigma^2}\right)$$ So we take the ratio of these two, and we get: $$\frac{P(D|H_AI)}{P(D|H_0I)}=\frac{(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2}{2\sigma^2}\right)}{(2\pi\sigma^2)^{-\frac{N}{2}}\exp\left(-\frac{Ns^2+N(\overline{x}-\mu_0)^2}{2\sigma^2}\right)}=\exp\left(\frac{z^2}{2}\right)$$ Where $z=\sqrt{N}\frac{\overline{x}-\mu_0}{\sigma}$ is the "Z-statistic". Large values of $|z|$ cast doubt on the null hypothesis, relative to the hypothesis about the normal mean which is most strongly supported by the data. We can also see that $\overline{x}$ is the only part of the data that is needed, and thus is a sufficient statistic for the test. The p-value approach to this problem is almost the same, but in reverse. We start with the sufficient statistic $\overline{x}$, and we calculate its sampling distribution, which is easily shown to be $\overline{X}\sim Normal\left(\mu,\frac{\sigma^2}{N}\right)$ - where I have used a capital letter to distinguish the random variable $\overline{X}$ from the observed value $\overline{x}$. Now we need to find a region which casts doubt on the null hypothesis: this is easily seen to be those regions where $|\overline{X}-\mu_0|$ is large. So we can calculate the probability that $|\overline{X}-\mu_0|\geq |\overline{x}-\mu_0|$ as a measure of how far away the observed data is from the null hypothesis. As before, this is a simple calculation, and we get: $$\text{p-value}=P(|\overline{X}-\mu_0|\geq |\overline{x}-\mu_0||H_0)$$ $$=1-P\left[-\sqrt{N}\frac{|\overline{x}-\mu_0|}{\sigma}\leq\sqrt{N}\frac{\overline{X}-\mu_0}{\sigma}\leq \sqrt{N}\frac{|\overline{x}-\mu_0|}{\sigma}|H_0\right]$$ $$=1-P(-|z|\leq Z\leq |z||H_0)=2\left[1-\Phi(|z|)\right]$$ Now, we can see that the p-value is a monotonic decreasing function of $|z|$, which means we essentially get the same answer as the "proper" hypothesis test. Rejecting when the p-value is below a certain threshold is the same thing as rejecting when the posterior odds are above a certain threshold.
However, note that in doing the proper test, we had to define the class of alternatives, and we had to maximise a probability over that class. For the p-value, we have to find a statistic, and calculate its sampling distribution, and evaluate this at the observed value. In some sense choosing a statistic is equivalent to defining the alternative hypothesis that you are considering. Although they are both easy things to do in this example, they are not always so easy in more complicated cases. In some cases it may be easier to choose the right statistic to use and calculate its sampling distribution. In others it may be easier to define the class of alternatives, and maximise over that class.

This simple example accounts for a large amount of p-value based testing, simply because so many hypothesis tests are of the "approximate normal" variety. It also provides an approximate answer to your coin problem (by using the normal approximation to the binomial). It also shows that p-values in this case will not lead you astray, at least in terms of testing a single hypothesis. In this case, we can say that a p-value is a measure of evidence against the null hypothesis. However, p-values have a less interpretable scale than the Bayes factor - the link between a p-value and the "amount" of evidence against the null is complex. p-values get too small too quickly - which makes them difficult to use properly. They tend to overstate the support against the null provided by the data. If we interpret p-values as probabilities against the null, then $0.1$ in odds form is $9$, when the actual evidence is $3.87$, and $0.05$ in odds form is $19$ when the actual evidence is $6.83$. Or to put it another way, using a p-value as a probability that the null is false here is equivalent to setting the prior odds. So for a p-value of $0.1$ the implied prior odds against the null are $2.33$, and for a p-value of $0.05$ the implied prior odds against the null are $2.78$.

- 2 +1. "...choosing a statistic is equivalent to defining the alternative hypothesis that you are considering" strikes me as a deep insight. – whuber♦ Dec 30 '11 at 19:46

Good answer. It is worth noting (though obvious) that working with a class of alternatives that is larger than $k$ for some small $k$ can often be computationally prohibitive, let alone if one has to work with an infinite or uncountable number of alternatives, which may also occur in practice. A big plus of the p-value approach is that it is often (usually?) computationally simple/tractable. – Faheem Mitha Dec 31 '11 at 6:46

1 @faheemmitha - you are right about the combinatorial explosion, however this does not occur for the approach I describe (in fact you can show that the Bayes approach is effectively defining residuals). This is because we only need to define the class then maximise. We do not need to evaluate each alternative, just find the best one. – probabilityislogic Jan 7 '12 at 11:53

Your question is a great example of frequentist reasoning and is actually quite natural. I've used this example in my classes to demonstrate the nature of hypothesis tests. I ask for a volunteer to predict the results of a coin flip. No matter what the result, I record a "correct" guess. We do this repeatedly until the class becomes suspicious. Now, they have a null model in their head. They assume the coin is fair. Given that assumption of 50% correct when everything is fair, every successive correct guess arouses more suspicion that the fair coin model is incorrect.
A few correct guesses and they accept the role of chance. After 5 or 10 correct guesses, the class always begins to suspect that the chance of a fair coin is low. Thus it is with the nature of hypothesis testing under the frequentist model. It is a clear and intuitive representation of the frequentist take on hypothesis testing. It is the probability of the observed data given that the null is true. It is actually quite natural as demonstrated by this easy experiment. We take it for granted that the model is 50-50 but as evidence mounts, I reject that model and suspect that there is something else at play. So, if the probability of what I observe is low given the model I assume (the p-value) then I have some confidence in rejecting my assumed model. Thus, a p-value is a useful measure of evidence against my assumed model taking into account the role of chance. A disclaimer: I took this exercise from a long-forgotten article in, what I recall, was one of the ASA journals.

- Brett, this is interesting and a great example. The model here to me seems to be that people expect the order of heads and tails to occur in a random fashion. For example, if I see 5 heads in a row, I infer that this is an example of a non-random process. In fact, and I may be wrong here, the probability of a coin toss (assuming randomness) is 50% heads and 50% tails, and this is completely independent of the previous result. The point is that if we threw a coin 50000 times, and the first 25000 were heads, provided the remaining 25000 were tails, this still reflects a lack of bias – user2238 Dec 21 '10 at 7:26

@user2238: Your last statement is true, but it would be extraordinarily rare. In fact, seeing a run of 5 heads in 5 tosses would happen just 3% of the time if the coin is fair. It is always possible that the null is true and we have witnessed a rare event. – Brett Magill Oct 13 '11 at 15:43

As a former academic who moved into practice, I'll take a shot. People use p-values because they are useful. You can't see it in textbooky examples of coin flips. Sure they're not really solid foundationally, but maybe that is not as necessary as we like to think when we're thinking academically. In the world of data, we're surrounded by a literally infinite number of possible things to look into next. With p-value computations, all you need is an idea of what is uninteresting and a numerical heuristic for what sort of data might be interesting (well, plus a probability model for the uninteresting). Then individually or collectively we can scan things pretty simply, rejecting the bulk of the uninteresting. The p-value allows us to say "If I don't put much priority on thinking about this otherwise, this data gives me no reason to change". I agree p-values can be misinterpreted and overinterpreted, but they're still an important part of statistics. -

A side note to the other excellent answers: on occasion we don't. For example, up until very recently, they were outright banned at the journal Epidemiology - now they are merely "strongly discouraged" and the editorial board devoted a tremendous amount of space to a discussion of them here: http://journals.lww.com/epidem/pages/collectiondetails.aspx?TopicalCollectionId=4 -

"Roughly speaking p-value gives a probability of the observed outcome of an experiment given the hypothesis (model)." but it doesn't. Not even roughly - this fudges an essential distinction.
The model is not specified, as Raskolnikov points out, but let's assume you mean a binomial model (independent coin tosses, fixed unknown coin bias). The hypothesis is the claim that the relevant parameter in this model, the bias or probability of heads, is 0.5.

"Having this probability (p-value) we want to judge our hypothesis (how likely it is)" We may indeed want to make this judgement but a p-value will not (and was not designed to) help us do so.

"But wouldn't it be more natural to calculate the probability of the hypothesis given the observed outcome?" Perhaps it would. See all the discussion of Bayes above.

"[...] Now we calculate the p-value, that is equal to the probability to get 14 or more heads in 20 flips of coin. OK, now we have this probability (0.058) and we want to use this probability to judge our model (how is it likely that we have a fair coin)." 'of our hypothesis, assuming our model to be true', but essentially: yes. Large p-values indicate that the coin's behaviour is consistent with the hypothesis that it is fair. (They are also typically consistent with the hypothesis being false but so close to being true we do not have enough data to tell; see 'statistical power'.)

"But if we want to estimate the probability of the model, why we do not calculate the probability of the model given the experiment? Why do we calculate the probability of the experiment given the model (p-value)?" We actually don't calculate the probability of the experimental results given the hypothesis in this setup. After all, the probability is only about 0.176 of seeing exactly 10 heads when the hypothesis is true, and that's the most probable value. This isn't a quantity of interest at all.

It is also relevant that we don't usually estimate the probability of the model either. Both frequentist and Bayesian answers typically assume the model is true and make their inferences about its parameters. Indeed, not all Bayesians would even in principle be interested in the probability of the model, that is: the probability that the whole situation was well modelled by a binomial distribution. They might do a lot of model checking, but never actually ask how likely the binomial was in the space of other possible models. Bayesians who care about Bayes Factors are interested, others not so much.

- 2 Hmm, two down votes. If the answer is so bad it would be nice to have some commentary. – conjugateprior Dec 19 '10 at 10:57

I liked this answer. Sometimes people down vote answers because they are not similar to a textbook and try to rid all sites of discussions containing a taint of common sense or layman-like description. – Vass Sep 21 '11 at 13:00

I didn't downvote but I think a problem is that your point is not clear. – Elvis Dec 30 '11 at 23:04

I will only add a few remarks; I agree with you that the overuse of $p$-values is harmful.

• Some people in applied stats misinterpret $p$-values, notably understanding them as the probability that the null hypothesis is true; cf these papers: P Values are not Error Probabilities and Why We Don't Really Know What "Statistical Significance" Means: A Major Educational Failure.

• Another common misconception is that $p$-values reflect the size of the effect detected, or its potential for classification, when they reflect both the size of the sample and the size of the effect. This leads some people to write papers to explain why variables that have been shown "strongly associated" with a character (i.e. with very small p-values) are poor classifiers, like this one...
• To conclude, my opinion is that $p$-values are so widely used because of publication standards. In applied areas (biostats...) their size is sometimes the sole concern of some reviewers. -
http://math.stackexchange.com/questions/264544/how-to-find-number-of-prime-numbers-up-to-to-n
# How to find number of prime numbers up to N?

Is there any way or function to find out the number of prime numbers up to any number? (Say $10^7$ or $10^{30}$ or $200$ or $300$?) -

I really do not understand your question. Please rephrase it and give an example of what you want? From what I understand, you are searching for a way to find an interval of N numbers out of which none is prime? – CBenni Dec 24 '12 at 11:52

1 I think you're looking for this en.wikipedia.org/wiki/Prime-counting_function. There is no known explicit formula for this, but we do know how this function behaves asymptotically, that is the famous prime-number theorem en.wikipedia.org/wiki/Prime_number_theorem – Mohan Dec 24 '12 at 11:54

1 Ok, now I can understand the question. Don't shorten number with no. (especially not without the dot) ;) – CBenni Dec 24 '12 at 11:55

## 5 Answers

$$\pi(n) \approx \frac{n}{\ln(n)}$$ where $\pi(n)$ is the number of primes less than $n$ and $\ln(n)$ is the natural logarithm of $n$. (Googling 'Prime Number Theorem' will tell you more! But this seems particularly nice for a one-page intro: http://primes.utm.edu/howmany.shtml#pnt ) -

So has no one to date found out whether the number of primes less than $n$ can be found using the square root of $n$, or some other numbers? – Shan Dec 24 '12 at 12:32

@Shan Short answer: no! – Peter Smith Dec 24 '12 at 12:58

There is no known explicit formula for this, but we do know how this function behaves asymptotically, that is the famous prime-number theorem. It states that $$\pi(n) \approx n/\ln(n)$$ But there are certain algorithms for calculating this function. One such example is here Computing π(x): The Meissel, Lehmer, Lagarias, Miller, Odlyzko method -

1 – Hurkyl Dec 24 '12 at 13:16

The answers above are very correct and state the Prime Number Theorem. Note that below, $\pi(n)$ means the number of primes less than or equal to $n$. Pafnuty Chebyshev has shown that if $$\lim_{n \to \infty} {\pi(n) \over {n \over \ln(n)}}$$ exists, it is $1$. There are a lot of expressions that are approximately equal to $\pi(n)$ actually. -

One of the closest approximations to $\pi(n)$ is the log-integral, $\mathrm{Li}(n)$. The asymptotic expansion is easy to derive using integration by parts: $$\begin{align} \mathrm{Li}(n) &=\int_2^n\frac{\mathrm{d}t}{\log(t)}\\ &=\frac{n}{\log(n)}+C_1+\int_2^n\frac{\mathrm{d}t}{\log(t)^2}\\ &=\frac{n}{\log(n)}+\frac{n}{\log(n)^2}+C_2+\int_2^n\frac{2\,\mathrm{d}t}{\log(t)^3}\\ &=\frac{n}{\log(n)}+\frac{n}{\log(n)^2}+\frac{2n}{\log(n)^3}+C_3+\int_2^n\frac{3!\,\mathrm{d}t}{\log(t)^4}\\ &=\frac{n}{\log(n)}\left(1+\frac1{\log(n)}+\frac2{\log(n)^2}+\dots+\frac{k!}{\log(n)^k}+O\left(\frac1{\log(n)^{k+1}}\right)\right) \end{align}$$ Thus, using the first two terms in the asymptotic series, $$\begin{align} \frac{n}{\log(n)}\left(1+\frac1{\log(n)}+\dots\right) &=\frac{n}{\log(n)\left(1-\frac1{\log(n)}+\dots\right)}\\ &\approx\frac{n}{\log(n)-1} \end{align}$$ Therefore, $\dfrac{n}{\log(n)-1}$ is a better approximation than $\dfrac{n}{\log(n)}$ for large $n$. -

Why do none of these give exact values for $\pi(x)$? – Shan Dec 26 '12 at 5:07

For one, $\pi(x)$ is a discrete function, taking only integer values, whereas $\mathrm{Li}(x)$ is continuous. Similarly, primes clump in certain places; however, $\frac{\mathrm{d}}{\mathrm{d}x}\mathrm{Li}(x)=\frac1{\log(x)}$ is monotonically decreasing. – robjohn♦ Dec 26 '12 at 13:30

I've discovered a marvelous algorithm for counting the primes up to N. For each integer $n\leq N$, do the following:
1. Set $i=2$.
2. If $i>\sqrt{n}$, you've got a prime! Add 1 to your count, increase $n$ by 1, and if $n\leq N$, kindly return to step 1.
3. Reduce $n$ modulo $i$. If you got zero, then $n$ is not prime: increase $n$ by 1 and, if $n\leq N$, return to step 1.
4. Increase $i$ by 1 and kindly return to step 2. -
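Since the answers above compare $\pi(n)$ with $n/\ln(n)$ and $n/(\ln(n)-1)$, here is a small self-contained Python sketch (not taken from any of the answers) that counts primes exactly with a sieve of Eratosthenes and prints both approximations next to the exact count.

```python
from math import log

def prime_count(N):
    """Return pi(N), the number of primes <= N, via a sieve of Eratosthenes."""
    if N < 2:
        return 0
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if is_prime[i]:
            # every multiple of i from i*i onwards is composite
            for j in range(i * i, N + 1, i):
                is_prime[j] = False
    return sum(is_prime)

for N in (10**4, 10**5, 10**6):
    print(N, prime_count(N), round(N / log(N)), round(N / (log(N) - 1)))
# For N = 10^6 this prints 78498 (exact), 72382 and 78030, so n/(ln(n)-1)
# is noticeably closer than n/ln(n), as claimed in the answer above.
```

For exact counts at much larger N one would use Meissel-Lehmer-style methods, as linked in the second answer, rather than a sieve.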
http://mathoverflow.net/questions/6932/learning-class-field-theory-local-or-global-first/6956
## Learning Class Field Theory: Local or Global First?

I've noticed that there seem to be two approaches to learning class field theory. The first is to first learn about local fields and local class field theory, and then prove the basic theorems about global class field theory from the corresponding local facts. This often means that global class field theory is given the idelic formulation, as local fields have already been covered. Alternatively, I'm about to take a course on class field theory (which is the sequel to an undergraduate course on algebraic number theory and basic zeta/L-functions) which dives directly into global class field theory and will follow the original (1920s) formulations (ideal-theoretic) and proofs of the basic results.

I'm wondering what people's opinions are of the two different approaches to class field theory. Does it make more sense to start local and go global, or is it a better idea to learn the subject more historically? I asked my professor here at Princeton about it, since I was aware that Harvard's CFT course starts with local, and he responded that since what we're really interested in are number fields anyway, it's much more relevant to proceed immediately with global class field theory. Thoughts?

EDIT/UPDATE: Based on input from this thread and more experience, here is the approach I've decided to follow:
1. Learn global class field theory using more elementary proofs, following something like Janusz (or another source if you don't like Janusz's style)
2. Learn the cohomology-heavy proofs of local class field theory. I particularly like Milne's notes for this.
3. Continue and learn the proof of global class field theory using cohomology of ideles. You could just continue in Milne, or try the chapters in Cassels-Frohlich

- 8 I'll just remark that history is repeating itself: the fairly recent proof by Harris and Taylor of local Langlands for GL_n is a global proof, and as far as I know no local proof is currently known. – Kevin Buzzard Nov 27 2009 at 13:15

I should add that the approach I've decided to use is the following, and I recommend that others try it: 1. Learn global class field theory using more elementary proofs, following something like Janusz (or another source if you don't like Janusz's style) 2. Learn the cohomology-heavy proofs of local class field theory. I particularly like Milne's notes for this. 3. Continue and learn the proof of global class field theory using cohomology of ideles. You could just continue in Milne, or try the chapters in Cassels-Frohlich. – David Corwin Aug 23 2010 at 2:58

I looked at several books on CFT. The only one I feel I understood, at least partially, is Weil's "Basic Number Theory". I find Weil's writing very clear. The book is almost self-contained. The main prerequisite is the theory of locally compact abelian groups. I believe this is unavoidable, but I'm not sure. At any rate, this makes things more transparent. [Weil's approach is a local to global one.] – Pierre-Yves Gaillard Aug 23 2010 at 5:14

@Pierre-Yves: You really don't need the theory of locally compact abelian groups to learn class field theory. It might be true that that book is great if you have that prerequisite, but I imagine that many who want to learn class field theory will not have that prerequisite. – David Corwin Aug 23 2010 at 13:35

Dear Davidac897: Thanks for your answer.
I don't doubt that you're right, but I'll still try to make the following point. CFT is about loc. cpt ab. gps (LCAG): adele rings, idele groups, Galois groups, are LCAG, and the morphisms between them are continuous, that is LCAG morphisms. Also, the theorems about LCAG needed to read the book are very clearly stated by Weil, and very easy to understand (as far as their statement is concerned). You can take them for granted. [Why is the adele ring defined as a restricted product? To make it locally compact.] – Pierre-Yves Gaillard Aug 24 2010 at 5:10

## 9 Answers

I learned class field theory from the Harvard two-semester algebraic number theory sequence that Davidac897 alluded to, so I can really only speak for the "local first" approach (I don't even know what a good book to follow for doing the other approach would be, although I found this interesting book review which seems relevant to the topic at hand). This is a tough question to answer, partly because local-first/global-first is not the only pedagogical decision that needs to be made when teaching/learning class field theory, but more importantly because the answer depends upon what you want to get out of the experience of learning class field theory (of course, it also depends upon what you already know). Class field theory is a large subject and it is quite easy to lose the forest for the trees (not that this is necessarily a bad thing; the trees are quite interesting in their own right). Here are a number of different things one might want to get out of a course in class field theory, in no particular order (note that this list is probably a bit biased based on my own experience).

(a) a working knowledge of the important results of (global) class field theory and the ability to apply them to relevant situations. This is more or less independent of the items below, since one doesn't need to understand the proofs of the results in order to apply them. I second Pete Clark's recommendation of Cox's book /Primes of the form x^2 + ny^2/.

Now on to stuff involved in the proofs of class field theory:

(b) understanding of the structure and basic properties of local fields and adelic/idelic stuff (not class field theory itself, but material that might be taught in a course covering class field theory if it isn't assumed as a prerequisite).

(c) knowledge of the machinery and techniques of group cohomology/Galois cohomology, or of the algebraic techniques used in non-cohomology proofs of class field theory. Most of the "modern" local-first presentations of local class field theory use the language of Galois cohomology. (It's not necessary, though; one can do all the algebra involved without cohomology. The cohomology is helpful in organizing the information involved, but may seem like a bit much of a sledgehammer to people with less background in homological algebra.)

(d) understanding of local class field theory and the proofs of the results involved (usually via Galois cohomology of local fields) as done, e.g. in Serre's /Local Fields/.

(e) understanding of class formations, that is, the underlying algebraic/axiomatic structure that is common to local and global class field theory. (Read the Wikipedia page on "class formations" for a good overview.)
In both cases the main results of class field theory follow more or less from the axioms of class formations; the main thing that makes the results of global class field theory harder to prove than the local version is that in the global case it is substantially harder to prove that the class formation axioms are in fact satisfied.

(f) understanding the proofs of the "hard parts" of global class field theory. Depending upon one's approach, these proofs may be analytic or algebraic (historically, the analytic proofs came first, which presumably means they were easier to find). If you go the analytic route, you also get:

(g) understanding of L-functions and their connection to class field theory (Chebotarev density and its proof may come in here). This is the point I know the least about, so I won't say anything more.

There are a couple more topics I can think of that, though not necessary to a course covering class field theory, might come up (and did in the courses I took):

(h) connections with the Brauer group (typically done via Galois cohomology).

(i) examples of explicit class field theory: in the local case this would be via Lubin-Tate formal groups, and in the global case with an imaginary quadratic base field this would be via the theory of elliptic curves with complex multiplication (j-invariants and elliptic functions; Cox's book mentioned above is a good reference for this).

Obviously, this is a lot, and no one is going to master all these in a first course; although in theory my two-semester sequence covered all this, I feel that the main things I got out of it were (c), (d), (e), (h), and (i). (I already knew (b), I acquired (a) more from doing research related to class field theory before and after taking the course, and (f) and (g) I never really learned that well).

A more historically-oriented course of the type you mention would probably cover (a), (f), and (g) better, while bypassing (b-e). Which of these one prefers depends a lot on what sort of mathematics one is interested in. If one's main goal is to be able to use class field theory as in (a), one can just read Cox's book or a similar treatment and skip the local class field theory. Algebraically inclined people will find the cohomology in items (c) and (d) worth learning for its own sake, and they will find it simpler to deal with the local case first. Likewise, people who prefer analytic number theory or the study of L-functions in general will probably prefer the insights they get from going via (g).

I'm not sure I'm reaching a conclusion here: I guess what I mean to say is -- I took the "modern" local-first, Galois cohomology route (where by "modern" we actually mean "developed by Artin and Tate in the 50's") and, being definitely the algebraic type, I enjoyed what I learned, but still felt like I didn't have a good grip on the big picture. (Note: I learned the material out of Cassels and Frohlich mostly, but if I had to choose a book for someone interested in taking the local-first route I'd probably suggest Neukirch's /Algebraic Number Theory/ instead.) Other approaches may give a better view of the big picture, but it can be hard to keep an eye on the big picture when going through the gory details of proving everything.

(PS, directed at the poster, whom I know personally: David, if you're interested in advice geared towards your specific situation, you should of course feel welcome to contact me directly about it.)
- Alison, the link to a book review in your first paragraph is broken and it's not clear from your text what book it is, so only you can fix that. – KConrad Aug 23 2010 at 0:15

1 Edited to fix the link. For further reference, in case the link changes again, it was to an maa.org review, written by Gouvea, of Nancy Childress's Class Field Theory. – Alison Miller Aug 23 2010 at 0:39

@Alison Miller: Could you please elaborate on what you said: "I learned the material out of Cassels and Frohlich mostly, but if I had to choose a book for someone interested in taking the local-first route I'd probably suggest Neukirch's /Algebraic Number Theory/ instead."? Why do you think Neukirch is a better choice? – Brian Apr 7 2011 at 2:49

(caveat: I've read neither book in its entirety, so this is based on only having read parts of both, and for different purposes) I find Neukirch to be less dense, more elegant, and generally a joy to read. Cassels and Frohlich is lecture notes from an instructional conference, which means it has different chapters by different people, and is generally a bit rougher around the edges (in some ways this is a good thing of course). – Alison Miller Apr 7 2011 at 3:49

@Alison Miller: Thanks! – Brian Apr 7 2011 at 16:10

There is no royal road to class field theory -- to understand it well is going to take lots of time and multiple exposures no matter what. That said, when covering this material in courses at UGA I have had some success with the following approach: first discuss the statements of global class field theory in the classical ideal-theoretic language, and give some motivation for these results. For instance, Cox's book Primes of the Form x^2 + ny^2 is good for this: some lecture notes for a course based on Cox's book are available at http://www.math.uga.edu/~pete/primesoftheform.html Then I would recommend studying local class field theory from the perspective of Galois cohomology. For this, Serre's book Local Fields is still a classic; Jim Milne has some very nice lecture notes as well. Only then would I venture into the realm of idele-theoretic global class field theory. But again, these are just my two cents. -

Yes, as you correctly describe it, there are two main approaches to class field theory, the classical (1920s) approach in terms of ideals, and the later (Chevalley-Artin-Tate) approach in terms of ideles and cohomology. The first takes you to the main theorems more quickly and easily, but the second gives you much more. Fortunately, they are not incompatible, so learning the classical approach will be a big help if you then decide to learn the second approach. -

As many people have indicated above, class field theory is large and difficult, and no approach is going to make it easy. My personal experience was that it was crucial to understand the statements of the main theorems of class field theory well before learning any of the proofs. I tried to learn class field theory from many books and teachers before succeeding, and I think this is what made everything click for me. To my mind, the results of class field theory are a beautiful cohesive whole. It is often easy to see how they are consistent with and partially imply each other, while seeing why any one of them is true is very difficult.
For this purpose, I would suggest learning the global statements before the local ones, because they are more elementary and because you probably have more experience with extensions of number fields than with extensions of local fields. I don't think it matters so much what order you learn the proofs in. -

Perhaps no one else will share my opinion but I am a fan of Neukirch's approach to both local and global class field theory as presented in his book on algebraic number theory. Neukirch constructs an abstract framework including several objects and conditions which then induce a concept of a class field theory. All this is modeled upon the situation of local fields and the correspondence between prime elements and Frobenius automorphisms for unramified extensions. Hence, one has a nice motivation for this approach and moreover it is pretty elementary (although I have to admit that the verification of the multiplicativity of the reciprocity morphism is "dirty", but one can just believe this and save some time). In particular, group cohomology is not used. Neukirch then shows how to really get local class field theory from this abstract approach. The verification of the conditions mentioned above is not that hard (the existence theorem requires additional work). So, I think that this is a great path to the general concept of class field theories with local class field theory being the first example and motivation. The point is that from the same abstract framework one can also get global class field theory. This is unfortunately more technical but I think that this also provides a lot of insight. For me the cohomological approach via Nakayama-Tate duality was always a mystery. I think if one does not learn how group cohomology appeared implicitly via the algebra-theoretic considerations in class field theory, then it will remain a mystery. But this may just be a result of my lack of knowledge... One could remark that a drawback of Neukirch's approach is that one does not get information about higher cohomology groups, which is important elsewhere. But as Neukirch's class field axiom implies that the discrete module under consideration gives a class formation, I am not sure if this is really true... -

My experience was similar to that of David Speyer. My first encounter with class field theory was the very motivating "Introduction to the Construction of Class Fields" by H. Cohn. That book motivates through questions on rational triangles, primes represented by quadratic forms, analogies from Riemann surfaces and the use of modular forms, Klein's icosahedron. Another very motivating aspect of it is the topological idea behind Kato's generalization, describing $\pi_1^{ab}$ of a scheme in different ways, and the geometry associated with that and ${\mathbb{F}}_1$-yoga. Concerning the cohomological approach, my impression was opposite to that of Arminius. -

There are three languages for class field theory: the first is analytic, such as Iwasawa's book; the second uses ideals or ideles, such as Neukirch's book; the last is the more powerful cohomological one, such as Milne's notes on his homepage. My experience was local to global, using cohomology. -

I think Iwasawa's book "Local Class Field Theory" is the best one for you. If you cannot find it in your library, maybe I can send an e-version to you. My email is: [email protected]. -

Does anyone have any idea where I could find a book or notes which develop class field theory using elementary analytic methods?
EDIT: See http://mathoverflow.net/questions/8351/reference-for-learning-global-class-field-theory-using-the-original-analytic-proo for the answers to this question. - 3 It would be most appropriate to post this as a question in its own right (or at least as an edit to the original question above), and you're likely to get more answers that way. – Alison Miller Dec 6 2009 at 21:09
http://math.stackexchange.com/questions/32676/transcendental-galois-theory
# Transcendental Galois Theory Is there a good reference on transcendental Galois Theory? More precisely, if $K/k$ admits a separating transcendence basis (or maybe if it is a separably generated extension) it seems to me that many of the usual theorems of Galois theory go through. Moreover, the group $\text{Aut}(K/k)$ seems to have additional structure; namely it should be an algebraic group over $k$. For example, it seems to me that $k(x_1, ..., x_n)/k$ has automorphism group $GL_n(k)$. (EDIT: As Qiaochu Yuan points out, this is incorrect; the automorphism group at least must contain $PGL_{n+1}(k)$, acting via its action on the function field of $\mathbb{P}_k^n$.) This sort of thing must be well-studied; if so, what are the standard references on the subject? I have seen Pete L. Clark's excellent (rough) notes on related subjects here but they seem not to address quite these sorts of questions. - Isn't the automorphism group more something like $\text{PGL}_{n+1}(k)$? – Qiaochu Yuan Apr 13 '11 at 6:11 How do you figure? I could just be being silly but I don't even see an action of $PGL_{n+1}$... – Daniel Litt Apr 13 '11 at 6:16 I think the action is by generalized fractional linear transformations, e.g. when $n = 1$ we can send $x_1$ to any $\frac{ax_1 + b}{cx_1 + d}$ and this has inverse the corresponding inverse fractional linear transformation. – Qiaochu Yuan Apr 13 '11 at 6:22 I'm pretty sure you're right; as I've remarked in my edit, this action should arise via the natural action of $PGL_{n+1}$ on the function field of $\mathbb{P}^n_k$. – Daniel Litt Apr 13 '11 at 6:45 ## 1 Answer For every $n \geq 1$, there is a natural effective action of $\operatorname{PGL}_{n+1}(k)$ on $k(x_1,\ldots,x_n)$. In fact $\operatorname{PGL}_{n+1}(k)$ is the automorphism group of $\mathbb{P}^n_{/k}$, the action being the obvious one induced by the action of $\operatorname{GL}_{n+1}(k)$ on the vector space $k^{n+1}$ in which $\mathbb{P}^n$ is the set of lines. However, no one said this was the entire automorphism group of $k(x_1,\ldots,x_n)$! It is when $n = 1$ -- for instance because every rational map from a smooth curve to a projective variety is a morphism ("valuative criterion for properness"). However, $\operatorname{PGL}_{n+1}(k)$ is known not to be the entire automorphism group of $k(x_1,\ldots,x_n)$ when $n > 1$. Rather, the full automorphism group is called the Cremona group. For $n = 2$ we have a problem in the geometry of surfaces, and it was shown (by Max Noether when $k = \mathbb{C}$) that the automorphism group here is generated by the linear automorphisms described above together with a certain set of simple, well-understood birational maps, called quadratic maps or indeed Cremona transformations. But even when $n = 2$ this automorphism group is not an algebraic group: it's bigger than that. When $n \geq 3$ it is further known that the linear automorphisms and the Cremona transformations do not generate the whole automorphism group, and apparently no one has even a decent guess as to what a set of generators might look like. I had the good fortune of hearing a talk by James McKernan on (in part) this subject within the last few months, so I am a bit more up on this than I otherwise would be. Anyway, he gave us the sense that this is a pretty hopeless problem at present. For instance, see this recent preprint in which a rather eminent algebraic geometer works rather hard to prove a seemingly rather weak result about finite subgroups of the three dimensional Cremona group! 
So, yes, this is a different sort of question from the ones considered in my rough note on transcendental Galois theory. To all appearances it's a much harder question... - Thanks! I should have realized that we could allow birational automorphisms of $\mathbb{P}^n$ as well; I wonder if there is some way of functorially identifying an algebraic subgroup (e.g. look at automorphisms that fix some basepoint on $\mathbb{P}^n$...). – Daniel Litt Apr 13 '11 at 7:12
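As a concrete illustration of the non-linear maps discussed in the answer (a standard textbook example, not something taken from this thread): the standard quadratic Cremona transformation of $\mathbb{P}^2$ is $[x:y:z]\mapsto[yz:xz:xy]$. In the affine coordinates $u=x/z$, $v=y/z$ it acts on the function field $k(u,v)$ by $(u,v)\mapsto(1/u,1/v)$, so it gives an automorphism of $k(u,v)$ over $k$ that does not come from $\operatorname{PGL}_3(k)$. Over $\mathbb{C}$, Noether's theorem cited above says that maps of this kind, together with the linear ones, generate the whole two-dimensional Cremona group.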
http://mathoverflow.net/questions/11343?sort=oldest
## algorithm for calculating the Chow groups of a variety over a finite field

Is there an algorithm for calculating the Chow groups of a variety over a finite field? It is known that $H^{2i,i}_\mathrm{mot}(X,\mathbf{Z}) = CH^i(X)$. In how many cases does this help us? -

Are there any algorithms for varieties over, say, C or Q? – Kevin Lin Jan 10 2010 at 17:21

@Kevin: A^1 is isomorphic to Pic. Since there is no known algorithm for computing Pic of an elliptic curve over Q, there can't be a known algorithm for computing Chow groups over Q either. – David Speyer Jan 10 2010 at 20:08

3 @David: there is an algorithm for computing Pic of an elliptic curve over Q. We just haven't proved it terminates yet! :-) – Kevin Buzzard Jan 10 2010 at 20:52

Fair enough. As you say, the right statement is that there is an algorithm which is conjectured to always terminate and, when it terminates, it computes Pic of an elliptic curve over Q. – David Speyer Jan 20 2010 at 22:02

Hey, have you had any luck with this question? – Dror Speiser Jun 11 2011 at 12:51

## 1 Answer

I am not an expert, but let me point out that computing $CH^0(X)$ (which is freely generated by the irreducible components) is already quite hard. Algorithms do exist in this case, see page 206 of "Ideals, Varieties, and Algorithms" by Cox, Little, O'Shea for references. I know of no way to compute the class groups (which can be identified with $CH^1(X)$ for smooth $X$) in general, but I will be very interested in what other people have to say about this. Of course, in special situations, more is known. For example, the total Chow group of quadric hypersurfaces is known (at least up to tensoring with $\mathbb Q$). -
http://math.stackexchange.com/questions/292806/the-measure-of-one-angle-of-an-octagon-is-twice-that-of-the-other-seven-angles
# The measure of one angle of an octagon is twice that of the other seven angles. What is the measure of each angle?

Help would be greatly appreciated. -

1 – Parth Feb 2 at 14:48

2 It is up to you what you accept, however, my post is not really an answer. It is a comment that can't be posted as a comment. Clayton and amWhy have given actual answers. – robjohn♦ Feb 2 at 19:50

## 3 Answers

Images don't work in comments, so I am posting this even though it is not really an answer. Using Clayton's answer, and because Valentine's Day is just a couple of weeks away, it seemed appropriate to post this image of the octagon:

[image: a heart-shaped octagon] -

Thanks for the display picture! – Parth Feb 2 at 14:46

Prediction #0129384: This will get 100+ upvotes. – Parth Feb 2 at 14:50

1 +1 I want THIS heart for my valentine gravatar - (background blue?) ;-) Can it be made so that all the lengths of the sides are congruent? (hint: challenge!) – amWhy Feb 2 at 14:55

@amWhy: I started out making all the sides congruent, but as the angles are fixed (and therefore all the sides must be parallel to those as drawn), the bottom two sides need to be a different length. – robjohn♦ Feb 2 at 15:01

1 Would the downvoter care to comment? I explained why I was posting a non-answer, although I realize that that doesn't make it an answer. – robjohn♦ Feb 2 at 18:14

Hint: The interior angles of an octagon sum to $1080^\circ$, and you have the equation $$2\theta+\theta+\cdots+\theta=9\theta=1080^\circ.$$ -

+1 magic $9\theta$. – Babak S. Feb 3 at 3:34

The sum of the interior angles of an octagon is $1080^\circ$. There are eight interior angles, one twice the measure of each of the others: $7 \theta + (2\theta) = 1080^\circ$. Now solve for $\theta$ (which is the measure of the 7 equal angles), and then compute $2\theta$, the measure of the angle that is twice the size of the other 7 interior angles. -
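To finish the computation in the hints above: $9\theta=1080^\circ$ gives $\theta=120^\circ$, so seven of the angles measure $120^\circ$ each and the eighth measures $2\theta=240^\circ$; as a check, $7\cdot 120^\circ+240^\circ=1080^\circ$. The $240^\circ$ angle is a reflex angle, which is why the octagon pictured in the first answer is non-convex.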
http://unapologetic.wordpress.com/2010/11/08/the-character-table-of-s4/?like=1&source=post_flair&_wpnonce=a082172354
The Unapologetic Mathematician

The Character Table of S4

Let's use our inner tensor products to fill in the character table of $S_4$. We start by listing out the conjugacy classes along with their sizes:

$\displaystyle\begin{array}{cc}e&1\\(1\,2)&6\\(1\,2)(3\,4)&3\\(1\,2\,3)&8\\(1\,2\,3\,4)&6\end{array}$

Now we have the same three representations as in the character table of $S_3$: the trivial, the signum, and the complement of the trivial representation in the defining representation. Let's write what we have.

$\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^\mathrm{triv}&1&1&1&1&1\\\mathrm{sgn}&1&-1&1&1&-1\\\chi^\perp&3&1&-1&0&-1\\\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\end{array}$

Just to check, we calculate

$\displaystyle\langle\chi^\perp,\chi^\perp\rangle=(1\cdot3\cdot3+6\cdot1\cdot1+3\cdot-1\cdot-1+8\cdot0\cdot0+6\cdot-1\cdot-1)/24=1$

so again, $\chi^\perp$ is irreducible. But now we can calculate the inner tensor product of $\mathrm{sgn}$ and $\chi^\perp$. This gives us a new line in the character table:

$\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^\mathrm{triv}&1&1&1&1&1\\\mathrm{sgn}&1&-1&1&1&-1\\\chi^\perp&3&1&-1&0&-1\\\mathrm{sgn}\otimes\chi^\perp&3&-1&-1&0&1\\\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\end{array}$

which we can easily check to be irreducible. Next, we can form the tensor product $\chi^\perp\otimes\chi^\perp$, which has values

$\displaystyle\begin{aligned}\chi^\perp\otimes\chi^\perp\left(e\right)&=9\\\chi^\perp\otimes\chi^\perp\left((1\,2)\right)&=1\\\chi^\perp\otimes\chi^\perp\left((1\,2)(3\,4)\right)&=1\\\chi^\perp\otimes\chi^\perp\left((1\,2\,3)\right)&=0\\\chi^\perp\otimes\chi^\perp\left((1\,2\,3\,4)\right)&=1\end{aligned}$

Now, this isn't irreducible, but we can calculate inner products with the existing irreducible characters and decompose it as

$\displaystyle\chi^\perp\otimes\chi^\perp=\chi^\mathrm{triv}+\chi^\perp+\mathrm{sgn}\otimes\chi^\perp+\chi^{(5)}$

where $\chi^{(5)}$ is what's left after subtracting the other three characters. This gives us one more line in the character table:

$\displaystyle\begin{array}{c|ccccc}&e&(1\,2)&(1\,2)(3\,4)&(1\,2\,3)&(1\,2\,3\,4)\\\hline\chi^\mathrm{triv}&1&1&1&1&1\\\mathrm{sgn}&1&-1&1&1&-1\\\chi^\perp&3&1&-1&0&-1\\\mathrm{sgn}\otimes\chi^\perp&3&-1&-1&0&1\\\chi^{(5)}&2&0&2&-1&0\end{array}$

and we check that

$\displaystyle\langle\chi^{(5)},\chi^{(5)}\rangle=(1\cdot2\cdot2+6\cdot0\cdot0+3\cdot2\cdot2+8\cdot-1\cdot-1+6\cdot0\cdot0)/24=1$

so $\chi^{(5)}$ is irreducible as well.

Now, we haven't actually exhibited these representations explicitly, but there is no obstacle to carrying out the usual calculations. Matrix representations for $V^\mathrm{triv}$ and $V^\mathrm{sgn}$ are obvious. A matrix representation for $V^\perp$ comes just as in the case of $S_3$ by finding a basis for the defining representation that separates out the copy of $V^\mathrm{triv}$ inside it. Finally, we can calculate the Kronecker product of these matrices with themselves to get a representation corresponding to $\chi^\perp\otimes\chi^\perp$, and then find a basis that allows us to split off copies of $V^\mathrm{triv}$, $V^\perp$, and $V^\mathrm{sgn}\otimes V^\perp$.

4 Comments »

1. Is there a general procedure for filling out the character table, or is it a matter of trying different combinations until you find a new irreducible character orthogonal to the ones you've already got (and iterating until you have a full basis)?
Comment by Joe English | November 9, 2010 | Reply

2. We'll come up with a general straightforward procedure for symmetric groups eventually.

Comment by | November 9, 2010 | Reply

3. [...] We can check this in the case of $S_3$ and $S_4$, since we have complete character tables for both of them: [...]

Pingback by | November 17, 2010 | Reply

4. Small error, it's the complement of the trivial rep, not the signum....

Comment by Charles Waldman | May 2, 2013 | Reply
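For readers who want to recompute the orthogonality checks in the post, here is a small Python script; it is not part of the original post and assumes only the class sizes and character values from the tables above.

```python
from fractions import Fraction

# Conjugacy class sizes of S4, in the order e, (12), (12)(34), (123), (1234)
sizes = [1, 6, 3, 8, 6]
chars = {
    "triv":       [1,  1,  1,  1,  1],
    "sgn":        [1, -1,  1,  1, -1],
    "perp":       [3,  1, -1,  0, -1],
    "sgn_x_perp": [3, -1, -1,  0,  1],
    "chi5":       [2,  0,  2, -1,  0],
}

def inner(a, b):
    """<a,b> = (1/|G|) * sum over classes of (class size) * a * b, for real characters."""
    return Fraction(sum(s * x * y for s, x, y in zip(sizes, a, b)), 24)

for name, chi in chars.items():
    print(name, inner(chi, chi))      # every listed character has norm 1, so each is irreducible

# Multiplicities in the decomposition of perp (x) perp
square = [x * x for x in chars["perp"]]
for name, chi in chars.items():
    print(name, inner(square, chi))   # prints 1, 0, 1, 1, 1: triv + perp + sgn(x)perp + chi5
```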
http://www.purplemath.com/learning/viewtopic.php?f=15&t=2327&p=6763
# The Purplemath Forums

Helping students gain understanding and self-confidence in algebra.

## Discrete Math Logic Problems

Sequences, counting (including probability), logic and truth tables, algorithms, number theory, set theory, etc.

### Discrete Math Logic Problems by toogood4007 on Wed Oct 19, 2011 4:58 am

Can someone help me answer some of these questions?

Problem 1) Let the following predicates be given:
. . .B(x) = "x is boring"
. . .H(x) = "x is a historical essay"
. . .E(x) = "x is expensive"
. . .M(x, y) = "x is more interesting than y"
(a) Write the following statements in predicate logic:
. . .1. All historical essays are boring.
. . .2. There are some boring books that are not historical essays.
. . .3. All boring historical essays are expensive.
(b) Write the following predicate logic statement in everyday English. Don't just give a word-for-word translation; your sentence should make sense.
. . .$\forall x.[H(x)\, \rightarrow\, \exists y.(\neg E(y)\, \wedge \, M(y, x))]$
(c) Formally negate the statement from part (b). Simplify your negated statement so that no quantifier lies within the scope of a negation.

I have finished Problem 1 part (a), but the rest is a blur. I am really lost and any help will be great. Thanks. I solved Problem 1, but I have no idea how to solve Problems 2, 3, and 4.

Problem 2) In this problem, all variables range over the set of all integers. Recall that the relation a | b (read "a divides b") is defined as $a\, |\, b\, \equiv\, ''\exists c.\, b\, =\, a\, \cdot\, c''$
1. Formulate the following statement in English and prove that it is true (for all x, a, b):
. . .$\forall a,b,x.\left(x|a\, \wedge\, x|b\, \rightarrow\, x|(a\, +\, b)\right)$
Using common precedence rules, the above statement should be interpreted as
. . .$\forall a,b,x.\left(\left(\left(x|a\right)\wedge\left(x|b\right)\right)\, \rightarrow\, \left(x|(a\, +\, b)\right)\right)$
2. Same as (1), but for the statement
. . .$\forall a,b,x.\left(x|a\, \wedge\, x|(a\, +\, b)\, \rightarrow\, x|b\right)$
3. Prove again the statement in (2), but rather than proving it from scratch, give a proof using the statement in part (1).

Problem 3) For each of the following statements:
. . .Formulate the statement in predicate logic. You may use all of the
. . .standard logic connectives and quantifiers. You may also use integer
. . .constants (1, 2, 3, etc.), arithmetic operations (+, ×, etc.), the
. . .predicate $Odd(n)\, \equiv\, \exists m.\, n\, =\, 2m\, +\, 1$, and the relation a|b.
. . .State if the statement is true or false.
. . .Prove your answer correct; i.e., prove either the statement or the
. . .negation of the statement. In all statements, the variables range over
. . .the set of nonnegative integers.
1. The product of any two odd integers is odd.
2. For any two numbers x and y, the sum x + y is odd if and only if the product x × y is odd.
3. For all integers a, b, and c, if a divides b and b divides c, then a divides c.

Problem 4) Also in this problem, all variables range over the nonnegative integers. Recall the definition of "prime":
. . .$Prime(n)\, =\, ''\left(n\, >\, 1\right)\, \wedge\, \left(\forall m.(m|n)\, \rightarrow\, (m=1)\, \vee\, (m=n)\right)'';$
i.e., a number bigger than 1 is prime if and only if it is only divisible by 1 and itself. A number is composite if it can be written as the product of smaller numbers:
. . .$Composite(n)\, =\, ''\exists a, b.\, (a<n)\, \wedge\, (b<n)\, \wedge\, (n=a\times b)''$
Prove that any number bigger than 1 is prime if and only if it is not composite. You can break up your proof into two parts as follows:
. . .1. First prove that for any n > 1, if n is prime, then n is not composite.
. . .2. Next prove that for any n > 1, if n is not composite, then n is prime.
In the solution to this problem you can use (without proof) the fact that for any positive integers x and y, if x | y, then x < y. Remember that your solution will be graded both for correctness and clarity. This is especially important for this problem, as the proof involved is a bit longer than the proofs in the previous problems.

### Re: Discrete Math Logic Problems by nona.m.nona on Wed Oct 19, 2011 2:33 pm

toogood4007 wrote: I have no idea how to solve Problems 2, 3, and 4.

Have proofs not been covered in your class yet? Because that is a topic of study in its own right. If you truly "have no idea how to" proceed, then we probably cannot assist. However:

Problem 2) In this problem, all variables range over the set of all integers. Recall that the relation a | b (read "a divides b") is defined as $a\, |\, b\, \equiv\, ''\exists c.\, b\, =\, a\, \cdot\, c''$
1. Formulate the following statement in English and prove that it is true (for all x, a, b):
. . .$\forall a,b,x.\left(x|a\, \wedge\, x|b\, \rightarrow\, x|(a\, +\, b)\right)$

Since you did Problem 1, part (b), you can at least do the "formulate" part of this exercise. Turning to the proof part, start by applying the definition: If x|a, then there exists some q so qx = a. If x|b, then there is some r so rx = b. Use substitution in "a + b", factor, and apply the definition again.

2. Same as (1), but for the statement
. . .$\forall a,b,x.\left(x|a\, \wedge\, x|(a\, +\, b)\, \rightarrow\, x|b\right)$

Apply the definition again. See where this leads.

Problem 3) For each of the following statements:
. . .Formulate the statement in predicate logic....
. . .State if the statement is true or false.
. . .Prove your answer correct....
1. The product of any two odd integers is odd.

Since you did Problem 1, part (b), you can do the "formulate" part for each of these. For this particular exercise, what have you found when you multiplied various pairs of odd integers? When you stated two generic odd integers in terms of the definition, multiplied out these expressions, and simplified, what did you get?

2. For any two numbers x and y, the sum x + y is odd if and only if the product x × y is odd.

When you added factors of odd composite numbers, what did you find?

3. For all integers a, b, and c, if a divides b and b divides c, then a divides c.

When you applied the definition, what did you find?
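Spelling out where the first hint leads, as a worked illustration (this completion is not part of the original reply): if $x|a$ and $x|b$, then by the definition there are integers $q$ and $r$ with $a = qx$ and $b = rx$, so $a + b = qx + rx = (q + r)x$, and hence $x|(a+b)$. The same pattern handles the second statement of Problem 2: if $x|a$ and $x|(a+b)$, write $a = qx$ and $a + b = sx$; then $b = (a+b) - a = (s - q)x$, so $x|b$.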
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 11, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8773210644721985, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/7666?sort=votes
## Lax Functors and Equivalence of Bicategories?

Lax functors of bicategories were introduced at the very inception of bicategories, and I'm trying to get a better feel for them. They are the same as ordinary 2-functors, but you only require the existence of a coherence morphism, not an isomorphism. The basic example I'm looking at is when you have a lax functor from the singleton bicategory to a bicategory B. These are just an object b in B with a monad T on b.

My Question: If I have an equivalence of bicategories A ~ A', do I get equivalent bicategories of lax functors Fun(A, B) and Fun(A', B)? If not, is there any relation between these two categories?

So let me be more precise on the terminology I'm using. I want to look at lax functors from A to B. These are more general than strong/pseudo and much more general than just strict functors. For a lax functor we just have a map like this: $F(f) F(g) \to F(fg)$; for a strong or pseudo functor this map is an isomorphism, and for a strict functor it is an identity. I don't care about strict functors.

I'm guessing that these form a bicategory Fun(A,B), with the 1-morphisms being some sort of lax natural transformations, etc, but I don't really know about this. Are there several reasonable possibilities?

When I said equivalence between A and A' what I meant was I had a strong functor F: A --> A' and a strong functor G the other way, and then equivalences (not isomorphisms) FG ≃ 1, and GF ≃ 1. This seems like the most reasonable weak notion of equivalence to me, but maybe I am naive. I haven't thought about equivalences using lax functors. Would they automatically be strong?

What I really want to understand is what sort of functoriality the lax functor bicategories Fun(A, B) have?

- You're asking about lax functors and not about pseudofunctors, right? Just making sure. – Harry Gindi Dec 3 2009 at 14:23
- 2 Chris, could you make a couple of things in the question more explicit? First, when you say "equivalence of bicategories", do you intend the functors F and G going back and forth to be weak/pseudo, or just lax? And do you intend FG to be isomorphic to the identity, or just equivalent? (Standard terminology, which I'm no fan of, would choose the first answer to both questions.) Finally, are the 1-morphisms in Fun(A, B) the weak/pseudo natural transformations, or the lax ones? – Tom Leinster Dec 3 2009 at 16:50
- As far as the 1-morphisms in Fun(A,B) are concerned, I would be happy to know what happens in either case, i.e. where we use only weak/pseudo transformations or when we use the lax ones. – Chris Schommer-Pries Dec 3 2009 at 18:07
- Thanks for clarification. I agree, this is very much the most reasonable notion of equivalence. It's just that in the (questionable) classical terminology, this is called "biequivalence", with "equivalence" reserved for when FG is actually isomorphic to the identity. – Tom Leinster Dec 3 2009 at 18:56
- 1 Thank you for asking this question! I no longer feel guilty about not believing in lax functors. – Reid Barton Dec 3 2009 at 19:49

## 2 Answers

First of all, for any two bicategories A and B, there is a bicategory $Fun_{x,y}(A,B)$ where x can denote either strong, lax, or oplax functors, and y can denote either strong, lax, or oplax transformations.
There's no problem defining and composing lax and oplax transformations between lax or oplax functors, and the lax/oplax-ness doesn't even have to match up. It's also true that two x-functors are equivalent in one of these bicategories iff they're equivalent in any other one. That is, any lax or oplax transformation that is an equivalence is actually strong/pseudo.

Where you run into problems is when you try to compose the functors. You can compose two x-functors and get another x-functor, but in general you can't whisker a y-transformation with an x-functor unless x = strong, no matter what y is, and moreover if y isn't strong, then the interchange law fails. Thus you only get a tricategory with homs $Fun_{x,y}(A,B)$ if x=y=strong. (In particular, I think this means that there isn't a good notion of "equivalence of bicategories" involving lax functors.)

For a fixed strong functor $F\colon A\to A'$, you can compose and whisker with it to get a functor $Fun_{x,y}(A',B) \to Fun_{x,y}(A,B)$ for any x and y. However, the same is not true for transformations $F\to F'$, and the answer to your question is (perhaps surprisingly) no! The two bicategories are not equivalent.

Consider, for instance, A the terminal bicategory (one object, one 1-morphism, one 2-morphism) and A' the free-living isomorphism, considered as a bicategory with only identity 2-cells. The obvious functor $A' \to A$ is an equivalence. However, a lax functor from A to B is a monad in B, and a lax functor from A' to B consists of two monads and a pair of suitably related "bimodules". If some lax functor out of A' is equivalent to one induced by composition from A (remember that "equivalence" doesn't depend on the type of transformation), then in particular the two monads would be equivalent in B, and hence so would their underlying objects. But any adjunction in B whose unit is an isomorphism gives rise to a lax functor out of A', if we take the monads to be identity 1-morphisms, the bimodules to be the left and right adjoint, and the bimodule structure maps to be the counit and the inverse of the unit. And of course one can have adjunctions between inequivalent objects.

By the way, I think your meaning of "equivalence" for bicategories is becoming more standard. In traditional literature this sort of equivalence was called a "biequivalence," because for strict 2-categories there are stricter sorts of equivalence, where you require either the functors to be strict, or the two composites to be isomorphic to identities rather than merely equivalent to them, or both. These stricter notions don't really make much sense for bicategories, though. For instance, in a general bicategory, even identity 1-morphisms are not isomorphisms, so if "equivalence" were to demand that FG be isomorphic to the identity, a general bicategory wouldn't even be equivalent to itself!

-

I was surprised (and delighted) to discover that the answer is no. Here is an example; in fact it's a special case of the example Mike referred to, but worked out in more detail.

Let S be a set and cd(S) the codiscrete category on S (Hom(x, y) = • for every x and y in S). Let BSet denote the one-object bicategory corresponding to the monoidal category (Set, ×). As remarked here on the nlab, a lax functor cd(S) → BSet is the same data as a category with object set S.
However, there is a catch: the morphisms (whether lax or strong natural transformations) are not just functors between categories fixing the objects. Instead, if C and D are categories corresponding to two lax functors cd(S) → BSet, then a morphism (X, F) from C to D consists of • for each object s of S, a set X(s), • for each pair of objects s and t of S, a map F(s, t) : C(s, t) × X(t) → X(s) × D(s, t), • such that obvious identity and composition laws hold. Depending on which notion of natural transformation you want to take, the map F(s, t) might be an isomorphism, or it might go the other way. I'll consider all three possibilities simultaneously. We can compose 1-morphisms: the composition (Z, H) of (X, F) and (Y, G) has Z(s) = X(s) × Y(s), and H formed from F and G in an obvious way. The identity morphism has X(s) = • and F(s, t) = id. We also have 2-morphisms from (X, F) to (Y, G), which are families of maps X(s) → Y(s) making some diagrams involving X, F, Y, G commute. This doesn't yet seem to be any familiar 2-category. Let's extract the maximal 2-groupoid by taking only invertible 1- and 2-morphisms. (That's certainly an equivalence-invariant thing to do.) The invertible 2-morphisms are the ones where the maps X(s) → Y(s) are isomorphisms. For (X, F) to be invertible, we first need for every s, an object Y(s) and an isomorphism X(s) × Y(s) = •. That can only happen if X(s) = • for every s. (This is the only property of (Set, ×) I really care about: it has no invertible objects besides •.) Now F is just an ordinary functor C → D which is the identity on objects, and for (X, F) to be invertible F needs to be an isomorphism. When (X, F) and (Y, G) are invertible, so X(s) = Y(s) = •, there is at most one 2-morphism between them, exactly when F and G are equal as maps C(s, t) → D(s, t). In conclusion, the maximal 2-groupoid contained in the 2-category of lax functors cd(S) → BSet is the 1-groupoid of categories with object set S and isomorphisms fixing the objects. Note that it didn't matter for the argument which notion of natural transformation I chose. Now as S varies over nonempty sets, these resulting groupoids are nonequivalent categories. However, surely the 2-categories cd(S) are all equivalent under any reasonable definition, since they equivalent 1-categories. -
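An editorial note, not part of the thread: both the question and the answers use the fact that a lax functor out of the terminal bicategory is the same thing as a monad. For convenience, here is the standard unpacking of that dictionary in LaTeX.

```latex
% Data of a lax functor F from the terminal bicategory 1 (one object *,
% one 1-cell id_*, one 2-cell) to a bicategory B:
\begin{align*}
  b   &= F(*), \\
  t   &= F(\mathrm{id}_*) : b \to b, \\
  \mu &: t \circ t \Rightarrow t
       \quad\text{(the lax compositor $F(\mathrm{id})\,F(\mathrm{id}) \to F(\mathrm{id}\circ\mathrm{id})$)}, \\
  \eta &: 1_b \Rightarrow t \quad\text{(the lax unitor)}.
\end{align*}
% The coherence axioms for a lax functor are exactly the associativity and
% unit axioms for (b, t, mu, eta), i.e. precisely a monad in B.
```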
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344286322593689, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/294156/is-a-finite-dimensional-vector-space-always-countable
# Is a finite dimensional vector space always countable? Given a vector space of finite dimension, can we always find an injective map to the natural numbers? z. - 5 $\Bbb R$ is a one-dimensional vector space over $\Bbb R$. – Brian M. Scott Feb 4 at 1:53 @BrianM.Scott, is this true in ZF+AD? – alancalvitti Feb 4 at 2:59 1 @alancalvitti: The reals are always uncountable. – Brian M. Scott Feb 4 at 3:05 @BrianM.Scott, what about in ZF? 1. Vector spaces may have no bases. 2. Vector spaces may have two bases with different cardinalities. (Herrlich AC) – alancalvitti Feb 4 at 3:07 1 Furthermore what Brian wrote is always true. Every field is a one dimensional vector space over itself. – Asaf Karagila Feb 4 at 9:14 show 1 more comment ## 1 Answer Depends over what field. If the field is finite or countable, e.g. $\mathbb Q$, then yes. If the field is uncountable, e.g. $\mathbb R$, then no. The reason is that $|\mathbb F^n|=|\mathbb F|^n$, and if $|\mathbb F|\leq\aleph_0$ then $|\mathbb F|^n\leq\aleph_0$. - 1 You rock, thank you. – Ziggy Feb 6 at 1:04
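An editorial illustration, not part of the thread: for the countable field $\mathbb{Q}$ the cardinality argument $|\mathbb F|^n\leq\aleph_0$ can be made explicit. The Python sketch below (all names are my own) builds an injection from $\mathbb Q^n$ (for a fixed $n$) into $\mathbb N$ using the Cantor pairing function; composing with coordinates in a chosen basis gives an injection from any finite-dimensional $\mathbb Q$-vector space into the naturals.

```python
from fractions import Fraction

def nat_of_int(k):
    """Injective map Z -> N:  0, -1, 1, -2, 2, ...  |->  0, 1, 2, 3, 4, ..."""
    return 2 * k if k >= 0 else -2 * k - 1

def pair(m, n):
    """Cantor pairing function, a bijection N x N -> N."""
    return (m + n) * (m + n + 1) // 2 + n

def nat_of_rational(q):
    """Injective map Q -> N, using the reduced numerator/denominator."""
    q = Fraction(q)                       # lowest terms, denominator > 0
    return pair(nat_of_int(q.numerator), q.denominator)

def nat_of_tuple(qs):
    """Injective map Q^n -> N for a fixed length n, by folding the pairing."""
    code = 0
    for q in qs:
        code = pair(code, nat_of_rational(q))
    return code

# Distinct vectors in Q^3 receive distinct codes:
print(nat_of_tuple((Fraction(1, 2), Fraction(-3), Fraction(0))))
print(nat_of_tuple((Fraction(1, 2), Fraction(-3), Fraction(1, 7))))
```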
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8148651123046875, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/118315/infinitely-many-units-in-mathbbz-sqrtd-for-any-d1/118567
# infinitely many units in $\mathbb{Z}[\sqrt{d}]$ for any $d>1$. I am working through Neukirch's Algebraic Number Theory on my own. Exercise 6 in Section 1 (page 5) is to show that the ring $\mathbb{Z}[\sqrt{d}]$, for any squarefree rational integer $d>1$, has infinitely many units. I know that in $\mathbb{Z}[\sqrt 2]$ there are infinitely many units, because $(\sqrt{2} + 1)(\sqrt{2} - 1) = 1$ and then taking $n$th powers shows that $(\sqrt{2} + 1)^n$ is a unit for any $n\ge 1$. Similarly in $\mathbb{Z}[\sqrt{3}]$, we have $(2+\sqrt{3})(2-\sqrt{3}) = 1$, and then $(2+\sqrt{3})^n$ for $n\ge 1$ is an infinite family of units. I can find other "fundamental units" for other specific values of $d$. But it seems I have to show that the (Pell) equation $a^2 - db^2 = \pm 1$, for any $d>1$, has an integer solution $(a, b) \ne (\pm 1, 0)$, because I know if I can find one solution, then I can get infinitely many. But from my limited knowledge of Pell's equation this is a difficult problem (using techniques such as continued fractions.) Maybe there is a simpler nonconstructive proof that I'm missing. Any hints or suggestions? - 3 You can write down a proof by specializing one of the standard proofs of Dirichlet's unit theorem (certain aspects of the proof simplify). I don't know if this is the intended solution though. – Qiaochu Yuan Mar 9 '12 at 19:21 Actually finding a solution to the Pell equation is a computational pain, but as for existence, to quote Wikipedia's page on it: "Lagrange proved that for any natural number n that is not a perfect square there are x and y > 0 that satisfy Pell's equation." – Hurkyl Mar 9 '12 at 19:24 I'm deleting my answer because I realized that it didn't really answer your question. Assuming there is a fundamental unit (which is assured by Dirichlet's unit theorem), the method I gave will work to find the unit. However, I can't think of a easier way of showing existence than either Qiaochu's suggestion above or using continued fractions. – Dane Mar 10 '12 at 5:43 1 @rgb: So you first ask for a proof, and now it turns out that what you really want is to read Neukirch's mind. Or to complain that the book is not as self-contained as advertised. From the book, all I can tell is that (1) the chapter "answers to all exercises" is missing, and (2) it is too late to ask Jürgen Neukirch (from the Foreword). For all I can tell your exercise 6 quite a bit harder than its place would suggest. However the proof I gave is easier than a first glance suggests; it's just that a few technical details need to be settled for a complete and explicit proof. – Marc van Leeuwen Mar 12 '12 at 15:19 1 @rgb: My proof is non-constructive on a double count, if that is any merit: it is by contradiction form the non-existence of a fundamental unit, and Minkowski's theorem is also existence throught contradiction. I rather doubt there is a really simple proof; I've done all points except existence of a fundamental unit with my students without much pain, but upon consulting collegues who've also done this we could not come up with better than below for existence. Also note that the known bounds for where the unit can be found are extremely bad. – Marc van Leeuwen Mar 12 '12 at 17:47 show 2 more comments ## 2 Answers I don't actually know any of the standard proofs of Dirichlet's unit theorem that Qiaochu Yuan refers to, but I think they might use Minkowski's theorem about the existence of nonzero lattice points in sufficiently large centrally symmetric convex subsets of $\mathbf R^n$. 
I'll give a proof for the specific question asked here, which uses Minkowski's theorem, but only in the very simple case of a parallelogram. I'll write the proof top-down, so that one sees how the theorem is used before I'll state (and prove) it. First I recall some generalities about the ring $R=\mathbf Z[\sqrt d]$ for $d$ a positive squarefree number, to get started. The additive group of $R$ is free Abelian of rank $2$ with generators $1,\sqrt d$. The norm map $N:R\to\mathbf Z$ given by $a+b\sqrt d\mapsto a^2-db^2$ is multiplicative, and the units of $R$ are precisely the elements with norm $\pm1$. One has the non-trivial unit $-1$, but it is of finite order; the point to prove is therefore that the subgroup of positive units is non-trivial (once a positive unit${}\neq1$ is found, its powers form an infinite set of units). I will reason by contradiction, so assume that $1$ is the unique positive unit of $R$. The first step will be to show that this would imply that for any $n\in\mathbf N$, the number of positive $r\in R$ with $|N(r)|=n$ is finite, in fact at most $n^2$. Lemma. For any $n\in\mathbf N_{>0}$, the number of principal ideals of $R$ that contain $n$ is at most $n^2$. Proof. Since these ideals all contain $nR$, they all map to principal ideals of $R/nR$, and the mapping is injective. The number of principal ideals of $R/nR$ cannot exceed the number $n^2$ of its elements. QED The bound given here is far from sharp, but finiteness is all we need. Under the hypothesis that $1$ is the unique positive unit of $R$, two positive elements of $R$ generate distinct principal ideals, and if $|N(r)|=n$, the ideal generated by $r$ contains $n$, so the lemma justifies our claim. This means that for every $M>0$ there is some $\varepsilon_M>0$ such that for all $a,b\in\mathbf Z$ with $0\neq|a+b\sqrt d|<\varepsilon_M$ one has $|N(a+b\sqrt d)|\geq M$. We shall show that for sufficiently large $M$ this contradicts Minkowski's theorem. Note that $N(a+b\sqrt d)=(a+b\sqrt d)(a-b\sqrt d)$, so we can bound $N(a+b\sqrt d)$ above if in addition to the value $a+b\sqrt d$ we also bound its conjugate $a-b\sqrt d$. Now the linear endomorphism of $\mathbf R^2$ sending $\binom ab\mapsto\binom{a+b\sqrt d}{a-b\sqrt d}$ has determinant $-2\sqrt d$, so the conditions $|a+b\sqrt d|<x$ and $|a-b\sqrt d|<y$ define the interior of a parallelogram of area $4\frac{xy}{2\sqrt d}$, for any $x,y>0$. Minkowski's theorem says such a parallelogram will contain a nonzero lattice point whenever its area is greater than $2^2=4$. So here is how to obtain a contradiction: take any $M>2\sqrt d$ and put $$x_0=\min\{ z\in R \mid z>0 \land |N(z)|\leq M\}\qquad\text{and}\qquad y_0=\frac M{x_0}.$$ Then $\frac{x_0y_0}{2\sqrt d}>1$, so Minkowski's theorem ensures the existence of $a,b\in\mathbf Z$, not both $0$, with $|a+b\sqrt d|<x_0$ and $|a-b\sqrt d|<y_0$. But then on one hand $|N(a+b\sqrt d)|>M$ by the choice of $x_0$ (and the fact $N(-z)=N(z)$), but on the other hand $|N(a+b\sqrt d)|=|a+b\sqrt d||a-b\sqrt d|<x_0y_0=M$, a contradiction. Minkowski's theorem. Any centrally symmetric convex subset $S$ of $\mathbf R^d$ of volume greater than $2^d$ contains a nonzero element of $\mathbf Z^d$. Proof. The map $f:\mathbf R^d\to(2\mathbf Z)^d$ is locally area-preserving, so its restriction to $S$ cannot be injective since the total area at arrival is $2^d$. 
If $s,s'\in S$ have $s\neq s'$ and $f(s)=f(s')$ then by central symmetry $-s'\in S$, and by convexity $\frac{s-s'}2\in S$; therefore, since $f(s-s')\in(2\mathbf Z)^d$, one has $\frac{s-s'}2\in S\cap(\mathbf Z^d\setminus\{0\})$. QED - Here is a link to incredible notes from a class given by Keith Conrad on ANT: http://www.math.uconn.edu/~salisbury/notes/AlgNumThy.pdf On page 18 and 19 (of the actual notes, rather than the pdf count) is a method for the case of $\mathbb{Z}[\sqrt{2}]$. Maybe you can apply it to other cases. -
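An editorial aside, not part of either answer: the argument above is non-constructive, but for any particular small $d$ one can find a non-trivial solution of $a^2 - db^2 = \pm 1$ by brute force and then take powers, exactly as in the $\sqrt 2$ and $\sqrt 3$ examples from the question. A rough Python sketch (illustrative only, not an efficient Pell solver; the function names are mine):

```python
from math import isqrt

def fundamental_unit(d, max_b=10**6):
    """Search for the smallest b >= 1 with a^2 - d*b^2 = +1 or -1 solvable;
    then a + b*sqrt(d) is a unit of Z[sqrt(d)].  Returns (a, b, norm)."""
    for b in range(1, max_b):
        for norm in (1, -1):
            a_sq = d * b * b + norm        # want a^2 = d*b^2 + norm
            a = isqrt(a_sq)
            if a * a == a_sq:
                return a, b, norm
    raise ValueError("no unit found below the search bound")

def unit_powers(d, k):
    """(x, y, norm) for (a + b*sqrt(d))^1, ..., ^k; every norm is +/-1,
    so each power is again a unit, giving infinitely many of them."""
    a, b, _ = fundamental_unit(d)
    x, y, out = a, b, []
    for _ in range(k):
        out.append((x, y, x * x - d * y * y))
        x, y = a * x + d * b * y, a * y + b * x   # multiply by a + b*sqrt(d)
    return out

for d in (2, 3, 5, 6, 7, 10):
    print(d, fundamental_unit(d))
print(unit_powers(2, 4))   # powers of 1 + sqrt(2); the norms alternate -1, +1, ...
```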
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 92, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950896680355072, "perplexity_flag": "head"}
http://mathoverflow.net/questions/65463/hausdorff-dimension-for-invariant-measure/65473
Hausdorff dimension for invariant measure? Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) A fractal set has a Hausdorff dimension. In some cases, we may generate a fractal by iterating $f,$ and let the fractal be the set of starting points $x$ such that $|f^{\circ n}(x)|$ is bounded as $n$ grows. (The julia set and the sierpinski triangle are such sets, if one allows $f$ to be a Hutchinson operator). We may also have an invariant measure, $\mu,$ that is, $\mu(A) = \mu(f^{-1}(A)).$ The support of $\mu$ is the fractal set. My question is: is there a way to modify the "dimension" notion to take this invariant measure into account somehow? Some parts of the fractal might be more dense, and thusly should "contribute more" to the dimension. An idea would be to use box-counting, but instead of just counting if it is occupied or not, one uses the invariant measure on the box instead. Has this been studied? - 5 Answers There are a wide variety of notions of dimension of a measure. Your basic intuition is completely correct: for a dynamical system, the dimension of a natural invariant measure provides more relevant information than the dimension of the invariant set, since the system may spend more time in certain parts of the space. For sufficiently homogeneous measures, all reasonable notions of dimension will agree. By sufficiently homogeneous'' I mean something very precise: that $$C^{-1} \, r^s \le \mu(B(x,r)) \le C\, r^s$$ for some constant $C\ge 1$, some $s\ge 0$ and all points $x$ in the support of $\mu$. Of course the dimension in this case is $s$. Such measures are often called Ahlfors-regular, and an example is the natural measure on the middle-thirds Cantor set. For more general measures, the local dimension is one of the most important concepts and has already been mentioned: $$\dim(\mu,x)=\lim_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}.$$ But this is really a function of the point $x$ (and not even, as the limit in the definition may not exist, although one can always speak of upper and lower local dimensions). There are several ways to globalize the information given by the local dimensions. Perhaps the easiest is to take the essential supremum/infimum of the upper/lower local dimensions. This results in four global concepts of dimensions, known as upper/lower packing/Hausdorff dimensions of the measure. They turn out (somewhat surprisingly) to be closely connected to the dimensions of the sets the measure sees''. For example, the upper Hausdorff dimension of a probability measure $\mu$ (that is, the essential supremum of the lower local dimensions), is the same as the infimum of the Hausdorff dimension of $A$ over all Borel sets $A$ of full measure. A finer study is provided by the multifractal spectrum of a measure $\mu$: for each $\alpha$, we form the level set $E_\alpha$ of all points $x$ where $\dim(\mu,x)=\alpha$. Then we try to understand how the size of $E_\alpha$ depends on $\alpha$, for example by studying the function $\alpha\to \dim_H(E_\alpha)$. There are (many!) other useful concepts of dimension which are not directly related to local dimension. In computing lower bounds for the Hausdorff dimension, the potential method is widely applicable: if a measure $\mu$ satisfies that the energy integral $$I_s(\mu) = \int \frac{d\mu(x)\, d\mu(y)}{|x-y|^s}$$ is finite, then the support of $\mu$ has Hausdorff dimension at least $s$. So it makes sense to think of $\sup\{s: I_s(\mu)<\infty\}$ as a notion of dimension of $\mu$. 
This is often called the (lower) correlation dimension, and is one instance of a more general family of dimensions indexed by a real number $q$ (correlation dimension corresponds to $q=2$, and has several alternative definitions, perhaps pointing to its importance). Yet another notion of dimension has a dynamical underpinning. Given a probability measure $\mu$ say on the unit cube $[0,1]^d$, we may consider the entropy $H_k(\mu)$ of $\mu$ with respect to the partition into dyadic cubes of side length $2^{-k}$. We then define the entropy (also called information) dimension of $\mu$ as $$\lim_{k\to\infty} \frac{H_k(\mu)}{k\log 2}.$$ This is just a sample of the diverse zoo of dimensions of a measure. Which ones to use depends on the context and what you are able to compute/prove. Coming back to invariant measures, it is very often the case that the local dimension exists and takes a constant value at almost every point. Such measures are called exact dimensional, and have the property that lower and upper Hausdorff dimension, as well as entropy dimension, are all equal to this almost sure value. (But correlation dimension may be strictly smaller, and the multifractal spectrum may still be very rich; in other words, even though attained on a set of measure zero, other local dimensions may still be relevant). Proving that measures invariant under certain class of dynamics are exact dimensional may be very challenging. Eckmann and Ruelle conjectured in 1984 that hyperbolic measures ergodic a $C^{1+\delta}$ diffeomorphism are exact dimensional. This was proved by Barreira, Pesin and Schmeling in 1999; the paper appeared in Annals. For invariant measures, there is often a strong connection between their dimension and other dynamical characteristics (at least generically). The conformal expanding case is the easiest: in this case one has the well-known formula dimension=entropy/Lyapunov exponent". The nonconformal situation is much harder, but still a lot of deep research has been done, for example Ledrappier-Young theory. - Thank you very much! This was a great answer! – Per Alexandersson May 20 2011 at 22:06 You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Hausdorff dimension of a measure is studied, yes. The mathematical texts should treat this. Falconer, Fractal Geometry 2nd edition p. 209 Edgar, Integral, Probability, and Fractal Measures p. 123 - The way it's usually done is as follows: $$\dim_H \mu = \inf \{ \dim_H Z \mid \mu(Z) = 1 \}.$$ You can also study box dimension of measures, but there you take an infimum over all sets $Z$ with $\mu(Z) \geq 1-\epsilon$, and then a limit as $\epsilon \to 0$. In addition to the books Gerald mentions, you can find a comprehensive discussion of this in Dimension Theory in Dynamical Systems by Yakov Pesin, and a more introductory discussion in Chapter 4 of Lectures on Fractal Geometry and Dynamical Systems by Yakov Pesin and Vaughn Climenhaga. - There's a very good notion of "local" dimension of a measure at a point $x$: $$\dim_x(\mu) = \lim_{r\rightarrow 0}\frac{\log\mu(B_r(x))}{\log r}$$ where $B_r(x)$ is the ball of radius $r$ centered at $x$. (Intuitively, we expect that in a $d$-dimensional space, the volume of a ball is proportional to the $d$th power of the radius, which immediately leads to this definition.) 
In general, the local dimension isn't defined everywhere and depends on $x$ when it is, but under certain conditions, it is constant $\mu$-almost everywhere, in which case it makes sense to call it the dimension of the measure. In many cases, the measure $\mu$ is more interesting than its support, and the dimension defined thusly will reflect this. For example, consider a stochastic map on the interval $[0,1]$ that maps it affinely onto $[0,1/4]$, $[1/4,3/4]$, or $[3/4,1]$, each with probability 1/3. If $\mu$ is the invariant measure, then the support of $\mu$ is the whole interval, but you can check that $\dim_x(\mu)=\frac{3\log 3}{5\log 2}$ for $\mu$-a.e. $x$ (if I didn't screw it up). You can also verify that there are plenty of exceptional $x$ for which $\dim_x(\mu)$ is something else or undefined.

-

Falconer's other book, The geometry of fractal sets, contains a somewhat detailed discussion. The main surprising result is that the dimension of a measure can be smaller than the dimension of its support. This follows from what is called the thermodynamic formalism.

-
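An editorial illustration, not taken from any of the answers: for the natural measure on the middle-thirds Cantor set, cited in the accepted answer as the model Ahlfors-regular example, the local dimension at a typical point can be estimated by plain sampling and compared with the exact value $\log 2/\log 3\approx 0.6309$. The Python sketch below uses my own function names.

```python
import random
from math import log

def cantor_sample(depth=40):
    """Sample from the natural measure on the middle-thirds Cantor set:
    ternary digits drawn uniformly from {0, 2}."""
    x, scale = 0.0, 1.0
    for _ in range(depth):
        scale /= 3.0
        x += random.choice((0, 2)) * scale
    return x

def local_dim_estimate(x, samples, r):
    """Estimate log mu(B(x, r)) / log r by the fraction of samples within r of x."""
    mass = sum(1 for s in samples if abs(s - x) < r) / len(samples)
    return log(mass) / log(r) if mass > 0 else float("nan")

random.seed(0)
samples = [cantor_sample() for _ in range(200_000)]
x = cantor_sample()
for r in (3.0**-4, 3.0**-6, 3.0**-8):
    print(r, local_dim_estimate(x, samples, r))
print("log 2 / log 3 =", log(2) / log(3))
```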
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 6, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938190221786499, "perplexity_flag": "head"}
http://asmeurersympy.wordpress.com/2011/05/
# Aaron Meurer's SymPy Blog

My blog on my work on SymPy and other fun stuff.

## Update for the Beginning of the Summer

May 26, 2011

So the Google Summer of Code coding period officially started on Monday, and in solidarity with the students, I will be blogging once a week about various things. Some of the posts will just be about what I have done that week. Others will be continuations of my Risch Algorithm series of blog posts (see parts 0, 1, 2, and 3). This week, I will do the former.

I have spent the past several weeks preparing for the release. The main thing right now is to clear out the issues that are blocking the release. I merged in a branch that included all of my polys related fixes from my integration3 branch. Along with a similar branch from earlier that had some non-polys related fixes (like some fixes to the integrals), all of my fixes from integration3 not directly related to my implementation of the Risch Algorithm should now be in master. Once those issues are fixed, I should be ready to make a release candidate for the release.

The last release was over a year ago (March 2010), and we've racked up quite a few changes since then. A few big ones are:

• The new polys. This is (in my opinion) the biggest change. Because of the new polys, everything is faster, and simplification is far more powerful than it was before. This is for a few reasons. The biggest reason is that the new polys allow polynomials in any kind of expression, not just Symbols. This means that you can do things like factor the expression $\cos^2{x} + 2\cos{x} + 1$. As you can imagine, many simplifications of complex expressions are nothing more than polynomial simplifications, where the polynomial is in some function. In addition to this, the new polys have a much faster implementation, and if you have gmpy installed, it will use that and be even faster. There are also several faster algorithms, like a faster algorithm for multivariate factorization, that have been implemented. These all lead to blazing fast simplification and polynomial manipulation in SymPy.

• The Quantum Module. Unfortunately, I can't say much about this, since I don't know anything about quantum physics. Furthermore, at the time of the writing of this blog post, that part of the release notes hasn't been written yet. Suffice it to say that thanks to two GSoC projects from last summer (see this and this page), we now have a quantum physics module. A lot of the stuff in that module, from my understanding, is unique to SymPy, which is very exciting. (By the way, if you're interested in this, Brian Granger can tell you more about it).

• Various backwards incompatible changes. We've taken advantage of the fact that this will be a point release (0.7.0) to clean up some old cruft.

• We've renamed the functions `abs()` and `sum()` to `Abs()` and `summation()`, respectively, because they conflicted with built-in names (although thanks to `__abs__` magic, `abs(expr)` will still work with the built-in `abs()` function).

• This will be the last release to support Python 2.4. It will be a big benefit not to have to support Python 2.4 anymore after this release. There were a ton of features added in Python 2.5 that we have had to either manually re-implement (like any() and all()), or have had to do without (like the with statement). Also, this will make porting to Python 3 much easier (this is one of our GSoC projects).

• We split the class Basic, which is the base class of all SymPy types, into Basic and a subclass Expr.
Mathematical objects like `cos(x)` or `x*y*z**2` are instances of Expr. Objects that do not make sense in mathematical expressions, but still want to have some of the standard SymPy methods like .args and .subs() are Basic. For example, a Set object is Basic, but not Expr. • Lots of little bug fixes and new features. See the release notes. Once we have the release out, I plan to go back to work on the Risch Algorithm. I am very close to finishing the exponential case, which means that once I do, any transcendental elementary function built up of only exponential extensions could be integrated or proven not to have an elementary integral by my algorithm. I also want to start getting the code ready to merge with the main code base, so that it can go in the next release (0.7.1). Finally, I want to announce that I have been selected for a student sponsorship to the SciPy 2011 conference in Austin, TX in the week of July 11. Mateusz and I will be presenting a tutorial on SymPy. This will be the first time I have ever attended a conference, and I am very excited.
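To make a couple of the items above concrete, here is a short illustrative session (mine, not from the post), run against a current SymPy rather than the 0.7.0 release under discussion; the outputs in the comments are the expected forms and may print slightly differently across versions.

```python
from sympy import symbols, cos, factor, summation, Abs

x, k, n = symbols('x k n')

# The new polys treat cos(x) as a polynomial generator, so factoring works:
print(factor(cos(x)**2 + 2*cos(x) + 1))   # (cos(x) + 1)**2

# The renamed functions mentioned in the release notes:
print(summation(k, (k, 1, n)))            # n**2/2 + n/2
print(Abs(-x))                            # Abs(x)

# The built-in abs() still works on SymPy expressions via __abs__:
print(abs(-x) == Abs(x))                  # True
```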
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498293399810791, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/122465/strongly-complete-profinite-groups
## Strongly Complete Profinite Groups

I've been reading about profinite groups and have encountered the notion of strong completeness, i.e. that a profinite group $G$ is strongly complete if it is isomorphic to its profinite completion or, equivalently, if every subgroup of finite index is open. My problem is that I am not understanding why these conditions are equivalent. I cannot find a reference for this fact; every mention of it I find states that this equivalence is "obvious."

I believe the equivalence stems from the fact that if all subgroups of finite index of $G$ are open then the set of subgroups of finite index forms a fundamental system of open neighborhoods of $1$ in $G$, which allows one to reconstruct the topology. I would appreciate any help understanding this, or any references on this fact.

-

## 1 Answer

The profinite completion is the inverse limit of all quotients by finite index normal subgroups. Any profinite group is the inverse limit of its quotients by open normal subgroups. Since open normal subgroups have finite index, a profinite group is strongly complete iff the open normal subgroups are cofinal among the finite index normal subgroups. But this is equivalent to all finite index subgroups being open: a finite index subgroup contains its normal core, which is a finite index normal subgroup, hence (by cofinality) an open normal subgroup, and a subgroup containing an open subgroup is a union of cosets of it and therefore open.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9173744320869446, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/System_of_linear_equations
# System of linear equations A linear system in three variables determines a collection of planes. The intersection point is the solution. In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. For example, $\begin{alignat}{7} 3x &&\; + \;&& 2y &&\; - \;&& z &&\; = \;&& 1 & \\ 2x &&\; - \;&& 2y &&\; + \;&& 4z &&\; = \;&& -2 & \\ -x &&\; + \;&& \tfrac{1}{2} y &&\; - \;&& z &&\; = \;&& 0 & \end{alignat}$ is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by $\begin{alignat}{2} x & = & 1 \\ y & = & -2 \\ z & = & -2 \end{alignat}$ since it makes all three equations valid.[1] The word "system" indicates that the equations are to be considered collectively, rather than individually. In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, the coefficients of the equations are real or complex numbers and the solutions are searched in the same set of numbers, but the theory and the algorithms apply for coefficients and solutions in any field. For solutions in an integral domain like the ring of the integers, or in other algebraic structures, other theories have been developed. See, for example, integer linear programming for integer solutions, Gröbner basis for polynomial coefficients and unknowns, or also tropical geometry for linear algebra in a more exotic structure. ## Elementary example The simplest kind of linear system involves two equations and two variables: $\begin{alignat}{5} 2x &&\; + \;&& 3y &&\; = \;&& 6 & \\ 4x &&\; + \;&& 9y &&\; = \;&& 15&. \end{alignat}$ One method for solving such a system is as follows. First, solve the top equation for $x$ in terms of $y$: $x = 3 - \frac{3}{2}y.$ Now substitute this expression for x into the bottom equation: $4\left( 3 - \frac{3}{2}y \right) + 9y = 15.$ This results in a single equation involving only the variable $y$. Solving gives $y = 1$, and substituting this back into the equation for $x$ yields $x = 3/2$. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) ## General form A general system of m linear equations with n unknowns can be written as $\begin{alignat}{7} a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \cdots + \;&& a_{1n} x_n &&\; = \;&&& b_1 \\ a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \cdots + \;&& a_{2n} x_n &&\; = \;&&& b_2 \\ \vdots\;\;\; && && \vdots\;\;\; && && \vdots\;\;\; && &&& \;\vdots \\ a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \cdots + \;&& a_{mn} x_n &&\; = \;&&& b_m. \\ \end{alignat}$ Here $x_1, x_2,\ldots,x_n$ are the unknowns, $a_{11},a_{12},\ldots,a_{mn}$ are the coefficients of the system, and $b_1,b_2,\ldots,b_m$ are the constant terms. 
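As a quick numerical check of the introductory system (an illustration added here, not part of the article), NumPy's standard solver recovers the solution given above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the introductory system
#    3x + 2y  -  z =  1
#    2x - 2y  + 4z = -2
#    -x + y/2 -  z =  0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

x = np.linalg.solve(A, b)
print(x)                      # expected: [ 1. -2. -2.]
print(np.allclose(A @ x, b))  # True: the computed solution satisfies all equations
```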
Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. ### Vector equation One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. $x_1 \begin{bmatrix}a_{11}\\a_{21}\\ \vdots \\a_{m1}\end{bmatrix} + x_2 \begin{bmatrix}a_{12}\\a_{22}\\ \vdots \\a_{m2}\end{bmatrix} + \cdots + x_n \begin{bmatrix}a_{1n}\\a_{2n}\\ \vdots \\a_{mn}\end{bmatrix} = \begin{bmatrix}b_1\\b_2\\ \vdots \\b_m\end{bmatrix}$ This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed. ### Matrix equation The vector equation is equivalent to a matrix equation of the form $A\bold{x}=\bold{b}$ where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. $A= \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},\quad \bold{x}= \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},\quad \bold{b}= \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}$ The number of vectors in a basis for the span is now expressed as the rank of the matrix. ## Solution set The solution set for the equations x − y = −1 and 3x + y = 9 is the single point (2, 3). A solution of a linear system is an assignment of values to the variables x1, x2, ..., xn such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: 1. The system has infinitely many solutions. 2. The system has a single unique solution. 3. The system has no solution. ### Geometric interpretation For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, which may be a flat of any dimension. ### General behavior The solution set for two equations in three variables is usually a line. In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns: 1. 
Usually, a system with fewer equations than unknowns has infinitely many solutions or sometimes unique sparse solutions (compressed sensing). Such a system is also known as an underdetermined system. 2. Usually, a system with the same number of equations and unknowns has a single unique solution. 3. Usually, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is usually equal to n − m, where n is the number of variables and m is the number of equations. The following pictures illustrate this trichotomy in the case of two variables:

[Three pictures: one equation; two equations; three equations]

The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. Keep in mind that the pictures above show only the most common case. It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). In general, a system of linear equations may behave differently than expected if the equations are linearly dependent, or if two or more of the equations are inconsistent.

## Properties

### Independence

The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. The equations x − 2y = −1, 3x + 5y = 8, and 4x + 3y = 7 are not linearly independent. For example, the equations $3x+2y=6\;\;\;\;\text{and}\;\;\;\;6x+4y=12$ are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations $\begin{alignat}{5} x &&\; - \;&& 2y &&\; = \;&& -1 & \\ 3x &&\; + \;&& 5y &&\; = \;&& 8 & \\ 4x &&\; + \;&& 3y &&\; = \;&& 7 & \end{alignat}$ are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.

### Consistency

The equations 3x + 2y = 6 and 3x + 2y = 12 are inconsistent. A linear system is consistent if it has a solution, and inconsistent otherwise. When the system is inconsistent, it is possible to derive a contradiction from the equations, which may always be rewritten as the statement 0 = 1. For example, the equations $3x+2y=6\;\;\;\;\text{and}\;\;\;\;3x+2y=12$ are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together.
For example, the equations $\begin{alignat}{7} x &&\; + \;&& y &&\; = \;&& 1 & \\ 2x &&\; + \;&& y &&\; = \;&& 1 & \\ 3x &&\; + \;&& 2y &&\; = \;&& 3 & \end{alignat}$ are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Note that any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there are an infinitude of solutions. ### Equivalence Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice-versa. Two systems are equivalent if either both are inconsistent or each equation of any of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set. ## Solving a linear system There are several algorithms for solving a system of linear equations. ### Describing the solution When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left hand sides are the names of the unknowns and right hand sides are the corresponding values, for example $(x=3, \;y=-2,\; z=6)$. When an order on the unknowns has been fixed, for example the alphabetical order the solution may be described as a vector of values, like $(3, \,-2,\, 6)$ for the previous example. It can be difficult to describe a set with infinite solutions. Typically, some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: $\begin{alignat}{7} x &&\; + \;&& 3y &&\; - \;&& 2z &&\; = \;&& 5 & \\ 3x &&\; + \;&& 5y &&\; + \;&& 6z &&\; = \;&& 7 & \end{alignat}$ The solution set to this system can be described by the following equations: $x=-7z-1\;\;\;\;\text{and}\;\;\;\;y=3z+2\text{.}$ Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane, or higher dimensional set. 
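The free-variable description above can also be produced mechanically. The following SymPy sketch (added here for illustration, not part of the article) solves the same underdetermined system and returns the solution set parametrized by the free variable z:

```python
from sympy import symbols, linsolve

x, y, z = symbols('x y z')

# The underdetermined system  x + 3y - 2z = 5,  3x + 5y + 6z = 7.
solutions = linsolve([x + 3*y - 2*z - 5, 3*x + 5*y + 6*z - 7], x, y, z)
print(solutions)   # expected, up to formatting: {(-7*z - 1, 3*z + 2, z)}

# Spot-check: the parametric triple satisfies both equations identically in z.
xv, yv, zv = next(iter(solutions))
print((xv + 3*yv - 2*zv).simplify())    # 5
print((3*xv + 5*yv + 6*zv).simplify())  # 7
```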
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: $y=-\frac{3}{7}x + \frac{11}{7}\;\;\;\;\text{and}\;\;\;\;z=-\frac{1}{7}x-\frac{1}{7}\text{.}$ Here x is the free variable, and y and z are dependent. ### Elimination of variables The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: 1. In the first equation, solve for one of the variables in terms of the others. 2. Plug this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown. 3. Continue until you have reduced the system to a single linear equation. 4. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: $\begin{alignat}{7} x &&\; + \;&& 3y &&\; - \;&& 2z &&\; = \;&& 5 & \\ 3x &&\; + \;&& 5y &&\; + \;&& 6z &&\; = \;&& 7 & \\ 2x &&\; + \;&& 4y &&\; + \;&& 3z &&\; = \;&& 8 & \end{alignat}$ Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the second and third equation yields $\begin{alignat}{5} -4y &&\; + \;&& 12z &&\; = \;&& -8 & \\ -2y &&\; + \;&& 7z &&\; = \;&& -2 & \end{alignat}$ Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the second equation yields z = 2. We now have: $\begin{alignat}{7} x &&\; = \;&& 5 &&\; + \;&& 2z &&\; - \;&& 3y & \\ y &&\; = \;&& 2 &&\; + \;&& 3z && && & \\ z &&\; = \;&& 2 && && && && & \end{alignat}$ Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and y = 8 into the first equation yields x = −15. Therefore, the solution set is the single point (x, y, z) = (−15, 8, 2). ### Row reduction Main article: Gaussian elimination In row reduction, the linear system is represented as an augmented matrix: $\left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 3 & 5 & 6 & 7 \\ 2 & 4 & 3 & 8 \end{array}\right]\text{.}$ This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss-Jordan elimination. 
The following computation shows Gauss-Jordan elimination applied to the matrix above: $\begin{align}\left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 3 & 5 & 6 & 7 \\ 2 & 4 & 3 & 8 \end{array}\right]&\sim \left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 0 & -4 & 12 & -8 \\ 2 & 4 & 3 & 8 \end{array}\right]\sim \left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 0 & -4 & 12 & -8 \\ 0 & -2 & 7 & -2 \end{array}\right]\sim \left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 0 & 1 & -3 & 2 \\ 0 & -2 & 7 & -2 \end{array}\right] \\ &\sim \left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 0 & 1 & -3 & 2 \\ 0 & 0 & 1 & 2 \end{array}\right]\sim \left[\begin{array}{rrr|r} 1 & 3 & -2 & 5 \\ 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & 2 \end{array}\right]\sim \left[\begin{array}{rrr|r} 1 & 3 & 0 & 9 \\ 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & 2 \end{array}\right]\sim \left[\begin{array}{rrr|r} 1 & 0 & 0 & -15 \\ 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & 2 \end{array}\right].\end{align}$ The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. ### Cramer's rule Main article: Cramer's rule Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system $\begin{alignat}{7} x &\; + &\; 3y &\; - &\; 2z &\; = &\; 5 \\ 3x &\; + &\; 5y &\; + &\; 6z &\; = &\; 7 \\ 2x &\; + &\; 4y &\; + &\; 3z &\; = &\; 8 \end{alignat}$ is given by $x=\frac {\,\left| \begin{matrix}5&3&-2\\7&5&6\\8&4&3\end{matrix} \right|\,} {\,\left| \begin{matrix}1&3&-2\\3&5&6\\2&4&3\end{matrix} \right|\,} ,\;\;\;\;y=\frac {\,\left| \begin{matrix}1&5&-2\\3&7&6\\2&8&3\end{matrix} \right|\,} {\,\left| \begin{matrix}1&3&-2\\3&5&6\\2&4&3\end{matrix} \right|\,} ,\;\;\;\;z=\frac {\,\left| \begin{matrix}1&3&5\\3&5&7\\2&4&8\end{matrix} \right|\,} {\,\left| \begin{matrix}1&3&-2\\3&5&6\\2&4&3\end{matrix} \right|\,}.$ For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. ### Matrix solution If the equation system is expressed in the matrix form $A\bold{x}=\bold{b}$, the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by $\bold{x}=A^{-1}\bold{b}$ where $A^{-1}$ is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore-Penrose pseudoinverse of A, denoted $A^g$, as follows: $\bold{x}=A^g \bold{b} + (I-A^gA)\bold{w}$ where $\bold{w}$ is a vector of free parameters that ranges over all possible n×1 vectors. 
A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using $\bold{w}=\bold{0}$ satisfy $A\bold{x}=\bold{b}$ — that is, that $AA^g\bold{b}=\bold{b}.$ If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, $A^g$ simply equals $A^{-1}$ and the general solution equation simplifies to $\bold{x}=A^{-1}\bold{b} + (I - A^{-1}A)\bold{w} = A^{-1}\bold{b} + (I-I)\bold{w} = A^{-1}\bold{b}$ as previously stated, where $\bold{w}$ has completely dropped out of the solution, leaving only a single solution. In other cases, though, $\bold{w}$ remains and hence an infinitude of potential values of the free parameter vector $\bold{w}$ give an infinitude of solutions of the equation. ### Other methods While systems of three or four equations can be readily solved by hand, computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. ## Homogeneous systems See also: Homogeneous differential equation A system of linear equations is homogeneous if all of the constant terms are zero: $\begin{alignat}{7} a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \cdots + \;&& a_{1n} x_n &&\; = \;&&& 0 \\ a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \cdots + \;&& a_{2n} x_n &&\; = \;&&& 0 \\ \vdots\;\;\; && && \vdots\;\;\; && && \vdots\;\;\; && &&& \,\vdots \\ a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \cdots + \;&& a_{mn} x_n &&\; = \;&&& 0. \\ \end{alignat}$ A homogeneous system is equivalent to a matrix equation of the form $A\textbf{x}=\textbf{0}$ where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. ### Solution set Every homogeneous system has at least one solution, known as the zero solution (or trivial solution), which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. 
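A brief numerical illustration of this dichotomy between the non-singular and singular cases (an illustrative sketch, not part of the original article; the rank-deficient matrix B below is invented purely for the example, and `null_space` needs SciPy 1.1 or later):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 3.0, -2.0],      # the non-singular example matrix from above
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
print(null_space(A).shape)           # (3, 0): only the trivial solution of Ax = 0

B = np.array([[1.0, 2.0, 3.0],       # a singular matrix (second row = 2 * first row)
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
N = null_space(B)                    # columns form an orthonormal basis of {x : Bx = 0}
print(N.shape)                       # (3, 1): a whole line of solutions through the origin
```

The basis returned for B spans exactly the kind of infinite solution set described next.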
If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties: 1. If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. 2. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. ### Relation to nonhomogeneous systems There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: $A\textbf{x}=\textbf{b}\qquad \text{and}\qquad A\textbf{x}=\textbf{0}\text{.}$ Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as $\left\{ \textbf{p}+\textbf{v} : \textbf{v}\text{ is any solution to }A\textbf{x}=\textbf{0} \right\}.$ Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. ## See also • LAPACK (the free standard package to solve linear equations numerically; available in Fortran, C, C++) • Row reduction • Simultaneous equations • Arrangement of hyperplanes • Linear least squares • Matrix decomposition • Matrix splitting • Iterative refinement ## Notes 1. Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005. ## References See also: Linear algebra#Further reading ### Textbooks • Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0 • Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7 • Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8 • Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 • Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International • Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall • Strang, Gilbert (2005), Linear Algebra and Its Applications
http://mathoverflow.net/questions/13282/good-algebraic-number-theory-books/13315
## Good Algebraic Number Theory Books ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I have just finished a master's degree in Mathematics and want to learn everything possible about algebraic number fields and especially applications to the generalized Pell equation (my thesis topic), $x^2-Dy^2=k$, where $D$ is square free and $k \in \mathbb{Z}$. I have a solid foundation in Modern Algebra and Elementary number theory as well as Analysis. Does anyone have any suggestions? I am currently reading Harvey Cohn's 'Advanced Number Theory' with slow but marked progress. Thanks. - 21 To learn 'everything possible' about algebraic number fields you'll have to reserve, say, your next 500 years... – Mariano Suárez-Alvarez Jan 28 2010 at 21:24 10 Ersnt Kunz starts the forword to his Introduction to Commutative Algebra and Algebraic Geometry with: «It has been estimated that, at the present state of our knowledge, one could give a 200 semester course on commutative algebra and algebraic geometry without ever repeating himself.» His subject is not unique in that respect! – Mariano Suárez-Alvarez Jan 28 2010 at 21:53 There is a very similar thread here mathoverflow.net/questions/8097/… perhaps you will find some useful suggestions there. – Grétar Amazeen Jan 28 2010 at 22:48 4 Pierre Samuel's "Algebraic Theory of Numbers" gives a very elegant introduction to algebraic number theory. It doesn't cover as much material as many of the books mentioned here, but has the advantages of being only 100 pages or so and being published by dover (so that it costs only a few dollars). Reading this would certainly prepare you well for some of the more advanced books that require more of a commitment to go through. – Ben Linowitz Feb 1 2011 at 20:59 2 Cohn's book is well worth reading carefully, and Ireland and Rosen is an excellent text too. – Emerton Feb 2 2011 at 15:09 show 1 more comment ## 14 Answers Though Mariano's comment above is no doubt true and the most complete answer you'll get, there are a couple of texts that stand apart in my mind from the slew of textbooks with the generic title "Algebraic Number Theory" that might tempt you. The first leaves off a lot of algebraic number theory, but what it does, it does incredibly clearly (and it's cheap!). It's "Number Theory I: Fermat's Dream", a translation of a Japanese text by Kazuya Kato. The second is Cox's "Primes of the form $x^2+ny^2$, which in terms of getting to some of the most amazing and deepest parts of algebraic number theory with as few prerequisites as possible, has got to be the best choice. For something a little more encyclopedic after you're done with those (if it's possible to be "done" with Cox's book), my personal favorite more comprehensive reference is Neukirch's Algebraic Number Theory. - 3 Agreed. Neukirch is an amazing book, not least because of how seriously it takes the analogy between number fields and function fields. – Qiaochu Yuan Jan 28 2010 at 23:33 1 Were the other books in Kato's series every translated? – Kevin O'Bryant Jul 7 2010 at 4:54 Number Theory 2 was just released by the AMS! – Cam McLeman Nov 9 2011 at 13:38 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I know of very few more endearing books on the subject than Ireland and Rosen's A Classical Introduction to Modern Number Theory. - 4 I know of none. – Pete L. 
Clark Jul 7 2010 at 13:09 Marcus's Number Fields is a good intro book, but it's not in LaTeX, so it looks ugly. It also doesn't do any local (p-adic) theory, so you should pair it with Gouvea's excellent introductory p-adic book, and you have a great first course in algebraic number theory. - 2 Yes, someone should typeset Marcus's book again in LaTeX. – lhf Jan 29 2010 at 1:11 2 Marcus wrote the book well before Wiles proved FLT, so the introductory chapter on solving FLT for regular primes etc. is fascinating. Also, the book has lots of concrete problems and exercises. – Abhishek Parab Jul 7 2010 at 5:31 2 Marcus' book is my first choice. Very hands-on, many exercises, and clear explanations. More than makes up for the fact that it's typewritten. Once you have understood something, you can then go and read about it again in Neukirch (which covers a lot more), and it will give you a different perspective. – Laurent Berger Feb 1 2011 at 18:39 +1 For Marcus, really one of the best books on the subject. And yes, you are not the only one thinking that it deserves to be retyped! – Maurizio Monge Feb 1 2011 at 19:36 Many people have recommended Neukirch's book. I think a good complement to it is Janusz's Algebraic Number Fields. They cover roughly the same material. Neukirch's presentation is probably the slickest possible; Janusz's is the most hands-on. I love them both now, but I found Janusz understandable at a point when Neukirch was still completely impenetrable. Neither of these is particularly strong on the Pell equation, though. - I would recommend you take a look at William Stein's free online algebraic number theory textbook. It is especially useful if you want to learn how to compute with number fields, but it is still extremely readable even if you skip the details of the computational examples. - It might be at too basic/introductory a level for your purposes, but as an undergraduate I really liked Stewart and Tall's Algebraic Number Theory (2nd edition?) - Look at: http://mathoverflow.net/questions/13106/map-of-number-theory The book recommended there (Manin/Panchishkin's "Introduction to Modern Number Theory") does seem amazingly comprehensive, and very readable. - If you want a solid foundation in this subject, I suggest reading Hecke's Lectures on the Theory of Algebraic Numbers, which is excellent in its discussion of topics that are still important today, or Hilbert's report on number theory (the Zahlbericht), whose foundations are indeed solid. In addition, Gauss's book, though a little old and hard, is a good reference on quadratic forms, and it offers two different proofs of the quadratic reciprocity law, both excellent in my view. Last but not least, I would like to recommend once more the book by Jürgen Neukirch, which notes the connection between ideals and lattices, i.e. between algebraic numbers and geometry. - See Solving the Pell Equation (reviewed here). You probably know Lenstra Jr.'s article in the AMS Notices. There's also Primes of the form $x^2 + ny^2$ by Cox. - I could be wrong, but I think Borevich and Shafarevich cover material related to Pell's equation. If not, it is still an excellent book on algebraic number theory, as is Serre's "A Course in Arithmetic". However, Serre does not discuss Pell's equation.
(I also found Cohn difficult to read =) - In particular, in view of the focus of your studies, I suggest the following additional book; by "additional" I mean that I would not suggest it as the only book (see below for the explanation). There is a fairly recent book (in two volumes) by Henri Cohen entitled "Number Theory" (Graduate Texts in Mathematics, Volumes 239 and 240, Springer). [To avoid any risk of confusion: these are not the two GTM books by the same author on computational number theory.] It contains material related to Diophantine equations and the tools used to study them, in particular, but not only, those from Algebraic Number Theory. Yet, this is not really an introduction to Algebraic Number Theory; while the book contains a chapter on Basic Algebraic Number Theory, covering the 'standard results', it does not contain all proofs, and the author explicitly refers to other books (including several of those already mentioned). However, I could imagine that a rich exposition of how the theory you are learning can be applied to various Diophantine problems could be valuable. Final note: the book is in two volumes; the second one is mainly on analytic tools, linear forms in logarithms, and modular forms applied to Diophantine equations, so for the present context (or at least initially) the first volume is the relevant one. - Serge Lang's Algebraic Number Theory has a lot of general theoretical material. - If you want to learn class field theory (which you should at some point, after you have read an introductory book on algebraic number theory), then "Algebraic Number Theory" edited by Cassels and Fröhlich is a classic that doesn't get old. It has been recently reprinted by the LMS. - I am amazed not to see the book "Introductory Algebraic Number Theory," by Alaca & Williams, listed here. I find it to be one of the clearest math books on an advanced topic, ever. -
http://physics.stackexchange.com/questions/53991/the-theory-of-strings-stretching-between-intersecting-d-branes?answertab=votes
The theory of strings stretching between intersecting D-branes I am trying to understand various aspects of intersecting D-branes in terms of the gauge theories on the worldvolume of the D-branes. One thing I'd like to understand is the worldvolume action for strings stretching between the D-branes. One thing I have considered is $M+N$ D-branes initially coincident but then an angle $\theta$ developing between $M$ and $N$ of them. The gauge symmetry is broken from $U(M\times N)$ to $U(M) \times U(N)$ with the off-block-diagonal terms of the gauge field becoming massive. I anticipate that $\theta$ is the vev of some Higgs field that mediates this transition. What is the action of this Higgs field and where in the string spectrum does it come from? - 1 Answer First of all, if a stack of $M$ branes is rotated relatively to a previously coincident stack of $N$ branes, it's clear that the degrees of freedom that encode the relative angle $\theta$ are nothing else than the transverse scalars determining the position/orientation of these two stacks. Any D-brane or any stack of D-branes may be rotated in any way and the quanta of the scalar fields that remember the positions are just open string modes attached to these D-branes with both endpoints. If you study the location/orientation of a D-brane or a stack of D-brane, it's its own degree of freedom that has nothing to do with the behavior of other D-branes. So the Higgs fields arise from the normal scalars determining the transverse positions of these stacks of D-branes and the action for these D-branes is still the same D-brane Dirac-Born-Infeld action. Now, you apparently want to see how this degree of freedom that you call $\theta$ – it's just an awkwardly chosen "degree of freedom" that can't be invariantly separated from other degrees of freedom determining the shape of the D-branes – break the $U(M+N)$ symmetry down to $U(M)\times U(N)$ and your wording makes it sound like you believe it is just ordinary Higgs mechanism in the whole space. However, it's important to realize that this breaking of the gauge symmetry doesn't occur uniformly in the whole space. In fact, near the intersection of the stacks, the gauge symmetry is approximately enhanced to the original $U(M+N)$. What's important is that the off-block-diagonal blocks transforming as $(M,N)$ under $U(M)\times U(N)$ arise from open strings whose one end point sits at one stack and the other end point sits at the other stack. The distances between the stacks go like $\theta \cdot D$ where $D$ is the distance between the intersection. So the open string modes get an extra mass $\theta D T$ where $T$ is the string tension. To summarize, the symmetry breaking is always described by the ordinary (non-Abelian) Dirac-Born-Infeld action. Your "rotation of stacks relatively to each other" only differs from the ordinary "separation of parallel stacks in the transverse dimension" by the fact that the distance/separation between the stacks depends on the location along the branes. It is meaningless to ask for any new actions because the (non-Abelian) Dirac-Born-Infeld action always describes all the low-energy dynamics of similar systems. The stringy/D-brane dynamics is always governed by the same laws and one should only learn it once. Let me mention that whenever the distance between the stacks exceeds the string length, all the off-diagonal open string modes are string-scale heavy and it is inconsistent to keep them unless you also keep the excited string harmonics in the spectrum. 
So a derivation of "effective field theory" that would neglect the stringy tower but that would just freely describe the higgsed large string-scale masses of the off-block-diagonal modes would be inconsistent. - Is it true then that at any $\theta>0$ the theory on the intersection is the dimensionally reduced DBI? Should there be some correction from integrating out the massive off-diagonal degrees of freedom? – user404153 Feb 19 at 20:15 1 Yup, integrating out massive degrees of freedom always contributes corrections, especially when SUSY is broken which it is if you're rotating by theta just in one two-plane. But if you do, the whole situation is unstable in general so it makes no sense to discuss some high-precision things. The branes from the stacks will tend to reconnect with each other and gradually escape away from the intersection. I mean X will become )( if you understand me, and the parentheses will get further and further... If you rotate in 2 or more 2-planes, to make it SUSY, it's a different question. – Luboš Motl Feb 19 at 20:37 1 – Luboš Motl Feb 20 at 8:16 1 Thanks for your help and the reference. It's much appreciated! – user404153 Feb 20 at 19:03 1 It was a pleasure! – Luboš Motl Feb 21 at 7:06 show 1 more comment
http://mathoverflow.net/questions/19526?sort=newest
## Reference request for a “well-known identity” in a paper of Shepp and Lloyd

I ran into a "well-known identity" on page 345 of Shepp and Lloyd's On ordered cycle lengths in a random permutation: $$\int_x^{\infty} \frac{\exp(-y)}y dy = \int_0^x \frac{1-\exp(-y)}y dy - \log x - \gamma,$$ where $\gamma$ is the Euler constant. I am clueless as to how it is derived. Any reference to the derivation of such formulae would suffice, but an explicit solution will also be appreciated. - This refers to an earlier question: mathoverflow.net/questions/19392/… – Michael Lugo Mar 27 2010 at 18:23 4 A link to an online copy of the paper (preferably not behind a paywall) would be highly appropriate here. – Scott Morrison♦ Mar 27 2010 at 18:46 I added a JSTOR link. The article is also at ifile.it/tfja0wc, but the link is probably temporary. – Anton Geraschenko♦ Mar 27 2010 at 19:09 Meta discussion: meta.mathoverflow.net/discussion/313/… – Harry Gindi Mar 27 2010 at 19:15

## 3 Answers

You can apply WZ theory to such identities. In particular, both sides satisfy the differential equation $$x\,z''(x) + (x+1)\,z'(x) = 0.$$ Picking $x=1$ as the initial condition (since the DE is regular there, which helps), we see that both sides evaluate to $Ei(1,1)$ (Maple's notation for the exponential integral $E_1(1)$) and their derivatives both evaluate to $-1/e$, so they are equal. I got that differential equation using Maple's PDEtools[dpolyform] function, which uses Groebner bases over differential polynomials to 'solve' this problem. All the rest is classical analysis (as in A Course of Modern Analysis by Whittaker and Watson, 1926 - which is unfortunately not material that is taught very much anymore; I certainly had to learn a lot of it 'on my own'). [Edit: fixed an error in the evaluation of the derivative, I pasted in the wrong line] - Yes that's a tremendously rich text much of which I am not familiar with. Thanks for the hard work! – John Jiang Mar 27 2010 at 21:08 What is "WZ theory"? – Mariano Suárez-Alvarez Mar 27 2010 at 21:59 1 Wilf-Zeilberger, probably, see en.wikipedia.org/wiki/Wilf-Zeilberger_pair – Reid Barton Mar 27 2010 at 22:02 @Reid: correct. – Jacques Carette Mar 27 2010 at 22:04

So one of the approaches to proving the equality in the question is via the following three steps: First differentiate both sides of the equation to see that they agree up to a constant. This reduces to showing the case of $x = 1$, for which $\log x = 0$. Next we apply integration by parts to get $$\int_1^{\infty} \frac{\exp(-y)}{y} dy - \int_0^1 \frac{1-\exp(-y)}{y}dy = \int_0^{\infty} \exp(-y) \log y \, dy$$ Finally observe that $\Gamma'(1)$ equals the RHS, by differentiating under the integral sign, valid because things decay fast enough at infinity. So it remains to show $\Gamma'(1) = -\gamma$. I saw a soft argument (i.e., without using the infinite product) in the link scipp.ucsc.edu/~haber/ph116A/psifun_10.pdf This is re-exposed below: first we establish that for $\Psi(x) = \log \Gamma(x)$, $$\Psi'(x+1) = \Psi'(x) + 1/x$$ This is easy enough since we have the functional equation $\Gamma(x+1) = x\Gamma(x)$.
Next, using Stirling's approximation, we get $$\Psi(x+1) = (x+1/2)\log x - x + \tfrac{1}{2} \log 2 \pi + O(1/x)$$ and then they differentiate this and claim that $O(1/x)' = O(1/x^2)$, which is clearly false (take $f(x) = \frac{1}{x}\cos(e^x)$). But I found in Wikipedia another formula that gives the precise error term in terms of an integral of the monotone function $\arctan(1/x)$. So this is enough to establish $O(1/x^2)$ for the error term in the derivative of $\Psi$. So we get the asymptotics $\Psi'(x+1) = \log x + o(1)$ as $x \to \infty$; combined with the recursion above, which gives $\Psi'(n+1) = \Psi'(1) + H_n$ with $H_n = 1 + 1/2 + \cdots + 1/n = \log n + \gamma + o(1)$, this yields $\Psi'(1) = -\gamma$. Now notice $\Psi'(x) = \Gamma'(x)/ \Gamma(x)$, and $\Gamma(1) = 1$, so $\Gamma'(1) = -\gamma$ as well, which is what was needed. - This identity appears on the Wikipedia page for the "exponential integral": http://en.wikipedia.org/wiki/Exponential_integral#Definition_by_Ein I imagine you can get it by integrating the Taylor series and playing around. Wikipedia, and several other places on the web, point to the book by Abramowitz and Stegun. - Thanks for the nice reference on wiki! I was looking at the wrong place. – John Jiang Mar 27 2010 at 19:46 2 You can prove the identity up to an additive constant by differentiating with respect to x, so it only remains to prove it for x = 1. This should be a little easier. – Qiaochu Yuan Mar 27 2010 at 19:55 Indeed, it makes it a lot easier. Using integration by parts, one can show that $\int_1^{\infty} \exp(-y)/y \, dy - \int_0^1 \frac{1-\exp(-y)}{y}dy = \int_0^{\infty} \exp(-y) \log y \, dy$, which is listed as equal to $-\gamma$ in the following wiki page: en.wikipedia.org/wiki/… I have yet to figure out why that formula in the wiki page is true. – John Jiang Mar 27 2010 at 20:51 1 The integral $\int_0^\infty e^{-t}\log t\,dt$ equals $\Gamma'(1)$. This can be evaluated as $-\gamma$ using the infinite product for the gamma function. – Robin Chapman Mar 28 2010 at 7:58 Thanks Robin. I will write a short summary of the proof combining all the ingredients given so far. – John Jiang Mar 28 2010 at 18:02
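As an aside not from the original thread: the identity is also easy to sanity-check numerically, using SciPy's exponential integral $E_1(x)=\int_x^\infty e^{-y}/y\,dy$ (the function mentioned in the last answer). The sketch below is illustrative only; the test point `x = 2.7` is arbitrary.

```python
import numpy as np
from scipy.special import exp1      # E1(x) = integral_x^inf exp(-y)/y dy
from scipy.integrate import quad

def f(y):
    # (1 - exp(-y)) / y, with the removable singularity at y = 0 handled explicitly
    return 1.0 if y == 0 else -np.expm1(-y) / y

x = 2.7                              # arbitrary test point
lhs = exp1(x)
rhs = quad(f, 0, x)[0] - np.log(x) - np.euler_gamma

print(lhs, rhs)                      # the two values agree to well within quad's default tolerance
```

This is only a numerical check, of course, not a substitute for the derivations sketched above.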
http://math.stackexchange.com/questions/91188/intuition-for-graphing-sine-cosine/91234
Intuition for graphing Sine/Cosine So I'm working my way through some basic trig (Khan Academy) - I'm trying to get a better intuition for what graphs of sine and cosine represent. I've seen the nice unit circle animations that do explain it fairly well. But would it be fair to say that: The period of the curve of the sine or cosine function is how long one full rotation of the circle takes. So as the period decreases I imagine the circle or "wheel" speeding up. The amplitude of the curve of the sine or cosine function is the radius of this imaginary circle. So as the amplitude of the curve increases so does the size of the circle or "wheel". So I could imagine speeding up and increasing the size of a wheel by decreasing the period and increasing the amplitude of this circle. Thanks! - 4 Answers Yep, your wheel analogy is pretty good. Depending on how detailed you want to look at it, you could explain more about why and how in relation to the math, but in general you are correct. Do you need a more detailed explanation? Hope you can figure it out! - No I think I'm good now, thanks for the feedback. – drc Dec 13 '11 at 20:38 This is a very interesting question that normally arises in the context of oscillations and waves. Consider for instance the ODE: $$m\cdot\dfrac{d^2}{dt^2}x + 2\gamma\dfrac{d}{dt}x+D\cdot x = f\left(t\right)$$ This is the so-called harmonic oscillator, where $m$ denotes the mass, $\gamma$ denotes the friction coefficient (translated from the German "Reibungskoeffizient"), $D$ denotes the spring constant for the spring you're dealing with, and $f$ is some external force. Consider now the case where $f = 0, \gamma = 0$. Then the equation reduces to: $$\dfrac{d^2}{dt^2}x + \Omega ^{2}\cdot x = 0, \qquad \Omega^2 = D/m$$ A solution is then given by: $$x = C\cdot\exp(i\cdot\Omega\cdot t)+C^{\star}\exp(-i\cdot\Omega\cdot t),$$ where the star in the exponent denotes complex conjugation. By choosing the initial conditions for the motion appropriately, you can determine $C$ and hence its complex conjugate. Now, if the amplitude is increased, this means a different initial condition, i.e., the pendulum is displaced a bit further from its rest (equilibrium) position. If the frequency is increased, then we know that either the mass $m$ has been decreased or the spring constant has been increased (e.g. by choosing a different spring). --Remark: the frequency is given by the formula $\Omega = \sqrt{D/m}$, with the constants' meaning staying the same as above.-- Now, it is known that the frequency, or to be more precise the angular frequency, is related to the period as follows: $$T = 2\pi/\Omega$$ Hence, your intuitive picture was completely right. As the frequency goes up, the period decreases, hence the oscillation speeds up as well. I am speaking of oscillations instead of circular movements because, by introducing polar coordinates, it is possible to consider the latter as oscillations as well. This can easily be verified by considering Newton's law of gravitation and applying it, for instance, to the movement of the earth around the sun. Approximately (indeed a rough approximation, because the orbital eccentricity $\epsilon_{\text{Earth}} \neq 0$), we can regard this as a movement on a circle, regarding the earth and sun both as mass points. Now, the crucial point is that, by setting up a coordinate system, we can measure angles. By considering, e.g.,
the projection of the radius vector of the earth onto one of our coordinate lines, say the "x-axis", we obtain a term like $r_x = r_{\text{middle}} \cdot \cos(\omega_{\text{earth around sun}}\cdot t)$, where use has been made of $\phi = \omega t$ and $\phi$ denotes the angle in the coordinate system transformed to polar coordinates. Some interesting stuff: 1. Whenever you deal with a potential minimum in physics, you can locally approximate the underlying equations of motion (in Newtonian dynamics) by some harmonic oscillator. 2. The gravitational constant $G$ can be measured e.g. by using a torsion pendulum. - 7 I don't think that the OP would use this answer. The question was simply about the graph of Sine/Cosine, which seems to be a beginner's question. You gave an answer which contains differential equations and advanced physics. – Beni Bogosel Dec 13 '11 at 19:39 1 @BeniBogosel Thanks. You're right. But David Heider, I appreciate the effort. – drc Dec 13 '11 at 20:37 The $(x,y)$ coordinates of a point that lies on the unit circle are $(\cos\theta,\sin\theta)$. - Technically, the $\sin()$ and $\cos()$ functions always take an angle as their argument (i.e. how fast the angle changes does not matter); they always have a period of $2\pi$ and an amplitude of $1$. We can change the period of the function by scaling the argument. For example $\sin(2\pi x)$ has a period of $1$. We change the amplitude by multiplying by a constant. For example $2\sin(2\pi x)$ produces an amplitude of 2. -
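To make the wheel picture concrete with a purely illustrative sketch (not part of the original answers): the point $(A\cos\omega t, A\sin\omega t)$ traces the "wheel", with the amplitude $A$ as its radius and the period $T = 2\pi/\omega$ as the time for one full turn. The particular values of `A` and `T` below are arbitrary.

```python
import numpy as np

A = 2.0                       # amplitude = radius of the imaginary "wheel"
T = 0.5                       # period = time for one full rotation
omega = 2 * np.pi / T         # angular frequency

t = np.linspace(0.0, T, 5)    # five instants spanning exactly one period
x = A * np.cos(omega * t)     # horizontal projection of the rotating point
y = A * np.sin(omega * t)     # vertical projection (the sine curve's values)

print(np.round(x, 3))         # approx [ 2.  0. -2.  0.  2.]: back to the start after one period
print(np.round(y, 3))         # approx [ 0.  2.  0. -2.  0.]: oscillates with amplitude 2
```

Doubling `A` doubles the radius of the circle (and the height of the sine curve), while halving `T` makes the same loop happen twice as fast, which is exactly the intuition the question describes.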
http://mathoverflow.net/questions/98821?sort=oldest
## How often do people read the work that they cite? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I have the following question: How likely it is that an author carefully read through a paper cited by him? Not everyone reads through everything that they have cited. Sometimes, if one wants to use a theorem that is not in a standard textbook, one typically finds another paper which cites the desired result and copies that citation thereby passing the responsibility of ensuring correctness to someone else. This saves a lot of time, but seems to propagate inaccurate citations and poor understanding of the work being cited. The question is thus about what should authors' citing policy be, and to what extent authors should verify results they are citing rather than using them as black boxes. - 2 Okay, I'll split it. – Vidit Nanda Jun 4 at 23:11 2 The comments to this question mathoverflow.net/questions/43147/… may be of help. – Joel Reyes Noche Jun 5 at 10:02 1 Perhaps the question could be edited to have a more neutral tone? – Joel David Hamkins Jun 6 at 14:32 1 Joel, it is community wiki and you certainly seem to have enough reputation. Maybe you should just edit it to your liking? – Vidit Nanda Jun 7 at 15:01 1 Related question: mathoverflow.net/questions/23758/… – Timothy Chow Jun 15 at 15:40 show 8 more comments ## 10 Answers It depends whether I am citing the Poincare Conjecture or the five lemma. But I agree that either way, one should understand what one is doing well enough to use a result properly. - 1 I picked the Poincare Conjecture for a few reasons: (1) it has been carefully checked; (2) the probability is 0 that an error got past the experts, whereas I would find that error; (3) the statement looks to be very useful even to people who don't know its proof; (4) it would be an insane amount of work for most people, if they could do it at all, to check the proof. The five lemma is a different story, though I suppose I'm not sure who is originally responsible for developing/discovering it. – tweetie-bird Jun 5 at 15:40 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. This may vary depending on the field you work in and what kind of papers you're writing, in addition to your personal style as a mathematician. I mean whether you bother to check the details of a citation probably has a lot to do with how much effort it costs you. In my line of work the citations are usually to other papers within my area of expertise and I do in fact try to look pretty closely at what was done and whether it really gives what I need. But there have been times when I needed something outside my comfort zone and was content to take the word of an expert that so-and-so's theorem did the trick. On the point about propagation of errors, it seems to me that there is a sort of correction mechanism. If a result is being cited incorrectly, eventually it may lead to a contradiction which would then be unraveled by tracing the problem back. Does anyone have any examples of something like this actually happening? Oh, I should add that some portion of my citations are "courtesy" citations along the lines of "so-and-so did something related". I'm afraid to say in cases like these my reading of the cited paper will sometimes have been very superficial. - 17 I do have an example of "correction mechanism". 
Working on a paper based on a published result, we were able to prove something "too strong". (Namely, we almost had $\mathsf P\neq\mathsf{NP}$.) We then were able to find the error in the cited paper. The problem was that the cited paper itself cited an older result, but forgetting some genericity hypotheses. Therefore, the result was possible to repair by adding some genericity conditions in it too. Yet, the correct version does not imply anything interesting with regards to our original question... – Bruno Jun 5 at 6:50 1 Very interesting! – Nik Weaver Jun 5 at 12:37 I normally do. Right now I'm facing a tough choice though: to read David-Semmes book in honest or to write something like "We prove that A implies B. The reader can juxtapose that with the claim on page ... in [] that B implies C" instead of "We prove that A implies C" in the introduction. Being as lazy as I am, I am inclined to go for the second option but that will certainly make the paper less "sexy", so my co-authors do not feel very happy about it. However this shows how you can avoid both reading the papers you refer to and the uncertainty about whether what you declare proved is actually proved: separate the part you prove from the part you refer to in a crystal clear way and take credit for the reduction only rather than for the full statement (which, frankly speaking, is as much credit as you can really claim anyway). The correction mechanism Nik mentioned works primarily in the way that most things just go unnoticed because nobody reads those papers or uses them in any way. When something is really important, it gets a lot of attention and somebody finally straightens things out. However, that doesn't happen fast and I have learned it hard way. My 2002 Duke paper joint with Treil and Volberg on the system Tb theorem has an error in the proof. It had been cited a lot of times before the error was finally spotted and corrected by Tuomas Hytonen around 2010. This also shows that an erratic argument isn't always useless or fatally flawed. Sometimes it is just an "incomplete proof". To my shame, I should also mention that it was one of the cases when I didn't read the final draft carefully and relied on my co-authors to do that. Apparently, they had a similar attitude... - Mostly, I read those part of the papers that I need to cite in my works. Sometimes though, I read them almost completely before starting the paper (to get some ideas and learn the techniques). There are also occasions that I read only a few pages of the paper before citing it. - Often not, and for lots of different reasons. Citations about related work, background information etc. are often given to give the reader the context of your work, but your work does not necessarily depend on the results therein. Seminal results from decades ago are sometimes in German, French or Russian, and while I might occasionally struggle through papers in the first two languages if I desperately need to see how something was proved, I will usually take the results on trust, especially if they are heavily cited. And then, some results are just too complicated, but are sufficiently well accepted that there is really no choice but to rely on them - most group theorists will use the Classification of Finite Simple Groups when necessary, but few, if any, will understand the full details of the entire proof. On similar lines, I cannot possibly read the Robertson-Seymour series of papers on Graph Minors. 
Yet I believe the main results in them, and if and when I refer to the fact that any minor-closed class of graphs has a finite set of excluded minors, then it is pretty much obligatory to refer to the relevant paper in the series. If correctness is seriously in doubt, then it would presumably be necessary to work back not just to the papers you cite, but to the papers they cite, and so on. - As for finite simple groups or similar deep theorems: There are more than two options. For example, you might want to use some ideas in the proof and adopt them to a similar situation, or you even want to generalize them, modify them etc ... of course then you have to go into the details and should also present them to the reader of your paper who is not familar with the cited paper. – Martin Brandenburg Jun 6 at 16:49 We are more inclined to read the details when we are young, for several reasons. First, because we are more energetic than when getting old. Second, because we are a little bit idealistic, while everything tends to become relative after we have seen all kinds of behaviors in science. Last, because we are often single-minded at the beginning of our career; we do not have too much to read and we are not disturbed by administrative duties. Of course, It is of great importance to have a clear and complete understanding of the papers that we cite. This is why young mathematicians play such an important role. - 16 I must add, as a young mathematician, that since we know less than our more experienced colleagues, reading papers carefully is the only way to be sure that you understand it correctly. – Andrei Smolensky Jun 6 at 9:51 1 @Andrei: perfectly right. – Denis Serre Jun 6 at 12:15 3 Yes, eventually this idealism just fades away ... – Martin Brandenburg Jun 6 at 16:52 I like papers with big reference lists. Sometimes these citations lists are even more useful for me than papers themselves, since I can find some others papers. And this describes context and rises new questions for further work. So since I like big reference lists in papers by others, I also try to include many references, 95% of them just "someone have done something related", and of course I do not read carefully such papers, just may be understanding what question has been asked and what is the context of this question. However there are some (I remember 2) cases where I need to heavily rely on the results by others without deep checking the proofs (since it would cost too much effort). I do not like doing so, but my experience says me that it should sometimes be done. In general I think it much depends on the area you work, in my field it seems to me simple useful constructions are valuable, rather than complicated proofs. I like what Igor Pak wrote here : http://mathoverflow.net/questions/85269/presenting-work-in-progress/85283#85283 let me quote: "For example, in Enumerative Combinatorics and Discrete Probability, two areas close to me, these priorities are sort of opposite. In the former, there are very few open problems. A nice new formula or a new bijection construction, even if only conjectured and checked by a computer, is already a lot of progress. Once you convince yourself that you can finish the proof, you can start giving talks - people will trust your judgement. However, in Discrete Probability, there are lots of open problems and conjectures, often delicate and technically difficult. I would advise NOT to speak about your results until the proofs are fully written and carefully checked by somebody. 
This might work once or twice, but eventually there will be a seemingly trivial mistake which you overlooked in the first draft. Unfortunately, often enough such mistakes can completely destroy your proof." So I think in the first type of areas deep checking is not relevant, but in the second it is very necessary. (In my (subsub)field situation is like 1, which makes my life more easy). - I think it was Thom who said was that it was immoral for a mathematician to base his work on results he/she did not understand. Sometimes I think this attitude would kill off whole areas of research, but it is probably the way to go. In my case it makes a difference if I need someone's work for to prove a neat corollary or as an argument in the proof of a main result. - 3 I think that this statement is far too general to be universially true. For instance, would it be immoral to use Fermat's last theorem in order to prove something just because you do not understand what Weyl was doing in order to prove it? I think that using a result that is accepted to be true by basically everybody is no immoral. In fact, it would be immoral to not publish a great result just because you need a result for it that you do not fully understand. – Sebastian Jun 6 at 9:51 4 It all depends on the meaning "results that he/she did not understand": Taken literally, this just means "results whose statement (s)he does not understand", which is a perfectly reasonable viewpoint (although, personally, I would not call it "immoral", just "wrong"). If you add the word "proof" in this maxim, then, I would argue that such sentiment was still quite reasonable 50 years ago, when one could understand pretty much any proof from any area of mathematics after spending few months on them. Things changed since then (classification of finite simple groups is another good example). – Misha Jun 6 at 12:41 2 It was Wiles (of course), sorry about that. At least I got Fermat right. – Sebastian Jun 6 at 17:28 2 @Sebastian: Of course the statement is somewhat exagerated, but it is nice to understand things and to explain them correctly after one has understood them. There is to me a marked difference between a paper where an author tries to communicate something (s)he understood and a paper where someone is trying to plant a flag and tell the rest of the world "this is MY theorem". That said, I love to read and digest ideas, so I agree somewhat with Thom not because of moral reasons, but because it brings me pleasure to read a good paper and to make contact with another mind. – alvarezpaiva Jun 6 at 18:05 6 @Misha: You are right in that it is impossible to understand all proofs, but I think that in many cases we can borrow a nice technique from the "Russian school": one can at least try to understand the proof in some simple, but representative cases and master enough examples to feel that the theorem is "right". – alvarezpaiva Jun 6 at 18:08 If it is a recent or not so well known result I am citing then I read completely to make sure it is correct. If it is a well known result that is in essentially any relevant book then it would be enough to know that the result is there and the exposition is good. - [offtopic] Since I cannot comment, let me just throw in an old story I heard from my professor. Some time back, a paper by Einstein and Preuss was being cited all around. Now, we all know some names that collaborated with Einsten, but this Preuss is kinda unknown. Turns out that the journal name of the reference Einstein, A. (1931). Sitzungsber. Preuss. 
Akad. Wiss. ... after some citations, got to be promoted to coauthor. NICE! Here's some (german) reference: http://de.wikipedia.org/wiki/S._B._Preuss Now just to account to the statistics, I know some people that skip the reading of some papers to present seminars and talks, but I think they do check the stuff before writing something up. As to me, I try to read some stuff and then check the references and references of references until I give up. But that doesn't matter, since I'm far from publish anything at all, as it seems. Cheers. -
http://physics.stackexchange.com/questions/tagged/thermodynamics
# Tagged Questions Covers the study of (mostly homogeneous) macroscopic systems from a heat/energy/entropy point of view. Maybe combine with [tag:statistical-mechanics]. learn more… | top users | synonyms 0answers 26 views ### Lattice model completely constrained by boundary data I am dealing with a lattice model that has the peculiar property that if I specify all the spins on the boundary, by local conservation laws, the whole lattice configuration (throughout the whole ... 1answer 70 views ### Does brown but transparent swimming pool water heat significantly faster than western style highly chlorinated pools? Eastern European swimming pools are often brown tinted water. i was told it was the color of the chemical to keep the pools clean, but who knows. These pools did not smell unsanitary and may have even ... 1answer 44 views ### How does an earthen pot keep water cool? I understand that evaporative cooling takes place thanks to small pores contained in the pot and that allow some water to go through and evaporate. However I couldn't understand clearly whether water ... 0answers 43 views ### Air pressure in balloon I have to calculate the air pressure inside of an hot air balloon. After some searching I found out that I can use the ideal gas law: PV = nRT (from Wikipedia) So to get the pressure in the balloon I ... 1answer 80 views ### Is it possible (theoretically) to divide Black Hole into two parts? [duplicate] I have read that it's not possible. 1answer 35 views ### What's the criteria for black hole thermodynamically stability? (And dynamical?) It looks like usual criteria (positivity of Hessian; what geometrically means a cancave of entropy) is no useful, becouse entropy is not additive and not extensive for black hole. Then what is the ... 0answers 39 views ### What is the physical meaning of fact, that Reissner-Nordstrom black hole is thermodynamically unstable? It is known, that Reissner-Nordstrom black hole is thermodynamically unstable [1]. Does it mean, that there is no Reissner-Nordstrom black hole in physical world? Does it mean, that there may be ... 3answers 161 views ### Definition of entropy In physics, the word entropy has important physical implications as the amount of "disorder" of a system. In mathematics, a more abstract definition is used. The (Shannon) entropy of a variable $X$ is ... 0answers 19 views ### Can one get clear ice crystals from a dirty suspension? Euteictic freeze crystallization is a method where an electrolytic solution is cooled and separated into a stream of (relativly) clean, pure ice and a salty brine. I know anectdotally of wine ... 0answers 70 views ### Mechanical Equivalent of Heat Recently I have been looking up James Joule's experiment regarding the mechanical equivalent of heat. After viewing some drawings of the apparatus, I assumed that the lines holding the weights would ... 1answer 45 views ### Thermodynamics, PV diagrams? My teacher told me that the total amount of work done on or by a gas can be represented by the area enclosed in the process in a PV diagram. This is only valid for non isothermic processes, right? 2answers 158 views ### First law of thermodynamics? The first law says that the change in internal energy is equal to the work done on the system (W) minus the work done by the system (Q). However, can $Q$ be any kind of work, such as mechanical work? ... 0answers 47 views ### Is there axiom of entropy's additivity in thermodynamics? [closed] I would be glad for some links on books. 
0answers 20 views ### Total massflow through heat exchanger I am working on a project and I stumbled on a problem. The project is to design a heat pump to replace the old system (actual problem, not some homework problem). There are 100 or so induction units ... 0answers 30 views ### How connected thermodynamical stability and dynamical stability for black holes? Criteria for thermodynamical stability is the convex of entropy. But for black hole entropy is non-additive. 4answers 84 views ### The Preference for Low Energy States The idea that systems will achieve the lowest energy state they can because they are more "stable" is clear enough. My question is, what causes this tendency? I've researched the question and been ... 2answers 31 views ### Gas Circulation Using Pressure Difference Dear all, see attached picture Please, is it possible to have the gas recirculated from the gas phase to the liquid as described in the diagram assuming the gas is not soluble in the water. These ... 0answers 50 views ### Explain to me how this water cooler works? Goodday all, I was recently reading up on a few projects that might be of interest to me when I found "CPU Bong water coolers", there isn't much online on them so I figure I would ask y'all. If you ... 0answers 13 views ### What is the effect of an increase in pressure on latent heat of vaporization? What is latent heat of vaporization ($L_v$) in the first place? Wikipedia seems to indicate that it is the energy used in overcoming intermolecular interactions, without taking into account at all any ... 1answer 67 views ### Error in Sear's and Zemansky's University Physics with Modern Physics 13th Edition (Young and Freeman)? I was reading up on the Ideal Gas Equation in University Physics with Modern Physics by Young and Freeman when I chanced upon a seemingly illogical mathematical equation. Can anyone rectify this ... 3answers 69 views ### How do you determine the heat transfer from a P-V diagram? I doubt this question has been addressed properly before, but if there are similar answers, do direct them to me. I am currently studying the First Law of Thermodynamics, which includes the p-V ... 0answers 19 views ### Heat transfer in fluid between two horizontal plates vs unconfined case I often see the correlation for turbulent heat transfer between liquid cells published by Globe and Dropkin (1959). In the original paper the fluid was confined between two horizontal plates and ... 1answer 679 views ### Beginner Thermal Dynamics Question [closed] A syringe is set up filled with air (mainly N2 and O2) as shown in the diagram below. The surface area of the syringe is 15.3 cm2. The initial pressure inside the syringe is Pi = 114 kPa and the ... 1answer 52 views ### How can I understand a Vortex Tube and its efficiency? A Vortex Tube takes a pressurized input stream, most typically of a gas, and creates two output streams with a temperature differential. Apparently, it has been described as a Maxwell's Demon. Both ... 1answer 58 views ### Why does compressing a piston increase the internal energy? When we compress a piston, its total internal energy increases, however I don't understand why. As the piston compresses, the temperature should change, as the total energy density increases. As a ... 
1answer 42 views ### Michaelis-Menten derivation for 2 enzyme substrates We know that the Michaelis-Menten derivation for the following reaction: $E + S \rightleftharpoons ES \rightarrow E + P$ However, what if the reaction took place in a different scenario whereby: $E ... 1answer 95 views ### Mathematical proof of non-negative change of entropy $\Delta S\geq0$ I understand that we can prove that for any process that occurs in an isolated and closed system it must hold that $$\Delta S\geq0$$ via Clausius' theorem. My question is, how can I prove this in a ... 0answers 48 views ### Calculate how hot PLA will become I am trying to attach the shaft of a brass heating tip to a PLA component. My problem is that the tip will have to reach a temperature of about 200°C and the PLA can only handle a temperature of about ... 2answers 74 views ### Does it make sense to speak about the age of an electron or atom? It's possible that this question is too soft or even quite senseless for this forum, but I will ask nevertheless. Everyday (macroscopic) things, like a grandfather's pendulum clock or the grandfather ... 6answers 2k views ### How do whisky stones keep your drink cold? From a discussion in the DMZ (security stack exchange's chat room - a place where food and drink are important topics) we began to question the difference between how ice and whisky stones work to ... 0answers 27 views ### Calculating the change in entropy in a melting process I have a homework question that I'm completely stumped on and need help solving it. I have a $50\, \mathrm{g}$ ice cube at $-15\,^\circ\mathrm{C}$ that is in a container of $200\, \mathrm{g}$ of water at ... 1answer 79 views ### Is there a relativity-compatible thermodynamics? I am just wondering: the laws of thermodynamics are not Lorentz invariant, they only involve the $T^{00}$ component. Tolman gave a formalism in his book. For example, the first law is replaced by the ... 1answer 44 views ### Calculating the coefficient of thermal expansion in a liquid I am trying to write a MATLAB function that calculates the coefficient of thermal expansion of water from a given temperature. From what I understand the thermal expansion coefficient is calculated as ... 0answers 42 views ### What defines the adiabatic flame temperature? In a case I have to solve, I need to describe the combustion of natural gas (Groningen natural gas to be more specific). However I am having some problems understanding the adiabatic flame ... 0answers 39 views ### Lambda transition data points of $\require{mhchem}\ce{^4He}$ I'm looking to get some data on the lambda transition of $\require{mhchem}\ce{^4He}$. I need the data points of the specific heat vs. temperature graph, if that makes sense. 0answers 23 views ### Finding rms velocity in isothermal process [closed] I think none of these options is correct; I just need someone to confirm, please, as no option matches $V_{rms}$. 1answer 41 views ### If a balloon is continuously filled with air and stays at a constant shape and size, will there be any empty space in the balloon? If a container like a balloon but with constant volume is filled, is it possible to pack air molecules so closely together that they don't have any empty space between them? If so, what would this ... 1answer 36 views ### What is the work done by an ideal gas? What is the work done by an ideal gas when the final pressure and volume are both different from its initial pressure and volume, or when both pressure and volume change? 
1answer 44 views ### Heisenberg's uncertainty and $0\,\mathrm{K}$ temperature When a body is subjected to a temperature of $0\,\mathrm{K}$, it becomes rigid. Hence, if we look at it in quantum terms, the lattice vibration decreases, resulting in no change in the direction of the random velocity, ... 1answer 38 views ### How do I calculate the heat lost or gained by the surroundings? [closed] How do I calculate the heat lost or gained by the surroundings ($Q_{\text{surr}}$) given mass ($m$), change in temp ($\Delta T$), and specific heat ($c$)? What equation would I use? How can I tell whether it's lost ... 0answers 51 views ### When do we have $p = -\frac{\partial F}{\partial V}$? [closed] In what context can we say that $$p = -\frac{\partial F}{\partial V}$$ ? 0answers 30 views ### Negative temperature [duplicate] How can we prove that if a negative-temperature system is in contact with a positive-temperature system, then the heat flow from the first to the second (and finally, the temperature of the second ... 1answer 72 views ### The effects of heat on gravitational fields In boiling soapy water, globs of soap coalesce as the temperature increases to boiling. Does this mean that temperature increases the gravitational pull of bodies? 1answer 64 views ### Finding equation of state from thermal expansion coefficient and isothermal compressibility I'm stuck on a problem that I found in a book (Modern Thermodynamics with Statistical Mechanics, Helrich S., problem 5.2). The text of the problem is: Consider a solid material for which: ... 2answers 43 views ### How is it possible to equate the internal energy at constant volume with the internal energy of an adiabatic process? I hope my question makes sense. My problem is that I have read in numerous textbooks that $n C_V\,dT = -P\,dV$ when deriving the relationship between T and V for an adiabatic process, ... 3answers 130 views ### Integrating factor $1/T$ in 2nd Law of Thermodynamics How would you prove that $1/T$ is the most suitable integrating factor to transform $\delta Q$ into an exact differential in the second law of thermodynamics: $$dS = \frac{\delta Q}{T}$$ where $dS$ is ... 3answers 211 views ### The notion of an adiabatic process in thermodynamics -vs- quantum mechanics I'm confused about the terminology in the two contexts since I can't figure out if they have a similar motivation. Afaik, the definitions state that quantum processes should be very slow to be called ... 1answer 48 views ### Time constant of ice melt I'm familiar with problems of "how much ice can you melt given some amount of energy", but I'm writing to get some clarification on the time constant of this event. This question might be somewhat ... 0answers 53 views ### Newton's cooling law I want to know a few things regarding the practical side of Newton's cooling law. 1) What are the other possible ways of making the external conditions constant, apart from using two calorimeters one within the other? ... 1answer 62 views ### Why does the hydrogen molar heat capacity reach 7/2 R? If a diatomic gas like hydrogen has at most 6 degrees of freedom, why does its molar heat capacity reach $C_V = \frac{7}{2} R$ at high temperatures and not $C_V = \frac{6}{2} R= 3R$? molar heat capacity ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343692660331726, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/53957/frequency-of-light/53971
# Frequency Of Light I am confused on a few topics... What is meant by "Frequency of Light"? Do the photons vibrate, and is that what is known as their frequency? If the photons vibrate, then they have a specific frequency; then what is meant by "higher frequency light" as used in the photoelectric effect? In which directions/axes do they vibrate to have a specific frequency? Why is it that nothing can go faster than the speed of light? Do these two things have any relation to each other? -Thanks. - You might want to divide this up into 2 questions, 1 for photons and frequency, and another for wave-particle duality. – DJBunk Feb 14 at 19:11 1 – joshphysics Feb 14 at 19:23 ## 1 Answer The frequency of light refers to the relation $$\lambda\nu = c$$ where $\lambda$ is the wavelength, $c$ the speed of light and $\nu$ the frequency of the light (also sometimes denoted by $f$). Note that this seems to suggest that light consists of waves. However, this is misleading at best. Light also exhibits particle-like behaviour. The photoelectric effect is a typical example, since it was this experiment that led Einstein to postulate the existence of photons, "light-particles" if you will.$^1$ In more detail, it was considered paradoxical that the emission of electrons did not depend on the amplitude of the incident light; rather, it depended on its frequency. Einstein clarified this problem by assuming the existence of photons, individual entities ("quanta") with many particle-like attributes, e.g. energy given by $E = h\nu$. This relation is exactly what Einstein used to resolve the paradox: only a photon with a high enough frequency $\nu$ has enough energy to knock an electron out of the material it is incident on. As for your second question, why $c$ is the maximum speed and if there is a relation between this and the frequency of light, this answer could be a place to start: http://physics.stackexchange.com/a/2288/16660. But I would make it into a different question if you want more information. $^1$ Although you hear people call them that, it's misleading to speak of particles because it suggests light consists of particles. Physicists are careful to use the term quanta instead of particles, but what exactly those quanta are, no one can satisfactorily answer. All we know is that they exhibit both particle- and wave-like behaviour. -
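To make the point about "higher frequency light" concrete, here is a small numerical sketch (my own addition, not part of the original answer). It uses the Planck relation $E = h\nu$ together with $\lambda\nu = c$; the work-function value quoted for sodium is an assumed round textbook figure.

```python
# Photon energy from the Planck relation E = h*nu (constants hardcoded below).
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    """Energy of a single photon of the given wavelength, in eV."""
    nu = c / (wavelength_nm * 1e-9)   # frequency, from lambda * nu = c
    return h * nu / eV

work_function = 2.3  # eV; an assumed typical value for sodium

for wavelength in (700, 550, 400, 300):   # red light -> ultraviolet
    E = photon_energy_eV(wavelength)
    verdict = "ejects electrons" if E > work_function else "no photoemission"
    print(f"{wavelength} nm -> {E:.2f} eV per photon: {verdict}")
```

Whatever the amplitude of the light (i.e. however many photons arrive per second), each individual photon carries only $h\nu$; this is why only sufficiently high-frequency light ejects electrons in the photoelectric effect.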
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9683243036270142, "perplexity_flag": "head"}
http://gilkalai.wordpress.com/2009/03/13/extremal-combinatorics-on-permutations/?like=1&source=post_flair&_wpnonce=5092e672ce
Gil Kalai’s blog ## Extremal Combinatorics on Permutations Posted on March 13, 2009 We talked about extremal problems for set systems: collections of subsets of an $n$-element set – Sperner’s theorem, the Erdos-Ko-Rado theorem, and quite a few more. (See here, here and here.) What happens when we consider collections of permutations rather than collections of sets? Not so much is known about extremal problems for families of permutations. David Ellis, Ehud Friedgut, and Haran Pilpel have recently proved an old conjecture of Deza and Frankl regarding an analog for permutations of the Erdos-Ko-Rado Theorem. They showed that for every $k$, if $n$ is sufficiently large, any family of permutations on an $n$-element set such that every two permutations in the family agree in at least $k$ places contains at most $(n-k)!$ permutations. The proof uses spectral methods and representations of the symmetric group. This entry was posted in Combinatorics and tagged Erdos-Ko-Rado theorem, Extremal combinatorics, Permutations. ### 6 Responses to Extremal Combinatorics on Permutations 1. James Lee says: This is a very cool paper, with the possible exception of the line “we finally proceed to eat the pudding.” Is that British or Hebrew? 2. Gil Kalai says: I wonder what other theorems of extremal combinatorics extend nicely to collections of permutations. Also, can we think of interesting extensions to other combinatorial structures? 3. Gil says: For example, is there an analog of Sperner’s theorem for permutations? 4. Shreevatsa says: @James Lee: That line is about “the proof of the pudding is in the eating”—a very English proverb, not Hebrew (AFAIK). 5. Janos Korner says: Dear Gil, Sperner’s problem (which in fact was Schreier’s, who asked Sperner this question) is a question about the asymptotic growth of the largest symmetric clique in the power graphs of a directed one-edge graph. In this optic we have generalized this problem to permutations in arXiv:0809.1522. This is the last in a series of papers on the “permutation capacity” of graphs and digraphs. Needless to say, the problems are generalizations of Shannon capacity from finite graphs to infinite graphs and digraphs, so most of them are unsolved and we only have bounds, some of which are non-trivial. janos 6. Gil says: Thanks a lot, Janos. This is very interesting. Here is the link to the paper by Gerard Cohen, Emanuela Fachini, Janos Korner, and here are its opening lines. How many permutations of the first n natural numbers can we find such that any two of them place two consecutive natural numbers somewhere in the same position? This problem was introduced in [10] where the authors conjectured that the maximum number of such permutations is exactly the middle binomial coefficient ${{n} \choose {n/2}}$. I suppose what I had in mind are questions about collections of permutations which form an antichain with respect to the strong (Bruhat) ordering or with respect to the weak order, or perhaps some other order. I am not sure if these questions are the same or if they were studied, but probably there are various ways to extend Sperner’s theorem to permutations.
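To see what an extremal family looks like in the simplest case $k=1$ of the Deza–Frankl conjecture discussed above, here is a small sketch of my own (not from the post or its comments): the “star” of all permutations fixing a single point is pairwise-intersecting and has exactly $(n-1)!$ members, so the Ellis–Friedgut–Pilpel bound is attained.

```python
from itertools import permutations

n = 4
# All permutations of {0, ..., n-1} that fix the point 0.
star = [p for p in permutations(range(n)) if p[0] == 0]

# Any two members agree in at least one position (namely position 0), so this
# is an intersecting family in the k = 1 sense, of the maximum size (n-1)!.
assert all(any(p[i] == q[i] for i in range(n)) for p in star for q in star)
print(len(star))  # 6 = (4-1)!
```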
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226958751678467, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/89118?sort=votes
## Categorification and Schur functors Are Schur functors and categorification somehow related? If yes, then probably by looking at Schur functors (which I know) one can illustrate with this example why "categorification" (which I do not know) is so important/popular now? (I am interested to learn something about "categorification", but I would prefer to have some "good entry point", meaning to relate it to what I know.) Schur functors are functors from the category of vector spaces to itself. http://en.wikipedia.org/wiki/Schur_functor For example, take a vector space $V$ and send it to its $n$-th symmetric power $S^n(V)$ (or antisymmetric power). "Categorification" - briefly looking at MO discussions about it and abstracts of some papers, I got the impression that it is about realizing certain algebras in terms of functors on some categories. For example, the abstract of Khovanov's lectures http://arxiv.org/abs/1008.5084 contains the following sentence: "diagrammatic categorification of positive halves of quantum groups". Is my understanding correct? PS http://mathoverflow.net/questions/4841/what-precisely-is-categorification http://mathoverflow.net/questions/89001/algebraic-relations-between-schur-functors - 1 The nlab article on Schur functors is highly recommended ncatlab.org/nlab/show/Schur+functor. The message is that the category of Schur functors is just End(U), where U is the forgetful functor from cocomplete (or at least Cauchy-complete) tensor categories to categories. – Martin Brandenburg Feb 22 2012 at 13:59 ## 1 Answer Categorification can be thought of as the process of replacing equalities with isomorphisms (in some category). A basic example is replacing a numerical combinatorial identity such as $$2^n = \sum_{k=0}^n {n \choose k}$$ with a bijection between two sets (an isomorphism in $\text{Set}$); in this case, between the set of subsets of $\{ 1, 2, \ldots, n \}$ and the disjoint union of the sets of subsets of size $k$ over all $k$. As another example, the category $\text{FinSet}$ of finite sets itself categorifies the rig (semiring) $\mathbb{N}_{\ge 0}$ of non-negative integers; the coproduct categorifies addition and the product categorifies multiplication. (Also taking Hom-sets categorifies exponentiation.) In this way we replace equalities such as $1 + 1 = 2$ with isomorphisms $1 \sqcup 1 \cong 2$ (where $1$ is the set with one element, $2$ is the set with two elements, and $\sqcup$ is the coproduct). Schur functors categorify the theory of symmetric functions (specifically of Schur functions). Taking the direct sum of Schur functors categorifies addition of symmetric functions, taking the tensor product categorifies multiplication of symmetric functions, and composing categorifies plethysm. So here we replace equalities between expressions involving symmetric functions with natural isomorphisms between expressions involving Schur functors. As far as the linked arXiv paper goes, the idea is to replace a ring $A$ by a category $C$ with direct sums and tensor products such that the latter distributes over the former, such as the category of $(R, R)$-bimodules for some ring $R$. Equality of elements in $A$ is then replaced by isomorphism between objects in $C$. Taking the Grothendieck group of $C$ (a form of decategorification) should then recover the original ring. 
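(A concrete instance of the last two paragraphs, added here as an illustration rather than quoted from the answer: taking dimensions decategorifies vector-space constructions to arithmetic, $\dim(V \oplus W) = \dim V + \dim W$ and $\dim(V \otimes W) = \dim V \cdot \dim W$. A Schur-functor example of the same phenomenon is the natural isomorphism $$\Lambda^2(V \oplus W) \cong \Lambda^2 V \oplus (V \otimes W) \oplus \Lambda^2 W,$$ which on taking dimensions becomes the numerical identity $\binom{m+n}{2} = \binom{m}{2} + mn + \binom{n}{2}$ with $m = \dim V$, $n = \dim W$.)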
One reason to attempt to do this is that the category $C$ may have a distinguished set of objects that provides a distinguished set of generators for $A$. For example, the Hecke algebra $H_n$ of $S_n$ turns out to be the Grothendieck group of the category of Soergel bimodules. Among these we can distinguish the indecomposable modules, and these turn out to give the Kazhdan-Lusztig basis of $H_n$. - @Qiaochu Yuan Thank you for the answer! What do you mean by "composing" symmetric functions? – Alexander Chervov Feb 21 2012 at 18:30 Added phrase: "For example, the abstract of Khovanov's lectures arxiv.org/abs/1008.5084 contains the following sentence: "diagrammatic categorification of positive halves of quantum groups"." Can you comment on it? It does not seem like a categorification of "identities". – Alexander Chervov Feb 21 2012 at 18:34 1 @Alexander: Schur functors are the things that can be composed. Decategorifying this construction gives rise to plethysm of symmetric functions. – Qiaochu Yuan Feb 21 2012 at 18:38 @Alexander: it is. See Proposition 2 on p. 23. – Qiaochu Yuan Feb 21 2012 at 18:43 @Qiaochu Can plethysm be described solely in terms of symmetric functions, and what does it mean from this point of view? – Alexander Chervov Feb 22 2012 at 5:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.863286554813385, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/169238/finding-height-of-a-triangle
# Finding Height of a Triangle I am having a problem with the following question: From the triangle (shown in a figure that is not reproduced here, with two sides of lengths 6 and 9), which is greater: the area of the triangle ABC, or 27? Now, how would I find the height of such a triangle? - 4 You do not need to. Note the height is less than 9. – David Mitra Jul 10 '12 at 22:39 ## 2 Answers From the information given, it is not possible to determine the height of the triangle. However, you can still answer the question. Taking $6$ as the base of the triangle, you know that the height must be less than or equal to $9$. It is less than $9$ if the triangle is not right. So the area of the triangle is less than or equal to $\frac{1}{2}(9)(6) = \frac{1}{2}(54) = 27$. Hence the area of the triangle is less than or equal to $27$. If the triangle is not right, then the area is less than 27. - Were the angle at $B$ right, the triangle would have an area of 27. That gives the triangle the maximum height. Hence, the area of the triangle is less than 27 as drawn, since the angle at $B$ is nonright. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507822394371033, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/tagged/public-key?page=5&sort=newest&pagesize=15
# Tagged Questions Public key cryptography describes all cryptographic algorithms which have a pair of keys, one key that performs encryption and one key that performs decryption. One of these keys is made available publicly, allowing anyone to send messages that only the holder of the private key can read. You should ... 3answers 292 views ### Is key size the only barrier to the adoption of the McEliece cryptosystem, or is it considered broken/potentially vulnerable? A recent paper showed that the McEliece cryptosystem is not, unlike RSA and other cryptosystems, weakened as drastically by quantum computing because strong Fourier sampling cannot solve the hidden ... 8answers 845 views ### RSA with small exponents? Just to establish notation with respect to the RSA protocol, let $n = pq$ be the product of two large primes and let $e$ and $d$ be the public and private exponents, respectively ($e$ is the inverse ... 2answers 144 views ### Protocol to generate Client Certificates at the start of a SSL session automatically? A more secure form of 'cookie' could be created for SSL communications through the following method. The client generates and requests the server to sign a certificate. Then the client authenticates ... 7answers 1k views ### How can SSL secure a two-way communication with only one key-pair? As I understand it, SSL involves the use of a public-private key pair. How does this enable two-way communication? Suppose I have some server with which I wish to communicate securely. I connect to ... 3answers 3k views ### How can I use asymmetric encryption, such as RSA, to encrypt an arbitrary length of plaintext? RSA is not designed to be used on long blocks of plaintext like a block cipher, but I need to use it to send a large message. How can I do this?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922054648399353, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/209547/how-do-i-differentiate-this-function-with-e-to-the-x-and-fraction
# How do I differentiate this function with e to the x and a fraction? I am lost on differentiating this one function, because I'm kind of confused with the $9e^x$ and the fraction part. Maybe if someone can guide me through the steps that would be awesome. $$y=9e^x+\frac{2}{\sqrt[3]{x}}$$ - Is this homework? – VF1 Oct 8 '12 at 23:58 ## 3 Answers For a function $f(x)$ and its derivative with respect to $x$, $f'(x)$, $$\frac{d}{dx} c f(x) = c f'(x)$$ You may want to look up what the derivative of the exponential function is. That's an important one to know. Also, for the fraction part, consider that: $$\frac{2}{\sqrt[3]{x}} = \frac{2}{x^{1/3}} = 2 x^{-1/3}$$ For this one you may want to look up the power rule. - This is a rather cookbook answer, but... As said above, use that $\frac{d}{dx}cf(x)=c\frac{d}{dx}f(x)$ and $\frac{2}{\sqrt[3]{x}}=2x^{-\frac{1}{3}}.$ Follow that up with the fact that $\frac{d}{dx}e^x=e^x$ and $\frac{d}{dx}x^m=mx^{m-1}.$ The first follows from many things (particularly the definition of $e^x$). The second follows from the definition of the derivative. - Just for instruction's sake, I'm going to solve this with logarithms. Bear in mind: $$\log(a \cdot b) = \log a + \log b$$ and $$\log(a^x) = x \log a$$ and $$\log e = 1$$ Let $y_1 = 9e^x$ and $y_2 = 2x^{-1/3}$. Now, $$\log y_1 = \log 9 + x$$ Differentiating w.r.t. $x$, $$\frac{1}{y_1} \frac{dy_1}{dx} = 1$$ Thus, $\frac{dy_1}{dx} = y_1 = 9e^x$ (which is not surprising since that's a property of the exponential function). Now, $\log y_2 = \log 2 -\frac{1}{3} \log x$. Hence, differentiating w.r.t. $x$, $$\frac{1}{y_2} \frac{dy_2}{dx} = -\frac{1}{3} \frac{1}{x}$$ Thus, $$\frac{dy_2}{dx} = -\frac{2}{3} x^{-4/3}$$ And therefore, $$\frac{dy}{dx} = 9e^x -\frac{2}{3} x^{-4/3}$$ -
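As a quick check of the final result above (an addition of mine, not part of the original thread), a computer algebra system returns the same derivative:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = 9*sp.exp(x) + 2*x**sp.Rational(-1, 3)

dy = sp.diff(y, x)
print(dy)  # equal to 9*exp(x) - (2/3)*x**(-4/3)
```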
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224700331687927, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/217727/probability-of-a-map-being-surjective
# “Probability” of a map being surjective What is the probability of randomly putting $n$ elements into $k$ boxes ($k\leq n$) such that there is no empty box? I have two different ideas: • I could use the principle of inclusions and exclusions with $A_i=\{\text{box } i \text{ is empty}\}$: \begin{align}P(\text{no empty boxes})&=1-P\left(\bigcup_{i=1}^k A_i\right)=1-\sum_{i=1}^{k-1} (-1)^{i-1} \binom{k}{i}\left(\frac{k-i}{k}\right)^n\\&=\sum_{i=0}^{k-1}(-1)^i\binom{k}{i}\left(\frac{k-i}{k}\right)^n\end{align} • There are $\binom{n+k-1}{k-1}$ different ways of putting $n$ elements in $k$ boxes. Putting the elements in such that no box remains empty should be the same as putting $n-k$ elements in $k$ boxes i.e. $\binom{n-1}{k-1}$. So the probability that no box is empty should be \begin{align}P(\text{no empty boxes})&=\frac{\binom{n-1}{k-1}}{\binom{n+k-1}{k-1}}\end{align} The two solutions are different. I'm pretty sure the first one is correct. Where is the mistake in the second one? Is the problem that the different ways of putting the elements in the boxes are not equally probable? Can I fix the second attempt somehow? - I don't follow you on some things in the second approach. If we have 10 boxes and 12 things, how is putting the things in so none are empty analogous to putting $n-k=12-10=2$ things into 10 boxes? I think you're assuming that during the first ten placements the bins get filled, and then you put the remaining two somewhere. But I'm not seeing why you can assume that the boxes get filled as soon as possible. And I also don't see your n-1 choose k-1 as being the number of ways to put n-k things into k boxes. – coffeemath Oct 21 '12 at 4:49 In the second approach I ignore the order the elements get put in (maybe that's the problem: the different ways are not equally probable). So I'd argue putting 12 elements in 10 boxes with the constraint that all boxes get filled is the same as putting 2 elements in 10 boxes. The number of ways of doing this is $\binom{n-1}{k-1}=\binom{11}{9}$ since I can view the ten boxes as $9$ barriers and putting 2 elements in 10 boxes as choosing 9 barriers out of $2+9=11$ elements. – Julian Oct 21 '12 at 7:13 ## 1 Answer To see what’s wrong with the second approach, look at the case $n=k=2$. Call the $2$ objects $a$ and $b$. The $4$ equally likely outcomes are: $$\begin{array}{c|c} \text{Box }1&\text{Box }2\\ \hline a,b\\ a&b\\ b&a\\ &a,b \end{array}$$ Clearly the desired probability is $\frac12$. When you say that there are $\binom{2+2-1}{2-1}=3$ possible distributions of the $2$ objects amongst the $2$ boxes, you’re talking about indistinguishable objects: the second and third cases in the table above are counted as a single distribution of the objects. The appropriate table is now this one, in which only one of the three distributions has objects in both boxes. $$\begin{array}{c|c} \text{Box }1&\text{Box }2\\ \hline x,x\\ x&x\\ &x,x \end{array}$$ Your second calculation gives the probability that a randomly selected distribution of $n$ indistinguishable objects amongst $k$ boxes has no empty boxes. In your case the objects being distributed really are distinguishable: there’s a difference between the identity function on $\{0,1\}$ and the function $f(x)=1-x$, though both are surjective. Your second calculation, however, treats these as the same function: each puts one object in each of the two boxes, so to speak. - Thank you! I think I got it. So the first calculation is correct? – Julian Oct 21 '12 at 18:20 @Julian: You’re welcome. Yes, it looks okay. 
(And even if I missed some minor algebraic error, the method is definitely okay.) – Brian M. Scott Oct 21 '12 at 18:24
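For a quick numerical sanity check of the inclusion–exclusion formula in the first approach (this snippet is my own addition, not part of the thread), one can compare it with a brute-force count over all $k^n$ maps for small $n$ and $k$:

```python
from itertools import product
from math import comb

def prob_formula(n, k):
    """Inclusion-exclusion: sum_{i=0}^{k-1} (-1)^i C(k,i) ((k-i)/k)^n."""
    return sum((-1)**i * comb(k, i) * ((k - i) / k)**n for i in range(k))

def prob_bruteforce(n, k):
    """Fraction of all k^n assignments of n labelled elements that leave no box empty."""
    maps = list(product(range(k), repeat=n))
    onto = [m for m in maps if set(m) == set(range(k))]
    return len(onto) / len(maps)

for n, k in [(2, 2), (4, 3), (5, 4)]:
    print(n, k, round(prob_formula(n, k), 6), round(prob_bruteforce(n, k), 6))
```

For $n=k=2$ both give $1/2$, matching the table in the answer, while the second (indistinguishable-objects) formula gives $1/3$ instead.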
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121536612510681, "perplexity_flag": "head"}
http://terrytao.wordpress.com/2009/01/19/an-inverse-theorem-for-the-uniformity-seminorms-associated-with-the-action-of-finfty_p/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # An inverse theorem for the uniformity seminorms associated with the action of F^infty_p 19 January, 2009 in math.DS, paper | Tags: characteristic factor, cocycles, finite fields, Gowers uniformity norm, polynomials, Tamar Ziegler, Vitaly Bergelson Vitaly Bergelson, Tamar Ziegler, and I have just uploaded to the arXiv our paper “An inverse theorem for the uniformity seminorms associated with the action of $F^\infty_p$“. This paper establishes the ergodic inverse theorems that are needed in our other recent paper to establish the inverse conjecture for the Gowers norms over finite fields in high characteristic (and to establish a partial result in low characteristic), as follows: Theorem. Let ${\Bbb F}$ be a finite field of characteristic p.  Suppose that $X = (X,{\mathcal B},\mu)$ is a probability space with an ergodic measure-preserving action $(T_g)_{g \in {\Bbb F}^\omega}$ of ${\Bbb F}^\omega$.  Let $f \in L^\infty(X)$ be such that the Gowers-Host-Kra seminorm $\|f\|_{U^k(X)}$ (defined in a previous post) is non-zero. 1. In the high-characteristic case $p \geq k$, there exists a phase polynomial g of degree <k (as defined in the previous post) such that $|\int_X f \overline{g}\ d\mu| > 0$. 2. In general characteristic, there exists a phase polynomial of degree <C(k) for some C(k) depending only on k such that $|\int_X f \overline{g}\ d\mu| > 0$. This theorem is closely analogous to a similar theorem of Host and Kra on ergodic actions of ${\Bbb Z}$, in which the role of phase polynomials is played by functions that arise from nilsystem factors of X.  Indeed, our arguments rely heavily on the machinery of Host and Kra. The paper is rather technical (60+ pages!) and difficult to describe in detail here, but I will try to sketch out (in very broad brush strokes) what the key steps in the proof of part 2 of the theorem are.  (Part 1 is similar but requires a more delicate analysis at various stages, keeping more careful track of the degrees of various polynomials.) – A model case – The theorem needs to be proven for all functions f with non-zero Gowers-Host-Kra norm in all ergodic systems, but to simplify the exposition let us look at a fairly specific type of function in a fairly specific type of system.  Namely, we will assume that the system X is a circle extension $X = Y \times_\rho S^1$ of another ergodic ${\Bbb F}^\omega$-system Y by a cocycle $\rho: {\Bbb F}^\omega \times Y \to S^1$, i.e. a measurable function obeying the cocycle identity $\rho(g+h,y) = \rho(g,y) \rho(h,T_g y)$ (1) for all $g,h \in F^\omega$ and almost every $y \in Y$.  The circle extension X is defined as the Cartesian product $\{ (y,u): y \in Y, u \in S^1 \}$ with shift map $T_g: (y,u) \mapsto (T_g y, \rho(g,y) u)$ (here we abuse notation a bit and use $T_g$ to simultaneously denote the action on X and on Y).  The function $f\in L^\infty(X)$ we shall pick is the vertical coordinate function $f(y,u) := u$. Finally, we make the assumption that the base space Y (or more precisely, its $\sigma$-algebra) is already generated by phase polynomials of bounded degree. Very roughly speaking, we can reduce the general case to the above case by an induction on k, together with some standard theory of Mackey and Furstenberg on isometric extensions and group extensions (cf. 
my lectures on the topological dynamics counterpart to these topics), and of Furstenberg-Weiss on characteristic factors, together with some Fourier analysis to reduce matters from (abelian) group extensions to circle extensions. These reductions are standard in the ergodic theory literature, but somewhat technical, and I will not discuss them further here. – Cocycles and coboundaries – Our task is to show that the vertical coordinate function $f(y,u)=u$, which we assume to have non-zero Gowers-Host-Kra seminorm, correlates with some bounded degree polynomial. To do this, it suffices to express the cocycle $\rho$ in the form $\rho(g,y) = P(g,y) \Delta_g F(y)$ (2) for some measurable functions $F: Y \to S^1$, $P: {\Bbb F}^\omega \times Y \to S^1$, where $\Delta_g F(y) := F(T_g y) \overline{F(y)}$ and $P(g,\cdot)$ is a phase polynomial of bounded degree for each $g \in {\Bbb F}^\omega$. Indeed, we can rearrange (2) as $\Delta_g [ f(y,u) / F(y) ] = P(g,y)$ and so all derivatives of $f(y,u) / F(y)$ are phase polynomials of bounded degree, which implies that $f(y,u)/F(y)$ is itself a phase polynomial of bounded degree, thus $f(y,u)$ is the product of F with a phase polynomial of bounded degree. But $F \in L^\infty(Y)$, which is generated by phase polynomials of bounded degree, and so $f(y,u)$ can be approximated by a linear combination of phase polynomials of bounded degree, and thus in particular must correlate with at least one of them, as desired. It thus remains to establish (2). Cocycles of the form $\Delta_g F(y)$ are known as coboundaries, and two cocycles which differ by a coboundary are said to be cohomologous [see my previous post for further discussion]. Thus, our task is to show the cocycle $\rho$ is cohomologous to a phase polynomial of bounded degree. – The finite type condition – We have yet to use the hypothesis that the vertical coordinate function $f: (y,u) \mapsto u$ has non-zero $U^k(X)$ norm. For this, we use a key observation of Host-Kra that this hypothesis implies (in fact it is basically equivalent to) a certain finite type condition on the cocycle $\rho$. We derive this condition somewhat informally as follows. If $\|f\|_{U^k(X)}$ is non-zero, then this means that the expected value of the expression $d^{[k]} f( x ) := \prod_{\omega \in \{0,1\}^k} f(x_\omega)^{\hbox{sgn}(\omega)}$ (3) is non-zero, where $x = (x_\omega)_{\omega \in \{0,1\}^k}$ ranges over “k-dimensional parallelopipeds” in X. (This can be made precise by using the cubic measures $\mu^{[k]}$ of Host and Kra, but let us ignore this technical detail here.) One way to make a k-dimensional parallelopiped is to start with a (k-1)-dimensional parallelopiped $y = (y_\omega)_{\omega \in \{0,1\}^{k-1}}$, shift it by a group element g, and glue the two (k-1)-dimensional parallelopipeds together. When one does so, the quantity (3) simplifies to $d^{[k-1]} \rho(g,y) := \prod_{\omega \in \{0,1\}^{k-1}} \rho(g,y_\omega)^{\hbox{sgn}(\omega)}$.  (4) Note that as $\rho$ does not depend on the vertical coordinate u, one can view the $y_\omega$ as living in Y rather than in X. We expect (4) to have a non-zero average in some sense.  If we define the limiting average $F(y) := {\Bbb E}_g d^{[k-1]} \rho(g,y)$ (where ${\Bbb E}_g$ denotes the limit after averaging g along a Følner sequence) we thus expect F to be non-zero at least some of the time; to simplify the discussion let us pretend that it is in fact always non-zero.  
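(A quick remark added here for readers newer to this terminology; it is not part of the original post, but is a standard check. Any coboundary automatically obeys the cocycle identity (1): writing out the definition, $$\Delta_{g+h} F(y) = F(T_{g+h} y)\,\overline{F(y)} = \big(F(T_h T_g y)\,\overline{F(T_g y)}\big)\,\big(F(T_g y)\,\overline{F(y)}\big) = \Delta_h F(T_g y)\, \Delta_g F(y),$$ which is exactly (1) with $\rho(g,y)$ replaced by $\Delta_g F(y)$. This is what makes it meaningful to ask, as in (2), whether the cocycle $\rho$ differs from a phase polynomial by a coboundary.)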
The cocycle equation (1) implies that $F( T_g y ) = d^{[k-1]} \rho(g, y) F(y)$ and so, after normalising $F$ to $F/|F|$ (and renaming this normalisation $F$ again), we see that $d^{[k-1]} \rho(g, y)$ is a coboundary: $d^{[k-1]} \rho(g,y) = \Delta_g F(y)$.  (5) When $\rho$ obeys this condition, we say that it is of type $<k-1$.  The challenge is now to “integrate” the derivative $d^{[k-1]}$ out of (5) and obtain (2). – Vertical differentiation – The parallelopiped-based derivatives $d^{[k-1]}$ are difficult to work with directly (unless k is very small).  It is convenient to replace them with a simpler type of derivative, a “vertical” derivative.  To explain this, we first decompose Y as an extension $Y = Z \times_\phi U$ of a “simpler” system Z (where “simpler” means, roughly speaking, that we can generate it using polynomials of strictly lower degree than what one needs for Y), where U is a compact abelian group and $\phi: {\Bbb F}^\omega \times Z \to U$ is a cocycle.  It turns out that we can always decompose Y in this fashion, because Y is generated by polynomials; furthermore we can make the cocycle $\phi$ a polynomial of bounded degree. Once one does this, the system Y not only has an action of ${\Bbb F}^\omega$, but also has an action of the vertical group U, which commutes with the ${\Bbb F}^\omega$ action.  Given any $u \in U$, we can now define the vertical derivative $\Delta_u f$ of any function $f \in L^\infty(Y)$ by the formula $\Delta_u f(y,v) := f(y,uv) \overline{f(y,v)}.$ It turns out that the general theory of cubic measures, as worked out by Host and Kra, allows one to relate vertical derivatives $\Delta_u$ to the parallelopiped derivatives $d^{[k-1]}$ (basically, the point is that the group U happens to preserve the cubic measures $\mu^{[k-1]}$ when applied correctly).  Because of this, it is possible to deduce from (5) the “Conze-Lesigne type” equation $\Delta_{u_1} \ldots \Delta_{u_{k-1}} \rho(g,y) = \Delta_g F_{u_1,\ldots,u_{k-1}}(y)$ (5′) for all $u_1,\ldots,u_{k-1} \in U$.  This equation is more tractable than (5) for a number of reasons, one of which is that y now lives in the system Y, rather than being a $(k-1)$-dimensional parallelopiped of points in Y.  Our task is now to repeatedly integrate away the vertical derivatives on the left-hand side of (5′) to obtain (2). This procedure will be done one derivative at a time.  Let us focus on the final step.  For this, we assume that we have already obtained an equation of the form $\Delta_u \rho(g,y) = P_u(g,y) \Delta_g F_u(y)$ (5”) for all $u \in U, g \in {\Bbb F}^\omega$, and almost every $y \in Y$, where $P_u(g,\cdot)$ is a phase polynomial of bounded degree and $F_u(y)$ is a measurable function taking values in $S^1$; thus every vertical derivative of $\rho$ is cohomologous to a bounded degree phase polynomial.  (A technical but important point: it is possible to select $P_u$ and $F_u$ so that $F_u(y)$ is jointly measurable in both u and y, and similarly for $P_u(g,y)$.  This is ultimately possible because the space of bounded degree phase polynomials turns out to be discrete modulo constants.)  The task is now to integrate (5”) to obtain (2).  Actually, we will establish a weaker form of (2), namely $\rho(g,y) = P(g,y) \rho'(g,z) \Delta_g F(y)$ (6) where $y = (z,u)$ and $\rho': {\Bbb F}^\omega \times Z \to S^1$ is some measurable function.  One can show that if (6) holds, then $\rho'$ obeys similar properties to the cocycle $\rho$, in particular the finite type condition (5).   
(It turns out to not quite be a cocycle, though, but merely a quasi-cocycle: a cocycle modulo phase polynomials.  This is an important technical difficulty, but let us ignore it for this discussion.)  Since Z is a simpler system than Y, it is possible to combine (6) with a suitable inductive argument to recover (2). To summarise our progress so far, we are now at the point where every vertical derivative of $\rho$ is cohomologous to a polynomial (equation (5”)), and wish to integrate this information to conclude that $\rho$ itself is cohomologous to a polynomial, times a vertically-invariant function. – Reduction to the finite U case – We now analyse the equation (5”) further.  We have a cocycle relation $\Delta_{uv} \rho(g,y) = \Delta_u \rho(g,y) \Delta_v \rho(g,V_u y)$ where $V_u: (z,v) \mapsto (z,uv)$ denotes the action of vertical rotation by u.  Inserting this relation into our hypothesis (5”), we see that $\Delta_g [F_{uv} / ( F_u V_u F_v )]$ is a phase polynomial of bounded degree, and thus $F_{uv} / (F_u V_u F_v)$ is also a phase polynomial of bounded degree.  In other words, $F_u$ is a quasi-cocycle in u; it obeys the cocycle equation $F_{uv} = F_u V_u F_v$ modulo phase polynomials of bounded degree. Suppose temporarily that $F_u$ were a genuine cocycle in u.  Then, it is possible (using some Fourier analysis) to conclude that $F_u$ is in fact a coboundary in u, $F_u = \Delta_u F$; this is basically because the action of U on Y is free and thus has no non-trivial cohomology.  (One can get some idea of this by looking at the case when U and Y are finite.)  We can now rewrite (5”) as $\Delta_u (\rho(g,y)/\Delta_g F(y)) = P_u(g,y)$. In particular, $P_u$ must be a cocycle in U, and is thus also a coboundary: $P_u = \Delta_u P$.  Since the $P_u$ are phase polynomials of bounded degree, and the cocycle $\phi$ underlying the extension $Y = Z \times_\phi U$ is also a phase polynomial of bounded degree, it is possible to ensure that the “antiderivative” $P$ of the $P_u$ is also a phase polynomial of bounded degree.  (Verifying this “integration lemma” requires a certain amount of algebraic computation – analogous to that required to show that the composition of two polynomials is again a polynomial, as was done for instance in this previous post of mine – but we will ignore this issue here.)  We thus see that $\rho(g,y) / (P(g,y) \Delta_g F(y))$ has vanishing u derivative for every u, and is thus of the form $\rho'(g,z)$ for some $\rho'$, thus yielding (6) as desired.  Thus we see that we would be done if F were a genuine cocycle in U. Unfortunately, F is only a quasi-cocycle in U; the cocycle equation only holds modulo phase polynomials.  However, one can show that the space of phase polynomials (after quotienting out by constants) is discrete and thus at most countable.  (This discreteness is closely related to the local testability of polynomials that was mentioned in a previous post.)  Because of this, and the countable pigeonhole principle, we can find a single polynomial P for which the modified cocycle equation $F_{uv} = F_u (V_u F_v) P$ holds (modulo constants) for a set E of (u,v) of positive measure in $U \times U$. Now let us recall a basic fact from measure theory: if E is a positive measure subset of a compact abelian group U, then the difference set E-E contains an open neighbourhood of the origin.  (This is basically because sets of positive measure can be approximated by open sets, as per Littlewood’s principle.)  
Using facts like this, and manipulating the above modified cocycle equation a few times, one eventually arrives at the conclusion that F is in fact a genuine cocycle on an open neighbourhood of the origin in U.  (Here I am oversimplifying a number of technical details, which can be found in the paper.  I will mention though that it was crucial here that $F_u$ was measurable in u, a fact alluded to earlier.) Up until now, we have not used at all any properties of the underlying group ${\Bbb F}^\omega$ (other than that it is abelian, discrete, and amenable).  Now, for the first time, we exploit the finite characteristic.  Because of this finite characteristic, all phase polynomials take values in roots of unity (modulo constants); since Y is given by polynomials, it is possible to show that U is then a torsion group (in fact, one can show it is the direct product of at most countably many cyclic groups).  As a consequence of this, every open neighbourhood of the origin in U contains an open subgroup.  Thus, we have managed to show that $F_u$ is a cocycle, not for $u \in U$, but rather for $u \in U'$ where U’ is an open subgroup of U. This is close enough to what we originally wanted that we can try repeating the above arguments.  Doing this, we find at the end of the day that we can “quotient out” U’ from the problem and effectively reduce U to the quotient group U/U’.  Since U is compact and U’ is open, U/U’ is finite.  Thus, we have effectively reduced matters to the case in which U is finite. – The finite U case – Now let U be finite.  The classification of finite abelian groups tells us that U is the product of cyclic groups.  For simplicity let us suppose that U is a single cyclic group, and specifically the group $C_p$ of $p^{th}$ roots of unity; this is not the most general case, but it illustrates the method (the general case is handled by a more complicated version of the arguments below). Let e be the generator of U, thus $U = \{1,e,\ldots,e^{p-1}\}$.  By the previous arguments, we know that F is a quasi-cocycle in U, thus $F_{e^{j+k}} = F_{e^j} V_e^j F_{e^k} \times P_{j,k}$ (6) for some phase polynomial $P_{j,k}$ of bounded degree.  We would like to eliminate the polynomials $P_{j,k}$ from this equation to get a genuine cocycle. We can partially do this by redefining the $F_{e^j}$ for $j=2,3,\ldots,p-1$ in terms of $F_e$ by the formula $F_{e^j} := \prod_{i=0}^{j-1} V_e^i F_e$.  (7) One can show (using (6)) that this redefinition does not significantly affect (5”), and now we have the exact cocycle condition $F_{e^{j+k}} = F_{e^j} V_e^j F_{e^k}$ (8) whenever $0 \leq j,k$ and $j+k < p$.  However, (8) need not hold for all j, k.  In order for this to be true, one must obey the line cocycle condition $\prod_{i=0}^{p-1} V_e^i F_e = 1$ (9) (which would make the definition in (7) periodic in j).  Conversely, if (9) holds then the cocycle given by (7) is a true cocycle in U, thus this condition completely describes the “cohomology” of U-cocycles here. Now, in general, the $F_e$ that we initially start with need not satisfy (9).  However, by using (6), one can show that the left-hand side of (9), while not identically 1, is at least a phase polynomial of bounded degree.  At this point we use a key algebraic lemma (which I hope to comment more on in a future post): every phase polynomial of bounded degree has a $p^{th}$ root which is also a phase polynomial of bounded (but slightly higher) degree.  Thus we have $\prod_{i=0}^{p-1} V_e^i F_e = Q^p$ for some polynomial Q of bounded degree.  
But then what we can do is divide each $F_e$ by Q; this does not significantly affect (5”), but now recovers the line cocycle condition (9), and allows us to make $F_u$ a cocycle on U, at which point one can argue as before. The multidimensional case, when U is a product of cyclic groups, is a little more sophisticated than this; in addition to the line cocycle condition (9), there is an additional “zero curvature” condition $\Delta_{e_j} F_{e_i} = \Delta_{e_i} F_{e_j}$ that has to be obtained before one can build a cocycle in U, but it turns out that a suitably multidimensional generalisation of the formula (7) can ensure this. ## 6 comments 20 January, 2009 at 1:18 am bengreen A tour de force which I shall enjoy reading on a forthcoming train journey. But please, please can you think of an alternative to the triangle-with-a-dot notation for multiplicative derivative? Dear Terry, the paper is extremely well written and indeed a joy to read (even to an outsider like me). One typo: on page 8 (eq 1.2) there's an extra 0 in the matrix. Do you expect the sharpest constant $c=c(p,k,\delta)$ in theorem 1.27 to have a nice explicit form (at least for k=1 or k=p)? Finally I'd humbly concur that the current notation for the multiplicative derivative is a bit heavy to the eye; a trick to make the black dot a little smaller would be to use the \scriptscriptsize command, here's a wordpress try without preview: either this $\Delta\!\!\!\!{\scriptscriptstyle \bullet}$ or that $\Delta\!\!\!\!\!{\scriptscriptstyle \bullet}$. Thanks for the correction! We actually had a lot of trouble deciding on the notation for the multiplicative derivative; we had considered $\dot \Delta_h$ or $\Delta^\times_h$ or ${\Delta\!\!\!\cdot\ \ \!\!}_h$ but felt that the dots were too inconspicuous. I guess we erred in the other direction instead. I like the scriptsize idea (though my version of LaTeX is so far refusing to acknowledge \scriptscriptsize), I'll try to see if I can centre the dot in the middle of the triangle. [update: OK, I managed to jerry rig this: ${\Delta\!\!\!\!\!\!\hbox{\raisebox{0.4ex}{\tiny\ \ \textbullet}}\ \ \!\!}_h$.] (Alternatively, since we use multiplicative differentiation far more often than additive differentiation, another option is to make $\Delta_h$ default to multiplicative and use some other notation for additive. But we felt that this might be confusing since $\Delta_h$ denotes additive differentiation in many other places in the literature. Anyway, I welcome further suggestions.) [...] with respect to the shift . (This need to consider families is an issue that also comes up in the finite field ergodic theory analogue of the inverse conjectures, due to the unbounded number of generators in that case, but [...] [...] handle the general case, which was worked out in two papers, one by myself and Ziegler, and one by Bergelson, Ziegler, and myself. 
It turns out that it is convenient to phrase these arguments in the language of ergodic theory. [...] [...] for the Gowers norm, this time for vector spaces over a fixed finite field of prime order; with Vitaly Bergelson, we had previously established this claim when the characteristic of the field was large, so the [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 148, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227424263954163, "perplexity_flag": "head"}
http://conservapedia.com/Triangle
# Triangle A triangle is a three-sided geometric shape. In Euclidean geometry, each side of a triangle is perfectly straight, and the sum of the internal angles of a triangle is always 180°. A triangle is defined by any three points that are not collinear. ## Naming conventions Usually, the vertices of a triangle are labelled counter-clockwise by capital Latin letters, while lowercase Latin letters are used for the sides: each side will have the same letter as the opposite vertex. The angles are denoted by Greek letters; if possible, the letter of an angle will correspond to the letter of the adjacent vertex. Often, the sides will be referenced by the adjacent vertices: in the triangle on top of this page, we have $a = \overline{BC} = \overline{CB}, b = \overline{CA} = \overline{AC}, c = \overline{BA} = \overline{AB}$. Similarly, the angles are denoted as $\alpha = \angle CAB, \beta = \angle ABC, \gamma = \angle BCA$. ## Types of triangles A right triangle has one 90° angle. Right triangles have special properties (see trigonometry). An isosceles triangle has two equal angles, and two equal sides. An equilateral triangle has three equal sides, and three 60° angles. If a triangle is not one of the above, it is a scalene triangle -- that is, a triangle with no congruent angles. An obtuse triangle has one angle that measures more than 90°. | Right Triangle | Isosceles Triangle | Equilateral Triangle | Obtuse Triangle | |---|---|---|---| | $\angle CBA =90^\circ = \pi/2$ | $\angle BCA = \angle CAB$ | $\angle ABC = \angle BCA = \angle CAB$ | $\angle ACB > 90^\circ$ | ## Congruence of triangles Triangles can be proven congruent in the following ways: Side-Angle-Side (SAS): If two sides and the included angle of one triangle are equal to those of another triangle, then the triangles are congruent. Side-Side-Side (SSS): If three sides of one triangle are equal to three sides of another triangle, then the triangles are congruent. Angle-Side-Angle (ASA): If two angles and the included side of one triangle are equal to the ones of another triangle, then the triangles are congruent. Angle-Angle-Side (AAS): If two angles and a side that is not included are equal to the ones of another triangle, then the triangles are congruent. SSA (Side-Side-Angle) cannot prove triangles congruent unless the angle is a right angle, in which case it is known as the HL (Hypotenuse-Leg) Theorem. AAA (Angle-Angle-Angle) cannot prove triangles congruent either. In hyperbolic geometry, however, it does prove congruence.
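As an illustration of why SSA fails (a worked example added here, not part of the original article): take $\angle A = 30^\circ$, $b = 10$ and $a = 6$. The law of sines gives $\sin B = \frac{b \sin A}{a} = \frac{5}{6}$, which has two solutions, $B \approx 56.4^\circ$ and $B \approx 123.6^\circ$, and both are compatible with $\angle A = 30^\circ$. So two non-congruent triangles share the same side-side-angle data, which is exactly why SSA is not a valid congruence test (outside the right-angle case covered by HL).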
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8974552750587463, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/76629/domain-of-holomorphy/76632
## Domain of Holomorphy

I found the following definition of domain of holomorphy in several places.

Def1: A connected open set $\Omega$ in the n-dimensional complex space ${\mathbb{C}}^n$ is called a domain of holomorphy if there do not exist non-empty open sets $U \subset \Omega$ and $V \subset {\mathbb{C}}^n$ where $V$ is connected, $V \not\subset \Omega$ and $U \subset \Omega \cap V$ such that for every holomorphic function $f$ on $\Omega$ there exists a holomorphic function $g$ on $V$ with $f = g$ on $U$.

From what I understand, intuitively speaking, $\Omega$ is a domain of holomorphy if we can find a function $g$ which is holomorphic on $\Omega$ such that it cannot be extended beyond the boundary of $\Omega$. Naively thinking, I would have written down the following definition for domain of holomorphy.

Def2: A connected open set $\Omega\subset \mathbb{C}^n$ is a domain of holomorphy if there is a $g$ which is holomorphic on $\Omega$ such that for any open $V\subset \mathbb{C}^n$ with $V\cap \partial\Omega\neq\emptyset$ there is no holomorphic function $F$ on $V$ with $F\vert_{V\cap\Omega}=g\vert_{V\cap\Omega}$.

Could someone please explain to me the need for the more complicated Definition 1?

-

## 2 Answers

Further to Ben's answer, it might be useful to picture the situation in $\mathbb{C}$. (Of course in $\mathbb{C}$ every domain is a domain of holomorphy, but we can still exhibit the same phenomenon that causes us to need the more complicated definition.)

The principal branch of the logarithm $f := \operatorname{Log}$ is defined on the slit plane $\Omega :=\mathbb{C}\setminus (-\infty,0]$. The function $f$ does not extend holomorphically, or even continuously, to any point of $(-\infty,0]$. However, let $x\in (-\infty,0)$, set $\delta := -x$, and consider the disk $V := D(x,\delta)$ and the half-disk $$U := \{z\in V: \operatorname{Im} z > 0\}.$$ Then the restriction of $f$ to $U$ extends analytically to a holomorphic function $g$ on $V$, where $g(z)=f(z)$ if $z\in U$, $g(z) = \log(-z)+i\pi$ if $z$ belongs to the diameter $(2x,0)$ of $V$, and $g(z)=\operatorname{Log}(z) + 2\pi i$ otherwise.

The point is that the slit plane is not a "natural" maximal domain of definition for the function $\operatorname{Log}$. (If we were looking for such a domain, it would be a spiralling Riemann surface spread over the punctured plane ...) In other words, suppose you have a domain of holomorphy for some analytic function, but you only know the germ of this function at some point. Then you can reconstruct the domain uniquely as the maximal domain to which the germ can be analytically continued.

Of course the interesting thing is that there are domains that are not domains of holomorphy, when $n\geq 2$.

-

Quoting Steven Krantz, Function Theory of Several Complex Variables, p. 6: the definition of domain of holomorphy is complicated because we must allow for the possibility (when dealing with an arbitrary open set $U$ rather than a smooth domain $\Omega$) that $\partial U$ may intersect itself. Picture a cigar, and then bend it to make a necklace, making the two ends just touch. The interior of that set is an open set whose boundary, so to speak, touches itself.

-
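A quick numerical illustration of the first answer's point (our own addition, not from the thread): if the principal logarithm is continued analytically along a path that crosses the negative real axis from above, the continuation disagrees with $\operatorname{Log}$ by $2\pi i$ on the other side. The Python sketch below tracks a continuous branch of $\log$ along a circular arc by unwrapping the argument.

```python
import numpy as np

# Follow z(t) = exp(i*theta) from theta = pi/2 (upper half-plane) to 3*pi/2,
# passing through the negative real axis at theta = pi.
theta = np.linspace(np.pi / 2, 3 * np.pi / 2, 2001)
z = np.exp(1j * theta)

# A continuous branch of log along the path: log|z| + i * (unwrapped argument).
arg_cont = np.unwrap(np.angle(z))
log_cont = np.log(np.abs(z)) + 1j * arg_cont

# Compare the endpoint (z = -i) with the principal branch Log(-i) = -i*pi/2.
print("continued log at -i :", log_cont[-1])        # approx 0 + 1.5*pi*i
print("principal Log at -i :", np.log(-1j))         # approx 0 - 0.5*pi*i
print("difference / (2*pi*i):",
      (log_cont[-1] - np.log(-1j)) / (2j * np.pi))  # approx 1
```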
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329460859298706, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/85845-radius-convergence-endpoints.html
# Thread:

1. ## Radius of convergence / endpoints

I am really having trouble with this problem. The n'th term is given in the problem, (a) Determine the radius of convergence of the series. (b) What are the endpoints of the interval of convergence? left endpoint = x = right endpoint = x = Does the series converge at the left endpoint? Does the series converge at the right endpoint? I did the ratio test and got [(-1^n)*(x^n+1) / (n+1)] * [n / -1^(n-1) * x^n] I get infinity when i get an x in the numerator when i take the limit. Usually the x's cancel out. is this a special case? maybe i am doing something wrong, or don't know something. And I have no clue on how to find the end points even.

2. Originally Posted by VkL
I am really having trouble with this problem. The n'th term is given in the problem, (a) Determine the radius of convergence of the series. (b) What are the endpoints of the interval of convergence? left endpoint = x = right endpoint = x = Does the series converge at the left endpoint? Does the series converge at the right endpoint? I did the ratio test and got [(-1^n)*(x^n+1) / (n+1)] * [n / -1^(n-1) * x^n] I get infinity when i get an x in the numerator when i take the limit. Usually the x's cancel out. is this a special case? maybe i am doing something wrong, or don't know something. And I have no clue on how to find the end points even.

$\lim_{n \to \infty} \left|\frac{(-1)^n x^{n+1}}{n+1} \cdot \frac{n}{(-1)^{n-1} x^n}\right| < 1$

since we're working with the absolute value of the ratio, the (-1) factors can be ignored. simplifying ...

$\lim_{n \to \infty} \left|\frac{x \cdot n}{n+1}\right| < 1$

$|x| \lim_{n \to \infty} \frac{n}{n+1} < 1$

$|x| \cdot 1 < 1$

$-1 < x < 1$

I leave you to check the endpoints for possible convergence.

3. on step 3, why was the absolute value of x taken out of the limit?

4. Originally Posted by VkL
on step 3, why was the absolute value of x taken out of the limit?

Because $n>0$, so it is irrelevant.

For the endpoints, plugging in $1$ gives you $\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}$. This is the alternating harmonic series, so what can we say about it?

Plugging in $-1$ gives you $\sum_{n=1}^{\infty}\frac{-1}{n}$. This is the negative harmonic series, so what can we say about it?

Spoiler: The first one converges (to $\ln 2$) and the second one diverges. So this power series converges for $x\in(-1,1]$.
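For readers who want a numerical sanity check of the endpoint analysis (our own addition, assuming the n-th term is $(-1)^{n-1}x^n/n$ as the ratio above suggests): at $x=1$ the partial sums settle near $\ln 2 \approx 0.6931$, while at $x=-1$ every term is $-1/n$ and the partial sums drift off to $-\infty$.

```python
import math

def partial_sum(x, N):
    """Partial sum of sum_{n=1}^{N} (-1)**(n-1) * x**n / n."""
    return sum((-1) ** (n - 1) * x ** n / n for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(f"N={N:6d}  x=+1: {partial_sum(1, N): .6f}   x=-1: {partial_sum(-1, N): .3f}")

print("ln 2 =", math.log(2))
# x=+1: partial sums oscillate around ln 2 ~ 0.693147 (converges).
# x=-1: partial sums are -H_N (negative harmonic numbers), which grow without bound (diverges).
```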
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9102492928504944, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/115441-clarification-quotient-rule-print.html
# Clarification on the Quotient Rule Printable View • November 18th 2009, 02:17 PM NBrunk Clarification on the Quotient Rule Hello, and thanks for your help in advance. I recently learned the quotient rule in finding the derivative of more complicated expressions and had a question on it. The way we've been taught is that when given f(x)/g(x), the derivative is [g(x)*f'(x)-f(x)*g'(x)]/g(x)^2 My question has to do with when the more complicated expression is in the denominator of the function we are to differentiate. A good example is stated below: ....x..... (x+2)^2 (The periods are present just to make the fraction look better.) In this situation, the denominator of the derivative gets very complicated if I square [(x+2)^2], so is it legitimate to switch which expression is f(x) and g(x), making the derivative have x^2 in the denominator and changing the numerator accordingly? I realize my question is a bit confusing, so please don't hesitate to ask questions...just hoping to keep expressions as simplified as possible. • November 18th 2009, 02:26 PM skeeter Quote: Originally Posted by NBrunk Hello, and thanks for your help in advance. I recently learned the quotient rule in finding the derivative of more complicated expressions and had a question on it. The way we've been taught is that when given f(x)/g(x), the derivative is [g(x)*f'(x)-f(x)*g'(x)]/g(x)^2 My question has to do with when the more complicated expression is in the denominator of the function we are to differentiate. A good example is stated below: ....x..... (x+2)^2 (The periods are present just to make the fraction look better.) In this situation, the denominator of the derivative gets very complicated if I square [(x+2)^2], so is it legitimate to switch which expression is f(x) and g(x), making the derivative have x^2 in the denominator and changing the numerator accordingly? I realize my question is a bit confusing, so please don't hesitate to ask questions...just hoping to keep expressions as simplified as possible. how more complicated? ... squaring $(x+2)^2$ results in $(x+2)^4$ ... you are not required to expand the result. • November 18th 2009, 02:26 PM tom@ballooncalculus No, sorry! The different things done to f and g depend exactly on their being numerator and denominator respectively. However, you don't have to square the bottom, you can just raise the power by one... will edit in a pic in a couple of mins http://www.ballooncalculus.org/asy/d...quotientX2.png ... where... http://www.ballooncalculus.org/asy/chain.png ... is the chain rule, here wrapped inside the legs-uncrossed version of... http://www.ballooncalculus.org/asy/prod.png ... the product rule. Straight continuous lines differentiate downwards (integrate up) with respect to x, and the straight dashed line similarly but with respect to the dashed balloon expression (the inner function of the composite which is subject to the chain rule). Could have done without the chain rule shape in this case, as the inner function differentiates to 1. Anyway, this (which is essentially the same as Calculus26's below) is one way to approach quotients - as a product in which one part of the product is raised to a negative power. __________________________________________ Don't integrate - balloontegrate! 
http://www.ballooncalculus.org/examples/gallery.html http://www.ballooncalculus.org/asy/doc.html • November 18th 2009, 02:38 PM Calculus26 For x/(x+2)^2 Simply f ' = [(x+2)^2 - x(2(x+2))]/ (x+2)^4 = [x+2 - 2x]/(x+2)^3 = (2-x)/(x+2)^3 You don't want to square out (x+2) As to your more general question you could make the substitution u = x+2 x= u-2 then you have f(u) = (u-2)/u^2 then df/dx = df/du*du/dx = (u^2 - (u-2)2u)/u^4 *1 = [4-u]/u^3 =[2-x]/(x+2)^3 as before
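If you want to double-check this kind of computation, a computer algebra system will happily confirm it. A small SymPy sketch (our addition, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')
f = x / (x + 2)**2

derivative = sp.diff(f, x)
print(sp.simplify(derivative))          # -(x - 2)/(x + 2)**3, i.e. (2 - x)/(x + 2)**3

# Verify it matches the hand-computed result (2 - x)/(x + 2)**3
print(sp.simplify(derivative - (2 - x) / (x + 2)**3) == 0)   # True
```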
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9283177256584167, "perplexity_flag": "middle"}
http://mathdl.maa.org/mathDL/23/?pa=content&sa=viewDocument&nodeId=3287&bodyId=3541
# Visualizing Lie Subalgebras using Root and Weight Diagrams

by Aaron Wangberg (Winona State Univ.) and Tevian Dray (Oregon State Univ.)

### 2.3 Weights and Weight Diagrams

An algebra $$g$$ can also be represented using a collection of $$d \times d$$ matrices, with $$d$$ unrelated to the dimension of $$g$$. The matrices corresponding to the basis $$h_1, \cdots, h_l$$ of the Cartan subalgebra can be simultaneously diagonalized, providing $$d$$ eigenvectors. Then, a list $$w^m$$ of $$l$$ eigenvalues, called a weight, is associated with each eigenvector. Thus, the diagonalization process provides $$d$$ weights for the algebra $$g$$. The roots of an $$n$$-dimensional algebra can be viewed as the non-zero weights of its $$n \times n$$ representation.

Weight diagrams are created in a manner comparable to root diagrams. First, each weight $$w^i$$ is plotted as a point in $$\mathbb{R}^l$$. Recalling the correspondence between a root $$r^i$$ and the operator $$g^i$$, we draw the root $$r^k$$ from the weight $$w^i$$ to the weight $$w^j$$ precisely when $$r^k + w^i = w^j$$, which at the algebra level occurs when the operator $$g^k$$ raises (or lowers) the state $$w^i$$ to the state $$w^j$$.

The root and minimal non-trivial weight diagrams of the algebra $$A_2 = su(3)$$ are shown in Figure 2. The algebra has three pairs of root vectors: one pair is oriented east-west (colored blue), while the other two pairs (colored red and green) are oriented roughly northeast-southwest and northwest-southeast. The algebra's rank is the dimension of the underlying Euclidean space, which in this case is $$l=2$$, and the number of non-zero root vectors is the number of raising and lowering operators. The minimal weight diagram contains six different roots, although we've only indicated one of each raising and lowering root pair. Using the root diagram, the dimension of the algebra can now be determined either from the number of non-zero roots or from the number of root vectors extending in different directions from the origin. Both diagrams indicate that the dimension of $$A_2 = su(3)$$ is $$8 = 6+2$$. A complex semi-simple Lie algebra can almost be identified by its dimension and rank. We note that the algebra's root system, and hence its root diagram or weight diagram, does determine the algebra up to isomorphism.

Figure 2. Root and Minimal Weight Diagrams of $$A_2=su(3)$$

### 2.4 Constructing Root Diagrams from Dynkin Diagrams

In 1947, Eugene Dynkin simplified the process of classifying complex semi-simple Lie algebras by using what became known as Dynkin diagrams [9]. As pointed out above, the Killing form can be used to choose an orthonormal basis for the Cartan subalgebra. Then every root in a rank $$l$$ algebra can be expressed as an integer sum or difference of $$l$$ simple roots. Further, the relative lengths and the interior angle between a pair of simple roots fit one of four cases. A Dynkin diagram records the configuration of an algebra's simple roots.

Each node in a Dynkin diagram represents one of the algebra's simple roots. Two nodes are connected by zero, one, two, or three lines when the interior angle between the roots is $$\frac{\pi}{2}$$, $$\frac{2\pi}{3}$$, $$\frac{3\pi}{4}$$, or $$\frac{5\pi}{6}$$, respectively. If two nodes are connected by $$n$$ lines, then the magnitudes of the corresponding roots satisfy the ratio $$1:\sqrt{n}$$. An arrow is used in the Dynkin diagram to point toward the node for the smaller root.
If two roots are orthogonal, no direct information is known about their relative lengths. We give the Dynkin diagrams for the rank 2 algebras in Figure 3 and the corresponding simple root configurations in Figure 4. For each algebra, the left node in the Dynkin diagram corresponds to the root $$r^1$$ of length 1, colored red and lying along the horizontal axis, and the right node corresponds to the other root $$r^2$$, colored blue. Figure 3. Rank 2 Dynkin Diagrams $$D_2 = so(4)$$ $$A_2 = su(3)$$ $$B_2 = so(5)$$ $$C_2 = sp(2\cdot 2)$$ $$G_2 = Aut(\mathbb{O})$$ Figure 4. Rank 2 Simple Roots In $$\mathbb{R}^l$$, each root $$r^i$$ defines an $$(l-1)$$-dimensional hyperplane which contains the origin and is orthogonal to $$r^i$$. A Weyl reflection for $$r^i$$ reflects each of the other roots $$r^j$$ across this hyperplane, producing the root $$r^k$$ defined by $$r^k = r^j - 2(\frac{r^j {\tiny \bullet} r^i}{|r^i|})\frac{r^i}{|r^i|}$$ According to Jacobson [1], the full set of roots can be generated from the set of simple roots and their associated Weyl reflections. We illustrate how the full set of roots can be obtained from the simple roots using Weyl reflections in Figure 5. We start with the two simple roots for each algebra, as given in Figure 4. For each algebra, we refer to the horizontal simple root, colored red, as $$r^1$$, and the other simple root, colored blue, as $$r^2$$. Step 1 shows the result of reflecting the simple roots using the Weyl reflection associated with $$r^1$$. In this diagram, the black thin line represents the hyperplane orthogonal to $$r^1$$, and the new resulting roots are colored green. Step 2 shows the result of reflecting this new set of roots using the Weyl reflection associated with $$r^2$$. At this stage, both $$D_2 = so(4)$$ and $$A_2 = su(3)$$ have their full set of roots. We repeat this process again in steps 3 and 4, using the Weyl reflections associated first with $$r^1$$ and then with $$r^2$$. The full root systems for $$B_2 = so(5)$$ and $$C_2 = sp(2\cdot 2)$$ are obtained after the three Weyl reflections. Only $$G_2$$ requires all four Weyl reflections. Figure 5. Generating an algebra's full root system using Weyl reflections $$D_2 = so(4)$$ $$A_2 = su(3)$$ $$B_2=so(5)$$ $$C_2=sp(2\cdot 2)$$ $$G_2 = Aut(\mathbb{O})$$ Figure 6. Root Diagrams of Simple Rank $2$ Algebras The full set of roots have been produced once the Weyl reflections fail to produce additional roots. The root diagrams are completed by connecting vertices $$r^i$$ and $$r^j$$ via the root $$r^k$$ precisely when $$r^k + r^i = r^j$$. From the root diagrams in Figure 6, it is clear that the dimension of $$A_2=su(3)$$ is 8 and the dimension of $$G_2$$ is 14, while both $$B_2=so(5)$$ and $$C_2=sp(2\cdot 2)$$ have dimension $$10$$. Further, since the diagram of $$B_2$$ can be obtained via a rotation and rescaling of the root diagram for $$C_2$$, it is clear that $$B_2$$ and $$C_2$$ are isomorphic. Wangberg, Aaron and Tevian Dray, "Visualizing Lie Subalgebras using Root and Weight Diagrams," Loci (February 2009), DOI: 10.4169/loci003287
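The Weyl-reflection procedure of Figure 5 is easy to automate. The NumPy sketch below is our own illustration, not part of the article: it starts from simple roots chosen as in Figure 4, with $$r^1 = (1,0)$$, and repeatedly applies the simple reflections until no new vectors appear. The resulting root counts (4, 6, 8, 12) match the diagrams in Figure 6, with $$C_2$$ giving the same count as $$B_2$$.

```python
import numpy as np

def reflect(v, r):
    """Weyl reflection of v in the hyperplane orthogonal to the root r."""
    return v - 2 * np.dot(v, r) / np.dot(r, r) * r

def root_system(simple_roots, max_iter=20):
    """Close the set of simple roots under the simple Weyl reflections."""
    roots = {tuple(np.round(np.asarray(r, dtype=float), 6)) for r in simple_roots}
    for _ in range(max_iter):
        new = {tuple(np.round(reflect(np.array(v), np.array(r, dtype=float)), 6))
               for v in roots for r in simple_roots}
        if new <= roots:          # closed: every reflection is already a known root
            break
        roots |= new
    return roots

s3 = np.sqrt(3)
simple = {
    "D2 = so(4)": [(1.0, 0.0), (0.0, 1.0)],        # orthogonal simple roots
    "A2 = su(3)": [(1.0, 0.0), (-0.5, s3 / 2)],    # angle 2*pi/3, equal lengths
    "B2 = so(5)": [(1.0, 0.0), (-1.0, 1.0)],       # angle 3*pi/4, length ratio 1:sqrt(2)
    "G2":         [(1.0, 0.0), (-1.5, s3 / 2)],    # angle 5*pi/6, length ratio 1:sqrt(3)
}

for name, sr in simple.items():
    print(f"{name}: {len(root_system(sr))} non-zero roots")
# Expected counts: 4, 6, 8, 12 -- matching Figure 6 (C2 is a rescaled rotation of B2).
```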
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 81, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8937576413154602, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/227470/compute-sum-i-j-1-infty-frac-1iji2j2
# Compute $\sum_{i,j=1}^{\infty} \frac{(-1)^{i+j}}{i^2+j^2}$ Compute $$\sum_{i,j=1}^{\infty} \frac{(-1)^{i+j}}{i^2+j^2}$$ - 6 I don't think it converges absolutely, which means you have to say something about the order in which the terms are taken. – Gerry Myerson Nov 2 '12 at 11:30 ## 1 Answer Let us compute $$\lim_{s\to 1}\sum_{(m,n)\neq(0,0)}\frac{(-1)^{m+n}}{(m^2+n^2)^s}$$ since that's what you want to know anyway. It's mostly about arithmetics in $\mathbb{Z}[i]$, as $m^2+n^2=(m+in)(m-in)=:N(m+in)$. Also $(-1)^{m+n}=(-1)^{m^2+n^2}$, so we're after $$f(s)=\sum_{\alpha\in\mathbb{Z}[i]-\{0\}}\frac{(-1)^{N\alpha}}{(N\alpha)^s}.$$ Notice that $N\alpha$ is even iff $(1+i)\vert\alpha$. Using unique factorization to primes (and the fact that there are $4$ units) we get $$\sum_{\alpha\in\mathbb{Z}[i]-\{0\}}\frac{(-1)^{N\alpha}}{(N\alpha)^s}=4(-1+2^{-s}+4^{-s}+\dots)\times\prod_\pi\frac{1}{1-(N\pi)^{-s}},$$ where $\pi$ runs over all primes in $\mathbb{Z}[i]$ except for $1+i$. Now either $N\pi=p$ where $p\equiv 1\text{ mod } 4$ is a prime, and it occurs for two $\pi$'s, or $N\pi=q^2$ where $q\equiv 3\text{ mod } 4$ is a prime (that's when $\pi=q$). As $$\frac{1}{1-2^{-s}}\prod_p\frac{1}{(1-p^{-s})^2}\prod_q\frac{1}{1-q^{-2s}}$$ $$=\zeta(s)\prod_p\frac{1}{1-p^{-s}}\prod_q\frac{1}{1+q^{-s}}$$ $$=\zeta(s)(1-3^{-s}+5^{-s}-7^{-s}+9^{-s}-\dots),$$ we have $$f(s)=4(2^{1-s}-1)\zeta(s)(1-3^{-s}+5^{-s}-7^{-s}+9^{-s}-\dots).$$ Since $\lim_{s\to1}(s-1)\zeta(s)=1$, $\lim(2^{1-s}-1)/(s-1)=-\log2$, and $$\lim_{s\to1}1-3^{-s}+5^{-s}-7^{-s}+9^{-s}-\dots=\pi/4$$ we get $$\lim_{s\to 1}f(s)=-\pi\log2.$$ (I'm amazed that I got the right answer:) - 1 I'm amazed, too :-) What a great answer! – joriki Nov 5 '12 at 9:06
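For readers who want a numerical sanity check of $-\pi\log 2$ (our own addition, not part of the thread): the sketch below sums the terms over expanding squares $0<\max(|m|,|n|)\le M$, one natural ordering for this conditionally convergent sum, and the partial sums approach $-\pi\ln 2\approx -2.1776$. The closing comment notes how the original sum over $i,j\ge 1$ can be recovered, assuming the same square ordering is used there.

```python
import numpy as np

target = -np.pi * np.log(2)

def square_partial_sum(M):
    """Sum of (-1)**(m+n) / (m*m + n*n) over 0 < max(|m|, |n|) <= M."""
    k = np.arange(-M, M + 1)
    m, n = np.meshgrid(k, k)
    r2 = m * m + n * n
    sign = (-1.0) ** ((m + n) & 1)                    # checkerboard sign (-1)**(m+n)
    terms = np.where(r2 > 0, sign / np.where(r2 > 0, r2, 1), 0.0)
    return terms.sum()

for M in (50, 100, 200, 400):
    print(f"M={M:4d}  partial sum = {square_partial_sum(M):+.6f}")
print(f"-pi*ln 2        = {target:+.6f}")

# Splitting off the two axes (each contributes -pi**2/6) and using the four-fold
# symmetry of the open quadrants suggests that the sum over i, j >= 1 alone,
# with the same square ordering, should be pi**2/12 - (pi/4)*ln 2 ~ 0.278.
```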
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609230756759644, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1116/threshold-secret-sharing-how-to-create-a-shared-secret-from-pre-existing-secre/1118
# Threshold Secret sharing - How to create a shared secret from pre existing secret parts? In usual $(t, n)$ secret sharing schemes, a secret $S$ is split into $n$ parts so that any $t$ out of $n$ parts reconstruct the original secret. So, suppose that there is a group of $n$ participants each one has a secret $x_i$ ($x_i$ may be its private key). My question is, is it possible to create a secret $S$ using the prexisting secrets $x_i$ ($i=1...n$) so that with any $t$ out of $n$ from these secrets ($x_i$) we can find the secret $S$? - Is it a requirement that the secret $S$ be the same no matter which $t$ of the $n$ parties participate to create $S$? – mikeazo♦ Nov 1 '11 at 22:16 Yes, it is the same. At the beginning, the secret S does not exist, it will be generated by the "n" pre-existing secrets "xi", then, the resulting secret S (the same) may be found from any t out of n secret "xi" – Hamouid Nov 1 '11 at 22:24 1 The use of the word "split" in "a secret $S$ is split into $n$ parts" leaves the impression that the parts are smaller than $S$ by a factor of $n$. In a secret-sharing scheme, $n$ shares are constructed from $S$ such that any $t$ shares suffice to reconstruct $S$. In Shamir's secret-sharing scheme, each share is exactly the same size as $S$. So what the OP wants is a created secret $S$ (same size as the pre-determined shares $x_i$) such that any $t$ shares can recover the secret? It doesn't matter what the created secret is as long as it is recoverable? Why is it of any interest? – Dilip Sarwate Nov 3 '11 at 13:27 ## 3 Answers I'd like to suggest a potentially interesting reformulation (or variant) of the problem as a form of secure multi-party computation: Given $k$, $n$ and $m$, is there a protocol by which $n$ participants $i \in \lbrace 1, \dotsc, n \rbrace$ may, without the help of a trusted external party, each compute a share $s_i$ such that • there exists a secret $S \in \mathbb Z / m \mathbb Z$ that is uniquely determined by any subset of $k$ shares (and can be efficiently calculated from them), and • during the course of the protocol, no group of $k-1$ colluding participants can learn sufficient information to allow them to guess $S$ with probability higher than $1/m$? Further, if such a protocol does exist, does it require assumptions about the computational capacity of the participants, or can it be made information-theoretically secure like conventional secret sharing schemes? As Thomas Pornin's answer shows, such a protocol does exist when $k = n$: each participant simply selects $s_i$ independently and uniformly from $\mathbb Z / m \mathbb Z$, with $S \equiv s_1 + \dotsb + s_n \mod m$. Thomas's answer also shows that, for $1 < k < n$, at least some communication between the participants must be necessary to establish the shares. Addendum: There's actually a very simple way to do this. Each participant $i$ chooses a random element $x_i$ of a finite field $\mathbf F_m$, generates $n$ subshares $y_{i,1}$ to $y_{i,n}$ of it using Shamir's scheme of order $k$, and sends each subshare $y_{i,j}$ to participant $j$. Each participant $j$ then adds the subshares they receive together to obtain their share $s_j = y_{1,j} + \dotsb + y_{n,j}$. By the linearity of Shamir's secret sharing, interpolating any $k$ of the shares $s_j$ then yields the secret $S = x_1 + \dotsb + x_n$. (Edited to incorporate PulpSpy's space-saving suggestion; see comments.) 
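A minimal Python sketch of the protocol in the addendum above (our own illustration; the prime, the share indices, and the function names are ours, not from the answer): each participant Shamir-shares a random field element to everyone, each party sums the subshares it receives, and Lagrange interpolation of any $k$ of the resulting shares at $0$ recovers $S = x_1 + \dotsb + x_n$.

```python
import random

P = 2**61 - 1          # a prime; all arithmetic is in the field GF(P)
N, K = 5, 3            # n participants, threshold k

def shamir_share(secret, k, n):
    """Shamir shares of `secret`: evaluations of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [sum(c * pow(j, e, P) for e, c in enumerate(coeffs)) % P
            for j in range(1, n + 1)]                # participant j receives f(j)

def interpolate_at_zero(points):
    """Lagrange interpolation at x = 0 from (x, y) pairs, mod P."""
    total = 0
    for xj, yj in points:
        num, den = 1, 1
        for xl, _ in points:
            if xl != xj:
                num = num * (-xl) % P
                den = den * (xj - xl) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

# Each participant i picks a random x_i and Shamir-shares it with everyone.
x = [random.randrange(P) for _ in range(N)]
subshares = [shamir_share(xi, K, N) for xi in x]

# Participant j keeps only the sum of the subshares addressed to it.
s = [sum(subshares[i][j] for i in range(N)) % P for j in range(N)]

# Any K participants can now reconstruct S = x_1 + ... + x_n.
S = sum(x) % P
chosen = random.sample(range(N), K)
print(interpolate_at_zero([(j + 1, s[j]) for j in chosen]) == S)   # True
```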
- 2 Actually due to the homomorphism of Shamir secret sharing, each participant can add together all the subshares they receive and just remember the sum. If you interpolate k of them, you will get the sum of the xi's which is S in this case. – PulpSpy Nov 14 '11 at 22:58 Excellent point, thanks! It does seem to require that $m$ is a prime power (which my original version does not), but that's usually the case anyway. – Ilmari Karonen Nov 15 '11 at 5:05

On a general basis, no. If $t \lt n$, then the first $t$ values $x_1$ to $x_t$ are sufficient to rebuild the secret $S$, regardless of the values of $x_{t+1}$ to $x_n$. Therefore, those last values have no influence whatsoever on $S$. On the other hand, values $x_{n-t+1}$ to $x_n$ should be sufficient to also rebuild the secret, and since the last $t$ of them have no influence whatsoever, you can rebuild the secret with $x_{n-t+1}$ to $x_t$, i.e. less than $t$ values, possibly no value at all if $t \leq n/2$. In other words, it cannot possibly work. (The intuitive way is the following: if the secret values $x_i$ are pre-existing, then they do not have the redundancy on which sharing schemes rely.) If $t = n$ (all shares $x_i$ are needed to rebuild the secret) then it becomes easy: just XOR all of them together. Possibly, hash all $x_i$ with SHA-256 to get "random looking" 256-bit strings, and XOR these together: this will work better if the $x_i$ do not all have the same size, or have some common structure. If you can have some extra public storage, then you can use regular Shamir's Secret Sharing, which, for a secret $S$ you can choose, yields shares $v_i$. Then, have each participant symmetrically encrypt $v_i$, with a key derived from (the SHA-256 of) his $x_i$; the resulting ciphertexts are then stored in the public storage area. That's an extra requirement (a storage area) but not as big a requirement as having each participant store a new confidential value somewhere.

- Thank you for your answer, it's very interesting, but about Shamir's secret sharing, to my knowledge, we cannot create a secret from n pre-existing secrets (xi) where any t<n of them are sufficient to find the generated secret. – Hamouid Nov 2 '11 at 4:17 That's the point. If you use Shamir's scheme, you get a whole new set of secret values to store (which I call $v_i$); but that storage can be a public shared disk (as opposed to, say, a smartcard per participant) because each participant already has a secret value $x_i$ and can use it as a symmetric key to encrypt his $v_i$. – Thomas Pornin Nov 2 '11 at 12:07

EDIT: This only works when t=1, as your previous question makes me believe you are most interested in. Yes, see Can one generalize the Diffie-Hellman key exchange to three or more parties?. The security of that is based on the difficulty of the "generalized Diffie-Hellman problem".

- Thank you for the answer. Yes of course, I'm interested in my previous question. The Diffie-Hellman key exchange is a good idea to create and share a secret from a set of other pre-existing secrets, but I do not understand why you say that "this only works when t=1". Please, can you explain what you mean by that? – Hamouid Nov 2 '11 at 4:09 Well, as described, each person could calculate the secret on their own (after all the messages have been sent), and I can't think of any way to modify the procedure to avoid that. – Ricky Demer Nov 2 '11 at 5:50 1 I think you are confused on the definition of secret sharing. Secret sharing means as long as $t$ out of $n$ people get together, they can reconstruct the secret. What you seem to be saying is that the $n$ people get together and exchange messages, then each individual can reconstruct the secret. That is not the same. – mikeazo♦ Nov 5 '11 at 14:28 What else is secret sharing when t=1? – Ricky Demer Nov 5 '11 at 21:22 1 To me, secret sharing is when there is initially a secret, which is somehow shared so that the parties (or some subset) can reconstruct the original secret. What you are advocating is more of a secret (or key) distribution. There is no "original" secret, the parties run Diffie-Hellman to distribute a secret which was, prior to the protocol, unknown to all parties. – mikeazo♦ Nov 5 '11 at 22:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 94, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269389510154724, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/34878/x-not-simply-connected-and-x-x-contractible/34932
## X not simply connected and X-x contractible

Hello, I was wondering if there is a nice counterexample to the following question. Suppose $X$ is a CW-complex which is not simply connected and there is a point $x\in X$ such that $X-x$ is contractible. Is $X$ homotopy equivalent to a wedge of circles? Maybe we do not even need the CW-complex condition.

- 9 Take two 2-spheres. Glue them at the north pole ($x$) and the south pole to get a nonsimply connected space $X$ which becomes simply connected after removing $x$, – Donu Arapura Aug 8 2010 at 2:18

## 2 Answers

Take a disconnected space $Y$ that isn't homotopically trivial, for example the disjoint union of two circles, and let $X$ be its suspension. Let $x$ be one of the two "vertices" of the suspension. $X$ isn't simply connected because there's a loop that starts at $x$, goes through one component of $Y$ to get to the other vertex, and returns through a different component of $Y$. If you remove $x$, what remains amounts to the cone on $Y$ (with a collar), so it's contractible (to the other vertex). And $X$ isn't homotopically equivalent to a wedge of circles because the non-trivial homotopy in $Y$ will produce non-trivial higher homotopy in the suspension.

- I think we hit upon the same example, but you give an explanation which is more helpful. – Donu Arapura Aug 8 2010 at 2:34 What if we strengthen the condition such that $X$ is not simply connected and $X-x$ is contractible for all $x \in X$. Can we now classify $X$ up to homotopy equivalence? – Manuel Rivera Aug 8 2010 at 13:19 I should have mentioned an even simpler example, starting with $Y$ being the disjoint union of a circle and a point. Then the suspension $X$ can be viewed as a 2-dimensional sphere together with one of its diameters. – Andreas Blass Aug 8 2010 at 22:29
With similar arguments it seems that you can show also the following result. Suppose that $X$ is a CW complex such that for every point $x \in X$ the space $X \setminus \{x\}$ is contractible. Then either $X$ is itself contractible (e.g. $S^\infty$), or $X$ is homotopy equivalent (and maybe even homeomorphic) to a sphere. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950288712978363, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/218116/definition-of-quotient-space/218164
# Definition of quotient space Let $W \subset V$ be vector spaces. I don't understand the quotient space $V/W$. I read the Wikipedia and searched this site. I would have thought: say the vector space operation is $+$. let $Q = V/W$. Then $V = W+Q$ by "multiplying across". So $Q$ contains elements of the form $V + (-1)W$. Why isn't this how the quotient space is defined? - 1 You have $Q = \{ \{v\}+W | v \in V \}$. I suspect the term quotient came from group notation? – copper.hat Oct 21 '12 at 16:18 A Vector space is an abelian group! So the quotient space would contain the cosets of $W$! – Citizen Oct 21 '12 at 17:21 2 You can't "solve" equations for sets. If $A=B+C$ as sets, that doesn't imply that $C=A-B$. (Take $A=B$ to be any set and $C=\{0\}$, for example.) – Greg Martin Oct 21 '12 at 17:25 ## 1 Answer Any subspace $\,W\leq V\,$ (over some field $\,\Bbb F\,$) defines an equivalence relation $\,\sim_W\,$ on $\,V\,$ as follows: $$v_1\sim_Wv_2\Longleftrightarrow v_1-v_2\in W$$ 1) Show the above is an equivalence relation 2) If we denote the equivalence clases of the above relation by $\,v+W\,$ (in set theory this would usually be defined as $\,[v]\,\,,\,\,[v]_W\,$ or something similar), then we can define two operations on the set of equivalence classes, denoted by $\,V/W\,$ , as follows: (i) Sum of classes: $\,(v_1+W)+(v_2+W):=(v_1+v_2)+W\,$ (ii) Product by scalar: for any $\,k\in\Bbb F\;\;,\;\;k(v+W):=(kv)+W\,$ 3) Prove both operations above are well defined and they determine a structure of $\,\Bbb F_\,$vector space on $\,V/W\,$ If you know some group theory, the above applies mutatis mutandis to normal subgroups of a group, though the plain equivalence relation (i.e., without the operations) applies to any subgroup of a group. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9229587316513062, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Chinese_remainder_theorem
# Chinese remainder theorem The Chinese Remainder Theorem is a result about congruences in number theory and its generalizations in abstract algebra. It was first published in the 3rd to 5th centuries by Chinese mathematician Sun Tzu. In its basic form, the Chinese remainder theorem will determine a number n that when divided by some given divisors leaves given remainders. For example, what is the lowest number n that when divided by 3 leaves a remainder of 2, when divided by 5 leaves a remainder of 3, and when divided by 7 leaves a remainder of 2? A common introductory example is a woman who tells a policeman that she lost her basket of eggs, and that if she took three at a time out of it, she was left with 2, if she took five at a time out of it she was left with 3, and if she took seven at a time out of it she was left with 2. She then asks the policeman what is the minimum number of eggs she must have had. The answer to both problems is 23. ## Theorem statement The original form of the theorem, contained in a third-century AD book The Mathematical Classic of Sun Zi (孫子算經) by Chinese mathematician Sun Tzu and later generalized with a complete solution called Da yan shu(大衍術) in a 1247 book by Qin Jiushao, the Shushu Jiuzhang (數書九章 Mathematical Treatise in Nine Sections) is a statement about simultaneous congruences (see modular arithmetic). Suppose n1, n2, …, nk are positive integers which are pairwise coprime. Then, for any given sequence of integers a1,a2, …, ak, there exists an integer x solving the following system of simultaneous congruences. $\begin{align} x &\equiv a_1 \pmod{n_1} \\ x &\equiv a_2 \pmod{n_2} \\ &{}\ \ \vdots \\ x &\equiv a_k \pmod{n_k} \end{align}$ Furthermore, all solutions x of this system are congruent modulo the product, N = n1n2…nk. Hence $\scriptstyle x \;\equiv\; y \pmod{n_i}$ for all $\scriptstyle 1 \;\leq\; i \;\leq\; k$, if and only if $\scriptstyle x \;\equiv\; y \pmod{N}$. Sometimes, the simultaneous congruences can be solved even if the ni's are not pairwise coprime. A solution x exists if and only if: $a_i \equiv a_j \pmod{\gcd(n_i,n_j)} \qquad \text{for all }i\text{ and }j.$ All solutions x are then congruent modulo the least common multiple of the ni. Sun Tzu's work contains neither a proof nor a full algorithm. What amounts to an algorithm for solving this problem was described by Aryabhata (6th century; see Kak 1986). Special cases of the Chinese remainder theorem were also known to Brahmagupta (7th century), and appear in Fibonacci's Liber Abaci (1202). A modern restatement of the theorem in algebraic language is that for a positive integer $\scriptstyle n$ with prime factorization $\scriptstyle p_1^{r_1} p_2^{r_2} \cdots p_k^{r_k}$ we have the isomorphism between a ring and the direct product of its prime power parts: $\mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}/p_1^{r_1}\mathbb{Z} \times \mathbb{Z}/p_2^{r_2}\mathbb{Z} \times \cdots \times \mathbb{Z}/p_k^{r_k}\mathbb{Z}.$ ## Existence Existence can be seen by an explicit construction of $\scriptstyle x$. We will use the notation $\scriptstyle [a^{-1}]_b$ to denote the multiplicative inverse of $\scriptstyle a \pmod{b}$ as calculated by the Extended Euclidean algorithm, it is defined exactly when $\scriptstyle a$ and $\scriptstyle b$ are coprime; the following construction explains why the coprimality condition is needed. ### Case of two equations Given the system (corresponding to $\scriptstyle k \,=\, 2$) $\begin{align} x &\equiv a_1 \pmod{n_1} \\ x &\equiv a_2 \pmod{n_2}. 
\end{align}$ Since $\scriptstyle \gcd(n_1, n_2) \,=\, 1$, we have from Bézout's identity $n_2 [n_2^{-1}]_{n_1} + n_1 [n_1^{-1}]_{n_2} = 1.$ This is true because we agreed to use the inverses which came out of the Extended Euclidian algorithm, for any other inverses, it would not neccessarily hold true, but only hold true $\pmod{n_1n_2}$. Multiplying both sides by $\scriptstyle x$, we get $x = x n_2 [n_2^{-1}]_{n_1} + x n_1 [n_1^{-1}]_{n_2}.$ If we take the congruence modulo $\scriptstyle n_1$ for the right-hand-side expression, it is readily seen that $x \underbrace{n_2 [n_2^{-1}]_{n_1}}_1 + x \underbrace{n_1}_0 [n_1^{-1}]_{n_2} \equiv x \times 1 + x \times 0 \times [n_1^{-1}]_{n_2} \equiv x \pmod {n_1}.$ But we know that $x \equiv a_1 \pmod {n_1},$ thus this suggests that the coefficient of the first term on the right-hand-side expression can be replaced by $\scriptstyle a_1$. Similarly, we can show that the coefficient of the second term can be substituted by $\scriptstyle a_2$. We can now define the value $x := a_1 n_2 [n_2^{-1}]_{n_1} + a_2 n_1 [n_1^{-1}]_{n_2}$ and it is seen to satisfy both congruences by reducing. For example $a_1 n_2 [n_2^{-1}]_{n_1} + a_2 n_1 [n_1^{-1}]_{n_2} \equiv a_1 \times 1 + a_2 \times 0 \times [n_1^{-1}]_{n_2} \equiv a_1 \pmod {n_1}.$ ### General case The same type of construction works in the general case of $\scriptstyle k$ congruence equations. Let $\scriptstyle N \;=\; n_1 n_2 \cdots n_k$ be the product of every modulus then define $x := \sum_{i} a_i \frac{N}{n_i} \left[\left(\frac{N}{n_i}\right)^{-1}\right]_{n_i}$ and this is seen to satisfy the system of congruences by a similar calculation as before. ## Finding the solution with basic algebra and modular arithmetic For example, consider the problem of finding an integer x such that $\begin{align} x &\equiv 2 \pmod{3}\\ x &\equiv 3 \pmod{4}\\ x &\equiv 1 \pmod{5}. \end{align}$ A brute-force approach converts these congruences into sets and writes the elements out to the product of 3×4×5 = 60 (the solutions modulo 60 for each congruence): x ∈ {2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32, 35, 38, 41, 44, 47, 50, 53, 56, 59, …} x ∈ {3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47, 51, 55, 59, …} x ∈ {1, 6, 11, 16, 21, 26, 31, 36, 41, 46, 51, 56, …}. To find an x that satisfies all three congruences, intersect the three sets to get: x ∈ {11, …}. Which can be expressed as $x \equiv 11 \pmod{60}.$ Another way to find a solution is with basic algebra, modular arithmetic, and stepwise substitution. We start by translating these equivalences into equations for some t, s, and u: • Equation 1: x = 2 + 3 × t (mod 3) • Equation 2: x = 3 + 4 × s (mod 4) • Equation 3: x = 1 + 5 × u (mod 5). Start by substituting the x from equation 1 into equivalence 2: 2 + 3 × t = 3 (mod 4), hence 3 × t = 1 (mod 4), or t = (1/3) (mod 4) = 3 (mod 4), meaning that t = 3 + 4 × s for integer s. Plug t into equation 1: x = 2 + 3 × t (mod 3) = 2 + 3 × (3 + 4 × s) (mod 3) = 11 + 12 × s (mod 3). Plug this x into equivalence 3: 11 + 12 × s = 1 (mod 5). Casting out 5s, we get 1 + 2 × s = 1 (mod 5), or 2 × s = 0 (mod 5), meaning that s = 0 + 5 × u for integer u. Finally, x = 11 + 12 × s = 11 + 12 × (5 × u) = 11 + (60 × u). Since 60 = lcm(3, 4, 5), we have solutions 11, 71, 131, 191, … ## A constructive algorithm to find the solution The following algorithm only applies if the $\scriptstyle n_i$'s are pairwise coprime. 
(For simultaneous congruences when the moduli are not pairwise coprime, the method of successive substitution can often yield solutions.) Suppose, as above, that a solution is required for the system of congruences: $x \equiv a_i \pmod{n_i} \quad\mathrm{for}\; i = 1, \ldots, k.$ Again, to begin, the product $\scriptstyle N \;=\; n_1n_2 \ldots n_k$ is defined. Then a solution x can be found as follows. For each i the integers $\scriptstyle n_i$ and $\scriptstyle N/n_i$ are coprime. Using the extended Euclidean algorithm we can find integers $\scriptstyle r_i$ and $\scriptstyle s_i$ such that $\scriptstyle r_in_i \,+\, s_iN/n_i \;=\; 1$. Then, choosing the label $\scriptstyle e_i \;=\; s_iN/n_i$, the above expression becomes: $r_i n_i + e_i = 1. \,\!$ Consider $\scriptstyle e_i$. The above equation guarantees that its remainder, when divided by $\scriptstyle n_i$, must be 1. On the other hand, since it is formed as $\scriptstyle s_iN/n_i$, the presence of N guarantees a remainder of zero when divided by any $\scriptstyle n_j$ when $\scriptstyle j \;\ne\; i$. $e_i \equiv 1 \pmod{n_i} \quad \mathrm{and} \quad e_i \equiv 0 \pmod{n_j} \quad \mathrm{for} ~ j \ne i$ Because of this, and the multiplication rules allowed in congruences, one solution to the system of simultaneous congruences is: $x = \sum_{i=1}^k a_i e_i.$ For example, consider the problem of finding an integer x such that $\begin{align} x &\equiv 2 \pmod{3}\\ x &\equiv 3 \pmod{4}\\ x &\equiv 1 \pmod{5}. \end{align}$ Using the extended Euclidean algorithm, for x modulo 3 and 20 [4×5], we find (−13) × 3 + 2 × 20 = 1, i.e. e1 = 40. For x modulo 4 and 15 [3×5], we get (−11) × 4 + 3 × 15 = 1, i.e. e2 = 45. Finally, for x modulo 5 and 12 [3×4], we get 5 × 5 + (−2) × 12 = 1, i.e. e3 = −24. A solution x is therefore 2 × 40 + 3 × 45 + 1 × (−24) = 191. All other solutions are congruent to 191 modulo 60, [3 × 4 × 5 = 60], which means they are all congruent to 11 modulo 60. Note: There are multiple implementations of the extended Euclidean algorithm which will yield different sets of $\scriptstyle e_1 \;=\; -20$, $\scriptstyle e_2 \;=\; -15$, and $\scriptstyle e_3 \;=\; -24$. These sets however will produce the same solution; i.e., (−20)2 + (−15)3 + (−24)1 = −109 = 11 modulo 60. ## Statement for principal ideal domains For a principal ideal domain R the Chinese remainder theorem takes the following form: If u1, …, uk are elements of R which are pairwise coprime, and u denotes the product u1…uk, then the quotient ring R/uR and the product ring R/u1R× … × R/ukR are isomorphic via the isomorphism $f: R/uR \rightarrow R/u_1R \times \cdots \times R/u_k R$ such that $f(x +uR) = (x + u_1R, \ldots, x + u_kR) \quad\mbox{ for every } x \in R.$ This map is well-defined and an isomorphism of rings; the inverse isomorphism can be constructed as follows. For each i, the elements ui and u/ui are coprime, and therefore there exist elements r and s in R with $r u_i + s u/u_i = 1. \,\!$ Set ei = s u/ui. 
Then the inverse of f is the map $g: R/u_1R \times \cdots \times R/u_kR \rightarrow R/uR$ such that $g(a_1 + u_1R, \ldots, a_k + u_kR) = \left( \sum_{i=1}^k a_i \frac{u}{u_i} \left[\left(\frac{u}{u_i}\right)^{-1}\right]_{u_i} \right) + uR \quad\mbox{ for all }a_1, \ldots, a_k \in R.$ This statement is a straightforward generalization of the above theorem about integer congruences: the ring Z of integers is a principal ideal domain, the surjectivity of the map f shows that every system of congruences of the form $x \equiv a_i \pmod{u_i} \quad\mathrm{for}\; i = 1, \ldots, k$ can be solved for x, and the injectivity of the map f shows that all the solutions x are congruent modulo u. ## Statement for general rings The general form of the Chinese remainder theorem, which implies all the statements given above, can be formulated for commutative rings and ideals. If R is a commutative ring and I1, …, Ik are ideals of R which are pairwise coprime (meaning that $\scriptstyle I_i \,+\, I_j \;=\; R$ for all $i \neq j$), then the product I of these ideals is equal to their intersection, and the quotient ring R/I is isomorphic to the product ring R/I1 × R/I2 × … × R/Ik via the isomorphism $f: R/I \rightarrow R/I_1 \times \cdots \times R/I_k$ such that $f(x + I) = (x + I_1, \ldots, x + I_k) \quad\text{ for all } x \in R.$ Here is a version of the theorem where R is not required to be commutative: Let R be any ring with 1 (not necessarily commutative) and $\scriptstyle I_1,\, \ldots,\, I_n$ be pairwise coprime 2-sided ideals. Then the canonical R-module homomorphism $\scriptstyle R \;\rightarrow\; R/I_1 \,\times\, \cdots \,\times\, R/I_k$ is onto, with kernel $\scriptstyle I_1 \,\cap\, \cdots \,\cap\, I_k$. Hence, $\scriptstyle R/(I_1 \,\cap\, \cdots \,\cap\, I_k) \,\simeq\, R/I_1 \,\times\, \cdots \,\times\, R/I_k$ (as R-modules). ## Applications • In the RSA algorithm calculations are made modulo n, where n is a product of two large prime numbers p and q. 1,024-, 2,048- or 4,096-bit integers n are commonly used, making calculations in $\scriptstyle \Bbb{Z}/n\Bbb{Z}$ very time-consuming. By the Chinese remainder theorem, however, these calculations can be done in the isomorphic ring $\scriptstyle \Bbb{Z}/p\Bbb{Z} \,\oplus\, \Bbb{Z}/q\Bbb{Z}$ instead. Since p and q are normally of about the same size, that is about $\scriptstyle \sqrt{n}$, calculations in the latter representation are much faster. Note that RSA algorithm implementations using this isomorphism are more susceptible to fault injection attacks. • The Chinese remainder theorem may also be used to construct an elegant Gödel numbering for sequences, which is needed to prove Gödel's incompleteness theorems. • The following example shows a connection with the classic polynomial interpolation theory. Let r complex points ("interpolation nodes") $\scriptstyle \lambda_1,\, \ldots,\, \lambda_r$ be given, together with the complex data $\scriptstyle a_{j,k}$, for all $\scriptstyle 1 \,\leq\, j \,\leq\, r$ and $\scriptstyle 0 \,\leq\, k \,<\, \nu_j$. 
The general Hermite interpolation problem asks for a polynomial $P(x) \in \mathbb{C}[x]$ taking prescribed derivatives at each node $\lambda_j$:

$$P^{(k)}(\lambda_j) = a_{j,k} \quad \forall\, 1 \leq j \leq r,\ 0 \leq k < \nu_j.$$

Introducing the polynomials

$$A_j(x) := \sum_{k=0}^{\nu_j - 1}\frac{a_{j,k}}{k!}(x - \lambda_j)^k,$$

the problem may be equivalently reformulated as a system of $r$ simultaneous congruences:

$$P(x) \equiv A_j(x) \pmod{(x - \lambda_j)^{\nu_j}}, \quad \forall\, 1 \leq j \leq r.$$

By the Chinese remainder theorem in the principal ideal domain $\mathbb{C}[x]$, there is a unique such polynomial $P(x)$ with degree $\deg(P) < n := \sum_j \nu_j$. A direct construction, in analogy with the above proof for the integer case, can be performed as follows. Define the polynomials $Q := \prod_{i=1}^{r}(x - \lambda_i)^{\nu_i}$ and $Q_j := \frac{Q}{(x - \lambda_j)^{\nu_j}}$. The partial fraction decomposition of $\frac{1}{Q}$ gives $r$ polynomials $S_j$ with degrees $\deg(S_j) < \nu_j$ such that

$$\frac{1}{Q} = \sum_{i=1}^{r}\frac{S_i}{(x - \lambda_i)^{\nu_i}},$$

so that $1 = \sum_{i=1}^{r} S_i Q_i$. Then a solution of the simultaneous congruence system is given by the polynomial

$$\sum_{i=1}^{r} A_i S_i Q_i = A_j + \sum_{i=1}^{r}(A_i - A_j) S_i Q_i \equiv A_j \pmod{(x - \lambda_j)^{\nu_j}} \quad \forall\, 1 \leq j \leq r,$$

and the minimal degree solution is this one reduced modulo $Q$, that is, the unique solution with degree less than $n$.

• The Chinese remainder theorem can also be used in secret sharing, which consists of distributing a set of shares among a group of people who, all together (but no one alone), can recover a certain secret from the given set of shares. Each of the shares is represented by a congruence, and the solution of the system of congruences using the Chinese remainder theorem is the secret to be recovered. Secret sharing using the Chinese remainder theorem uses, along with the Chinese remainder theorem, special sequences of integers that guarantee the impossibility of recovering the secret from a set of shares with less than a certain cardinality.

• The Good–Thomas fast Fourier transform algorithm exploits a re-indexing of the data based on the Chinese remainder theorem. The prime-factor FFT algorithm contains an implementation.

• Dedekind's theorem on the linear independence of characters states (in one of its most general forms) that if $M$ is a monoid and $k$ is an integral domain, then any finite family $(f_i)_{i \in I}$ of distinct monoid homomorphisms $f_i : M \to k$ (where the monoid structure on $k$ is given by multiplication) is linearly independent; i.e., every family $(\alpha_i)_{i \in I}$ of elements $\alpha_i \in k$ satisfying $\sum_{i \in I}\alpha_i f_i = 0$ must be equal to the family $(0)_{i \in I}$.

Proof using the Chinese remainder theorem: First, assume that $k$ is a field (otherwise, replace the integral domain $k$ by its quotient field, and nothing will change). We can linearly extend the monoid homomorphisms $f_i : M \to k$ to $k$-algebra homomorphisms $F_i : k[M] \to k$, where $k[M]$ is the monoid ring of $M$ over $k$. Then the condition $\sum_{i \in I}\alpha_i f_i = 0$ yields $\sum_{i \in I}\alpha_i F_i = 0$ by linearity. Now, we notice that if $i \neq j$ are two elements of the index set $I$, then the two $k$-linear maps $F_i : k[M] \to k$ and $F_j : k[M] \to k$ are not proportional to each other (for if they were, then $f_i$ and $f_j$ would also be proportional to each other, and thus equal to each other since $f_i(1) = 1 = f_j(1)$, as $f_i$ and $f_j$ are monoid homomorphisms — contradicting the assumption that they are distinct). Hence, their kernels $\mathrm{Ker}\, F_i$ and $\mathrm{Ker}\, F_j$ are distinct. Now, $\mathrm{Ker}\, F_i$ is a maximal ideal of $k[M]$ for every $i \in I$ (since $k[M]/\mathrm{Ker}\, F_i \cong F_i(k[M]) = k$ is a field), and the ideals $\mathrm{Ker}\, F_i$ and $\mathrm{Ker}\, F_j$ are coprime whenever $i \neq j$ (since they are distinct and maximal). The Chinese remainder theorem (for general rings) thus yields that the map

$$\phi : k[M]/K \to \prod_{i \in I} k[M]/\mathrm{Ker}\, F_i$$

given by $\phi(x + K) = (x + \mathrm{Ker}\, F_i)_{i \in I}$ for all $x \in k[M]$ is an isomorphism, where $K = \prod_{i \in I}\mathrm{Ker}\, F_i = \bigcap_{i \in I}\mathrm{Ker}\, F_i$. Consequently, the map

$$\Phi : k[M] \to \prod_{i \in I} k[M]/\mathrm{Ker}\, F_i$$

given by $\Phi(x) = (x + \mathrm{Ker}\, F_i)_{i \in I}$ for all $x \in k[M]$ is surjective. Under the isomorphisms $k[M]/\mathrm{Ker}\, F_i \to F_i(k[M]) = k$, this map $\Phi$ corresponds to the map

$$\psi : k[M] \to \prod_{i \in I} k, \qquad x \mapsto \left(F_i(x)\right)_{i \in I} \quad\text{for every } x \in k[M].$$

Now, $\sum_{i \in I}\alpha_i F_i = 0$ yields $\sum_{i \in I}\alpha_i u_i = 0$ for every vector $(u_i)_{i \in I}$ in the image of the map $\psi$. Since $\psi$ is surjective, this means that $\sum_{i \in I}\alpha_i u_i = 0$ for every vector $(u_i)_{i \in I} \in \prod_{i \in I} k$. Consequently, $(\alpha_i)_{i \in I} = (0)_{i \in I}$, QED.

## Non-commutative case: a caveat

Sometimes in the commutative case, the conclusion of the Chinese remainder theorem is stated as $R/(I_1 I_2\cdots I_k) \simeq R/I_1 \times \cdots \times R/I_k$. This version does not hold in the non-commutative case, since in general $(I_1 \cap \cdots \cap I_k) \neq (I_1 I_2\cdots I_k)$, as can be seen from the following example. Consider the ring $R$ of non-commutative real polynomials in $x$ and $y$. Let $I$ be the principal two-sided ideal generated by $x$ and $J$ the principal two-sided ideal generated by $xy + 1$. Then $I + J = R$ but $I \cap J \neq IJ$.

### Proof

Observe that $I$ is formed by all polynomials with an $x$ in every term and that every polynomial in $J$ vanishes under the substitution $y = -1/x$. Consider the polynomial $p = (xy + 1)x$. Clearly $p \in I \cap J$. Define a term in $R$ as an element of the multiplicative monoid of $R$ generated by $x$ and $y$, and define the degree of a term as the usual degree of the term after the substitution $y = x$. On the other hand, suppose $q \in J$. Observe that a term of maximum degree in $q$ must depend on $y$, since otherwise $q$ could not vanish under the substitution $y = -1/x$. The same then happens for an element $q \in IJ$. Observe that the last $y$, from left to right, in a term of maximum degree in an element of $IJ$ is preceded by more than one $x$. (We are counting here all the preceding $x$s. E.g., in $x^2yxyx^5$ the last $y$ is preceded by $3$ $x$s.) This proves that $(xy + 1)x \notin IJ$, since that last $y$ in a term of maximum degree ($xyx$) is preceded by only one $x$. Hence $I \cap J \neq IJ$.

On the other hand, it is true in general that $I + J = R$ implies $I \cap J = IJ + JI$. To see this, note that $I \cap J = (I \cap J)(I + J) \subset IJ + JI$, while the opposite inclusion is obvious. Also, we have in general that, provided $I_1, \ldots, I_m$ are pairwise coprime two-sided ideals in $R$, the natural map

$$R/(I_1 \cap I_2 \cap \ldots \cap I_m) \rightarrow R/I_1 \oplus R/I_2 \oplus \cdots \oplus R/I_m$$

is an isomorphism. Note that $I_1 \cap I_2 \cap \ldots \cap I_m$ can be replaced by a sum over all orderings of $I_1, \ldots, I_m$ of their product (or just a sum over enough orderings, using inductively that $I \cap J = IJ + JI$ for coprime ideals $I, J$).

## References

• Donald Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89684-2. Section 4.3.2 (pp. 286–291), exercise 4.6.2–3 (page 456).
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 31.5: The Chinese remainder theorem, pp. 873–876.
• Laurence E. Sigler (trans.) (2002). Fibonacci's Liber Abaci. Springer-Verlag. pp. 402–403. ISBN 0-387-95419-8.
• Kak, Subhash (1986), "Computational aspects of the Aryabhata algorithm", Indian Journal of History of Science 21 (1): 62–71.
• Thomas W. Hungerford (1974). Algebra. Springer-Verlag. pp. 131–132. ISBN 0-387-90518-9.
• Cunsheng Ding, Dingyi Pei, and Arto Salomaa (1996). Chinese Remainder Theorem: Applications in Computing, Coding, Cryptography. World Scientific Publishing. pp. 1–213. ISBN 981-02-2827-9.
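As a concrete companion to the secret-sharing application described above, here is a minimal pure-Python sketch of the CRT reconstruction step. The moduli, the secret, and the absence of any threshold structure are illustrative assumptions for the example only; a real scheme (e.g. Mignotte or Asmuth–Bloom) imposes extra conditions on the modulus sequence to guarantee security.

```python
# Minimal sketch (not a hardened scheme): reconstructing a secret from
# CRT-style shares.  Moduli and secret are illustrative choices.

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Combine pairwise coprime congruences x = r_i (mod m_i)."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        g, inv, _ = ext_gcd(m % mi, mi)       # inv = m^{-1} mod mi
        assert g == 1, "moduli must be pairwise coprime"
        t = ((r - x) * inv) % mi
        x, m = x + m * t, m * mi
    return x % m, m

if __name__ == "__main__":
    secret = 424242
    moduli = [97, 101, 103]                   # pairwise coprime
    shares = [secret % m for m in moduli]     # one share per participant
    recovered, _ = crt(shares, moduli)
    print(recovered == secret)                # True, since secret < 97*101*103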
http://physics.stackexchange.com/questions/15178/what-is-the-highest-accuracy-of-measuring-time-differences-achievable-today
# What is the highest accuracy of measuring time differences achievable today?

I was wondering if it would be possible to shorten the distance between detectors when measuring the speed of neutrinos to, say, 7 m rather than the current ~700 km. In this way the distance traveled would be known directly. Something similar to the coincidence measurements we now do when studying positronium. Is there a limit to the technology used for timing of events, or is it only a matter of technical development, with room for achieving greater accuracy than currently known? -

Notice that the OPERA number is a fractional difference of a few times $10^{-5}$, so you are asking about differences on the order of $10^{-5} * (7\text{ m}/3 \times 10^8\text{ m/s})\approx 2 \times 10^{-13}\text{ s}$. – dmckee♦ Sep 28 '11 at 17:02

I know, but do you think that at present it's only a technological difficulty to reach that accuracy, or is its unachievability intrinsic to the phenomena? – ganzewoort Sep 28 '11 at 17:24

## 2 Answers

This is not really an answer to the question in the title, but a description of why the proposed short-baseline neutrino speed measurement is exceedingly difficult. It relates to the question in the sense that it explains the limits of the precision with which $\delta t$ can be extracted in a neutrino experiment, without even touching on the kind of ultra-high-precision timing work that NIST and related bodies like to do. Getting very high timing precision is possible in many instances, but neutrinos pose a few special challenges.

• Even at accelerator beam energies (multiple GeV as in the OPERA beam) the cross-section for neutrino interactions is tiny. So to get any kind of rate at all you do two things: 1. Make the detector big. Tens of thousands of tonnes for some distant detectors and a few tons (or at least hundreds of kilograms) for near detectors. A massive detector has non-trivial size, so you have to correct for the timing over which signals develop, are detected, and get converted to latch-able electronic signals. You'll note that in the case of the OPERA paper these corrections were of order a few to tens of ns each. Each of these corrections carries with it a systematic error. 2. The beams have to be very intense. Ideally you would generate a single bunch of progenitor particles (protons in the case of OPERA) and bang them on target over a time-scale less than your anticipated $\delta t$, and then wait for a time much larger than $\delta t$ before the next bunch arrived. But due to the limits of accelerator technology and the neutrino cross-section this is a losing game. In the case of OPERA they pour protons onto the target in small bunches for 10 microseconds at a time. There is no unique way to identify the time of origin associated with each neutrino event in the far detector. Hence the statistical method they employed originally (this is one of my favorite places to suspect the OPERA procedure, though they made a real try at handling it); they have now used a lower-statistics, short-bunch approach which largely removes this as a possible source of error.

• Neutrino beams are not well focused. You could be thinking that with a nearby detector you could beat both these problems at once by building a very small detector. You run into two problems. 1. By that point the beam is already meters across, so a really small detector exacerbates the small cross-section problem.
2. You have to be far enough away to lose the muons, as a non-trivial number of these are generated, and even though you can probably ID them and veto around their arrival times (and it has to be a moderately long veto because of the risk of spallation products), you have to go far enough away that the deadtime doesn't kill you. You could use a big sweep magnet after the decay line beam stop. That sounds promising, but then you lose your best tool for determining when you might have spallation products (which you have to veto or subtract), so you need to go far enough downstream to ditch most of them.

• The start point is not well defined on short distance scales. Neutrino beams are generated by the decay of high-energy particles in flight. Because the timing of that decay is random on an exponential, you don't know exactly where the neutrinos started. You'll have to measure from some well-known place and correct for the time of flight of these heavier particles in the horn. Now, we're pretty confident of being able to do this to the few-ns scale, but it is not going to be possible to do enormously better than that.

By the way, if you are thinking that OPERA seems under-optimized for this measurement, that's because it is. This is a parasitic measurement that simply takes advantage of a machine designed to measure neutrino mixing parameters in the $\nu_\mu \to \nu_\tau$ appearance channel, and the need to unambiguously identify $\nu_\tau$ charged-current events (by unambiguously observing the $\tau$-lepton) drives the design of the detector. -

To answer the question in the title, $10^{-15}$ seconds can be measured routinely with optical combs (see here for a review). According to Wikipedia, processes in the tenths of a femtosecond can also be measured.

EDIT: As Georg pointed out, a frequency comb would not be useful for measuring time-of-flight of particles between two distant locations (and possibly not even for short distances? I don't know). -
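To spell out the arithmetic behind dmckee's comment, the short sketch below redoes it for the 7 m baseline proposed in the question and for roughly OPERA's 730 km baseline. The fractional-anomaly value 2.5e-5 is an assumed figure of the size OPERA reported at the time; treat the numbers as an illustration, not a design spec.

```python
# Back-of-the-envelope timing requirements for a neutrino speed measurement.
# Assumption: a fractional speed anomaly of ~2.5e-5 must be resolved.

c = 3.0e8                      # speed of light, m/s
fractional_anomaly = 2.5e-5    # assumed |v - c| / c

for baseline_m in (7.0, 730e3):
    time_of_flight = baseline_m / c
    required_dt = fractional_anomaly * time_of_flight
    print(f"baseline {baseline_m:>9.0f} m: "
          f"TOF ~ {time_of_flight:.2e} s, "
          f"need timing good to ~ {required_dt:.1e} s")

# For 7 m the time of flight is ~2.3e-8 s and the required timing
# resolution is ~6e-13 s; for 730 km the numbers are ~2.4e-3 s and ~6e-11 s.
```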
http://math.stackexchange.com/questions/199988/existence-of-a-specific-refinement-of-an-open-cover-on-a-manifold
# Existence of a specific refinement of an open cover on a manifold I've encountered a homework question I don't know how to solve, and most people I asked even tend to go so far as to say that the main statement in the question is false. I'm not really convinced of the latter (the question is from Warner's Foundations of Differentiable Manifolds and Lie Groups), so I was wondering whether anyone can shed some light on the issue. Any help would be greatly appreciated :-) First we define $\{ U_\alpha \}$ to be an open cover of a manifold $M$. The question is now to prove that there exists a refinement $\{ V_\alpha \}$ of $\{ U_\alpha \}$, such that $\overline{ V_\alpha } \subset U_\alpha$ for all $\alpha$ (the closure of every $V_\alpha$ must be a subset of every $U_\alpha$). This is actually all the information that is given... we've been considering some possible 'catches' to the question, but none of them resulted in something useful. I hope somebody can help out! - When you say "the closure of every $V_\alpha$ must be a subset of every $U_\alpha$", you mean "the closure of every $V_\alpha$ must be a subset of the corresponding $U_\alpha$"? – joriki Sep 20 '12 at 23:09 That's actually one of the 'catches' we encountered... the book doesn't say more than this, but it pays great attention to detail, so I assume that this is what they mean. – JorenB Sep 20 '12 at 23:18 ## 2 Answers If your definition of manifold requires $M$ to be second countable and Hausdorff, or even just paracompact and Hausdorff, then $M$ is metrizable and hence both metacompact and normal. Let $\mathscr{U}=\{U_\alpha:\alpha\in A\}$; metacompactness implies that $\mathscr{U}$ has a point-finite open refinement $\mathscr{R}$, meaning that each point of $M$ is in only finitely many members of $\mathscr{R}$. For each $R\in\mathscr{R}$ there is an $\alpha(R)\in A$ such that $R\subseteq U_{\alpha(R)}$. For each $\alpha\in A$ let $W_\alpha=\bigcup\{R\in\mathscr{R}:\alpha(R)=\alpha\}$, and let $\mathscr{W}=\{W_\alpha:\alpha\in A\}$; then $\mathscr{W}$ is a point-finite open refinement of $\mathscr{U}$, and $W_\alpha\subseteq U_\alpha$ for each $\alpha\in A$. It now suffices to find an open cover $\mathscr{V}=\{V_\alpha:\alpha\in A\}$ of $M$ such that $\operatorname{cl}V_\alpha\subseteq W_\alpha$ for each $\alpha\in A$. Such a cover is called a shrinking of $\mathscr{V}$, and it’s a standard result that in a normal space every point-finite open cover is shrinkable. Applying this result, we get the desired $\mathscr{V}$. - The result follows from Zorn's lemma. Consider the family $\mathbb F$ consisting of functions from the index set (say) $I$ into the topology $\tau$ so that $f(\alpha)=U_\alpha$ or $\overline f(\alpha)\subset U_\alpha$ and $\bigcup_{\alpha\in I} f(\alpha)=M$. Let us order $\mathbb F$ by defining $f\leq g$ if $f(\alpha)=g(\alpha)$ for all $\alpha \in I$ so that $f(\alpha)\ne U_\alpha$. It is easy to see that the partial order satisfies the hypothesis of Zorn's lemma and that a maximal element gives you the desire refinement. - If $f_0$ is a maximal element of $\mathbb{F}$, can we be sure $\{f_0(\alpha)\}$ covers $M$? – Pink Elephants Sep 20 '12 at 23:41 @PinkElephants I forgot to add the condition that for all $f\in\mathbb F$ $\bigcup_{\alpha\in I} f(\alpha)=M$. – azarel Sep 21 '12 at 0:28
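To make the notion of a shrinking concrete, here is a small worked example (an illustrative cover of $\mathbb{R}$, not taken from Warner or from the answers above) of exactly the phenomenon the theorem guarantees.

```latex
% A concrete shrinking of a point-finite open cover of the real line.
% The cover and the shrinking are illustrative choices.
\[
  U_n = (n-1,\, n+1), \qquad
  V_n = \bigl(n-\tfrac{2}{3},\, n+\tfrac{2}{3}\bigr), \qquad n \in \mathbb{Z}.
\]
% Each closure sits inside the corresponding member of the original cover:
\[
  \overline{V_n} = \bigl[n-\tfrac{2}{3},\, n+\tfrac{2}{3}\bigr] \subset (n-1,\, n+1) = U_n,
\]
% and the V_n still cover the line, because consecutive intervals overlap:
\[
  n+\tfrac{2}{3} \;>\; (n+1)-\tfrac{2}{3},
  \qquad\text{so}\qquad
  \bigcup_{n\in\mathbb{Z}} V_n = \mathbb{R}.
\]
```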
http://mathoverflow.net/questions/42329/replacement-and-sets-of-natural-numbers/42372
## Replacement and Sets of Natural Numbers ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It's clear that the axiom of replacement can be used to construct very large sets, such as $$\bigcup_{i=0}^\infty P^i N,$$ where $N$ is the natural numbers. I assume that it can be used to construct sets much lower in the Zermelo hierarchy, such as sets of natural numbers, but I don't know of an example. Is there an easy example? (Just to be clear, I mean an example that requires the use of replacement, not just one where you could use replacement if you wanted to.) I would guess you can cook up an example using Borel determinacy, since that involves games of length $\omega$, but it would be great if there was an even more direct example. Also, I'd be curious to know for any such examples at what stage they first come along in the constructible universe. $\omega + 1$? The first Church-Kleene ordinal? Some other ordinal I've never heard of? - 1 I think you should be more clear on what you mean by "construct", so that you don't allow [x is empty and Borel determinacy holds]. – Ricky Demer Oct 15 2010 at 22:37 This seems like a nice question, but it’s a very tricky one to formulate precisely! When you ask if replacement is required to construct a certain set, you can't (at least, not naïvely) mean “an actual set in a particular model”, since either that model believes replacement or it doesn't. On the other hand, if we instead mean “some formula defining a set”, we run into the issue Ricky points out. – Peter LeFanu Lumsdaine Oct 15 2010 at 23:24 1 Here's what I think is a very good formalization of arsmath's questions. What is the least ordinal alpha such that there exists a set M such that M is a transitive model of ZFC-replacement and V_alpha is not a subset of M? What about L_alpha instead? – Ricky Demer Oct 16 2010 at 0:58 Thanks. I know these kinds of questions turn on the precise formulation, but I have trouble formulating them clearly. What I'm after is just a sense of the additional expressive power of set theory with replacement versus without. Ricky's formulation probably captures what I have in mind. – arsmath Oct 16 2010 at 1:54 Perhaps I'm misunderstanding the convention used in this, but why is the union iterated over the value of $i$ ranging from $0$ to $\inf$, while the terms for the union to be performed on contain only $P$, $n$, and $N$ combined as $P^n N$? Shouldn't $i$ be involved in the term somehow? – sleepless in beantown Oct 16 2010 at 11:09 show 2 more comments ## 3 Answers This probably isn't what you are looking for, but one can write down an explicit Diophantine equation for which ZFC proves that it has a solution, but ZFC minus replacement does not (assuming it is consistent). Namely, use Godel encoding and the solution of Hilbert's 10th problem to write down a Diophantine equation whose only solutions encode proofs of the consistency of "ZFC minus replacement." One wants a "naturally occurring" example instead, but it's hard to say what that means. (Edit: The following is corrected thanks to Andres' comments) For instance, I think the answer to Ricky's formulation in the comments is α = ω+1, but again probably not for the reason you expect. Namely, we can prove in ZFC that ZFC-Repl has a countable transitive model. To do this we start from an arbitrary transitive model (such as $V_{\omega+\omega}$) and apply the downward Lowenheim-Skolem theorem to find a countable submodel. 
This countable submodel may no longer be transitive, but it is still well-founded, so by Mostowski's collapsing lemma it is isomorphic to an (also countable) transitive model. Since ZFC-Repl has a countable transitive model, $V_{\omega+1}$ (being uncountable) cannot be a subset of all such transitive models. But $V_\omega$ is the set of hereditarily finite sets, which I think have to be in any transitive model since each of them can be uniquely characterized by a formula. - Where could I find the version of Lowenheim-Skolem that gives if ZFC-replacement is consistent then there is a countable transitive model of ZFC-replacement? – Ricky Demer Oct 16 2010 at 5:11 "If ZFC-replacement is consistent then there is a countable transitive model of ZFC-replacement". This is false: Any transitive model (in fact, any $\omega$-model) is correct about arithmetic statements, so the second incompleteness theorem prevents this from happening. – Andres Caicedo Oct 16 2010 at 5:22 Andres, how do you get that from the second incompleteness theorem? – Ricky Demer Oct 16 2010 at 5:37 @Ricky, ordinary Lowenheim-Skolem gives you a countable model, and since ZFC-replacement includes the axiom of foundation, Mostowski collapse implies that any model is isomorphic to a transitive one. – Mike Shulman Oct 16 2010 at 6:05 3 Ricky: If $T$ is consistent, $Con(T)$ is a true arithmetic statement and therefore holds in any $\omega$-model $M$ of any decent theory. If $M$ also happens to be a model of $T$, then $M$ is a model of $T+Con(T)$, and therefore $Con(T+Con(T))$ is true. But then the existence of $M$ is not provable in $T+Con(T)$ if $T$ is strong enough, or else, letting $S=T+Con(T)$, we have that $S$ proves $Con(S)$, against Gödel's 2nd incompleteness. "Strong enough" may be taken to mean that $T$ proves the completeness theorem and interprets PA. Both of these requirements hold for ZFC-replacement. – Andres Caicedo Oct 16 2010 at 6:09 show 5 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The axiom (scheme) of replacement is in some sense only used to get large sets. Namely, if you already have a set $X$, then every subclass of $X$ is a set by separation. Replacement guarantees that certain large objects are sets. Now, in the case of natural numbers one sometimes states the axiom of infinite by saying that there is a set $y$ which is closed under the operation $x\mapsto x\cup\{x\}$. We can assume that there is a single element $a$ of $y$ such that $y$ is the minimal set which contains $a$ and is closed under $x\mapsto x\cup\{x\}$. Now we can define a map $f$ from $y$ to the ordinals by recursion in the natural way. (Mapping $a$ to $0$ and $x\cup\{x\}$ to $f(x)\cup\{f(x)\}$.) The image of this map, the class of natural numbers, is a set by replacement. But now, by the previous remark, every subclass of $\mathbb N$ is a set by replacement. Of course, we could also phrase the axiom of infinite in a more direct way. - In a certain sense you are of course right, but this is what I have in mind. Suppose you have a predicate with one free variable P(x), but with bound variables that quantify over large sets. You could use that predicate to define a subset of the integers by using separation, but in some sense you used replacement "behind the scenes". It seems to me that you could use that to prove the existence of sets that you couldn't ordinarily talk about. 
I know that's terribly fuzzy... – arsmath Oct 16 2010 at 16:53

Poking around, I came across an incredibly easy example of a small set that requires replacement: the transitive closure of a set. It's mentioned in this thread. You can't even construct $V_\omega$ without replacement. Section 4.2 of this survey suggests that you can recover all of these usages of replacement by adding the assertion that every set belongs to a $V_\alpha$ which is itself a set. -
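The transitive-closure example in the last answer is easy to play with in miniature. The following pure-Python sketch models hereditarily finite sets as nested frozensets and runs the usual recursion $\operatorname{tc}(x) = x \cup \bigcup_{y \in x} \operatorname{tc}(y)$; the set-theoretic point, of course, is about justifying that recursion for arbitrary sets, which a finite toy model cannot capture.

```python
# Toy model: hereditarily finite sets as nested frozensets.
# tc(x) = x  union  (union of tc(y) for y in x) -- the recursion whose
# justification for arbitrary sets is where replacement enters in ZF.

def tc(x):
    """Transitive closure of a hereditarily finite set."""
    closure = set(x)
    for y in x:
        closure |= tc(y)
    return frozenset(closure)

def nat(n):
    """Von Neumann natural: 0 = {}, n+1 = n union {n}."""
    if n == 0:
        return frozenset()
    m = nat(n - 1)
    return m | frozenset({m})

three = nat(3)
print(len(tc(three)))                                     # 3
print(tc(three) == frozenset({nat(0), nat(1), nat(2)}))   # True: tc(3) = {0, 1, 2}
```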
http://crypto.stackexchange.com/questions/6276/obtaining-a-key-thats-using-ice-algorithm?answertab=votes
# Obtaining a key that's using the ICE algorithm

I have a text file that has been processed by the SNOW steganography tool, which uses ICE encryption. However, I do not know the key that would enable me to decrypt and retrieve the hidden message. What tools are available, proven to have worked, that would help me in retrieving the key?

Edited: instead of asking "what ways are there", I've changed to "what tools are there" -

## 1 Answer

Brute forcing the password, since no useful cryptanalysis of ICE is known. I'm really more of a theory guy, so I don't know of any tools for this, but there are some resources that I've found. http://security.stackexchange.com/questions/1376/where-can-i-find-good-dictionaries-for-dictionary-attacks answers its title's question, and cryptospecs.googlecode.com/svn/trunk/symmetrical/specs/ice.pdf gives the details of the ICE algorithms. (It might be easier to pull the relevant code from SNOW rather than implementing ICE directly.) www.darkside.com.au/snow/description.html gives somewhat of a description of how passwords are converted into keys; I suspect that they then pad the right with zeros or with a one followed by zeroes, although it would be good to look at SNOW's code to be sure. -

Thanks Ricky for the swift response! I appreciated it so much! What you said makes sense. And, I got my dictionaries from a link among the list of links in the link you provided and managed to obtain a key for a jpg that has encrypted files in it by using the stegbreak tool. However, for snow, there seem to be no known tools to perform a dictionary attack. It appears that it is impossible to detect when you have successfully retrieved the correct data. I am really at my wits' end on how to resolve this issue of obtaining the 'iced' key... – user4981 Feb 8 at 6:56

– Ricky Demer Feb 8 at 8:52

Thanks Ricky! Unfortunately, I am really inexperienced with this. I don't see how cryptanalysis would help in obtaining the key. And it seems that the information provided is mostly theoretical and not applied in terms of running programs :\ Therefore, is there any existing tool that allows me to use a rainbow table or perhaps other means, to obtain the key on snow? Thanks! – user4981 Feb 10 at 17:42

Conceivably, a brilliant cryptanalysis could guarantee almost-instant recovery of the key from a guess of a specific two of the plaintext blocks for a known ciphertext such that the Hamming distance between the guess for those blocks and the actual plaintext for those blocks is at most 15. (I'm not aware of any actual result anywhere near that for any computer-era cipher that was ever thought useful. Also, as I mentioned in my answer, no useful cryptanalysis of ICE is known.) – Ricky Demer Feb 11 at 6:56

I'm pretty sure there aren't any existing tools that will do what you want. – Ricky Demer Feb 11 at 6:56
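Since no cryptanalytic shortcut is known, the practical approach really is a scripted dictionary attack. The Python sketch below only shows the shape of such a loop: `extract_with_key` is a hypothetical stand-in for whatever actually performs the extraction (invoking the SNOW binary or a reimplementation of ICE), and the crude printable-text heuristic addresses the problem raised in the comments of recognising when a candidate password has produced real plaintext.

```python
import string

PRINTABLE = set(string.printable)

def looks_like_text(data: bytes, threshold: float = 0.95) -> bool:
    """Crude plausibility check: non-empty and mostly printable ASCII."""
    if not data:
        return False
    try:
        text = data.decode("ascii")
    except UnicodeDecodeError:
        return False
    ok = sum(ch in PRINTABLE for ch in text)
    return ok / len(text) >= threshold

def extract_with_key(stego_path: str, password: str) -> bytes:
    """Hypothetical stand-in: run the SNOW/ICE extraction with `password`
    on the file at `stego_path` and return the recovered bytes."""
    raise NotImplementedError("wire this up to the actual extractor")

def dictionary_attack(stego_path: str, wordlist_path: str):
    """Try each candidate password; return the first plausible hit."""
    with open(wordlist_path, "r", errors="ignore") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            if not candidate:
                continue
            recovered = extract_with_key(stego_path, candidate)
            if looks_like_text(recovered):
                return candidate, recovered
    return None, None
```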
http://en.wikipedia.org/wiki/Stochastic_calculus
# Stochastic calculus

Stochastic calculus is a branch of mathematics that operates on stochastic processes. It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes. It is used to model systems that behave randomly.

The best-known stochastic process to which stochastic calculus is applied is the Wiener process (named in honor of Norbert Wiener), which is used for modeling Brownian motion as described by Louis Bachelier in 1900 and by Albert Einstein in 1905, and other physical diffusion processes in space of particles subject to random forces. Since the 1970s, the Wiener process has been widely applied in financial mathematics and economics to model the evolution in time of stock prices and bond interest rates.

The main flavours of stochastic calculus are the Itō calculus and its variational relative the Malliavin calculus. For technical reasons the Itō integral is the most useful for general classes of processes, but the related Stratonovich integral is frequently useful in problem formulation (particularly in engineering disciplines). The Stratonovich integral can readily be expressed in terms of the Itō integral. The main benefit of the Stratonovich integral is that it obeys the usual chain rule and therefore does not require Itō's lemma. This enables problems to be expressed in a coordinate-system-invariant form, which is invaluable when developing stochastic calculus on manifolds other than $R^n$. The dominated convergence theorem does not hold for the Stratonovich integral; consequently it is very difficult to prove results without re-expressing the integrals in Itō form.

## Itō integral

Main article: Itō calculus

The Itō integral is central to the study of stochastic calculus. The integral $\int H\,dX$ is defined for a semimartingale $X$ and locally bounded predictable process $H$.

## Stratonovich integral

Main article: Stratonovich integral

The Stratonovich integral of a semimartingale $X$ against another semimartingale $Y$ can be defined in terms of the Itō integral as

$$\int_0^t X_{s-} \circ d Y_s := \int_0^t X_{s-}\, d Y_s + \frac{1}{2} \left[ X, Y\right]_t^c,$$

where $[X, Y]_t^c$ denotes the quadratic covariation of the continuous parts of $X$ and $Y$. The alternative notation

$$\int_0^t X_s \, \partial Y_s$$

is also used to denote the Stratonovich integral.

## Applications

A very important application of stochastic calculus is in quantitative finance, in which asset prices are often assumed to follow stochastic differential equations. In the Black-Scholes model, prices are assumed to follow the geometric Brownian motion.

## References

• Fima C Klebaner, 2012, Introduction to Stochastic Calculus with Application (3rd Edition). World Scientific Publishing. ISBN 9781848168312.
• Szabados, T. S.; Székely, B. Z. (2008). "Stochastic Integration Based on Simple, Symmetric Random Walks". Journal of Theoretical Probability 22: 203. doi:10.1007/s10959-007-0140-8. Preprint.
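A quick way to see the difference between the two integrals is to approximate $\int_0^T W\,dW$ numerically: left-endpoint sums approximate the Itō integral and converge to $\tfrac12(W_T^2 - T)$, while midpoint sums approximate the Stratonovich integral and give $\tfrac12 W_T^2$, consistent with the quadratic-covariation correction term above. The step count and random seed below are arbitrary choices made for the illustration.

```python
import numpy as np

# Monte-Carlo illustration of Ito vs Stratonovich for  int_0^T W dW.
# Left-endpoint sums ~ Ito integral; midpoint sums ~ Stratonovich integral.

rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))       # Brownian path W[0..n]

ito = np.sum(W[:-1] * dW)                        # sum W(t_i) * (W(t_{i+1}) - W(t_i))
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)      # midpoint rule

print("Ito          :", ito,   "  expected ~", 0.5 * (W[-1]**2 - T))
print("Stratonovich :", strat, "  expected ~", 0.5 * W[-1]**2)
```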
http://en.wikibooks.org/wiki/Calculus/Finite_Limits
Calculus/Finite Limits | | | | |------------------------------------|----------|-------------------| | ← Limits/An Introduction to Limits | Calculus | Infinite Limits → | | Finite Limits | | | Informal Finite Limits Now, we will try to more carefully restate the ideas of the last chapter. We said then that the equation $\lim_{x\to 2} f(x) = 4$ meant that, when $x$ gets close to 2, $f(x)$ gets close to 4. What exactly does this mean? How close is "close"? The first way we can approach the problem is to say that, at $x=1.99$, $f(x)=3.9601$, which is pretty close to 4. Sometimes however, the function might do something completely different. For instance, suppose $f(x)=x^4-2x^2-3.77$, so $f(1.99)=3.99219201$. Next, if you take a value even closer to 2, $f(1.999)=4.20602$, in this case you actually move further from 4. The reason for this is that substitution gives us 4.23 as x approaches 2. The solution is to find out what happens arbitrarily close to the point. In particular, we want to say that, no matter how close we want the function to get to 4, if we make $x$ close enough to 2 then it will get there. In this case, we will write $\quad\lim_{x\to 2} f(x) = 4$ and say "The limit of $f(x)$, as $x$ approaches 2, equals 4" or "As $x$ approaches 2, $f(x)$ approaches 4." In general: Definition: (New definition of a limit) We call $L$ the limit of $f(x)$ as $x$ approaches $c$ if $f(x)$ becomes arbitrarily close to $L$ whenever $x$ is sufficiently close (and not equal) to $c$. When this holds we write $\lim_{x \to c} f(x) = L$ or $f(x) \to L \quad \mbox{as} \quad x \to c.$ One-Sided Limits Sometimes, it is necessary to consider what happens when we approach an $x$ value from one particular direction. To account for this, we have one-sided limits. In a left-handed limit, $x$ approaches $a$ from the left-hand side. Likewise, in a right-handed limit, $x$ approaches $a$ from the right-hand side. For example, if we consider $\quad\lim_{x\to 2} \sqrt{x-2}$, there is a problem because there is no way for $x$ to approach 2 from the left hand side (the function is undefined here). But, if $x$ approaches 2 only from the right-hand side, we want to say that $\sqrt{x-2}$ approaches 0. Definition: (Informal definition of a one-sided limit) We call $L$ the limit of $f(x)$ as $x$ approaches $c$ from the right if $f(x)$ becomes arbitrarily close to $L$ whenever $x$ is sufficiently close to and greater than $c$. When this holds we write $\lim_{x \to c^+} f(x) = L.$ Similarly, we call $L$ the limit of $f(x)$ as $x$ approaches $c$ from the left if $f(x)$ becomes arbitrarily close to $L$ whenever $x$ is sufficiently close to and less than $c$. When this holds we write $\lim_{x \to c^-} f(x) = L.$ In our example, the left-handed limit $\quad\lim_{x\to 2^{-}} \sqrt{x-2}$ does not exist. The right-handed limit, however, $\quad\lim_{x\to 2^{+}} \sqrt{x-2} = 0$. It is a fact that $\lim_{x\to c} f(x)$ exists if and only if $\lim_{x\to c^+} f(x)$ and $\lim_{x\to c^-} f(x)$ exist and are equal to each other. In this case, $\lim_{x\to c} f(x)$ will be equal to the same number. In our example, one limit does not even exist. Thus $\lim_{x\to 2} \sqrt{x-2}$ does not exist either. | | | | |------------------------------------|----------|-------------------| | ← Limits/An Introduction to Limits | Calculus | Infinite Limits → | | Finite Limits | | |
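The informal definitions above are easy to probe numerically. The short sketch below (illustrative values only) tabulates the chapter's example $f(x)=x^4-2x^2-3.77$ near $x=2$, and evaluates $\sqrt{x-2}$ only from the right, mirroring the one-sided limit discussion.

```python
import math

def f(x):
    return x**4 - 2*x**2 - 3.77   # the chapter's example; f(x) -> 4.23 as x -> 2

print("two-sided approach to x = 2:")
for x in (1.9, 1.99, 1.999, 2.001, 2.01, 2.1):
    print(f"  f({x}) = {f(x):.6f}")

print("right-hand approach for sqrt(x - 2):")
for h in (0.1, 0.01, 0.001, 0.0001):
    print(f"  sqrt({2 + h} - 2) = {math.sqrt(h):.6f}")

# Values of sqrt(x - 2) for x < 2 are not real, so the left-hand limit
# does not exist, and therefore neither does the two-sided limit at x = 2.
```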
http://math.stackexchange.com/questions/240085/proving-a-is-invertible-if-a-a2-i/240112
# Proving A is Invertible if $A + A^2 = I$ I'm trying to prove A is invertible by proving there is an $A'$, for $AA' = I$ So I got to this stage $A(I + A) = I$ Now I determine that $A' = I + A$, and from that I get $AA' = I$, I wanted to know if this is valid, casue it doesn't seem to make sense, and I can't find a 'real' solution for such equation. *Notes A is $n * n$, $I$ Respresents the Identity Matrix and is also of the same order. - ## 7 Answers With $A(I+A)=I$ you’re almost there: this shows that $A$ has a right inverse, $I+A$. But a square matrix has a right inverse if and only if it’s actually invertible, so $A$ is invertible. If you don’t already know that theorem, you can prove it in a variety of ways. For instance, $$I=I^T=\big(A(I+A)\big)^T=(I+A)^TA^T\;,$$ so $A^T$ has a left inverse. Suppose that $A^Tx=0$. Then $$x=Ix=(I+A)^TA^Tx=(I+A)^T0=0\;,$$ so the null space of $A^T$ is trivial, containing only the zero vector. Therefore $A^T$ is invertible, and its inverse must be $(I+A)^T$. Thus, $I=A^T(I+A)^T=\big((I+A)A\big)^T$, and therefore $$(I+A)A=I^T=I\;,$$ meaning that $I+A$ is also a left inverse of $A$. Since $I+A$ is both a left and a right inverse of $A$, it’s actually $A^{-1}$, and $A$ is invertible. Added: As tst points out in the comments, I was working much too hard here. We can factor the $A$ out of $A+A^2$ on the right as well as on the left, so not only do we have $A(I+A)=I$, we also have $(I+A)A=I$, and it’s immediate that $I+A=A^{-1}$. - I did understand that case, sorry for not clarifying that in my original post, But I still don't understand how will this imply for a matrix, lets say of order 2, which is invertible. – res Nov 18 '12 at 20:36 @res: I don’t understand what you mean. If you have a $2\times 2$ matrix $A$ such that $A+A^2=\pmatrix{1&0\\0&1}$ then $A$ is invertible, and $A^{-1}=I+A$; that’s all it says. What more do you think is needed? – Brian M. Scott Nov 18 '12 at 20:44 @Brian, I am stating that, having shown that A has a right inverse, since A is square, that right inverse is the inverse of A: i.e., it is also the left inverse (vice versa). I may not have been clear in my comment, so I will delete it. (I shouldn't have referred to the left inverse in my earlier comment as $A^{-1}$, so sorry about that.) It's just that the OP may already be able to move from having found $AA^{\prime} = I,\text{ to}\;\; A^{\prime} = A^{-1}$. – amWhy Nov 18 '12 at 20:47 @amWhy: Yes, and that’s precisely the result that I thought might possibly not yet have been proved in the OP’s course. I thought that the phrasing ‘In case you don’t already know this theorem’ pretty clearly recognized the possibility that the result had been covered. (And yes, I have seen people get that close to the end and not realize that they were there, so that might have been the sticking point even if the result was known.) – Brian M. Scott Nov 18 '12 at 20:52 1 You have $A(I+A)=(I+A)A=I$ so by definition $A^{-1}=I+A$. I think the answer is more complicated than what's needed. – tst Nov 18 '12 at 21:05 show 7 more comments From $A+A^2=I$ you do indeed get that $AA'=I$ for some $A'$, namely $A'=I+A$. And since the distribution laws hold "on both sides", you also get $A'A=I$ for the same $A'$, which proves that $A$ is invertible. - I understand your answer, and I understood it's possible too, I just wonder how will it imply using 'real numbers', and not just theoretically. – res Nov 18 '12 at 20:24 nonpop: I think you just stated what the OP already established. 
– amWhy Nov 18 '12 at 20:26

2 @res I don't really understand what you mean by "how will it imply using 'real numbers'". Taking a general, say $2\times 2$, matrix and just calculating is probably very messy. That's the power of abstraction, that we can prove this very cleanly using previous results. For a simple concrete example you can take $\phi=\frac{\sqrt 5-1}2$ and the matrix $A=\begin{pmatrix}\phi&0\\0&\phi\end{pmatrix}$ and see that it works. – nonpop Nov 18 '12 at 20:49

We have $1=\det(I)=\det(A+A^2)=\det(A(I+A))=\det(A)\det(I+A)$. Thus $\det(A)\neq 0$ and $A$ is invertible. -

Given: $A + A^2 = I$. You set out to prove $A$ is invertible by proving there is an $A'$ such that $AA' = I$. You recognized that $A + A^2 = AI + A^2 = A(I + A) = I$. So you got to this stage: $A(I + A) = I$. Now you let $A' = I + A$, and from that you got $AA' = I$, where $A'$ is a right inverse of $A$. Note that, because addition of matrices is commutative, $$A+A^2 = A^2 + A = (A+I)A = (I + A)A = I.$$ So by the same strategy we used to show that $(I + A)$ is a right inverse of $A$, it follows that $(I+A)$ is also a left inverse of $A$, and hence is THE unique inverse of $A$.

In general, and as you may already know: for any square matrix $M$, if $M$ has a right inverse, then the right inverse is the unique inverse of $M$, and so it is also a left inverse of $M$ (and vice versa). So you did indeed find the inverse of $A$, in having found a right inverse of $A$, namely $A' = I + A = A^{-1}$. Therefore, since $A$ has an inverse, $A$ is invertible.

An alternate strategy is to take the determinant of each side of the equation $A + A^2 = I$. Note that $$1 = \det(I) = \det(A + A^2) = \det(A(I + A)) = \det(A)\det(I + A).$$ So $\det(A)$ cannot be zero. Thus, $A$ is invertible.

What might be confusing you is that you are proving IF $A + A^2 = I$, THEN $A$ is invertible, but that is NOT to say that if $A$ is invertible, then it is always the case that $A + A^2 = I$. So you shouldn't expect to find that $A+A^2 = I$ for all invertible matrices $A$, and don't worry if an example of an invertible matrix $A$ for which $A+A^2 = I$ doesn't immediately come to mind. -

@mythealias - done, thanks for pointing it out! – amWhy Nov 19 '12 at 13:35

Hint: Over any ring, if $f(a) = 0$ and $f(0)$ is invertible, then $a$ is invertible (with a two-sided inverse if $a$ commutes with all coefficients $c_i$ of $f$), since $$c_n a^n + \cdots + c_1 a + c_0 = 0\ \Rightarrow\ (c_n a^{n-1}+\cdots + c_1)\, a = -c_0,$$ so left-multiplying by $-c_0^{-1}$ yields a left inverse for $a$. When $a$ commutes with the coefficients, we can commute $a$ to the left, showing that the above inverse is a two-sided inverse (in particular such commutativity holds if the coefficients are integers, so universally commutative). -

Put $f(x) = x^2 + x - 1$. We have $f(A) = 0$. Since $x = 0$ is not a root of $f$, zero is not an eigenvalue of $A$. Thus, $A$ is invertible. -

Could you elaborate on what result the last statement is a consequence of? From what I recall, the minimal polynomial of a matrix divides its characteristic polynomial, but this wouldn't necessarily imply absence of zero eigenvalues. – johnny Nov 18 '12 at 21:26

Since this polynomial sends $A$ to zero, the minimal polynomial $m_A$ of $A$ must satisfy $m_A\mid f$. Every eigenvalue must be a root of the minimal polynomial.
Since $f$ does not have 0 as a root, and $m_A\mid f$, 0 is not a root of $m_A$ either. – ncmathsadist Nov 18 '12 at 21:41

Prove: if $A+A^2 = I$ then $A$ is invertible. Contrapositive: if $A$ is singular then $A+A^2 \neq I$. Indeed, if $A$ is singular we have $\det A = 0$, hence $\det(A+A^2) = \det(A)\det(A+I) = 0$, while $\det I = 1$. So $A+A^2 \neq I$. -
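To address the "real numbers" worry in the comments: concrete matrices satisfying $A+A^2=I$ do exist beyond the scalar example $\phi I$. The sketch below checks the companion matrix of $x^2+x-1$ (a choice not taken from the thread) with numpy and confirms that $I+A$ really is its inverse.

```python
import numpy as np

# One concrete non-scalar matrix with A + A^2 = I: the companion matrix
# of x^2 + x - 1.  Its eigenvalues are the roots (-1 +- sqrt(5)) / 2.
A = np.array([[0.0, 1.0],
              [1.0, -1.0]])
I = np.eye(2)

print(np.allclose(A + A @ A, I))              # True: A satisfies the relation
print(np.allclose(A @ (I + A), I))            # True: I + A is a right inverse
print(np.allclose((I + A) @ A, I))            # True: ... and a left inverse
print(np.allclose(np.linalg.inv(A), I + A))   # True: so A^{-1} = I + A
```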
http://math.stackexchange.com/questions/144885/counterexample-for-a-pullback-pushout-situation
# Counterexample for a pullback-pushout situation Suppose you are in the category of sets or more generally in a topos (i.e. sheaf topos) and $f:A\rightarrow C$, $g:B\rightarrow C$ are two morphisms. There is a canonical map $u:D\to C$ from $D$ (defined as the pushout of the diagram $A\leftarrow A\times_C B\rightarrow B$ consisting of the two projections) into $C$. Presumably $u$ doesn't have to be a monomorphism in general, however I can't think of a counterexample. In my situation, it is supplementary given that the two projections $pr_1:A\times_C B\rightarrow A$ and $pr_2:A\times_C B\rightarrow B$ and $f$ are monomorphisms each. Does $u$ have to be a monomorphism then? - It is very unusual for the individual projections to be monomorphisms. Are you considering intersections of subobjects? – Zhen Lin May 14 '12 at 7:39 Unfortunately, I am not familiar with the notation of subobjects in a topos but presumably, you are right. Eventually, I would also like to dualize the question which is possible in the current formulation replacing pushouts by pullbacks and monos by epis. – Robin Fischer May 14 '12 at 8:09 Toposes are not self-dual, so that would be a whole other question. – Zhen Lin May 14 '12 at 8:19 First of all, your answer is very helpful, thank you. Hm, I've thought, an answer could be dualized. Anyway, do you think the dual of the question is also false? – Robin Fischer May 14 '12 at 8:26 Just because it's category theory doesn't mean it can be dualised! (The reason why dualising works in abelian categories is because the opposite of an abelian category is an abelian category. But this is not true for a topos.) In this case, your dual question also has a negative answer. A counterexample can be found in $\textbf{Set}$. – Zhen Lin May 14 '12 at 8:39 ## 1 Answer 1. Let $C = 1$, let $A$ be an object such that $A \to 1$ is not a monomorphism, and let $B = 0$. Then, $A \times_C B = 0$, but $D = A$, so $D \to C$ is not a monomorphism. (For this to work we only need to know that $0$ is a strict initial object.) 2. Let us consider the topos of sheaves on the discrete space $\{ a, b \}$. Let $C = 1$, let $A$ be the subsheaf of $C$ such that $A_a = 1$ and $A_b = 0$, and let $B$ be a sheaf such that $B_a = 1$ and $B_b = 2$. Then, $f : A \to C$ is monic, $g : B \to C$ is epic but not monic, and both $p_1 : A \times_C B \to A$ and $p_2 : A \times_C B \to B$ are monic. But $D_a = 1$ and $D_b = 2$, so $D \to C$ is not monic. Morally, what's happening here is that your hypotheses only guarantee that the restriction of $B$ (considered as a sheaf over $C$) to $A$ is monic, so you have no control over what $B$ looks like over the complement of $A$ in $C$. Nonetheless, this plays a role in the construction of $D$ and so influences whether $D \to C$ is monic or not. -
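The first counterexample is small enough to check by brute force. The sketch below, in plain Python for finite sets, forms the fiber product $A\times_C B$, builds the pushout $D$ as the disjoint union of $A$ and $B$ glued along it, and tests whether the induced map $D\to C$ is injective (monic). The particular sets and functions are the ones from part 1 of the answer, written out by hand for the category of finite sets.

```python
from itertools import product

def pushout_map_is_mono(A, B, C, f, g):
    """Finite-set check: form P = A x_C B, the pushout D of A <- P -> B,
    and test whether the induced map D -> C is injective."""
    assert set(f.values()) <= C and set(g.values()) <= C

    # fiber product: pairs agreeing over C
    P = [(a, b) for a, b in product(A, B) if f[a] == g[b]]

    # disjoint union of A and B, tagged so the two copies stay apart
    nodes = [("A", a) for a in A] + [("B", b) for b in B]
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n

    def union(m, n):
        parent[find(m)] = find(n)

    # glue along the fiber product: pr1(x) ~ pr2(x)
    for a, b in P:
        union(("A", a), ("B", b))

    classes = {find(n) for n in nodes}      # elements of the pushout D
    to_C = {}                               # induced map D -> C
    for tag, x in nodes:
        to_C[find((tag, x))] = f[x] if tag == "A" else g[x]

    images = [to_C[c] for c in classes]
    return len(set(images)) == len(images)  # injective?

# Example 1 from the answer: C a point, B empty, A a two-element set.
A, B, C = {"a1", "a2"}, set(), {"*"}
f = {"a1": "*", "a2": "*"}
g = {}
print(pushout_map_is_mono(A, B, C, f, g))   # False: D = A has two elements over one point
```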
http://ilaba.wordpress.com/category/mathematics/mathematics-general/page/2/
# The Accidental Mathematician Because "exact science is not always exact science." ## Sucking at everything else 9 08 2010 The June/July issue of the Notices of the AMS features an interview with Gioia De Cari, a former graduate student in mathematics who quit somewhere along the way, went on to become an actress and a playwright instead, and recently wrote and performed a one-woman show about her mathematics experience. That could have been me, perhaps, in a parallel universe where my graduate student self wasn’t a recent immigrant and had enough of a safety net to be able to contemplate a change of career. Or in another one where I was stuck in Poland instead of going to graduate school in Canada. Or if I had not been available or willing to make several long-distance moves before settling down, or if the only tenure-track academic jobs I could get had been in places where I did not want to live. Even the timeline is close. De Cari was a graduate student at MIT in the late 1980s. I started graduate school in Toronto in 1989. There would have been a small issue involving my acting skills, or more accurately a lack thereof. Still, I could imagine having had a career in the arts instead, or humanities, or something else with little connection to mathematics. I certainly have thought about quitting mathematics, often and extensively at times, especially in the early years when I was less invested in it. And it’s not like I’ve never had any other interests. At one point, back when I was an undergraduate, I briefly entertained the idea of getting a second degree in the humanities. It was not practical to go ahead with it. Read the rest of this entry » Comments : 11 Comments » Categories : art, mathematics: general, women in math ## What if mathematicians wrote travel articles? 29 06 2010 Some time ago I suggested that scientists might not always make the best writers. I guess I wasn’t the only person ever to make this profound observation. Slate has since published this piece on how political scientists would cover the news; see also here. As hilarious as these are, I would say that there’s more to the picture. The story below is inspired by this one (hat tip to Terry Tao). Believe it or not, there are actual reasons why we have to write like this sometimes. I’m as guilty as anyone. In fact, I’m in the middle of revising one of my papers right now… In this article we describe the plane flight that Roger and I took to San Francisco. The purpose of our trip was to meet Sergey, our collaborator on the paper “The structure of fuzzy foils” (J. Fuzzy Alg. Geom. 2003) who also co-organized with me an MSRI workshop in 2005. Our main result was to arrive at the San Francisco airport at the expected time and meet Sergey there. To accomplish this, we relied on a regularly scheduled flight on a commercial airline. For the history of aviation (including commercial aviation) and the general background, we refer the interested reader to Wikipedia (see also Britannica). This article is organized as follows. We first explain a few preliminary steps, including the travel to the airport and the check-in procedure. The main part of the trip was the actual flight, which we discuss in a new paragraph. We conclude with a few remarks on arriving at the destination airport. Read the rest of this entry » Comments : 14 Comments » Categories : mathematics: general, science, writing ## More on mathematics and madness 4 06 2010 In popular movies, a scientist is usually brilliant but troubled. 
We know that he’s brilliant because we’re told so repeatedly, and we know that he’s troubled because that’s plain to see. He might spend a lot of screen time getting depressed over his lack of creative output and trying to remedy this situation by getting drunk or going out for long walks – anything that will keep him from attempting any actual work. Finally, thanks to divine inspiration, a life-changing event or some other such, he stumbles upon a Great Idea. Now that he’s made his breakthrough, the days and nights go by in a blur as the work flies off his hands, the manuscript pages practically writing themselves. Once it’s all done, the scientist has to snap out of the trance, at which point it’s not uncommon for him to collapse and have a nervous breakdown. I don’t even want to name specific movies – that’s shooting fish in the barrel. The number increases further if you substitute a writer or artist for a scientist. If you’ve seen too many Hollywood films and don’t know better from your own experience, you could be excused for drawing the conclusion that it’s somehow the mental illness that’s responsible for our creativity. I mean, scientific discovery – not to mention art – boils down to blinding flashes of brilliance, and those come hard and fast when you’re seriously kooky, right? And now there’s a medical study that I’m sure I’ll see quoted in support of this. According to a recent article in Science Daily, researchers at Karolinska Institutet have shown that highly creative people and people with schizophrenia have similar dopamine systems. That in turn has been linked to the capacity for what the article calls “divergent thought” (a scientist is quoted to refer to it as “thinking outside the box”, one of the most annoying phrases out there), which contributes both to creative problem solving in healthy people and abnormal thought processes in people with schizophrenia. The long suspected connection – make sure to also check the links under “related articles” – may thus have a basis in brain chemistry. Yay for the Mad Scientist! Read the rest of this entry » Comments : 5 Comments » Tags: madness Categories : mathematics: general, movies, science ## When the truth is gone… 8 03 2010 The main ingredients are simple: a house cat, a large box with an airtight lid, a radiation detector, a radioactive sample, and a container of poison gas such as cyanide. Start by placing the poison gas container inside the box and hooking it up to the radiation detector so that if radiation is detected, the gas container is opened and the gas is released into the box. Place the radioactive sample somewhere near the detector; the sample should be chosen so that there is, say, 50% probability that radioactive decay will be observed within an hour. Now put the cat in the box, close the box, leave the room, and shut the door behind you. According to the superposition principle in quantum mechanics, a radioactive particle does not simply wait a while and then decay at a time chosen randomly according to a given probability distribution. Instead, it evolves into a superposition of a decayed and non-decayed state, and remains so until we check on it by performing an observation. We know that such superposed states must exist, but we never get to see them. The act of taking a measurement causes the particle to actually assume one of the two definite states, either decayed or not, with certainty. A physicist would say that the wave function of the particle collapses upon observation. But what about the cat? 
If at least one of the radioactive particles decays, the poison gas is released and the cat dies. Otherwise, the cat survives. You will find out what happened once you open the box. Until then, you’re the proud owner of Schrödinger’s cat: alive and dead simultaneously, a quantum superposition of a live cat and a dead cat as dictated by the wave function of the radioactive sample. On the other hand, if you would rather keep the kitty away from dangerous contraptions and settle for the philosophical exercise instead, you could do worse than renting A Serious Man, the latest Coen brothers movie. Read the rest of this entry » Comments : 2 Comments » Categories : mathematics: general, movies ## Putnam 22 11 2009 This has been my first year on the Putnam committee: the committee that selects the problems for the William Lowell Putnam undergraduate competition. The committee consists of 3 members appointed for a 3-year term each (each year, one person’s term ends and another one is appointed in his place) and a fourth person, Loren Larson, who is a “permanent” secretary of the committee. To start with, each committee member proposes some number of problems (normally, at least 10). The problem sets and solutions are then circulated and discussed, and eventually the committee meets in person to decide on the final selection. This is all done in strict confidence and well in advance of the actual competition. I have never written the Putnam. I wrote the Math Olympiad back in the days and qualified for the International Math Olympiad in my last year of high school, but Putnam is not available in Europe. I’m not sure that I would have been interested anyway. I wanted to study the “serious” mathematics: the big theories, the heady generalizations, the grand visions. Olympiads and competitions faded into the distant background and pretty much stayed there until last year. I did point out my Putnam virginity when I was approached about joining the committee, and was told that Putnam does try to engage from time to time people who are not normally on the circuit, if only to have a larger pool of potential ideas. Of course, the advantage of having people on the committee who are on the Putnam circuit is that they know what’s expected, what works and what doesn’t, what has already been used and shouldn’t be recycled, and so on. Last year’s other two committee members – Mark Krusemeyer and Bjorn Poonen – are Putnam veterans, and of course Bjorn is a four-time Putnam fellow. Mark’s term ends this year; I don’t know yet who will be joining us this January. Well, you could call it a steep learning curve. Putnam problems are expected to be hard in a particular way: they should require ingenuity and insight, but not the knowledge of any advanced material beyond the first or occasionally second year of undergraduate studies, and there should be a short solution so that, in principle, an infinitely clever person could solve all 12 problems in the allotted 6 hours. (In reality, that doesn’t happen very often, and I’ve heard that it generates considerable attention when someone comes too close.) The problems are divided into two groups of six – A1-A6 for the morning session and B1-B6 for the afternoon session – and there is a gradation of the level of difficulty within each group. A1 is often the hardest to come up with – it should be the easiest of the bunch, but should still require some clever insight and have a certain kind of appeal. 
The difficulty (for the competitor, not for us) then increases with each group, with A6 and B6 the hardest problems on the exam. There are also various subtle differences between the A-problems and B-problems; this is something that I would not have been aware of if another committee member hadn’t pointed it out to me. For example, a B1 could involve some basic college-level material (e.g. derivatives or matrices), but this would not be acceptable in an A1, which should be completely elementary. The competition is taking place in two weeks, so you’ll know soon enough what problems we ended up selecting. Meanwhile, it might entertain you to see a few of my duds: problems I proposed that were rejected for various reasons. They will not be appearing on the actual exam and I’m not likely to propose variants of them in the future. The solutions are under the cut, along with an explanation of why each problem is a dud. 1. A ball is shot out of a corner $A$ of a square-shaped billiard table $ABCD$ at an angle $\theta$ to the edge $AB$. The ball travels in a straight line without losing speed; whenever it hits one of the walls of the table, it bounces off it so that the angle of reflection is equal to the angle of incidence. Find all values of $\theta$ such that the ball will hit one of the corners $A,B,C,D$ after bouncing off the walls exactly 2009 times. 2. Are there integer numbers $a_1<a_2<\dots<a_{2009}$ such that $\sum_{i<j}(a_j-a_i)=31415926535$? 3. Given $n$, determine the largest integer $m(n)$ with the property that any $n$ points $P_1,P_2,\dots,P_n$ on a circle must determine at least $m(n)$ obtuse angles $P_iP_jP_k$. Read the rest of this entry » Comments : 3 Comments » Categories : mathematics: general ## La Sagrada Familia and the hyperbolic paraboloid 14 06 2009 I’m travelling in Spain this month – mostly for mathematical reasons, but, well, it’s Spain. Last week I was fortunate to see La Sagrada Familia. La Sagrada Familia is the opus magnum of the great Catalan architect and artist Antoni Gaudí. Gaudí was named to be in charge of the project in 1883, at the age of 31, and continued in that role for the rest of his life. From 1914 until his death in 1926 he worked exclusively on the iconic temple, abandoning all other projects and living in a workshop on site. The construction is still in progress and expected to continue for at least another 20-30 years. The cranes and scaffolding enveloping the temple have almost become an integral part of it. That’s not exactly surprising, given the scale and complexity of the project together with the level of attention to detail that’s evident at every step. Almost every stone is carved separately according to different specifications. Here, for example, is the gorgeous Nativity portal. (Click on the photos for somewhat larger images.) To call Gaudí’s work unconventional would be a major understatement. To call it novelty – don’t even think about it. His buildings are organic and coherent. Everything about them is thought out, reinvented and then put back together, from the overall plan to the layout of the interior, the design of each room, the furnishings, down to such details as the shape of the railings or the window shutters with little moving flaps to allow ventilation. Gaudí’s inspiration came from many sources, including nature, philosophy, art and literature, and mathematics. 
Read the rest of this entry » Comments : 4 Comments » Categories : art, mathematics: general, photography, travel ## Truth be told: 23 04 2009 Yvan Saint-Aubin in the CMS Notes, on behalf of the new bilingualism committee: Truth be told, writing an elegant, masterful scientific article is possible only when we do so in our mother tongue. Right. You could tell that to Hoory, Linial and Wigderson, the winners of the 2008 Levi Conant prize for the best expository article in the AMS Bulletin or Notices. Or you could read up on Joseph Conrad, who grew up in Ukraine, Russia and Poland and only became fluent in English in his twenties. Of course, Conrad was only writing fiction, which must be way easier than writing a “masterful” scientific article. Don’t judge too quickly what others might or might not be capable of. Update: Emmanuel Kowalski points out in comments that a more accurate translation of the original French would be: Writing scientific texts elegantly is something that can probably only be done in one’s native language. Which I still disagree with, but it doesn’t grate like the English version does. I have also removed the sentence that used to be at the top of this post, because I don’t think I would have made good on it. Comments : 7 Comments » Categories : mathematics: general ## Best of the best? 9 01 2009 The Wall Street Journal is really on a roll, reporting on a Jobs Rated study that names “mathematician” as the best job in the U.S.: According to the study, mathematicians fared best in part because they typically work in favorable conditions — indoors and in places free of toxic fumes or noise — unlike those toward the bottom of the list like sewage-plant operator, painter and bricklayer. They also aren’t expected to do any heavy lifting, crawling or crouching — attributes associated with occupations such as firefighter, auto mechanic and plumber. The study also considers pay, which was determined by measuring each job’s median income and growth potential. Mathematicians’ annual income was pegged at \$94,160 [...] The complete ranking of 200 jobs is here, and here is the explanation of the point system on which the ranking was based. The bottom line is, all point systems should be viewed with a healthy degree of skepticism. I want to make it very clear that I’m not whining here. I like my job. But it’s not at all for the reasons just mentioned. Not because we spend a lot of time indoors – which, incidentally, can be quite unpleasant if you have a windowless office (I did as a graduate student). Not because we work in places free of noise, because, actually, we’re periodically exposed to rather a lot of construction-type noise at UBC. Not because we don’t do heavy lifting and crouching. Not because we don’t have to walk to work 6 miles through the snow in running shoes, uphill both ways, either. Much of the data over at Jobs Rated strikes me as completely divorced from reality. Mathematicians, supposedly, work 45 hours per week – in my experience, 50-60 is more realistic. Low stress levels, just because we don’t operate heavy machinery? Who are you kidding? An annual income of 94K sounds like at least an associate professor at a large research university. 
Before you get there, you’ll usually have to spend 4-6 years in graduate school (average income at UBC: 20K per year), 2-6 years in one or more postdoc or temporary positions (minimum salary at UBC is 40K; 50K is considered high), and another 4-7 years as tenure-track assistant professor (starting salaries at UBC are around 70K). You’ll also have to make several long-distance moves. That’s the best case scenario as far as the money is concerned. Many more people end up at smaller schools, with significantly lower salaries and higher teaching loads, or as perennial “seasonal” instructors with no job security. Then there are questions that Jobs Rated did not ask at any particular point: how many of us have a choice of where we want to live if we want to stay in this profession? How many get separated from their spouses or partners for years because they can’t get jobs in the same city or state? Why do we do this, then, instead of going into real estate or something? Because, first and foremost, we are attracted to mathematics. We enjoy learning mathematics and doing research work. We enjoy working with students – admittedly, not all the time, but nonetheless. We like being able to work mostly on our own schedule, even if the flip side is that we might end up working at home well past midnight. We’re competitive and we appreciate a good challenge, be it mathematical or professional. Curiously, Jobs Rated seems to view competitiveness as only a negative feature and a stress factor, but doesn’t understand that boredom can be stressful, too. According to the Jobs Rated standards, the best job in the world would involve sitting in an office for several hours a day, not doing anything in particular, and getting paid well for it. But that’s not what we do. Comments : 2 Comments » Categories : academia, mathematics: general ## Blaming the mathematician 22 12 2008 Paul Wilmott explains a couple of things about estimating probabilities in quantitative finance: You are in the audience at a small, intimate theatre, watching a magic show. The magician hands a pack of cards to a random member of the audience, asks him to check that it’s an ordinary pack, and would he please give it a shuffle. The magician turns to another member of the audience and asks her to name a card at random. “Ace of Hearts,” she says. The magician covers his eyes, reaches out to the pack of cards, and after some fumbling around he pulls out a card. The question to you is what is the probability of the card being the Ace of Hearts? Of course, if a card is chosen at random from an ordinary pack of 52 cards, the probability of it being the Ace of Hearts is 1/52. But is that really correct? What if this is not a math problem, but instead you are indeed watching a real-life magic show in a theatre? Do you really believe that the magician doesn’t know exactly where the Ace of Hearts is? Thus the “real” question is: how likely is it that the magician’s script calls for him to draw the Ace of Hearts? That’s certainly one possibility; but there are others, for instance the magician might pull the card from the pocket of an unsuspecting audience member. A member of wilmott.com didn’t believe me when I said how many people get stuck on the one in 52 answer, and can’t see the 100% answer, never mind the more interesting answers. 
He wrote “I can’t believe anyone (who has a masters/phd anyway) would actually say 1/52, and not consider that this is not…a random pick?” So he asked some of his colleagues the question, and his experience was the same as mine. He wrote “Ok I tried this question in the office (a maths postgraduate dept), the first guy took a fair bit of convincing that it wasn’t 1/52 !, then the next person (a hardcore pure mathematician) declared it an un-interesting problem, once he realised that there was essentially a human element to the problem! Maybe you have a point!” Does that not send shivers down your spine, it does mine. Once you start thinking outside the box of mathematical theories the possibilities are endless. [...] A lot of mathematics is no substitute for a little bit of commonsense and an open mind. I’ll get around to arguing with Wilmott in a moment, but let me first tell you about the number 52. Numbers, of course, are abstract concepts. They don’t have to be associated with counting cards, apples, oranges, or anything else. How, exactly, do we define them in the abstract? Here’s how this was explained to me back when I was an undergraduate math student. We start from the Zermelo-Frankel axioms of set theory, and then proceed as follows. • The Z-F axioms guarantee that the empty set $\emptyset$ exists. We define 0 to be the cardinality of the empty set. • Consider the set $\{\emptyset\}$ whose sole element is the empty set. We define 1 to be the cardinality of this set. • Now consider the set $\{\emptyset,\{\emptyset\}\}$ whose elements are the empty set and the set whose sole element is the empty set. The cardinality of the new set is 2. • Repeat this 50 more times, and you get to 52. Now, we don’t actually go through this procedure every time we have to use an integer number, let alone fractions. The point is, though, that mathematics deals with idealized abstractions and that we tend to be well aware of our limitations as far as real-life problem-solving is concerned. Ask me to solve the differential equation $y'=ky$ and I will tell you, with 100% certainty, that $y=Ce^{kt}$. But is this really the equation that you should be trying to solve? That’s where the mathematician needs to hear from someone who actually understands the context. Pure mathematics, alone, cannot speak on that matter. Wilmott is right to say that it is a problem when mathematics gets to overrule common sense. His diagnosis of the underlying causes, though. gets it exactly backwards. The problem isn’t limited to applications of mathematics, either. Here’s the actor Philip Seymour Hoffman talking about his latest movie Doubt: “What’s so essential about this movie is our desire to be certain about something and say, This is what I believe is right, wrong, black, white. That’s it. To feel confident that you can wake up and live your day and be proud instead of living in what’s really true, which is the whole mess that the world is. The world is hard, and John is saying that being a human on this earth is a complicated, messy thing.” Hoffman paused again. “And I, personally, am uncomfortable with that messiness, just as I acknowledge its absolute necessity. “ And that’s the real “human element” at work. Uncertainty and doubt have been a part of the human condition from time immemorial, but so has our discomfort with them, our struggle against them. We want security and certainty. We long to be reassured – by religion, medicine, mathematics. 
We want to be told what the future will bring and we want a 100% refund in the unlikely event that the prediction fails. There’s a sense of security in having a formula that lets you make predictions. You get to print nice glossy brochures with charts, graphs and tables, citing scientific publications in top journals. The formula, though, is only as good as the assumptions that went into setting it up. If those are true – every single one of them – then the mathematically predicted outcome is inevitable. Just like we would like it to be. We don’t want to read the fine print. In practice, there’s usually at least one unspoken assumption that fails to hold, namely that the system in question is isolated and there are no more variables to be taken into account. Sometimes it’s reasonable to consider the system as if it were isolated. Other times, it’s not. How do we know? Maybe, really, we don’t. We don’t like to worry about it, though. We prefer to accept the mathematical solution to the easier version of the problem. I’ve seen it in every calculus class I’ve taught. The “word problems” are grossly simplified versions of real-life situations, simplified so that the problem can be solved using first or second-year calculus. Most of them are worded so as to make it clear which formula should be used, and if that’s not obvious right away, it will be after several repetitions in class and on the homework. When I try to ask the students to consider what unspoken assumptions are being made or in what data range the solution will no longer be correct, it tends to fall on deaf ears, because that’s not a part of the problem, is it? Open-ended problems – find a good approximation for something or other, using your common sense to determine what’s “good” – won’t get me far, either, because how exactly am I going to grade it on a test? Mathematics is not the enemy of common sense. Intellectual laziness is. Oh, and those arrogant mathematicians from Wilmott’s story who didn’t “get it”? Sounds too much like a ratemyprofessor complaint. “The first guy” might have been told that this was “a math problem” in a way that suggested strongly that the human factor should be disregarded, as it always is in those calculus problems I’ve just mentioned. The “hardcore pure mathematician” may have been immersed in his work, as we often are, and did not appreciate the interruption. Or he may have suspected one of those “mathematician jokes” that paint the mathematician as a real-life idiot and make the layperson feel oh-so-glad that he’d never learned algebra. Sure, there are mathematicians who live in an ivory tower. There are others who don’t. Consider all scenarios and do not make unwarranted assumptions. Comments : Comments Off Categories : mathematics: general ## The Monty Hall problem 12 08 2008 On a plane flight back from a recent trip, I watched the movie 21. The plot, advertised as “based on a true story”, is roughly as follows. (In case you have not seen the movie and would like to, I will try to avoid major spoilers.) Ben Campbell, an idealistic and somewhat naive MIT student, impresses a math professor (Kevin Spacey) by answering correctly a couple of tricky questions in class. Soon afterwards, the professor asks Ben to join his card-counting blackjack team in return for a share of the profits. 
The team travels to Las Vegas on weekends, plays blackjack at major casinos, and wins millions of dollars by placing themselves strategically at the right tables and employing the card-counting techniques taught by the professor. Ben refuses at first (“and if you tell anybody, I’ll make sure that you won’t graduate”), but there’s no other way that he can pay for his dream med school, and he’s attracted to a female student on the team, and besides, if he didn’t join the team, there would be no movie, so guess what happens. That’s about the first quarter of the movie, and I’ll leave it there, because this is already enough to raise serious questions about just how close to a “true story” we are here. My first question was, has there really been an MIT math professor who made a fortune off a team of student card players? Wouldn’t that be serious professional misconduct, and would an MIT professor (not a bad job) really take this sort of risk? As it turns out, the movie is somewhat loosely based on the adventures of a real-life MIT card counting team (one of several that MIT has had over the years). However, the teams were all entirely composed of, and run by, the students. There were no professors involved, and the Spacey character is completely fictional. Which also preempted my follow-up question. There’s a classroom scene where the Spacey character asks his students what applications of Newton’s method they know. A student suggests, “Nonlinear equations?…” Spacey responds along the lines of “Yeah, that’s very clever, because this course is called Nonlinear Equations. Why don’t you tell me something I don’t already know.” My impression was that Spacey’s demeanor, and this exchange in particular, was a little bit too snarky. A professor is not supposed to do that if he (or, especially, she) wants to get good student evaluations. The user comments on IMDB include several reviews by authors who appear to be well familiar with casinos, blackjack and card counting, and are not entirely happy with the treatment of the subject. I’ve never played blackjack, or been to a casino, but their criticism makes sense to me. But here’s the reason why I’m writing this post. How exactly does Ben manage to impress Spacey’s character? Spacey asks him the following question: You’re on a game show. The host asks you to choose one of three doors. Behind one of them there’s a car, behind each of the other two there’s a goat. You pick one door, say Door 1. The host, who knows where the car is and is not allowed to reveal that information, opens Door 3, behind which there is a goat. He then asks you if you want to choose Door 2 instead of 1. Is it to your advantage to switch? In case you’d like to think about it, the rest of post is behind the cut. Read the rest of this entry » Comments : 1 Comment » Categories : mathematics: general, movies
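The solution itself stays behind the cut, but readers who want to test their intuition first can simulate the game. The following is a quick illustrative sketch (an addition of mine, in Python, with arbitrary function names; it is not part of the original post): it plays the game many times with and without switching.

````
import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host, who knows where the car is, opens a goat door different from the pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def estimate(switch, trials=100_000):
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

print("stay:  ", estimate(False))   # settles near 1/3
print("switch:", estimate(True))    # settles near 2/3
````

With enough trials the two frequencies settle near 1/3 and 2/3, which is the whole point of the puzzle.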
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 19, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582167863845825, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/88750?sort=votes
## functions satisfying “one-one iff onto”

Hello everybody. I need some more examples of the following really interesting phenomenon:

````
A function from the class ... is one-one iff it is onto.
````

Some examples I know:

1) Finite set case: a function from $\lbrace 1,2,\dots,n\rbrace$ to itself is one-one iff onto.

2) A linear operator $T\colon V\rightarrow V$, where $V$ is a finite-dimensional vector space, is also one-one iff onto.

3) Linear operators of the form (I-K), where K is some compact operator acting on a Banach space, satisfy this property. This is the famous result of Fredholm.

It is very easy to find domains where the result fails. I remember my teacher telling me that 'compactness is the next best thing to finiteness', hence this result, which trivially holds in the finite case, can happen only in the compact setting. I would like to know if this is really the case, or whether there are other examples. Thank you in advance.

EDIT: Looking at some answers, I thought it is better if the scope of the question is broadened. Does injection (surjection) imply surjection (injection) and isomorphism/isometry? (i.e., by assuming one-one, can I get ontoness and structure-preserving properties for free?)

- 1 I heard similar things about compactness being the next best thing after finiteness. I just never understood what is so great about finiteness... – Asaf Karagila Mar 30 2012 at 19:20

## 16 Answers

- Thanks. Very closely related link. The conclusion of the author that we need three different proofs for the same phenomenon is also interesting to ponder over. – Uday Feb 17 2012 at 20:56
1 Thanks, Moshe. There was a follow-up post (golem.ph.utexas.edu/category/2011/12/…) unifying some but not all aspects of the three situations. In particular, there's a unified proof that if (injective implies iso) then (surjective implies iso). – Tom Leinster Feb 17 2012 at 22:07
1 I've expanded on that last fact in my answer on this page. – Tom Leinster Feb 17 2012 at 22:28
In the case of a compact space, it seems that the eventual image is called the $\omega$-*limit set*. – Denis Serre Feb 18 2012 at 9:08

If $A$ is a noetherian ring, then a ring homomorphism $f: A \to A$ is surjective iff it is an isomorphism. (see the accepted answer of this question)

- 1 The proof for the (I-K) case, where K is a compact operator (stated in the question), is similar! The nesting is on kernels. – Uday Feb 17 2012 at 20:24

Uday asked about the double implication "one-to-one iff onto". Here's an observation about the relationship between the two different directions of implication, taken from here. Let $\mathbf{M}$ be a symmetric monoidal closed category. (This is overkill, but I won't attempt to be more precise.) Suppose we have in $\mathbf{M}$ two distinguished classes of map, called the "injections" and the "surjections", with the following two properties: (i) if $s\colon X \to Y$ is a surjection then for all $Z$, the induced map $$s^*\colon \mathbf{Hom}(Y, Z) \to \mathbf{Hom}(X, Z)$$ is an injection, and (ii) any surjection with a left inverse is an isomorphism. Suppose, finally, that every injective endomorphism in $\mathbf{M}$ is an isomorphism. Then every surjective endomorphism in $\mathbf{M}$ is an isomorphism. 
Proof: let $s\colon X \to X$ be a surjective endomorphism. Then `$s^*\colon \mathbf{Hom}(X, X) \to \mathbf{Hom}(X, X)$` is an injective endomorphism, and therefore an isomorphism. It follows that there exists $t\colon X \to X$ such that `$s^*(t) = 1_X$`, that is, $t\circ s = 1_X$. But then $s$ is a surjection with a left inverse, so $s$ is an isomorphism. The hypotheses hold in the category of finite sets, the category of finite-dimensional vector spaces, and the category of compact metric spaces. In the last case, the maps are the distance-decreasing maps (in the weak sense), "injection" should be interpreted as "isometry into", and "surjection" has its usual meaning. In particular, if you already know that every isometry from a compact metric space into itself is surjective, this lets you deduce that every distance-decreasing surjection from a compact metric space to itself is an isometry. - @Tom Leinster I liked the way duality of injection and surjection is exploited to bring about a unification. A very good observation indeed! Thank you. (To be frank, I have not understood the observation completely.) – Uday Feb 17 2012 at 22:51 "Symmetric monoidal closed category" just means that you have a collection of objects (e.g. finite-dimensional vector spaces), some maps between them (e.g. linear maps), a way of taking the "product" of two objects (e.g. the tensor product), and a way of taking the "space of maps" between two objects (e.g. the vector space Hom(V, W) of linear maps from V to W). There are some axioms and details to fill in, but that's the idea. – Tom Leinster Feb 17 2012 at 23:42 Although I tend to shy away from list questions, this is more fun to contemplate than the pile of exams on my desk right now. So take a compact Riemann surface $X$ with genus $g\ge 2$. A nonconstant holomorphic self map $f:X\to X$ is necessarily an isomorphism. Proof: Surjectivity is automatic by, for example, the open mapping theorem. If $f$ were not injective, then it would have a degree $d>1$. But the Riemann-Hurwitz formula would give $g-1\ge d(g-1)$ which is impossible. - This cannot be. I think that in your application of Riemann-Hurwitz you are assuming that your map $f$ is everywhere unramified, which is a quite strong assumption! – Tommaso Centeleghe Feb 17 2012 at 22:14 RH says $2g-2 = d(2g-2)+\sum_p (e_p-1)$ which gives the inequality I stated. Tell me which part do you not understand? – Donu Arapura Feb 17 2012 at 22:29 oooops.. you are right. I misread what you wrote, I thought you were considering $f:X\rightarrow Y$..good! – Tommaso Centeleghe Feb 17 2012 at 22:49 1 OK, no harm done. Let me edit to make clearer. – Donu Arapura Feb 17 2012 at 22:51 Is there no way we can come out of compactness assumption? – Uday Feb 17 2012 at 23:01 show 2 more comments (edited slightly) A one to one continuous map between two compact $n$-dimensional manifolds without boundary having equal numbers of components must be onto, and in fact a homeomorphism. The image of any connected component must be connected, open (by invariance of domain), and closed (by compactness), and therefore must be a component. - Not quite, for the last point. It should be 'induces an iso on pi_0', else one can take the obvious non-onto map $S^2 \coprod S^2$ to itself. – David Roberts Mar 31 2012 at 0:51 @David: That would not be one-to-one. – S. Carnahan♦ Mar 31 2012 at 3:07 arg, of course. I'll delete my comment when I'm at a computer. 
– David Roberts Mar 31 2012 at 4:54 A surjective endomorphism of a finitely generated residually finite group is an isomorphism. - 1 This is true if the group is finitely generated, but it's false in general (e.g., free groups of infinite rank). – Stephen S Feb 17 2012 at 22:11 1 Sorry, in my brain all groups are fg. Will fix. – Benjamin Steinberg Feb 17 2012 at 22:43 Let $G$ be a sofic group (see this survey article of Pestov for the definition and various results) - all amenable groups are sofic, as are all free groups, and no groups are known to be non-sofic. Let `$X=\{0,1\}^G$` with the product topology and let $f: X \to X$ be a continuous function which is also a right $G$-map, here $G$ acts on $X$ by shifts. Then if $f$ is injective, it is automatically surjective - this is Gromov's partial solution to the Gottschalk surjunctivity conjecture, which I think is also mentioned in Pestov's article. Now this is not what your question asked for, but if we now look at $C(X)$ and the induced algebra homomorphism $f^* : C(X) \to C(X)$, then `$f^*$` surjective `$\iff$` $f$ is injective (Tietze/Urysohn) `$\iff$` $f$ is bijective (above) `$\iff$` $f^*$ bijective (Tietze/Urysohn) - This is a good one. Isn't Gromov's partial solution an extension of Ax-Grothendieck to some sort of "limit varieties"? – Jon Bannon Feb 17 2012 at 23:45 Jon, I must confess that I don't know how Gromov's proof works, but given that one can define and investigate sofic groups in terms of embeddings into suitable ultrapowers, your suggestion sounds very plausible. – Yemon Choi Feb 18 2012 at 3:31 Let $R$ be a right perfect ring. Then an endomorphism on a finitely presented left $R$-module is injective if and only if it is surjective: reference. - If $A$ is a ring and $M$ is a finitely generated $A$-module, then an $A$-module endomorphism $f:M\rightarrow M$ is surjective iff it is an isomorphism. - Thanks. This statement is a direct generalization of finite dimensional vector spaces. May be we can extend the results to infinite dimensional modules and compact operators defined on them. – Uday Feb 17 2012 at 20:44 Is this true if the ring doesn't satisfy the invariant basis number property: en.wikipedia.org/wiki/… Or did you mean to consider only (say) commutative rings? (I am interested in your example; do you have a reference?) – Todd Trimble Feb 17 2012 at 20:53 @Todd Trimble I doubt this may be true in general. In vector space setting, we will need to have a condition of dimension of the range and kernel to be equal. In generalizing to module, this condition may demand IBN. – Uday Feb 17 2012 at 21:11 1 It's obviously false if IBN fails, but it is true over commutative rings. It's a consequence of (an appropriate form of) Nakayama's lemma. I think you can find it in Atiyah & MacDonald. I have no idea about what happens if commutativity fails but IBN still holds. – Harry Altman Feb 17 2012 at 21:16 1 @Harry: I should have been more clear that my first question was rhetorical. Thanks for the reference! – Todd Trimble Feb 17 2012 at 21:53 show 2 more comments Let $G$ be a discrete group, and let $T:\ell^2(G)\to \ell^2(G)$ be a bounded linear operator which commutes with all right translations: that is, if $\xi\in \ell^2(G)$ and $g\in G$ then $T(\xi\cdot g) =T(\xi)\cdot g$. (In other words, $T$ belongs to the group von Neumann algebra.) Then if $T$ is surjective, it is invertible. This follows from combining a result of Kaplansky with the fact that $C^\ast$-algebras are inverse-closed in containing $C^\ast$-algebras. 
My own feeling is that the result is tacit folklore but in any case it follows by duality from Theorem 3.2 in this paper. - Thanks. Can we prove that this class has a non-compact operator? – Uday Feb 17 2012 at 21:56 A II_1 von Neumann algebra has no non-zero compact operators (no minimal projections). So yes, this is different from the Fredholm examples. – Yemon Choi Feb 17 2012 at 22:04 In fact, by considering polar decomposition this says that co-isometries are always isometries, which is the defining property of finite von Neumann algebras. – Jesse Peterson Feb 18 2012 at 2:53 Jesse: that's a snappy way to put it, and that's how Kaplansky implicitly puts it. I guess I like to follow in Dixmier's wake by thinking of finite von Neumann algebras as those that have a separating family of finite normal traces. This seems to generalize better to situations where VN(G) is not finite but where the unitization of $C_r^\ast(G)$ has the "left-invertible imples invertible" property, see my older MO question mathoverflow.net/questions/78948/… – Yemon Choi Feb 18 2012 at 3:30 If $f : S \to S$ is volume preserving and $S \subseteq \mathbb{R}^n$ has finite volume, then f is injective iff f is surjective. There is a very elegant proof of Koebe–Andreev–Thurston theorem (given in the book on combinatorial geometry by Agarwal and Pach) using this property. - Erik, I couldn't understand your first sentence, at the grammatical level. Could you rephrase? – Tom Leinster Feb 17 2012 at 22:12 @Tom : oops, I've fixed it now. (The intention was to answer the question "under what circumstances is f injective iff f is surjective", but I just realized the question was not given like that.) – Erik Aas Feb 18 2012 at 9:06 Got it! Thanks. – Tom Leinster Feb 18 2012 at 15:23 Actually, I don't get it. Take n = 1 and S = [0, 1]. Take the map f: S --> S defined by f(x) = 2x (mod 1). Then f is measure-preserving (wrt the usual measure) and surjective, but not injective. Maybe you mean something different by "volume-preserving"...? I had a look in Agarwal and Pach's book, but couldn't find a clean statement of the result to which you're alluding. – Tom Leinster Feb 20 2012 at 11:40 Does $f$ have to be continuous, or something like that? Otherwise, the result seems to be trivially false because you can mess about with the map on a set of measure zero. – gowers Mar 31 2012 at 14:04 An isometry (i.e. distance preserving map) between metric spaces is automatically injective. - How does this address the original question? – Yemon Choi Feb 17 2012 at 22:18 1 Much more interestingly, a distance-decreasing function between compact metric spaces is a surjection iff it is an isometry. See the discussion in the thread linked to by Moshe. – Todd Trimble Feb 17 2012 at 23:21 @Yemon Choi: it doesn't really relate to the question. sorry for this disattention. and thanks for warning. – Hans Feb 18 2012 at 13:00 This addresses the "broader scope" of the question and possibly the comments of Uday on Donu's answer: An injective morphism from an affine algebraic variety over an algebraically closed field to itself is also surjective. Moreover, probably even more surprising is the fact that in the case that the field has characteristic zero (and of course algebraically closed), an injective endomorphism is actually a polynomial automorphism (that is the inverse is also a polynomial map!). See e.g. Chapter 4 of van den Essen's "Polynomial Automorphisms" for proofs of both these statements. 
Also from the same book: the map $x \mapsto x^3$ from $\mathbb{Q} \to \mathbb{Q}$ shows the necessity of algebraic closedness of the field, and the Frobenius automorphism $x \mapsto x^p$ of an algebraically closed field of characteristic $p > 0$ shows that the second statement is false for positive characteristic. Also, note that both statements are automatically true for proper varieties. - The multiplication maps of modules over an Artin Ring have this property. Artin rings similarly generalize both finite sets and vectors spaces. - If $G \subset \mathbb{C}^n$ is a domain and $f: G \mapsto \mathbb{C}^n$ is an injective holomorphic mapping, then also $f(G)$ is a domain and $f: G \mapsto f(G)$ is biholomorphic. This follows from the fact that, under the same assumptions, if $f$ is injective, then det $J_f(z) \ne 0$ for every $z \in G$. ($J_f$ denotes the complex jacobian.) - The Dixmier conjecture involves an even stronger statement. Let $A_n$ be the Weyl algebra, which is the algebra of polynomial differential operators on $\mathbb C[x_1,\ldots,x_n]$. The conjecture is that any algebra map $f:A_n \to A_n$ is an isomorphism. (Of course, it isn't hard to see that $A_n$ has no two-sided ideals, so any such $f$ is automatically injective.) It was recently proved that the Dixmier conjecture is stably equivalent to the Jacobian conjecture, in the sense that if one conjecture is true for all $n$, then so is the other. (References are given in the wikipedia page.) -
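As a footnote to example 1) of the question, the finite-set case can be checked mechanically for small n. Here is a brute-force sketch (an illustration of mine, not taken from any of the answers above; Python, enumerating every self-map of {0, ..., n-1}):

````
from itertools import product

def is_injective(f):
    return len(set(f)) == len(f)

def is_surjective(f, n):
    return set(f) == set(range(n))

for n in range(1, 6):
    # every self-map of {0, ..., n-1}, encoded as the tuple (f(0), ..., f(n-1))
    for f in product(range(n), repeat=n):
        assert is_injective(f) == is_surjective(f, n)

print("injective <=> surjective checked for all self-maps, n = 1..5")
````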
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301475882530212, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/96246/is-this-the-notation-you-use?answertab=oldest
# Is this the notation you use? I've noticed that my terminology is a bit haggard. I do math on my own so I'm not entirely sure how everyone else refers to things and so I need a check. so is this correct: $\lim\limits_{\delta x \to 0}\frac{\delta y}{\delta x} = \frac{dy}{dx}$ Where say, $\delta y$ is the change in distance and $\delta x$ is the change in time and as ${\delta x}$ approaches zero, the whole thing approaches the derivative $\frac{dy}{dx}$. Would this be the correct notation and, while I'm here, is there a quick reference somewhere online for MathJax notation? Also, is $\frac{\delta y}{\delta x} \equiv \frac{\Delta y}{\Delta x}$, or does each delta mean something different? Is there a convention here? - The way most people do things, $dx$ isn't a number, so you're going to have to be more precise about what you mean. – Qiaochu Yuan Jan 4 '12 at 1:51 A d with its top curled to the right isn't really a d but the Greek lowercase letter delta: $\delta$ (`\delta`). Is that what you mean? – Rahul Narain Jan 4 '12 at 1:57 @Rahul, thanks. That's what I was looking for. I've edited my question. Is this the way you would explain the limit of the changes as dx approaches zero? Some math books I've read really try to emphasise this distinction. – Korgan Rivera Jan 4 '12 at 2:02 @KorganRivera: Were you looking for $\delta$ (delta) or for $\partial$ (sometimes called "del", `\partial`). – Arturo Magidin Jan 4 '12 at 4:11 – J. M. Jan 4 '12 at 9:17 ## 2 Answers By definition, the derivative of $y=f(x)$ is $y'=f'(x)=\frac{dy}{dx}=\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$ where $f(x+h)-f(x)$ is the change in $y$ (traditionally denoted $\Delta y$) and $h$ is the change in $x$ (traditionally denoted $\Delta x$). So it's ok to write $\lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x} = \frac{dy}{dx}$. "Everyone" will know what you mean. For the in's and out's of treating $dy/dx$ like a fraction see this question [Edit: Oops! Wrong link! Fixed]. Now as for $\delta y$ and $\delta x$ (lower-case delta: $\delta$ vs. upper-case delta: $\Delta$)...this usually has a different meaning. Check out: Functional Derivative - Thanks Bill, that's what I thought. I've seen $\delta$ and $\Delta$ used interchangeably in some textbooks. But you're saying that the $\delta$ is used when differentiating functions with respect to other functions? – Korgan Rivera Jan 4 '12 at 2:32 As with any notation, different people use it differently. But...usually $\delta$ gets used in variational calculus. If you see it used elsewhere, be careful. It's not standard. – Bill Cook Jan 4 '12 at 2:37 Oh, and more or less "Yes" to your question. :) – Bill Cook Jan 4 '12 at 2:38 Warning: This is an editorial! This is a taste issue. I prefer $$f'(x) = \lim_{h\to 0} {f(x + h) - f(x)\over h} = \lim_{t\to x} {f(t) - f(x)\over t - x}.$$ Either says: the limit of the slopes of the secant lines is the slope of the tangent line. I have never liked the "false fraction" of $dy/dx$. I prefer to think that $$f(x + h) = f(x) + f'(x)h + o(h).$$ In fact, this last form gives the definition of the derivative that abstracts to many dimensions and to the derivative behaving as linear transformation. -
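Whatever symbols one prefers, the quotient of finite changes really does settle down to the derivative as the change in x shrinks. Here is a tiny numerical illustration (a sketch of my own in Python; the sample function and step sizes are arbitrary), using f(x) = x^2 at x = 1, where f'(1) = 2:

````
f = lambda x: x**2          # a sample function with known derivative f'(x) = 2x

x = 1.0
for dx in [1e-1, 1e-2, 1e-3, 1e-4]:
    dy = f(x + dx) - f(x)
    print(f"dx = {dx:g}:  dy/dx = {dy / dx:.6f}")   # approaches f'(1) = 2
````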
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9424812197685242, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/192252-finding-value-make-two-graphs-tangent.html
# Thread:

1. ## Finding value to make two graphs tangent

Find a negative value of k so that the graph of y = x^2 - 2x + 7 and the graph of y = kx + 5 are tangent. It says the answer is -2-2sqt2. I graphed this and it didn't look tangent to me. Thanks.

2. ## Re: Finding value to make two graphs tangent

It does look tangent. Hint: consider the difference of the two functions and find when it has a single root.

3. ## Re: Finding value to make two graphs tangent

I am putting it in the calculator wrongly. When I evaluated k and then put it in the calculator, it looked tangent. This is how I am putting it in: "(-2-2sqt2)x+5"

4. ## Re: Finding value to make two graphs tangent

Originally Posted by benny92000
I am putting it in the calculator wrongly. When I evaluated k and then put it in the calculator, it looked tangent. This is how I am putting it in: "(-2-2sqt2)x+5"

You need the line and the parabola to intersect at a single point. Therefore you need the discriminant of $x^2 - 2x + 7 - (kx + 5) = 0$ to equal zero ....

5. ## Re: Finding value to make two graphs tangent

Ok, so I got x^2 + (-2-k)x + 2 = 0. Ultimately I got k^2 + 4k - 4 = 0. I used the quadratic formula and came up with the correct answer. So theoretically, when you subtract the two equations, the solution(s) of that equation is the intersection(s) of both equations? And with respect to the calculator, did I plug it in wrongly? Why did they not appear to intersect?

6. ## Re: Finding value to make two graphs tangent

Originally Posted by benny92000
So theoretically, when you subtract the two equations, the solution(s) of that equation is the intersection(s) of both equations?

I would be careful with terminology. First, one can only talk about the intersection of graphs (which are geometrical figures), not equations. Second, we can draw all points satisfying two equations $f_1(x,y)=0$ and $f_2(x,y)=0$, but this would not be the set of points satisfying $f_1(x,y)-f_2(x,y)=0$. For example, points satisfying 2x + 3y = 0 and x + y = 0 are lines that intersect at just one point (0, 0), but points satisfying x + 2y = 0 fill a whole line. What we have in the original problem are equations $y = x^2 - 2x + 7$ and $y = kx + 5$, to be sure, but not arbitrary equations. Rather, they are definitions of functions with only y in the left-hand side. So, the correct statement is: if you have functions $y=f_1(x)$ and $y=f_2(x)$, then the solutions of the equation $f_1(x)-f_2(x)=0$ give the (x-coordinates of the) intersection points of the graphs of the two functions.

Originally Posted by benny92000
And with respect to the calculator, did I plug it in wrongly? Why did they not appear to intersect?

I don't have a graphing calculator, so I am not sure. There are many possible reasons. E.g., you may need to adjust the window size, write "2*sqt(2)" instead of "2sqt2," etc. Try using 1.41 instead of sqt2.
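To double-check the algebra in this thread numerically, here is a short sketch of my own (Python with SymPy; it is not from the original posters, and the variable names are arbitrary). Setting the discriminant of x^2 - 2x + 7 - (kx + 5) to zero gives k = -2 ± 2*sqrt(2), and with the negative root the quadratic has a double root, i.e. the line really is tangent:

````
import sympy as sp

x, k = sp.symbols('x k', real=True)
quadratic = sp.expand(x**2 - 2*x + 7 - (k*x + 5))    # x**2 - (2 + k)*x + 2
disc = sp.discriminant(quadratic, x)                  # k**2 + 4*k - 4, i.e. (k + 2)**2 - 8
print(sp.solve(sp.Eq(disc, 0), k))                    # the two roots -2 - 2*sqrt(2) and -2 + 2*sqrt(2)

k_neg = -2 - 2*sp.sqrt(2)                             # the negative root quoted in the thread
print(sp.roots(quadratic.subs(k, k_neg), x))          # {-sqrt(2): 2}: a double root, so the line is tangent
````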
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935209333896637, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/51171/list
## Return to Question

# Splitting infinite sets

There are two questions here, an explicit one, and another (more vague) one that motivates it: I am pretty certain the following should have a negative answer, but at the moment I'm not seeing how to argue about this and cannot locate an appropriate reference.

In set theory without choice, suppose $X$ is an infinite set such that for every positive integer $n$, we can split $X$ into $n$ (disjoint) infinite sets. Does it follow that $X$ can be split into infinitely many infinite sets? What would be a reasonably weak additional assumption to ensure the conclusion? ("Reasonably weak" would ideally be something that by itself does not suffice to give us that $X$ admits such a splitting, but I am flexible.)

This was motivated by a question at Math.SE, namely whether an infinite set can be partitioned into infinitely many infinite sets. This is of course trivial with choice. In fact, all we need to split $X$ is that it can be mapped surjectively onto ${\mathbb N}$. However, without choice there may be counterexamples: A set $X$ is amorphous iff any subset of $X$ is either finite or else its complement in $X$ is finite. It is consistent that there are infinite amorphous sets. If $X$ is infinite and a finite union of amorphous sets, then $X$ is a counterexample. The question is a baby step towards trying to understand the nature of other counterexamples.

Note that any counterexample must be an infinite Dedekind finite (iDf) set $X$. One can show that for any iDf $X$, ${\mathcal P}^2(X)$ is Dedekind infinite. For any $Y$, if ${\mathcal P}(Y)$ is Dedekind infinite, then $Y$ can be mapped onto $\omega$ (this is a result of Kuratowski; it appears in pages 94, 95 of Alfred Tarski, "Sur les ensembles finis", Fundamenta Mathematicae 6 (1924), 45–95). As mentioned above, our counterexample $X$ cannot be mapped onto $\omega$, so ${\mathcal P}(X)$ must also be an iDf set.

The second, more vague, question asks what additional conditions a counterexample should satisfy.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345795512199402, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/182308-random-variable-convergence-print.html
# random variable convergence

• June 3rd 2011, 08:25 AM
waytogo

Hello, my question is about one point in the proposition saying that almost sure convergence of random variables implies convergence in probability. A sequence $(X_n), n\in \mathbb{N}$ of random variables is said to converge almost surely to the random variable $X$ if $P(\omega \mid X_n\to X,\ n\to \infty)=1$, which is equivalent to
$\forall \varepsilon\ \forall \delta\ \exists N : P(\bigcap_{n=N}^\infty [\omega: |X_n - X| < \varepsilon])\geq 1-\delta$.
In the proof of this proposition (in the book by John B. Thomas) it is said that
$P(\bigcap_{n=N}^\infty [\omega: |X_n - X| < \varepsilon])\geq 1-\delta$
is equivalent to
$P(\bigcap_{n=N}^\infty [\omega: |X_n - X| \geq \varepsilon]) < \delta$.
However, using De Morgan's law and the properties of a probability measure, I get that
$P(\bigcap_{n=N}^\infty [\omega: |X_n - X| < \varepsilon])\geq 1-\delta$
is equivalent to
$P(\bigcup_{n=N}^\infty [\omega: |X_n - X| \geq \varepsilon]) < \delta$,
which is an essentially different statement. Can anybody comment on this?

• June 3rd 2011, 11:19 AM
theodds

It looks like a typo to me. For me, the easiest thing to remember is that $X_n \to X$ almost surely if and only if for every $\epsilon > 0$ we have $P([\omega: |X_n - X| \ge \epsilon \text{ for infinitely many } n]) = P(\limsup\, [\omega: |X_n -X| \ge \epsilon]) = 0$ (the first equality being by definition). Then the result follows from $\limsup P([\omega: |X_n - X| \ge \epsilon]) \le P(\limsup\, [\omega: |X_n - X| \ge \epsilon])$.
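The De Morgan point raised in the question can also be checked numerically. Below is a small Monte Carlo sketch of my own (Python; the toy sequence X_n = X + Z/n and all parameter values are an illustrative choice, not from the thread). It estimates the probability of the intersection of the events [|X_n - X| < ε] over a finite stretch of indices, and the probability of the union of the complementary events [|X_n - X| ≥ ε], and confirms that the two add up to 1: the complement of the intersection is a union, not another intersection.

````
import numpy as np

rng = np.random.default_rng(0)
eps, N, M, trials = 0.1, 10, 100, 50_000

Z = rng.standard_normal(trials)            # toy sequence: X_n - X = Z / n
n = np.arange(N, M + 1)
dev = np.abs(Z)[:, None] / n               # |X_n - X| for n = N, ..., M (one row per sample)

all_small = (dev < eps).all(axis=1)        # omega lies in the intersection of [|X_n - X| < eps]
any_large = (dev >= eps).any(axis=1)       # omega lies in the union of [|X_n - X| >= eps]

print(all_small.mean(), any_large.mean())  # estimated probabilities of the two events
print(all_small.mean() + any_large.mean()) # they sum to 1, as De Morgan's law requires
````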
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287875294685364, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/20334/numerical-integration-of-a-numeric-data-available-as-a-nested-list/20343
# Numerical integration of a numeric data available as a nested list I have some numerical data in the form of a list with the following structure: {...{x,y,z},...} defining a surface z=z(x,y) in a 3D space (x,y,z). The data came from a simulation, and I am post-processing it within Mathematica. The precise numbers entering this list are not very important. One can think of this as of a small simplified example: ```` lst = RandomReal[{-2, 2}, {200, 3}] /. {x_, y_, z_} -> {x, y, 1.5 Exp[-(x^2 + y^2) - x] + z/10}; ```` However, my question is more broad than that. I am aware that there is a Mathematica package enabling one to apply derivatives to such numerical data in order to, say, numerically calculate gradients of z. My question is, if there are some standard Mathematica approaches to numerically calculate integrals either of z=z(x,y), or of some function of z: f[z(x,y)] over some domain in the (x,y) plane. Let me express my interest more clearly, I would be grateful for any proposal from you, but my primary interest is if a standard function or a standard package for this purpose exists. - I thank very much for the answers. I will work them through during next few days. It is difficult to decide which answer is closer to my aim. – Alexei Boulbitch Feb 28 at 13:23 There is, however, one quite clear outcome: there is no such built-in and ready to use Mathematica function making the job. Ok, good to know. – Alexei Boulbitch Feb 28 at 13:24 Probably it is already a technical question. In the data that I have in mind the area in the (x,y) plane is very large. Probably in such a case a good idea would be to split it into parts and make one of the procedures proposed in your answers for each part? In general, are there any restrictions on the size of the (x,y) domain? – Alexei Boulbitch Feb 28 at 13:37 ## 4 Answers You can install the Obtuse package for unstructured grid interpolation from here. Then interpolating your given data as a function `func` is simple. Interpolation Part: ````Off[General::compat]; Needs["Obtuse`"]; On[General::compat]; lst=RandomReal[{-2,2},{200,3}]; cpts=lst[[All,1;;2]]; cptab=lst/.{x_,y_,z_}->{{x,y},1.5 Exp[-(x^2+y^2)-x]+z/10}; func=Interpolation[cptab,Method->"Delaunay"]; (*Fartest x,y coordinates in data *) {xBD, yBD} = MapThread[Prepend, {{Min[#], Max[#]} & /@ Transpose[cpts], {x, y}}] ```` {{x, -1.97466, 1.99827}, {y, -1.97415, 1.98924}} So the plot.. I have chosen the Delaunay interpolation method based on convex hull. You can see the convex-hull in transparent white. Integration Part: Lets try to integrate this function. Take help of the `Boole` function trick from @Mark McClure. ````Graphics`Mesh`MeshInit[]; lst2D = Most /@ lst; poly = lst2D[[ConvexHull[lst2D]]]; InsideTest[x_?NumericQ, y_?NumericQ] := Boole[InPolygonQ[poly, {x, y}]]; test[x_?NumericQ, y_?NumericQ] := InsideTest[x, y]*func[{x, y}]; NIntegrate[test[x, y], Evaluate@xBD, Evaluate@yBD, Method -> {"AdaptiveQuasiMonteCarlo", "SymbolicProcessing" -> 0}, AccuracyGoal -> 3] ```` 5.89323 Code for visualization: Mainly copy-paste from Obtuse documentation! 
````Row@{Show[ ContourPlot[func[{x, y}], Evaluate@xBD, Evaluate@yBD, PlotPoints -> 100, PerformanceGoal -> "Speed", ColorFunction -> "Rainbow", PlotRange -> Full, ImageSize -> {400, 400}], ListPlot[cpts, PlotStyle -> {{Black, PointSize[Medium]}}], ListPlot[ Append[cpts[[ConvexHull[cpts]]], cpts[[ConvexHull[cpts]]][[1]]], Joined -> True, PlotStyle -> Directive[White, Thickness[.011], Opacity[.5]]], ImageSize -> 500], Show[Plot3D[Evaluate@func[{x, y}], Evaluate@xBD, Evaluate@yBD, ColorFunction -> "Rainbow", PlotRange -> Full], ListPointPlot3D[Flatten /@ cptab, PlotStyle -> {{PointSize[Large], Black}}, Boxed -> False, Axes -> None], ImageSize -> 500]} ```` - – gpap Feb 27 at 16:23 Thank you. It is very interesting. I will try the method. There is however, the question concerning the domain size. Please have a look right below my question above. – Alexei Boulbitch Feb 28 at 13:42 I look forward to reading other answers to your question because I would like to know more myself. When I faced the same problem, I created an interpolation function over an $(x,y)$-grid. This will effectively mean that your data will have been converted to a continuous (and, depending on the options you use in interpolation, even smooth) function $z(x,y)$ in the $(x,y)$ domain which you can integrate, differentiate, use as input to other functions etc. The catch is that Interpolation in higher dimensions works only in regular grids. The problem of having non-regular grids has been addressed several times here: i.e. see interpolation of 3D data, Improved interpolation of mostly-structured 3d data or Interpolation of mostly-structured 3D data but it is mostly treated on a case by case basis. I would assume a higher order Interpolation function doesn't work on non-regular grids BECAUSE the non-regularity of a grid affects the approach one should be taking and I can't see how to automate this. In any case, if your datasets are of the general form you have posted (i.e. come from a Poisson-sprinkling), there is a few functions in Haneberg's book (if you have access to a scientific library) that do this thing quite well. I can't post the code because I assume it's proprietary but it is not something I couldn't have written myself so, given your reputation, you'd probably find it easy to implement. The idea is to create a regular grid by dividing up your range in $x$ and $y$, then create a nearest function that assigns $z$ values to the points in the grid nearest to your given points and then assign $z$ values to the rest of the points in the regular grid using some distance function of your choice. Once your regular grid has been populated with $z$ values, you can use `Interpolation` as usual. `InverseDistanceGrid2D` from that book is the one I tend to prefer and it works quite well on the dataset you posted - I expect it to work well in any dataset with the same kind of sprinkling of points. ````lst = RandomReal[{-2, 2}, {200, 3}] /. 
{x_, y_, z_} -> {x, y, 1.5 Exp[-(x^2 + y^2) - x] + z/10}; grid = Flatten[Table[{i, j}, {i, -2, 2, 0.2}, {j, -2, 2, 0.2}], 1]; (* grid chosen so it is roughly twice as dense as your original set*) f = InverseDistanceGrid2D[lst, 3, 2, grid]; (*calling of magical function whose code I can't post*) Show[Plot3D[f[x, y], {x, -2, 2}, {y, -2, 2}, PlotPoints -> 80, Lighting -> "Neutral"], ListPointPlot3D[lst, PlotStyle -> Directive[{Red, PointSize[0.015]}]]] ```` I assume your data are more unpredictable than a Gaussian but if -like here- you know what function to expect, then `FindFit` is what you need. - Thank you very much. The book you mentioned is not available, but I have got the direction. Without the book it may be even better. – Alexei Boulbitch Feb 28 at 13:39 If you want to see the routines in Haneberg's book, you can download them here. It seems to be called `ReciprocalDistanceGrid[]` in the book, tho. Looking at the implementation, it seems to me that one can now implement this in a less cumbersome manner using `Nearest[]`. – J. M.♦ Apr 24 at 0:09 You could interpolate and then integrate the interpolating function. I don't know that this is perfect, but I think any approach you take is likely to error prone, as you presumably already have error in the data. At any rate, here's how to use interpolation. ````SeedRandom[1]; lst = RandomReal[{-2, 2}, {200, 3}] /. {x_, y_, z_} -> {x, y, 1.5 Exp[-(x^2 + y^2) - x] + z/10}; interpF = Interpolation[lst] ```` We've generated a function defined on most of $[-2,2]\times[-2,2]$. You should understand of course, that we shouldn't trust the function evaluated outside of this domain. ````{interpF[0, 0], interpF[2, 2]} ```` To integrate this over an $xy$ domain, let's set up a funcion `test` that returns 0 or 1, depending on whether the point $(x,y)$ is inside the convex hull of the projection of your points onto the $xy$-plane. ````Graphics`Mesh`MeshInit[]; lst2D = Most /@ lst; poly = lst2D[[ConvexHull[lst2D]]]; test[x_?NumericQ, y_?NumericQ] := Boole[InPolygonQ[poly, {x, y}]]; Plot3D[test[x, y]*interpF[x, y], {x, -2, 2}, {y, -2, 2}] ```` Note that we generate a message, but `test` is zero for those points where extrapolation was used and the message generated. Similarly, we can `NIntegrate`: ````NIntegrate[test[x, y]*interpF[x, y], {x, -2, 2}, {y, -2, 2}, Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}] (* Out: 5.83902 *) ```` - Thank you. I will try. – Alexei Boulbitch Feb 28 at 13:40 I know this is a Mathematica site, and my answer might be considered blasphemy, but it might solve your problem (which is the purpose of all this, I suppose). MATLAB has a function that does exactly that: it's called griddata. Since this is post-processing, I thought you might as well pass your data through MATLAB before handing it to Mathematica. If you don't have a MATLAB license, there's an open source alternative, which I never tried but people say it's working. - Thank you, but unfortunately I am illiterate in MATLAB. I prefer, therefore, to stay within Mathematica, even if there are better solutions. – Alexei Boulbitch Feb 28 at 13:31 lang-mma
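gpap's answer above describes the "regular grid + nearest/inverse-distance" idea but omits the routine itself. Below is a rough, hedged sketch of that idea only — it is NOT Haneberg's actual `InverseDistanceGrid2D`; the name `idwGrid`, the fixed grid range `{-2, 2}`, the power/step parameters and the small fudge term are my own choices, and it reuses `lst` from the question and the `test[x, y]` indicator defined in the interpolation answer above.

````(* Rough sketch of "grid the scattered data with inverse-distance weights,
   then interpolate the regular grid" -- an illustration, not the book's code. *)
idwGrid[data_, power_, step_] :=
 Module[{pts = data[[All, 1 ;; 2]], zs = data[[All, 3]], weights, gridded},
  gridded = Flatten[
    Table[
     (* weight each scattered point by 1/distance^power to the grid node *)
     weights = (1./(Norm[# - {x, y}]^power + 1.*^-12)) & /@ pts;
     {{x, y}, weights.zs/Total[weights]},
     {x, -2, 2, step}, {y, -2, 2, step}], 1];
  Interpolation[gridded]]

fIDW = idwGrid[lst, 2, 0.2];
(* integrate over the convex hull, reusing the indicator test[x,y] from above *)
NIntegrate[test[x, y] fIDW[x, y], {x, -2, 2}, {y, -2, 2},
 Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0}]
````

The tiny `1.*^-12` term only guards against a grid node landing exactly on a data point; a `Nearest`-based variant, as J. M. suggests, would be the less cumbersome modern route.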
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920967161655426, "perplexity_flag": "middle"}
http://nrich.maths.org/2047
# Poly Fibs

##### Stage: 5 Challenge Level:

Consider the sequence of polynomials given by $P_{n+2}(x)=xP_{n+1}(x)-P_n(x)$ where $P_0(x)=0$ and $P_1(x)=1$

(i) Show that every root of $P_3$ is a root of $P_6$.

(ii) Show that every root of $P_4$ is a root of $P_8$.

(iii) Show that every root of $P_5$ is a root of $P_{10}$.

You can do this by finding the polynomials and then finding their roots (maybe using a computer), but try to find another way to get this result without finding the roots of the polynomials.

One of the skills of a research mathematician is making conjectures about results that no-one has thought of and that turn out to be provable. In this problem there is a conjecture about a general result which you may be able to make quite easily although the proof is well beyond the scope of school mathematics. Go on learning mathematics and in a few years you will be able to prove it.
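For readers who want to begin with the direct computation the problem mentions, the first few polynomials follow immediately from the recurrence (computed here as a starting point; they are not part of the original problem statement):
$$P_2 = x,\quad P_3 = x^2-1,\quad P_4 = x^3-2x,\quad P_5 = x^4-3x^2+1,\quad P_6 = x^5-4x^3+3x=(x^2-1)(x^3-3x).$$
The last factorisation already exhibits part (i), since every root of $P_3=x^2-1$ is visibly a root of $P_6$; the interest of the problem is in finding an argument that works without computing roots.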
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9526022672653198, "perplexity_flag": "head"}
http://mathoverflow.net/questions/10370/upper-bound-for-number-of-k-term-arithmetic-progressions-in-the-primes
## Upper bound for number of k-term arithmetic progressions in the primes

Standard heuristics suggest that the number of k-term arithmetic progressions of primes in [1,N] should be about $$c_k\frac{N^2}{\log^k N}$$ for some constant $c_k$ dependent on k. The paper of Green and Tao gives a similar lower bound for all k (with a much worse constant, but still), and recent work by Green, Tao and Ziegler has established the correct asymptotic for k=3 and k=4. I am looking for a reference which establishes an upper bound for all k - I'm sure I've heard of one, but I can't find mention of the relevant paper anywhere. Of course, if there is a simple proof, that would be appreciated as well.

That is, I am looking for a reference and/or proof which establishes that the number of k-term arithmetic progressions of primes in [1,N] is at most $$c_k'\frac{N^2}{\log^k N}$$ for some constant $c_k'$.

-

## 1 Answer

Well, any standard upper bound sieve (e.g. Selberg sieve, combinatorial sieve, beta sieve, etc.) will give this type of result. I'm not sure where you can find an easily citeable formulation, though. One can get this bound from Theorem D.3 of this paper of Ben and myself on page 67 (see in particular the remark at the bottom of that page). But this is certainly not the first place where such a bound appears. (The Goldston-Yildirim papers will give this result too, but this is also not the first place either.)

-
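As a quick empirical illustration of the heuristic (my own sanity check, not part of the question or answer), one can count 3-term progressions of primes up to $N$ by brute force and compare with $N^2/\log^3 N$; the ratio gives a rough empirical stand-in for $c_3$, up to lower-order corrections.

````(* Count 3-term APs p < q < 2q - p of primes up to n and compare with n^2/Log[n]^3. *)
n = 10000;
ps = Prime[Range[PrimePi[n]]];
count = 0;
Do[
 With[{third = 2 ps[[j]] - ps[[i]]},
  If[third <= n && PrimeQ[third], count++]],
 {i, 1, Length[ps] - 1}, {j, i + 1, Length[ps]}];
{count, N[n^2/Log[n]^3], count/N[n^2/Log[n]^3]}
````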
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9186604022979736, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/30309-help-required.html
# Thread: 1. ## Help required... Q. $(2x + y)dy - (4x +2y -1)dy = 0$ After using $Z = 2x +y$, I got the answer, $c = 2y -ln(2x + y)$ However, its very different from the answer at the back of the book. I have checked and rechecked but ended up with the same wrong answer. 2. Originally Posted by Altair Q. $(2x + y){\color{red}dy} - (4x +2y -1){\color{blue}dy} = 0$ Mr F says: You have two dy's here. I'm guessing one of them is meant to be a dx. Which one - the red or the blue? After using $Z = 2x +y$, I got the answer, $c = 2y -ln(2x + y)$ However, its very different from the answer at the back of the book. I have checked and rechecked but ended up with the same wrong answer. .. 3. The red one. 4. Originally Posted by Altair The red one. $\frac{dy}{dx} = \frac{2x + y}{4x + 2y - 1}$. Substitute $z = 2x + y \Rightarrow y = z - 2x$. Then $\frac{dy}{dx} = \frac{dz}{dx} - 2$. Therefore: $\frac{dz}{dx} - 2 = \frac{z}{2z - 1}$ $\frac{dz}{dx} = \frac{z}{2z - 1} + 2 = \frac{5z - 2}{2z - 1}$ $\Rightarrow \frac{dx}{dz} = \frac{2z - 1}{5z - 2} = \frac{1}{5} \, \left( 2 - \frac{1}{5z - 2}\right)$. Therefore: $x = \frac{1}{5} \, \left( 2z - \frac{1}{5} \ln |5z - 2| + C \right)$ $\Rightarrow 5x = 2z - \frac{1}{5} \ln |5z - 2| + C$ $\Rightarrow 5x = 2(2x + y) - \frac{1}{5} \ln |5(2x + y) - 2| + C$. I'll leave to you to take it as further as necessary.
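Picking up where the last line leaves off (this is just the remaining algebra under the same substitution, not part of the original reply): substituting back and collecting terms gives
$$5x = 4x + 2y - \frac{1}{5}\ln|10x + 5y - 2| + C \;\Longrightarrow\; x - 2y + \frac{1}{5}\ln|10x + 5y - 2| = C,$$
which can be checked by implicit differentiation against $\frac{dy}{dx} = \frac{2x+y}{4x+2y-1}$.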
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9725241661071777, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/299071/question-about-relation-on-rational-numbers
# Question about relation on rational numbers Define the relation on $\mathbb{Q}$ by $$[m,n]<[j,k]$$ if and only if $jn-mk$ belongs to $\mathbb{N}$, $j$ and $m$ belong to $\mathbb{Z}$, $n$ and $k$ belong to $\mathbb{N}$. (a) Show that $<$ is well defined, that is if $(m,n)\sim (m',n')$ and $(j,k)\sim(j',k')$, then $jn-mk$ belongs to $\mathbb{N}$ if and only if $j'n'-m'k'$ belongs to $\mathbb{N}$. Here, $(m,n)\sim (j,k)$ means $mk=jn$. (b) Show that $<$ is a total order relation on $\mathbb{Q}$. I get stuck how to use the conditions: $mn'=m'n$ derived from $(m,n)\sim(m',n')$, $jk'=j'k$ derived from $(j,k)\sim(j',k')$ and $jn-mk$ is an natural number to show that $j'n'-m'k'$ is also an natural number in part (a). Thank you! - 3 Is the relation on $\mathbb Q$ or on $\mathbb{Q\times Q}$ or on $\mathbb{Z\times N}$? The first line indicates one thing, but the second another. – Asaf Karagila Feb 10 at 2:17 @Asaf, I'm guessing that OP has forgotten to tell us that $[m,n]$ is notation for what the rest of us call $m/n$, so it really is a relation on the rationals. – Gerry Myerson Feb 10 at 2:23 1 Elvis, I imagine you aren't just posting this collection of facts so we can all admire them; rather, you want help in establishing them. In which case, it will go better if you tell us how you came across these facts, why it is important to you to establish them, what you know about the terms used in them, how far you have progressed on your own, where you have gotten stuck, and so on. The more you tell us, the better we can help. – Gerry Myerson Feb 10 at 2:26 Hi,[m,n]is an element of ZXZ, I get stuck how to use the conditions: mn'=m'n derived from(m,n)~(m',n'), jk'=j'k derived from (j,k)~(j',k') and jn-mk is an natural number to show that j'n'-m'k' is also an natural number in part (a). Thank you! – Elvis Feb 10 at 19:33 show 2 more comments ## 1 Answer What is it all about? How do you mean '$[m,n]\in\Bbb Q$' ? We are building the numbers out of almost nothing, first the natural numbers $1,2,3,..$ then -to ensure inverse for $+$- the integers, now this $[m,n]$ wants to represent the fraction $m/n$. How to say with these formal pairs that $m/n < j/k$? This is equivalent to $mk<jn$, that is $jn-mk >0$. (I guess, $0\notin\Bbb N$ in your meaning.) Now, this form is acceptable, since division is not available yet, but the expression $jn-mk$ is already defined in $\Bbb Z$. So, for the specific question: we are to prove (excluding division and fractions) basically that $m'/n'=m/n < j/k =j'/k'$ implies $m'/n'<j'/k'$. For this, first let's take only one step: $[m',n']=[m,n]<[j,k] \Rightarrow [m',n']<[j,k]$: So we have $m'n=n'm$ and $mk<jn$. Then, since $n>0$ (denominator), the sign of $m$ and $m'$ is the same. Now assume that $m,m'>0$, and approaching $m'k$ for the proof: $$m'mk<jnm'=jn'm$$ Since $m>0$ is assumed, it follows that $m'k<jn'$. If $m,m'<0$ then the relation symbol will turn twice, and you can also check the case $m=m'=0$. - Thank you, Beci! Your response gave me lots of hints. Just a little modification on your methods. In the case we assume m>0,m'>0,j>0 and j'>0, we can multiply both sides of the inequality of jn>mk by j'm', then we have j'm'jn>j'm'mk. Then we use the fact that mn'=m'n and jk'=j'k, and we have j'jmn'>jk'm'm, that's j'n'jm>m'k'jm. Since by assumption jm>0, we have j'n'>m'k'as we need. – Elvis Feb 11 at 3:53
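A compact way to organize part (a), along the lines of the last comment but using the positivity of the denominators $n,k,n',k'\in\mathbb{N}$ instead of a case analysis on the signs of the numerators (this reformulation is an editorial addition):
$$jn-mk\in\mathbb{N}\;\Rightarrow\; jn\,n'k' > mk\,n'k'\;\Rightarrow\;(jk')\,nn' > (mn')\,kk'\;\Rightarrow\;(j'k)\,nn' > (m'n)\,kk'\;\Rightarrow\; j'n' > m'k',$$
where the third step substitutes $jk'=j'k$ and $mn'=m'n$, and the last step cancels the positive factor $nk$. Since $j'n'-m'k'$ is an integer, it follows that $j'n'-m'k'\in\mathbb{N}$; the converse direction is identical with the primed and unprimed pairs exchanged.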
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415577054023743, "perplexity_flag": "head"}
http://wiki.chemprime.chemeddl.org/index.php/CoreChem:Effect_of_Adding_a_Reactant_or_Product
# CoreChem:Effect of Adding a Reactant or Product

If we have a system which is already in equilibrium, addition of an extra amount of one of the reactants or one of the products throws the system out of equilibrium. Either the forward or the reverse reaction will then occur in order to restore equilibrium conditions. We can easily tell which of these two possibilities will happen from Le Chatelier's principle. If we add more of one of the products, the system will adjust in order to offset the gain in concentration of this component. The reverse reaction will occur to a limited extent so that some of the added product can be consumed. Conversely, if one of the reactants is added, the system will adjust by allowing the forward reaction to occur to some extent. In either case some of the added component will be consumed.

We see this principle in operation in the case of the decomposition of HI at high temperatures:

2HI(g) $\rightleftharpoons$ H2(g) + I2(g)

In Example 1 from Calculating the Extent of a Reaction we saw that if 1 mol HI is heated to 745 K in a 10-dm3 flask, some of the HI will decompose, producing an equilibrium mixture of composition

(1) [HI] = 0.0780 mol dm–3; [I2] = 0.0110 mol dm–3; [H2] = 0.0110 mol dm–3

This is a genuine equilibrium mixture since it satisfies the equilibrium law

$K_{c}=\frac{[\mathrm{H_2}][\mathrm{I_2}]}{[\mathrm{HI}]^{2}}=\frac{(0.011\text{ mol dm}^{-3})(0.011\text{ mol dm}^{-3})}{(0.078\text{ mol dm}^{-3})^{2}}=0.020=K_{c}$

If an extra mole of H2 is added to this mixture, the concentrations become

(2) cHI = 0.0780 mol dm–3; cI2 = 0.0110 mol dm–3; cH2 = 0.111 mol dm–3

The system is no longer in equilibrium (hence the lack of square brackets to denote equilibrium concentrations) as we can easily check from the equilibrium law

$\frac{c_{\mathrm{H_2}}\times c_{\mathrm{I_2}}}{c_{\mathrm{HI}}^{2}}=\frac{0.111\text{ mol dm}^{-3}\times 0.011\text{ mol dm}^{-3}}{(0.078\text{ mol dm}^{-3})^{2}}=0.201\ne K_{c}$

The addition of H2 has increased the concentration of this component. Accordingly, Le Chatelier's principle predicts that the system will achieve a new equilibrium in such a way as to reduce this concentration. The reverse reaction occurs to a limited extent. This not only reduces the concentration of H2 but the concentration of I2 as well. At the same time the concentration of HI is increased. The system finally ends up with the concentrations calculated in Example 2 from Calculating the Extent of a Reaction, namely,

(3) [HI] = 0.0963 mol dm–3; [I2] = 0.001 82 mol dm–3; [H2] = 0.1018 mol dm–3

This is again an equilibrium situation since it conforms to the equilibrium law

$\frac{[\mathrm{H_2}][\mathrm{I_2}]}{[\mathrm{HI}]^{2}}=\frac{0.1018\text{ mol dm}^{-3}\times 0.00182\text{ mol dm}^{-3}}{(0.0963\text{ mol dm}^{-3})^{2}}=0.02=K_{c}$

The way in which this system responds to the addition of H2 is also illustrated schematically in Fig. 1. The actual extent of the change is exaggerated in this figure for diagrammatic effect.

Figure 1 (reaction shown: H2 + I2 → 2HI) Le Chatelier's principle: effect of adding a component. At 745 K, HI is partially decomposed into H2 and I2: 2HI $\rightleftharpoons$ H2 + I2. If extra hydrogen (gray) is added to the equilibrium mixture, the system responds in such a way as to reduce the concentration of H2. Some I2 reacts with the H2, and more HI is formed. The equilibrium is shifted to the left. Note, however, that some of the I2 has been consumed, and its concentration is smaller than before.

Le Chatelier's principle can also be applied to cases where one of the components is removed. In such a case the system responds by producing more of the component removed. Consider, for example, the ionization of the weak diprotic acid H2S:

H2S + 2H2O $\rightleftharpoons$ 2H3O+ + S2–

Since H2S is a weak acid, very few S2– ions are produced, but a much larger concentration of S2– ions can be obtained by adding a strong base. The base will consume most of the H3O+ ions. As a result, more H2S will react with H2O in order to make up the deficiency of H3O+, and more S2– ions will also be produced. This trick of removing one of the products in order to increase the concentration of another product is often used by chemists, and also by living systems.

EXAMPLE 1 When a mixture of 1 mol N2 and 3 mol H2 is brought to equilibrium over a catalyst at 773 K (500°C) and 10 atm (1.01 MPa), the mixture reacts to form NH3 according to the equation

N2(g) + 3H2(g) $\rightleftharpoons$ 2NH3(g)      ΔHm = – 94.3 kJ

The yield of NH3, however, is quite small; only about 2.5 percent of the reactants are converted. Suggest how this yield could be improved (a) by altering the pressure; (b) by altering the temperature; (c) by removing a component; (d) by finding a better catalyst.

Solution

a) Increasing the pressure will drive the reaction in the direction of fewer molecules. Since Δn = – 2, the forward reaction will be encouraged, increasing the yield of NH3.

b) Increasing the temperature will drive the reaction in an endothermic direction, in this case in the reverse direction. In order to increase the yield, therefore, we need to lower the temperature.

c) Removing the product NH3 will shift the reaction to the right. This is usually done by cooling the reaction mixture so that NH3(l) condenses out. Then more N2(g) and H2(g) are added, and the reaction mixture is recycled to a condition of sufficiently high temperature that the rate becomes appreciable.

d) While a better catalyst would speed up the attainment of equilibrium, it would not affect the position of equilibrium. It would therefore have no effect on the yield.

Note: As mentioned in Chaps. 3 and 12, NH3 is an important chemical because of its use in fertilizers. In the design of a Haber-process plant to manufacture ammonia, attempts are made to use as high a pressure and as low a temperature as possible. The pressure is usually of the order of 150 atm (15 MPa), while the temperature is not usually below 750 K. Although a lower temperature would give a higher yield, the reaction would go too slowly to be economical, at least with present-day catalysts. The discoverer of a better catalyst for this reaction would certainly become a millionaire overnight.
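To make part (a) of the Example quantitative (a standard ideal-gas argument added here, not part of the original text): for the ammonia synthesis $\Delta n = 2 - 4 = -2$, so writing the equilibrium law in mole fractions $x_i$ with total pressure $P$,
$$K_p=\frac{p_{\mathrm{NH_3}}^{2}}{p_{\mathrm{N_2}}\,p_{\mathrm{H_2}}^{3}}=\frac{x_{\mathrm{NH_3}}^{2}}{x_{\mathrm{N_2}}\,x_{\mathrm{H_2}}^{3}}\,P^{\Delta n}=\frac{K_x}{P^{2}},\qquad\text{so}\qquad K_x=K_p\,P^{2}.$$
Since $K_p$ is fixed at a given temperature, raising $P$ forces the mole-fraction quotient $K_x$ to rise, i.e. the mixture must contain relatively more NH3 at the new equilibrium.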
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345558881759644, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/1743/what-is-the-constant-of-the-coppersmith-winograd-matrix-multiplication-algorithm/1747
## What is the constant of the Coppersmith-Winograd matrix multiplication algorithm

Or at least its order of magnitude. I've only ever heard it described as "huge", and a google search turned up nothing. Also, given that the Strassen algorithm has a significantly greater constant than Gaussian Elimination, and that Coppersmith-Winograd is greater still, are there any indications of what constant an O(n^2) matrix multiplication algorithm might have?

-

## 2 Answers

In your second question, I think you mean "naive matrix multiplication", not "Gaussian elimination". Henry Cohn et al had a cute paper that relates fast matrix multiply algorithms to certain groups. It doesn't do much for answering your question (unless you want to go and prove the conjectured results =), but it's a fun read. Also, to back up harrison, I don't think that anyone really believes that there's an $O(n^2)$ algorithm. A fair number of people believe that there is likely to be an algorithm which is $O(n^{2+\epsilon})$ for any $\epsilon > 0$. An $O(n^2 \log n)$ algorithm would fit the bill.

edit: You can get a back-of-the-envelope feeling for a lower bound on the constant in Coppersmith-Winograd based on the fact that people don't use it, even for n on the order of 10,000; naive matrix multiplication requires $2n^3 + O(n^2)$ flops, and Coppersmith-Winograd requires $Cn^{2.376} + O(n^2)$. Setting the expressions equal and solving for $C$ gives that the two algorithms would have equal performance for n = 10,000 (ignoring memory access patterns, implementation efficiency, and all sorts of other things) if the constant were about 627. In reality, it's likely much larger.

- But you'd probably be using Strassen multiplication anyway if n was on the order of 10,000, right? – Darsh Ranjan Jan 31 2010 at 11:38

In answer to the second part of your question, I think the conventional wisdom is that there isn't an O(n^2) algorithm; analogously to the case for integer multiplication, you shouldn't be able to do better than about O(n^2 log n). (Raz has shown that this is a lower bound in the arithmetic circuits with bounded coefficients model.) What's the implied constant there? Probably just "huge."

As far as I know, the reason that people believe that we can achieve close to O(n^2) is basically by analogy with integer multiplication, so if you want some grasp on the constants it might be worthwhile to look at the constants in FFT multiplication.

Incidentally, has the appropriate volume of Art of Computer Programming been released, or will it be soon? I know Knuth's a stickler for including these kinds of details, so that might be the most obvious reference apart from the original paper...

- If it's in TAOCP, it's in Volume 2, "Seminumerical algorithms". – Michael Lugo Oct 22 2009 at 14:03
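The crossover estimate in the first answer, written out (same numbers, just the arithmetic made explicit):
$$2n^{3} = C\,n^{2.376}\;\Longrightarrow\;C = 2\,n^{0.624}\;\overset{n=10^{4}}{=}\;2\cdot 10^{2.496}\approx 627.$$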
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9531844258308411, "perplexity_flag": "middle"}
http://physics.stackexchange.com/users/20032/markovchain?tab=activity
# markovchain reputation 8 bio website location age member for 3 months seen Apr 11 at 2:43 profile views 28 | | | bio | visits | | | |----------|----------------|---------|----------|----------------|----------| | | 365 reputation | website | | member for | 3 months | | 8 badges | location | | seen | Apr 11 at 2:43 | | # 91 Actions | | | | |-------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Mar30 | comment | Why are magnetic lines of force invisible?This is an interesting question. Too bad it was never answered. Some species of birds can see magnetic fields, after all. | | Mar29 | comment | Why is AC more “dangerous” than DC?I think he means that, since he clearly describes it in his question | | Mar27 | revised | What is the meaning of $h_L - h_H$ for a heat engine?added 87 characters in body | | Mar27 | revised | What is the meaning of $h_L - h_H$ for a heat engine?added 104 characters in body | | Mar27 | comment | What is the meaning of $h_L - h_H$ for a heat engine?Also, I know I tag this as homework (as is the practice on this site for homework-y questions), but it's not actually graded homework. I'm just doing problems. | | Mar27 | accepted | In $PdV$, what is the value of $P$? $P_1$ or $P_2$? | | Mar27 | comment | In $PdV$, what is the value of $P$? $P_1$ or $P_2$?I did figure it out. But it's still nice to have an answer to accept. I realized that $U_2 - U_1$ only depends on $T$ and not $P$ but only after I searched my formulas. I dont quite remember all of them yet. But still, thanks! | | Mar27 | asked | What is the meaning of $h_L - h_H$ for a heat engine? | | Mar27 | comment | In $PdV$, what is the value of $P$? $P_1$ or $P_2$?Thanks. Okay, I think I have something. Is it $-W_{12} = U_2 - U_1$? My $U_2 - U_1$ can be taken from my values of pressure and temperature. And there is my specific work. Thanks for the hints (if this is right)! | | Mar27 | comment | In $PdV$, what is the value of $P$? $P_1$ or $P_2$?Oh, wait, do you mean that the function of increase is linear? | | Mar27 | comment | In $PdV$, what is the value of $P$? $P_1$ or $P_2$?I think you're implying the pressure is constant... which would make sense as the set up is a piston cylinder. But I've been given values for $P_1$ and $P_2$. I'm really sorry for not knowing, but I honestly don't know. | | Mar27 | revised | In $PdV$, what is the value of $P$? $P_1$ or $P_2$?edited tags | | Mar27 | comment | In $PdV$, what is the value of $P$? $P_1$ or $P_2$?Ah. Unfortunately I'm not given that function. Is there another way for me to get the specific work? | | Mar27 | asked | In $PdV$, what is the value of $P$? $P_1$ or $P_2$? | | Mar23 | comment | Future light cones inside black holeSince the timelike and spacelike dimensions flip inside a black hole, can you avoid getting older when you cross the event horizon? Even more, can you reverse your aging (just as you can turn around in space)? 
| | Mar21 | comment | Where are we : On level ground or on a ramp - moving in a train?Why would he mention the pendulum if he didn't want to know about it? Disregarding that is functionally the same as "can we comment where we are... if I'm eating lunch too?" The fact you eat lunch doesnt have to do with anything -- but a pendulum might. Wouldn't that be why it's mentioned in the question? I'm not disputing your answer. You're quite correct. But isn't it interesting to wonder if the pendulum would tilt to one side when the train gets on the ramp? | | Mar21 | revised | How to solve state parameters using these givens for an ideal gas?added tags | | Mar21 | comment | Where are we : On level ground or on a ramp - moving in a train?But the question was, can we do it by observing the pendulum? That is, will it "tilt" towards one side? I think it's the same thing as asking, if we have water in a glass and the train goes up the ramp, will the water level, from my perspective, tilt? | | Mar21 | comment | How to solve state parameters using these givens for an ideal gas?But then I would need to know $E$, $v$ and/or $u$, since I don't know these values yet. Honestly, this is a basic thermo class, and I'm not sure if the inlet and exit velocities are useful or just a red herring, but my teachers feel like giving us a hard time. | | Mar20 | comment | How to solve state parameters using these givens for an ideal gas?The only thing it says in there about turbine efficiency is already written above: $\eta = \frac{w_{actual}}{w_{isentropic}} = \frac{h_i - h_e}{h_i - h_{e)s}}$. That's my problem, because I only solved for the isentropic $T_e$ but I know that my process isn't really isentropic, and I'm having a hard time finding the real value. Isn't that metric $h+1/2V^2$ applicable only to a reversible, isentropic case? Or am I wrong? |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553446769714355, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/202867/intersection-of-a-unit-sphere-of-a-given-norm-in-finite-dimension-with-an-hyperp/202902
# Intersection of a unit sphere of a given norm in finite dimension with an hyperplane. Let $\|\cdot\|$ be a norm on $\mathbb{R}^n$. Let $C:=\{x\in\mathbb{R}^n\,:\,\|x\| \leq 1\}$, that is to say let $C$ be a convex compact symmetric set of non empty interior. Let $H$ be a linear subspace of $\mathbb{R}^n$ of dimension $n-1$. Is it true that there exists $z\in \mathbb{R}^n$ such as $\|z\|=1$ and \begin{align*} \Big[(H \cap \partial C) + \mathbb{R}z\Big] \cap \mathring{C} = \emptyset \quad\text{ ? } \end{align*} If not, please, give me a counter-example ! It appears to me that $z\in H^\perp$ should be a proper candidate, but though I might see why this is working for the $\| \cdot \|_p$ norms for example in low dimensions, I did not manage to prove it generally. Thanks in advance. Xou. - Did you mean $\leq$ in the definition of $C$? – copper.hat Sep 26 '12 at 15:11 What's for you $\,\Bbb Rz\,$?? – DonAntonio Sep 26 '12 at 15:12 @DonAntonio: Presumably $\{\lambda z \}_{\lambda \in \mathbb{R}}$? – copper.hat Sep 26 '12 at 15:13 what is $\mathring{C}$? – robjohn♦ Sep 26 '12 at 15:16 Most likely the interior. – Stefan Sep 26 '12 at 15:28 show 2 more comments ## 2 Answers If $z \neq 0$ the result is not true in general. Here is a counterexample. Let $C = \{ (x_1,x_2) | x_1^2+x_2^2 = 1 \}$. Let $H$ = $\{ x | x_2 = \sqrt{2}\}$. Then $H \cap \partial C = \{(\pm \sqrt{2}, \sqrt{2})^T \}$, and $(H \cap \partial C) + \mathbb{R}z = \{ \lambda z + \sqrt{2} (1,1)^T, \lambda z + \sqrt{2} (-1,1)^T \}_{\lambda \in \mathbb{R}}$. (Picture inspired by @robjohn.) Now let $\phi(x) = \frac{1}{2} (x_1^2+x_2^2)$ and let $z \neq 0$. We will show that for any $z$, there exists $x \in H \cap \partial C$ and a $\lambda$ such that $\phi(x+\lambda z) < \frac{1}{2}$, which implies that $x+\lambda z \in C^\circ$. In particular, all we need to do is to show that $\frac{\partial \phi(x)}{\partial x} z \neq 0$, since if this is the case, it is straightforward to show that there exists a $\lambda$ such that $\phi(x+\lambda z) < \phi(x) = \frac{1}{2}$. Suppose, to the contrary, that $\frac{\partial \phi(x)}{\partial x}z = x^T z = 0$ for all $x \in H \cap \partial C$. Then this implies that $\sqrt{2} (\pm 1, 1) z = 0$, which implies that $z = 0$ since $(\pm 1, 1)^T$ are linearly independent, which contradicts our assumption that $z \neq 0$. - By $\frac{\partial \phi(x)}{\partial x}$, do you mean $\nabla\phi$? At least it seems that $\frac{\partial \phi(x)}{\partial x}$ is supposed to be a vector. Assuming so, then this looks good (+1). – robjohn♦ Sep 26 '12 at 17:53 @robjohn: Sort of, $\frac{\partial \phi(x)}{\partial x} h = \langle \nabla\phi(x), h \rangle$. $\frac{\partial \phi(x)}{\partial x}\in (\mathbb{R}^n)^*$, whereas $\nabla\phi \in \mathbb{R}^n$. (And thanks!) – copper.hat Sep 26 '12 at 19:57 I am so sorry, I think my english is not good enough : I meant a linear hyperplan (i.e. containing 0), that is a subspace of dimension $n-1$. In your example, you could only use lines crossing zero, and you may check that the orthogonal direction does the job for $z$. – xounamoun Sep 26 '12 at 21:14 Unless I am misunderstanding something, the following seems to be a counterexample. Given the standard Euclidean norm in $\mathbb{R}^3$ and the plane $x_1=.6$, $H\cap\partial C$ is the circle $$\{x:x_1=.6\text{ and }x_2^2+x_3^2=.64\}$$ No matter what $z$ is (other than $0$), the elliptic cylinder $(H \cap \partial C) + \mathbb{R}z$ will intersect $\mathring{C}$. 
$\hspace{3.3cm}$ Given any $z$, if there is an $h\in H\cap\partial C$ so that $h+\mathbb{R}z$ does not intersect $\mathring{C}$, then $z$ must be perpendicular to the normal to $\partial C$ at $h$. In order that $h+\mathbb{R}z$ does not intersect $\mathring{C}$ for all $h\in H\cap\partial C$, $z$ would have to be perpendicular to each normal to $\partial C$ at a point of $H\cap\partial C$. Therefore, $z$ would need to be perpendicular to $(.6,.8,0)$ and $(.6,0,.8)$, thus parallel to their cross product $(.64,-.48,-.48)$. Furthermore, $z$ would need to be perpendicular to $(.6,-.8,0)$ and $(.6,0,.8)$, thus parallel to their cross product $(-.64,-.48,.48)$. However, since $(.64,-.48,-.48)$ and $(-.64,-.48,.48)$ are not parallel, $z$ cannot be parallel to them both, unless it is $0$. Thus, there is no $z$ that so that $\left[(H\cap\partial C)+\mathbb{R}z\right]\cap\mathring{C}=\emptyset$. Revised Question Consider the maximum norm on $\mathbb{R}^3$ given by $$\|x\|=\max(|x_1|,|x_2|,|x_3|)$$ Under this norm, $C$ is a cube of side $2$. Let $H$ be the plane $x_1+x_2+x_3=0$. This plane intersects all $6$ sides of $\partial C$. Each of the following points is on $H$ and a different side of $\partial C$: $$\small\left\{\left(1,-\frac12,-\frac12\right),\left(-\frac12,1,-\frac12\right),\left(-\frac12,-\frac12,1\right),\left(-1,\frac12,\frac12\right),\left(\frac12,-1,\frac12\right),\left(\frac12,\frac12,-1\right)\right\}$$ As argued in the answer to the original question, $z$ must be perpendicular to the normal to $C$ at all points of $H\cap\partial C$. However, the normals to the cube span $\mathbb{R}^3$, so there can be no such $z$ other than $z=0$. True in Two Due to the fact that $\|-x\|=\|x\|$, the normal to $\partial C=\{x:\|x\|=1\}$ at any $x$ is the opposite of the normal at $-x$. In $\mathbb{R}^2$, $H\cap\partial C$ consists of two points, $h$ and $-h$. Thus, if $z$ is perpendicular to the normal to $\partial C$ at $h$, then neither $h+\mathbb{R}z$ nor $-h+\mathbb{R}z$ intersect $\mathring{C}$. - Same remark as above. I am so sorry because you both gave good, precise and furthermore drawn answers ! Since the plane has to pass through zero, again the orthogonal direction seems to work (the intersection is made through one of the "big circles" of the sphere). – xounamoun Sep 26 '12 at 21:17 – robjohn♦ Sep 27 '12 at 14:57 Ok, I agree ! Thanks ! – xounamoun Sep 27 '12 at 16:02 @xounamoun: However, it appears to be true in $\mathbb{R}^2$. – robjohn♦ Sep 27 '12 at 16:53 Yeah, I noticed this before, this is probably because the intersection seems to be always reduced to 2 points. Actually, this question is related to a lemma used in the study of Schauder bases. I was quite sure that the result may not be true generally in finite dimension because it would simplify considerably some classical proof ! Ok I just read the proof " true in two ", I agree also ! – xounamoun Sep 28 '12 at 7:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 111, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395638108253479, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/87028-limits-abs-function.html
# Thread:

1. ## Limits Of An abs function

I was told to use algebraic manipulation to find the limit $\lim_{x \to 4} \frac{2|{x-4}|}{x-4}$

2. You get 2 if you approach from the right and you get -2 if you approach from the left. Hence there is no limit. ${|u|\over u}=1$ if u>0 and ${|u|\over u}=-1$ if u<0. Hence there is no limit.

3. So the limit does not exist because the limit from the right is not the same as the limit from the left.
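Spelling out the algebraic manipulation asked for in the first post (same conclusion as the replies above): for $x>4$ we have $|x-4|=x-4$, and for $x<4$ we have $|x-4|=-(x-4)$, so
$$\lim_{x\to 4^{+}}\frac{2|x-4|}{x-4}=\lim_{x\to 4^{+}}\frac{2(x-4)}{x-4}=2,\qquad \lim_{x\to 4^{-}}\frac{2|x-4|}{x-4}=\lim_{x\to 4^{-}}\frac{-2(x-4)}{x-4}=-2,$$
and since the one-sided limits disagree, the two-sided limit does not exist.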
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8692672848701477, "perplexity_flag": "middle"}
http://cheaptalk.org/2011/04/18/maximizing-consumer-surplus-part-ii-compromising/
Cheap Talk A blog about economics, politics and the random interests of forty-something professors # Maximizing Consumer Surplus Part II: Compromising April 18, 2011 in Uncategorized | Tags: economics, food and wine, incentives, vapor mill Restaurants, touring musicians, and sports franchises are not out to gouge every last penny out of their patrons.  They want patrons to enjoy their craft but also to come away feeling like they didn’t pay an arm and a leg.  Yesterday I tried to formalize this motivation as maximizing consumer surplus but that didn’t give a useful answer. Maximizing consumer surplus means either complete rationing (and zero profit) or going all the way to an auction (a more general argument why appears below.)  So today I will try something different. Presumably the restaurant cares about profits too.  So it makes sense to study the mechanism that maximizes a weighted sum of profits and consumer’s surplus. We can do that.  Standard optimal mechanism design proceeds by a sequence of mathematical tricks to derive a measure of a consumer’s value called virtual surplus.  Virtual surplus allows you to treat any selling mechanism you can imagine as if it worked like this 1. Consumers submit “bids” 2. Based on the bids received the seller computes the virtual surplus of each consumer. 3. The consumer with the highest virtual surplus is served. If you write down the optimal mechanism design problem where the seller puts weight $\alpha$ on profits and weight $1 - \alpha$ on consumer surplus, and you do all the integration by parts, you get this formula for virtual surplus. $\alpha v + (1 - 2\alpha) \frac{1 - F(v)}{f(v)}$ where $v$ is the consumer’s willingness to pay, $F(v)$ is the proportion of consumers with willingness to pay less than $v$ and $f(v)$ is the corresponding probability density function.   That last ratio is called the (inverse) hazard rate. As usual, just staring down this formula tells you just about everything you want to know about how to design the pricing system.  One very important thing to know is what to do when virtual surplus is a decreasing function of $v$. If we have a decreasing virtual surplus then we learn that it’s at least as important to serve the low valuation buyers as those with high valuations (see point 3 above.) But here’s a key observation: its impossible to sell to low valuation buyers and not also to high valuation buyers because whatever price the former will agree to pay the latter will pay too.  So a decreasing virtual surplus means that you do the next best thing: you treat high and low value types the same. This is how rationing becomes part of an optimal mechanism. For example, suppose the weight on profit $\alpha$ is equal to $0$. That brings us back to yesterday’s problem of just maximizing consumer surplus. And our formula now tells us why complete rationing is optimal because it tells us that virtual surplus is just equal to the hazard rate which is typically monotonically decreasing. Intuitively here’s what the virtual surplus is telling us when we are trying to maximize consumer surplus. If we are faced with two bidders and one has a higher valuation than the other, then to try to discriminate would require that we set a price in between the two. That’s too costly for us because it would cut into the consumer surplus of the eventual winner. So that’s how we get the answer I discussed yesterday.  
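For readers who want the integration-by-parts step behind the virtual surplus formula spelled out, here is a sketch under the standard regularity assumptions (quasilinear payoffs, incentive compatibility, zero cost); this is my reconstruction rather than anything stated in the post. Write $q(v)$ for the probability a type-$v$ consumer is served, $t(v)$ for her expected payment, and $U(v)=vq(v)-t(v)$ for her surplus. The envelope condition gives $U'(v)=q(v)$, so

$$\mathbb{E}[U(v)]=U(\underline v)+\int_{\underline v}^{\bar v}\!\left(\int_{\underline v}^{v} q(s)\,ds\right) f(v)\,dv = U(\underline v)+\mathbb{E}\!\left[q(v)\,\frac{1-F(v)}{f(v)}\right]$$

after swapping the order of integration, and $\mathbb{E}[t(v)]=\mathbb{E}[vq(v)]-\mathbb{E}[U(v)]$. Plugging these into $\alpha\,\mathbb{E}[t(v)]+(1-\alpha)\,\mathbb{E}[U(v)]$ gives

$$\mathbb{E}\!\left[q(v)\left(\alpha v + (1-2\alpha)\,\frac{1-F(v)}{f(v)}\right)\right]+(1-2\alpha)\,U(\underline v),$$

so the seller serves whoever has the largest value of the bracketed expression, which is exactly the virtual surplus above.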
Before going on I would like to elaborate on yesterday's post based on correspondence I had with a few commenters, especially David Miller and Kane Sweeney. Their comments highlight two assumptions that are used to get the rationing conclusion: monotone hazard rate, and no payments to non-buyers. It gets a little more technical than usual so I am going to put it here in an addendum to yesterday (scroll down for the addendum.)

Now back to the general case we are looking at today, we can consider other values of $\alpha$. An important benchmark case is $\alpha = 1/2$ when virtual surplus reduces to just $v$, now monotonically increasing. That says that a seller who puts equal weight on profits and consumer surplus will always allocate to the highest bidder because his virtual surplus is higher. An auction does the job, in fact a second price auction is optimal. The seller is implementing the efficient outcome.

More interesting is when $\alpha$ is between $0$ and $1/2$. In general then the shape of the virtual surplus will depend on the distribution $F$, but the general tendency will be toward either complete rationing or an efficient auction. To illustrate, suppose that willingness to pay is distributed uniformly from $0$ to $1$. Then virtual surplus reduces to $(3 \alpha - 1) v + (1 - 2 \alpha)$, which is either decreasing over the whole range of $v$ (when $\alpha \leq 1/3$), implying complete rationing, or increasing over the whole range (when $\alpha > 1/3$), prescribing an auction.

Finally, when $\alpha > 1/2$ virtual surplus is the difference between an increasing function and a decreasing function and so it is increasing over the whole range, and this means that an auction is optimal (now typically with a reserve price above cost, so that in return for higher profits the restaurant lives with empty tables and inefficiency. This is not something any restaurant would choose if it can at all avoid it.)

What do we conclude from this? Maximizing a weighted sum of consumer surplus and profit again yields one of two possible mechanisms: complete rationing or an auction. Neither of these mechanisms seems to fit what Nick Kokonas was looking for in his comment to us and so we have to go back to the drawing board again. Tomorrow I will take a closer look and extract a more refined version of Nick's objective that will in fact produce a new kind of mechanism that may just fit the bill.

Addendum: Check out these related papers by Bulow and Klemperer (dcd: glen weyl) and by Daniele Condorelli.

## 8 comments

Yup, Jeff, this covers it. I'll send this to Jeremy and Paul. They might want to include something about it in their paper.

I admire Nick's empathy with his customers, but I don't think consumer surplus should be a component of Nick's objective at all. Isn't the difference between the price the consumer pays and the minimum price they are willing to pay (summed across all consumers) the consumer surplus? And as long as the expectation of the quality of food/service is met in the price they pay, the consumer will be satisfied, although the lower the price they would pay, the more satisfied they would be.
The auction system resolves this issue by asking consumers to set the prices. An individual won't pay anything more than the benefit they would receive. You could design a system that auctions tickets for different times of the day. With an auction, the price settled upon of a ticket is a perception issue… of the consumers perceiving the quality of the restaurant! They're WILLING to pay. If they don't think the price of the auctioned ticket was worth it, they wouldn't go back. Those that are satisfied, would. If the perception is of a lower quality, prices would reflect that. If they think Nick's restaurant is golden, prices would reflect that too. Nick wants his customers to be satisfied, but not so satisfied that they perceive they got a huge bargain. The auction hits it on the nose. The only question with the auction is how to construct one that can maximize profit for different size parties, for location of tables, etc. I mean, for a larger party, you may want to set a higher reserve price, and the same for table locations. But with regards to my original point, is there something I'm missing??

this is the point i will address tomorrow. i believe if you stare at Nick's comment you will see what he is getting at and that it makes good sense.

Incidentally, the rationing result features in McAfee and McMillan's paper on bidding rings. When transfers are not feasible the ring allocates the object randomly at the reserve price.

i see, thanks.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344695806503296, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/18464/what-are-the-correct-axioms-for-a-weakly-associative-monoidal-functor/18546
## Definitions and the main question Recall that a category $\mathcal C$ is monoidal if it is equipped with the following data (two functors, three natural transformations, and some properties): • a functor $\otimes: \mathcal C \times \mathcal C \to \mathcal C$, • a functor $1: \{\text{pt}\} \to \mathcal C$, • a natural transformation $\alpha: (X\otimes Y)\otimes Z \overset\sim\to X\otimes(Y\otimes Z)$ between functors $\mathcal C^{\times 3} \to \mathcal C$ (natural in $X,Y,Z$), • natural transformations $\lambda: 1 \otimes X \overset\sim\to X$, $\rho: X\otimes 1 \overset\sim\to X$ between functors $\mathcal C \to \mathcal C$ (natural in $X$; this uses the canonical isomorphisms of categories $\{\text{pt}\} \times \mathcal C \cong \mathcal C \cong \mathcal C \times \{\text{pt}\}$), • such that $\alpha$ satisfies a pentagon, • and $\alpha,\lambda,\rho$ satisfy some other equations. I tend to be less interested in the unit laws $\lambda,\rho$, which is my excuse for knowing less about their technicalities. In my experience, it's the associativity law $\alpha$ that can have interesting behavior. Let $\mathcal C,\mathcal D$ be monoidal categories. Recall that a functor $F: \mathcal C \to \mathcal D$ is (strong) monoidal if it comes with the following data (two natural transformations, and three properties): • a natural isomorphism $\phi: \otimes_{\mathcal D} \circ (F\times F) \overset\sim\to F\circ \otimes_{\mathcal C}$ of functors $\mathcal C \times \mathcal C \to \mathcal D$, • a natural isomorphism $\varphi: 1_{\mathcal D} \overset\sim\to F\circ 1_{\mathcal C}$ of functors $\{\text{pt}\} \to \mathcal D$, • satisfying some properties, the main one being that the two natural transformations $(FX \otimes_{\mathcal D} FY) \otimes_{\mathcal D} FZ \overset\sim\to F(X\otimes_{\mathcal C} (Y \otimes_{\mathcal C}Z))$ that are built from $\phi, \alpha_{\mathcal C}, \alpha_{\mathcal D}$ agree. This property expresses that the associators in $\mathcal C$, $\mathcal D$ are "the same" under the functor $F$. My question is whether there is a (useful) weakening of the axioms for a monoidal functor that expresses the possibility that the associators might disagree. ## An example: quasiHopf algebras Here is my motivating example. Let $A$ be a (unital, associative) algebra (over a field $\mathbb K$), and let $A\text{-rep}$ be its category of representations. I.e. objects are pairs $V \in \text{Vect}_{\mathbb K}$ and an algebra homomorphism $\pi_V: A \to \text{End}_{\mathbb K}(V)$, and morphisms are $A$-linear maps. Then $A\text{-rep}$ has a faithful functor $A\text{-rep} \to \text{Vect}_{\mathbb K}$ that "forgets" the map $\pi$. Suppose now that $A$ comes equipped with an algebra homomorphism $\Delta: A \to A \otimes_{\mathbb K} A$. Then $A\text{-rep}$ has a functor $\otimes: A\text{-rep} \times A\text{-rep} \to A\text{-rep}$, given by $\pi_{(V\otimes W)} = (\pi_V \otimes \pi_W) \circ \Delta: A \to \text{End}(V\otimes_{\mathbb K}W)$. Just this much data is not enough for $A\text{-rep}$ to be monoidal. (Well, we also need a map $\epsilon: A \to \mathbb K$, but I'm going to drop mention of the unit laws.) Indeed: there might not be an associator. A situation in which there is an associator on $(A\text{-rep},\otimes)$ is as follows. Suppose that there is an invertible element $p \in A^{\otimes 3}$, such that for each $a\in A$, we have $$p\cdot (\Delta \otimes \text{id})(\Delta(a)) = (\text{id} \otimes \Delta)(\Delta(a))\cdot p$$ and $\cdot$ is the multiplication in $A^{\otimes 3}$. 
Then for objects $(X,\pi_X), (Y,\pi_Y), (Z,\pi_Z) \in A\text{-rep}$, define: $$\alpha_{X,Y,Z} = (\pi_X \otimes \pi_Y \otimes \pi_Z)(p) : ((X\otimes_{A\text{-}{\rm rep}} Y) \otimes_{A\text{-}{\rm rep}} Z) \to (X \otimes_{A\text{-}{\rm rep}} (Y \otimes_{A\text{-}{\rm rep}} Z))$$ You can check that it is in fact a isomorphism in $A\text{-rep}$. Moreover, supposing that $p$ satisfies: $$(\text{id} \otimes \text{id} \otimes \Delta)(p) \cdot (\Delta \otimes \text{id} \otimes \text{id})(p) = (1 \otimes p) \cdot (\text{id} \otimes \Delta \otimes \text{id})(p) \cdot (p \otimes 1)$$ where now $\cdot$ is the multiplication in $A^{\otimes 4}$, then $\alpha$ is an honest associator on $A\text{-rep}$. Then (provided also that $A$ have some sort of "antipode"), the data $(A,\Delta,p)$ is a quasiHopf algebra. Anyway, it's clear from the construction that the forgetful map $\text{Forget}: A\text{-rep} \to \text{Vect}_{\mathbb K}$ is a faithful exact functor which is weakly monoidal in the sense that $\text{Forget}(X \otimes_{A\text{-}{\rm rep}} Y) = \text{Forget}(X) \otimes_{\mathbb K} \text{Forget}(Y)$ — indeed, this is equality of objects, so perhaps it is "strictly" monoidal — but it is not "monoidal" since it messes with the associators. ## Actual motivation My actual motivation for asking the question above is the understand the Tannaka duality for quasiHopf algebras. In general, we have the following theorem: Theorem: Let $\mathcal C$ be an abelian category and $F: \mathcal C \to \text{FinVect}_{\mathbb K}$ a faithful exact functor, where $\text{FinVect}_{\mathbb K}$ is the category of finite-dimensional vector spaces of $\mathbb K$. Then there is a canonical coalgebra $\text{End}^{\vee}(F)$, and $\mathcal C$ is equivalent as an abelian category to the category of finite-dimensional corepresentations of $\text{End}^{\vee}(F)$. For details, see A Joyal, R Street, An introduction to Tannaka duality and quantum groups, Category Theory, Lecture Notes in Math, 1991 vol. 1488 pp. 412–492. The Tannaka philosophy goes on to say that if in addition to the conditions in the theorem, $\mathcal C$ is a monoidal category and $F$ is a monoidal functor, then $\text{End}^{\vee}(F)$ is a bialgebra, and $\mathcal C$ is monoidally equivalent to $\text{End}^{\vee}(F)\text{-corep}$. If $\mathcal C$ has duals, $\text{End}^{\vee}(F)$ is a Hopf algebra. If $\mathcal C$ has a braiding, then $\text{End}^{\vee}(F)$ is coquasitriangular. Etc. My real question, then, is: What is the statement for Tannaka duality for (co)quasiHopf algebras? It seems that the standard paper to answer the real question is: S. Majid, Tannaka-Krein theorems for quasi-Hopf algebras and other results. Contemp. Math. 134 (1992), pp. 219–232. But I have not been able to find a copy of this paper yet. - Maybe looking at what properties the reasonable functor between the module category of a (quasi)Hopf algebra and that of a guage-equivalent (quasi)algebra might provide a hint on the correct axioms. – Mariano Suárez-Alvarez Mar 17 2010 at 5:44 I was under the impression that the Berkeley math library had copies of Contemp. Math. Is this not the case? – S. Carnahan♦ Mar 17 2010 at 8:47 @SC: We probably do, so I'll look this afternoon. This is one of those: I was trying to figure it out, came home, thought more, decided I didn't know, and posted the question. We don't seem to have online access, is what I meant. 
– Theo Johnson-Freyd Mar 17 2010 at 15:13 ## 5 Answers Theo, I believe the notion that you're looking for is called quasi-fiber functor, and is discussed in: http://www-math.mit.edu/~etingof/tenscat1.pdf It is also discussed there how to do Tannakian formalism in the presence of a quasi-fiber functor, and you get back a quasi-Hopf algebra, as you desire. I'll omit repeating an explanation here, since it's given nicely in the appropriate section linked. -david - One way to extract likely answers is to employ the Baez-Dolan periodic table, and consider monoidal categories as 2-categories with one object, and monoidal functors with various degrees of strictness as functors (or even correspondences) between them satisfying various amounts of compatibility. There are also some natural-sounding weakenings in this setting coming from homotopy theory, e.g., E[1] functors on E[1] infinity-categories in Lurie's DAG II and VI. Aside from passing to higher categories, there are standard approaches to weakening such as attacking the unit axioms, and attacking the associator compatibility. If I'm not mistaken, the associator compatibility can be weakened by asking the two natural transformations you gave before to be related by a specified strictly associative monoidal automorphism of the identity functor on D rather than being equal, and demanding that said automorphisms obey a compatibility for any four input objects. - Firstly: A functor that messes up the associator but preserves the product is NOT strictly monoidal, and from the following discussion I believe that in general it cannot be rectified into one that is. This is not an idea on weakening the assumptions of being an associator, and is thus not a direct answer to the question, but it is an analogy putting the question into another context, which to me at least clarifies it. In my point of view, a very good way of looking at these things is on the classifying space of the categories. Any monoidal structure defines an $A^\infty$ product on the classifying space (i.e. the type of product you get on a loop space, but without inverse). Any lax-monoidal functor defines a map of $A^\infty$ spaces, i.e. it preserves the product and the underlying homotopy associativity structure both up to homotopy. The same is true for a strong monoidal functor. A strict monoidal functor preserves the product and the structure on the nose. The sort of functor you are talking about preserves the product but not the underlying homotopy associativity structure, and this makes it a little weird. I have no concrete example for associativity, but I do have one for commutativity, i.e. symmetric monoidal functors and $E^\infty$ structures, and I believe this can provide an example for associativity as well if it is delooped a couple of times. Ex: Let $C$ be the topological groupoid with objects $\mathbb{Z}$ and morphisms only automorphisms aut$(n) = S^1$ as a topological abelian group (meaning that composition is the usual product). The monoidal product $\oplus$ is the following: on objects: $n\oplus m = n+m$ on morphisms: $a\oplus b = a\cdot b$ This is somewhat trivial and $BC$ is just the product $\mathbb{Z} \times K(\mathbb{Z},2)$ in the category of $E^\infty$ spaces.
However, I have said nothing about associators or symmetry isomorphisms because the product is both associative and commutative, BUT we can in fact give it a non-trivial such structure: Let $C'$ be the same category with the same monoidal product, but the natural coherence isomorphisms are not all assumed to be the identity. Indeed, the symmetries $\gamma_{n,m}^{\oplus} \colon n\oplus m \to m\oplus n$ are defined to be $\gamma_{n,m}^{\oplus} = (-1)^{nm}$, but all others are identities. One can check that this defines a symmetric monoidal category and the map $C \to C'$ given by the identity is a product preserving functor, but it is NOT *symmetric* monoidal. Furthermore, in the category of $E^\infty$ spaces $BC'$ is not the trivial product $BC$, and delooping a number of times (3 I think) produces a space which is not a product of Eilenberg–MacLane spaces. So no symmetric monoidal equivalence exists between $C$ and $C'$. In fact one can prove that there are precisely two extensions in the category of $E^\infty$ spaces and $E^\infty$ maps of $K(\mathbb{Z},n)$ and $K(\mathbb{Z},n+2)$ (the third homology group of the Eilenberg–MacLane spectrum $H\mathbb{Z}$ is $\mathbb{Z}/2\mathbb{Z}$), and one can check that these two examples are precisely those two. I am not sure if this helps with the concrete example, with which I am not that familiar. My idea was that maybe there is an obstruction to getting something more than product preserving (i.e. the $A^\infty$ structure may be different). - I think I agree most with this answer. In particular, we can think of monoidal categories as $A_\infty$ monoids in categories, and it sounds like Theo has a map of $A_n$ monoids for some small $n$ (like 2 or 3, I forget the indexing). However, I have been brought up to think of things that are not even $A_\infty$ as pretty awful, so perhaps Theo should try to find some extra "twisting" kind of structure (but I feel it may not be a purely category-theoretic notion). – Reid Barton Mar 18 2010 at 4:47 Define a lax monoidal category `$(\mathcal{C}, \{\otimes_n\},\{\gamma_{(i,j)}\}, \{i_a\}) $` as follows: $\mathcal{C}$ is a category. For each $n \in \mathbb{N}$, we have a functor of weight $n$, $\otimes_n: \mathcal{C}^n\to \mathcal{C}$ called the $n$-fold tensor. Since $\otimes_n$ is a functor $\mathcal{C}^n\to \mathcal{C}$, we can consider the composition $\otimes_n(\otimes_{j_1}, ..., \otimes_{j_n}):=\otimes_{(n,j)}$. This makes it a functor in $\sum_{i=1}^n j_i:=\ell$ objects. Then we have a morphism of functors $\gamma_{(n,j)}: \otimes_{(n,j)} \to \otimes_\ell$ that is natural in each of the $\ell$ coordinates. For each object $a$ of $\mathcal{C}$, we have a map $i_a: a \to (a)$ which is a natural transformation $Id\to \otimes_1$. More notation: `$\otimes_n(\otimes_{j_1}(\otimes_{k^1_1},...,\otimes_{k^1_{j_1}}), ..., \otimes_{j_n}(\otimes_{k^n_1},...,\otimes_{k^n_{j_n}})):=\otimes_{(n,j,k)}$`. (Note: The n,j,k are not actually integers. They're a nasty multi-index notation.)
Let $\epsilon=\sum_{i=1}^n j_i$, $\delta_i=\sum_{r=1}^{j_i}k^i_{r}$, and $\lambda=\sum_{i=1}^n \delta_i$. We require further that the $\gamma$ make the following diagrams commute (for every multi-index in our notation): ```$$\begin{matrix}&&\gamma_{(n,j)}&&\\ &\otimes_{(n,j,k)}&\to&\otimes_{(\ell,k)}&\\ \otimes_{r=1}^n\gamma_{(j,k)}&\downarrow&&\downarrow&\gamma_{(\ell,k)}\\ &\otimes_{(n,\delta)}&\to& \otimes_\lambda&\\ &&\gamma_{(n,\delta)}&& \end{matrix}$$``` Further, we require that the natural transformation $i$ makes the following diagrams commute: ```$$\begin{matrix} &&\otimes_n(i)\\ &\otimes_n &\to& \otimes_n\circ\otimes_1\\ id&\downarrow&&\downarrow & \gamma_{(n,1)}\\ &\otimes_n&\to&\otimes_n\\ &&id \end{matrix}$$``` ```$$\begin{matrix} &&i\otimes_n\\ &\otimes_n &\to& \otimes_1\circ\otimes_n\\ id&\downarrow&&\downarrow & \gamma_{(1,n)}\\ &\otimes_n&\to&\otimes_n\\ &&id \end{matrix}$$``` This defines a lax monoidal category. We call a lax monoidal category weak, or unbiased weak, if all of the $\gamma$ and $i$ are isomorphisms. (If they are equalities, then this is an unbiased strict monoidal category.) Now, there is a proliferation of diagrams, to be sure, but every diagram merely verifies the associativity or unit condition individually for each finite tensor product. It is conceptually simpler than the standard notion of a weak monoidal category. It's the naive approach. However, a priori, none of our maps are invertible. If we leave the diagrams as they are this is called a lax monoidal category, and if we turn the arrows around, it's called a colax monoidal category. Then an answer to your question is: yes. There are lax monoidal functors that preserve the lax monoidal structure, which is not a priori associative (it is lax associative, which means we can go in one direction). Edit: To give credit where credit is due, this is covered in Tom Leinster's book "Higher Operads, Higher Categories" in chapter 3. - See my discussion with Scott Carnahan in the comments on the question: I did end up getting Majid's paper from the library. Sure enough, he has the definition, so I feel mildly dumb for asking about it here. But anyway, if anyone actually reads this, go vote up other people's answers. For the record, Majid's paper says: Definition: A functor $F: \mathcal C \to \mathcal D$ is multiplicative if it comes equipped with a natural isomorphism $c: F(X) \otimes F(Y) \overset\sim\to F(X\otimes Y)$, with no further conditions. Theorem: Let $\mathcal C$ be a monoidal category equivalent to a small category. Let $K$ be a commutative ring, and define $\text{Vect}_K$ to be the category of finitely-generated projective $K$-modules. Suppose that $(F,c) : \mathcal C \to \text{Vect}_K$ is a multiplicative functor. Then there exists a coquasibialgebra $A$ (over $K$) such that $(F,c)$ factors as a monoidal functor $C \to \text{comod}_A$ followed by the forgetful functor; in fact, there is a universal such $A$, and it is unique up to unique isomorphism. Moreover, if $\mathcal C$ is braided, then $A$ is coquasitriangular. Here a coquasibialgebra is a coalgebra $A$ along with a "multiplication" $A \otimes A \to A$ that is associative only up to an inner automorphism: $A$ comes equipped with a map $\phi: A^{\otimes 3} \to K$ such that the two associations of multiplication are related by conjugation by $\phi$ in the convolution algebra. It is coquasitriangular if the "multiplication" is commutative up to an inner automorphism in the convolution algebra. As is also well-known (see e.g.
Joyal and Street), if $\mathcal C$ is $K$-linear and abelian and the functor $F$ is exact and faithful, then $\mathcal C$ is actually equivalent as an (all these adjectives) category to $\text{comod}_A$. -
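A small gloss on the last definition, since "associative up to an inner automorphism in the convolution algebra" is easy to misread. Written out in Sweedler notation (this is my paraphrase of the usual dual-quasi-bialgebra axiom, up to conventions, not a quotation from Majid's paper), the requirement on the multiplication of $A$ is that for all $x,y,z \in A$,
$$\sum \phi\big(x_{(1)}, y_{(1)}, z_{(1)}\big)\,\big(x_{(2)} y_{(2)}\big)\, z_{(2)} \;=\; \sum x_{(1)}\big(y_{(1)} z_{(1)}\big)\,\phi\big(x_{(2)}, y_{(2)}, z_{(2)}\big),$$
with $\phi$ convolution-invertible and subject to a 3-cocycle-type coherence condition. The simplest examples I know are group algebras: if I have the conventions right, $K[G]$ with its usual coalgebra structure, the group multiplication, and $\phi(g\otimes h\otimes k) = \omega(g,h,k)$ is a coquasibialgebra exactly when $\omega \in Z^3(G, K^\times)$, and its comodule category is $G$-graded vector spaces with associator $\omega$. The smallest nontrivial case is $G = \mathbb{Z}/2$ with $\omega(a,b,c) = (-1)^{abc}$; the following few lines of Python (my own sanity check, nothing here is from the thread) verify the cocycle identity by brute force:

```python
# Check that w(a,b,c) = (-1)^{abc} is a (normalized) 3-cocycle on Z/2:
#   w(b,c,d) * w(a, b+c, d) * w(a,b,c) == w(a+b, c, d) * w(a, b, c+d)   (sums mod 2)
from itertools import product

def w(a, b, c):
    return (-1) ** (a * b * c)

for a, b, c, d in product(range(2), repeat=4):
    lhs = w(b, c, d) * w(a, (b + c) % 2, d) * w(a, b, c)
    rhs = w((a + b) % 2, c, d) * w(a, b, (c + d) % 2)
    assert lhs == rhs, (a, b, c, d)

print("(-1)^{abc} satisfies the 3-cocycle identity on Z/2")
```

Since this $\omega$ represents the nontrivial class in $H^3(\mathbb{Z}/2, K^\times)$ (for suitable $K$), the resulting associator on $\mathbb{Z}/2$-graded vector spaces cannot be gauged away, which is exactly the kind of "multiplicative but not monoidal" forgetful functor the question is about.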
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 151, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934828519821167, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/3150/is-there-a-t-dual-of-wittens-twistor-topological-string-theory
# Is there a T-dual of Witten's twistor topological string theory? In late 2003, Edward Witten released a paper that revived the interest in Roger Penrose's twistors among particle physicists. The scattering amplitudes of gluons in $N=4$ gauge theory in four dimensions were expressed in a simple way using the twistor variables. Witten also proposed a particular model, the topological B-model on the $CP^{3|4}$ twistor space, to generate all these amplitudes. These methods began their own life but the topological B-model became largely silent, perhaps partly because the phenomenologists who fell in love with these things haven't been trained in string theory, especially not in the topological one. However, many twistor-related discoveries in the recent 3 years - which were made without Witten's constructive picture - lead me to ask whether Witten's theory actually knows about these matters. In particular, the "dual superconformal symmetry" was first noticed by Drummond et al. in 2006 and derived by stringy methods by Alday & Maldacena in 2008 or so. The 3+1 dimensions on the CFT boundary may be T-dualized to produce another copy of the Yang-Mills theory that is superconformally invariant once again. Scattering amplitudes have been converted to the expectation values of piecewise linear Wilson loops in the dual theory - the segments have the directions and length of the light-like momenta of the scattering particles. My question is Can you also "T-dualize" Witten's topological B-model to obtain another one in which the scattering amplitudes are computed in a different way? If you think that the answer is Yes, I would also like to know what is the "dual prescription" for the supersymmetric Yang-Mills amplitudes and whether the D1- and D5-branes in Witten's original models are replaced by other D1- and D5-branes or, for example, by D3-branes. - 3 I think this is a very nice question. It might take some time until it may be answered since it is one of few research questions on the site so far. I greatly acknowledge your work done for the community @Luboš Motl. You may also want to consider area51.stackexchange.com/proposals/23848/theoretical-physics as a place to post research question as soon as the site starts :) Greets – Robert Filter Jan 17 '11 at 13:33 Sir Roger Penrose! ;) In any case, good question, but I think as this is in advanced string theory question, you may be lucky to get answers here. – Noldorin Jan 17 '11 at 14:36 1 Anyone who can answer this question correctly is likely to write up a journal article and publish it first, before answering it here. :) – QGR Jan 17 '11 at 15:20 1 Hi! I don't expect to get a satisfactory answer, with all my respect to the big shot physicists here. After all, Nima Arkani-Hamed who has discussed it with Edward Witten hasn't answered it, either. But who knows? :-) If someone writes an article about it after reading this, good for science and for this server, too. (I have only partial clues how the T-dual theory could work.) Noldorin: sorry, we abandoned the kingdom in 1918 and kings were replaced by presidents. These days, we don't use Sir for the president to be respectful. Instead, he is the Professor. ;-) – Luboš Motl Jan 17 '11 at 17:58 ## 2 Answers Luboš would know this already (he's acknowledged in this paper), but Neitzke and Vafa conjectured in 2004 that the mirror manifold of $CP^{3|4}$ is a quadric surface $Q$ in $CP^{3|3}$ x $CP^{3|3}$, and mirror symmetry is a type of T-duality. 
There were a few follow-ups, including a paper by Sinkovics and Verlinde which studies classical $N=4$ super-Yang-Mills on the quadric, which in the very last paragraph asks whether the quantum scattering amplitudes can also be recovered from $Q$. After that, I can find nothing. But at least it's a place to begin! - This is a very good reminder, @Mitchell! I would forget about this mirror manifold, especially because I never found it too natural... +1 but let's keep the question open. The newest citation of Verlinde-Anamaria is from 2006 which is long before the dual superconformal symmetry etc. – Luboš Motl Feb 8 '11 at 19:08 The only thing which might be done is to cast the question in different forms. The CY supermanifold $CP^{3|4}$ for the "4" corresponding to a spinor field and "3" coordinates might be cast into $J^5(C) = R\oplus J^4\oplus C^4$, so the twistor components are contained in a $5\times5$ self adjoint matrix. By extension or analogue the question is whether this has some higher Jordan algebraic or a $J^3(O)$ realization. The cubic form gives $OP^2 \sim OP^1$, which might (I stress might without any strong evidence) mean the $D1$ is dual to a $D2$ or $M2$. The scalar part of this cubic form is the Chern-Simons form. As for any duality with the $D5$ (or $NS5$ “black brane”) that would have to be determined. The CS Lagrangian has a winding number transformation $L \rightarrow L + 2πNk$, which might then have a coordinate dual $x \rightarrow x + 2πiR$ winding or compactification. A chance to ponder per chance to solve. This might be one way of trying to think about it. - Thank you for your answer and welcome to the site! I don't have the background to get your answer but for those who do: Could you please format your formula using the MathJax notation employed here? Just set everything into two \$'s and it will be displayed like via LaTeX. Greets – Robert Filter Jan 17 '11 at 21:14 If there is something wrong with my suggestion, please let me and everyone know --- just be kind about. If it is wrong, then we need to think about something else. Cheers – Lawrence B. Crowell Jan 17 '11 at 22:18 I fixed some latex. @Lawrence you just need to put your math markup in – user346 Jan 17 '11 at 22:27 10 Thanks, Lawrence, but if your advanced text using as unexpected yet mutually unrelated objects as the Jordan algebra over octonions, M2-branes, NS5-branes, and the CS Lagrangian with winding numbers (?) has anything to do with my question, it may take centuries for the likes of me to understand the relationship. ;-) – Luboš Motl Jan 20 '11 at 8:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9531675577163696, "perplexity_flag": "middle"}
http://physicspages.com/2011/04/10/time-dilation/
Notes on topics in science

## Time dilation

Required math: algebra, geometry

Required physics: basics of relativity

We saw in an earlier post that the events with a particular interval $\Delta s^{2}$ between them lie on an invariant hyperbola, and that this hyperbola can be used to calibrate the time or space intervals on the coordinate axes of two observers. We noted in passing that one effect of the postulates of relativity is time dilation, in which one observer believes the other observer's clocks run slow relative to himself. We'll review the effect here and then resolve a common so-called paradox that many people believe invalidates relativity.

Consider the situation shown in the figure. Observer $O_{2}$ moves with a speed $v$ in the $x$ direction relative to observer $O_{1}$ as usual. The coordinate systems of the two observers are shown on the diagram, and the dark blue hyperbola (with equation $x_{1}^{2}-t_{1}^{2}=-4$) passing through the points $A$ and $B$ defines all events with an interval $\Delta s^{2}=-4$. This means that the point $A$, where the hyperbola intersects the $t_{1}$ axis, is the point where $O_{1}$'s clock with world line $0A$ reads $t_{1}=2$. Observer $O_{2}$'s clock whose world line is $0B$ reads $t_{2}=2$ at point $B$.

Since $O_{1}$ measures all events on a horizontal line passing through $A$ as being simultaneous, and since $B$ is not on this line, he will say that $A$ and $B$ occur at different times. As we saw in the earlier post, we can work out the coordinates of event $B$ (by solving for the intersection of the hyperbola and the line $0B$), and they turn out to be

$$x_{1}=\frac{2v}{\sqrt{1-v^{2}}},\qquad t_{1}=\frac{2}{\sqrt{1-v^{2}}}$$

That is, $O_{1}$ measures the time of event $B$ as $t_{1}=t_{2}/\sqrt{1-v^{2}}$ while $O_{2}$ measures it as $t_{2}=2$. Since $t_{1}>t_{2}$, $O_{1}$ believes that more time has elapsed than $O_{2}$ does, so he believes that $O_{2}$'s clock runs slow. This is the time dilation effect.

## A paradox?

The paradox (or so it is sometimes believed) is this: if $O_{1}$ believes $O_{2}$'s clock runs slow, then surely $O_{2}$ must believe that $O_{1}$'s clock runs fast. However, the principle of relativity (all inertial frames are equivalent) says that each observer should believe the same thing about the other, so that $O_{2}$ should measure $O_{1}$'s clock as slow, not fast, relative to himself.

The reason why this apparent paradox occurs is that the measurement process which gives rise to the time dilation effect is not symmetric between the two observers. To see this, we must consider carefully what it is we are actually measuring. The disparity occurs because we are comparing what the two observers are measuring as the time of event $B$. $O_{2}$ does this with one clock: the clock with world line $0B$. $O_{1}$, however, uses two clocks to do the measurement. The first clock is one whose world line is $0A$. This clock coincides with $O_{2}$'s clock at the origin, where both observers agree that the time of this event is $t_{1}=t_{2}=0$.
However, after this event, the two clocks diverge and follow different world lines, so when the observers want to measure the time of event $B$, $O_{1}$ can't use the same clock that was used to measure the time at the origin, since that clock's world line doesn't go through event $B$. Instead he has to use the clock with world line $FB$. This looks fine to $O_{1}$ since he measures events $0$ and $F$ as simultaneous (they both lie on his $x_{1}$ axis so they both occur at $t_{1}=0$). However, $O_{2}$ disagrees that events $0$ and $F$ are simultaneous. According to $O_{2}$, the events that are simultaneous with event $0$ are those that lie on the $x_{2}$ axis (the red line in the diagram), and it is clear that event $F$ occurs before any event on this axis. Thus to $O_{2}$, $O_{1}$'s clock ticks off more time in travelling from $F$ to $B$ than $O_{2}$'s clock does in going from $0$ to $B$. Thus both $O_{1}$ and $O_{2}$ will agree, after doing this analysis, that $O_{1}$'s clock should read a later time than $O_{2}$'s clock at event $B$.

Now let's look at event $B$ from another viewpoint. According to $O_{2}$, events that are simultaneous with $B$ must lie on the line passing through $B$ and parallel to the $x_{2}$ axis; this line is shown in dark green in the figure. This line intersects the $t_{1}$ axis at event $D$, so $O_{2}$ measures the time of event $D$ as $t_{2}=2$. What time does $O_{1}$ assign to this event? We can work this out by finding the coordinates of $D$. The equation of the dark green line, which has slope $v$ (as we saw in an earlier post), using the point-slope form of a straight line, is (we'll work it out for a general value of $t_{2}$ since we're trying to show the time dilation effect is symmetric):

$$t_{1}-\frac{t_{2}}{\sqrt{1-v^{2}}}=v\left(x_{1}-\frac{t_{2}v}{\sqrt{1-v^{2}}}\right)$$

where we used the coordinates of $B$ given above as the point through which this line passes. This line crosses the $t_{1}$ axis at $x_{1}=0$, so the time of event $D$ as measured by $O_{1}$ is

$$t_{1}=\frac{t_{2}}{\sqrt{1-v^{2}}}-\frac{t_{2}v^{2}}{\sqrt{1-v^{2}}}=t_{2}\,\frac{1-v^{2}}{\sqrt{1-v^{2}}}=t_{2}\sqrt{1-v^{2}}$$

so that

$$t_{2}=\frac{t_{1}}{\sqrt{1-v^{2}}}$$

That is, $O_{2}$ now sees $O_{1}$'s clock as running slow compared to his own. So the time dilation effect really does work both ways, and each observer really does see the other's clocks as running slow. In this last analysis, note that we are using one of $O_{1}$'s clocks (the one with world line $0D$) and two of $O_{2}$'s clocks (one with world line $0B$ to measure the time at the origin, and the other with world line $GD$ (light green) to measure the time of event $D$). In this case, $O_{2}$ measures events $0$ and $G$ as simultaneous, but $O_{1}$ thinks $G$ occurred before $0$, so he thinks $O_{2}$ is measuring a longer time than $O_{1}$.
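As a quick cross-check of the algebra in the last two calculations, here is a short Python sketch (my own addition, using units where $c=1$ and the sample value $v=0.6$; any $|v|<1$ works). It solves for event $B$ on the hyperbola and for event $D$ on $O_{2}$'s simultaneity line, and confirms that each observer ascribes the same slow-down factor $1/\sqrt{1-v^{2}}$ to the other's clock.

```python
import math

v = 0.6                      # sample relative speed, units with c = 1
t2 = 2.0                     # reading of O2's clock at event B
gamma = 1.0 / math.sqrt(1.0 - v**2)

# Event B: intersection of O2's world line x1 = v*t1 with the hyperbola x1^2 - t1^2 = -4
t1_B = t2 * gamma
x1_B = v * t1_B
assert abs(x1_B**2 - t1_B**2 + 4.0) < 1e-12      # B really lies on the invariant hyperbola
assert t1_B > t2                                 # O1 says more time elapsed: O2's clock runs slow

# Event D: the line through B with slope v (O2's line of simultaneity) meets x1 = 0
t1_D = t2 * gamma + v * (0.0 - v * t2 * gamma)   # point-slope form evaluated at x1 = 0
assert abs(t1_D - t2 * math.sqrt(1.0 - v**2)) < 1e-12

# O2 assigns t2 = 2 to D while O1's own clock reads t1_D < t2: O1's clock runs slow for O2.
print(t1_B / t2, t2 / t1_D)   # both equal gamma = 1/sqrt(1 - v^2) = 1.25 for v = 0.6
```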
The key in understanding time dilation, and resolving the paradox, is to understand that the measurements involved in analyzing the times determined for a given event are not symmetric between the two observers: one observer always has to use one clock, and the other observer always has to use two clocks. This is because the two observers are moving relative to each other, so when one of $O_{1}$'s clocks coincides with one of $O_{2}$'s clocks, those two clocks can never be at the same place again, so one of the observers has to use a different clock to compare times between the two frames. The time dilation effect most definitely does not imply that all clocks in one frame run at a different rate from all clocks in another frame. You must be very careful about which clocks are being compared, and at which events the comparison takes place.

One final note: nowhere in this discussion of time have we made any assumption about the nature of the clocks being used to make the measurements. They could be mechanical clocks or atomic clocks or whatever. These effects arise entirely from the postulates of relativity, and the difference between relativity and Galilean physics is due entirely to the assumption of the constancy of the speed of light. The time dilation effect is real and is a property of time itself, not of the clocks used to measure it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 127, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593843221664429, "perplexity_flag": "head"}
http://mathoverflow.net/questions/63969/what-is-an-explicit-example-of-a-variety-x-which-is-finite-over-spec-f-p-but-whic/64026
## What is an explicit example of a variety X which is finite over Spec F_p but which does not lift to a scheme Y which is finite and flat over Spec Z_p?

What is an explicit example of a variety X which is finite over Spec F_p but which does not lift to a scheme Y which is finite and flat over Spec Z_p? - Two things: (i) I'm guessing you want examples of varieties which do not lift, since it is easy to give examples which do. (ii) If you don't put any further conditions on the problem, it seems to me to be trivial: if you have a finite set of equations with coefficients in $\mathbb{F}_q$, you can lift them, coefficient by coefficient, to a set of equations with coefficients in the ring of integers of some number field. Doesn't this show what you want? – Pete L. Clark May 5 2011 at 3:45 Thanks Pete, I will edit the question. As for (ii), I certainly want to avoid things like Z_p[x]/(px^2 + x) lifting X = Spec F_p (actually, I care most about the case of lifting a finite scheme over F_p to a finite scheme over Z_p). – David Zureick-Brown♦ May 5 2011 at 3:54 4 @David: okay, now you have edited to a question that I don't know how to answer. :) – Pete L. Clark May 5 2011 at 4:00 ## 2 Answers I received the following very explicit answer via private communication: the algebra $$A = \mathbb{F}_p[x_1,\ldots,x_6]/(x_1^p,\ldots, x_6^p, x_1x_2 + x_3x_4 + x_5x_6)$$ does not lift to a finite flat $\mathbb{Z}_p$-algebra. (I am still working out the details of why this does not lift.) This is example 3.2(4) of Berthelot-Ogus. - Am I missing something or is this the classical question of Serre? A class of examples is given in Exemples de variétés projectives en caractéristique p non relevables en caractéristique zéro. Proc. Nat. Acad. Sci. U.S.A. 47 (1961), 108–109. They come from the quotient of some complete intersections by some finite groups, but if you read closely the proof (by the theory of the étale fundamental group, the impossibility of constructing a lift is reduced to the impossibility of constructing some group representations), you see that the ideas are in fact quite general. In order to give a non-vacuous answer, let me also draw your attention to the letter of Serre in the appendix of the document illusie_trieste.pdf on Luc Illusie's website. - 3 I was under the impression that Serre's constructions have dimension at least 2, and David Brown is looking for something finite. – S. Carnahan♦ May 5 2011 at 19:24
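A small computational footnote to the first answer (my own addition): the script below only checks the easy half of the statement, namely that $A$ really is a finite $\mathbb{F}_p$-algebra, i.e. finite over $\operatorname{Spec}\mathbb{F}_p$; it says nothing about liftability. For $p=2$ it computes $\dim_{\mathbb{F}_2} A$ by plain linear algebra: inside $B=\mathbb{F}_2[x_1,\dots,x_6]/(x_1^2,\dots,x_6^2)$, which has the $64$ square-free monomials as a basis, the ideal generated by $q=x_1x_2+x_3x_4+x_5x_6$ is spanned by the products $m\cdot q$ over all monomials $m$, so its dimension is a rank computation over $\mathbb{F}_2$.

```python
# dim over F_2 of A = F_2[x1..x6]/(x1^2,...,x6^2, x1x2 + x3x4 + x5x6)  (the p = 2 case).
# Square-free monomials of B = F_2[x]/(x_i^2) <-> subsets of {0,...,5}, encoded as 6-bit masks;
# an element of B is a 64-bit integer whose bits mark which monomials appear.

Q_TERMS = [0b000011, 0b001100, 0b110000]        # x1x2, x3x4, x5x6

def times_q(m):
    """Return the element m*q of B as a 64-bit row vector."""
    row = 0
    for t in Q_TERMS:
        if m & t == 0:                          # otherwise some x_i^2 appears and the term is 0
            row |= 1 << (m | t)                 # the surviving products are distinct monomials
    return row

# Rank over F_2 of the span of {m*q}, via a greedy XOR basis on 64-bit rows.
basis = []
for m in range(64):
    row = times_q(m)
    for b in basis:
        row = min(row, row ^ b)
    if row:
        basis.append(row)

dim_ideal = len(basis)
dim_A = 64 - dim_ideal
print("dim_F2 of the ideal (q):", dim_ideal)
print("dim_F2 of A:", dim_A)                    # a finite number, so A is finite over Spec F_2
```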
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9340617656707764, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27041/linearizing-quantum-operators
# Linearizing Quantum Operators [duplicate]

Possible Duplicate: Linearizing Quantum Operators

I was reading an article on harmonic generation and came across the following way of decomposing the photon field operator. $$\hat{A}={\langle}\hat{A}{\rangle}I+ \Delta\hat{a}$$ The right hand side is a sum of the "mean" value and the fluctuations about the mean. While I understand that the physical picture is reasonable, is this mathematically correct? If so, what are the constraints this imposes? In the literature this is designated as a "linearization" process. My understanding of a linear operator is that it is simply a homomorphism. I have never seen anything done like this and I'm having a hard time finding references which justify this process. I would be grateful if somebody can point me in the right direction! - 3 Why not look at it as a definition of $\Delta \hat a$? Then it can always be done (provided that the mean value is finite). It does not even imply any linearization. If the fluctuation magnitude is small with respect to the mean value, it can be considered "linearization", like $f(x)\approx f(x_0)+ f'(x_0)\cdot (x-x_0)$, but as a definition it is valid for any $\Delta \hat a$. – Vladimir Kalitvianski Oct 23 '11 at 20:24 – Antillar Maximus Oct 23 '11 at 21:24 I don't see how this can be justified based on calculus reasoning alone? The first thing that comes to my mind is Cosets, but I am not sure how to take that anywhere. – Antillar Maximus Oct 23 '11 at 21:25 Imagine the operator $\hat A$ has eigenfunctions. Then in the space of its eigenfunctions it is like a regular function, not an operator. And even then, $\hat A$ is always an operator. The definitions work as long as they are reasonable, reversible, etc. So it may not be so necessary to search for a group theoretic reason. You just introduce new variables and you work with them, that's it. – Vladimir Kalitvianski Oct 23 '11 at 21:40 3 @Qmechanic: Argh! I wish people wouldn't do that. It makes it impossible to keep track of whether it has been answered or not. – Joe Fitzsimons Oct 24 '11 at 10:24 ## marked as duplicate by Qmechanic♦ Jan 13 at 13:49 ## 1 Answer The operation you mentioned $$\Delta \hat{a} = \hat{A}-\langle \hat{A}\rangle \hat{I}$$ is just a shift of the annihilation operator. Typically people even drop the identity and write $$\hat{a}_{new}=\hat{a}_{old}-\alpha_0,$$ where $\alpha_0$ is a complex number. In your question the shift is such that $\langle \hat{a}_{new} \rangle = 0$, which may be convenient for calculations. In particular, when the pump beam has many photons, the quantum part (i.e. related to $\hat{a}_{new}$) may be neglected. Additionally, $\hat{a}_{new}$ is an annihilation operator with the same commutation relations as $\hat{a}_{old}$. From a mathematical point of view it is a perfectly legitimate operation. Moreover, both operators have the same domain, and the spectrum is only shifted by $\alpha_0$. To see that the domain is the same, take any $|\psi\rangle \in \text{dom}(\hat{a}_{old})$. Then, denoting $|\phi\rangle := \hat{a}_{old} |\psi\rangle$, we check that $\hat{a}_{new} |\psi\rangle = |\phi\rangle-\alpha_0|\psi\rangle$ is a well-defined vector. -
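To make the answer's point concrete, here is a tiny numerical illustration (my own sketch, not from the thread): on a truncated Fock space the shift by a complex number $\alpha_0$ drops out of every commutator, so $\hat a_{new}$ obeys exactly the same bosonic commutation relations as $\hat a_{old}$. The dimension $N$ and the value of $\alpha_0$ below are arbitrary choices for the demo.

```python
import numpy as np

N = 20                                  # truncated Fock-space dimension (arbitrary for the demo)
alpha0 = 0.7 - 1.3j                     # arbitrary complex shift

# Truncated annihilation operator: a|n> = sqrt(n)|n-1>
a_old = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
a_new = a_old - alpha0 * np.eye(N)

def comm(x, y):
    return x @ y - y @ x

# (a - alpha0) and its adjoint have the same commutator as a and a^dagger,
# because the scalar terms cancel identically.
assert np.allclose(comm(a_new, a_new.conj().T), comm(a_old, a_old.T))

# The shift does change expectation values: in any state, <a_new> = <a_old> - alpha0,
# which is what makes the choice alpha0 = <a_old> give <a_new> = 0.
print("commutation relations are unchanged by the shift")
```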
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480318427085876, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/281759/does-multiplicative-version-of-azumas-inequality-hold
# Does Multiplicative Version of Azuma's Inequality Hold?

We know that there are multiplicative versions of concentration inequalities for sums of independent random variables, for example the following multiplicative Chernoff bound.

Chernoff bound: Let $X_1,\ldots,X_n$ be independent random variables and $X_i \in \{0,1\}$. Let $X=\sum_{i=1}^n X_i$. Then for any $\delta>0$, $\Pr\left(X \ge (1+\delta)EX \right) \le e^{-c\cdot(EX)\delta ^2},$ where $c$ is some absolute constant.

Now we consider dependent random variables. A slight variant of Azuma's inequality states the following.

Azuma's Inequality: Let $X_1,\ldots,X_n$ be (dependent) random variables and $X_i \in \{0,1\}$. Assume that there exists $m$ such that $\Pr\left( \sum_{i=1}^n \mathbb{E}[X_i|X_{<i}] \le m\right) = 1.$ Let $X=\sum_{i=1}^n X_i$. Then for any $\lambda > 0$, $\Pr\left(X \ge m+\lambda \right) \le e^{-2 \lambda^2/n}.$

Clearly Azuma's inequality is additive. My question is whether a multiplicative version of Azuma's inequality such as the following holds.

My question: Let $X_1,\ldots,X_n$ be (dependent) random variables and $X_i \in \{0,1\}$. Assume that there exists $m$ such that $\Pr\left( \sum_{i=1}^n \mathbb{E}[X_i|X_{<i}] \le m\right) = 1.$ Let $X=\sum_{i=1}^n X_i$. Then for any $\delta >0$, $\Pr\left(X \ge (1+\delta)m \right) \le e^{-c\cdot m \delta^2},$ where $c$ is some absolute constant.

Note that the standard Azuma's inequality does not imply the multiplicative version when $m \ll \sqrt{n}$. -
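No answer here, but for orientation, the independent case that the question starts from is easy to poke at numerically. The sketch below is my own illustration, not part of the question: it draws binomial samples, estimates the tail $\Pr(X \ge (1+\delta)EX)$, and compares it with $e^{-EX\,\delta^2/3}$, using $c=1/3$, which is the standard explicit constant in the multiplicative Chernoff bound for $0 < \delta \le 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.05                      # sample parameters; EX = n*p = 50
mu = n * p
trials = 200_000

# Each draw of Binomial(n, p) is a sum of n independent Bernoulli(p) variables.
X = rng.binomial(n, p, size=trials)

for delta in (0.2, 0.5, 1.0):
    empirical = np.mean(X >= (1 + delta) * mu)
    bound = np.exp(-mu * delta**2 / 3)          # multiplicative Chernoff, c = 1/3, 0 < delta <= 1
    print(f"delta={delta}: empirical tail {empirical:.2e}  <=  bound {bound:.2e}")
    assert empirical <= bound
```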
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8698582053184509, "perplexity_flag": "head"}