Dataset columns: added (string date, from 2025-03-12 15:57:16 to 2025-03-21 13:32:23); created (timestamp[us] date, from 2008-09-06 22:17:14 to 2024-12-31 23:58:17); id (string, length 1 to 7); metadata (dict); source (string, 1 value); text (string, length 59 to 10.4M).
2025-03-21T14:48:31.210112
2020-06-08T00:06:31
362469
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Iosif Pinelis", "Red shoes", "https://mathoverflow.net/users/108824", "https://mathoverflow.net/users/36721" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629977", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362469" }
Stack Exchange
Is the integral functional $I(x) = \int_{0}^{T} \Lambda (t , x(t), \dot{x} (t)) \; dt $ locally Lipschitz on the space $C^2 [0 ,T] $? Let the function $\Lambda : [0,T] \times \mathbb{R^n} \times \mathbb{R^n} \to \mathbb R$ be continuously differentiable. Then the integral functional $I(x) = \int_{0}^{T} \Lambda (t , x(t), \dot{x} (t)) \; dt $ takes finite values for all $x \in C^{2}[0 ,T]$. My question: Is the integral functional $I(x) = \int_{0}^{T} \Lambda (t , x(t), \dot{x} (t)) \; dt $ Lipschitz on a neighborhood of $x_0$ in the space $C^2 [0,T]$ equipped with the $W^{1,1}$ norm? P.S.: $AC[0, T]$ stands for the space of all absolutely continuous functions $x: [0,T] \to \mathbb R^n$ equipped with the $W^{1,1}$ norm, namely $$ \| x \| := \int_{0}^{T} \|x(t)\| \; dt + \int_{0}^{T} \|x' (t)\| \; dt.$$ Clearly $C^2 [0 ,T] \subset AC [0 ,T]$. The answer is no. Indeed, let $n=1$, $T=1$, and $\Lambda(t,u,v)\equiv v^2$, so that \begin{equation} I(x)=\int_0^1 x'(t)^2\,dt. \end{equation} Let $x_0:=0$ and, for each real $b\ge1$ and all $t\in[0,1]$, \begin{equation} y_b(t):=e^{-bt}. \end{equation} Then \begin{equation} \|y_b-x_0\|=\|y_b\|=\int_0^1 |y_b(t)|\,dt+\int_0^1 |y_b'(t)|\,dt\le1/b+1\le2, \end{equation} so that $y_b$ is in the ball of radius $2$ centered at $x_0$. However, \begin{equation} I(y_b)-I(x_0)=I(y_b)=\int_0^1 y_b'(t)^2\,dt\sim b/2\to\infty \end{equation} as $b\to\infty$. So, the functional $I$ is not locally Lipschitz. Why is $I$ differentiable in the first place? I'm actually trying to show that $I$ is Fréchet differentiable! That's why I asked this question. Do you have any reference in mind regarding this issue? Thanks in advance. By the way, when I try to apply the DCT, I'm not sure how to choose a dominating function that lets me interchange the limit and the integral below: $$\lim_{n \to \infty} \int_{0}^{T} \frac{\Lambda (t , x(t)+ \tau_n h(t), \dot{x} (t)+ \tau_n \dot{h}(t)) - \Lambda (t , x(t), \dot{ x} (t))}{\tau_n}\, dt.$$ In fact, $I$ is not locally Lipschitz.
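As a quick numerical sanity check of the counterexample above (not part of the original thread; a small Python sketch assuming numpy, with crude Riemann sums standing in for the exact integrals), one can watch the $W^{1,1}$-norm of $y_b$ stay bounded while $I(y_b)$ grows like $b/2$:

```python
import numpy as np

# y_b(t) = exp(-b t) on [0, 1]: the W^{1,1}-norm stays below 1/b + 1 <= 2,
# while the energy I(y_b) = int_0^1 y_b'(t)^2 dt ~ b/2 blows up as b grows.
t, dt = np.linspace(0.0, 1.0, 400001, retstep=True)

for b in [1, 10, 100, 1000]:
    y = np.exp(-b * t)
    dy = -b * y                                               # y_b'(t)
    w11 = (np.sum(np.abs(y)) + np.sum(np.abs(dy))) * dt       # ~ ||y_b|| in W^{1,1}
    energy = np.sum(dy ** 2) * dt                             # ~ I(y_b)
    print(f"b={b:5d}   W11 norm ~ {w11:.3f}   I(y_b) ~ {energy:.1f}   b/2 = {b / 2:.1f}")
```

The printed norms stay near $1$ while the energies track $b/2$, matching the displayed estimates.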
2025-03-21T14:48:31.210254
2020-06-08T00:40:36
362470
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Ashutosh", "Dieter Kadelka", "bof", "https://mathoverflow.net/users/100904", "https://mathoverflow.net/users/2689", "https://mathoverflow.net/users/43266" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629978", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362470" }
Stack Exchange
On a property possibly separating countable and uncountable cardinals The following question is inspired by Function with vector space, which has been closed for an unknown reason and which may have a well-known answer. Is the following true? Let $X$ be an uncountable set. Then there is a function $f \colon X \times X \to \mathbb{N}$ such that for any function $g \colon X \to \mathbb{N}$ there is $(x,y) \in X^2$ with $f(x,y) > g(x) + g(y)$. It is easy to show that this is false if $X$ is countable. Suppose $c:[\omega_1]^2 \to \omega$ satisfies: For every uncountable $Y \subseteq \omega_1$, the range of $c \upharpoonright [Y]^2$ is unbounded in $\omega$. The existence of such $c$ and much more is well known (see S. Todorcevic, Partitioning pairs of countable ordinals, Acta Math., 159(3–4):261–294, 1987). So you can define $f(x, y) = c(\{x, y\})$ if $x \neq y$, and $0$ otherwise. It follows that there is such an $f:X^2 \to \omega$ iff $X$ is uncountable. @Ashutosh Todorcevic's result is much stronger, but you don't need Todorcevic to get $f$ with the property you need, it's just an exercise in elementary set theory. For each $\alpha\in\omega_1$ choose an injection $\psi_\alpha:\alpha\to\mathbb N$. For $\beta\lt\alpha\in\omega_1$ define $f(\alpha,\beta)=f(\beta,\alpha)=\psi_\alpha(\beta)$. So $f:\omega_1\times\omega_1\to\mathbb N$, and $f$ is unbounded on $X\times X$ if $X\subseteq\omega_1$ is uncountable, indeed, if $X$ has order type $\ge\omega+1$. Thank you bof and @Ashutosh. I'm sorry, but I prefer the answer of bof. It is simple and effective. Can you make it into an answer? Yes, such a function $f:X\times X\to\mathbb N$ exists if $X$ is uncountable. It will suffice to prove it for $X=\omega_1$. The following proof is based on the same idea as a comment by Ashutosh but uses elementary set theory instead of the result of Todorcevic. For each ordinal $\alpha\in\omega_1$ choose an injective map $\psi_\alpha:\alpha\to\mathbb N$. Define a function $f:\omega_1\times\omega_1\to\mathbb N$ so that $f(\alpha,\beta)=\psi_\alpha(\beta)$ when $\beta\lt\alpha$. Now consider any function $g:\omega_1\to\mathbb N$. Then $g$ is bounded on some uncountable set $Y\subseteq\omega_1$; say $g(\xi)\le n\in\mathbb N$ for all $\xi\in Y$. Let $Z\subset Y$ be a set of order type $\omega+1$ and let $\alpha=\max Z$. Since $\{f(\alpha,\beta):\beta\in Z\cap\alpha\}=\{\psi_\alpha(\beta):\beta\in Z\cap\alpha\}$ is an infinite subset of $\mathbb N$, we can choose $\beta\in Z\cap\alpha$ with $f(\alpha,\beta)\gt2n\ge g(\alpha)+g(\beta)$. More generally, if $|X|\gt\aleph_\alpha$, then there is a function $f:X\times X\to\omega_\alpha$ such that, for any function $g:X\to\omega_\alpha$, there are $x,y\in X$ with $f(x,y)\gt g(x)+g(y)$.
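To spell out the countable case asserted in the question (this filling-in is mine, not from the thread): enumerate $X = \{x_0, x_1, x_2, \dots\}$ and, given any $f \colon X \times X \to \mathbb{N}$, define $$ g(x_n) := \max\{\, f(x_i, x_j) : i, j \le n \,\}. $$ For any pair $(x_i, x_j)$ put $m := \max(i, j)$; then $$ f(x_i, x_j) \;\le\; g(x_m) \;\le\; g(x_i) + g(x_j), $$ since $g$ is non-negative and non-decreasing along the enumeration. So no pair satisfies $f(x,y) > g(x) + g(y)$, and no such $f$ can exist when $X$ is countable.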
2025-03-21T14:48:31.210443
2020-06-08T01:27:11
362473
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Adam P. Goucher", "Bertram Arnold", "Goulifet", "Steven Stadnicki", "Theo Johnson-Freyd", "https://mathoverflow.net/users/35687", "https://mathoverflow.net/users/39261", "https://mathoverflow.net/users/39521", "https://mathoverflow.net/users/7092", "https://mathoverflow.net/users/75761", "https://mathoverflow.net/users/78", "wlad" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629979", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362473" }
Stack Exchange
3D similarities and quaternions? As is well known, in dimension 2, a map $f : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ is a direct similarity if, once we identify $\mathbb{R}^2$ with $\mathbb{C}$, $f$ is of the form $$\forall z \in \mathbb{C}, \quad f(z) = a z + b$$ with $a \in \mathbb{C}\backslash \{0\}$ and $b \in \mathbb{C}$. This gives an especially appealing way of describing and parameterizing similarities. By writing $a = r\mathrm{e}^{\mathrm{i} \theta}$ with $r >0$ and $\theta \in [0,2\pi)$, we recover that a similarity is the combination of a rotation (of angle $\theta$), a homothety (of parameter $r$), and a translation (of $b$). I am curious about possible extensions of this result to dimension 3. Of course, there is no three-dimensional analogue of the complex numbers. However, it is possible to describe 3D direct similarities in terms of a combination of homotheties (around a certain point, possibly non-zero), rotations (idem), and translations. Since we can represent 3D rotations with unit quaternions (see this nice YouTube video), I am wondering if there is a nice analogue of the relation $f(z) = az + b$ above in the 3D case. One problem is that the rotation action of the quaternions isn't given by $q\mapsto(f_q(z): z\mapsto qz)$ as it is with complex numbers representing 2d rotations, but rather $q\mapsto(f_q(z): z\mapsto q^{-1}zq)$ (possibly with the specific conjugation inverted depending on your conventions). This question is, more or less, what led Hamilton to discover the quaternions in the first place. What about $v \mapsto qvq^* + u$, where $v$ is a vector quaternion, $u$ is a vector quaternion, and $q$ is a quaternion? In dimension $4$, any special orthogonal transformation is of the form $v\mapsto a\cdot v\cdot b$ with $a,b\in \mathbb S^3\subset \mathbb H^\times$. This defines the double covering $SU(2)\times SU(2)\to SO(4)$. This generalizes immediately to similarities, the group of which is $(\mathbb H^\times \times \mathbb H^\times) / \big((\lambda,1)\sim (1,\lambda)\big)\ltimes \mathbb H$ acting by $(a,b,x)\circ v = avb + v$. To get to $SO(3)$, restrict to those transformations preserving $\mathbb R^\perp\subset \mathbb H$, which is equivalent to $x\in \mathbb R^\perp,ab\in\mathbb R$, i.e. $b$ is proportional to $\overline a$. In terms of matrices, the group of similarity transformations is a subgroup of the group of affine transformations, the latter of which has a matrix representation. @BertramArnold Shouldn't that be $(a,b,x)\circ v = avb + x$? Call a quaternion whose scalar part is zero a vector quaternion. We shall denote the set of vector quaternions by $\mathbb R^3$. Given $q = w + xi + yj + zk$, we shall define $q^*$ (called the "conjugate" of $q$) to be $w - xi - yj - zk$. If $q$ is a unit quaternion, then $v \in \mathbb R^3\mapsto qvq^*$ is a rotation. All rotations about the origin in 3D can be given in this form. If $q$ is a general (i.e. not necessarily a unit) quaternion, then $v \mapsto qvq^*$ is a rotation followed by a dilation from the origin by a factor of $|q|^2$. Finally, add a vector quaternion $x$ to perform a translation. We thus get $v \mapsto qvq^* + x$ as a general form for similarity transformations. Postscript: This can be generalised to higher dimensions, and some non-Euclidean geometries, using Clifford algebras. That's nice: using the "vector quaternions" for $\mathbb{R}^3$ and then describing the geometric transformations as above does the job. Thanks a lot! Is it obvious that we capture all the direct similarities doing so?
Yes -- every orientation-preserving similarity is expressible uniquely as an element of SO(n) followed by a scaling followed by a translation.
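For readers who want to see the formula $v \mapsto qvq^* + x$ from the answer above in action, here is a small Python sketch (mine, not from the thread; numpy only, and the helper names `qmul`, `qconj`, `similarity` are just illustrative). It checks numerically that a non-unit $q$ scales all pairwise distances by the factor $|q|^2$, as a direct similarity should.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Quaternion conjugate q* = w - xi - yj - zk."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def similarity(q, u, v):
    """Direct similarity of R^3: v -> q v q* + u, with v embedded as a vector quaternion."""
    vq = np.array([0.0, *v])
    image = qmul(qmul(q, vq), qconj(q)) + np.array([0.0, *u])
    return image[1:]          # back to R^3 (scalar part is zero up to rounding)

# A non-unit quaternion q scales every distance by |q|^2.
rng = np.random.default_rng(0)
q = np.array([1.0, 2.0, -0.5, 0.3])
u = np.array([1.0, -2.0, 0.5])
v1, v2 = rng.normal(size=3), rng.normal(size=3)
ratio = np.linalg.norm(similarity(q, u, v1) - similarity(q, u, v2)) / np.linalg.norm(v1 - v2)
print(ratio, np.dot(q, q))    # both equal |q|^2
```

Restricting $q$ to unit quaternions recovers pure rotations plus translations, matching the 2D picture $z \mapsto az + b$ with $|a| = 1$.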
2025-03-21T14:48:31.210706
2020-06-08T01:57:31
362474
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Max Alekseyev", "VS.", "https://mathoverflow.net/users/136553", "https://mathoverflow.net/users/7076" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629980", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362474" }
Stack Exchange
On a quadratic diophantine equation Given a quadratic diophantine equation in $\mathbb Z[x,y,z]$ of the form $$ax^2+by^2+cx+dy+ez+f=0$$ are there standard methods to solve it in $O(\mathsf{polylog}(e))$ time when $$\|(x^2,y^2,z)\|_\infty\leq e^{1/2}$$ $$\|(a,b,c,d,e,f)\|_\infty\leq e$$ hold? I noticed we can bring the equation into the shape $X^2+Y^2+ez+g=0$, where $X=a'x+b'$ and $Y=c'y+d'$ for some $a',b',c',d',g\in\mathbb Z$, by completing the square. So we are asking for $$(a'x+b')^2+(c'y+d')^2\equiv-g\bmod e$$ where $$\|(x,y)\|_\infty\leq e^{1/4}$$ holds. This can potentially be solved by a variation of the Coppersmith method, e.g., by that of Bauer and Joux (2007). 'Potentially' or provably? It's a heuristic method with an expected but not provable outcome. I think that is for a general trivariate. This is a particular quadratic surface. Arithmetic might help here more than combinatorics without arithmetic. Perhaps a lot more is known in arithmetic geometry about such surfaces. It is worth going down the arithmetic geometry route. Anyway, I did not check carefully and have no grounds to say that it's provable. Perhaps it is provable for particular surfaces.
2025-03-21T14:48:31.210839
2020-06-08T02:05:55
362475
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Arun Debray", "Bruno Martelli", "Igor Belegradek", "Joe", "Moishe Kohan", "Sam Hopkins", "https://mathoverflow.net/users/106463", "https://mathoverflow.net/users/112507", "https://mathoverflow.net/users/1573", "https://mathoverflow.net/users/25028", "https://mathoverflow.net/users/39654", "https://mathoverflow.net/users/6205", "https://mathoverflow.net/users/97265", "pianyon" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629981", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362475" }
Stack Exchange
Can triangulations (or some related combinatorial structure) distinguish smooth structures on $RP^4$? There are exotic versions of $RP^4$, constructed by Cappell-Shaneson, which are homeomorphic but not diffeomorphic to the standard $RP^4$. One way to distinguish them is via the $\eta$ invariant of $Pin^+$ Dirac operators on them, cf. the article "Exotic structures on 4-manifolds detected by spectral invariants" by Stolz, Invent. math. 94, 147-162 (1988) (pdf here). I was wondering if there was a known combinatorial way to distinguish the smooth structures, e.g. in the following senses: Can one construct triangulations of $RP^4$ (e.g. via Morse theory) that must 'correspond' to one of the smooth structures? If a triangulation by itself can't distinguish smooth structures, is there some additional combinatorial data that one can put on top of the triangulation to distinguish them, like the branching structure on the triangulation? The motivation for this question is based on some papers (https://arxiv.org/abs/1610.07628, https://arxiv.org/abs/1810.05833) that construct topological invariants via state-sums on triangulations (generalizing the Crane-Yetter sum) and that speculate whether exotic structures can be detected via the state-sum. So it's natural to ask whether such manifolds can even be distinguished combinatorially. And something like this could seem plausible because in 4 dimensions, every manifold is smoothable iff it is triangulable. (If low-brow answers exist, that would be nice since I don't know much about exotic manifolds.) In dimensions less than 7, the categories DIFF and PL are equivalent, so, yes, one can. Thanks. Is there any place you're aware of where specific constructions of 'non-diffeomorphic triangulations' are presented/elucidated? I do not understand: The equivalence of categories implies that if two smooth manifolds are PL isomorphic then they are diffeomorphic. On the other hand, I do not think there was any work on a PL version of Donaldson's or SW invariants. The question is more basic than Donaldson or SW invariants. The simplest question would be "Are there PL-inequivalent triangulations of $RP^4$?" An equivalent question seems to be: are there triangulations of $RP^4$ that can't be connected by Pachner moves, where one triangulation is PL equivalent to the standard $RP^4$ and the other is equivalent to the Cappell-Shaneson $RP^4$? And the answer to this question is positive as I said in my comment: Take any two inequivalent smooth structures on $RP^4$. Triangulate these. The resulting triangulated manifolds will be PL-inequivalent. The reason is that in dimension 4 DIFF=PL. @MoisheKohan: is PL-equivalence of triangulations something that can be algorithmically checked? @SamHopkins: It is semidecidable for an obvious reason; however, PL non-isomorphism in dimension 4, in general, is undecidable. It does not mean that the existing semi-algorithm is practical. Is the piecewise linear category sensitive to branching structures or just the unordered triangulation? I.e. can two PL-inequivalent manifolds share a common triangulation, but only differ in the branching structure on the triangulation? @Joe: I do not know what the last question means but you should ask a new question if you have one, just make sure you explain what your terminology means (e.g. "branching structure").
Before asking, I suggest you first take a look at some standard sources on PL manifolds such as Rourke and Sanderson ("Introduction to Piecewise-Linear Topology"), so you avoid making up your own terminology in lieu of the standard notions. @MoisheKohan Branching structures are a preexisting notion, e.g. in this paper and others about TFTs and lattice models. This question's author did not make up the word. @ArunDebray: This is a physics paper; they never define what they mean by a branching structure. My guess is that they mean a total order for each simplex, equivalently, for each simplex they fix an affine isomorphism to the standard simplex (making a simplicial complex a simplicial set). Is this what they mean? Yes, that's what the term means. There's also the requirement that the orderings of vertices on adjacent simplices are compatible, i.e. that two vertices sharing an edge must lie in the same order on every simplex that contains them. This can be visualized as drawing an arrow on each edge and requiring that the arrows define a total order on each simplex. $\newcommand{\RP}{\mathbb{RP}}\newcommand{\C}{\mathbb C}\newcommand{\cC}{\mathcal C}$Here's a TFT-style argument for why it should be possible in principle to use an invariant of triangulations to distinguish $\RP^4$ from Cappell-Shaneson's fake $\RP^4$, which I'll call $Q$; however, the specific invariant needed has likely not been constructed. (Moishe Kohan's comment is a much faster argument that such a combinatorial invariant exists, but hopefully this answer makes it more explicit what it would look like.) Given a general $n$-dimensional pin+ TFT $Z'\colon\mathsf{Bord}_n(\mathrm{Pin}^+)\to\cC$, and for a nice choice of target category $\cC$, there is expected to be an $n$-dimensional unoriented TFT $Z\colon\mathsf{Bord}_n\to\cC$ obtained by “summing over pin+ structures,” akin to the finite path integral in Dijkgraaf-Witten theory. For example, if $M$ is a closed, unoriented $n$-manifold and $P^+(M)$ denotes its set of pin+ structures, $$ Z(M) = \sum_{\mathfrak p\in P^+(M)} \frac{Z'(M, \mathfrak p)}{\#\mathrm{Aut}(\mathfrak p)}.$$ If $Z'$ is fully extended, and $\cC$ is chosen appropriately, it should be possible to define $Z$ as a fully extended TFT as well. At present, though, I think this has only been shown up to category number 2 (once-extended TFTs). Moreover, it's believed that fully extended TFTs (again, for certain choices of target category $\cC$) can all be constructed using state sums, with input data of a triangulation. There is work of Kevin Walker on implementing this, though I don't know exactly what assumptions (e.g. choice of $\cC$) he works with. Let's use this strategy to build a 4d unoriented TFT $Z$ which distinguishes $\RP^4$ from $Q$. Let $\zeta := e^{i\pi/8}$ and $\mu_{16}\subset\C^\times$ denote the multiplicative group of 16th roots of unity, which is generated by $\zeta$. The 4d pin+ $\eta$-invariant is a $\mu_{16}$-valued invariant of the Dirac operator on a pin+ 4-manifold; for the two pin+ structures on $\RP^4$, it takes on the values $\zeta^{\pm 1}$, and for the two pin+ structures on $Q$, it takes on the values $\zeta^{\pm 9}$. This is discussed in Kirby-Taylor, “Pin structures on low-dimensional manifolds”; they also show this $\eta$-invariant is a pin+ bordism invariant. Freed-Hopkins show that any $\mathrm U_1$-valued bordism invariant $\alpha$ lifts to an invertible TFT $Z'$ such that in top dimension, $Z'(M) = \alpha(M)$.
Such a TFT is expected to be fully extended, but so far has only been constructed down to codimension 2, with target 2-category the Morita category of superalgebras over $\C$. In any case, applying this to the $\eta$-invariant produces a 4d pin+ TFT, which will be our $Z'$. Summing over pin+ structures as above, we obtain a 4d unoriented TFT $Z$, with values $$ Z(\RP^4) = \frac{\zeta + \zeta^{-1}}{2},\qquad\quad Z(Q) = \frac{\zeta^9 + \zeta^{-9}}{2}.$$ Thus $Z(\RP^4)$ is a positive real number and $Z(Q)$ is a negative real number, so we have an (in principle) fully extended 4d unoriented TFT distinguishing $\RP^4$ and $Q$, which should therefore admit a state-sum description. What are the physical interpretations of these TFTs? It looks like $Z'$ corresponds to the 3d time-reversal-invariant ("class DIII") topological superconductor? And $Z$ corresponds to a TFT obtained from it by gauging fermion parity? @pianyon I think that is correct, yes! I'll convert my comment to an answer: Yes, triangulations can distinguish two non-diffeomorphic smooth structures on any 4-dimensional manifold; in particular, given an exotic $RP^4$, there exists an exotic triangulation of topological $RP^4$ which is not PL-isomorphic to the standard triangulation. The reason is twofold: a. The easy part is that each smooth manifold $(M, s)$ (regardless of its dimension) admits a compatible PL structure: One can find a smooth triangulation $\tau_s$ of $M$ whose links will be triangulated spheres. b. The hard part is a theorem due to Kirby and Siebenmann (Kirby, Robion C.; Siebenmann, Laurence C., Foundational essays on topological manifolds, smoothings and triangulations, Annals of Mathematics Studies 88, Princeton University Press and University of Tokyo Press, 355 p. (1977). ZBL0361.57004.) that in dimensions $\le 6$, the categories PL and DIFF are equivalent. In particular, if $s_1, s_2$ are non-diffeomorphic smooth structures on a topological manifold $M$ of dimension $\le 6$, then $\tau_i=\tau_{s_i}, i=1,2$, define non-isomorphic PL structures on $M$. Concretely, one can say that triangulations given by $\tau_1, \tau_2$ do not admit isomorphic subdivisions. (This property fails in dimension 7: Famously, there are 28 non-diffeomorphic smooth structures on $S^7$, but all PL structures on $S^7$ are PL-isomorphic. The other difference between DIFF and PL categories in dimensions $\ge 7$ is that there are PL manifolds of dimension $\ge 7$ which do not admit compatible smooth structures.) Here one is working with unordered simplicial complexes. Thus, "branching structures" which one can assign (possibly after a subdivision) to triangulations $\tau_1, \tau_2$ are irrelevant. I think there is no need to quote Kirby-Siebenmann. The relevant 4-dimensional result is a theorem of Cerf. As Kirby-Siebenmann say on p.39 of their book: "It is of course a telling fault that we do not deal with compatible DIFF structures on PL manifolds of dimension $\le 4$. But the theory is rather joyless there; compatible structures exist and are unique up to concordance". And then they refer to [J.Cerf, Groupes d'automorphismes et groupes de diffeomorphismes des varietes compactes de dimension 3, Bull. Soc. Math. France 87 (1959), 319-329]. And in higher dimensions PL/Diff smoothing theory is treated in Hirsch-Mazur well before Kirby-Siebenmann.
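As a quick aside on the two state-sum values quoted in Arun Debray's answer above (elementary arithmetic with $\zeta = e^{i\pi/8}$, written out here for convenience): $$ Z(\mathbb{RP}^4) = \frac{\zeta + \zeta^{-1}}{2} = \cos\frac{\pi}{8} \approx 0.924 > 0, \qquad Z(Q) = \frac{\zeta^{9} + \zeta^{-9}}{2} = \cos\frac{9\pi}{8} = -\cos\frac{\pi}{8} \approx -0.924 < 0. $$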
It is very hard to construct state-sum invariants that distinguish smooth structures in dimension 4, because of a simple but crucial fact that is worth mentioning: if $M$ and $N$ are homeomorphic smooth 4-manifolds, it is often the case (I don't remember what condition is needed here) that $M \#_h( S^2 \times S^2)$ and $N\#_h (S^2 \times S^2)$ are diffeomorphic for some $h$. Therefore any combinatorial invariant where the value on $M$ may be deduced from that on $M \# (S^2 \times S^2)$ will not work. So for instance if your invariant is multiplicative on connected sums, it should vanish on $S^2 \times S^2$. The most famous state-sum invariant in dimension 3 is the Turaev-Viro one, and it is multiplicative on connected sums and is almost never zero. It is often but not always true that homeomorphic $M$ and $N$ are $S^2\times S^2$-stably diffeomorphic. (It is true when $M$ and $N$ are oriented, by a theorem of Gompf.) One counterexample is Cappell-Shaneson's fake $\mathbb{RP}^4$ and the standard $\mathbb{RP}^4$, in fact, which can be shown with Kreck's modified surgery theory, reducing the problem to the possible values they can take on in the 4th pin+ bordism group. Aha, I didn't know that. What happens if we replace $S^2 \times S^2$ with some other fixed 4-manifold $X$, like for instance the complex projective plane? Are the connected sums $M\#X$ and $N\#X$ always guaranteed to be non-diffeomorphic for these two specific 4-manifolds $M$ and $N$? That's a great question, and I don't know what happens in general. The trick that makes this work is that $S^2\times S^2$ is null-bordant, allowing bordism to be used to check stable diffeomorphism. So something different would have to be done for $\mathbb{CP}^2$. For these specific $M$ and $N$, I think $X := \mathbb{CP}^2 \# (S^2\times S^2)$ works: for some $i$ and $j$, $M \#_i X$ and $N \#_j X$ are diffeomorphic. This is because $M \# \mathbb{CP}^2$ and $N \#\mathbb{CP}^2$ are $S^2\times S^2$-stably diffeomorphic: one once again uses Kreck's modified surgery theory, but $M \# \mathbb{CP}^2$ and $N \# \mathbb{CP}^2$ aren't pin+ or pin-, so the calculation takes place in the 4th unoriented bordism group, where they are bordant. It's actually shown that $RP^4 \# CP^2$ and $Q \# CP^2$ ($Q$ being the fake $RP^4$) are diffeomorphic in this article: [S. Akbulut, A fake 4-manifold, 4-Manifold Topology, Contemporary Math. 35 (1984), 75-141.]
2025-03-21T14:48:31.211725
2020-06-08T03:49:00
362480
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629982", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362480" }
Stack Exchange
Grassmannian of vector bundles with determinant being a square I want to find a classifying space for vector bundles of rank $n$ that are generated by their global sections and whose determinant is the square of a line bundle. To state it formally, suppose $X$ is a scheme and define a functor $$F:X\longmapsto\{(E,\theta)\mid O_X^{\oplus r}\longrightarrow E\longrightarrow 0,\ E\text{ is a vector bundle of rank }n,\ \theta:\det(E)\xrightarrow{\cong}L\otimes L,\ L\in \operatorname{Pic}(X)\}.$$ Is $F$ representable by a scheme?
2025-03-21T14:48:31.211792
2020-06-08T05:00:16
362481
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "0xbadf00d", "Nawaf Bou-Rabee", "https://mathoverflow.net/users/64449", "https://mathoverflow.net/users/91890" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629983", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362481" }
Stack Exchange
How does one define the gradient of a Markov semigroup? In the context of functional inequalities for Markov semigroups $(\mathcal P_t)_{t\ge0}$, what is one denoting by $\nabla\mathcal P_tf$? For example, I've found the following assumption in this paper: Above, $(\mathcal P_t)_{t\ge0}$ is the semigroup given by $$\mathcal P_t(x,B):=\operatorname P\left[\Phi_t(\;\cdot\;,x)\in B\right]$$ and $\phi$ is a Lipschitz continuous Fréchet differentiable function. However, there is no assumption ensuring that $\mathcal P_t$ preserves Fréchet differentiability. So, how does one understand the differential ${\rm D}\mathcal P_t$? A probabilistic way to understand $D \mathcal{P}_t$ is using a Bismut-Elworthy-Li formula; see, e.g., Theorem 10.3 of http://www.hairer.org/notes/Imperial.pdf Note that this formula involves the underlying process (10.1) and its derivative (10.2). @NawafBou-Rabee Maybe I don't understand the result, but isn't Theorem 10.3 an identity for ${\rm D}\mathcal P_t$ for the particular flow given by the solution of $(10.1)$? My question is more basic: What kind of differential is ${\rm D}\mathcal P_t$? Is it the ordinary Fréchet derivative? I mean, as indicated in the paper, $x\mapsto\Phi_t(\omega,x)$ is Fréchet differentiable for all $(\omega,t)$. Does this imply that $\mathcal P_tf$ is Fréchet differentiable for all Fréchet differentiable $f$? @NawafBou-Rabee Or do we need to understand it in terms of the "modulus of gradient" (local Lipschitz constant)? $D \mathcal{P}_t$ is a Fréchet derivative defined by directional derivatives of $\mathcal{P}_t \phi$. @NawafBou-Rabee So, is it a general property that if $x\mapsto\Phi_t(\omega,x)$ is Fréchet differentiable, then $\mathcal P_tf$ is Fréchet differentiable for all Fréchet differentiable $f$? It seems like the authors of the paper assume this. If so, do you have a reference for that at hand? A nice reference is the following book by Cerrai: https://www.springer.com/gp/book/9783540421368. The idea is to use a stochastic representation of $\mathcal{P}_t f(x)$ in terms of an expectation involving the underlying stochastic process, show the process is mean-square differentiable with respect to the initial point, differentiate the stochastic representation and then interchange the derivative with the expectation as in https://math.stackexchange.com/questions/217702/when-can-we-interchange-the-derivative-with-an-expectation @NawafBou-Rabee I've got a subsequent question regarding a bound for a gradient flow of a stochastic Navier-Stokes equation: https://mathoverflow.net/q/364005/91890. It would be great if you could take a look.
2025-03-21T14:48:31.211974
2020-06-08T05:11:08
362482
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "A.G", "Geva Yashfe", "KkD", "https://mathoverflow.net/users/154028", "https://mathoverflow.net/users/75344", "https://mathoverflow.net/users/92322" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629984", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362482" }
Stack Exchange
The local flatness criterion I am self-studying the book "Commutative Ring Theory" by H. Matsumura. The main theorem of section 22 is theorem 22.3, which characterizes flatness of a module $M$ over any ring $A$. The (part of the) theorem states: Let $A$ be a ring, $I$ an ideal of $A$ and $M$ an $A$-module. Then the following are equivalent (1) $Tor_1^A(N,M)=0$ for every $A/I$ module $N$; (2) $M/IM$ is flat over $A/I$ and $I\otimes M =IM$; (3) $M/IM$ is flat over $A/I$ and $Tor_1^A(A/I,M)=0$. I have been able to show the above three are equivalent. Furthermore, the theorem also states that if one of the above three holds, then one must have: (4) $M/IM$ is flat over $A/I$ and $\gamma_n$ : $I^n/I^{(n+1)} \otimes M/IM \rightarrow I^nM/I^{(n+1)}M$ is an isomorphism for all $n\geq 0$. While proving this, the author of the book uses the result that, if $I\otimes M =IM$ then one must have $I^n\otimes M =I^nM$ for all $n\geq 2$, which I am unable to show. Any help would be appreciated. @A.G Can you explain why? Without one of the assumptions of the theorem, this looks false: take $R=k[x]/(x^2)$, $I=M=(x)$, $J=(1)$. Then $I\otimes M\neq 0$ but $I^2 = 0$. Now if your formula works, we should have $0\neq I\otimes M = (1)\cdot(I\otimes M) = ((1)\cdot I)\otimes M = (I\cdot(1))\otimes M = I\cdot ((1)\otimes M)$. But this is just $I\cdot (R\otimes I) = I^2 = 0$, a contradiction. @Geva Yashfe Yes, I think we need one of the assumptions, or rather loosely speaking, we need flatness. Otherwise, this is not true. I'm just hoping to understand what conditions make @A.G 's formula correct, because it looks nice. @Geva Yashfe In the above example, $I\otimes M$ is indeed 0, right? Because any generating element of $I\otimes M$ looks like $rx\otimes m=r\otimes xm=r\otimes 0=0$. No, this is not the case. See https://math.stackexchange.com/a/298198 for instance. Yes! My mistake. @Geva Yashfe Sorry, my (stupid) mistake. edit: I apologize, the original answer was nonsense (I mixed different Tor functors in a silly way). The following works, but uses the equivalent condition (1) of the question, and does not show $I\otimes M = IM \implies I^n \otimes M = I^n M$ unconditionally. Consider the exact sequence: $$ I^2\otimes M \rightarrow I \otimes M \rightarrow I/I^2 \otimes M \rightarrow 0.$$ The rightmost module is an $A/I$-module, so $\mathrm{Tor}_1(M, I/I^2)=0$. Hence the first map is injective. Also, $I\otimes M = IM$ and $I/I^2 \otimes M = I \otimes A/I \otimes M$ which by associativity and commutativity of tensor is $IM \otimes A/I = IM/I^2M$. So we have an exact sequence: $$ 0\rightarrow I^2\otimes M \rightarrow IM \rightarrow IM/I^2M \rightarrow 0,$$ and $I^2 \otimes M$ is the kernel of the second map, which is $I^2 M$. It seems this process can be continued: consider the exact sequence $$ I^3\otimes M \rightarrow I^2 \otimes M \rightarrow I^2/I^3 \otimes M \rightarrow 0,$$ where again $I^2 / I^3$ is an $A/I$-module and the last two terms are $I^2 M$ and $I^2 M / I^3 M$ respectively; by the vanishing of $\mathrm{Tor}_1(M,N)$ for $N$ an $A/I$-module, the first arrow is an injection and it equals the kernel of the second map, i.e. it is $I^3 M$. Thank you! The misleading situation for me was the fact that when one considers a module over the quotient ring $A/I^i$ for some $i>0$, then it naturally becomes an $A/I^n$-module for all $n\geq 0$. Then it was very difficult to consider the right exact sequence, and I always believed that, for this problem, considering the right exact sequence is the key.
But your proof is the best way to absorb the full power of the hypothesis. I really thank you for the answer. @KkD You are welcome, I enjoyed the question.
2025-03-21T14:48:31.212235
2020-06-08T06:09:32
362483
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Leo Alonso", "M.G.", "Mohan", "Nulhomologous", "https://mathoverflow.net/users/158462", "https://mathoverflow.net/users/1849", "https://mathoverflow.net/users/6348", "https://mathoverflow.net/users/9502" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629985", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362483" }
Stack Exchange
Is there a name for commutative algebras over a field $k$ whose residue class fields have finite dimension over $k$? Let $k$ be a field and let $A$ be a (commutative) $k$-algebra. Assume that for every maximal ideal $P \subseteq A$ the residue class field $A/P$ has finite dimension as a $k$-vector space. Is there a name for $k$-algebras like that? Clearly, finitely generated $k$-algebras $A$ satisfy this property, but what about the case where $A$ is not finitely generated over $k$? Examples of such algebras can easily be obtained by localizing finitely generated $k$-algebras: For instance let $\mathcal{O} = k[x]_{(x)}$ be the localization of the finitely generated $k$-algebra $k[x]$ at the maximal ideal generated by $x$. Then $\mathcal{O}$ is a local ring with maximal ideal $P$ generated by $x$, and it is not finitely generated as a $k$-algebra. But it has a finite-dimensional residue class field $\mathcal{O}/P \cong k$. I was thinking about this since I am working with schemes over $k$ that do not need to be of finite type over $k$ but have finite-dimensional residue class fields as above. I haven't encountered a definition or naming of this. I am grateful for any kind of input. I posted this question also on math.stackexchange, but did not receive any answer or input. At least essentially finitely generated algebras (a localization of a finitely generated algebra) have this property. This condition arises frequently. I would very much like to say "residually finite", but that's already taken by group theory :-) @LeoAlonso Essentially finite type includes localizations at arbitrary multiplicatively closed subsets, but the property desired by the OP is satisfied only if you semilocalize at maximal ideals. There are also the completions of such local $k$-algebras, like $k[[x]]$, and any $k$-algebra in between.
2025-03-21T14:48:31.212379
2020-06-08T06:32:55
362484
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carlo Beenakker", "Nate Eldredge", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/4832" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629986", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362484" }
Stack Exchange
How to prove invariance implies quadrature about Haar measure? I'm reading a paper which contains a lemma named 'Invariance Implies Quadrature'. The lemma is stated as follows. Lemma Let $f$ be a function from $O(d)$ to $\mathbb{R}$ and let $H$ be a finite subgroup of $O(d)$. If $$\mathbb{E}_{P\in H}f(P)=\mathbb{E}_{P\in H}f(PQ_0)$$ for all $Q_0\in O(d)$, then $$\mathbb{E}_{P\in H}f(P)=\mathbb{E}_{Q\in O(d)}f(Q)$$ where $Q$ is chosen according to Haar measure and $P$ is uniform on $H$. proof. $$\mathbb{E}_{Q\in O(d)}f(Q) = \mathbb{E}_{Q\in O(d)}\mathbb{E}_{P\in H}f(PQ) = \mathbb{E}_{P\in H}\mathbb{E}_{Q\in O(d)} f(PQ) = \mathbb{E}_{P\in H}\mathbb{E}_{Q\in O(d)}f(P) = \mathbb{E}_{P\in H}f(P),$$ as desired. This lemma is crucial in the proof of the main theorem in the paper, but I cannot understand the proof and I can't see where it applies the condition. The first equality does not seem obvious to me, and the author didn't provide further details. Another problem is that the second expression could be rewritten as $$\mathbb{E}_{Q\in O(d)}\mathbb{E}_{P\in H}f(PQ)= \mathbb{E}_{Q\in O(d)} \mathbb{E}_{P\in H} f(P)=\mathbb{E}_{P\in H} f(P)$$ by the condition, which would conclude the result directly, so why did the author prove it as above? Thanks for reading. Any comments are appreciated. isn't the first equality a consequence of the uniformity of the distribution of the permutation matrix $P$ on $H$? Indeed; by the invariance of Haar measure on $O(d)$ we have $\mathbb{E}_Q f(Q) = \mathbb{E}_Q f(PQ)$ for each $P$. Now average over $P \in H$ and interchange the average on $P$ with the expectation on $Q$ (the average on $P$ is a finite sum so no justification is needed).
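To make the first equality concrete, here is a small Monte Carlo sketch of my own (not from the thread; it assumes numpy and scipy's `ortho_group` Haar sampler). The point it illustrates is exactly Nate Eldredge's comment: $\mathbb{E}_Q f(Q) = \mathbb{E}_Q f(PQ)$ holds for any fixed $P$ by left-invariance of Haar measure, for any $f$, so averaging over the finite subgroup $H$ and swapping the finite sum with the expectation needs no hypothesis on $f$.

```python
import numpy as np
from scipy.stats import ortho_group

# Check E_Q[f(Q)] ~ E_Q[(1/|H|) * sum_P f(P Q)] for Haar-random Q in O(3),
# where H is a small finite subgroup (rotations by multiples of 2*pi/5 about z).
d = 3
f = lambda Q: np.sum(Q[0] ** 4)            # an arbitrary test function on O(d)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

H = [rot_z(2 * np.pi * k / 5) for k in range(5)]

Qs = ortho_group.rvs(dim=d, size=20000, random_state=0)       # Haar samples on O(3)
lhs = np.mean([f(Q) for Q in Qs])                             # E_Q f(Q)
rhs = np.mean([np.mean([f(P @ Q) for P in H]) for Q in Qs])   # E_Q E_P f(PQ)
print(lhs, rhs)    # agree up to Monte Carlo error
```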
2025-03-21T14:48:31.212509
2020-06-08T07:04:21
362487
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Otto", "Will Brian", "https://mathoverflow.net/users/119731", "https://mathoverflow.net/users/70618" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629987", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362487" }
Stack Exchange
Functions preserving almost disjointness of partitions A collection $\mathcal{A}\subseteq \omega^\omega$ is almost disjoint iff $\bigcap_{X\in \mathcal{A}}X^{-1}(j)$ is finite for all $j\in\omega$. A function $\Gamma:2^\omega\rightarrow 2^\omega$ is almost-disjoint preserving if $\Gamma(P)$ is almost-disjoint whenever $P\subseteq 2^\omega$ is. If a somewhat regular function (say Borel) $\Gamma:2^\omega\rightarrow 2^\omega$ is almost-disjoint preserving, then for some large (dense in some clopen set) subset $P$ of $2^\omega$, $\Gamma\upharpoonright P$ is a bitwise isomorphism, i.e., for all but finitely many $n$, there exists an $\hat{n}$ such that $\Gamma(X)(n)$ is determined by $X(\hat{n})$. It seems this can be proved by forcing. Is there any simple analytic proof? Is there any reference on this kind of subject? Note that this is the infinite version of the following observation: for every function $\Gamma:2^n\rightarrow 2$, if $\Gamma(V)$ is disjoint (which simply means $\Gamma(V) = \{0,1\}$) whenever $V\subseteq 2^n$ is disjoint, then there is an $m<n$ such that $(\forall \sigma\in 2^n) \ \Gamma(\sigma) = \sigma(m)$ or $(\forall \sigma\in 2^n)\ \ \Gamma(\sigma) = 1-\sigma(m)$. Thanks in advance~ it sounds like it has some connections with homomorphisms/automorphisms of $P(\omega)/fin$ (the bitwise isomorphism as you called it is related to the notion of triviality), for example in Chapter IV of Shelah's proper forcing book. You might be looking for this paper of Velickovic (Lemma 2 in particular): https://www.jstor.org/stable/2045667?seq=3#metadata_info_tab_contents
2025-03-21T14:48:31.212636
2020-06-08T07:33:36
362489
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629988", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362489" }
Stack Exchange
Dress' construction and Serre spectral sequence Currently, I am reading about the Serre spectral sequence, stated below, via Dress' construction. Let $f:E\to B$ be a Serre fibration. Then, there is a first quadrant spectral sequence $\{E^r,d^r\}_{r\geq1}$, called the Serre spectral sequence, with second page $E^2_{p,q}=H_p\big(B;\underline{H_q(F)}\big)$: the singular homology with coefficients in the local system induced by the fibres of $f$. The spectral sequence converges to the singular homology of the topological space $E$: $$E^2_{p,q}=H_p\big(B; \underline{H_q(F)}\big)\underset{p}{\implies} H_{p+q}(E).$$ One of the great things is that it gives a quick proof of the general Hurewicz Isomorphism Theorem. But the Dress construction is based on the following fact: If $f:X\to Y$ is a weak homotopy equivalence, then for any $n\geq 0$ the induced map $f_*: H_n(X) \to H_n(Y)$ is an isomorphism. And this in turn depends on the relative version of the general Hurewicz Isomorphism Theorem. So there is some kind of circular argument. My question is: is there any proof of the above fact without using the Hurewicz Isomorphism Theorem? Any help will be appreciated. Thanks in advance.
2025-03-21T14:48:31.212743
2020-06-08T09:01:46
362492
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "https://mathoverflow.net/users/142929", "user142929" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629989", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362492" }
Stack Exchange
Least number of factors $\sigma(p^e)$ of the representation of $\sigma(N)$ needed to get the least multiple of $\operatorname{rad}(N)$, for odd perfect numbers I've cross-posted this from the post that I asked on Mathematics Stack Exchange (April 2nd, 2020) with title On the least number of factors $\sigma(q^{e_q})$ to get the least multiple of $\operatorname{rad}(n)$, on assumption that $n$ is an odd perfect number, and identifier MSE 3606189. Definitions and notation. We denote the sum of divisors function $\sum_{1\leq d\mid m}d$ as $\sigma(m)$ and the radical of an integer $m>1$ as $\operatorname{rad}(m)$, the product of the distinct primes dividing $m$, with the definition $\operatorname{rad}(1)=1$. Both are multiplicative arithmetic functions. The number of distinct prime factors dividing an integer $m>1$ is denoted by the prime omega function $\omega(m)$. For integers $m>1$, since the sum of divisors function $\sigma(m)$ is multiplicative, and since the fundamental theorem of arithmetic provides us with the representation $m=\prod_{q\mid m}q^{e_q}$ ($q$ are the distinct primes dividing $m$ and $e_q$ their corresponding exponents in this unique factorization: $m$ can be represented as a product of those prime powers $q^{e_q}$ and the representation is unique, see [2]), we get that $\sigma(m)$ has the representation $\prod_{q\mid m}\sigma(q^{e_q})$ (see [1]). Claim. Invoking the Euclid–Euler theorem for even perfect numbers, it is obvious that if $n$ is an even perfect number, there exist two factors of the form $\sigma(q^{e_q})$ from the representation $\sigma(n)=\prod_{q\mid n}\sigma(q^{e_q})$ such that their product is a multiple of $\operatorname{rad}(n)$. Here again we emphasize that $\sigma(q^{e_q})$ represents those generic factors in the previous representation of $\sigma(m)$. For even perfect numbers $n$ we have $\omega(n)=2$, and the claim tells us that we need to multiply both factors, that is $\sigma(2^{p-1})\sigma(2^p-1)$, to get the least multiple of $\operatorname{rad}(n)$. Summary and goal. Thus for even perfect numbers $n$ we need $$\kappa:=2$$ factors from the representation $\prod_{q\mid n}\sigma(q^{e_q})$ of $\sigma(n)$ to get the least multiple of $\operatorname{rad}(n)=2(2^p-1)$, $\kappa$ being the least possible choice of this number, $1\leq \kappa\leq\omega(n)=2$. I would like to know if a similar discussion is feasible under the assumption that $N$ is an odd perfect number, that is, whether we can get an approximation of, or some reasoning about, the corresponding quantity $1\leq \kappa\leq \omega(N)$. Question. I would like to know, under the assumption that $N$ is an odd perfect number, if it is possible to do some work or deduction* about the least number $\kappa$, with $1\leq\kappa\leq \omega(N)$, of those factors of the form $\sigma(q^{e_q})$ that I need to multiply to get the least multiple of $\operatorname{rad}(N)$. Many thanks. *I mean work or a deduction giving an approximation of the value of the quantity $\kappa$ for odd perfect numbers. Here $N=\prod_{q\mid N}q^{e_q}$ is the canonical representation ([2]) of our odd perfect number $N$; compare, if you need it, with Euler's theorem for odd perfect numbers. Valuable comments and references were added in the linked post. References: [1] Tom M. Apostol, Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag (1976). [2] Wikipedia has the articles Fundamental theorem of arithmetic and Perfect number.
Please feel free to add your feedback in comments or as an answer (I've cross-posted it since I think it can be an interesting post and that maybe it is possible to do some work on MathOverflow about the question). As a reference for all users, Wikipedia also has articles titled Prime omega function, Radical of an integer and Euclid–Euler theorem. Many thanks and good day.
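To illustrate the even case numerically (this sketch is mine, not part of the post; it assumes sympy and simply re-checks the Euclid–Euler computation), the two factors $\sigma(2^{p-1})$ and $\sigma(2^p-1)$ are individually not multiples of $\operatorname{rad}(n)$, while their product is, so $\kappa=2$:

```python
from sympy import divisor_sigma, isprime, primefactors

def rad(n):
    """Radical of n: the product of the distinct primes dividing n."""
    r = 1
    for p in primefactors(n):
        r *= p
    return r

# Even perfect numbers n = 2^(p-1) * (2^p - 1) with 2^p - 1 a Mersenne prime.
for p in [2, 3, 5, 7, 13]:
    m = 2**p - 1
    assert isprime(m)
    n = 2**(p - 1) * m
    f1 = divisor_sigma(2**(p - 1))        # = 2^p - 1 (odd)
    f2 = divisor_sigma(m)                 # = 2^p
    r = rad(n)                            # = 2 * (2^p - 1)
    print(n, f1 % r != 0, f2 % r != 0, (f1 * f2) % r == 0)   # True True True
```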
2025-03-21T14:48:31.213231
2020-06-08T09:10:02
362493
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ABIM", "Dirk Werner", "https://mathoverflow.net/users/127871", "https://mathoverflow.net/users/36886" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629990", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362493" }
Stack Exchange
Explicit homeomorphism between $L^p$ and Sobolev Space From the Anderson-Kadec theorem, we know that all separable infinite-dimensional Banach spaces are homeomorphic. I'm wondering: is there an explicit such homeomorphism between $W^{p,k}(\mathbb{R}^n)$ and $L^2(\mu,\mathbb{R}^n)$? Here, $\mu$ is a Borel probability measure on $\mathbb{R}^n$ and $W^{p,k}(\mathbb{R}^n)$ is the Sobolev space of "functions" with $k$ weak derivatives of finite $p$-th norm. If possible, an explicit bi-Lipschitz homeomorphism would be most interesting. I know that the Fourier transform does the job when $\mu$ is the Lebesgue measure and $k$ is sufficiently well-chosen, but I was wondering what is possible in this setting. You already know that the Sobolev space and $L_p(\lambda)$ are explicitly linearly isomorphic; also $L_2(\mu)$ and $L_2(\lambda)$ are isometric (via orthonormal bases, hence (?) explicitly); finally $L_p(\mu)$ and $L_2(\mu)$ are homeomorphic via the Mazur map $f\mapsto |f|^{p/2}\operatorname{sign}(f)$. (Mazur's paper is in Studia Math. 1, 83--85 (1929).) @DirkWerner Yes, but the isomorphism between the Sobolev space and $L_p(\lambda)$ is not as practical as the Mazur map, in the sense that a function must first be decomposed in terms of a basis and then mapped. Right? Is there a direct transformation that does the job (even if it's non-linear like the Mazur map)? @AIM_BPB For the isomorphism between the Sobolev space and $L_p$ one uses Fourier multipliers, right? So only the isomorphism between the $L_2$-spaces requires the choice of a noncanonical basis. Actually I'm not familiar with Fourier multipliers (this may be a basic question), but how would you do that? Please take a look at Wojtaszczyk's "Banach Spaces for Analysts", Proposition III.A.3 (which deals with the periodic Sobolev space).
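Since the Mazur map mentioned in the comments is so concrete, here is a tiny discrete sketch of it (mine, not from the thread; plain numpy, on a finite measure space of $n$ equal atoms). It checks the defining identity $\|Mf\|_2 = \|f\|_p^{p/2}$, which is what makes $M$ a (non-linear) homeomorphism between the unit spheres of $L_p(\mu)$ and $L_2(\mu)$.

```python
import numpy as np

# Mazur map M(f) = |f|^(p/2) * sign(f). On a finite measure space (n atoms of
# mass 1/n) it is just a coordinatewise power map, and ||M f||_2 = ||f||_p^(p/2).
p = 4.0
n = 1000
mu = np.full(n, 1.0 / n)                 # uniform probability weights
rng = np.random.default_rng(0)
f = rng.normal(size=n)                   # a "function" as a vector of values

mazur = np.sign(f) * np.abs(f) ** (p / 2.0)

lp_norm = (np.sum(mu * np.abs(f) ** p)) ** (1.0 / p)
l2_norm = np.sqrt(np.sum(mu * mazur ** 2))
print(l2_norm, lp_norm ** (p / 2.0))     # equal
```

On an actual function space the same pointwise formula applies; no basis decomposition is involved, which is the "directness" asked about in the comments.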
2025-03-21T14:48:31.213361
2020-06-08T09:29:58
362494
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629991", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362494" }
Stack Exchange
Family of Lie algebras parametrized by a discrete valuation ring I have a family of Lie algebras parametrized by a discrete valuation ring, whose generic fiber is reductive and whose special fiber is nilpotent. I'd like to learn about the relationship between the cohomology of the special fiber and the cohomology of the generic fiber. In particular, I have good reason to believe that a certain part of the cohomology of the special fiber is isomorphic to the cohomology of the generic fiber. Are there any results related to Lie algebra cohomology that can help with my situation? More generally, I'd like to know about results on how cohomology varies from the special fiber to the generic fiber for varieties over a discrete valuation ring. Everything I've found is related to mod-$p$ étale cohomology, but I'd really like to know about results related to Lie algebra cohomology. Thank you!
2025-03-21T14:48:31.213436
2020-06-08T09:50:44
362496
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Anton Mellit", "Mikhail Borovoi", "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/4149", "https://mathoverflow.net/users/89514" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629992", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362496" }
Stack Exchange
A class of unipotent group actions Consider algebraic actions of unipotent groups $G$ on affine spaces $X=\mathbb{C}^n$. I am looking for a condition that would guarantee that the quotient $X/G$ exists and is also an affine space. For instance: Suppose $X$ is itself isomorphic to a unipotent group and $G$ acts via compositions of left translations with group automorphisms. Suppose the action is such that every point of $X$ has trivial stabilizer. Is it true that $X/G$ is isomorphic to affine space? UPDATE: It turns out, by replacing $X/G$ with $G\backslash X\rtimes A/A$ where $A$ is the image of $G$ in the group of automorphisms of $X$, the question is reduced to the corresponding question for double cosets, which is again a special case of the original question. So here is an equivalent question: Suppose a unipotent group $G$ contains unipotent subgroups $G_1, G_2$ such that $G_1\cap x G_2 x^{-1} = \{e\}$ for all $x\in G$. Is the double coset space $G_1 \backslash G / G_2$ isomorphic to affine space? "The action doesn't have fixed points" has two possible meanings: no global fixed point, vs free action. What do you mean? @YCor Sorry, I rephrased the question. I meant free action. I would also like to understand the non-free case, but I don't know what kind of property to expect. If it is indeed isomorphic, then I think I know a formula for an isomorphism. You may assume that your unipotent group $G$ is embedded in ${\rm GL}(V)$. Then its Lie algebra $\frak g$ is embedded in ${\frak gl}(V)$, and the exponential map $\exp\colon {\frak g}\to G$ is a (polynomial) isomorphism of varieties. Let $U\subset {\frak g}$ be a complement in $\frak g$ to ${\frak g}_1\oplus{\frak g}_2$. In other words, ${\frak g}= {\frak g}_1\oplus U\oplus{\frak g}_2$. I expect that for a suitable choice of $U$ (or even for any choice of $U$), the map $$e\colon\ {\frak g}= {\frak g}_1\oplus U\oplus{\frak g}_2\ \to\ G,\quad g_1+u+g_2\ \mapsto\ \exp(g_1)\cdot\exp(u)\cdot\exp(g_2)$$ will be an isomorphism of varieties. This will answer your question. @Mikhail Borovoi, it would be great if it works. Note that for arbitrary $U$ it doesn't work even for trivial $\mathfrak g_2$ and one-dimensional $\mathfrak g_1$, unless $\mathfrak g_1$ is in the center. The answer is "no". In this paper: Winkelmann, J., On free holomorphic ℂ-actions on ℂ^n and homogeneous Stein manifolds, Math. Ann. 286, 593–612 (1990), a free affine linear action of $G_a\times G_a$ on $\mathbb{C}^6$ is given in such a way that the quotient is not an affine variety. So for $X=G_a^6$, $G=G_a^2$, we obtain a counter-example. By the way, in op. cit. this action is reduced to a triangular algebraic action of $G_a$ on $\mathbb{C}^5$. There is also an example there of a free triangular algebraic action of $G_a$ on $\mathbb{C}^4$ with non-Hausdorff quotient. It is not hard to check that the class of triangular algebraic actions coincides with the class of actions of $G_1$ on $G/G_2$, where $G_1, G_2$ range over unipotent subgroups of unipotent groups $G$.
2025-03-21T14:48:31.213746
2020-06-08T10:42:24
362499
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629993", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362499" }
Stack Exchange
Does a total preorder on lotteries that preserves countable mixtures preserve arbitrary mixtures? Let $X$ be a countable set. A lottery on $X$ is a function $\lambda: X \to [0,1]$ such that $\sum_x \lambda(x) = 1$. Let $\Delta X$ be the set of lotteries on $X$. A total preorder $\preceq$ on $\Delta X$ is a transitive, reflexive, complete binary relation on $\Delta X$. Let $(I, \mathcal I)$ be a measurable space. Note that if $\lambda_i$ is a family of lotteries on $X$ indexed by $I$ and $p$ is a probability measure on $(I, \mathcal I)$, then the function $\int\lambda_ip(di)$ on $X$ defined by $\big(\int\lambda_ip(di)\big)(x) = \int \lambda_i(x)p(di)$ is a lottery on $X$. Let $\lambda_i$ and $\mu_i$ be families of lotteries on $X$ indexed by $I$. The total preorder $\preceq$ is discretely mixing just in case if $\lambda_i \preceq \mu_i$ for all $i \in I$, then $\int \lambda_i p(di) \preceq \int\mu_ip(di)$ for all discrete probabilities $p$ on $(I, \mathcal I)$. Question. If $\preceq$ is discretely mixing, then is $\preceq$ totally mixing in the following sense: If $\lambda_i \preceq \mu_i$ for all $i \in I$, then $\int \lambda_i p(di) \preceq \int \mu_i p(di)$ for all probability measures on $(I, \mathcal I)$? I suspect the answer is negative, but have not found a counterexample.
2025-03-21T14:48:31.213855
2020-06-08T11:57:39
362501
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Adam P. Goucher", "Gordon Royle", "N math", "https://mathoverflow.net/users/1492", "https://mathoverflow.net/users/152342", "https://mathoverflow.net/users/39521" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629994", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362501" }
Stack Exchange
Cayley graphs on $Z_{11}$ and $Z_p$ I want to find all Cayley graphs on $Z_{11}$. I know how many connected Cayley graphs exist but I want to find all of them, connected or not, to find their eigenvalues. I found some of them and a theorem about isomorphism of Cayley graphs on $Z_p$, where $p$ is a prime number. Also I tried to work with GAP to construct these Cayley graphs but I can't. I know Cayley graphs on $Z_p$ are circulant. Is there any special property about the Cayley graph on $Z_p$? Or is there any category for them? Thanks for your help. The empty graph is the only disconnected Cayley graph on 11 vertices, so you should now be done. This paper should be everything you need: https://core.ac.uk/download/pdf/81945449.pdf @Gordon Royle Thanks for your answer but I know the number of these graphs. I want to find the graphs to obtain their eigenvalues. @Adam P.Goucher thanks for your answer but I want to construct these graphs on $Z_{11}$ to find their spectra. These are symmetric circulant matrices, see https://en.wikipedia.org/wiki/Circulant_matrix and scroll down to “Symmetric circulant matrices”. @Gordon Royle I studied that before but I couldn't use that. For $Z_{11}$, when $|S|=4$, $S$ is a symmetric generating set for the Cayley graph, we have 210 choices for this generating set. @Gordon Royle I mean $Z_{11}$. Also I found this corollary (at the top of 448) https://books.google.com/books?id=5l5ps2JkyT0C&pg=PA446&lpg=PA446&dq=cayley+graph%2BZ_p%2Ba+course+in+combinatorics&source=bl&ots=wVZR19KXtC&sig=ACfU3U2Ihs1Hm7ZbOuQjm1-qgkT_BR2tgA&hl=en&sa=X&ved=2ahUKEwjNuc3D5_PpAhWnyKYKHYBzApEQ6AEwAHoECAYQAQ#v=onepage&q=cayley%20graph%2BZ_p%2Ba%20course%20in%20combinatorics&f=false and I used this for some generating sets. But my problem is there are 210 choices just for $|S|=4$, and I have to check all of the generating sets with different cardinalities. If you are interested in graphs (not digraphs), then the elements of the connection set must come in pairs, so you are only looking at subsets $$ C \subseteq \{\pm1, \pm2, \pm3, \pm4, \pm5\}. $$ Moreover, we know that the graph with connection set $C$ is isomorphic to the graph with connection set $kC$ ($k \ne 0$ and all calculations mod 11) so if the graph is not empty, we can assume that $\pm 1 \in C$. So if $|C| = 1$, we get the $11$-cycle. If $|C| = 2$, then either $C = \{\pm 1, \pm 2\}$ or $C = \{\pm 1, \pm 3\}$ (the choices $C = \{\pm 1, \pm4\}$ and $C = \{ \pm 1, \pm 5\}$ are each isomorphic to one of the previous ones.) If $|C| > 2$ then the graph is the complement of one already found. So you only have three graphs and their complements to check. Great! Thanks for your complete answer.
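For readers who want the spectra explicitly, here is a small Python sketch (my own addition, not from the thread; it assumes numpy is available) that builds the circulant adjacency matrices for the three connection sets singled out in the answer and prints their eigenvalues. The complements, and hence all Cayley graphs on $Z_{11}$, can be handled the same way.

```python
import numpy as np

n = 11
# Connection sets from the answer, written as residues mod 11 (closed under negation).
connection_sets = {
    "C = {+-1}":        {1, 10},
    "C = {+-1, +-2}":   {1, 2, 9, 10},
    "C = {+-1, +-3}":   {1, 3, 8, 10},
}

for name, C in connection_sets.items():
    # Circulant adjacency matrix: vertex i is adjacent to vertex j iff (i - j) mod 11 lies in C.
    A = np.array([[1 if (i - j) % n in C else 0 for j in range(n)] for i in range(n)])
    # A is symmetric, so eigvalsh returns the real spectrum of the Cayley graph.
    eigs = np.round(np.sort(np.linalg.eigvalsh(A)), 4)
    print(name, eigs)
```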
2025-03-21T14:48:31.214056
2020-06-08T12:48:51
362502
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Christopher Ryba", "Ion Nechita", "Mark Wildon", "https://mathoverflow.net/users/159272", "https://mathoverflow.net/users/30138", "https://mathoverflow.net/users/7709" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629995", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362502" }
Stack Exchange
Structure constants for the double coset algebra of a Young subgroup Fix a Young subgroup $H_\lambda \subseteq \mathcal S_n$, where $\lambda \vdash n$ is a partition of $n$ with $k$ blocks. Inside the group algebra $\mathbb C[\mathcal S_n]$, consider the idempotent $$\varepsilon = \frac{1}{|H_\lambda|}\sum_{h \in H_\lambda} h.$$ The double cosets $H_\lambda \backslash \mathcal S_n / H_\lambda$ are indexed by matrices $k \times k$ with non-negative entries and row (resp. column) sums given by the blocks of $\lambda$ (see Richard Stanley's answer to Number of double cosets of a Young subgroup). For a representative $\alpha_x \in \mathcal S_n$ of a given coset, define the element $$\sigma_x = \varepsilon \alpha_x \varepsilon,$$ which is an element of the corresponding Hecke algebra; above, $x$ is a matrix with non-negative entries with row and column sums given by $\lambda$. I would like to know how to find the structure constants of the Hecke algebra, i.e. the coefficients $\gamma_{x,y}^z$ such that $$\sigma_x \cdot \sigma_y = \sum_z \gamma_{x,y}^z \sigma_z.$$ Example In the case $\mathcal S_p \times \mathcal S_q \subseteq \mathcal S_{n=p+q}$, the double cosets can be indexed by integers $0 \leq x \leq \min(p,q)$, where $x$ represents the number of elements of $[1,p]$ mapped to $[p+1,p+q]$. Using basic combinatorics, I can show that $$\sigma_x \sigma_1 = \frac{1}{pq} \left( (p-x)(q-x) \sigma_{x+1} + x^2 \sigma_{x-1} + (x(p-x)+x(q-x))\sigma_x \right).$$ There is a combinatorial rule for the structure constants. It appears in Section 2 of https://arxiv.org/abs/1104.1959, although it is not immediately clear that it is indeed what you are looking for. Suppose that $V$ is a vector space of dimension at least $l(\lambda)$. Then in the setting of Schur-Weyl duality, there is a simultaneous action of $GL(V)$ and $S_n$ on $V^{\otimes n}$. We note that the $\lambda$-weight space of $V^{\otimes n}$ is precisely the permutation module $M^\lambda$ induced from the trivial module of $H_\lambda$. As a representation of $\mathbb{C}S_n$, $\mathbb{C}S_n \varepsilon$ is $M^\lambda$. So the Hecke algebra you are implicitly considering is $$ \mathrm{End}_{\mathbb{C}S_n}(M^\lambda). $$ This is related to $\mathrm{End}_{\mathbb{C}S_n}(V^{\otimes n})$, which is why the Schur algebra (appearing in the paper) is relevant here. However, the most convenient basis is given by $$ \xi_x = \frac{1}{|H_\lambda|}\sum_{g \in H_\lambda \alpha_x H_\lambda} g, $$ which is to say, double-coset sums divided by $|H_\lambda|$, which is a different normalisation to your elements $\sigma_x$. Explicitly, $$ \sigma_x = \frac{|H_\lambda \cap \alpha_x H_\lambda \alpha_x^{-1}|}{|H_\lambda|}\xi_x, $$ and it is a standard lemma that if $\alpha_x$ is a minimal-length double-coset representative, $H_\lambda \cap \alpha_x H_\lambda \alpha_x^{-1}$ is also a Young subgroup, so its size can easily be written down. Note that the double cosets $H_\lambda \alpha_x H_\lambda$ are indexed by square matrices of size $l(\lambda)$ whose row and column sums are $\lambda$. The $(i,j)$-th entry counts how many elements of $\{1,2,\ldots,n\}$ permuted by $S_{\lambda_j}$ are mapped by some (and therefore any) element of the double coset to elements permuted by $S_{\lambda_i}$. So we let the indexing variables $x,y,z$ be matrices of that form, and write them with subscripts (e.g. $x_{ij}$) to refer to the entries of the corresponding matrices. To calculate the coefficient of $\xi_z$ in $\xi_x \xi_y$, we consider "cubic matrices of size $l(\lambda)$" (i.e. 
$l(\lambda) \times l(\lambda) \times l(\lambda)$ 3-tensors) $A_{ijk}$ with entries in $\mathbb{Z}_{\geq 0}$. We require that $$ \sum_i A_{ijk} = x_{jk} $$ $$ \sum_j A_{ijk} = z_{ik} $$ $$ \sum_k A_{ijk} = y_{ij} $$ Then the coefficient is $$ \sum_A \prod_{i,k}\frac{(\sum_j A_{ijk})!}{\prod_j A_{ijk}!}, $$ subject to the conditions on $A$ already mentioned. Let's check this for your example. We take the matrix $x$ to be $$ \begin{bmatrix} p-x & x \\ x & q-x \end{bmatrix}. $$ (Here there is a conflict of notation between the label of the double coset, and the number of elements it "mixes" between $S_{p}$ and $S_q$.) We take the matrix $y$ to be $$ \begin{bmatrix} p-1 & 1 \\ 1 & q-1 \end{bmatrix}. $$ So we are going to calculate $\xi_x \xi_y$. Typesetting a 3-tensor $A_{ijk}$ is tricky, so let's consider the "slices" $A_{i1k}$ and $A_{i2k}$ separately. The row-sums of $A_{i1k}$ are $(p-x,x)$ and the column sums are $(p-1,1)$. There are two possibilities for $A_{i1k}$: $$ \begin{bmatrix} p-x & 0 \\ x-1 & 1 \end{bmatrix}, \begin{bmatrix} p-x-1 & 1 \\ x & 0 \end{bmatrix}. $$ Similarly, for $A_{i2k}$ we have row-sums $(x,q-x)$ and column sums $(1,q-1)$, so we get $$ \begin{bmatrix} 1 & x-1 \\ 0 & q-x \end{bmatrix}, \begin{bmatrix} 0 & x \\ 1 & q-x-1 \end{bmatrix}. $$ So there are four ways of combining these into the full tensor $A_{ijk}$. For each of these, we get a multiple of $\xi_z$, where $z = A_{i1k}+A_{i2k}$, and the scalar multiple is a product of (very simple) binomial coefficients. Choosing the first option for each of $A_{i1k}$ and $A_{i2k}$ gives $z$ with off-diagonal entries $x-1$, and the scalar multiple is $(p-x+1)(q-x+1)$. Choosing the last of each gives $z$ with off-diagonal entries $x+1$, and the scalar multiple is $(x+1)^2$. Each of the other two choices gives $z$ with off-diagonal entries $x$, and the scalar multiples are $x(p-x)$ and $x(q-x)$. To recover your equation, we just need to know that if $z$ has off-diagonal entries equal to $r$, then $$ \sigma_r = \frac{1}{{p \choose r}{q \choose r}} \xi_r, $$ so for example, you can immediately multiply your equation by $pq$ to turn $\sigma_1$ into $\xi_y$ (and clear a denominator on the right). I have left out some details to keep this post from being egregiously long, but I would be happy to explain further or give references if it would be helpful. What is described in this post is just the tip of the iceberg, for example it's possible to interpolate these structure constants to define certain "permutation module Deligne Categories", such as in https://arxiv.org/abs/1909.04100. Welcome to MathOverflow! Dear Christopher, welcome to MO and thanks a lot for taking the time to write such an insightful and detailed comment. Not only do you answer precisely the question I had, the background you provide is very helpful. Note that the elements $\sigma_x$ I consider are averages over the double cosets, so my normalization is the inverse of the size of the double coset, while yours is the inverse of the size of $H_\lambda$. Also, this means that the fraction in your equation relating $\sigma_x$ and $\xi_x$ should be inverted, right? Thanks again for your help. Thanks for the kind words! I believe the relation between $\sigma_x$ and $\xi_x$ is correct as stated. I am using the fact that the size of the $(H,K)$-double coset $HgK$ in a group $G$ is $|H||K|/|H \cap gKg^{-1}|$. In particular, I am taking $H=K=H_\lambda$ and $g=\alpha_x$. (But your comment would apply if the numerator of the fraction was the size of the double coset.)
Dear Christopher, you are of course right, I was confused about the formula for the size of the double coset. Thanks again!
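To make the 3-tensor rule above concrete, here is a brute-force Python sketch (my own addition, not part of the answer; the function and variable names are mine). It enumerates the admissible tensors $A_{ijk}$ for given matrices $x,y,z$ and sums the multinomial terms. On the worked example above with $p=3$, $q=2$ and both $x$ and $y$ having off-diagonal entry $1$, it reproduces the coefficients $6$, $3$ and $4$ for $z$ with off-diagonal entries $0$, $1$ and $2$.

```python
from itertools import product
from math import factorial

def coefficient(x, y, z):
    """Coefficient of xi_z in xi_x * xi_y, per the 3-tensor rule above.

    x, y, z are m x m integer matrices with the same row/column sums; A runs over
    m x m x m tensors of nonnegative integers with
      sum_i A[i][j][k] = x[j][k],  sum_j A[i][j][k] = z[i][k],  sum_k A[i][j][k] = y[i][j].
    """
    m = len(x)
    cells = [(i, j, k) for i in range(m) for j in range(m) for k in range(m)]
    bound = max(max(row) for row in x) + 1  # each entry of A is at most a marginal of x
    total = 0
    for entries in product(range(bound), repeat=len(cells)):
        A = dict(zip(cells, entries))
        if any(sum(A[(i, j, k)] for i in range(m)) != x[j][k] for j in range(m) for k in range(m)):
            continue
        if any(sum(A[(i, j, k)] for j in range(m)) != z[i][k] for i in range(m) for k in range(m)):
            continue
        if any(sum(A[(i, j, k)] for k in range(m)) != y[i][j] for i in range(m) for j in range(m)):
            continue
        term = 1
        for i in range(m):
            for k in range(m):
                block = factorial(z[i][k])          # (sum_j A_ijk)! = z_ik!
                for j in range(m):
                    block //= factorial(A[(i, j, k)])
                term *= block
        total += term
    return total

# Worked example above: p = 3, q = 2, x and y both with off-diagonal entry 1.
p, q = 3, 2
x = [[p - 1, 1], [1, q - 1]]
y = [[p - 1, 1], [1, q - 1]]
for r in (0, 1, 2):
    z = [[p - r, r], [r, q - r]]
    print("off-diagonal", r, "->", coefficient(x, y, z))   # prints 6, 3, 4
```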
2025-03-21T14:48:31.214473
2020-06-08T13:02:26
362503
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Ben McKay", "BinAcker", "https://mathoverflow.net/users/102114", "https://mathoverflow.net/users/13268" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629996", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362503" }
Stack Exchange
Exponential of mixed-type End-valued differential form Let $E\rightarrow \mathbb{P}^1$ be a complex vector bundle and let $a_{(0,0)},a_{(1,0)},a_{(0,1)},a_{(1,1)}$ be differential forms such that $a_{(i,j)}\in\Omega^{i,j}(\mathbb{P}^1,End(E))$. I would like to compute the degree (1,1) part of the form $$Tr(\exp(a_{(0,0)}+a_{(1,0)}+a_{(0,1)}+a_{(1,1)}))\in\Omega^{*,*}(\mathbb{P}^1,\mathbb{C})$$ Any idea please? I am not clear how to define the exponential, because they don't commute. @BenMcKay this is why it is harder to compute, but since you can multiply them the exponential is defined
2025-03-21T14:48:31.214541
2020-06-09T20:35:42
362646
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Ilya Bogdanov", "Sam Hopkins", "https://mathoverflow.net/users/17581", "https://mathoverflow.net/users/25028" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629997", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362646" }
Stack Exchange
Is there a characterization for the finite sequences of natural numbers which are the shifts of a permutation? Given a natural number $k<n$ and a permutation $\sigma\in S_n$, I will call $k$ a shift of $\sigma$ if there is some $m$ with $\sigma(m)-m\equiv k \mod n$. If one is given a sequence $s=(x_1,...,x_n)$ of $n$ natural numbers between $0$ and $n-1$, is there some known way to determine whether $s$ represents the shifts of some permutation in $S_n$? If not, I am also interested in relevant heuristics. Thank you. See https://artofproblemsolving.com/community/c6h85076p494833 @IlyaBogdanov: that link characterizes sequences which are sums (equivalently, differences) of two permutations. But the question here is slightly different, because here one of the permutations is assumed to be the identity. Does the same result still apply? @SamHopkins Yes; if $\sigma$ and $\tau$ are two permutations with prescribed differences, then $\sigma\tau^{-1}$ is a permutation with the same multiset of shifts. @IlyaBogdanov: yes, but it was unclear to me whether the question was about multisets of shifts or sequences of shifts. @SamHopkins Well, for a sequence it is rather trivial, isn’t it? The characterization is ‘$x_i+i$ are pairwise distinct modulo $n$’... @IlyaBogdanov: ah of course, I see.
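The criterion from the comments ($x_i+i$ pairwise distinct modulo $n$) is easy to test; here is a tiny Python sketch (my own illustration; the helper name is made up).

```python
def is_shift_sequence(xs):
    """True iff (x_1,...,x_n) is the shift sequence of some permutation of an
    n-element set, i.e. the values x_i + i are pairwise distinct modulo n
    (the characterization from the comments above)."""
    n = len(xs)
    return len({(x + i) % n for i, x in enumerate(xs, start=1)}) == n

# Toy examples with n = 3: the constant sequence (1,1,1) comes from the 3-cycle,
# while (0,0,1) cannot arise as the shift sequence of any permutation.
print(is_shift_sequence([1, 1, 1]))   # True
print(is_shift_sequence([0, 0, 1]))   # False
```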
2025-03-21T14:48:31.214666
2020-06-09T20:52:59
362647
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Iosif Pinelis", "Ivan Meir", "Sam Hopkins", "Will Sawin", "https://mathoverflow.net/users/142929", "https://mathoverflow.net/users/18060", "https://mathoverflow.net/users/25028", "https://mathoverflow.net/users/36721", "https://mathoverflow.net/users/7113", "user142929" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629998", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362647" }
Stack Exchange
Conceptual explanation of geometric mean as a limit of power means Let $x_1,\dots,x_n$ be positive real numbers and $p\in\mathbb{R} -\{0\}$. The power mean $M_p(x_1,\dots,x_n)$ of exponent $p$ is defined by $$ M_p(x_1,\dots,x_n)=\left( \frac 1n\sum_{i=1}^n x_i^p \right)^{1/p}. $$ By taking logarithms and applying L'Hôpital's rule (or just the definition of a derivative), we get $$ \lim_{p\to 0} M_p(x_1,\dots,x_n) = \sqrt[n]{x_1\cdots x_n}, $$ the geometric mean of $x_1,\dots,x_n$. Thus the "correct" definition of $M_0$ is $M_0(x_1,\dots,x_n) = \sqrt[n]{x_1\cdots x_n}$. All this is well known, but I am wondering if there is some conceptual explanation, not involving computation, for the value of $M_0(x_1,\dots,x_n)$. Small, not really conceptual comment: suppose we allow some $x_i$ to be $0$ as well; then $\lim_{p\to \infty}M_p$ is totally insensitive to having a single value of $x_i$ equal to zero; whereas the ``opposite'' limit $\lim_{p\to 0}M_p$ is "totally sensitive" to a single zero value. Unrelated to your nice post (if it is required I can delete this comment), is that I tried in recent months and weeks write variations of certain unsolved problems using combinations with certain means, for example the more recent is the post with title On variations of a claim due to Kaneko in terms of Lehmer means of Mathematics Stack Exchange. I mean, we can define $M_p$ as the unique solution $\mu$ to $\sum_{i=1}^n \int_{x_i}^\mu y^{p-1} dy = 0$ and then for $p=0$ we get the geometric mean. In case no one has thought of this, a trivial observation that lets you guess the value of $M_0$ immediately with no calculation is that as $p$ approaches $0$ the $x_i^p$'s get closer and closer to 1 and the inequality $M_p\geq GM$ moves towards its condition of equality (from AM-GM applied to the $x_i^p$'s). $(\frac{a^{1/n} + b^{1/n}}{2})^n$ has normalized symmetric binomial (i.e., close to normal) distribution of coefficients convolved with some bounded expressions in $a$ and $b$ (of the form $a^\theta b^{1 - \theta}$). The normal distribution is sharply peaked around its mean so you end up with the middle terms with $\theta = 1/2$ in the limit of large $n$. This generalizes to more variables and weights. That is still a calculation, but one where the answer is easy to foresee at the outset and the idea can be explained in a few words. A reasonably intuitive way to "see" that the limit must be the geometric mean is the plausible and useful observation that any power mean can be expressed in terms of the midpoint means, $M_p(x,y)=((x^p+y^p)/2)^{1/p}$, recursively if the number of variables is not a power of 2. See my answer to a related question. Simple algebra then proves that for $n=2$, and all $p\neq0$, $M_p M_{-p}=x_1x_2$ so letting $p\rightarrow0$ we have $M_0^2=x_1x_2$ and the general limit value follows immediately. Note that this doesn't use calculus, transcendental functions, or in fact anything other than the power means themselves and their continuity. Update: A further low tech way to see the result uses functional equations. We simply note that the power means satisfy $$M_{rp}(x_1,\cdots,x_n)=M_p(x_1^r,\cdots,x_n^r)^{1/r}.$$ Setting $p=0$ gives $$M_{0}(x_1,\cdots,x_n)=M_0(x_1^r,\cdots,x_n^r)^{1/r}$$ for all $r\in \mathbb{R}-\{0\}.$ Then it is intuitively clear since $M$ is symmetric and $M_p(\lambda \mathbb x)=\lambda M_p(\mathbb x)$ that $M_0$ must be the geometric mean. You can prove this formally by induction starting with $n=2$. Let $f(x)=M_0(x,1)=f(x^r)^{1/r}$, by the above. 
Then setting $x=e$, $r=\log X$ we have $f(X)=f(e)^{\log X}=X^{\log f(e)}=X^\mu$ where $\mu$ is constant. Hence $M_0(x,y)=yM_0(x/y,1)=yf(x/y)=y(x/y)^\mu=y^{1-\mu}x^{\mu}$. Since $M_0$ is symmetric in $x$ and $y$ we have $\mu=1/2$ and $M_0(x,y)=x^{1/2}y^{1/2}$. The other cases $n>2$ follow in a similar manner. Further update: Actually probably the most intuitive way is just to note, as Iosif did as well, that AM-GM or Jensen's inequality tells you $M_p\geq GM\geq M_{-p}$. Then just take the limit as $p\rightarrow 0$. Concerning your Update: When you say "Setting $p=0$", you set $p=0$ in an identity which you assume to hold only for $p\ne0$, right? Concerning your Further update: Did you notice that using the AGM/Jensen inequality is what had been done in my answer? After that, the main problem was to "just take the limit as $p\to0$". Well, that was the problem stated in the OP from the very beginning. @IosifPinelis In my answers I was more concerned with explaining the value since that was requested by the OP - "I am wondering if there is some conceptual explanation...for the value of $M_0$" So when I say "setting $p=0$" I am assuming the limit exists. Yes I did notice after I added my further update that you had used the AGM inequalities - again I assumed the limit exists. $\newcommand\o\overline$ This proof uses only the arithmetic-geometric mean (AGM) inequality and the fact that for any smooth even function $g\colon\mathbb R\to\mathbb R$ we have $g'(0)=0$. To simplify the writing, for any function $f\colon\mathbb R\to\mathbb R$ let $$\o{f(x)}:=\frac1n\,\sum_1^n f(x_i).$$ We have to show that $$M_p:=(\o{x^p})^{1/p}\to M_0:=\exp\,\o{\ln x}$$ as $p\to0$. Take any real $p>0$. Replacing the $x_i$'s in the AGM inequality $$\o x\ge \exp\,\o{\ln x} \tag{1}$$ by the $x_i^p$'s, we have $M_p\ge M_0$. Similarly, replacing the $x_i$'s in (1) by the $x_i^{-p}$'s, we have $M_{-p}\le M_0$. So, $$M_{-p}\le M_0\le M_p.$$ It remains to show that $M_p/M_{-p}\to1$ as $p\downarrow0$ or, equivalently, that $$g(p):=\ln\o{x^p}+\ln\o{x^{-p}}=o(p),$$ which follows because the function $g$ is smooth and even, with $g(0)=0$. $\Box$ The expression $\exp\,\o{\ln x}\,[=(x_1\cdots x_n)^{1/n}]$ for the geometric mean arises naturally, as an instance, with $f=\exp$, of the more general mean of the form $f\big(\o{f^{-1}(x)}\big)$ with a continuous increasing function $f$. So, the geometric mean is just a logarithmically/exponentially re-scaled version of the arithmetic mean. Also, the AGM inequality (1) is an instance of Jensen's inequality for the concave function $\ln$ or, equivalently, for the convex function $\exp$. I am not sure if the following is a conceptual explanation rather than a calculation: Writing $x_i=e^{u_i}$ and letting $p\to0$, we have $$M_p=\Big(\frac1n\,\sum_1^n e^{pu_i}\Big)^{1/p} =\Big(1+\frac p{n+o(1)}\,\sum_1^n u_i\Big)^{1/p}\to\exp\Big(\frac1n\,\sum_1^n u_i\Big)=M_0,$$ where $M_r:=M_r(x_1,\dots,x_n)$. (I guess in any case we need to show that $M_p\to M_0$ as $p\to0$. Here, at least we do not explicitly use the l'Hospital rule or differentiation.)
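For anyone who wants to see the limit, and the sandwich $M_{-p}\le M_0\le M_p$ used above, numerically, here is a short Python sketch (my own illustration of the formulas above, not part of any of the answers).

```python
from math import exp, log

def power_mean(xs, p):
    """M_p for p != 0; as p -> 0 this should approach the geometric mean."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

def geometric_mean(xs):
    return exp(sum(log(x) for x in xs) / len(xs))

xs = [1.0, 2.0, 7.0, 0.3]
print("geometric mean:", geometric_mean(xs))
for p in (1.0, 0.1, 0.01, 0.001):
    # M_{-p} <= geometric mean <= M_p, with both sides converging as p -> 0.
    print(f"p = {p:6.3f}:  M_-p = {power_mean(xs, -p):.10f}   M_p = {power_mean(xs, p):.10f}")
```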
2025-03-21T14:48:31.215091
2020-06-09T21:14:30
362650
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Jochen Glueck", "Mark", "erz", "https://mathoverflow.net/users/102946", "https://mathoverflow.net/users/119514", "https://mathoverflow.net/users/53155" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:629999", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362650" }
Stack Exchange
Principal ideals in Banach lattices Let $E$ be a Banach lattice. Then $u \in E_+$ is said to be a quasi-interior point of $E$ if $$E_u:=\{f \in E:\exists\, c\geq 0 \text{ such that } |f| \leq cu\}$$ is dense in $E.$ Let $\Omega$ be a bounded domain with $C^{\infty}$-boundary and define $u(x)=\operatorname{dist}(x,\partial \Omega).$ Describe the sets $\left(L^p(\Omega)\right)_{u^2} \, (1\leq p <\infty)$ and $\left(C_0(\Omega)\right)_{u}.$ My attempt: Let $E=L^p(\Omega)$ and $f \in E_{u^2}.$ Then there exists $c\geq 0$ such that $|f| \leq c u^2.$ Therefore $|f|(x) \leq c \operatorname{dist}(x,\partial \Omega)^2$ for all $x.$ Therefore $f \in L^{\infty}(\Omega).$ Therefore $E_{u^2} \subseteq L^{\infty}(\Omega).$ The converse inclusion is obviously true and so $E_{u^2} = L^{\infty}(\Omega).$ For similar reasons, $\left(C_0(\Omega)\right)_{u}=L^{\infty}(\Omega).$ Am I right? "The converse inclusion is obviously true" is false. Any ideal of $C_0$ has to consist of continuous functions "The converse inclusion is obviously true" is also false for $L^p$. For instance, the constant function with value $1$ is not contained in $(L^p(\Omega))_{u^2}$ Hmm, I'm not quite sure what kind of description you are looking for. We have $(L^p(\Omega))_{u^2} = \{f \in L^p(\Omega): \exists\, c \ge 0 \text{ such that } |f| \le cu^2\}$ and $(C_0(\Omega))_u = \{f \in C_0(\Omega): \exists\, c \ge 0 \text{ such that } |f| \le cu\}$ by the very definition of principal ideals, and I find this already quite explicit; actually, I don't think we'll get anything more explicit than that if we want to describe these principal ideals as subspaces of $L^p(\Omega)$ and $C_0(\Omega)$. [to be continued] [continuation] However, we can isometrically isomorphically describe these principal ideals if we endow them with the gauge norm: the space $(L^p(\Omega))_{u^2}$ is isometrically lattice isomorphic (but not equal) to $L^\infty(\Omega)$, and the space $(C_0(\Omega))_u$ is isometrically lattice isomorphic (but of course not equal) to $C_b(\Omega)$. @JochenGlueck Lattice isomorphic seems nice enough. I was mainly trying to figure out how to show an element is in the principal ideal, so I thought it would be nice to consider examples. Say, for example, I already know $f \in C^{k}(\overline{\Omega})$ for sufficiently large $k,$ and I want to show $f \in \left(L^p(\Omega)\right)_{u^2}.$ Is it sufficient to show $f$ is bounded? @Mark: Well, lattice isomorphic is indeed nice, but not the right notion if you want to check that a concrete function is contained in a principal ideal. If you have a certain function $f \in L^p(\Omega)$ and want to prove that $f \in (L^p(\Omega))_{u^2}$, then I think there's no other way than just proving that there exists a constant $c \ge 0$ such that $|f| \le c u^2$. "Bounded and smooth" is of course not enough since the constant function with value $1$ is bounded and smooth, but not in $(L^p(\Omega))_{u^2}$. [to be continued.] [continuation] But smoothness up to the boundary can help you to prove that $|f| \le cu^2$ for some number $c \ge 0$: If $f \in C^2(\overline{\Omega})$ and $f$ vanishes on the boundary of $\Omega$ and the normal derivative of $f$ vanishes on the boundary, too, then $|f|$ can indeed be dominated by a multiple of $u^2$ (this is mainly the definition of the derivative - I think the best way to get an intuition for this is to first consider the one-dimensional case $\Omega = (0,1)$). 
@JochenGlueck Hmm, so if $f \in C^2(\overline{\Omega}),$ then $f$ is bounded and $|f|$ would be dominated by a multiple of $u^2$ if $f=0$ whenever $u=0?$ Why do we also need the normal derivative of $f$ to vanish on the boundary? @Mark: I think it is really helpful to consider the one-dimensional case - say on the interval $\Omega = (-1,1)$: then $u(x) = 1-|x|$ for all $x \in \Omega$. Now consider, say, the function $f$ given by $f(x) = \cos(\frac{\pi}{2}x)$ for $x \in \Omega$. Then $f$ vanishes on the boundary of $\Omega$, but the (normal) derivative of $f$ does not vanish there. If you draw a sketch of $f$ and $u^2$, you can easily see that $f (=|f|)$ is not dominated by a multiple of $u^2$ since $u^2$ vanishes "quadratically" at $-1$ and $1$, while $f$ vanishes only "linearly". @JochenGlueck I see. I tried to see other examples too in the one-dimensional case and it does seem that when $f$ and $f'$ vanish on the boundary, then $|f|$ is dominated by a multiple of $u^2.$ But I don't see algebraically the role of the derivative here. How would one prove this rigorously? @Mark: Taylor approximation of order $2$ implies that $f$ is dominated by $u$ close to the boundary of $\Omega$; and in a compact subset of $\Omega$, there is no problem since $u$ is bounded away from $0$ there.
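The one-dimensional discussion in the comments is easy to check numerically; the following Python sketch (my own toy computation, with $\Omega=(-1,1)$ and $u(x)=1-|x|$) compares a function vanishing only to first order at the boundary with one vanishing to second order.

```python
from math import cos, pi

# Omega = (-1, 1), u(x) = 1 - |x| as in the comments above.
u  = lambda x: 1 - abs(x)
f1 = lambda x: cos(pi * x / 2)        # vanishes at the boundary, but its derivative does not
f2 = lambda x: (1 - x * x) ** 2       # vanishes to second order at the boundary

# f1/u^2 blows up near the boundary, so f1 is not in the principal ideal generated by u^2;
# f2/u^2 stays bounded (it tends to (1+x)^2 -> 4 at x = 1).
for x in (0.9, 0.99, 0.999, 0.9999):
    print(f"x = {x}:  f1/u^2 = {f1(x) / u(x)**2:12.2f}   f2/u^2 = {f2(x) / u(x)**2:8.4f}")
```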
2025-03-21T14:48:31.215395
2020-06-09T21:15:02
362651
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ABIM", "Andres Koropecki", "erz", "https://mathoverflow.net/users/19393", "https://mathoverflow.net/users/36886", "https://mathoverflow.net/users/53155" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630000", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362651" }
Stack Exchange
Existence of topologically transitive map on Euclidean space I was reading this post and wondered. Does there exist a topologically transitive (TT) map $f:\mathbb{R}^n\to\mathbb{R}^n$ when $n\geq 2$? I know that post asks for compactness and topological mixing but if we relax the requirement to only TT is it possible? Note: If $\mathbb{R}^n$ is replaced by an infinite-dimensional Hilbert space, then the Ansari-Bernal theorem guarantees such a map exists; moreover it can be linear... So maybe it can exist in the finite-dimensional case? It's overkill, but the answer to the post you cited in your question says that there is a Baire-generic subset of topologically mixing diffeomorphisms in the space of volume-preserving diffeomorphisms of any closed manifold of dimension greater than $1$. You can use this in the $n+1$-dimensional sphere. If you pick any diffeomorphism of this sphere with a hyperbolic fixed point, then arbitrarily close to it there is a topologically mixing diffeomorphism which still has a fixed point. Removing this point you get a topologically mixing (in particular transitive) diffeomorphism of $\mathbb{R}^n$ Here is an explicit answer. First, define a continuous tent map $w$ on $[1,2]$ with $w(1)=w(2)=0$, and $w(3/2)=4$. Then define $w:[0,+\infty)\to [0,+\infty)$ by pasting scaled versions of this tent infinitely on both sides so that the graph consists of a sequence of congruent triangles. Then for every open non-empty $U\subset [0,+\infty)$, there are $k\in\mathbb{Z}$ and $m\in\mathbb{N}$ such that $$r>m \ \to \ [0,2^{k+2r}]\subset w^{2r}(U).$$ (This is proved in Silverman - On Maps with Dense Orbits and the Definition of Chaos, p 360.) Now define $v:[0,+\infty)^n\to [0,+\infty)^n$ by $v(x_1,...,x_n)=(w(x_1),...,w(x_n))$. Let $U\subset [0,+\infty)^n$ be open and non-empty. There are $U_i\subset [0,+\infty)$, such that $U_1\times...\times U_n\subset U$. From the above property of $w$ there are $k_1,...,k_n\in\mathbb{Z}$ and $m\in\mathbb{N}$ such that $$r>m \ \to \ [0,2^{k_1+2r}]\times...\times[0,2^{k_n+2r}] \subset v^{2r}(U_1\times...\times U_n)\subset v^{2r}(U)$$ Now $[0,+\infty)^n$ is homeomorphic to $W=[0,+\infty)\times(-\infty,+\infty) ^{n-1}$ and the boundaries are also homeomorphic. Below I will view $v$ as a map on $W$. Then the set $\{(x_1,...,x_n), x_1=0\}$ is invariant with respect to $v$. Also, if $U\subset W$ is open and nonempty, then $\bigcup v^{r}(U)= W$. Let $h:\mathbb{R}^n\to \mathbb{R}^n$ be the reflection $h(x_1,...,x_n)=(-x_1,x_2,...,x_n)$. Define $f:\mathbb{R}^n\to \mathbb{R}^n$ by $$f\left(x_1,...,x_n\right)=\left\{\begin{array}{ll} h(v(x_1,...,x_n)) & x_1\ge 0 \\ v(h(x_1,...,x_n)) & x_1\le 0 \end{array}\right.$$ When $x_1=0$, $h$ does nothing, but also since $v$ maps boundary points into boundary points, the first coordinate of $v(x_1,...,x_n)$ is $0$. So the map is well-defined and continuous. Observe that $f^2|_{W}=v^2$, and so if $U\subset \mathbb{R}^n$ is open and nonempty, one can show that $\bigcup f^{r}(U)= \mathbb{R}^n$, which implies Topological Transitivity. The last couple of steps you use to modify the $n$-fold tent map are very nice. @AnnieLeKatsu thanks, but for the most part it's just a modification of Silverman's construction
2025-03-21T14:48:31.215733
2020-06-09T21:22:58
362652
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Adam P. Goucher", "M. Winter", "Sam Hopkins", "https://mathoverflow.net/users/108884", "https://mathoverflow.net/users/25028", "https://mathoverflow.net/users/39521" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630001", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362652" }
Stack Exchange
Can we combine the symmetries of two polytopes to create a more symmetric polytope? Suppose that there are two combinatorially equivalent (convex) polytopes $P_1,P_2\subset\Bbb R^d$, that is, both with the same face lattice $\mathcal L$. The symmetry group $\mathrm{Aut}(P_i)\subset\mathrm O(\smash{\Bbb R^d})$ of $P_i$ induces a group $\Sigma_i\subseteq\mathrm{Aut}(\mathcal L)$ of combinatorial symmetries on the face lattice. Now, we can generate the larger group $\Sigma:=\langle \Sigma_1,\Sigma_2\rangle\subseteq\mathrm{Aut}(\mathcal L)$ of symmetries of $\mathcal L$. Question: Is there also a realization of $\mathcal L$ as a polytope $P\subset\smash{\Bbb R^d}$ whose symmetry group $\mathrm{Aut}(P)$ induces (at least) $\Sigma$? As an example, here are two quadrangles, one is vertex-transitive, one is edge-transitive, and they combine to a quadrangle that is vertex- and edge-transitive. Your previous question seems very related: https://mathoverflow.net/questions/311877/can-the-graph-of-a-symmetric-polytope-have-more-symmetries-than-the-polytope-its/ @SamHopkins Please help me if I am missing something, but the new thing in above questios is, that I already know that the symmetries I am asking for can in fact be realized. Sure, I'm just saying, you're again asking for a bigger set of symmetries. @M.Winter Do you require equality to hold, i.e. must Aut(P) be equal to $\Sigma$, or merely contain $\Sigma$? @Adam Indeed, I only require that $\Sigma$ is contained. Thanks for the note. Ah, sorry, I guess I really meant this question of yours is related: https://mathoverflow.net/questions/308430/can-we-realize-a-graph-as-the-skeleton-of-a-polytope-that-has-the-same-symmetrie
2025-03-21T14:48:31.215876
2020-06-09T21:34:45
362655
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andy Sanders", "https://mathoverflow.net/users/33741", "https://mathoverflow.net/users/49247", "leo monsaingeon" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630002", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362655" }
Stack Exchange
how to construct a finite energy map In the construction of harmonic maps by Eells and Sampson, one needs to start with a map with finite energy and use the heat equation to deform it into a harmonic map. The construction of such a finite energy map seems to be the starting point. I want to know if there is a systematic way to construct a finite energy map with possible singularities on the domain or on the target. To my knowledge, there is a construction in rank 1 locally symmetric spaces other than real hyperbolic and complex hyperbolic space by Gromov and Schoen. Also, when the rank $\ge 2$, some people require the domain to be compact to ensure the existence of this finite energy map. I am curious about more general situations, i.e. other than the compact case. Many thanks! Welcome to MO. Perhaps you should help out your fellow mathematicians here by recalling the definition of the energy in this very general context? You can edit your question after it is posted. Without some requirement on the map, e.g. its homotopy class, or some equivariance, you can always just take the constant map, which has energy equal to zero. Also, for Gromov-Schoen, those symmetric spaces are the domain, and the target is a p-adic building, and the maps are equivariant for a p-adic representation of a lattice, which is quite a bit more exotic than the Eells-Sampson work. It might be more instructive to look for some "natural" settings where there can't be a finite energy map. I am not sure that this is an answer but: there is a paper from 1993 ("Geometric Superrigidity") by Mok--Siu--Yeung which reproves the superrigidity results of Margulis for cocompact irreducible lattices of higher rank using differential geometric methods (harmonic maps) instead of the dynamical/ergodic theory methods of Margulis. This differential geometric proof is available only in the case of cocompact lattices. The reason is, as far as I remember, the following: consider a nonuniform lattice $\Gamma$ of higher rank, acting on "its" symmetric space $X$ and a representation $\varrho : \Gamma \to {\rm Isom}(Y)$ where $Y$ is another symmetric space of noncompact type (we assume that $\varrho$ has large enough image, I won't give the precise statement). In that case it is not known in general how to build a $\varrho$-equivariant map of finite energy $f : X \to Y$, for a nonuniform $\Gamma$. The "reason" is that ends of locally symmetric spaces in higher rank are harder to understand than in rank $1$... so the differential geometric proof of superrigidity currently applies only in the cocompact case. (all of this is discussed in that paper of D. Fisher, page 18: https://arxiv.org/pdf/0809.4849.pdf). Conclusion: already for locally symmetric spaces, the problem you mention is still open in higher rank. If you have a finite energy retraction onto a compact set, then you will be in good shape, besides the paper of Gromov--Schoen this is also discussed in Koziarz--Maubon (Annales Fourier).
2025-03-21T14:48:31.216096
2020-06-09T22:00:03
362656
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Lyonel", "YCor", "https://mathoverflow.net/users/134603", "https://mathoverflow.net/users/14094" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630003", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362656" }
Stack Exchange
Linear Lie algebra generated by $\mathbb{R}$-diagonalizable matrices If $\mathfrak{gl}_n(\mathbb{R})$ denotes the Lie algebra of real $(n \times n)$-matrices, then the $\mathbb{R}$-diagonalizable matrices generate $\mathfrak{gl}_n(\mathbb{R})$ as a Lie algebra. If $\mathfrak{g}<\mathfrak{gl}_n(\mathbb{R})$ is an arbitrary Lie subalgebra, then it need not be generated by the $\mathbb{R}$-diagonalizable matrices which it contains. In fact, $\mathfrak{g}$ might not contain any diagonalizable matrices (e.g. if it is generated by a single non-diagonalizable matrix). What are criteria which guarantee that $\mathfrak{g}$ is generated by $\mathbb{R}$-diagonalizable matrices? In particular, is it sufficient for $\mathfrak{g}$ to be simple and non-compact? Yes: if $K$ is a field of characteristic zero and $\mathfrak{g}$ is semisimple with no $K$-anisotropic factor (when $K=\mathbf{R}$, $K$-anisotropic means compact) then it is generated (and even linearly spanned) by its $K$-diagonalizable elements. And more generally if $\mathfrak{g}$ is perfect (trivial abelianization) with no compact factor. Since $\mathfrak{g}$ is perfect, it is the Lie algebra of some algebraic subgroup of $\mathrm{GL}_n$. The set of $K$-diagonalizable elements in $\mathfrak{g}$ is stable under scalar multiplication, and under the action of $G$. Hence its span is a $G$-invariant subspace of $\mathfrak{g}$, hence is an ideal $I$. If by contradiction $I\neq \mathfrak{g}$, let $J$ be a maximal proper ideal of $\mathfrak{g}$ containing $I$. Hence $\mathfrak{g}/J$ is either 1-dimensional or simple. The first case is excluded since $\mathfrak{g}$ is perfect. In the second case, the quotient is $K$-isotropic by assumption. It is a split quotient (by standard structure theory), but then we have a simple $K$-isotropic subalgebra $\mathfrak{s}\subset\mathfrak{g}$ with no nontrivial $K$-diagonalizable element. This is a contradiction: indeed a maximal $K$-split abelian subalgebra of $\mathfrak{s}$ is positive-dimensional and consists of $K$-diagonalizable elements. Addendum: In the above setting, every element is a sum of $K$-diagonalizable elements. Hence every element of $\mathfrak{g}$ is a sum of $m$ $K$-diagonalizable elements, for some $m$ (e.g., $m=\dim(\mathfrak{g})$ works). One can wonder whether one can take $m$ small, e.g., $m=2$? It seems we can't expect some universal $m$: In $\mathfrak{so}(n,1)$, if I'm correct the dimension of those $\mathbf{R}$-diagonalizable elements is $\le 2n$ and hence $m\ge (n+1)/4$. This looks like exactly what I hoped for, but in the hard part (dealing with $\mathfrak{g}/J$ in the simple case) you use some terminology I am not familiar with. Does "split quotient" just mean that the sequence $J \to \mathfrak{g} \to \mathfrak{g}/J$ splits (certainly $\mathfrak{g}/J$ is not $K$-split simple)? Also, do you have a reference for the fact that every simple $K$-isotropic subalgebra of $\mathfrak{gl}_n(K)$ contains a maximal $K$-split abelian subalgebra of $K$-diagonalizable element over general fields of characteristic 0? For split: yes this is what I mean (yes split in this meaning is unrelated to $K$-split; in French it's 2 different words: scindé vs $K$-déployé). For the second: it's a trivial fact for an arbitrary subalgebra (just choose one of maximal dimension); what I use is that for a $K$-isotropic one it's positive-dimensional. But this is essentially the definition of $K$-isotropic.
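As a toy illustration of the addendum (every element is a sum of $K$-diagonalizable elements), here is a small Python check in $\mathfrak{sl}_2(\mathbb{R})$ — my own example, not from the answer, assuming numpy: the nilpotent matrix $E$ is not $\mathbb{R}$-diagonalizable, but it is the difference of two $\mathbb{R}$-diagonalizable trace-zero matrices.

```python
import numpy as np

# E is nilpotent (only eigenvalue 0, not diagonalizable), yet E = M1 - M2 with
# M1, M2 in sl_2(R) and both R-diagonalizable (distinct real eigenvalues +1, -1).
E  = np.array([[0.0, 1.0], [0.0, 0.0]])
M1 = np.array([[1.0, 1.0], [0.0, -1.0]])
M2 = np.array([[1.0, 0.0], [0.0, -1.0]])

print("eigenvalues of M1:", np.linalg.eigvals(M1))   # [ 1. -1.]
print("eigenvalues of M2:", np.linalg.eigvals(M2))   # [ 1. -1.]
print("E == M1 - M2:", np.allclose(E, M1 - M2))      # True
```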
2025-03-21T14:48:31.216319
2020-06-09T22:01:05
362657
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Guilherme", "Willie Wong", "https://mathoverflow.net/users/156344", "https://mathoverflow.net/users/3948" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630004", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362657" }
Stack Exchange
How to find the conserved quantities of the Kirchhoff equation? Consider the Kirchhoff equation, given by $$u_{tt}-\left(1+\int_{\mathbb{R}} u_x^2\;dx\right)u_{xx}+f(u)=0, \quad (x,t) \in \mathbb{R}\times \mathbb{R}_+$$ where $f(u)=u-u^{2r+1}$, for $r \in \mathbb{N}$. How to find the conserved quantities of this equation? Some of them: Multiply by $u_x$ and integrate, you get $$ \int u_{tt} u_x ~dx = 0 $$ so $$ \partial_t \int u_{t} u_x ~dx - \int u_t u_{tx} ~dx = 0 $$ the second term integrates to zero. Multiply by $u_t$ and integrate by parts, you get $$ \int u_{tt} u_t + (1 + \int u_x^2 ~dx) u_{xt} u_x + uu_t - u^{2r+1} u_t ~dx = 0 $$ This you can rewrite as $$ \partial_t \int \frac12 u_t^2 + \frac12 (1 + \frac12 \int u_x^2~dx) u_x^2 + \frac12 u^2 - \frac1{2r+2} u^{2r+2} ~dx = 0$$ Why, multiplying by $u_x$ and integrating, do we get only $\int u_{tt}u_x \; dx=0$? And the rest of the terms? Integration of a total derivative yields 0. $$\int u_{xx} u_x = \frac12\int \partial_x (u_x)^2 = 0$$ and similarly the $f(u)$ terms. $$ \int f(u) u_x = \int \partial_x F(u) = 0$$ where $F$ is the primitive to $f$. Very good. Thank you!
2025-03-21T14:48:31.216417
2020-06-09T23:19:13
362663
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Math is like Friday", "Pierre PC", "https://mathoverflow.net/users/129074", "https://mathoverflow.net/users/157350" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630005", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362663" }
Stack Exchange
Martingale convergence theorem in Polya's urn I want to get checked if my attempt is okay. First off, let me briefly describe what Polya's urn is: A certain urn initially contains a red and a blue ball. We now repeatedly do the following : we (uniformly at random) pick a ball from the urn, and then put it back in together with an additional ball of the same colour. Let $X_n$ denote the number of red balls and $Y_n$ denote the fraction of red balls after we’ve done this $n$ times, i.e. $$ Y_n = \frac{X_n}{n+2}. $$ And the following is my attempt so far. (let me assume that $Y_n$ is a martingale without showing it) \begin{align*} X_{n+1} &= X_n + 1_{\{\text{$n+1^{th}$ ball is red}\}} \\ X_n &= 1 + \sum_{1\leq i\leq n}1_{\{\text{$i^{th}$ ball is red}\}} \\ \to EX_n &= 1 + \sum_{1\leq i\leq n}E[1_{\{\text{$i^{th}$ ball is red}\}}] = 1 + nE[1_{\{\text{$i^{th}$ ball is red}\}}] = 1 + \frac{n}{2} \end{align*} since the indicator function takes only the values $0$ or $1$ with the same probability here (picking either ball has the same probability). Thus, $EX_n = 1 + \frac{n}{2} = \frac{n+2}{2}$, so $EY_n = \frac{1}{2}$. So now we know the expectation of $Y_n$, but is it enough to say that $Y$ is the uniform distribution on $[0,1]$? (where $Y_n\xrightarrow[]{a.s.}Y$ (Martingale convergence theorem)) Thank you for your help in advance! If you know that $Y_n$ is a martingale, then obviously $\mathbb E[Y_n]=\mathbb E[Y_0]=1/2$. You're more or less proving that $Y_n$ is a martingale again. Also no, it is not enough, $Z_n:=1/2$ is a martingale with expectation $1/2$. @PierrePC Thanks for letting me know. But how did you derive that $\mathbb{E}[Y_n] = \mathbb{E}[Y_0] = 1/2$ by the fact that $Y_n$ is a martingale? @PierrePC Also can you explain what I should show/prove in order to show that $Y$ is uniformly distributed on $[0,1]$? Thank you in advance! $\mathbb E[Y_{n+1}] = \mathbb E[\mathbb E[Y_{n+1}|\mathcal F_n]] = \mathbb E[Y_n] = \cdots = \mathbb E[Y_0]$. About the uniform distribution of $Y$ in $[0,1]$, I'd have to think about it but I would use very different methods. In any case, I don't think this question is research level, and I would advise you to post it on math.SE instead. @PierrePC Thanks for your advice, but it's too late to delete this post and move it to math. I did not know the difference between here and math stack exchange, it won't happen again The answer is yes, the distribution of $Y_n$ converges to the uniform distribution on $[0,1]$. More generally, if we initially have $r$ red and $b$ blue balls in the urn, then the distribution of the proportion of the red balls in the urn converges to the beta distribution with parameters $r,b$; see e.g. Section 4. Moreover, in your case of $r=b=1$, letting $$p_{n,k}:=P(X_n=k),$$ it is easy to check directly by induction on $n$, using the recursion $$p_{n,k}=p_{n-1,k}\frac{n+1-k}{n+1}+p_{n-1,k-1}\frac{k-1}{n+1}$$ for natural $n$ and $k$, that $$p_{n,k}=\frac1{n+1}\,1\big\{k\in\{1,\dots,n+1\}\big\}$$ for $n=0,1,\dots$; that is, $X_n$ has the discrete uniform distribution on the set $\{1,\dots,n+1\}$. Therefore, it is clear that the distribution of $Y_n=\frac{X_n}{n+2}$ converges to the uniform distribution on $[0,1]$.
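To see the uniform limit empirically, here is a short Python simulation of the urn (my own sketch; the number of draws and samples are arbitrary choices). With $n=1000$ draws the histogram of $Y_n$ is already close to flat on $[0,1]$.

```python
import random
from collections import Counter

def polya_fraction(n, rng=random):
    """Simulate n draws of the urn above (start: 1 red, 1 blue) and return
    the fraction of red balls Y_n = X_n / (n + 2)."""
    red, total = 1, 2
    for _ in range(n):
        if rng.random() < red / total:   # draw a red ball with probability red/total
            red += 1
        total += 1
    return red / total

samples = [polya_fraction(1000) for _ in range(5000)]
# If Y_n is approximately uniform on [0,1], each of the 10 bins should hold about 10% of the mass.
hist = Counter(int(10 * y) for y in samples)
print([round(hist[b] / len(samples), 3) for b in range(10)])
```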
2025-03-21T14:48:31.216625
2020-06-10T06:55:44
362677
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Alex Gavrilov", "Donu Arapura", "Piotr Achinger", "https://mathoverflow.net/users/3847", "https://mathoverflow.net/users/4144", "https://mathoverflow.net/users/9833" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630006", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362677" }
Stack Exchange
Topological information in sheaf cohomology? Let $X$ be a projective complex manifold and $E\to X$ a holomorphic vector bundle. When $E=X\times\mathbb{C}$ there is an injection (for any $n$) $$H^n(X, \mathcal{O}(E))=H^{0,n}(X)\to H^n(X,\mathbb{C}).$$ The question is about possible maps $$H^n(X, \mathcal{O}(E))\to \text{some ``topological'' group}$$ in cases when $E$ is not trivial and $X$ is simply connected. It seems unlikely that topological invariants of this kind exist in general, but I am curious if there is anything interesting in special cases. Where did you get that isomorphism from? For $X$ an elliptic curve, $H^1(X, \mathcal{O}_X)$ is one-dimensional while $H^1(X, \mathbb{C})$ is two-dimensional. I do not understand the second paragraph. P.S. Maybe the following is of interest to you: if $E$ carries a holomorphic integrable connection $\nabla$, then the horizontal sections of $E$ form a local system of $\mathbb{C}$-vector spaces $\mathcal{E}$ on $X$ and there is an isomorphism $H^*_{\rm dR}(X, E) \simeq H^*(X, \mathcal{E})$, where the first group is the de Rham cohomology of $E$ i.e. hypercohomology of the complex $(\Omega^\bullet_X \otimes E, \nabla)$. I edited it so hopefully now it makes some sense. @Piotr Achinger, If a connection is integrable and the base is simply connected, doesn't it mean that the bundle is trivial? Even if $X$ is simply connected, a Zariski open $U\subset X$ need not be. Suppose additionally that $D=X-U$ is a divisor with normal crossings. If one starts with a unitary representation of $\pi_1(U)$, forms the associated vector bundle, and extends it to a bundle $E$ on $X$ à la Deligne, then this gives a nontrivial example along the lines of your question. I have no idea if this is the sort of thing you want, however.
2025-03-21T14:48:31.216779
2020-06-10T07:49:32
362679
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Franz Lemmermeyer", "Gerry Myerson", "Mr Pie", "https://mathoverflow.net/users/115623", "https://mathoverflow.net/users/158000", "https://mathoverflow.net/users/3503" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630007", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362679" }
Stack Exchange
A constant bizarrely related to the Fibonacci Numbers For roughly the past month, I have been studying denesting radicals. For example: the expression $\sqrt[3]{\sqrt[3]2-1}$ is a radical expression that contains another radical expression, so this radical is nested. Is there a way to express this with radicals that are not (or not as) nested? Writing it in this way is referred to as denesting, and indeed, there is one such way. Ramanujan mysteriously found that $$\sqrt[3]{\sqrt[3]2-1}=\sqrt[3]{\frac 19}-\sqrt[3]{\frac 29}+\sqrt[3]{\frac 49}.$$ I found that this is also equal to $\sqrt{\sqrt[3]{\frac 43}-\sqrt[3]{\frac 13}}$, which does not denest it, but it does write the expression under a radical of a coprime degree, which fascinates me. I have found an abundance of results, one of which fascinates me the most, and is on the constant, $$1-\sqrt[3]{\frac 12}+\sqrt[3]{\frac 14}.$$ This constant is equal to all of the expressions below. Note that the degrees of the radicals are, lo and behold, the Fibonacci numbers! $$\sqrt{\frac 32\bigg(\sqrt[3]2-\frac 1{\sqrt[3]2}\bigg)}$$ $$\sqrt[3]{\frac{3^2}{2^2}\big(\sqrt[3]2-1\big)}$$ $$\sqrt[5]{\frac{3^3}{2^3}\bigg(\frac 3{\sqrt[3]2}-\sqrt[3]2-1\bigg)}$$ $$\sqrt[8]{\frac{3^5}{2^5}\bigg(4-\frac{5}{\sqrt[3]2}\bigg)}$$ $$\sqrt[13]{\frac{3^8}{2^8}\bigg(1+\frac{17}{\sqrt[3]2}-\frac{23}{\sqrt[3]4}\bigg)}$$ and, slightly breaking the pattern, $$\sqrt[21]{\frac{3^{14}}{2^{14}}\big(41-59\sqrt[3]2+21\sqrt[3]4\big)}$$ and presumably, this list goes on forever (but the numbers start becoming pretty big). So... what on earth is going on here? It appears that for some $n$th Fibonacci number $F_n$, this is equal to (for at least most of the time), $$\sqrt[F_n]{\frac{3^{F_{n-1}}}{2^{F_{n-1}}}\big(a+b\sqrt[3]2+c\sqrt[3]4\big)}$$ for some $\{a, b, c\}\subset \mathbb R$, with a minimal polynomial of $4x^3-12x^2+18x-9$. Can anybody explain these wild affairs? Don't know of any other appropriate tags Your element is equal to $(1 - \sqrt[3]{2}+\sqrt[3]{4})/\sqrt[3]{4}$, i.e., to the quotient of elements with norms $3$ and $4$ in the number field ${\mathbb Q}(\sqrt[3]{2})$. If you raise this to the $n$-th power, you get an $n$-th power, which is hardly surprising. @FranzLemmermeyer should you be referring to Galois theory, I have not examined this part to construct nested radicals, and have purely been tackling this from an algebraic approach, given thus far my limited knowledge of matrices, vector spaces and linear transformations; however, does your comment explain the forms possessed by the Fibonacci-type nested radicals? You've only given examples with Fibonacci roots. Are you sure that there are none with 4th, 7th or other roots? @FranzLemmermeyer there appear to be none of this particular form. Although similar, the exponents raised by $3/2$ do not seem to share a pattern relative to the degree of the radical put over it, as in the case of the Fibonacci degrees. In fact, sometimes the radicands would have $3^m/2^n$ for $m\neq n$ unlike in the Fibonacci-type examples. But if you give no conditions on the number in the brackets, you may pull any power of $3$ and $2$ in front of the bracket. $14$ is not a Fibonacci number. Was the second-last display meant to involve $3^{13}/2^{13}$? @GerryMyerson it involved 14. This was the only one I found to break the pattern. @FranzLemmermeyer You are right. I suppose the condition would have to be $\gcd(a,b,c)=1$, although I thought this was natural to assume. 
First of all we get $A=1-\sqrt[3]{\frac{1}{2}}+\sqrt[3]{\frac{1}{4}}=\frac{3\sqrt[3]{2}}{2(\sqrt[3]{2}+1)}$ ... (1) And from here you can prove by induction... $A^{F_n}=A^{F_{n-1}}\cdot A^{F_{n-2}}$ If $A^{F_{n-1}}=(\frac{3}{2})^{F_{n-2}}(a_{n-1}+b_{n-1}\sqrt[3]{2}+c_{n-1}\sqrt[3]{4})$ and $A^{F_{n-2}}=(\frac{3}{2})^{F_{n-3}}(a_{n-2}+b_{n-2}\sqrt[3]{2}+c_{n-2}\sqrt[3]{4})$ From (1) we get $A^{F_2}$ and $A^{F_3}$, so we shall get the rest by induction, where $F_2=2, F_3=3$. $\frac{3}{2}(\frac{\sqrt[3]{2}}{\sqrt[3]{2}+1})^2=\sqrt[3]{2}-\sqrt[3]{\frac{1}{2}}$ And, $\frac{3}{2}(\frac{\sqrt[3]{2}}{\sqrt[3]{2}+1})^3=\sqrt[3]{2}-1$
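The displayed identities are easy to confirm numerically; the following Python sketch (my own check, plain floating point) compares $A^{F_n}$ with each radicand. The printed differences are tiny, limited only by floating-point cancellation in the later radicands.

```python
c = 2 ** (1 / 3)                       # cube root of 2
A = 1 - 1 / c + 1 / c ** 2             # the constant 1 - (1/2)^(1/3) + (1/4)^(1/3)

# Radicands from the question, indexed by the Fibonacci exponent F_n.
checks = {
    2:  (3 / 2) * (c - 1 / c),
    3:  (9 / 4) * (c - 1),
    5:  (27 / 8) * (3 / c - c - 1),
    8:  (243 / 32) * (4 - 5 / c),
    13: (3 ** 8 / 2 ** 8) * (1 + 17 / c - 23 / c ** 2),
    21: (3 ** 14 / 2 ** 14) * (41 - 59 * c + 21 * c ** 2),
}
for fib, radicand in checks.items():
    print(f"F = {fib:2d}:  A^F = {A ** fib:.12f}   radicand = {radicand:.12f}   diff = {abs(A ** fib - radicand):.2e}")
```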
2025-03-21T14:48:31.217042
2020-06-10T08:00:31
362680
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Piotr Achinger", "https://mathoverflow.net/users/3847", "https://mathoverflow.net/users/64302", "user2520938" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630008", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362680" }
Stack Exchange
Milnor fibration for non-isolated singularities Can someone provide a precise reference/statement of the fibration theorem for non-isolated singularities? So given $f:\mathbb{C}^n\to \mathbb{C}$, and an $x\in Sing(f)$, what exactly does the fibration theorem say about the local structure of $f$ around $x$? Of course for isolated singularities there is Milnor's well known book, but for the non-isolated case I cannot find a reference. Dung Tráng Lê, Some remarks on relative monodromy, Real and complex singularities (Proc. Ninth Nordic Summer School/NAVF Sympos. Math., Oslo, 1976), Sijthoff and Noordhoff, Alphen aan den Rijn, 1977, pp. 397– 403. MR 0476739 (57 #16296) Maybe also J. L. Cisneros-Molina, J. Seade, and J. Snoussi, Refinements of Milnor’s fibration theorem for complex singularities, Adv. Math. 222 (2009), no. 3, 937– 970. MR 2553374 (2010k:32042) @PiotrAchinger Thanks! The first book seems very difficult to find unfortunately. Since I am mostly interested in ambient space being $\mathbb{C}^n$, in particular smooth, is it fair to say that: 1) Milnor showed completely generally that $f/|f|:S_\epsilon\setminus Z(f)\to S^1$ is a fibration (without assumption on $f$) 2) Milnor showed that the fibers of $f/|f|$ are diffeomorphic to $f^{-1}(\delta)\cap B_\epsilon$ for small $\delta$ and $\epsilon$ (again in full generality) 3) Others showed that $f:B_\epsilon\setminus Z(f)\to B_\delta$ is a fibration in general
2025-03-21T14:48:31.217159
2020-06-10T08:02:21
362681
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ABIM", "https://mathoverflow.net/users/36886" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630009", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362681" }
Stack Exchange
Breaking up dense subset in non-separable space Let $X$ be a not necessarily separable (infinite-dimensional) Banach space and $D\subseteq X$ be a dense linearly independent subset. Then does there exist a set of infinite-dimensional separable Banach subspaces $\{X_i\}_{i \in I}$ of $X$ with the property that: $D\cap X_i$ is dense in $X_i$, $\bigcup_{i \in I} X_i$ is dense in $X$? For every countable subset $M\subseteq D$ set $X_M=\overline{\text{span}(M)}$ (with the norm of $X$). These spaces seem to satisfy your requirements. Am I missing something? No, I think I was missing something... Thank you.
2025-03-21T14:48:31.217227
2020-06-10T08:53:41
362687
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "https://mathoverflow.net/users/89236", "user89236" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630010", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362687" }
Stack Exchange
Explanation for devissage argument Let $K$ be a local field of characteristic $0$ with the ring of integers $\mathcal{O}_K$ and uniformizer $\pi$. Let $k$ be the residue field of $K$ with $\text{card}(k)=q$. Let $\mathcal{O}_\mathcal{E}$ be the $\pi$-adic completion of $\mathcal{O}_K((u))$, where $u$ is a fixed local co-ordinate. Then $\mathcal{O}_\mathcal{E}$ is a complete local ring with uniformizer $\pi$ and residue field $E:=k((u))$. Let $\mathcal{E}$ be the field of fractions of $\mathcal{O}_\mathcal{E}$. Let $\widehat{\mathcal{E}^{ur}}$ be the completion of the maximal unramified extension of $\mathcal{E}$. Let $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ denote the ring of integers of $\widehat{\mathcal{E}^{ur}}$. Then $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$ is a complete local ring with uniformizer $\pi$ and residue field $E^{sep}$. Then we have the following exact sequence \begin{equation*} 0\rightarrow k \rightarrow E^{sep}\xrightarrow{x\mapsto x^q-x}E^{sep}\rightarrow 0. \end{equation*} In other words, the sequence \begin{equation*} 0\rightarrow \mathcal{O}_K/\pi\mathcal{O}_K \rightarrow \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\xrightarrow{x\mapsto x^q-x} \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\rightarrow0 \end{equation*} is exact as $k$ is the residue field of $\mathcal{O}_K$ and $E^{sep}$ is the residue field of $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$. Then by devissage the sequence \begin{equation} 0\rightarrow \mathcal{O}_K/\pi^n\mathcal{O}_K \rightarrow \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\xrightarrow{x\mapsto x^q-x} \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}/\pi^n \mathcal{O}_{\widehat{\mathcal{E}^{ur}}}\rightarrow0, \end{equation} is exact for all $n\geq1$. I don't want to say the word "by devissage" and want to write the explicit proof. I am trying induction on $n$, but somehow I am not able to prove that the sequence is exact. Is there any other way to prove the exactness of this sequence? Yes, it is a lift of Frobenius. We have a map, say $\varphi_q$ on $E^{sep}$, which sends $x$ to $x^q$, where $q$ is some power of $p$ for a fixed prime number $p$. Then we lift this map to $\mathcal{O}_{\widehat{\mathcal{E}^{ur}}}$.
2025-03-21T14:48:31.217369
2020-06-10T11:01:43
362692
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Geva Yashfe", "Tom Copeland", "darij grinberg", "https://mathoverflow.net/users/12178", "https://mathoverflow.net/users/2530", "https://mathoverflow.net/users/75344" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630011", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362692" }
Stack Exchange
Extending submodular functions from a sublattice This came about when I was studying the connection between matroids and strong greedoids, but it has broken through into a subject I am not particularly familiar with: submodular functions on lattices. Let $L$ be a finite lattice (in the sense of combinatorics). Let $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $. A function $f:L\to \mathbb{N}$ is said to be submodular if it satisfies $f\left( a\right) +f\left( b\right) \geq f\left( a\wedge b\right) +f\left( a\vee b\right) $ for all $a,b\in L$. isotone if it satisfies $f\left( a\right) \leq f\left( b\right) $ whenever $a,b\in L$ satisfy $a\leq b$. 1-continuous if it satisfies $f\left( b\right) -f\left( a\right) \in\left\{ 0,1\right\} $ whenever $a,b\in L$ satisfy $a\lessdot b$ (that is, $a<b$ but there exists no $c\in L$ satisfying $a<c<b$). Note that the first two of these notions are standard, while the third is mine. Now, assume that $L$ is the Boolean lattice $2^{E}$ of a finite set $E$ (so that the order relation $\leq$ on $L$ is the relation $\subseteq$ on $2^{E}$). Thus, the 1-continuous isotone submodular functions $f:L\to\mathbb{N}$ satisfying $f\left( \varnothing\right) =0$ are precisely the rank functions of matroids on the ground set $E$. If we drop the "1-continuous", then we obtain the rank functions of polymatroids instead. Note that "$a\lessdot b$" is equivalent to "$a \subseteq b$ and $\left|b \setminus a \right| = 1$" for any $a, b \in L = 2^E$. Let $M$ be a sublattice of $L$, by which I mean a subset of $L$ that is a lattice when equipped with the partial order it inherits from $L$ and that has the same $0$, $1$, $\wedge$ and $\vee$ as $L$. (This may or may not be some people's definition of a sublattice.) Let $g:M\to\mathbb{N}$ be a function. An extension of $g$ to $L$ will mean a function $f:L\to\mathbb{N}$ such that $f\mid_{M}=g$. Theorem 1. If $g$ is an isotone submodular function on $M$, then there exists an isotone submodular extension of $g$ to $L$. This theorem is (a particular case of) Lemma 5.1 in Donald M. Topkis, Minimizing a Submodular Function on a Lattice, Operations Research 26, No. 2 (Mar. - Apr., 1978), pp. 305--321. The proof defines the extension $f:L\to\mathbb{N}$ of $g$ to $L$ by setting \begin{align} f\left( y\right) =\min\left\{ g\left( x\right) \ \mid\ x\in L\text{ satisfying }x\geq y\right\} \label{darij1.eq.pf.thm1.1} \tag{1} \end{align} for all $y\in L$. It is easy to check that this extension $f$ is indeed isotone and submodular. Note that $L$ could be any finite lattice here, not necessarily a Boolean one. My question is: how well do other properties extend from $M$ to $L$ ? The specific question I am most interested in is: Question 2. If $g$ is a 1-continuous isotone submodular function on $M$, then does there exist a 1-continuous isotone submodular extension of $g$ to $L$ ? (Here, $L$ is still supposed to be Boolean.) A positive answer to Question 2 (specifically, in the case when $M$ is the lattice of order ideals of a certain poset structure on $E$) would yield a neat (if not quite one-to-one) correspondence between matroids and strong greedoids that I believe could help understand the latter. (I could elaborate if there is interest.) Note that Topkis's above construction of $f$ does not produce a 1-continuous $f$ even if it is applied to a 1-continuous $g$. Having asked Question 2, we can vary the assumptions and get curious: Question 3. If Question 2 has a positive answer, how far does it generalize? 
For example, can we replace the Boolean lattice $L$ by an arbitrary geometric lattice? ranked distributive lattice? ranked lattice? Note that we cannot get rid of the "ranked" condition completely; e.g., the rank function on the long chain of the pentagon lattice $N_{5}$ is a 1-continuous isotone submodular function, but has no 1-continuous extension to the whole $N_{5}$. For the sake of curiosity, let me mention a related result: Corollary 4. If $g$ is a submodular function on $M$, then there exists a submodular extension of $g$ to $L$. (This is just Theorem 1 with the words "isotone" removed.) An easy way to prove Corollary 4 is by piggybacking on our proof of Theorem 1: Namely, let $g$ be a submodular function on $M$, and let $\operatorname{rank} : L \to \mathbb{N}$ be the function that sends each set $a \in L$ to its size $\left|a\right|$. Pick any integer $n > \max\left\{g\left(b\right) - g\left(a\right) \mid a \lessdot b \text{ in } M\right\}$. Then, the function $g' := g + n \operatorname{rank}$ (where we add and scale functions pointwise -- so this function sends each $a \in M$ to $g\left(a\right) + n\left|a\right|$) is isotone (by the choice of $n$) and submodular (since $g$ is submodular and since $\left|a\right| + \left|b\right| = \left|a \wedge b\right| + \left|a \vee b \right|$ for any $a, b \in L$). Thus, applying Theorem 1 to $g'$ instead of $g$, we obtain an isotone submodular extension $f'$ of $g'$ defined via \eqref{darij1.eq.pf.thm1.1}. Now, it is easy to see that $f := f' - n \operatorname{rank}$ is a submodular extension of $g$. (To check this, make sure to prove that all values of $f$ are nonnegative; this relies on the specific formula \eqref{darij1.eq.pf.thm1.1}, which guarantees that the multiple of $n$ added at any given point of $M$ is larger or equal to the multiple of $n$ subtracted afterwards.) Btw, have you looked over Aquiar and Ardila https://arxiv.org/abs/1709.07504 and their use of submodular functions? @TomCopeland: Thanks for reminding me of that paper, but as always with Aguiar, a more precise reference would be appreciated... All I'm seeing so far is about submodular functions on the Boolean lattice, not on sublattices. The answer to Q2 is positive. A consequence is the answer to Q3 for $L$ any finite distributive lattice, by Birkhoff's theorem (which embeds such a lattice in a boolean lattice in such a way that a maximal chain between elements is mapped to such a maximal chain in the boolean lattice, so the proof of $1$-continuity goes through). It might be possible to extend to infinite distributive complemented lattices (using Stone's representation theorem), I'm not sure. The case of a geometric lattice should be true and not be much harder. I don't know about finite modular lattices. In any case, I hope there is a shorter, more conceptual proof. Proof: We will regard $M$ as a collection of sets closed under union and intersection. The strategy will be to gradually extend it (along with the function $r$) by steps of the following two types: If some nonempty set $A$ is inclusion-minimal in $M$, add all its subsets to $M$, and extend $M$ to the sublattice of $L$ this generates. If all inclusion-minimal subsets in $M$ are singletons (i.e. $\{x\}$ for some $x\in E$,) find some $S\in M$ which is not a union of singletons in $M$, and extend $M$ by $S \setminus T$ for $T = \{x\in E\mid \{x\}\in M, x\in S\}$ the unique maximal union of singletons in $M$ which $S$ covers. ($S$ is chosen to satisfy a certain condition, see below.) 
Steps of type 1: Consider all the atoms of $M$ - that is, the elements covering $0$. Each of these is some subset $A$ of $E$, and we extend $r$ to the minimal sublattice generated by $M$ and the singletons contained in $A$ by setting the elements of $A$ to be parallel (each of rank $r(A)$). Here "parallel" is in the matroid-theoretic sense. It means that for any set $S$ in the sublattice generated by $M$ and $\{\{a\}\mid a\in A\}$, $$r(S) = \begin{cases}r(S) & S\cap A=\emptyset \\ r(S\cup A) & \mathrm{otherwise}.\end{cases}$$ In other words, any element of $A$ spans all the rest, so the influence on the rank of adding one of them to a set is the same as adding all $A$. After enough such steps have been performed, every atom of $M$ is a singleton. Steps of type 2: Take the union $U$ of all atoms of $M$, and denote $$M_x = \bigcap_{x\in S \in M} S$$ for each $x \in E \setminus U$. The family $\{M_x\}_{x\in E \setminus U}$ has a minimal nonzero element $M_z$. Since $M_z$ does not cover $0$ in $M$ (or it would be an atom, hence contained in $U$,) it covers some unique $T=U\cap M_z$ in $2^U$ (on which we already have a matroid structure). Denote $A = M_z \setminus T$, and note that if $A$ intersects an element of $M$ nontrivially it is contained in it (else $A\cap S \ni y$ but $A\cap S \not\ni y'$ for some $y,y' \in M_z$ and some subset $S\in M$, but then $M_y \subsetneq M_z$ contradicts minimality). Hence, without loss of generality, we can think of $A$ as a singleton (and later set all its elements parallel to each other). What remains is to extend the rank function to the lattice generated by $M$ and $A$. If $r(M_z) - r(T) = 0,$ just make each element of $A$ a loop, i.e. set the rank of $A$ to $0$. Otherwise, $r(M_z) - r(T) = 1$. For each $W \in M$, define the rank of $A\cup W$ to be $$\begin{cases} r(W)+1 & T\not\subset \overline{W} \\ r(W \cup (T\cup A)) & T \subset \overline{W}, \end{cases}$$ where $\overline{W}$ is the closure, i.e. $T\subset\overline{W}$ iff $r(W)=r(W\cup T)$ (note that in the second case, the rank is already defined, since $W, (T\cup A)$ are both in $M$). This is still submodular, isotone, and continuous, and extends the rank function to the sublattice generated by $M$ and $A$. Let's verify this: Edit: The previous version of the proof had a bug: I considered only $W \subset U$ and not general $W\in M$. So there is more computation to be done. I suspect much of this can be shortened. The most unfortunate part is that this error makes the previous explanation (based on modular filters) invalid. This was more conceptual and much shorter. Submodularity: let $W_1,W_2 \in M$. If $T \subset \overline{W_1},\overline{W_2}$ then we have \begin{align} & r(W_1 \cup A) + r(W_2 \cup A) \\ &= r(W_1 \cup T \cup A) + r(W_2 \cup T\cup A) \\ &\ge r(W_1 \cup W_2 \cup T \cup A) + r((W_1 \cap W_2)\cup T \cup A). \end{align} It suffices to show that if $T \not\subset \overline{W_1 \cap W_2}$ then the last summand is at least $r(W_1 \cap W_2) + 1$. This is clear, since by assumption $$r(W_1 \cap W_2) < r((W_1 \cap W_2)\cup T) \le r((W_1 \cap W_2)\cup T \cup A).$$ If $T$ is in the closure of $W_1$ only, then \begin{align} r(W_1 \cup A) + r(W_2 \cup A) &= r(W_1 \cup T\cup A) + r(W_2) + 1 \\ &\ge r(W_1 \cup W_2 \cup T \cup A) + r(W_1 \cap W_2) + 1 \\ &= r(W_1 \cup W_2 \cup A) + r((W_1 \cap W_2) \cup A). \end{align} The case in which $T$ is in the closure of neither $W_i$ is easier, as $r(W \cup T\cup A) \le r(W\cup T) + 1 = r(W) + 1$ (apply with $W=W_1\cup W_2$). 
Monotonicity: Suppose $W\cup A \subset W'$ for $W,W' \in M$. Then by construction $W' \supset M_z,$ so $T\subset W'$. Thus either $T\subset\overline{W}$ and $r(W\cup A)=r(W\cup T\cup A)=r(W\cup M_z)$ and we conclude by monotonicity of $r$ on $M$, or otherwise $$r(W) < r(W\cup T) \le r(W').$$ Inclusions of the form $W\subset W'\cup A$ are obvious. For inclusions of the form $W\cup A \subset W' \cup A$, the case $T\subset\overline{W}$ reduces again to monotonicity of $r$ on $M$, and the other case is analogous to what we did before. 1-continuity: Suppose $W,W'\in M$ and $W\cup A \lessdot W'$ in the lattice generated by $M$ and $A$. We need to verify that $|r(W')- r(W \cup A)|\le 1$. Note that if $W \lessdot W'$ in $M$ we are done: we already have $|r(W')- r(W \cup A)|\le 1$ (because $W \le W\cup A \lessdot W'$ and $W \lessdot W'$ together imply $W=W\cup A\in M$, so this follows from 1-continuity in $M$). If not, there is some maximal chain in $M$: $$ W \lessdot W_1 \lessdot \ldots \lessdot W_n \lessdot W',$$ where $W' \supset A$. Consider the maximal $i\le n$ such that $W_i \cup A \neq W'$. If $i < n-1$, then $W_{n-1}\cup A = W_n \cup A = W'$, but then $W_n \setminus W_{n-1} \subsetneq A,$ a contradiction (as $A$ is contained in any set of $M$ it intersects). If $i=n-1$ then in particular $W\cup A \subseteq W_{n-1}\cup A \subsetneq W'$, but $W\cup A$ is covered by $W'$, so $W=W_{n-1}$ (and $i=n$ doesn't occur because it implies $W=W_n$, but we assumed $W$ is not covered by $W'$, whereas $W_n \lessdot W'$.) Thus we need to show $r(W_{n-1}\cup A) \ge r(W') - 1$. By submodularity, if $r(W_{n-1}\cup A) = r(W_{n-1})$ then also $$r(W') = r(W_n \cup A) = r(W_n),$$ but $W_n \cup A = W'$ since $i=n-1$, and therefore $r(W_n)=r(W')$, contradicting $W_n \lessdot W'$. Therefore $r(W_{n-1} \cup A) \ge r(W_{n-1})+1 \ge r(W_n)\ge r(W')-1.$ There is another case to check: for $W,W' \in M$ we need to verify that $|r(W'\cup A) - r(W\cup A)|\le 1$ if $W\cup A \lessdot W'\cup A$ in the lattice generated by $M$ and $A$. If $A\subseteq W'$ we reduce to the previous case; if not, such a covering relation implies $W\lessdot W'$. Now: If $T \subseteq \overline{W}$ then $r(W)=r(W\cup A)$ and $r(W')=r(W'\cup A)$, so 1-continuity follows from 1-continuity in $M$. Otherwise, if $T\subseteq \overline{W'}$ then $r(W\cup A)=r(W)+1 \ge r(W') = r(W'\cup A)$ and 1-continuity is satisfied. Finally, if $T \not\subseteq \overline{W'}$ then $r(W\cup A)=r(W)+1$ and $r(W' \cup A)=r(W')+1$, so again 1-continuity follows from 1-continuity in $M$. Now set all elements of $A$ parallel to each other, and continue. Edit: I forgot to mention at first that I would be interested in your application. It would be nice if you could sketch it. Another edit: I added some details to the verification of 1-continuity and tried to make it a bit more readable. I'm not sure I understand your plan: Shouldn't Step 2 be unnecessary, seeing that the lattice extension in Step 1 will eventually bring all subsets of $E$ into $M$ ? About the application to greedoids: See §4 of https://www.dropbox.com/s/bg2yw7dbjgxpkl3/stronggreed.pdf?dl=0 (the link will eventually rot, but it should be good for a month or two) for the question I'm trying to answer, and §7.2 for how I'm hoping to use Question 2 to achieve this. @darijgrinberg I don't think so: if $M$ is just a chain of sets, say $\emptyset \subset {a} \subset {a,b} \subset {a,b,c}$, steps of type $1$ can do nothing to make ${b}$ an element of $M$. Ah, I see. Will try to understand the argument now. 
@darijgrinberg I've added some further details in the last part. I think the argument is now complete (and as far as I see it is correct), so I'm done editing unless you spot some bug or have questions. This turned out longer than expected... @darijgrinberg and your application is cool! @darijgrinberg Well, I found a bug. It is fixed, but the new proof is more computational (this, and precise statement of the mistake, are in the edit). I am fairly certain that this suffices for your application to greedoids. Any comments are welcome. Because I'm too lazy to read up on lattice theory: What is "the lattice generated by $M$ and $A$" exactly, in the sense of, what is the simplest way to describe it? (I don't want to take nested intersections and unions...) @darijgrinberg For those $A$'s as in the argument above: $A$ is contained in any set in $M$ that it intersects nontrivially. So this lattice only contains $M$ and the sets of the form $B \cup A$ for $B \in M$ (you can check this collection is closed under unions and intersections). (I have not forgotten about this, just need to finish a few other things before I have the time to take a serious look at this.) Have you found any simplifications in the meantime? I'm struggling with getting any clue on what's going on in Step 2... @darijgrinberg I have not thought about it since June. I think the argument is probably correct, but I can go over it again and check more carefully, try to motivate it a little better. Would that help? Would you prefer to pass to email? Sure I wouldn't mind email. My problem is, the definition in Step 2 is just too... long. As I don't have any intuition for this subject, I can do nothing other than formal manipulations with such a definition. That makes sense. I'll get to it within a couple of weeks and probably write to you directly - the comment thread feels like a bottleneck at this point.
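Since the question and the answer are quite abstract, here is a small computational illustration (added for convenience; it is not part of the original discussion). It checks the three properties and Topkis's extension on a tiny example, reading the minimum in formula (1) as ranging over $x\in M$ with $x\geq y$ (which seems to be the intended reading, since $g$ is only defined on $M$ and the top element of $L$ lies in $M$). The ground set, the sublattice $M$ and the function $g$ below are arbitrary illustrative choices.

```python
from itertools import combinations

E = frozenset({0, 1, 2})

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

L = subsets(E)  # the Boolean lattice 2^E

# A sublattice M of L (closed under union and intersection, containing the
# bottom and top of L): here simply a chain.
M = [frozenset(), frozenset({0}), frozenset({0, 1}), E]
g = {A: len(A) for A in M}  # cardinality: isotone, submodular, 1-continuous on M

def is_isotone(f, dom):
    return all(f[A] <= f[B] for A in dom for B in dom if A <= B)

def is_submodular(f, dom):
    return all(f[A] + f[B] >= f[A & B] + f[A | B] for A in dom for B in dom)

def is_one_continuous(f, dom):
    def covers(A, B):  # covering relation of the poset (dom, inclusion)
        return A < B and not any(A < C < B for C in dom)
    return all(f[B] - f[A] in (0, 1) for A in dom for B in dom if covers(A, B))

# Topkis-style extension of g to L: f(y) = min{ g(x) : x in M, x >= y }.
f = {Y: min(g[X] for X in M if Y <= X) for Y in L}

assert all(f[A] == g[A] for A in M)  # f really extends g
print("g:", is_isotone(g, M), is_submodular(g, M), is_one_continuous(g, M))
print("f:", is_isotone(f, L), is_submodular(f, L), is_one_continuous(f, L))
```

Running it, $g$ passes all three checks, while the extension $f$ is isotone and submodular but fails 1-continuity (for instance $f(\{1\})=2$ while $f(\varnothing)=0$), which reproduces the remark in the question that Topkis's construction need not preserve 1-continuity.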
2025-03-21T14:48:31.218425
2020-06-10T11:50:07
362693
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "D.S. Lipham", "Fred Rohrer", "YCor", "erz", "https://mathoverflow.net/users/11025", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/53155", "https://mathoverflow.net/users/95718" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630012", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362693" }
Stack Exchange
A connected topological space whose points cannot be connected by irreducible components Does there exist a topological space $X$ with the following properties? $X$ is connected. The set of irreducible components of $X$ is locally finite. Not every pair of points in $X$ can be "connected by irreducible components", i.e., there exist points $x,y\in X$ such that there does not exist a finite sequence $(Z_i)_{i=0}^n$ of irreducible components of $X$ with $x\in Z_0$, $y\in Z_n$ and $Z_i\cap Z_{i+1}\neq\emptyset$ for every $i\in\{0,\ldots,n-1\}$. Note that in such a case, there are infinitely many irreducible components that are not pairwise disjoint. (I think that such a space must exist: We take an uncountable well-ordered set of irreducible spaces such that each of them meets "the next one" in a single point. However, while unsuccessfully trying to do this rigorously, I got the feeling that some understanding of ordinal numbers might be helpful, which I seemingly do not have; hence the corresponding tag) This question arose while trying to understand and compare different characterisations of connectedness of topological spaces. If I understand the question: take X=[0,3] with usual topology on [1,2] and co-finite topology on [0,1] and [2,3]. Then [0,1] and [2,3] are irreducible but cannot be joined. @erz: In your example, the set of irreducible components is not locally finite. What's meant by "the set of irreducible components of $X$ is locally finite"? First, I'm not sure what an irreducible component is in such a broad setting: is it a maximal irreducible closed subset? Does the condition mean that every point has a neighborhood meeting only finitely many irreducible components? Dear @YCor, both your guesses are correct. @YCor What is an irreducible closed subset? @YCor nevermind, it is on wikipedia @D.S.Lipham It's a nonempty closed subset that is not the union of two closed proper subsets. If $X$ is Hausdorff these are only singletons (so the singletons are the irreducible components, and the "locally finite" condition means discrete). In general, if we have a nonempty totally ordered family of closed irreducible subsets, then the closure of its union is also closed irreducible. In particular, every closed irreducible subset is contained in a maximal one. A good reference for irreducibility is Bourbaki, AC.II.4.1, or (if one can read German) Section IV.4 in Grotemeyer, _Topologie,_B.I.Hochschultaschenbücher (1969). No such space can exist. The proof doesn't use very much about irreducible components. That is, suppose $X$ is connected. Let $S$ be any set of closed subsets of $X$ which exhaust $X$ and suppose $S$ is locally finite in the sense that every point $x$ has a neighborhood $U_x$ intersecting only finitely many sets $Z_1,\ldots,Z_n$ of $S$. Then every pair of points in $X$ can be 'connected by $S$-sets' à la condition (3). The local finiteness condition can be slightly strengthened for free: every point $x$ of $X$ has a neighborhood meeting only finitely many sets in $S$, each of which contains $x$, because the intersection of $U_x$ with the complements of those $Z_i$ not containing $x$ is still open. Now for $x$ a point of $X$,let $F_x$ be the set of all points $y$ in $X$ such that there exists a finite sequence of $S$-sets between $x$ and $y$ as in (3). We will show that $F_x$ is both open and closed, which will mean that $F_x=X$. 
To see that $F_x$ is closed, let $y$ lie in the closure of $F_x$; then there exists a neighborhood $U_y$ of $y$ which intersects finitely many $S$-sets containing $y$, necessarily including one $Z$ which meets $F_x$. Then a finite sequence from $x$ to $z\in Z$ can be extended to a sequence from $x$ to $y$ by appending $Z$, so $y$ is in $F_x$. To see that $F_x$ is open, let $y$ lie in $F_x$. Then by the same strengthening there exists a neighborhood $U_y$ of $y$ contained in the union of all $S$-sets containing $y$. Certainly each of these $S$-sets is in $F_x$ so a neighborhood of $y$ is contained in $F_x$. Dear @Gabriel, thank you very much! The generality of your result helps a lot in understanding "connected by certain sets"-condition.
2025-03-21T14:48:31.219033
2020-06-10T12:09:39
362694
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "John", "Joseph O'Rourke", "https://mathoverflow.net/users/6094", "https://mathoverflow.net/users/8110" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630013", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362694" }
Stack Exchange
Do continuous motions of the vertices of convex polyhedra that maintain local convexity imply global convexity? (Reference request) A convex polyhedron has all of its internal dihedral angles in $(0, \pi)$. However, if I start with an abstract polyhedron $P$, let's say a triangulated one, so I don't have to worry about planarity of faces, and I embed each vertex $i\in V(P)$ at some point of $p_i \in \mathbb{E}^3$, measuring the convexity of the dihedral angles is not enough to check that the polyhedron is convex because it could be self intersecting and have all of its dihedral angles between 0 and $\pi$. On the other hand, assume I start with an embedded convex polyhedron $P$ and allow its vertex positions to vary continuously over some interval $t\in I$ so that the position of a vertex $i$ at time $t$ is given by a continuous function $p_i(t)$. Let $P(t)$ denote the family of polyhedra I obtain from the vertex position functions with $P(0) = P$. Suppose throughout this motion every dihedral angle takes a value within $(0, \pi)$, it seems obvious to me that $P(t)$ remains convex for all $t$. I'm looking for a reference for where this appears as a theorem (if indeed I'm not missing something and its false). Bonus points for a projective version where one vertex is allowed to be at infinity. EDIT: I am looking for this over every motion that maintains the dihedral angles in $(0, \pi)$, not just sufficiently small motions. EDIT 2: It took me writing out nearly an entire proof of the theorem above to realize how Joe O'Rourke's reference is all you need. To reproduce it here: Alexandrov's observation is that the strictly convex polyhedra are an open set of $\mathbb{R}^{3v}$. Similarly, we may observe that the strictly non-convex polyhedra form an open set of $\mathbb{R}^{3v}$. To pass through the two requires passing from one open set into another, the boundary is a non-strictly convex polyhedron. Thus to pass from one to the other at least one dihedral angle must become $0$ or $\pi$. EDIT 3: I think you also need to add that no edge length goes to 0. Otherwise you can sort of invert the polyhedron by inverting edges. Yes, I should have caught the need to avoid $|e| \to 0$. Perhaps this quote from Alexandrov's book, p.147, suffices:     Alexandrov, Alexandr D. Convex Polyhedra. Springer Science & Business Media, 2005. Thanks for the reference, though I'm don't quite see how this quite gets all the way to the result. My thinking is that Alexandrov's statement here applies only for a sufficiently small open neighborhood of $\mathbb{R}^{3v}$ and thus $t$ values close enough to 0, but not necessarily for the entire interval. I think maybe I was unclear that I'm interested in any motion maintaining the dihedral angle bounds, not just sufficiently small motions. I edited the question to clarify. Ok, I think I see how this proves it. Strict convexity is an open set of $\mathbb{R}^{3v}$ and strict non-convexity is an open set of $\mathbb{R}^{3v}$. Therefore, a continuous motion passing from one to the other must pass through a non-strictly convex state, and thus a dihedral angle has become $\pi$. Took me longer than it should have. Thanks! @John: I was out of touch for a while there, but I think Yes, strict convexity is an open set in that space.
2025-03-21T14:48:31.219291
2020-06-10T12:13:58
362695
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andreas Blass", "Cynasty", "Emil Jeřábek", "Seva", "https://mathoverflow.net/users/12705", "https://mathoverflow.net/users/153820", "https://mathoverflow.net/users/6794", "https://mathoverflow.net/users/9924" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630014", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362695" }
Stack Exchange
The average size of downward closed family of the subsets of $[n]$ is at most $n/2$? I learned that the average size in any ideal of subsets of $[n]$ is at most $n/2$, but I think the same bound also holds for any downward closed family of subsets of $[n]$. I want to know how to prove it, or whether it is wrong. A downward closed family $\mathcal{F}$ means that for any $A \in \mathcal{F}$ and $B \subseteq A$, we have $B \in \mathcal{F}$, and the ideal's definition is here https://en.wikipedia.org/wiki/Ideal_(set_theory).
Didn't you ask this question already? Why did you delete it and re-ask it again using a different account? Someone posted an irrelevant discussion under the question before, maybe I should deal with it in other ways. Please, in the future, do not repost questions in this way. In the uncommon case that someone posts a spam answer like that again, do not worry about it, it will be dealt with soon. (Note that the offending post was being heavily downvoted and, apparently, attracted spam flags. Most likely it would get deleted in short order.) It is not clear to me what you are asking. Do you mean taking the sum of the sizes of all downsets and dividing it by the total number of downsets? Or you are looking at the maximal downsets only? Do you really mean the sum, or a sort of weighted average? @Seva I think the question is not about the average size of downsets, but about the average size of sets inside any downset. I think @EmilJeřábek is right about the intended meaning. (The alternative, the average size of downsets, is $2^{n-1}$ for symmetry reasons.)
Here is a pedestrian answer. If $\def\cF{\mathcal F}\cF$ is a downward closed subset of $\mathcal P([n])$, we have $$\frac1{|\cF|}\sum_{A\in\cF}|A|=\sum_{i\in[n]}\Pr_{A\in\cF}[i\in A].$$ Now, for any $i\in[n]$, $$\Pr_{A\in\cF}[i\in A]\le\frac12,$$ because the mapping $A\mapsto A\smallsetminus\{i\}$ provides an injection $$\{A\in\cF:i\in A\}\to\{A\in\cF:i\notin A\}.$$
This is true and indeed, much more can be said: if $\mathcal F$ is a downward closed family of subsets of $[n]$, then $$ \frac1{|\mathcal F|}\, \sum_{F\in\mathcal F} |F| \le \frac12\, \log_2|\mathcal F|; $$ equivalently, if $A\subseteq\{0,1\}^n$ is a downset, then $$ \frac1{|A|}\, \sum_{a\in A} w(a) \le \frac12\, \log_2 |A|, $$ where $w(a)$ is the number of non-zero components of $a$. This is Theorem 3 of the linked paper (see also the abstract).
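A brute-force numerical sanity check of both bounds for small $n$ (added here for illustration; it is not part of the original exchange). It enumerates every nonempty downward closed family of subsets of $[n]$ and records how far the average size gets above $n/2$ and above $\frac12\log_2|\mathcal F|$; both maxima should come out $\le 0$.

```python
from itertools import combinations
from math import log2

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_downset(family):
    fam = set(family)
    return all(B in fam for A in fam for B in powerset(A))

def check(n):
    P = powerset(range(n))               # all subsets of [n]
    worst_half_n = worst_log = float("-inf")
    for bits in range(1, 2 ** len(P)):   # all nonempty subfamilies of 2^[n]
        family = [P[i] for i in range(len(P)) if bits >> i & 1]
        if not is_downset(family):
            continue
        avg = sum(len(A) for A in family) / len(family)
        worst_half_n = max(worst_half_n, avg - n / 2)
        worst_log = max(worst_log, avg - log2(len(family)) / 2)
    return worst_half_n, worst_log

for n in range(1, 4):
    print(n, check(n))  # both reported maxima should be <= 0
```

For $n=3$ this loops over all $2^8$ subfamilies of the power set, so it finishes almost instantly; the exhaustive check quickly becomes infeasible for larger $n$.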
2025-03-21T14:48:31.219495
2020-06-10T12:15:17
362696
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "gregodom", "https://mathoverflow.net/users/158936" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630015", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362696" }
Stack Exchange
Left Kan extension that preserves colimit I'd be very happy if the question When do Kan extensions preserve limits/colimits? had been fully answered, but it seems it has not. I have a more specific question though. Let $C$ be a site (essentially small to be safe) equipped with some subcanonical Grothendieck topology, $PSh$ be its category of presheaves. Then it's well-known that for any functor $C \to B$ with $B$ cocomplete, the left Kan extension $PSh \to B$ along the Yoneda embedding $C \to PSh$ necessarily preserves colimits. Is it true if I replace $PSh$ by the category of sheaves $Sh$? In other words, does the left Kan extension along $C \xrightarrow{Yoneda} PSh \xrightarrow{sheafification} Sh$ preserve colimits? If in general not, is there any sufficient condition we can impose on the functor $C \to B$?
Usually it does not. We can take $C \to B$ to be the universal example of a functor to a cocomplete category, namely the Yoneda embedding $C \to PSh$. Then the left Kan extension of $y : C \to PSh$ along $j : C \to PSh \to Sh$ is the inclusion $Sh \to PSh$, and this usually does not preserve colimits. The calculation of this left Kan extension can be verified using the coend formula: $$ (Lan_j y)(X) = \int^{c : C} Sh(jc, X) \otimes yc = \int^{c : C} X(c) \otimes yc = X $$ where the second equality used the fact that the topology is subcanonical.
Thanks for the counterexample! Is there any sufficient condition on the functor $C \to B$ that can guarantee the left Kan extension is colimit preserving? E.g. if $F:C \to B$ is a subfunctor of a functor $F':C \to B$ whose left Kan extension is colimit preserving, is the left Kan extension of $F$ colimit preserving?
2025-03-21T14:48:31.219642
2020-06-10T12:47:04
362698
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Abdelmalek Abdesselam", "S.Farr", "darij grinberg", "https://mathoverflow.net/users/157078", "https://mathoverflow.net/users/2530", "https://mathoverflow.net/users/7410" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630016", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362698" }
Stack Exchange
Consequences of Littlewood-Richardson rule I am trying to read Deligne's paper 'Categories Tensorielles', and in the first chapter Deligne states some results obtained from the Littlewood-Richardson rule that I do not understand. He states: 'Fix $r$ and a partition $\lambda$ of $n$. For there to exist $n_1,\dots,n_r$ of sum $n$ such that $[\lambda: (n_1),\dots,(n_r)] \neq 0$, it is necessary and sufficient that $[\lambda]$ has at most $r$ rows.' Here the notation $[\lambda: (n_1),\dots,(n_r)]$ means the multiplicity of $V_{\lambda}$ in $\otimes _i V_{(n_i)}$. Can someone explain how he gets this result? This is an immediate consequence of the Pieri Rule https://www.math.upenn.edu/~peal/polynomials/schur.htm#schurPieriRule "au plus" = "at most", not "at least". @darijgrinberg I do not speak any French so I used a translation. Thank you for noticing, I would have never found out! @AbdelmalekAbdesselam Can you elaborate on that? @S.Farr: Did you read the statement of Pieri's rule I referred to? You just have to apply it successively as you tensor with a new $V_{(n_i)}$. Watch out for the length of the first column and keep in mind what the rule says about not putting two new boxes in the same column. In any case, this is not research level and thus not suitable for MO. It should be asked on math.stackexchange instead @TheoJohnson-Freyd: this might be your translation; in that case it would probably not hurt to fix the "at least". Thank you!
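To spell out the argument suggested in the comments (this elaboration is not part of the original thread, and it only uses Pieri's rule as cited there): Pieri's rule says that tensoring with $V_{(m)}$ corresponds to $$s_\mu\, s_{(m)}=\sum_{\lambda} s_\lambda,$$ the sum running over all $\lambda$ obtained from $\mu$ by adding a horizontal strip of $m$ boxes (no two added boxes in the same column). Iterating, $[\lambda:(n_1),\dots,(n_r)]$ counts the chains $\varnothing=\mu^{(0)}\subseteq\mu^{(1)}\subseteq\dots\subseteq\mu^{(r)}=\lambda$ in which each $\mu^{(i)}/\mu^{(i-1)}$ is a horizontal strip of size $n_i$. Each horizontal strip contributes at most one box to the first column, so after $r$ steps the diagram has at most $r$ rows; this gives necessity. Conversely, if $\lambda=(\lambda_1,\dots,\lambda_r)$ has at most $r$ rows (allowing trailing zeros), one may take $n_i=\lambda_i$ and add the rows of $\lambda$ one at a time, each row being a horizontal strip, so $[\lambda:(\lambda_1),\dots,(\lambda_r)]\geq1$. For instance, with $r=2$ and $n=3$: $s_{(2)}s_{(1)}=s_{(3)}+s_{(2,1)}$, so both partitions of $3$ with at most two rows occur, while $(1,1,1)$ cannot be reached by two horizontal strips.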
2025-03-21T14:48:31.219895
2020-06-10T13:53:20
362707
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Aleksei Kulikov", "Bazin", "Carlo Beenakker", "https://mathoverflow.net/users/104330", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/21907" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630017", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362707" }
Stack Exchange
On Mehler's formula for Hermite polynomials In the reference article of Richard Askey and George Gasper published in the American Journal of Mathematics, Autumn, 1976, Vol. 98, No. 3 (Autumn,1976), pp. 709-737, they attribute on page 731 the following formula to Mehler (quoting a book by Erdelyi): $$ \sum_{n=0}^{\infty}\frac{r^n H_n(x) H_n(y)}{2^n n!}=(1-r^2)^{-1/2} \exp\bigl\{ x^2-\frac{(x-ry)^2}{(1-r^2)} \bigr\}, $$ where $H_n$ is the $n$th Hermite polynomial. Question: I do not believe that formula, since the lhs is symmetric in $x,y$ whereas the rhs fails to be symmetric in $x,y$. I am also puzzled since, as said above, this article is a reference material for numerous articles on Laguerre and Hermite polynomials. The formula is actually symmetric in $x\leftrightarrow y$. You can find a proof here: A combinatorial proof of the Mehler formula. Thanks for your answer and for the reference, which I need to study. However, I am still puzzled, since if user69642 is correct, the logarithms of the two different rhs must be the same. @Bazin I'm a bit lost. If you literally expand everything in the exponent in the rhs from your post the resulting expression will be symmetric in $x$ and $y$ (and would symbol-by-symbol coincide with the one from the answer by user69642). $x^2-\frac{(y-r x)^2}{1-r^2}=y^2-\frac{(x-r y)^2}{1-r^2}$ @Carlo Beenakker Take $r=0$ in your equality, you get $x^2-y^2=y^2-x^2$. Nevertheless you are right for my post since $x^2-\frac{(x-ry)^2}{1-r^2}$ is indeed symmetric. Sorry for the absurd question. @Aleksei Kulikov You are right for my post since $x^2-\frac{(x-ry)^2}{1-r^2}$ is indeed symmetric. Sorry for the absurd question. I think that the formula is equal to, for all $-1<r<1$, $$ \sum_{\ell \geq 0} \dfrac{r^\ell H_{\ell}(x)H_{\ell}(y)}{2^{\ell}\ell!} = (1-r^2)^{-\frac{1}{2}} \exp \left(\dfrac{2xyr -r^2(x^2+y^2)}{1-r^2}\right), $$
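For completeness (this small computation is only implicit in the comments above): the exponent in the quoted formula is in fact symmetric in $x$ and $y$, since $$x^2-\frac{(x-ry)^2}{1-r^2}=\frac{x^2(1-r^2)-(x^2-2rxy+r^2y^2)}{1-r^2}=\frac{2rxy-r^2(x^2+y^2)}{1-r^2},$$ which is exactly the manifestly symmetric exponent in the last displayed formula.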
2025-03-21T14:48:31.220059
2020-06-10T14:49:34
362713
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Mateusz Kwaśnicki", "S.Surace", "https://mathoverflow.net/users/108637", "https://mathoverflow.net/users/127334", "https://mathoverflow.net/users/68463", "https://mathoverflow.net/users/69603", "sharpe", "zab" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630018", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362713" }
Stack Exchange
The domains of generators of Markov processes and moment of hitting times This is a question on domains of generators and moment of hitting times. Let $M$ be a locally compact separable metric space and $X=(\{X_t\}_{t \ge 0}, \{P_x\}_{x \in M})$ a diffusion process on $M$ (that is $X$ is a strong Markov process with continuous sample paths). We assume that $X$ is symmetric with respect to a finite measure $\mu$ on $M$. Let $K$ be a compact subset of $E$. We also assume that $M \ni x \mapsto h(x):=E_{x}[e^{ \sigma_K}]$ is bounded on $M$, where $\sigma_K=\inf\{t \in [0,\infty) \mid X_t \in K\}$. Then, can we show the following equation: \begin{align*} (1)\quad \mathcal{L}h=-h ? \end{align*} Here, we denote by $\mathcal{L}$ the generator of $L^2$-generator of $X$. That is, $\mathcal{L}h$ is defined by $\mathcal{L}h=\lim_{t \to 0} (T_th-h)/t$ in $L^2(M,\mu)$. In a series of papers, the equation (1) is proved when $M$ is a smooth manifold and $X$ is a diffusion process on it. However, I think that (1) should hold under more general settings. It immediately follows that \begin{align*} &\{T_th(x)-h(x)\}/t=\{E_x[E_{X_t}[e^{\sigma_K}]]-h(x)\}/t\\ &=\{E_{x}[e^{\sigma_K \circ \theta_t}]-E_{x}[e^{\sigma_K}]\}/t, \end{align*} where $\theta_t$ is the shift operator of $X$. However, I cannot proceed any further. Am I making a mistake? Can you supply the references? @zab Thank you for your comment. For example, https://projecteuclid.org/download/pdfview_1/euclid.aihp/1359470127 is the reference. In p. 102 of this paper, you can see the same equality. The calculation should look like this: $E[h(X_t)|X_0=x]=E[E[e^{\sigma_K-t}|X_t]|X_0=x]=e^{-t}E[e^{\sigma_K}|X_0=x]=e^{-t}h(x).$ Therefore $$ \frac{T_th(x)-h(x)}{t} = \frac{e^{-t}-1}{t}h(x) \to -h(x).$$ Thank you for your comment. However, your argument is not valid. Please notice that $\sigma_K=t+\sigma_K \circ \theta_t$ if $t<\sigma_K$. In general, $\sigma_K \le t+\sigma_K \circ \theta_t$. Of course you are right. For small $t$ the probability that $\sigma_K<t$ is small, but I haven't found a way to derive good bounds. @sharpe: $L h = -h$ only holds, in some sense, in the complement of $K$. "Some sense" here may refer to Itô calculus (meaning that $d M_t := dh(X_t) + h(X_t) dt$ defines a local martingale, up to $\sigma_K$), for example. @MateuszKwaśnicki Thank you for your kind comment. $t \mapsto h(X_{t \wedge \sigma_K})+\int_{0}^{t \wedge \sigma_K}h(X_s),ds$ is a local martingale, right?
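A short computation that may clarify the discussion (added here; it is not part of the original thread, and it only uses the stated assumptions that paths are continuous, $K$ is compact — hence closed — and $h$ is bounded): since $h\equiv1$ on $K$ and $\sigma_K=t+\sigma_K\circ\theta_t$ on $\{\sigma_K>t\}$, the Markov property gives $$E_x\left[e^{\sigma_K}\mathbf 1_{\{\sigma_K>t\}}\right]=e^t\,E_x\left[\mathbf 1_{\{\sigma_K>t\}}h(X_t)\right],$$ while on $\{\sigma_K\le t\}$ one has $e^{\sigma_K}=e^{t\wedge\sigma_K}h(X_{t\wedge\sigma_K})$ because $X_{\sigma_K}\in K$ by path continuity. Adding the two pieces yields $$h(x)=E_x\left[e^{t\wedge\sigma_K}\,h(X_{t\wedge\sigma_K})\right]\qquad\text{for all }t\ge0,$$ and applying the same computation after a later starting time upgrades this to the statement that $\left(e^{t\wedge\sigma_K}h(X_{t\wedge\sigma_K})\right)_{t\ge0}$ is a $P_x$-martingale (integrability coming from the boundedness of $h$). This is the multiplicative counterpart of the additive statement in the last comment, and it supports the point made there that $\mathcal Lh=-h$ should only be expected to hold, in a suitable sense, away from $K$; in particular, the naive computation above breaks down precisely because $\sigma_K\neq t+\sigma_K\circ\theta_t$ on $\{\sigma_K\le t\}$.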
2025-03-21T14:48:31.220254
2020-06-10T15:52:01
362716
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "0xbadf00d", "Benoît Kloeckner", "https://mathoverflow.net/users/4961", "https://mathoverflow.net/users/91890" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630019", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362716" }
Stack Exchange
Extension of spectral gap inequality in Wasserstein distance Let $E$ be a separable $\mathbb R$-Banach space, $\rho_r$ be a metric on $E$ for $r\in(0,1]$ with $\rho_r\le\rho_s$ for all $0<r\le s\le1$, $\rho:=\rho_1$, $$d_{r,\:\delta,\:\beta}:=1\wedge\frac{\rho_r}\delta+\beta\rho\;\;\;\text{for }(r,\delta,\beta)\in[0,1]\times(0,\infty)\times[0,\infty)$$ and $(\kappa_t)_{t\ge0}$ be a Markov semigroup on $(E,\mathcal B(E))$. Assume we are able to show that for all $n\in\mathbb N$ there is an $\alpha\in[0,1)$ and $(r,\delta,\beta)\in[0,1]\times(0,\infty)\times(0,1)$ with$^1$ $$\operatorname W_{d_{r,\:\delta,\:\beta}}\left(\delta_x\kappa_n,\delta_y\kappa_n\right)\le\alpha\operatorname W_{d_{r,\:\delta,\:\beta}}\left(\delta_x,\delta_y\right)\tag1$$ for all $x,y\in E$, where $\delta_x$ denotes the Dirac measure on $(E,\mathcal B(E))$ at $x\in E$. Why are we able to conclude that there is a $(c,\lambda)\in[0,\infty)^2$ with $$\operatorname W_\rho\left(\nu_1\kappa_t,\nu_2\kappa_t\right)\le ce^{-\lambda t}\operatorname W_\rho\left(\nu_1,\nu_2\right)\tag2$$ for all $\nu_1,\nu_2\in\mathcal M_1(E)$ and $t\ge0$? It's clear to me that if $\kappa$ is any Markov kernel on $(E,\mathcal B(E))$ and $d$ is any metric on $E$ such that there is an $\alpha\ge0$ with $\operatorname W_d\left(\delta_x\kappa,\delta_y\kappa\right)\le\alpha\operatorname W_d\left(\delta_x,\delta_y\right)$ for all $x,y\in E$, then this extends to $\operatorname W_d(\mu\kappa,\nu\kappa)\le\alpha\operatorname W_d(\mu,\nu)$ for all $\mu,\nu\in\mathcal M_1(E)$. Moreover, it's clear that $\operatorname W_d\left(\delta_x,\delta_y\right)=d(x,y)$. Note that for any choice of $(r,\delta,\beta)\in[0,1]\times(0,\infty)\times[0,\infty)$, it holds $$\beta\rho\le d_{r,\:\delta,\:\beta}\le\left(\frac1\delta+\beta\right)\rho.\tag3$$ Remark: The desired claim seems to be used in the proof of Theorem 3.4 in https://arxiv.org/pdf/math/0602479.pdf. $^1$ If $(E,d)$ is a complete separable metric space and $\mathcal M_1(E)$ is the space of probability measures on $\mathcal B(E)$, then the Wasserstein metric $\operatorname W_d$ on $\mathcal M_1(E)$ satisfies the identity $$\operatorname W_d(\mu,\nu)=\sup_{\substack{f\::\:E\:\to\:\mathbb R\\|f|_{\operatorname{Lip}(d)}\:\le\:1}}(\mu-\nu)f\;\;\;\text{for all }\mu,\nu\in\mathcal M_1(E),$$ where $$|f|_{\operatorname{Lip}(d)}:=\sup_{\substack{x,\:y\:\in\:E\\x\:\ne\:y}}\frac{|f(x)-f(y)|}{d(x,y)}\;\;\;\text{for }f:E\to\mathbb R$$ and $\mu f:=\int f\:{\rm d}\mu$ for $\mu$-integrable $f:E\to\mathbb R$.
For any $t_0$, using (1) with $n=1$ iteratively and the double inequality (3): \begin{align*} \mathrm{W}_\rho(\delta_x\kappa_{t_0},\delta_y\kappa_{t_0}) &\le \frac1\beta \mathrm{W}_{d_{r,\delta,\beta}}(\delta_x\kappa_{t_0},\delta_y\kappa_{t_0}) \\ &\le \frac{\alpha^{t_0}}{\beta} \mathrm{W}_{d_{r,\delta,\beta}}(\delta_x,\delta_y) \\ &\le \alpha^{t_0}\Big(\frac{1}{\beta\delta}+1\Big) \mathrm{W}_\rho(\delta_x,\delta_y) \end{align*} Since $\alpha\in(0,1)$, this is what you needed. (Side note: this kind of computation shows that any decay of the form $$ d(T^n(x),T^n(y)) \le f(n) d(x,y)$$ where $d$ is any metric, $T$ is any Lipschitz dynamical system, and $f(n) \to 0$ as $n\to \infty$ (or even $f(n)<1$ for some $n$), actually imply exponential decay. This is pretty basic, but seems to be sometimes overlooked.) Thank you very much for your answer. (a) How do you define the notion of local Lipschitz continuity of a transition semigroup? (b) I guess you've assumed $t_0=n$ to obtain the second inequality in your third displayed equation, right? I'm not sure what your goal is at this point. Is it to establish an inequality of the form as in your first displayed equation (with $\eta=\alpha^{t_0}((\beta\delta)^{-1}+1)$)? (c) Why is it sufficient to take $t_0$ large enough (and why can we do that)? How precisely do we need to choose $\lambda$ in $(2)$? @0xbadf00d (a) for all $s_0$, there exists $C$ such that for all $s\in[0,s_0]$ $\mathrm{W}(\mu_1\kappa_s,\mu_2\kappa_s) \le C \mathrm{W}(\mu_1,\mu_2)$; (b) no, I use (1) with $1$ in the role of $n$ (hence the power $t_0$ for the $\alpha$); yes my goal is to prove the first displayed equation, this is why I wrote "this $t_0$ is given by"; (c) $\alpha<1$ so $\alpha^{t_0}\to 0$; we can choose $t_0$ freely here because all three inequalities are true for all $t_0$. For $\lambda$, recall $\eta<1$ and $k\sim t$. Could it be the case that you've mistakenly mixed things up in the third equation? If you put $d:=d_{r,:δ,:\beta}$, then assumption $(1)$ essentially is that there is a $t_0\ge0$ such that $$\text W_d(δ_xκ_{t_0},δ_yκ_{t_0})\le\alpha\text W_d(δ_x,δ_y);;;\text{for all }x,y\in E\tag4.$$ Now, by $(3)$, $$\text W_\rho\le\frac1\beta\text W_d\le\left(\frac1{\betaδ}+1\right)\text W_\rho\tag5$$ and hence, by $(4)$, $$\text W_\rho(δ_xκ_{t_0},δ_yκ_{t_0})\le\alpha\left(\frac1{\betaδ}+1\right)\text W_{\rho}(δ_x,δ_y);;;\text{for all }x,y\in E\tag6.$$ (a): You didn't specify any metric associated to the Wasserstein distance. Or will the Lipschitz continuity hold for all metrics as soon as it holds for one metric? Is this notion somehow related to the usual notion of Lipschitz continuity when we consider the semigroup as a mapping from $[0,\infty)$ into a suitable space? (b): I'm sorry, I don't get what you mean by "with $1$ in the role of $n$". What is your $t_0$ then? Isn't $(6)$ the instance of your first equation we are seeking for? Otherwise it seems like a Lipschitz constant is missing in your second inequality of the third equation. Applying your second equation to $(6)$ would yield $$\text W_\rho(δ_xκ_t,δ_yκ_t)\le C\left[\alpha\left(\frac1{\betaδ}+1\right)\right]^{\left\lfloor\frac t{t_0}\right\rfloor}\text W_{\rho}(δ_x,δ_y);;;\text{for all }x,y\in E\tag7.$$ (c) I don't get why these inequalities should be true for all $t_0$, since your assumption was that there is a particular $t_0$ such that your first equation holds. Or did you use my assumption from the question here (with $t_0=n$)? 
@0xbadf00d: it seems you either made mistake in what you wrote, or did not read carefully enough. I'll reformulate the answer, but you need to put more effort yourself when you benefit from other's time. I think I was on the wrong track. I'll edit the question soon, writing down what I think is going on. Rewritting the answer, it appeared it could be simplified. Please take note of my edit. I think the Lipschitz thing is confusing me, since $(10)$ and $(11)$ both seem to be Lipschitz assumptions. So, I guess the crucial point must be that the second Lipschitz constant (the one in $(11)$, is $<1$, while the first can be arbitrary large. @0xbadf00d: of course $\alpha<1$ is crucial! You cannot hope for exponential decay otherwise. I feel you are on the wrong site, and need to build your skills a bit more (math.SE has wider scope). I am not willing to put any more time in this, sorry. Let me ask you one last thing: Writing $d:=d_{r,:\beta,:δ}$, how did you obtain $\text W_d(δ_xκ_t,δ_yκ_t)\le\alpha^t\text W_d(δ_x,δ_y)$ for arbitrary $ t>0$? And why does your Lipschitz constant $C$ not occur at any point? It surely holds $\text W_d(\muκ_n,\nuκ_n)\le\alpha^n\text W_d(\mu,\nu)$ for all $n\in\mathbb N_0$. And if $t\ge0$, we may write $t=n+r$ for some $n\in\mathbb N_0$ and $r\in[0,1)$. Together with your Lipschitz assumption, this yields $\text W_d(δ_x,κ_t,δ_yκ_t)\le c\alpha^n\text W_d(δ_x,δ_y)$, but since $\alpha^n\ge\alpha^t$, I don't get how you conclude. Please take a look at my answer. I guess you've got $t_0=1$ in mind, but I'd really be interested in the question whether the claim remains to hold for arbitrary $t_0$. @0xbadf00d: I implicitely took $t_0$ an integer here, sure, but I would not have imagined this would be your issue. Your answer is pretty much the same as mine. Building up on Benoît Kloeckner's answer, consider the following simplified scenerio: Let $(E,d)$ be a complete separable metric space, $(\kappa_t)_{\ge0}$ be a Markov semigroup on $(E,\mathcal B(E))$ with $$\operatorname W_d(\delta_x\kappa_t,\delta_y\kappa_t)\le c\operatorname W_d(\delta_x,\delta_y)\;\;\;\text{for all }x,y\in E\text{ and }t\in[0,1)\tag{10}$$ for some $c\ge0$ and $$\operatorname W_d(\delta_x\kappa_1,\delta_y\kappa_1)\le\alpha\operatorname W_d(\delta_x,\delta_y)\tag{11}$$ for some $\alpha\in(0,1)$. From $(11)$, we easily deduce $$\operatorname W_d\left(\delta_x\kappa_n,\delta_y\kappa_n\right)\le\alpha^n\operatorname W_d\left(\delta_x,\delta_y\right)\tag{12}$$ for all $x,y\in\mathbb N$ and $n\in\mathbb N_0$. If $t>0$, we may write $t=n+r$ for some $n\in\mathbb N_0$ and $r\in[0,1)$ so that $$\operatorname W_d\left(\delta_x\kappa_t,\delta_y\kappa_t\right)\le\alpha^n\operatorname W_d\left(\delta_x\kappa_r,\delta_y\kappa_r\right)\le c\alpha^n\operatorname W_d\left(\delta_x,\delta_y\right)\tag{13}$$ for all $x,y\in E$ by $(12)$ and $(10)$. 
Now we only need to note that $$c\alpha^n=\frac c\alpha\alpha^{n+1}\le\frac c\alpha\alpha^t\tag{14}$$ (the last "$\le$" is actually a "$<$" as long as $c\ne0$) and hence we obtain $$\operatorname W_d\left(\mu\kappa_t,\nu\kappa_t\right)\le\tilde ce^{-\lambda t}\operatorname W_d(\mu,\nu)\tag{15}$$ for all $\mu,\nu\in\mathcal M_1(E)$, where $$\tilde c:=\frac c\alpha$$ and $$\lambda:=-\ln\alpha.$$ Remark I'd still be interested in the question whether this result still holds when $(10)$ and $(11)$ are replaced by the following assumption: There is a $t_0>0$ with $$\operatorname W_d(\delta_x\kappa_t,\delta_y\kappa_t)\le c\operatorname W_d(\delta_x,\delta_y)\;\;\;\text{for all }x,y\in E\text{ and }t\in[0,t_0)\tag{10'}$$ and $$\operatorname W_d(\delta_x\kappa_{t_0},\delta_y\kappa_{t_0})\le\alpha\operatorname W_d(\delta_x,\delta_y)\tag{11'}$$ for some $\alpha\ge0$. (The original statement in this answer is the particular case $t_0=1$.)
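Regarding the concluding remark, here is a sketch (not a full argument, and assuming, as in $(11)$, that $\alpha\in(0,1)$; as stated, $(11')$ only requires $\alpha\ge0$, but some such restriction is clearly needed for exponential decay): the same reasoning seems to carry over to a general $t_0>0$. Writing $t=nt_0+r$ with $n=\lfloor t/t_0\rfloor$ and $r\in[0,t_0)$, iterating $(11')$ and then applying $(10')$ gives $$\operatorname W_d\left(\delta_x\kappa_t,\delta_y\kappa_t\right)\le\alpha^n\operatorname W_d\left(\delta_x\kappa_r,\delta_y\kappa_r\right)\le c\alpha^n\operatorname W_d\left(\delta_x,\delta_y\right),$$ and since $n\ge t/t_0-1$ we have $c\alpha^n\le\frac c\alpha\,\alpha^{t/t_0}$, so the analogue of $(15)$ would hold with $\tilde c=\frac c\alpha$ and $\lambda=-\frac{\ln\alpha}{t_0}$.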
2025-03-21T14:48:31.220814
2020-06-10T16:09:10
362718
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Abdelmalek Abdesselam", "https://mathoverflow.net/users/110603", "https://mathoverflow.net/users/143783", "https://mathoverflow.net/users/7410", "lrnv", "tomos" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630020", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362718" }
Stack Exchange
Is my application of Faà di Bruno's formula correct? Suppose I have a function $f$ from $\mathbb R^d$ to $\mathbb R$, and denote $g = \exp \circ f$. I want to express the derivatives of the function $g$ in term of the derivatives of $f$ and vice versa, so I need to do two applications of Faà di Bruno's formula, on $\exp$ and $\ln$. I obtain: $$f^{(\mathbf i)}(\mathbf t) = \sum\limits_{\pi \in \Pi(\mathbf i)} g(\mathbf t) \prod\limits_{B \in \pi} g^{(\mathbf i(B))}(\mathbf t)$$ $$g^{(\mathbf i)}(\mathbf t) = \sum\limits_{\pi \in \Pi(\mathbf i)} (\lvert \pi \rvert - 1)! (-1)^{\lvert \pi \rvert - 1} f(t)^{-\lvert \pi \rvert} \prod\limits_{B \in \pi} f^{(\mathbf i(B))}(\mathbf t)$$ Where I denoted: $\mathbf i$ a multi-index representing the number of derivations in each dimensions $B(\mathbf i)$ is a simple map from e.g $\mathbf i = (1,0,2)$ to $B(\mathbf i) = (1,3,3)$ giving the blocks, and $\mathbf i(B)$ is the reverse map. $\Pi(\mathbf i)$ is the set of partitions of $B(i)$ Could you tell me if you see a obvious mistake ? (also, I have trouble tagging the question properly) You need to explain your notations. Right now you are asking the reader to guess the correct formulation of what you meant to say. I can guess that $B(\mathbf{i})$ is a choice of multiset where $1$ appears $\mathbf{i}_1$ times, $2$ appears $\mathbf{i}_2$ times, etc. But what is a partition of $B(\mathbf{i})$? @AbdelmalekAbdesselam $B(\mathbf i)$ is exactly the set you wrote. A partition of a set $A$ is a set of disjoint blocks $A_i$ included in $A$ such that their union is $A$. There are many ways to write the Faa di Bruno formula, some good and some bad. For functions of one variable, a good way to write it can be found in my answer to the MO question Gevrey estimate of derivatives It says, $$ (f\circ g)^{(n)}(t)=n!\sum_{k\ge 0}\ \sum_{n_1,\ldots,n_k\ge 1} \mathbf{1}\left\{\sum_{i=1}^{k} n_i=n\right\} \ \frac{f^{(k)}(g(t))\ g^{(n_1)}(t)\cdots g^{(n_k)}(t)}{k!\ n_1!\cdots n_k!} $$ where $\mathbf{1}\{\cdots\}$ stands for the indicator function of the condition within braces. Now suppose the inner function $g$ is now a function from $\mathbb{R}^d$ to $\mathbb{R}$. Then the Faa di Bruno is exactly the same but with a little twist as to the interpretation of $n$, $n_1$, etc. Let $\mathbb{N}=\{0,1,2,\ldots\}$ to clarify my choice of conventions. Now of course $t=(t_1,\ldots,t_d)\in\mathbb{R}^d$, but more importantly $n$, $n_1$, etc. are multiindices, i.e., elements of $\mathbb{N}^d$. For a multiindex $p=(p_1,\ldots,p_d)\in\mathbb{N}^d$, we let by convention $$ p!=p_1!\cdots p_d! $$ and $$ g^{(p)}(t)=\frac{\partial^{p_1+\cdots+p_d}}{\partial t_1^{p_1}\cdots\partial t_d^{p_d}}g(t)\ . $$ Another modification is that $n_1,\ldots,n_k\ge 1$ now means none of these multiindices is $(0,\ldots,0)$. It is now straightforward to specialize the formula to $\exp$ or $\log$. So, in the $\mathbb R^d$ case, you mean that the LHS is now $(d/dx_1)^{n_1}\cdot \cdot \cdot (d/dx_d)^{n_d}$ right? And do you have a source for this? (I trust you I just need a source too, but I only find it in terms of counting permutations, which I find hard to get my head round, and the Wikipedia article seems to specialise to $n_1=\cdot \cdot \cdot =n_d$). Quite possible I'm misunderstanding something though. And actually, even in the $d=1$ case, I'm not sure how it matches up with the Wikipedia article's statement. Apologies! How do you group the terms?
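As a concrete cross-check of the one-variable form of the formula quoted in the answer (this snippet is an addition for illustration, not part of the original exchange), one can compare it against direct symbolic differentiation for small $n$, taking the outer function to be $\exp$ (so that all of its derivatives are again $\exp$) and the inner function to be an unspecified symbolic $g$.

```python
# Sanity check of the one-variable formula from the answer, with outer function exp:
# (exp o g)^(n)(t) = n! * sum over (n_1,...,n_k), n_i >= 1, sum n_i = n, of
#                    exp(g(t)) * g^(n_1)(t) ... g^(n_k)(t) / (k! n_1! ... n_k!).
import sympy as sp

t = sp.symbols('t')
g = sp.Function('g')(t)

def compositions(n):
    """All ordered tuples (n_1, ..., n_k) of positive integers summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def faa_di_bruno_exp(n):
    total = 0
    for comp in compositions(n):
        k = len(comp)
        term = sp.exp(g)           # every derivative of the outer function exp is exp
        denom = sp.factorial(k)
        for ni in comp:
            term *= sp.diff(g, t, ni)
            denom *= sp.factorial(ni)
        total += term / denom
    return sp.factorial(n) * total

for n in range(1, 5):
    direct = sp.diff(sp.exp(g), t, n)
    assert sp.simplify(direct - faa_di_bruno_exp(n)) == 0
print("formula agrees with direct differentiation for n = 1, ..., 4")
```

Note that the sum here runs over ordered tuples (compositions), which is why the $k!$ and the $n_i!$ appear in the denominator; grouping equal $n_i$ together instead yields the multiplicities seen in other statements of the formula, which may be the grouping the last comment asks about.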
2025-03-21T14:48:31.221044
2020-06-10T16:55:43
362721
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630021", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362721" }
Stack Exchange
Survey/references on random geometric $K$-NN – $K$-nearest-neighbour graphs? [Edit:] Some related info on number of connected components of NN-graphs can be found here: https://cstheory.stackexchange.com/a/47037/2408 Sample $N$ points in $\mathbb{R}^d$ from some distribution $f$, e.g. uniform $[0,1]^d$, or Gaussian $\mathcal{N}(0,1)^d$, or whatever... Question 1: What is known/survey/references on $K$-nearest-neighbor graphs for such data clouds ? (that means we connect each point with $K$ its nearest neighbors). At least for some distributions $f$ like uniform or Gaussian ? Question 2: For example what is known about degrees distribution ? Simulations suggest it is power-law, what is the exponent depending on $d$ and $f$? There is good survey on Wikipedia - Geometric random graphs, but a little bit different class of graphs is considered there. I.e. points are connected if the distance is shorter than threshold $r$ (and, well, distribution is only uniform). It is more common in practical applications to consider $K$-NN graphs, rather than GRG, by the clear reason - the size of graph is $K\times N$, while for GRG you might get $N^2$ (in the worst case). Question 3 Is there a way looking on $K$-NN graph to estimate dimension $d$ of the space, at least for uniform/Gaussian distributions ? Somewhat similar as "cluster coefficient" of GRG depends only on the dimension $d$: Question 4: Is there an estimate of clustering coefficient for $K$-NN graph ? Question 5: If one considers minimal spanning tree for $K$-NN graph, what is known about it ? Degree distribution ? I am aware about the following beautiful result on the length estimate for Euclidean MST: The wikipedia article on random geometric graphs only scratches the surface. A much deeper treatment is provided in Mathew Penrose's amazing text Random Geometric Graphs. Chapter 4 contains a treatment of what you are asking about, namely the "empirical distribution of nearest-neighbour distances amongst the points." Also, the underlying distribution of the points does not need to be uniform. You can feed in any underlying distribution and run a $\chi^2$-test to see if that distribution is a good fit, based on a $k$-NN statistic. Penrose says this was considered by Bickel and Breiman. Penrose cites a book by Silverman for more on this kind of thing. Penrose also cites a monograph by Yukich that I have not read. About this monograph, Penrose writes (emphasis mine) Related graph constructions include those where the decision on whether to connect two nearby points depends not only on the distance between them, but also on the positions of other points. Such constructions include the minimal spanning tree, and also graphs such as the nearest-neighbour graph and the Delaunay graph; in the latter, points lying in neighbouring Voronoi cells are connected. For many of these related graph constructions, some of the asymptotic theory is described in Yukich (1998). For further results see Penrose and Yukich (2001, 2003).
2025-03-21T14:48:31.221251
2020-06-10T17:17:58
362724
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "R. van Dobben de Bruyn", "darij grinberg", "https://mathoverflow.net/users/2530", "https://mathoverflow.net/users/82179" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630022", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362724" }
Stack Exchange
Constructive definition of noncommutative rational functions (aka free skew fields) The question Let $F$ be a field. (I am fine with assuming $F=\mathbb{Q}$, but I suspect that a "right" answer will be independent of $F$.) Let $k$ be a nonnegative integer. Question. Is there a constructive definition of a skew field $S_k$ of noncommutative rational functions in $k$ variables $x_{1},x_{2},\ldots,x_{k}$ over $F$ ? "Noncommutative rational functions" should mean the following assumptions: Assumption A1: $S_k$ is an $F$-algebra and a skew field. Here, "skew field" is to be understood constructively -- i.e., there is an algorithm that takes any element $a\in S_k$ as input and returns either $a^{-1}$ or a proof that $a=0$. Assumption A2: Every element of $S_k$ is obtained from the generators $x_{1},x_{2},\ldots,x_{k}$ and the elements of $F$ by the operations $+,-,\cdot,^{-1}$, where the latter operation means inverting a nonzero element. Assumption A3: For any $F$-algebra $A$, there is a partially-defined "evaluation map" $\operatorname{ev} : S_k \times A^k \to A \sqcup \left\{\operatorname{undefined}\right\}$ that sends every pair $\left(f, \left(a_1, a_2, \ldots, a_k\right)\right)$ (with $f \in S_k$ and $a_1, a_2, \ldots, a_k \in A$) either to an element of $A$ or to the value $\operatorname{undefined}$. This map $\operatorname{ev}$ should send each such pair $\left(f, \left(a_1, a_2, \ldots, a_k\right)\right)$ to the result of substituting $a_1, a_2, \ldots, a_k$ for $x_1, x_2, \ldots, x_k$ in $f$ (more precisely, in some representation of $f$ as a $+,-,\cdot,^{-1}$-expression in the $x_{1},x_{2},\ldots,x_{k}$) if this substitution is well-defined (i.e., if the denominators are invertible after the substitution), and otherwise to $\operatorname{undefined}$. (Thus, this map should be an $F$-algebra homomorphism in its first argument, and should send $\left(x_i, \left(a_1, a_2, \ldots, a_k\right)\right)$ to $a_i$ for each $i \in \left\{1,2,\ldots, k\right\}$. It should also be "maximally defined" -- i.e., it shouldn't take the value $\operatorname{undefined}$ without reason.) Assumption A4: For any $f \in S_k$, there exist (and can be explicitly constructed) a positive integer $n$ and some $n\times n$-matrices $A_1, A_2, \ldots, A_k$ over $F$ such that $\operatorname{ev} \left(f, \left(A_1, A_2, \ldots, A_k\right)\right)$ is not $\operatorname{undefined}$. (This means that the nonvanishing of the denominators in $f$ can be "witnessed" by explicit matrices.) The definitions I've seen (via some handwavy noncommutative localization, or via explicit embedding into noncommutative power series) are both nonconstructive, but I haven't looked beyond the surveys. I'd also want $S_k$ to embed into $S_{k+1}$. Why I care Free skew fields come up in algebraic combinatorics as the theater for noncommutative birational rowmotion, for Kontsevich--Iyudu--Shkarin periodicity and for quasideterminants, as well as in the work of Schützenberger apparently inspired by automata theory. In simple cases, the use of $S_k$ is completely optional, and it suffices to work in an arbitrary ring and just require that denominators appearing in one's theorems are invertible. However, this kind of reasoning doesn't allow us to introduce extra denominators by WLOG assumption, as is commonly done in the commutative case using Zariski density. 
That is, if I want to prove that two noncommutative rational expressions $a$ and $b$ are equal everywhere they are defined, I cannot use any rational expressions with denominators that don't appear in $a$ or $b$. (In particular, I cannot argue that $ac = bc$ and $c \neq 0$ imply $a = b$; there is nothing that a-priori guarantees the equality of $a$ and $b$ outside of the domain where $c$ is invertible.) The situation is even worse if I want to use Zariski topology properly, such as defining birational equivalences (a followup question I will ask once this one is resolved). The concrete impetus for this question is a theorem about periodicity of birational rowmotion (joint work with Tom Roby), which we can prove in the noncommutative case but only up to the existence of a reasonably well-behaved notion of "general position" (i.e., Zariski density). What (little) I know From what I understand, the only(?) systematic textbook on this subject is Paul Cohn, Free Ideal Rings and Localization in General Rings, which gets to the existence of $S_k$ (or at least something similar) somewhere in Section 7.5 after 400+ pages of general theory. Admittedly, Cohn probably doesn't use too much of the earlier chapters there, but still it is a daunting perspective to read a whole book for a single theorem. Worse yet, I have no reason to expect that the proof is constructive (Zorn's lemma is being used at least 3 times before that point). Gelfand and Retakh, in their paper Determinants of matrices over noncommutative rings, take a different road to $S_k$, or at least to the inverses that they need in their study of quasideterminants (probably a smaller ring than $S_k$, and not quite a skew field): they embed things in a completed monoid ring $\mathcal{R}$ of a certain monoid. (The monoid consists of what they call "special monomials". The word "completed" means completed with respect to degree -- thus, a noncommutative version of formal power series.) Unfortunately, the "completed" part here destroys constructivity, because checking whether two elements of $\mathcal{R}$ are equal reduces to the halting problem. This is quite possibly sufficient for a good chunk of the theory quasideterminants, where equality checking isn't necessary; but for what I am doing, it certainly is not. Cohn and Reutenauer have a paper called A Normal Form in Free Fields, which however is not constructive either (again due to power series being involved). Note that I am not insisting on a normal form, merely on decidable equality and computable inverses. Small comment: I would be a little careful with the operation $/$, because dividing on the left or on the right may not be the same thing. E.g. $x_1x_2^{-1}$ should not equal $x_2^{-1}x_1$. I guess part of the difficulty is that probably not every element is of the form $fg^{-1}$ for $f$ and $g$ in the free algebra, and it might be desirable to allow both left and right division. (Disclaimer: my comment is not burdened by any relevant knowledge.) @R.vanDobbendeBruyn: True, I should say "inverse" rather than "division". You might have a look on the paper Operator scaling: theory and applications by Garg, Gurvits, Oliveira, Wigderson. This is about algorithms for testing non-commutative rational functions. My paper The free field: realization via unbounded operators and Atiyah property with Tobias Mai and Sheng Yin addresses also problems how to test whether a non-commutative rational expression is invertible or not. 
Instead of using matrices we show that you can take tuples of operators satisfying some specific properties. In particular, free semicircular operators will do.
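Not part of the thread: a tiny numerical illustration of Assumption A4 in the question, namely that non-vanishing of a denominator can be witnessed by explicit matrices. The expression $(x_1x_2-x_2x_1)^{-1}$ is a hypothetical example chosen here; it is undefined for $1\times 1$ matrices but is generically defined for $n\ge 2$.

```python
# Evaluate the (hypothetical) noncommutative rational expression
#   f = (x1*x2 - x2*x1)^{-1}
# on random integer matrices and check that the denominator is invertible.
import numpy as np

rng = np.random.default_rng(1)
n = 3
A1 = rng.integers(-5, 6, size=(n, n)).astype(float)
A2 = rng.integers(-5, 6, size=(n, n)).astype(float)

D = A1 @ A2 - A2 @ A1                    # denominator of f at (A1, A2)
det = np.linalg.det(D)
print("det of denominator:", det)
if abs(det) > 1e-9:
    print("ev(f, (A1, A2)) =")
    print(np.linalg.inv(D))              # the evaluation is defined here
else:
    print("this sample does not witness invertibility; try another pair")
```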
2025-03-21T14:48:31.221790
2020-06-10T17:19:48
362725
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "fosco", "gregodom", "https://mathoverflow.net/users/158936", "https://mathoverflow.net/users/7952" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630023", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362725" }
Stack Exchange
Kan extension of subfunctors Let $i: A \to B$ be a fully faithful embedding (dense if it helps) of categories and $F:A \to C$ be a functor with $C$ being complete. Assume that the right Kan extension of $F$ along $i$ is limit-preserving. Given a subfunctor $F'$ of $F$, is the right Kan extension of $F'$ along $i$ in general limit-preserving too? If not, can we put any reasonable conditions (not too restrictive) on the categories/functors so that this becomes true? I have the gut feeling that $Ran_iF$ preserving (say) products is a rare condition; maybe try with $i$ = Yoneda and find a small counterexample? Hi @Fosco. One example in my mind is as follows: let $A$ be a fully faithful dense subcategory of $B$, $C=Sets$, $a$ be an object of $A$ and $F=Hom_A(a,-)$. Then I think $Ran_iF=Hom_B(a,-)$ which is limit preserving. But is $Ran_iF'$ limit preserving? It seems to me $Ran_iF$ is $B(ja,_)$ if $j$ is fully faithful and dense, but this is a back-of-the-envelope calculation, you should check carefully. In general, $Ran_iF'$ is not well-behaved. For example, if $F'$ is the minimal subobject of $A(a,_)$, $Ran_iF'$ is the terminal functor. Oh, but probably in that case the resulting functor is limit preserving without conditions on $A$. Let me think more thoroughly. @Fosco what's your $j$ here? Anyway I think $Ran_iF$ should be really the actual extension which thus has to be $Hom_B(i(a),-)$? For $Ran_iF'$ I believe it has to be a subfunctor of $Hom_B(i(a),-)$? If this is true, we're really asking what subfunctors of $Hom_B(i(a),-)$ necessarily preserve limits.
2025-03-21T14:48:31.221929
2020-06-10T20:10:09
362735
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Abdelmalek Abdesselam", "CHUAKS", "Chua KS", "https://mathoverflow.net/users/112259", "https://mathoverflow.net/users/1894", "https://mathoverflow.net/users/7410" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630024", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362735" }
Stack Exchange
How to prove that for the real stable characteristic polynomial $P=\Phi_T$ of a tree $T$, $P_iP_j-PP_{ij}=(\Phi_{T-[v_i,v_j]})^2$? If $G$ is a labeled graph, the multi-affine characteristic polynomial (which depends on labeling) is defined by $\Phi_G(x_1,...,x_n)=\det(I_x-A)$, where $I_x$ is the diagonal matrix $diag\{x_1,...,x_n\}$ and $A$ is the adjacency matrix. Since we can also write $\Phi_G=\det( \sum_{j=1}^n x_jI_j-A)$, where $I_j$ is the matrix which is $1$ at the $(j,j)$ position and zero elsewhere (these matrices are positive semidefinite) and $A$ is symmetric, $P=\Phi_G$ is real stable. It follows that we must have $\Delta_{ij}(P)=P_iP_j-PP_{ij} \ge 0$ as a real-valued function. How does one prove that in the case of a tree $T$, $\Delta_{ij}(\Phi_T)=(\Phi_{T-[v_i,v_j]})^2,$ where $T-[v_i,v_j]$ means the forest given by $T$ with all vertices (including end points) on the unique path from $v_i$ to $v_j$ deleted? In case $G$ is not a tree, how does one prove that $\Delta_{ij}(\Phi_G)$ is still a perfect square? How to read off the square root from the graph?
(Added) The fact that $\Delta_{ij}(P)$ is a perfect square holds more generally if we take $A$ to be any Hermitian matrix. In that case $\Phi_A$ is still real stable but $D_{ij}$ may be non real and the matrix identity implies $\Delta_{ij}(\Phi_A)=|D_{ij}|^2$ so that it is either a square or a sum of two squares of real polynomials. The interpretation of $D_{ij}$ as a polynomial of the deleted path in the case of a tree is less obvious and actually holds in any graph for which there is a unique path from $v_i$ to $v_j$. How can one read off $D_{ij}$ from the graph when there is more than one path? One gets a more interesting result if one replaces the determinant by the permanent. If we define $\Psi_G=\operatorname{permanent}(I_x-A)$, then for a tree $T$, we have $\Delta_{ij}(\Psi_T)=(-1)^{len([v_i,v_j])} (\Psi_{T-[v_i,v_j]})^2$ where $len([v_i,v_j])$ is the length of the unique path $[v_i,v_j]$. For a general graph $G$ and $P=\det(I_x-A_G)$, we still have $\Delta_{ij}(P)=(\sum_{p_{ij} \in [v_i,v_j]} \Phi_{G-p_{ij}}(x))^2$, where $p_{ij}$ ranges over all simple paths from $v_i$ to $v_j$. Here simple means all vertices along the path are distinct.
Let $B$ denote the $n\times n$ matrix $I_x-A$. Let $[n]=\{1,2,\ldots,n\}$. For $I,J\subset [n]$ with the same number of elements, let $D_{I,J}$ denote the determinant of the matrix resulting from $B$ when deleting the rows indexed by the elements of $I$ and the columns indexed by the elements of $J$. Assuming $i\neq j$, we clearly have $P=D_{\varnothing,\varnothing}$, $P_i=D_{\{i\},\{i\}}$, $P_j=D_{\{j\},\{j\}}$, and $P_{ij}=D_{\{i,j\},\{i,j\}}$. By the Dodgson condensation identity, $$ D_{\{i\},\{i\}}D_{\{j\},\{j\}}-D_{\{i\},\{j\}}D_{\{j\},\{i\}}= D_{\{i,j\},\{i,j\}}D_{\varnothing,\varnothing}\ . $$ Since $B$ is symmetric $D_{\{i\},\{j\}}=D_{\{j\},\{i\}}$. As a result $$ \Delta_{ij}=(D_{\{i\},\{j\}})^2\ . $$ In the tree case, the relation between $\Phi_{T-[v_i,v_j]}$ and $D_{\{i\},\{j\}}$ can be worked out using Theorem 1 of my article "The Grassmann–Berezin calculus and theorems of the matrix-tree type" in Adv. Appl. Math. 2004. For those without access to the journal, the preprint version is here.
I don't quite understand. For $\Phi_{T-[v_i,v_j]}$, one has to delete all vertices on the unique path between $v_i$ and $v_j$ but your $D_{\{i\},\{j\}}$ only deletes one row and one column. Also how does your argument use the fact that $T$ is a tree? It is not true if $T$ is not a tree. Did you find a mistake in what I wrote?
Why is $D_{\{i\},\{j\}}=\Phi_{T-[v_i,v_j]}$? I did not address that. I found the square root for general graphs. Where did you get that the square root for trees is $\Phi_{T-[v_i,v_j]}$? In general, on MO you should give some context and references. I updated my answer so you can find how to relate $D$ and $\Phi$. Yes. I discovered the tree relation first by numerical computation. The graph case looks more strange but is actually easier. I can deduce it now from your $D_{ij}$. Thanks. I don't have time to do the computation in detail now, but I think using Thm 1 from my paper, this should give $D_{\{i\},\{j\}}=(-1)^{i+j}\Phi_{T-[v_i,v_j]}$. It's not possible to read off the square root from the graph as it depends on the choice of labeling. The tree square root is intrinsic, which was the original question. I would need more details of the definition of dependence on labelling and intrinsic to understand your comment. If you use my formula for a general simple graph, what you get for $\pm D_{\{i\},\{j\}}$ looks pretty intrinsic to me. You have a sum over subforests, one component of which must connect $i$ and $j$ (hence the unique path in the tree case). The contribution of such a forest is a product over the other components of the sum over vertices $s$ in that component of $x_s$ minus the degree of vertex $s$ in the original graph $G$. This, assuming your definition of $A$ has zeros on the diagonal. It seems that it is more interesting to use this result in reverse to give a polynomial time algorithm to find the unique path between two given vertices in a tree. If you have a tree with, say, 300 vertices, it may not be easy to find the path between $v_i$ and $v_j$ but we can evaluate $D_{ij}$ using Gaussian elimination. If we then remove all vertices from the tree corresponding to the variables that remain in $D_{ij}$, one gets the unique path.
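Not part of the thread: a small sympy check, on the path tree $1$–$2$–$3$–$4$, of the Dodgson-condensation identity from the answer and of the tree interpretation of the square root. The choice of tree and of the indices $i,j$ is an arbitrary small test case.

```python
# Check, on the path tree 1-2-3-4, that Delta_ij(P) = (D_{i,j})^2 and that the
# minor equals (up to sign) the char. poly of the forest left after deleting
# the path from v_i to v_j (here v1-v2-v3, leaving the single vertex v4).
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
B = sp.Matrix([[x1, -1, 0, 0],
               [-1, x2, -1, 0],
               [0, -1, x3, -1],
               [0, 0, -1, x4]])          # I_x - A for the path 1-2-3-4

P = B.det()
i, j = 0, 2                               # vertices v1 and v3 (0-based indices)
Pi = sp.diff(P, x1)
Pj = sp.diff(P, x3)
Pij = sp.diff(P, x1, x3)

delta = sp.expand(Pi * Pj - P * Pij)
minor = B.minor(i, j)                     # det of B with row i and column j deleted

print(sp.simplify(delta - minor**2) == 0)     # Dodgson condensation: expect True
print(sp.simplify(minor**2 - x4**2) == 0)     # square of Phi of the leftover vertex: True
```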
2025-03-21T14:48:31.222275
2020-06-10T20:10:37
362736
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Pietro Majer", "Taras Banakh", "erz", "https://mathoverflow.net/users/53155", "https://mathoverflow.net/users/6101", "https://mathoverflow.net/users/61536" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630025", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362736" }
Stack Exchange
Is a closed connected semilattice of $C(I)$ path-connected? Let $\Gamma $ be a sub-lattice of the Banach space $\big( B(S),\|\cdot\|_\infty\big)$ of all bounded real valued functions on the set $S$ (meaning that for any $f,g\in\Gamma $ both functions $f\wedge g: x\mapsto \min(f(x),g(x))$ and $f\vee g: x\mapsto\max(f(x),g(x))$ belong to $\Gamma$). Then, it is not hard to see that if $\Gamma$ is closed and connected, it is also path-connected (details below). But what if $\Gamma$ is only assumed to be a closed $\wedge$-semilattice, that is, $f\wedge g\in\Gamma$ for any $f,g\in\Gamma$. Is it still true that $\Gamma$ is path-connected? Additional hypotheses may be assumed, e.g. $S$ a topological space and $\Gamma\subset C_b(S)$.
Question: Is a closed connected $\wedge$-semilattice $\Gamma$ of $C([0,1])$ path-connected? And if $\Gamma$ is also compact? $$*$$
Details. Proof that a closed connected lattice $\Gamma\subset B(S)$ is path-connected (in fact, even locally path-connected). Recall that, as a general fact, in a connected metric space $(\Gamma,d)$, any pair of elements $f$ and $g$ of $\Gamma$, for any $\epsilon>0$, are connected by $\epsilon$-chains. That means: for any $\epsilon>0$ there is $m\in\mathbb{N}$ and a finite sequence $(g_i)_{0\le i \le m}\subset \Gamma$ such that $g_0=g$, $g_m=f$ and $d(g_{i+1},g_i)<\epsilon$ for $i=0,\dots,m-1$. (This is because the set of $f$ that can be connected to $g$ by an $\epsilon$-chain is a non-empty closed and open set, thus the whole space $\Gamma$). In the case of a lattice $\Gamma\subset B(S)$, assuming $g\le f$, the $\epsilon$-chain can be taken increasing w.r.t. the point-wise order: just replace $g_i$ with $g^*_i:=(g_0\vee\dots\vee g_i)\wedge f\in\Gamma$: then clearly $g^*_0=g$, $g^*_m=f$, $g^*_i\le g^*_{i+1}$ for $0\le i<m$ and since for $h\in\Gamma$ the maps $\Gamma\ni u\mapsto u\vee h\in \Gamma$ and $\Gamma\ni u\mapsto u\wedge h\in \Gamma$ are $1$-Lip, it is also true $$\|g^*_{i+1}-g^*_i\|_\infty=\big\|\big((g_0\vee\dots\vee g_i)\vee g_{i+1}\big)\wedge f-\big((g_0\vee\dots\vee g_i)\vee g_i\big)\wedge f\big\|_\infty $$ $$\le\|g_{i+1}-g_i\|_\infty<\epsilon.$$ Moreover, we may monotonically re-index $\epsilon$-chains by finite subsets of $[0,1]$. If we iterate this construction, between consecutive elements of an $\epsilon$-chain, with $\epsilon=1/k$ for $k=1,2,\dots$, we precisely get by induction: A nested sequence of finite sets $J_0:=\{0,1\}\subset J_1 \subset\dots\subset J_k\subset J_{k+1}\subset\dots\subset [0,1]$, with $J:=\cup_{k\ge0}J_k$ dense in $[0,1]$, and a sequence of increasing maps $\alpha_k:J_k\to\Gamma$ such that $\alpha_0(0)=g$, $\alpha_0(1)=f$, ${\alpha_{k+1}}_{|J_k}=\alpha_k$, $\|\alpha_k(s')-\alpha_k(s)\|_\infty<1/k$ for any pair of consecutive elements $s,s'$ of $J_k$. Therefore the maps $\alpha_k$ glue to an increasing map $\alpha:J\to \Gamma$, and I claim this map is uniformly continuous. Indeed for any $k\ge1$ and $t,t'\in J$ with $|t-t'|\le\delta_k$, where $$ \delta_k:=\min\{|s-s'|: s,s'\in J_k, s\ne s'\}>0$$ there are consecutive points $s<s'<s''$ in $J_k$ such that $s'\le t,t'\le s''$, so that by monotonicity of $\alpha$ $$\|\alpha(t')-\alpha(t)\|_\infty\le\|\alpha(s'')-\alpha(s)\|_\infty\le2/k.$$ So $\alpha$ is uniformly continuous and extends by density to a continuous path in $\Gamma$ between $g$ and $f\ge g$. For the general case, we may juxtapose a path from $g$ to $g\vee f$ and one from $g\vee f$ to $f$, showing that $\Gamma$ is path-connected.
Notice that if $f$ is $\epsilon$-close to $g$, so is the whole path, so $\Gamma$ is indeed also locally path-connected. Could you please elaborate on why connectedness of a closed sublattice implies path-connectedness? Sure; I've edited and added a proof so you proved the following: if $\Gamma$ is a lattice, endowed with a complete metric that makes $f\to f\wedge g$ and $f\to f\vee g$ $1$-Lipschitz, then any $f\ge g$ can be joined by a monotone continuous path within $\overline{B}(f, d(f,g))$. And in the process you also showed that a monotone map into such a lattice from a dense subset of $[0,1]$ is uniformly continuous as long as there are $\varepsilon$ stairs for every $\varepsilon$. Interesting! (sorry for an unhelpful comment) It seems that the map $C[0,1]\to C[0,1]$, $f\mapsto- f$, is an isomorphism of $C[0,1]$ endowed with the $\vee$-semilattice operation to $C[0,1]$ endowed with the $\wedge$-semilattice operation. So, these topological semilattices are isomorphic and should have the same properties.
2025-03-21T14:48:31.222571
2020-06-10T20:14:25
362739
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Karim KHAN", "erz", "https://mathoverflow.net/users/152650", "https://mathoverflow.net/users/53155" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630026", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362739" }
Stack Exchange
If $\tau_1\subset \tau_2$ and $X^*$ is separable for $\tau_1$ then $X^*$ is separable for $\tau_2$? Let $X$ be a Banach space; the associated dual space is denoted by $X^*$. Take $\tau_1$ and $\tau_2$ to be two topologies on $X^*$ compatible with the duality $(X^*,X)$, such that $\tau_1\subset \tau_2$. We suppose that $X^*$ is separable for $\tau_1$. Can we say that $X^*$ is separable for $\tau_2$? Sure. If $D$ is $\tau_1$-dense, then the linear span of $D$ is $\tau_2$-dense by the separation theorem, and the rational linear combinations of a set in any TVS are dense in the linear span of the set. Why, "by the separation theorem", do we have that the linear span of $D$ is $\tau_2$-dense? @KarimKHAN since $\tau_2$ is compatible with the duality, if the rational span of $D$ were not dense there would be $x\in X\setminus\{0\}$ such that $\langle x,d\rangle=0$ for all $d\in D$.
2025-03-21T14:48:31.222657
2020-06-10T20:42:21
362742
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630027", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362742" }
Stack Exchange
Linear vs algebraic unipotent quotient stacks Consider algebraic stacks of the form $\mathbb{C}^n/G$ where $G$ is a unipotent group satisfying either Type 1: $G$ acts on $\mathbb{C}^n$ via affine linear transformations Type 2: $G$ acts on $\mathbb{C}^n$ via triangular algebraic transformations (transformations of the form $(x_1,\ldots,x_n) \to (x_1',\ldots,x_n')$ where $x_i'=x_i+p_i(x_1,\ldots,x_{i-1})$ for polynomials $p_i$) Clearly stacks of Type 1 are also of Type 2. Is the converse true? In other words, for a triangular algebraic quotient $\mathbb{C}^n/G$ can one always construct an equivalent affine linear quotient $\mathbb{C}^{n'}/G'$? Motivation here is the first example from the paper: Winkelmann, J. On free holomorphic $\mathbb{C}$-actions on $\mathbb{C}^n$ and homogeneous Stein manifolds. Math. Ann. 286 593–612 (1990)
2025-03-21T14:48:31.222740
2020-06-10T20:49:37
362743
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andreas Blass", "Feng Liang", "Johannes Schürz", "Wojowu", "YCor", "https://mathoverflow.net/users/134910", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/159397", "https://mathoverflow.net/users/30186", "https://mathoverflow.net/users/6794" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630028", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362743" }
Stack Exchange
Formal proof of $ZFC \vdash CON(\ulcorner ZFC-P\urcorner)$ I am wondering whether one can show $ZFC \vdash CON(\ulcorner ZFC-P \urcorner)$. There is an argument in Set Theory, An Introduction to Independence Proofs by Kunen (page 145), but I am confused about the proof. Let $\phi$ be the formula for the coding of $ZFC-P$ in natural numbers, and $X_{ZFC-P}=\{n\in \omega :\phi(n)\}$. By the Gödel Completeness Theorem, as the formal sentence $\forall X (CON(X) \leftrightarrow \exists \mathfrak{M}(\mathfrak{M} \models X) )$, it is enough to prove that: $ZFC \vdash H(\omega_1) \models X_{ZFC-P} $, or say $ZFC \vdash \forall x \in X_{ZFC-P} (H(\omega_1) \models x)$. By the Completeness and Soundness Theorems, it is enough to show that whenever $M$ is a model of $ZFC$, $M$ models $\forall x \in X_{ZFC-P} (H(\omega_1) \models x)$. This amounts to showing for all $x \in X_{ZFC-P}$, $H(\omega_1) \models x$ is true in $M$. However, if $M$ is a nonstandard model which has nonstandard natural numbers, $X_{ZFC-P}$ may be strictly larger than the actual collection of codings of $ZFC-P$. Let $x_0$ be the coding of a nonstandard axiom $\psi$ which has infinite length looking from outside while we have $\phi(x_0)$. In Kunen's book, they showed $H(\omega_1) \models x$ for actual axioms of $ZFC-P$, but not for infinite sentences like $\psi$. In fact, $CON(\ulcorner ZFC-P \urcorner)$ as a formal sentence also includes possible nonstandard axioms. I am wondering whether there is a way to deal with these nonstandard axioms, or if one can show $ZFC \vdash CON(\ulcorner ZFC-P \urcorner)$. What is P here? Power set axiom? Yes, P is the Power Set axiom. What's the meaning of ZFC $\Rightarrow$ Con(X) (where X is a set of axioms)? I don't understand "ZFC" as an assertion (while "$M$ is a model of ZFC" or "Con(ZFC)" are logical assertions). (Oh, it's just been edited, thanks)
You can directly show from $ZFC$ that $\forall n \in X_{ZFC-P}\, \colon \, ( H(\omega_1) \vDash n)$. To see this, remind yourself that $ \vDash$ is expressible by a single formula $\psi$, so that $H(\omega_1) \vDash \varphi(z_1,...,z_m)$ iff $\psi(H(\omega_1), \ulcorner \varphi \urcorner, \vec{z},1)$. Now for $n \in X_{ZFC-P}$ you have a finite case distinction on what kind of axiom $n$ is. E.g. if $n$ is '$\forall A \, \forall \vec{z} $ replacement for the formula $\varphi_n(x,y, \vec{z})$ with respect to $A$ holds', let $A \in H(\omega_1)$ and $\vec{z} \in H(\omega_1)^{<\omega}$ be arbitrary and define the set $$B:=\{y \in H(\omega_1) \, \colon \, \exists x \in A \, \, H(\omega_1) \vDash \varphi_n(x,y, \vec{z})\}.$$ Finally prove that $B \in H(\omega_1)$, and so $H(\omega_1) \vDash n$. The subtlety here is that you are only using replacement with respect to $\psi$. Therefore, this is a finite proof. On the meta-level, for every finite $\Delta \subseteq ZFC$ you can prove $CON(\Delta)$. But $ZFC$ does not prove $\forall \Delta \subseteq ZFC \, \text{finite}\, \colon CON(\Delta)$ as this would contradict Gödel's second Incompleteness Theorem. Actually I wanted to post this as a comment, but it was too long... so I put it in an answer.
I agree with the part that if $n \in X_{ZFC-P}$ represents some real axiom of $ZFC-P$, then $H_{\omega_1} \models n$. However, $ZFC$ allows nonstandard models which have nonstandard natural numbers. Such natural numbers have infinite predecessors looking from outside but are regarded as finite numbers inside the model. When we define $X_{ZFC-P}$, we are using some formula $\phi$ such that $X_{ZFC-P}=\{ n : \phi (n) \}$.
It is not clear if $X_{ZFC-P}$ contains nonstandard naturals. If $n_0 \in X_{ZFC-P}$ is nonstandard, we don't have an axiom of ZFC-P corresponding to $n_0$. In this case it is not clear if $H_{\omega_1} \models n_0$. @FengLiang In any model of ZFC that has nonstandard natural numbers, some of them will be in $X_{ZFC-P}$, but that makes no difference. It is a theorem of ZFC (and therefore true in all models of ZFC, even nonstandard ones) that $H_{\omega_1}$ satisfies all axioms of ZFC-P (and "all" here means all in the sense of the model, including nonstandard axioms). Do you know where I can find a proof of it? To be clear, we need to show: $ZFC \vdash H_{\omega_1} \models X_{ZFC-P}$, which is stronger than the statement: assuming $ZFC$, $H_{\omega_1}$ is a model of $ZFC-P$. For the latter one, it is proved as Theorem 6.5 in Chapter IV of Kunen's book. I agree with that proof, but all it proves is that all standard axioms of $ZFC-P$ are true in $H(\omega_1)$. Besides that, I don't know a proof of $ZFC \vdash H_{\omega_1} \models X_{ZFC-P}$. Sorry I didn't understand your answers at the beginning. I think I get it now! Thanks a lot!
2025-03-21T14:48:31.223050
2020-06-10T20:49:42
362744
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Alf", "Dieter Kadelka", "Sina Baghal", "https://mathoverflow.net/users/100904", "https://mathoverflow.net/users/37014", "https://mathoverflow.net/users/59151" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630029", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362744" }
Stack Exchange
concentration inequality for product of matrices Suppose that we have $N$ positive semidefinite $n\times n$ matrices $A_1, \cdots,A_N$ such that $0\preceq A_i \preceq I_n$, $\forall i$. Also let us assume that $A_i\neq I_n$ for all $i$ and that they do not commute. Now we construct a sequence $X(t)$ as follows: At each iteration, we choose a matrix $A(t)$ from $\{A_1,\cdots,A_N\}$ uniformly at random and let $X(t)=A(t)X(t-1)$. We also let $X(0)=I_n$. Let us also assume that $X(t)\rightarrow 0$ with probability 1 as $t\rightarrow \infty$. Then the question is whether we can compute bounds for the following \begin{equation} \mathbb{P}\left(\Vert X(t)-\mathbb{E}[X(t)]\Vert> \delta \right). \end{equation} I don't expect any answer for the general case, but do we know results for specific cases? Have you investigated the case when each $A_i$ is diagonal? What if the matrices are symmetric and commute? @DieterKadelka I edited my post. We exclude the case where they commute, as that case would be perhaps an easier problem. For nonasymptotic bounds I have a recent preprint that might be useful: https://arxiv.org/pdf/2003.05437
This is an extremely well-studied problem. The canonical reference is Bougerol and Lacroix's book: Bougerol, Philippe; Lacroix, Jean, Products of random matrices with applications to Schrödinger operators, Progress in Probability and Statistics, Vol. 8. Boston - Basel - Stuttgart: Birkhäuser. X, 283 p. DM 88.00 (1985). ZBL0572.60001. But the paper by Goldsheid-Margulis is also very good: Gol'dshejd, I. Ya.; Margulis, G. A., Lyapunov exponents of a random matrix product, Usp. Mat. Nauk 44, No. 5(269), 13-60 (1989). ZBL0687.60008.
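Not from the thread: a small Monte-Carlo sketch (an estimate, not a bound) of the probability in question for one concrete random family of contractions. It uses the fact that, with i.i.d. uniform choices, $\mathbb{E}[X(t)]=\bar A^{\,t}$ where $\bar A=\frac1N\sum_i A_i$. All sizes and the threshold are arbitrary test choices.

```python
# Monte-Carlo estimate of P(||X(t) - E[X(t)]|| > delta) for random PSD
# contractions A_1,...,A_N.  Since the factors are chosen i.i.d. uniformly,
# E[X(t)] = Abar^t with Abar = (A_1 + ... + A_N)/N.
import numpy as np

rng = np.random.default_rng(2)
n, N, t, delta, trials = 4, 3, 20, 0.05, 2000

def random_contraction():
    """Random symmetric matrix with eigenvalues in (0, 1), so 0 <= A <= I."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    lam = rng.uniform(0.2, 0.95, size=n)
    return Q @ np.diag(lam) @ Q.T

As = [random_contraction() for _ in range(N)]
EX = np.linalg.matrix_power(sum(As) / N, t)      # exact E[X(t)]

def sample_X():
    X = np.eye(n)
    for _ in range(t):
        X = As[rng.integers(N)] @ X
    return X

dev = np.array([np.linalg.norm(sample_X() - EX, 2) for _ in range(trials)])
print("estimated P(||X(t) - E[X(t)]|| > delta):", np.mean(dev > delta))
```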
2025-03-21T14:48:31.223193
2020-06-10T21:23:59
362749
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "S.Surace", "https://mathoverflow.net/users/158968", "https://mathoverflow.net/users/35520", "https://mathoverflow.net/users/69603", "ofer zeitouni", "user158968" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630030", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362749" }
Stack Exchange
What is the drift for a convex combination of Girsanov measures? Consider two Girsanov measures $\mu_1$ and $\mu_2$ corresponding to drifts $F_1(t)$ and $F_2(t)$ respectively. By this, I mean that we have that $B(t)\sim F_1(t)+\tilde B(t)$ where $\tilde B(t)$ is a Brownian motion under $\mu_1$. Similarly for $\mu_2$. For $\lambda \in [0,1]$ we can consider the probability measure $\mu=\lambda \mu_1+(1-\lambda) \mu_2$. $\mu$ is also a Girsanov measure so it corresponds to a drift $F(t)$. What is $F$ in terms of $F_1,F_2$? I know if $F_1, F_2, F$ are all deterministic then $$F(t)=E_\mu[B(t)]=\lambda F_1(t)+(1-\lambda)F_2(t)$$. What about in general? Even in the case where $F_1,F_2$ are deterministic can we say that $F$ is? This itself is pretty tricky. It is not clear to me why the convex combination of Girsanov measures should be a Girsanov measure. Where do you get this from? @S.Surace Because it has a density. @S.Surace Any measure that is absolutely continuous wrt Wiener measure is a Girsanov measure and corresponds to a $W^{1,2}$ drift. Sure, this makes sense. Unfortunately I don't know an answer to this. The exponential martingales and the sum don't seem to go well together. Just take drift $F_1$ with probability $\lambda$ and drift $F_2$ w.p. $(1-\lambda)$. If you want an explicit probabilistic description in terms of the drifts $F_1,F_2$, just enlarge the probability space to support an independent Bernoulli $B$ of parameter $\lambda$ and set the drift $F=BF_1+(1-B)F_2$. I'm sorry, I'm not sure exactly what you're showing here. I thought you wanted a drift $F$ that will generate the measure $\mu$. I constructed one for you. Wasn't it your question? BTW, I somewhat disagree with what you wrote in the "deterministic case". That is what I wanted. Thank you. Why do you disagree? Because $F(t)$ is the expression I wrote, and not the one you did. If you take $F=\lambda F_1+(1-\lambda) F_2$, where $F_i$ are deterministic functions, the measure you will get is not the convex combination of $\mu_1$ and $\mu_2$. Thanks.
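Not from the thread: a quick numerical illustration, with two arbitrary deterministic test drifts, of the construction in the answer (flip a Bernoulli($\lambda$) coin and use drift $F_1$ or $F_2$), and of why the single deterministic drift $\lambda F_1+(1-\lambda)F_2$ does not reproduce $\mu$: the means of $B(1)$ agree, but under $\mu$ the variance of $B(1)$ exceeds $1$.

```python
# Sample B(1) under mu by randomising the drift, then compare mean and variance
# with what the single deterministic drift lam*F1 + (1-lam)*F2 would give.
import numpy as np

rng = np.random.default_rng(4)
lam, T, paths = 0.3, 1.0, 200000
F1, F2 = 2.0, -1.0                      # values F1(T), F2(T) of two test drifts

coin = rng.random(paths) < lam          # Bernoulli(lam) choice of the drift
drift_T = np.where(coin, F1, F2)
B_T = drift_T + np.sqrt(T) * rng.standard_normal(paths)   # B(T) = F(T) + W(T)

print("mean:", B_T.mean(), "vs", lam * F1 + (1 - lam) * F2)
print("var :", B_T.var(), "vs 1 + lam*(1-lam)*(F1-F2)^2 =",
      1 + lam * (1 - lam) * (F1 - F2) ** 2)
```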
2025-03-21T14:48:31.223364
2020-06-10T22:48:46
362757
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carlo Beenakker", "hichem hb", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/144355", "https://mathoverflow.net/users/491235", "user3826143" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630031", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362757" }
Stack Exchange
expectation of the trace of the square root of a Wishart matrix Let $X(N,N)$ be a Wishart matrix with $\operatorname{rank}(X)=K$. In order to estimate the expectation of the trace of the square root of $X$, i.e. of $X^{1/2}$, I want to know whether it is possible to use the unordered Wishart distribution function to estimate this value: \begin{align} E[\operatorname{trace}(\sqrt X )]=? \end{align}
Since the trace is invariant under unitary transformations, you can work in a basis where $X$ is diagonal, with nonzero elements $x_n$, $n=1,2,\ldots, K$ on the diagonal; denote by $P(x)$ their marginal distribution; then $$\mathbb{E}[{\rm tr}\, \sqrt X]=\mathbb{E}\left[\sum_{n=1}^K\sqrt{x_n}\right]=K\int P(x)\sqrt{x}\,dx.$$ For $K\gg 1$ you can use the Marcenko-Pastur distribution for $P(x)$.
@CarloBeenakker sir, can I use the distribution of the unordered eigenvalue \begin{align} f(\lambda ) = \frac{1}{K}\sum_i \frac{i!}{(i + N - K)!}\,[L_i^{N - K}(\lambda )]^2\,\lambda ^{N - K}e^{ - \lambda }\end{align} where $L$ denotes the Laguerre polynomial? yes, the order is irrelevant. Sir, I have tried to estimate the expectation using the distribution of the unordered eigenvalue. However, when I compare this result to the result obtained numerically (using Matlab) I find that they are not similar. As a test, you could first try large K and use the Marcenko–Pastur distribution. Using the Marcenko–Pastur distribution I can't find a closed form for this integral? It is just an integral you have to evaluate. @CarloBeenakker I am puzzled by the first equality in your answer. Matrices X distributed according to Wishart are not diagonalizable in the same basis. I would appreciate it if you could clarify the answer. Are you claiming that the quantity in question is just K times the distribution of $X_{11}$ (the first diagonal entry of $X$)? @user3826143 : the trace of a matrix $\sqrt X$ is the sum of the eigenvalues of $\sqrt X$, and the eigenvalues of $\sqrt X$ are the square roots of the eigenvalues $x_n$ of $X$; hence ${\rm tr}\, \sqrt X=\sum_n \sqrt{x_n}$; and no, $x_1$ is not the first diagonal entry of $X$, it is the first eigenvalue of $X$ @CarloBeenakker Yes! I see how stupid my question was. Thank you for clarifying nonetheless. While we are here though, I hope I could ask one more question. How do we know the marginal distributions of the eigenvalues are the same? Actually never mind! You already answered this question by noting that the trace is invariant under unitary transformations. Thanks so much!
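Not from the thread: a Monte-Carlo sanity check of the answer, assuming numpy/scipy and one concrete Wishart convention, $X=HH^T$ with $H$ an $N\times K$ standard Gaussian matrix. It compares the empirical average of $\operatorname{tr}\sqrt X$ with the Marchenko–Pastur prediction $K\sqrt N\int\sqrt{x}\,dMP_c(x)$, $c=K/N$. Sizes are arbitrary.

```python
# Compare the empirical E[tr sqrt(X)] for X = H H^T (H an N x K Gaussian matrix)
# with the large-K Marchenko-Pastur prediction K * sqrt(N) * int sqrt(x) dMP_c(x).
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(3)
N, K, trials = 200, 100, 50
c = K / N

vals = []
for _ in range(trials):
    H = rng.standard_normal((N, K))
    eig = np.linalg.eigvalsh(H.T @ H)          # the K nonzero eigenvalues of X
    vals.append(np.sum(np.sqrt(np.clip(eig, 0, None))))
print("Monte Carlo :", np.mean(vals))

a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
mp = lambda x: np.sqrt((b - x) * (x - a)) / (2 * np.pi * c * x)   # MP density, c <= 1
integral, _ = quad(lambda x: np.sqrt(x) * mp(x), a, b)
print("MP estimate :", K * np.sqrt(N) * integral)
```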
2025-03-21T14:48:31.223551
2020-06-10T23:08:05
362759
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630032", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362759" }
Stack Exchange
How do we obtain the divisor bound in similar but different settings? The divisor bound refers to the statement that the divisor function $d(n)$, defined to be the number of positive integral divisors of $n$, can be bounded very efficiently. Indeed, it is easily shown that $d(n) = O_\varepsilon \left(n^{\varepsilon}\right)$ for every $\varepsilon > 0$. This, for example, immediately allows one to show that the equation $x^2 + y^2 = n$ has at most $O_\varepsilon \left(n^\varepsilon\right)$ solutions, since the solutions to this equation count the divisors of $n$ over the Gaussian integers (which form a UFD). Now consider the field $K = \mathbb{Q}(\sqrt[3]{2})$ and the ring of integers $\mathcal{O}_K = \mathbb{Z}[\sqrt[3]{2}]$. We are in the nicest possible setting: $\mathcal{O}_K$ is monogenic, is a PID, and the unit group has rank one. Here my question can be made very explicit. Suppose we want to factor a principal ideal of the shape $(s + \sqrt[3]{2} t)$, with $s,t$ coprime rational integers. We do so by writing $$\displaystyle (s + \sqrt[3]{2} t) = R \cdot J$$ for ideals $R, J \subset \mathcal{O}_K$. Since $\mathcal{O}_K$ is a PID, we can represent $R, J$ by $R = (r_0 + \sqrt[3]{2} r_1 + \sqrt[3]{4} r_2)$ and $J = (j_0 + \sqrt[3]{2} j_1 + \sqrt[3]{4} j_2)$ with $r_i, j_i \in \mathbb{Z}$ for $i = 0,1,2$. By fixing a fundamental domain of the action of $\mathcal{O}_K^\ast$ we can eliminate associates, and reduce to finding solutions to an equation of the form $$\displaystyle \begin{bmatrix} s \\ t \\ 0 \end{bmatrix} = \begin{bmatrix} r_0 j_0 + 2 r_2 j_1 + 2 r_1 j_2 \\ r_1 j_0 + r_0 j_1 + 2 r_2 j_2 \\ r_2 j_0 + r_1 j_1 + r_0 j_2 \end{bmatrix}. \quad (\ast)$$ By the divisor bound, we know that $I = (s + \sqrt[3]{2} t)$ has at most $O_\varepsilon \left(N(I)^{\varepsilon} \right)$ divisors, which immediately gives a good bound. Is there a way to see this bound without using number theory? That is, for fixed $s,t$ can one show that $(\ast)$ has $O_\varepsilon \left((s^2 + t^2)^\varepsilon \right)$ solutions, say, if we bound $|r_i|, |j_i|$ so that there can't be a unit group action? More generally, if we have three bilinear forms $Q_0(\mathbf{x}; \mathbf{y}), Q_1(\mathbf{x}; \mathbf{y}), Q_2(\mathbf{x}; \mathbf{y}) \in \mathbb{Z}[\mathbf{x}; \mathbf{y}]$ can we produce a divisor-like bound for solutions $\mathbf{x}, \mathbf{y}$ to the system $$\displaystyle \begin{bmatrix} s \\ t \\ 0 \end{bmatrix} = \begin{bmatrix} Q_0(\mathbf{x}; \mathbf{y}) \\ Q_1(\mathbf{x}; \mathbf{y}) \\ Q_2(\mathbf{x}; \mathbf{y})\end{bmatrix}$$ at least provided that $\lVert \mathbf{x} \rVert, \lVert \mathbf{y} \rVert \ll (s^2 + t^2)^{1/4}$?
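Not from the question: a brute-force counter for the system $(\ast)$ with all coordinates restricted to a box $|r_i|,|j_i|\le B$, just to make the counting problem concrete; the values of $s$, $t$, $B$ are arbitrary small test choices.

```python
# Count integer solutions of the bilinear system (*) inside a box of size B.
from itertools import product

def count_solutions(s, t, B):
    sols = 0
    box = range(-B, B + 1)
    for r0, r1, r2, j0, j1, j2 in product(box, repeat=6):
        if (r0 * j0 + 2 * r2 * j1 + 2 * r1 * j2 == s and
            r1 * j0 + r0 * j1 + 2 * r2 * j2 == t and
            r2 * j0 + r1 * j1 + r0 * j2 == 0):
            sols += 1
    return sols

print(count_solutions(s=5, t=3, B=3))
```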
2025-03-21T14:48:31.223836
2020-06-11T00:10:26
362761
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andres Mejia", "Connor Malin", "https://mathoverflow.net/users/134512", "https://mathoverflow.net/users/84075" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630033", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362761" }
Stack Exchange
relationship between "linear approximation" to immersions and formal immersions I'm reading these notes Here, I am regarding $\mathrm{Imm}(-,N)$ as a presheaf on the open sets of some manifold $M$ If we take $\mathrm{Imm}^f(-,N)$ to be the sheaf of formal immersion (an element is an injective bundle map on the tangent bundles covering an arbitrary map $g:M \to N$.) I've gathered from context that there is a connection between formal immersions and the sheaf of sections $\mathcal{F}(U):=\Gamma(V_n(TU) \times_{GL_n} \mathrm{Imm}(\mathbb R^n,N))$ (defined for an arbitrary open $\mathbb R^n \to M$. Please see definition 3.2 and the preceeding paragraph in the linked notes for more details.) This is referenced as the linear approximation to the sheaf of immersions, but I don't know why, although I assume it agrees in the sense of functor calculus. My questions are the following: is the topological sheaf of formal immersions isomorphic to $\mathcal{F}$? If not, is there some relationship? If so, is the scanning map of Segal compatible (via some isomorphism) with the ``obvious" maps $\mathrm{Imm}(U,N) \to \mathrm{Imm}^f(U,N),\,\,\, f \mapsto (df,f)$? The last obvious map is 1-jet prolongation. The basic motivation here is to realize Hirsch-Smale as an instance of the H-principle in the sense of the posted notes (via the scanning map) I think the idea is that we can view a formal immersion as immersing a neighborhood of p into the tangent space of $g(p)$. This coincides with how we view the linearization of a diff invariant sheaf because the fiber over a point of the linearization is a colimit of the sheaf on values of trivial neighborhoods, and these trivial neighborhoods are exactly what the tangent space approximates via the exponential map (which we use to define the scanning map). @ConnorMalin thank you for your comment, it was very helpful! What remains for me is writing down an explicit map (and hopefully it’s inverse.) I will try to work it out later today using your insights. You probably need to choose a type of exponential map to define the isomorphism. Probably a good idea to choose this so it coincides with the exponential used to define the scanning map if you want the square to commute. Check out section 37 of these notes: http://people.math.harvard.edu/~kupers/teaching/272x/book.pdf . To be linear (at a manifold) is to have the map to the first taylor approximation a weak equivalence. It seems to me after reading this that satisfying the h-principle is equivalent to being linear. @ConnorMalin I think one needs "h-principle+reduced=linear" if I remember the terminology correctly. I agree. I think my confusion here is why hirsch-smale is an h-principle a lot. I will look at those notes (which I've used in the past to some success for different topics.) Thanks!\ @ConnorMalin sorry for the late update. I think that this settles why being "linear"+reduced is equivalent to satisfying an $h$-principle in the sense of the notes I posted. The exposition here did not have enough detail for me to compute why $F(U)$ is actually (as above) is $T_1(Imm(U))$. In particular, I don't understand how the "bousfield-kan formula" gives what Kupers claims, or even assuming that, why it results in the formula that he writes. Another thing (which is basically what this question is about) is why the natural map $Imm(U) \to F(U)$ being a homotopy equivalence... is an equivalent condition to $Imm(U) \hookrightarrow Imm^f(U)$ being a homotopy equivalence.
2025-03-21T14:48:31.224094
2020-06-11T01:55:24
362766
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Elías Guisado Villalgordo", "Johannes Huisman", "Julian Rosen", "Qfwfq", "https://mathoverflow.net/users/101848", "https://mathoverflow.net/users/4721", "https://mathoverflow.net/users/5263", "https://mathoverflow.net/users/85592" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630034", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362766" }
Stack Exchange
"Real algebraic varieties" vs finite type separated reduced $\mathbb{R}$-schemes with dense $\mathbb{R}$-points This question is partly motivated by a few comments here. Let me denote by $R$ the (real-closed) field of real numbers $\mathbb{R}$; everything is probably the same over an arbitrary real-closed field. When one has a polynomial subset $V$ of $R^n$, the following two are equally sensible ways of putting a structure sheaf on $V$: One is by considering regular functions in the sense of usual scheme theory: in this case the global regular functions are $R$-polynomials in $n$ variables modulo the ideal of those polynomials vanishing on $V$. If, more precisely, we call $X$ the corresponding $R$-scheme (with all of its non-$R$-points too, which by the way are reconstructible from the set $V\subseteq R^n$) and $O_X$ its structure sheaf, then $X(R)=V\subseteq R^n$ and $O_X(X)\simeq R[x_1,\ldots, x_n]/I_X$. The other way is by declaring that a regular function is a ratio of polynomials with nonvanishing denominator. We will call such functions $R$-regular, and $R_V$ the resulting structure sheaf. We call $(V,R_V)$ an $R$-algebraic variety. This definition seems to be standard in real algebraic geometry, see e.g. Bochnak-Coste-Roy - Real algebraic geometry (Section 3.2). I think it doesn't change much if we consider the topological space $X$ of the scheme in point 1) instead, endowed with the sheaf $R_X$ that sends an open set $U$ to the rational functions on $U\subseteq X$ that are regular at each point of $U\cap X(R)$. The resulting structure sheaves are not the same. For example, consider the real line: the function $\frac{1}{1+x^2}$ is an $R$-regular function which is not (scheme-theoretically) regular. Likewise, one can define abstract $R$-algebraic varieties, and $R$-regular maps thereof. The curious thing is that every projective $R$-algebraic variety is $R$-biregularly isomorphic to an affine one. Indeed, the set theoretic map (example 1.5 in Ottaviani - Real algebraic geometry. A few basics or theorem 3.4.4 in BCR) $$\mathbb{P}^n(R)\to \operatorname{Sym}^2(R^{n+1})\;\;,\quad (x_0:\ldots : x_n)\mapsto \frac{x_ix_j}{\sum_{h=1}^n x_h^2}$$ is an $R$-regular embedding. This does not correspond to an everywhere-defined morphism of schemes, as is immediately seen by looking at any component of the map in a standard affine chart of $\mathbb{P}^n$. Are there non-(quasi-)affine abstract $R$-algebraic varieties at all? Edit: I think the "quasi" in "quasi-affine" may be pleonastic: I haven't checked the details but a quasi-affine $R$-algebraic variety should very often be affine. Indeed, if $X=W\smallsetminus Y$, $Y\subset W \subseteq R^n$ with $W$ affine and $Y$ closed (maybe with some assumptions on $Y$), the real blowup $\operatorname{Bl}_Y W$ is closed in some $\mathbb{P}^{m}\times W$ and the latter is affine; but now the "missing" set $E$ has become a divisor: $X\simeq (\operatorname{Bl}_Y W)\smallsetminus E$, and affine minus a divisor is still affine. The above example (the one of projective space embedding in an affine space) shows that the category $\text{$R$-Var}$ of $R$-algebraic varieties is not a full subcategory of schemes over $\operatorname{Spec}(R)$. On the other hand, I think the category $\operatorname{Sch}'_R$ of finite type separated reduced schemes over $\operatorname{Spec}(R)$ is a full subcategory of $\text{$R$-Var}$. 
[Edit: following the comment of Julian Rosen, we probably also want to require the schemes in $\operatorname{Sch}'_R$ to have dense $R$-points] Are there two non-isomorphic schemes in $\operatorname{Sch}'_R$ that become isomorphic in $\text{$R$-Var}$? Edit: even before posting, I found example 3.2.8 in BCR. There is also proposition 3.5.2 in BCR, the $R$-biregular isomorphism between the circle $x^2+y^2=1$ and $\mathbb{P}^1_R$. And between the "quadric" sphere and the "Riemann" sphere (i.e. complex projective line thought of as a real algebraic variety). In which other ways does $\text{$R$-Var}$ deviate from $\operatorname{Sch}'_R$? Note: I'm not asking how real algebraic geometry deviates from complex algebraic geometry (which is surely addressed in a preexisting MO question). Edit: (added following question) For non real-closed fields, or fields of positive characteristic, do people consider varieties in the sense of 1) or in the sense of 2)? For example, should $1/(1+x^2)$ be a regular function on the line over $\mathbb{F}_7$? (It's a well defined function on a finite field, so there will be a polynomial realizing its values set theoretically, but should it be enough?) -- Or, should 1/(x^2-3) be a regular function on the line over $\mathbb{Q}(\sqrt{2})$? For your second question, the scheme $\operatorname{Spec} R[x]/(x^2+1)$ becomes isomorphic to the empty scheme in $R$-Var. Maybe the definition of $\operatorname{Sch}'_R$ should include only those schemes whose $R$-points are Zariski-dense (this is true for every scheme arising in the construction (1)). Thanks. Edited accordingly. As for your first question, concerning nonaffine R-varieties as you call them, yes, there are nonaffine R-varieties. However, they are considered pathological. Example 12.1.5 on page 301 of Bochnak-Coste-Roy, Real algebraic geometry, constructs an R-line bundle over $\mathbf R^2$ whose total space is not affine. In fact, it is not affine since it does not have any separated complexification. Note that the R-variety itself, however, is separated! The essential point here is that the set of real points of an irreducible affine scheme over $\mathbf R$ can be reducible. In the aforementioned example, the irreducible scheme in question is the one defined by the irreducible polynomial $$p=x^2(x-1)^2+y^2\in\mathbf R[x,y,z].$$ The set of real points in $\mathbf R^3$ defined by $p$ is the disjoint union of the affine lines $$L_0=\{(0,0)\}\times\mathbf R\ \mathrm{and}\ L_1=\{(1,0)\}\times\mathbf R. $$ This is clearly a reducible subset of $\mathbf R^3$. The separated R-variety that does not have a separated complexification is the one obtained by gluing the open subsets $$ U_0=\mathbf R^3\setminus L_0\ \mathrm{and}\ U_1=\mathbf R^3\setminus L_1 $$ along the open subsets $$ U_{01}=U_0\cap U_1\subseteq U_0\ \mathrm{and}\ U_{10}=U_0\cap U_1\subseteq U_1 $$ via the regular isomorphism $$ \phi_{10}\colon U_{01}\rightarrow U_{10} $$ defined by $$ \phi_{10}(x,y,z)=(x,y,pz). $$ Note that this is indeed a regular isomorphism since the map $\phi_{01}=\phi_{10}^{-1}$ is the regular map $$ \phi_{01}\colon U_{10}\rightarrow U_{01} $$ defined by $$ \phi_{01}(x,y,z)=(x,y,\tfrac{z}{p}). $$ Now, it is easy to see that the R-variety $U$ one obtains is separated, as defined in the founding paper of the whole theory: Faisceaux algébriques cohérents by Jean-Pierre Serre. Indeed, one easily checks that the diagonal in $U\times U$ is closed. 
However, if one wants to construct a real scheme $X$ whose set of real points coincides with $U$, then, inevitably, $X$ will not be separated. Indeed, the polynomial $p$ defines a nonclosed point $x_0$ in any scheme-wise thickening $X_0$ of $U_0$ since $p$ has zeros in $U_0$, and similarly it defines a non closed point $x_1$ of any scheme-wise thickening $X_1$ of $U_1$. The gluing morphisms $\phi_{01}$ and $\phi_{10}$ will extend to open subsets $X_{01}$ of $X_0$ and $X_{10}$ of $X_1$, but they won't contain $x_0$ and $x_1$, respectively. This is because the polynomial $p$ vanishes at $x_0$. As a result, any scheme-wise thickening of $U$ will be nonseparated! As for your second question, if I understand correctly, you are asking whether the functor $$ F\colon Sch_R'\rightarrow R-Var $$ defined by $F(X)=X(\mathbf R)$ is an equivalence onto a full subcategory, where $Sch_R'$ is the category of finite type separated reduced schemes over $Spec(\mathbf R)$ having dense sets of real points. This is an equivalence onto a full subcategory, its image category, if you localize $Sch_R'$ with respect to inlcusions of open subsets containing all real points: any morphism of $R$-varieties wil extend to a morphims defined on some open subset containing the real points. Uniqueness is implied by density of real points and separation. As for your third question, I can't think of other differences between $R$-varieties and schemes over $\mathbf R$ that differ essentially from phenomena already present in the example above. As for your final question about varieties in the sense of $R$-varieties over other fields, Serre certainly did define them in the paper I mentioned above. I'm not sure whether that has had a follow-up for other fields than real or algebraically closed fields. Thank you for the detailed answer. As for the second question, in the edit immediately following it I reported a couple of examples, that I found in Coste et al. just after having written that question, which show $Sch'_R\to R-Var$ cannot be full if you don't localize. If I get it correctly, your answer now adds the observation that localizing is the only thing left to do to obtain fullness. As for essential surjectivity, your "non separated scheme theoretic thickening" example shows that $F$ cannot be ess. surjective (cause my $Sch_R'$ by definition only contained separated schemes). On the other hand we can't just extend $F$ from the cat. $Sch_R''$ of all possibly non separated schemes (with the rest of conditions unchanged) cause we would land outside $R-Var$. I'm wondering if the "thickification" is a functor $G:R-Var\to Sch_R$, and if we have an adjuction with the suitable extention $\tilde{F}:G(R-Var)\to R-Var$ where $G(R-Var)\subset Sch_R$ is the essential image in schemes. Well, I think I said that the functor $F$ after localization is an equivalence onto a full subcategory. As you said, it cannot be essentially surjective. Concerning a thickification functor, I think it can only be defined if at both sides you allow nonseparated objects. If you localize suitably, the functor of taking-real-points is then probably an equivalence of categories, so in particular, you would have an adjunction, yes. Re your first comment (and my second one): you're right. (Btw, I wasn't making a correction, just recapping and asking an additional question) Regarding “Serre certainly did define them in the paper I mentioned above” in the last paragraph of you answer: FAC's definition of (pre)variety certainly applies to an arbitrary ground field. 
However, I think it is interesting to know that Serre doesn't worry about this: at the beginning of the 2nd chapter of FAC, we read dans toute la suite de cet article $K$ désigne un corps commutatif algébriquement clos, de caractéristique quelconque.) Two questions: (1) How one does exactly define a "thickification" functor $G:R-Var\to Sch_R$? It is not clear to me how one could even define it on the affine case: if $X\in R-Var$ is such that $X\cong V(I)\subset k^n$, it seems natural to define $G(X)=\operatorname{Spec}(k[x_1,\dots,x_n]/I(X))$. But this definition is unfeasible, since the coordinate ring is not invariant under isomorphisms in $R-Var$ (see this). (2) If we allow everything to be non-separated, is now $F$ essentially surjective? That is, is every real classical prevariety of the form $X(\mathbb{R})$, for some reduced scheme $X$ of finite type over $\mathbb{R}$? (and same question but now demanding $X$ to have dense real points) I wrote my thoughts about this here.
2025-03-21T14:48:31.225042
2020-06-11T03:15:24
362767
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andy Putman", "Moishe Kohan", "Ryan Budney", "https://mathoverflow.net/users/1465", "https://mathoverflow.net/users/317", "https://mathoverflow.net/users/39654" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630035", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362767" }
Stack Exchange
The existence of an isotopy in the manifold Let $M$ be a connected closed smooth manifold and $B(1)$ the unit ball in $\mathbb R^n$. Suppose $f:M \times B(1) \to M \times \mathbb R^n$ is a smooth embedding. Can we find an ambient isotopy $F_t$ of $M \times \mathbb R^n$ such that $F_0=\text{id} \times \text{id}$ and $F_1 \circ f=h\times \text{id}$, where $h$ is a self-diffeomorphism of $M$. You need some more assumptions there to rule out things like $f(p,x)=(g(p),x)$ for $g\colon M \rightarrow M$ a diffeomorphism that is not homotopic to the identity even after crossing with a large enough euclidean space (e.g. for $g$ that acts nontrivially on the homology of $M$). No, you can not. The simplest example I can imagine is $M=S^0$ and $f$ be any embedding which switches the components. @RyanBudney: This is definitely false (see my comment), but the OP did insist that $M$ be connected. In view of Andy's comment, you should also allow for $F_1\circ f$ to be of the form $h\times id$, where $h$ is a self-diffeomorphism of $M$. @AndyPutman: That just pushes the problem one dimension up, let $M=S^1$ and $f$ be conjugation on $S^1$. Your initial response appeared at essentially the same time as my own so I missed it. That's not true as stated. $h$ being a diffeomorphism implies that $F_1 \circ f$ is a homotopy equivalence, thus $F_0 \circ f$ also needs to be a homotopy equivalence. One can easily construct counterexamples, such as $$S^1 \times [-1,1] \to [-1,1] \times [-1,1] \to S^1 \times \mathbb{R}.$$ Thinking of $S^1 \subset \mathbb{C}$, the first map can be $ (z,t) \mapsto (z \cdot e^{t-1})$ and the second one $ (t,x) \mapsto (e^{it},x)$. Since the first map is contractible, the composition is contractible. Both maps are smooth embeddings so the composition is as well. Here's a picture. In such case you can't even hope for $h$ to be a topological embedding since that would be a homeomorphism. But in this case you can get some smooth $h$. And of course asking $f$ to be a homotopy equivalence, or homotopic to identity (both spaces are homotopic to $M$), would yield an interesting question. EDIT: Even requiring that $f|_{M \times \{0\}} = \mathrm{id} \times \{0\}$ (and thus $f$ is homotopic to identity) would not be sufficient. Say this time that $M=S^1$ and $n=2$, and take the map $ S^1 \times D^2 \owns (z,w) \mapsto_f (z,zw) \in S^1 \times \mathbb{C} = S^1 \times \mathbb{R}^2 $. If you view $S^1 \times \mathbb{R}^2$ as a solid torus embedded in $\mathbb{R}^3$, then $f( S^1 \times \{0\})$ and $f(S^1 \times \{1/2\})$ have linking number $1$ and no isotopy can unlink them. So, you also need to assume at least that $Df$ as a bundle map is, in a suitable sense, homotopic through embeddings to identity. Precisely, you need to assume that there is a homotopy $f_t:M \times B(1) \to M \times \mathbb{R}^2$ between $f$ and identity and a homotopy $F_t : TM \times \mathbb{R}^n \to TM \times \mathbb{R}^n$ between $Df$ and identity of bundle maps lifting $f_t$. At this point it sounds plausible to me, probably some $h$-principle machinery may be used to prove this.
2025-03-21T14:48:31.225265
2020-06-11T03:29:54
362768
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630036", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362768" }
Stack Exchange
Are the symmetric groups integrable as Hopf algebras? Let $G$ be a group. For $g,h \in G$, let $[g,h]=g^{-1}h^{-1}gh$ be a commutator. The normal subgroup $G' = \langle [g,h] \ | \ g,h \in G \rangle$ is called the commutator subgroup or derived subgroup. An integral of $G$ is a group $H$ such that $H'\simeq G$. The problem of the existence of an integral was first mentioned by B.H. Neumann in this paper (1956). A group without an integral is called non-integrable. The smallest non-integrable finite group is the symmetric group $S_3$; moreover $S_n$ is non-integrable $\forall n \ge 3$. Here are two recent references about integrals of groups: Filom-Miraftab (2017) and Araújo-Cameron-Casolo-Matucci (2019). The commutator subgroup is the smallest normal subgroup for which the quotient is commutative. This notion was extended to semisimple Hopf algebras by Burciu (here, 2012) and studied by Cohen-Westreich (see for ex. here). It is called the commutator subalgebra. It is the smallest normal left coideal subalgebra for which the quotient is commutative. Then let us call a semisimple Hopf algebra integrable if it is isomorphic to the commutator subalgebra of a semisimple Hopf algebra. Question: Are the Hopf algebras $\mathbb{C}S_n$ integrable? What if $n=3$? More generally, we can ask whether there exists a non-integrable finite group which is integrable as a Hopf algebra, and if so, whether there is one which is not, and if not, whether every finite dimensional semisimple Hopf algebra is integrable.
2025-03-21T14:48:31.225388
2020-06-11T03:38:28
362769
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Steven Landsburg", "Timothy Chow", "https://mathoverflow.net/users/10503", "https://mathoverflow.net/users/3106" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630037", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362769" }
Stack Exchange
Does is it change an auction's incentives when causing the winner to pay more makes losers pay less? Say there are three roommates moving into an apartment with three rooms. Two of the apartment's rooms are identical, but the third one is valued higher by all three parties (say it's bigger and has a private bathroom). To decide which one gets the big room, the roommates decide to hold an auction. If the apartment's monthly rent is $p$ dollars in total, the idea behind the auction is to find a number $\delta$ such that the winner of the auction pays $\frac{p}{3} + \delta$, while the two losers pay $\frac{p}{3} - \frac{\delta}{2}$. In other words, the winner of the auction (who gets the big room) pays a bigger share of the rent compared to the two losers. Does this type of auction—where the two losers benefit from the value of $\delta$ being large—differ from traditional auctions, where it doesn't concern the losers what the winner pays? For example, does it still apply that if the price is decided using an English auction, that each bidder $i$ is incentivized to never bid higher than their value $v_i$? Intuitively, you might think that if the bidder $s$ with the second highest value $v_s$ for $\delta$ is reasonably confident that the to-be winner $w$ has a much higher value $v_w$ for $\delta$ they might bid higher than their own value $v_s$, in hopes of raising $\delta$ and lowering the rent that they have to pay. Assuming all three reservation prices are drawn independently from a uniform distribution on $[0,1]$, and assuming also that my arithmetic is right, the Nash equilibrium bidding strategy is $\delta=v/2$, contrary to your expectation. @ÁrniDagur : Seems to me you've answered your own question. In the standard setting, bidding your true value is a weakly dominant strategy in an English auction. If you're going to be outbid, it doesn't help you to bid higher than your true value. But here, it does. So bidding your true value is no longer weakly dominant. You can also indirectly infer that an English auction does not easily solve this problem because the literature on rent division involves rather complicated protocols and not a simple auction. The business about the English auction makes no sense. In an English auction, by definition, the losers neither make nor receive any payments. In your auction, losing is not as bad as it is in an English auction. Therefore you should expect that in your auction, people bid lower, not higher. To check this, I'll take the three reservation prices to be drawn independently and uniformly from $[0,1]$. (For other distributions, it will be easy to modify these calculations.) 1) If your reservation price is $x$ and your bid is $b$, your expected return is $$\pi(x,b)=Prob(Win)(x-b)+E(b)$$ where $E(b)$ is your expected gain if you lose the auction. ($E$ can't depend on $x$ because $x$ is not observable to anyone but you.) 2) Let $b=B(x)$ be a symmetric monotonic Nash equilibrium bidding strategy. (Symmetric means that all three players employ the same strategy.) Let $\pi_0(x)=\pi(x,B(x))$. 3) By the chain rule, $$\pi_0'(x)={\partial\pi\over\partial x}+{\partial\pi\over\partial b}B'(x)$$ But when you choose your bid optimally, the second partial is zero, so we have $$\pi_0'(x)={\partial\pi\over\partial x}=Prob(Win)=x^2$$ with the last equality following from the uniformity assumption. 
Clearly $\pi_0(0)=1/3$, so we have $$\pi_0(x)={x^3\over 3}+{1\over 3}$$ 4) Directly from the definitions, and continuing to assume we're in a symmetric monotonic Nash equilibrium, we have $$\pi_0(x)=\pi(x,B(x))=x^2(x-B(x))+E(B(x))$$ Combining this with (3), we get $${x^3\over 3}+{1\over 3}=x^2(x-B(x))+E(B(x))\hskip{2pc}(1)$$ where $$ 2E(B(x))=\int_0^1\int_0^1 max(B(y),B(z))dydz- \int_0^x \int_0^x max(B(y),B(z)))dz$$ Equation (1) is an integral equation for $B$. Solving it, I get $B(x)=x/2$. In particular, your optimal bid is always less than your reservation price.
2025-03-21T14:48:31.225666
2020-06-11T03:52:42
362771
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Kevin Carlson", "https://mathoverflow.net/users/43000" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630038", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362771" }
Stack Exchange
Strictifying closed monoidal categories? Let $C$ be a cartesian closed category. It's well known that $C$ is equivalent to a category where the product is strict monoidal; i.e. where there are equalities of the functors given by the expressions below, $1 \times X = X = X \times 1$ $(X \times Y) \times Z = X \times (Y \times Z)$ Question: Is there a similar coherence theorem including the exponential? E.g. I imagine an internal hom for which there are equalities of functors $[1,X] = X$ $[X \times Y, Z] = [X, [Y,Z]]$ $[X,1] = 1$ $[X, Y \times Z] = [X, Y] \times [X, Z]$ Bonus question: Can we further assume that the set of objects, together with the operations, is a free algebra for the variety of universal algebra on the constant $1$, binary operations $\times$ and $[,]$, and satisfying the above axioms and monoid axioms? I'm also curious about general closed monoidal categories. Obviously we can't ask for all four of these axioms, but can we get the first two? Kelly and Mac Lane proved a coherence theorem for closed monoidal categories here: https://www.sciencedirect.com/science/article/pii/0022404971900132
2025-03-21T14:48:31.225779
2020-06-11T04:20:22
362772
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Angelo", "Ben Webster", "Carlo Beenakker", "LSpice", "Piotr Achinger", "Vít Tuček", "Wojowu", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/2383", "https://mathoverflow.net/users/30186", "https://mathoverflow.net/users/3847", "https://mathoverflow.net/users/4790", "https://mathoverflow.net/users/66", "https://mathoverflow.net/users/6818" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630039", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362772" }
Stack Exchange
Why the name O for category O? What is the motivation behind naming the category O appearing in the theory of Lie algebras? Does O stand for something? Here is a question Why the BGG category O? that further confuses me. It seems like there is a notion of when a category is O, is it? The category is in some sense generated by modules arising from sheaves of holomorphic sections of line bundles. So maybe the question is why are the sheaves of holomorphic sections denoted by $\mathcal{O}$? @VítTuček, isn't that because of something like "O for olomorfe"? I think that the $\mathcal O$ notation comes from the notation for rings of integers in number fields, probably standing for "order" ("Ordnung" in German). I thought it was O for Oka... Four comments, four different answers... I guess that proves this question is worthy of an authoritative answer! the paper by Bernstein, Gelfand & Gelfand that introduces category O just says: "We shall call this category of $g$ modules the category O." Someone (Mirkovic? I'm not sure) told me that when they discovered it, they said "Oh, that's the right category!" This was probably a joke, but is my preferred explanation. From [Humphreys: Representations of semisimple Lie algebras in the BGG category O], notes for Chapter 1: The letter chosen to label the category is the first letter of a Russian word meaning “basic” which is основной.
2025-03-21T14:48:31.226067
2020-06-11T05:10:56
362773
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630040", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362773" }
Stack Exchange
Minimum of a product of polynomial evaluated at primitive roots of unity, given that the value of the polynomial at the same lies on unit circle This is something that came out of working on a problem: Let $m$ be an odd positive integer and $f \in \mathbb Q[x]$ be a polynomial of degree less than $m$. With $\zeta_m$ denoting a primitive root of unity, suppose that $|f(\zeta_m)|=1$ but $f(\zeta_m)$ is not a root of unity. Find the minimum value of the product $$P_{f, m}(n):= \prod_{a \in (\mathbb Z / m \mathbb Z)^\times} |f(\zeta_m^a)^{2^{n+1}} - f(\zeta_m^{2a})^{2^n}|$$ over all natural numbers $n$. I would like to know The minimum $\min_{n \in \mathbb N} P_{f, m}(n)$ possibly as a function of $m$ and (perhaps) the coefficients of $f$. For which $n$ (of course, allowed to depend on $m$, $f$) is this bound attained? If there is any "absolute" lower bound on the product $P_{f, m}(n)$ over all natural numbers $n$ and over all polynomials $f \in \mathbb Q[x]$ with $\deg f \leq m-1$ which is positive (and in terms of $m$). For which $n$ and $f$ (of course, allowed to depend on $m$) is this bound attained? (Perhaps too much to hope) Can we describe the behaviour of the function $P_{f, m}(n)$ as a function in $n$, again depending on the given polynomial $f$ and $m$? Since all the Galois conjugates of $f(\zeta_m)$ are given by $\{f(\zeta):\zeta \in \mu_m\}$ (where $\mu_m$ denotes the group of the primitive $m$-th roots of unity) and the extension $\mathbb Q(\zeta_m)/\mathbb Q$ is abelian so that complex conjugation commutes with all other elements of the Galois group, I could reformulate the given problem as the following: Let $m \in \mathbb N$, $f \in \mathbb Q[x]$ with $\deg f \leq m-1$ such that $f(\zeta) \in \mathbb{S}^1 \setminus (\bigcup_{r \in \mathbb N} \mu_r)$ for all $\zeta \in \mu_m$. Find the respective minimum values (as above) of the product $P_{f,m}(n) = \prod_{\zeta \in \mu_m} |f(\zeta^a)^{2^{n+1}} - f(\zeta^{2a})^{2^n}|$. Apart from this, I tried explicitly writing out $f$ and the above product in terms of a sequence of polynomials but (as expected I guess), that got pretty out of hand pretty quickly. I would really appreciate any suggestions. I'm not sure if this would help but we may also assume that $m$ is odd (in all of the above problems). P.S.: The exact function that I am trying to find the upper bound (over all natural numbers $n$) of (again in terms of $m$, $f$, both or neither) is actually $$G_{f,m}(n):= \frac{1}{2^n} \log\frac{2^{n+1}}{P_{f,m}(n)^{1/\phi(m)}} = \frac{1}{2^n} \left( (n+1) \log 2 - \frac{1}{\phi(m)} \log P_{f, m}(n) \right)$$ so if it is easier to directly find the corresponding upper bounds of $G_{f, m}(n)$ then I would really appreciate some suggestions for the same.
2025-03-21T14:48:31.226276
2020-06-11T05:45:41
362774
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Dieter Kadelka", "RSMax", "RobPratt", "Robert Israel", "https://mathoverflow.net/users/100904", "https://mathoverflow.net/users/13650", "https://mathoverflow.net/users/141766", "https://mathoverflow.net/users/159426" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630041", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362774" }
Stack Exchange
Of all probability matrix $P$ having stationary distribution $\pi$, find the one having smallest diagonal I am requesting your help today trying to solve a somewhat odd problem. Is there a way to find through some numerical algorithm such as Newton's method the stochastic matrix $\boldsymbol{P}$ having stationary distribution $\boldsymbol{\pi}$ (column vector, given as an input) and lowest diagonal (whose elements are closest to 0)? I initially thought about minimizing the following loss function: $$L(\boldsymbol{P}) = {(\mathrm{diag}(\boldsymbol{P}))}^\mathsf{T} \mathrm{diag}(\boldsymbol{P}) + \lambda_1 {||\boldsymbol{P}^\mathsf{T}\boldsymbol{\pi} - \boldsymbol{\pi}||}^2 + \lambda_2 {||\boldsymbol{P}\boldsymbol{e} - \boldsymbol{e}||}^2,$$ where $\mathrm{diag}$ returns the diagonal of its argument, $\lambda_1$ and $\lambda_2$ are both Lagrange multipliers, and $\boldsymbol{e}$ is a vector filled with 1 (same dimensions as $\boldsymbol{\pi}$). As far as I know, this is a case of differentiating a scalar ($L$) with respect to a matrix ($\boldsymbol{P}$). I think it may involve something called tensors, but to be honest I have little to zero experience with this, and even less once you throw in Lagrange multipliers. I did some calculations, and it would appear the differential is given by $$\tfrac{\partial{}L}{\partial{}\boldsymbol{P}} = 2\mathrm{\boldsymbol{D}}(\boldsymbol{P}) - 2\lambda_1\boldsymbol{\pi}\boldsymbol{\pi}^\mathsf{T}(\boldsymbol{I} - \boldsymbol{P}) - 2\lambda_2 \boldsymbol{e} \boldsymbol{e}^\mathsf{T} (\boldsymbol{I} - \boldsymbol{P}^\mathsf{T}),$$ which at some point involved an "outer product" or "Kronecker product", but could be simplified to that. The $D$ function outputs a diagonal matrix having same diagonal as its argument. In turn, the Hessian matrix (matrix of second order derivatives) would be given by $$\tfrac{\partial{}^2L}{\partial{}\boldsymbol{P}\partial{}\boldsymbol{P}^\mathsf{T}} = 2\boldsymbol{I} + 2\lambda_1\boldsymbol{\pi}\boldsymbol{\pi}^\mathsf{T} + 2\lambda_2 \boldsymbol{e} \boldsymbol{e}^\mathsf{T}.$$ I tried inputting these in a "Newton's method-like" program, but all it outputted was gibberish. All of this is a bit out of my league, but I really tried to make it work by myself before running here. I would be so grateful if someone could help me out. I know a solution exists, because Excel's solver is able to find solutions (don't ask why I use Excel, in this case I don't have a choice). Thanks, RSMax P.S. Just in case there would be multiple definitions around, by "stochastic matrix" I mean a square matrix whose elements are probabilities, and whose rows all sum to 1. P.P.S. By stationary distribution, I am referring to "long terms odds", as given by: $$\boldsymbol{\pi} = {(\boldsymbol{I} + \boldsymbol{E} - \boldsymbol{P}^\mathsf{T})}^{-1} \boldsymbol{e},$$ where $\boldsymbol{E}$ is a square matrix filled with ones (same dimensions as $\boldsymbol{P}$). EDIT No1: Fixed some typos and added some clarifications regarding what is an input. EDIT No2: Substituting the sum of the squares of the diagonal elements for the trace indeed made this a linear problem which I was then able to solve using the simplex's "Big M" method. Works like a charm, even on Excel! 
My VBA code: Public Const MAXVALUE As Double = 1.79769313486231E+308 'Arbitrary large value Public Const EPSILON As Double = 0.0001 'Tolerance parameter for the Simplex method Public Const BIGM As Double = 100# 'Big M constant for the Simplex method Public Function Simplex(odds As Range) As Variant '/////////////////////////////////////////////////////////////////////// '/// :function: Simplex '/// :scope: Public '/// :description: For odds.Cells.Count > 10, there can be instability issues. '/// :return: Wrapper (data validation) for the Simplex_ function. '/// :odds: An array of long-term odds (Range). '/////////////////////////////////////////////////////////////////////// Dim n As Long n = odds.Cells.Count With WorksheetFunction If .Count(odds) <> n Then Simplex = CVErr(xlErrNum) ElseIf .CountIf(odds, ">1") > 0 Then Simplex = CVErr(xlErrValue) ElseIf .CountIf(odds, "<0") > 0 Then Simplex = CVErr(xlErrValue) ElseIf Abs(.Sum(odds) - 1) > EPSILON Then Simplex = CVErr(xlErrValue) Else Simplex = Simplex_(odds) End If End With End Function Private Function Simplex_(pi As Range) As Double() '/////////////////////////////////////////////////////////////////////// '/// :function: Simplex_ '/// :scope: Public '/// :description: Implements the "Big M" variant of the Simplex method. '/// :return: The lowest trace probability matrix having stationary distribution Odds. '/// :pi: An array of long-term odds. '/////////////////////////////////////////////////////////////////////// Dim StopFlag As Boolean Dim output() As Double Dim MinRatio As Double Dim MaxCost As Double Dim factor As Double Dim Ratio As Double Dim Pivot As Double Dim x() As Variant Dim Row As Long Dim Col As Long Dim n As Long Dim i As Long Dim j As Long Dim k As Long n = pi.Cells.Count 'As we're looking for a matrix, the number of unknowns is n^2 + others for the constraints ReDim x(0 To 2 * n + 1, 0 To n * n + 2 * n + 2) As Variant ReDim output(1 To n, 1 To n) As Double x(0, 0) = "BASIS" x(0, 1) = "Z" For i = 1 To n * n Row = 1 + (i - 1) \ n Col = 1 + (i - 1) Mod n x(0, i + 1) = "p" & Row & Col Next For i = 1 To n Col = 1 + (i - 1) Mod n x(0, i + n * n + 1) = "c.r" & Col x(0, i + n * n + n + 1) = "c.pi" & Col x(i, 0) = "c.r" & Col x(i + n, 0) = "c.pi" & Col Next x(0, n * n + 2 * n + 2) = "RHS" 'Right-hand side x(2 * n + 1, 0) = "COST" For i = 1 To n For j = 1 To n * n Row = 1 + (j - 1) \ n x(i, j + 1) = IIf(Row = i, 1, 0) Next x(i, n * n + 2 * n + 2) = 1 For j = 1 To n * n Row = 1 + (j - 1) \ n Col = 1 + (j - 1) Mod n x(i + n, j + 1) = IIf(Col = i, pi(Row).Value, 0) Next x(i + n, n * n + 2 * n + 2) = pi(i).Value Next For i = 1 To n * n Row = 1 + (i - 1) \ n Col = 1 + (i - 1) Mod n x(2 * n + 1, i + 1) = IIf(Row = Col, -1, 0) Next For i = 1 To 2 * n For j = i + 1 To 2 * n x(i, n * n + 1 + j) = 0 x(j, n * n + 1 + i) = 0 Next x(i, 1) = 0 x(i, n * n + 1 + i) = 1 x(2 * n + 1, n * n + 1 + i) = -BIGM Next x(2 * n + 1, 1) = 1 x(2 * n + 1, n * n + 2 * n + 2) = 0 For k = 1 To 2 * n Col = n * n + 1 + k MinRatio = MAXVALUE For i = 1 To 2 * n If x(i, Col) > 0 Then Ratio = x(i, n * n + 2 * n + 2) / x(i, Col) If Ratio < MinRatio Then MinRatio = Ratio Row = i End If End If Next Pivot = x(Row, Col) For i = 1 To 2 * n + 1 If i <> Row Then factor = -x(i, Col) / Pivot For j = 1 To n * n + 2 * n + 2 x(i, j) = x(i, j) + factor * x(Row, j) Next End If Next For j = 1 To n * n + 2 * n + 2 x(Row, j) = x(Row, j) / Pivot Next Next Do MaxCost = -MAXVALUE For i = 1 To n * n If x(2 * n + 1, i + 1) > MaxCost Then MaxCost = x(2 * n + 1, i + 1) Col = i + 1 End If 
Next If MaxCost <= 0 Then Exit Do Else MinRatio = MAXVALUE For i = 1 To 2 * n If x(i, Col) > 0 Then Ratio = x(i, n * n + 2 * n + 2) / x(i, Col) If Ratio < MinRatio Then MinRatio = Ratio Row = i End If End If Next Pivot = x(Row, Col) For i = 1 To 2 * n + 1 If i <> Row Then factor = -x(i, Col) / Pivot For j = 1 To n * n + 2 * n + 2 x(i, j) = x(i, j) + factor * x(Row, j) Next End If Next For j = 1 To n * n + 2 * n + 2 x(Row, j) = x(Row, j) / Pivot Next x(Row, 0) = x(0, Col) End If Loop For i = 1 To n * n Row = 1 + (i - 1) \ n Col = 1 + (i - 1) Mod n For j = 1 To 2 * n If x(j, 0) = "p" & Row & Col Then output(Row, Col) = x(j, n * n + 2 * n + 2) Exit For End If Next Next Simplex_ = output End Function Usually this sort of matrices is called "stochastic". @Dieter Kadelka You are perfectly right, in fact I initially learned about these in a class named "Stochastic Processes", so in makes so much sense, but since I'm translating everything in my head as I write, I simply didn't make the connection right away. If you make the objective to minimize the sum of the diagonal entries (i.e. the trace), your problem becomes a linear programming problem, solvable with readily available software (I think even Excel). In many cases the optimal solution will have all diagonal entries $0$. EDIT: It may help to think of the problem this way. There are $n$ people, numbered from $1$ to $n$. Each person $i$ has a nonnegative amount $\pi_i$ of money, and we want to move all the money around so that everyone ends up with the same amount they started with. $P_{ij}$ is the fraction of person $i$'s money that goes to person $j$. Of course this will not be possible if one person has more than half the total amount of money. Otherwise it will be possible, by the following algorithm. We arrange the people in order clockwise from richest to poorest around a circular table, and everyone puts their money on the table in front of them (let's say in coins of a given denomination). The richest persion (#1) takes the amount $\pi_1$ off the table, starting with #2's coins, which come after his own coins. Each person then continues, taking the correct number of coins starting where the previous one left off. By induction, for $k$ from $2$ to $n$, when #k's turn comes, all his coins have already been taken. EDIT: Here is a more explicit version of the algorithm I alluded to above. We may assume that $\pi_1 \ge \pi_2 \ge \ldots \pi_n > 0$. If necessary, we reorder the rows and columns to make them decrease. If some $\pi_i$ are $0$, we make the corresponding columns all $0$ and put a $1$ arbitrarily (off the diagonal) in each of the corresponding rows, and follow the following algorithm for the rows and columns where $\pi > 0$. If $\pi_1 > 1/2$, we let $P_{11} = 2 - 1/\pi_1$, $P_{i1}=1$ and $P_{1i} = \pi_i/\pi_1$ for $ i > 1$, and all other $P_{ij} = 0$. It is easy to check that this works, and no solution can have $P_{11} < 2 - 1/\pi_1$. Now suppose $\pi_1 \le 1/2$. Let $S_k = \sum_{i=1}^k \pi_i$ be the partial [[sums of $\pi$, so $S_0 = 0$ and $S_n = 1$. Let $r_{ij}$ be the length of the intersection of the intervals $[S_{i-1}, S_i]$ and $[S_{j-1}+\pi_1, S_j + \pi_1] \mod 1$. Then $P_{ij} = r_{ij}/\pi_i$. For example, suppose $\pi = [0.3, 0.24, 0.24, 0.16, 0.06]$. The partial sums are $[0, 0.3, 0.54, 0.78, 0.94, 1]$. The shifted partial sums $S_i + \pi_1$ are $[.3, .0.6, 0.84, 1.08, 1.24, 1.30]$. 
Then, for example, the intersection of $[S_{2} + \pi_1, S_3 + \pi_1] \mod 1 = [.84, 1] \cup [0, .08]$ and $[S_0, S_1]=[0,0.3]$ has length $0.08$, so $P_{13} = 0.08/0.3 = 4/15$, while the intersection of $[S_2+\pi_1, S_3 + \pi_1] \mod 1$ with $[S_3, S_4] =[.78, .94]$ has length $.1$ so $P_{43} = .1/.16 = 5/8$ and the intersection of $[S_2+\pi_1, S_3 + \pi_1] \mod 1$ with $[S_4, S_5] = [.94, 1]$ has length $.06$ so $P_{53} = .06/.06 = 1$. The resulting matrix is $$ \pmatrix{ 0 & 0 & 4/15 & 8/15 & 1/5\cr 1 & 0 & 0 & 0 & 0\cr 1/4 & 3/4 & 0 & 0 & 0\cr 0 & 3/8 & 5/8 & 0 & 0\cr 0 & 0 & 1 & 0 & 0\cr} $$ Alternatively, an objective to minimize $\sum_i P_{i,i}^2$ yields a (convex) quadratic programming problem. I totally agree with the EDIT section, this is something I have observed in practice using Excel's solver. What remains encouraging is that through this scheme, $P_{11}$ will nonetheless be less than $\pi_1$, given that $\pi_1 \geq 0.5$, ergo the diagonal has still been minimized in some way. Whether I minimize the trace or the sum of the squares of all elements on the diagonal, my current issue (under the Newton's method approach) is that my "main" unknown is a matrix ($\boldsymbol{P}$), whereas the Lagrange multipliers are scalars, and I'm confused about how to wrap those 3 under a common structure so that I can recursively update it and converge toward a solution. Should I add rows and columns to the unknown $\boldsymbol{P}$ matrix so that it incorporates $\lambda_1$ and $\lambda_2$ on the main diagonal? If by linear programming you mean something other than Newton's method, I am very much interested. As I need to ultimately code the result, anything easier and/or more efficient than Newton's method will be received with open arms. Would you mind elaborating a bit? Thank you sir. Linear programming in Excel. But the algorithm I presented in the EDIT is simpler (though maybe not so easy to program in Excel). Maybe I should make it more explicit. The following is too large for a comment and definitely not an answer. Fix $n \in \mathbb{N}$ and let $\cal{P}$ be the set of stochastic $n \times n$-matrices with $\text{diag}(P) = 0$. Let further $$\mathcal{E} := \left\{\pi \in \mathbb{R}^n \colon \pi_i \geq 0, \sum \pi_i = 1, \exists P \in \mathcal{P} \text{ with } \pi \cdot P = \pi\right\},$$ each $\pi$ being a row vector. Then both $\mathcal{P}$ and $\mathcal{E}$ are compact convex subsets of $\mathbb{R}^{n \times n}$ resp. $\mathbb{R}^n$ and not empty if $n \geq 2$, and for $n = 2$ we have $\mathcal{E} = \{(0.5,0.5)\}$. The problem is that $(1,0,\ldots,0), \ldots, (0,0,\ldots,1) \not\in \mathcal{E}$. But for large $n$ it seems (I have no proof) that almost any random stochastic $\pi$ is in $\mathcal{E}$. For this case your original problem is equivalent to "Find a $P \in \mathcal{P}$ with $\pi \cdot P = \pi$". Here the explicit construction of a homogeneous Markov chain with transition probability matrix $P$ (appropriately chosen) may be helpful to get a "feeling" for the solution. Of course this is impossible to formalize. It seems to me that for $n$ large using numerical methods may be too heavy for this sort of problem. Edit (I've just seen the answer of Robert Israel) A reformulation as a linear program with criterion $$\sum P_{ii} = \min!$$ and variables $P_{ij}$ is possible. But $\pi_i$ is given, right?
I doubt the order of the $\boldsymbol{P}$ matrix will ever exceed $n = 10$ states though. In fact, I won't allow it to. If you have access to a unix/linux computer you should try the glpk package on https://mirrors.ocf.berkeley.edu/gnu/glpk It's very much faster than Excel. After installing this package you have glpsol. With glpsol you can solve, among other things, linear programs. There are examples from which you can start. @RSMax: If you have a windows computer, there is a windows version of glpk: http://winglpk.sourceforge.net/ There is glpsol.exe in the package. I don't know anything about the quality of this solver. It should be much faster than Excel, but you have to invest some time.
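As an editorial illustration (not part of the original thread, and in Python rather than the VBA above), here is a minimal sketch of Robert Israel's shifted-interval construction for the case $\pi_1 \le 1/2$, run on the worked example $\pi = [0.3, 0.24, 0.24, 0.16, 0.06]$; all function and variable names are the editor's.

```python
# Editorial sketch: Robert Israel's shifted-interval construction (pi sorted decreasingly,
# pi[0] <= 1/2). P[i][j] is the length of the overlap of [S_i, S_{i+1}] with the interval
# [S_j + pi[0], S_{j+1} + pi[0]] taken mod 1, divided by pi[i].

def overlap(a0, a1, b0, b1):
    # length of the intersection of two ordinary intervals [a0, a1] and [b0, b1]
    return max(0.0, min(a1, b1) - max(a0, b0))

def circle_overlap(a0, a1, b0, b1):
    # [a0, a1] sits inside [0, 1]; [b0, b1] is read modulo 1 (its length is at most 1)
    s = b0 % 1.0
    e = s + (b1 - b0)
    if e <= 1.0:
        return overlap(a0, a1, s, e)
    return overlap(a0, a1, s, 1.0) + overlap(a0, a1, 0.0, e - 1.0)

pi = [0.3, 0.24, 0.24, 0.16, 0.06]   # the example from the answer
n = len(pi)
S = [0.0]
for p in pi:
    S.append(S[-1] + p)              # partial sums S_0, ..., S_n

P = [[circle_overlap(S[i], S[i + 1], S[j] + pi[0], S[j + 1] + pi[0]) / pi[i]
      for j in range(n)] for i in range(n)]

assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)                  # stochastic rows
assert all(abs(sum(pi[i] * P[i][j] for i in range(n)) - pi[j]) < 1e-9
           for j in range(n))                                         # pi * P = pi
print([round(P[i][i], 6) for i in range(n)])                          # all zeros here
```

The assertions check that every row sums to $1$ and that $\pi P = \pi$; for this $\pi$ the printed diagonal is identically zero, and the off-diagonal entries reproduce the matrix displayed above.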
2025-03-21T14:48:31.227138
2020-06-11T06:11:11
362775
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Colin Reid", "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/4053" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630042", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362775" }
Stack Exchange
Commutators in an unrestricted infinite wreath product Consider an unrestricted wreath product $G = \prod_X A \rtimes B$, where $A$ is a group and $B$ is some subgroup of $\mathrm{Sym}(X)$. I am wondering what the circumstances are under which $\prod_X A$ is contained in (the closure of) the derived group $[G,G]$. If $X$ is finite, then $\prod_X A \nleq [G,G]$, because there is a nontrivial $G$-invariant homomorphism $\prod_X A \rightarrow A$ given by summing the entries. If there is $b \in B$ such that all orbits of $\langle b \rangle$ are infinite, then a standard argument shows that every element of $\prod_X A$ is a commutator of $G$. (We represent $h \in \prod_X A$ as $[b,h']$, where the entries of $h$ along each $b$-orbit are given by differences of successive entries of $h'$.) Edit: Suppose $\prod_X A$ has the product topology (with respect to some group topology on $A$). Under what circumstances does $[G,G]$ contain a dense subgroup of $\prod_X A$? That $[A,A]^X\subset [G,G]$ is not always true: there are examples where $[A,A]^X$ is not contained in $[A^X,A^X]$. Precisely the inclusion $[A,A]^X\subset [A^X,A^X]$ fails iff $A$ is perfect but not uniformly perfect and $X$ is infinite (then choose $B={1}$). If $B$ acts by finitely supported permutations and $A$ is abelian, the mapping from $G$ to $A^X/A^{(X)}$ mapping mapping $(a,b)\in A^X\rtimes B$ to the image of $a\in A^X$ in $A^X/A^{(X)}$, is a homomorphism. So the abelianization is huge (if $A\neq 0$ and $X$ is infinite). ... more generally if $A$ is nontrivial abelian and every finitely generated subgroup of $B$ has infinitely many finite orbits then the abelianization is huge, by an ultraproduct argument. @YCor, good points. I think I want to make some topological assumption and ask whether $[G,G]$ contains a dense subgroup of $A^X$. Then for the density question, the only obstruction is the existence of infinite orbits. Proposition: let $B$ be a group, let $A$ be a nontrivial abelian group and $X$ a $B$-set. Then the intersection of $D(A^X\rtimes B)$ with $A^X$ is dense in $A^X$ (for the product topology, $A$ being discrete, and $D$ meaning derived subgroup) if and only if all $B$-orbits are infinite. Lemma: Let $B$ be a group, let $A$ be an abelian group and $X$ a transitive $B$-set. Then the coinvariants of $B$ in the $\mathbf{Z}$-module $A^{(X)}$ consists of the subgroup $A^{(X)}_0$ of elements with sum zero. [The coinvariants of a $\mathbf{Z}B$-module $M$ is the $\mathbf{Z}$-submodule of $M$ generated by the elements $bm-m$ when $(b,m)\in B\times M$. So the quotient is the largest quotient module with trivial $B$-action.] Proof: clearly it is contained in $A^{(X)}_0$. Also $A^{(X)}_0$ is generated as $\mathbf{Z}$-module by the elements with support of cardinal two. Let $\delta_x(a)-\delta_y(a)$ be such an element (with self-explanatory notation). Since the action is transitive, there is $b\in B$ such that $y=bx$, and hence this is a commutator in an obvious way. As a consequence, we get: Lemma: Let $B$ be a group, let $A$ be an abelian group and $X$ a $B$-set. Then the coinvariants of $B$ in the $\mathbf{Z}$-module $A^{(X)}$ consists of the set $A^{(X)}_{B,0}$ of elements $m\in A^{(X)}$ such that for every $B$-orbit $Y\subset X$ we have $\sum_{y\in Y}m(y)=0$. $\Box$ Proof of the proposition: if there is a finite orbit $Y$, the mapping $(m,b)\mapsto \sum_Y m$ is a continuous surjective homomorphism $A^X\rtimes B\to A$. 
If $A$ is nontrivial, this homomorphism is nontrivial on $A^X$ and this implies failure of the density (this part was already noted by the OP). Conversely, suppose that every $B$-orbit is infinite. Let $Y$ be a finite subset of $X$, and let $m_0$ be an element of $A^Y$. Since all orbits are infinite, we can find $m\in A^{(X)}_{B,0}$ whose restriction to $Y$ is $m_0$. By the corollary, $m\in D(A^X\wr B)$. This proves the density. Corollary: Let $B$ be a group, let $A$ be a group and $X$ a $B$-set. Then the intersection of $D(A^X\rtimes B)$ with $A^X$ is dense in $A^X$ if and only if either $A$ is a perfect group, or all $B$-orbits are infinite. $\Box$
2025-03-21T14:48:31.227395
2020-06-11T07:10:39
362777
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andreas Blass", "Dominic van der Zypen", "bof", "https://mathoverflow.net/users/43266", "https://mathoverflow.net/users/6794", "https://mathoverflow.net/users/8628" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630043", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362777" }
Stack Exchange
Hypergraph coloring in linear hypergraphs Let $H=(V,E)$ be a hypergraph with $V\neq\varnothing$ and $E \neq \varnothing$ such that for all $e\in E$ we have $|e|\geq 2$, and for all $e\neq e_1 \in E$ we have $|e\cap e_1|\leq 1$. Is there a function $f: V\to E$ such that for all $e\in E$ the restriction $f{\restriction}_e$ is non-constant? EDIT. By @bof's remark below, the case that $E$ is infinite, is trivial. Indeed I only wanted to ask the question for finite $V$ (and therefore finite $E$). Apologies for the omission. If $E$ is infinite we can find a set $W\subseteq V$ such that $|W\cap e|\ge2$ for all $e\in E$, and $|W|\le|E|$, so we can take a function $f:V\to E$ whose restriction to $W$ is injective. I guess the finite case is equally trivial. It probably reduces to the case of ordinary finite graphs, and in that case it's clear that the chromatic number is less than or equal to the number of edges. By the way, do these questions have some kind of motivation? It might make the questions more interesting if we knew what the motivation was. Right @bof I should have written it is for finite $V$ only - will edit the question. I was just playing around with linear hypergraphs, they also arise in formulations of statements equivalent to the Erdös-Faber-Lovasz conjecture. I also thought it would be trivial in the finite case, but I couldn't do it (which doesn't mean it isn't trivial). If we replace each edge with a $2$-element subset, the number of edges does not increase and the chromatic number does not decrease, so no generality is lost by assuming the hypergraph is a ordinary finite graph. Now $\chi(G)\le\Delta(G)+1\le|E|$ unless the graph is a star, and then $\chi(G)\le2$, oops, looks like $K_2$ is a counterexample. I think the comment by @bof essentially covers the finite case too, except for the case of 2 vertices and one edge, in which case the result is false. In any other finite case, reduce to an ordinary graph (shrink each edge to have size 2) and assume it's connected (treat each component separately. Fix a spanning tree T and a vertex v. Let f map each vertex w$\neq$v to the edge of T leading from w toward v. That's injective, but we still need to map v somewhere. So we're OK unless T is the whole graph and all its edges are incident to v. In that case, try again, starting with a different v. In my comment, "the comment by @bof" refers to the first such comment; the second arrived while I was typing, and it completes the answer.
2025-03-21T14:48:31.227579
2020-06-11T07:34:08
362779
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630044", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362779" }
Stack Exchange
The properties of total variation metric The question was asked by me on Math StackExchange, but no answer appeared, so I ask for help again. Let $(X, d)$ be a complete (Hausdorff, separable, locally compact, and other nice properties you want) metric space and $\mathcal{M}$ be the space of locally finite, fully supported Borel probability measures on $X$ (with Borel sigma-algebra $\mathcal{B}$). Then the total variation metric can be defined by $d_{TV}:= \sup\limits_{A}|\mu(A)- \nu(A)|$ where $A$ runs over $\mathcal{B}$ and $\mu \in\mathcal{M}, \nu\in \mathcal{M}$. Questions: Is $d_{TV}$ a complete metric on $\mathcal{M}$? Is $(\mathcal{M}, d_{TV})$ a Hausdorff/separable space? When is a subset $\mathcal{N}$ ($\subset \mathcal{M}$) pre-compact? Let $(X,d)=([0, 1], | |)$, where $| |$ is the Euclidean metric; is there a sequence $\{\mu_i\}$ that converges weakly to $\mu$ but does not $d_{TV}$-converge to $\mu$? Here $\mu_i, \mu \in \mathcal{M}$. What about these questions when $(X, d)$ is a Riemannian manifold, or when the measures are required to be Radon measures?
2025-03-21T14:48:31.227759
2020-06-11T07:51:34
362780
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Dmitri Pavlov", "James Baxter", "https://mathoverflow.net/users/132446", "https://mathoverflow.net/users/402" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630045", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362780" }
Stack Exchange
Path connectedness of a certain subspace of measurable functions Note: Functions that differ on a null set are not identified. Consider the space of measurable functions $[0, 1] \to [0, 1]$ that are continuous exactly on a set of Lebesgue measure $r$ , $0 < r < 1$ under the sup norm. Is this space path connected? What does "continuous exactly on a set of Lebesgue measure r" mean? Is this set fixed? What about its complement, is the function required to be everywhere discontinuous there? The set {a| f is continuous at a} has measure r. The set is not fixed. And what is the sup norm here? The ordinary supremum of f? The supremum of the restriction of f to its set of continuity? The essential supremum of f? Just the normal supremum.
2025-03-21T14:48:31.227952
2020-06-11T08:06:44
362782
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630046", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362782" }
Stack Exchange
Hales' generalization of the stacked bases theorem (seeking a proof) In his paper Analogues of the stacked bases theorem, published in the proceedings of a 1976 conference, A.W. Hales claimed some interesting generalizations of the stacked bases theorem for abelian groups, but he didn't include the proofs because he saw them as "messy" and was hoping to find some kind of canonical form yielding his generalizations in a more elegant way. In private communication, Hales has commented that in the end he didn't pursue the matter further, and that no write-up of the original proofs exists. I have searched the literature, but I haven't been able to find any proof of his statements: two papers refer to it (Representations over PID's with three distinguished submodules, Half-factorial subsets in infinite abelian groups), but only tangentially, or so it seems to me (the first one does not even cite Hales's paper in the text, just adds the reference!). Below I condense the ideas and results of the paper: Let $A$ be an abelian group and $0\rightarrow \ker f\rightarrow F\rightarrow^f A\rightarrow 0$ be a presentation with $F$ free. We write $(F,f)$ for the presentation and $(F,f,X,R)$ for the presentation with fixed basis $X$ of $F$ and fixed system of generators $R\subseteq F$ for $\ker f$ (we say we have completed $(F,f)$ with $X,R$). Given the presentation $(F,f,X,R)$, define the length $l(R)$ as the supremum of $l(r)$ over all $r\in R$, where $l(r)$ is the number of nonzero coefficients of $r$ written in basis $X$. Observe that for any abelian group a presentation can be found such that $l(R)\leq 3$, by taking $F$ with basis $A$ and all relations of the form $x+y-z$ where $z=x+y$. Clearly, an abelian group has $l(R)=0$ for some presentation iff it is free, and has some presentation with $l(R)\leq1$ iff it is a direct sum of cyclic groups, since it is of the form $F/H$ with $F$ generated by $\{x_i\}_{i\in I}$ and $H$ generated by $\{a_ix_i\}_{i\in I}$ with $a_i\in \mathbb{Z}$. A group with some presentation with $l(R)\leq2$ is called simply presented. With this terminology in mind we can restate the stacked bases theorem as follows: Let $A$ have a presentation of the form $(F_0,f_0,X_0,R_0)$ with $l(R_0)\leq1$. Then any presentation $(F,f)$ can be completed with $X,R$ so that $l(R)\leq1$. This version allows for generalization, parametrized by $l(R)$, which we will call assertion SBT$(n)$: If an abelian group $A$ has a presentation $(F_0,f_0,X_0,R_0)$ with $l(R_0)\leq n$ then all presentations $(F,f)$ can be completed with $X,R$ such that $l(R)\leq n$ (so the stacked bases theorem is SBT$(1)$, which is true). By the observations above, the only interesting cases left to study are SBT$(0)$, SBT$(2)$ and SBT$(3)$. The result SBT$(0)$ is false as it stands, since $f_0$ is an isomorphism from $F_0$ to $A$, but this need not be the case for any free module $F$. We can slightly modify it to a true assertion SBT'$(0)$: If $A$ has a presentation with $l(R_0)=0$ then all presentations $(F,f)$ can be completed with $X,R$ such that $R\subseteq X\cup\{0\}$. This is true because $A$, having $l(R_0)=0$, is free, so any short exact sequence presenting $A$ splits, hence $F\cong A\oplus\ker f$ and we get the desired basis for $F$. Assertions SBT$(2)$ and SBT$(3)$ are claimed by Hales to be true, i.e., if $A$ is simply presented then any presentation can be made simply presented, and any presentation of an abelian group can be completed with a basis $X$ and set of relations $R$ such that $l(R)\leq3$ in basis $X$. 
My specific questions are: Can we give a proof of SBT(2) and SBT(3), or some counterexample? Can the results be extended to modules over other rings, e.g. PIDs?
2025-03-21T14:48:31.228188
2020-06-11T09:25:28
362785
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carlo Beenakker", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/143783", "lrnv" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630047", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362785" }
Stack Exchange
Is there a proper name for those 'shifted moments'? Suppose that we have a random variable $X$, $i \in \mathbb N$, and a scalar $t$. Is there a proper name for these integrals, which I for the moment call 'shifted moments'? $$I_{i}^{t} = \mathbb{E}\left(X^i e^{tX}\right)$$ I use the name 'shifted moments' as they are derivatives of the moment generating function, but not at $t = 0$. What would you call them? $I_0^t$ is called the cumulant generating function; wouldn't you then just call $I_i^t$ the $i$-th derivative of this function? (up to a factor $t^i$) I think $I_0^t$ is the moment generating function. Its log is the cumulant generating function. Certainly, cumulant $\mapsto$ moment, my mistake. Yes of course, these are derivatives of the moment generating function. But I wanted to find a better name. If there is none, I'll take it. (Btw, I have the same issue with the cumulant generating function).
2025-03-21T14:48:31.228283
2020-06-11T10:13:55
362787
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Yuval Peres", "https://mathoverflow.net/users/7691" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630048", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362787" }
Stack Exchange
Properties of measures that are not countably additive but have countably additive null ideals This is a very naive question, maybe more of a reference request than anything else. Let $(X, \mathcal X)$ be a measurable space. If $m$ is a real-valued function on $\mathcal X$, we say that $m$ has a countably additive null ideal if $m(\cup_{n=1}^\infty A_n) = 0$ whenever $A_n \in \mathcal X$ and $m(A_n)=0$ for all $n$. Of course if $m$ is a countably additive measure, then $m$ has a countably additive null ideal. If $m$ is a merely finitely additive probability measure (i.e. finitely but not countably additive and such that $m(X)=1$) it may or may not have a countably additive null ideal. In a typical example of a merely finitely additive probability, the null ideal is not countably additive: extend the natural density function to a probability measure $m$ on $(\mathbb N, 2^{\mathbb N})$ by means of a Banach limit and then $m\{n\}=0$ for all $n$ while $m(\mathbb N)=1$. I am wondering what can be said about merely finitely additive probabilities with countably additive null ideals. What's a typical example of such a probability? "How similar" are such probabilities to countably additive probabilities, i.e. what properties of countably additive probabilities do such probabilities preserve? Any other interesting results about merely finitely additive probabilities with countably additive null ideals are welcome. Can you add some motivation? Do you know nonatomic examples of such measures?
By the ultrafilter lemma, for each $i \in \N$, there exists an ultrafilter on $\mathcal{RO}$ containing $U_i$, which defines a finitely-additive measure $\mu_i : \mathcal{RO} \rightarrow [0,1]$ taking only the values $0$ and $1$ and such that $\mu_i(U_i) = 1$. We then define $\mu : \mathcal{RO} \rightarrow [0,1]$ by $\mu(U) = \sum_{i=1}^\infty 2^{-i} \mu_i(U)$. It is easy to verify that this is a finitely-additive probability measure. Also, for any non-empty regular open $U$ there exists some $i \in \N$ such that $U_i \subseteq U$, and therefore $$ \mu(U) \geq \mu(U_i) \geq 2^{-i}\mu_i(U_i) = 2^{-i} > 0. $$ So the only $\mu$-null regular open set is $\emptyset$. The measure $\mu$ is not countably additive because on Polish spaces without isolated points there are no countably-additive Borel probability measures that vanish on meagre sets.
2025-03-21T14:48:31.228535
2020-06-11T11:01:45
362790
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Mr. Palomar", "https://mathoverflow.net/users/151698" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630049", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362790" }
Stack Exchange
Two definitions of power operations --- how do they relate? Below are two different stories about power operations for $\mathbb{E}_\infty$-ring spectra, and I am struggling to see how they relate. In the following we let $R$ be an $\mathbb{E}_\infty$-ring spectrum. $R$ admits natural maps $R^{\wedge n}_{h\Sigma_n} \to R$. If $X$ is now a space, we may apply $(\,\cdot\,)^{\wedge n}_{h\Sigma_n}$ to a $\Sigma^\infty_+ X \to R$ representing an element of $R^0(X)$, which we then compose with the multiplications $R^{\wedge n}_{h\Sigma_n} \to R$ to obtain a map $$\mathbb{P}_n \colon R^0(X) \to R^0(X^{\times n}_{h\Sigma_n})$$ called the $n$-th total power operation of $R$ --- a multiplicative but non-additive map. There is a category $\mathsf{CAlg}(R)$ of $R$-algebras, and it admits a forgetful functor $U \colon \mathsf{CAlg}(R) \to \mathsf{Sp}$. One then defines a spectrum of power operations on $R$ to be the endomorphism spectrum $\operatorname{Map}(U,U)$. Question. Are these two approaches in any way related? At first glance it appears not so, but the forgetful functor $U$ admits a left adjoint $F$ sending a spectrum $Y$ to $R \wedge \bigoplus_n Y^{\wedge n}_{h\Sigma_n}$ --- a formula vaguely similar to what we see in the first definition. I guess if you apply the second definition to the $R$-algebra $\operatorname{Map}(\Sigma^\infty_+ X,R)$ and play around with the adjunction you can make the comparison precise, but I'm struggling with the details. The first observation is that $U$ is representable in $CAlg(R)$ by $F(S)$ (with $F$ as in your question): $$Map_{CAlg(R)}(F(S),A) \simeq Map_{SMod}(S,A) \simeq U(A).$$ By some appropriately flowery version of Yoneda's Lemma, it follows that $$ Hom(U,U) \simeq Map_{CAlg(R)}(F(S),F(S)) \simeq Map_{SMod}(S,F(S)) \simeq R \wedge \bigvee_n B\Sigma_{n+}.$$ Applying $\pi_0$ to this, is easy to see that the operation corresponding to $1 \in R_0(B\Sigma_n)$ will be precisely the classic $n$th power operation, when applied to the $R$--algebra $A = Map(\Sigma^{\infty}_+X,R)$. Added later by request ... Suppose given $f \in \pi_0(F(R))$. Tracing through my equivalences, the associated operation $\theta_f: \pi_0(A) \rightarrow \pi_0(A)$, for $A$ a commutative $R$--algebra, is as follows. Firstly $f$ can regarded as an $R$-module map $f:R \rightarrow F(R)$. Similarly, $x \in \pi_0(A)$ can be regarded as an $R$-module map $x:R \rightarrow A$. Then $\theta_f(x)$ is the composite $$ R \rightarrow F(R) \rightarrow F(A) \rightarrow A,$$ where the first map is $f$, the next is $F(x)$ and the the last is the structure map for the $R$--algebra: the wedge over $n$ of the maps $A^{\wedge n}_{h \Sigma_n} \rightarrow A$. For the $n$th power operation, recall that $\displaystyle F(R) = R \wedge \bigvee_m B\Sigma_{m+}$, and let $f_n$ be the composite $$S^0 \rightarrow B\Sigma_{n+} \rightarrow R \wedge B\Sigma_{n+}\hookrightarrow F(R).$$ Then $\theta_{f_n}$ will be the $n$th power operation. You can now specialize to $A = Map(X,R)$ if you want. Thanks. I'm struggling to follow the last sentence. Could you expand on that? Dear Nicholas, this is just a reminder that I've put up a bounty a few days ago in the hopes that someone could expand on the last sentence. As of yet I'm not at all following what is going on. I do not know what '$1$' means nor how any element of $\pi_0\Big(R \wedge \bigvee_n (B\Sigma_n)+\Big)$ could ever give rise to a map of the form $R^0(X) \to R^0(X^{\times n}{h\Sigma_n})$. I'd be grateful if you could help me out.
2025-03-21T14:48:31.228773
2020-06-11T12:21:05
362797
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630050", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362797" }
Stack Exchange
A diameter 2 arc-transitive graph whose complement is not arc-transitive? A graph $G=(V,E)$ is arc-transitive if its symmetry group acts transitively on ordered pairs of adjacent vertices. In general, the complement of an arc-transitive graph is not arc-transitive. But I have a hard time finding an example of such a graph if I assume $\mathrm{diam}(G)=\max_{v,w\in V} \mathrm{dist}(v,w)=2$. All my examples of diameter 2 arc-transitive graphs have arc-transitive complements: e.g. the 4-cycle and 5-cycle, the Petersen-graph, the Hoffman-Singleton graph, the Payley graphs, ... I suppose an equivalent question would be: find a diameter 2 arc-transitive graph that is not distance-transitive. Complements don't even have to be edge transitive. Perhaps the simplest example is the wreath graph $W_5$ (which is obtained by applying the construction below to $C_5$). Call two vertices "twins", if they have the same neighbourhood in $G$. Since the twin-relation is preserved under automorphisms, it suffices to construct an arc-transitive graph $G$ of diameter $2$ which has non-adjacent twins as well as non-adjacent non-twins. Now let $G$ be an arc-transitive graph $G$ of diameter $2$ and assume that there are non-adjacent non-twins $v_1$ and $v_2$ in $G$. Take the lexicographic product of $G$ with an empty graph on $2$ vertices. In other words, we add a twin $v'$ to every vertex $v$ of $G$, where $v'$ is connected to all neighbours of $v$ and their respective twins. The resulting graph is still arc transitive, has diameter $2$, and contains non-adjacent twins $v$ and $v'$ as well as non-adjacent non-twins $v_1$ and $v_2$.
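To make the construction in the answer concrete, here is a small sketch (my own, not part of the thread) that builds the wreath graph $W_5$ as the lexicographic product of $C_5$ with the empty graph on two vertices and checks the elementary facts used above (diameter 2, existence of non-adjacent twins and non-adjacent non-twins). It assumes the networkx library; verifying arc-transitivity of the graph and non-edge-transitivity of its complement would additionally need automorphism-group tools (e.g. Sage or nauty), which this sketch does not attempt.

import networkx as nx

C5 = nx.cycle_graph(5)
E2 = nx.empty_graph(2)
W5 = nx.lexicographic_product(C5, E2)   # vertices are pairs (v, i) with i in {0, 1}

print(W5.number_of_nodes(), W5.number_of_edges())   # 10 vertices, 20 edges
print(nx.diameter(W5))                               # 2, as required
# (0,0) and (0,1) are non-adjacent twins; (0,0) and (2,0) are non-adjacent non-twins.
print(W5.has_edge((0, 0), (0, 1)), set(W5[(0, 0)]) == set(W5[(0, 1)]))
print(W5.has_edge((0, 0), (2, 0)), set(W5[(0, 0)]) == set(W5[(2, 0)]))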
2025-03-21T14:48:31.228900
2020-06-11T12:27:21
362799
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Rene", "YCor", "https://mathoverflow.net/users/126475", "https://mathoverflow.net/users/14094" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630051", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362799" }
Stack Exchange
Non-commutative version of the order dimension of a poset I view the order dimension of a poset $P$ as an inherently commutative notion. On the one hand, it can be defined via realizers, which I find fairly intuitive from an order-theoretic viewpoint. On the other hand, the order dimension is known to be equivalent to the minimal $k$ such that $P$ has an order-embedding into $\mathbb{N}^k$. Taking the latter statement as the definition, we can generalize it to a non-commutative version: For a poset $P$, define the non-commutative version of the order-dimension $\operatorname{ncdim}(P)$ to be the minimal $k$ such that $P$ has an order-embedding into a (possibly non-commutative) monoid of $k$ generators. Hereby, the partial order of the monoid is induced by left- or right-factorization, analogously to what we think of in $\mathbb{N}^k$. To make it more interesting, one could also demand the resulting monoid to have some additional properties, such as cancellativity. Maybe it makes sense to restrict to finite posets, or, later, to investigate if the restriction to lattices simplifies the theory. Anyway, one can ask some natural questions: Is there a general bound for $\operatorname{ncdim}(P)$, depending on the additional properties of the monoid? Is it possible to give a concrete order-embedding into a monoid of $\operatorname{ncdim}(P)$ generators that could be understood as "most simple" or "canonical" in some sense? Is there hope to determine $\operatorname{ncdim}(P)$, probably with some polynomial algorithm? (and more of course.) Does anyone know about any research in this direction? Any references or thoughts are highly appreciated! Every countable monoid embeds into a monoid on 2 generators. So I'd guess the only countable values taken by this number are $0,1,2$. For the poset structure, you maybe mean left-factorization only (of course right-factorization leads to the same definition, but I'm not sure whether you're considering a mixture of the two). Thanks for the answer. So we would use the order dimension for that construction. Also in that case, question 2 may still be interesting. But I am not sure if that statement still holds true if we restrict to cancellative monoids, however. I guess the embedding result still holds for cancellative monoids (to be confirmed). Oh, but thinking twice, a monoid embedding need not be a poset embedding (it's typically false for an embedding into a group). So one could ask whether every countable monoid embeds into a 2-generated one in a way that preserves, say, the left-factorization partial ordering ("preserves" meaning $i(x)\le i(y)$ iff $x\le y$). Exactly. That's what I am looking for. Just wondering if I have to start from scratch or if there is already some research out there.
2025-03-21T14:48:31.229104
2020-06-11T13:35:08
362803
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Alexander", "Emil Jeřábek", "Geoff Robinson", "LSpice", "https://mathoverflow.net/users/12705", "https://mathoverflow.net/users/14450", "https://mathoverflow.net/users/159444", "https://mathoverflow.net/users/22377", "https://mathoverflow.net/users/2383", "verret" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630052", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362803" }
Stack Exchange
Subgroups of multiplicative groups of the finite field with Mersenne prime order I have a question about properties of the multiplicative groups. Let we have finite field of prime order $2^k$ -1. $k$ is co-prime with $$\frac{2^k-2}{k}$$. It is clear that multiplicative group of such field has subgroup of order. $$\frac{2^k-2}{k}$$. How it is possible to find formula for generator $g$ of this subgroup (this subgroup is always cyclic) ? For example for $k=5$ $g=6$ For example for $k=7$ $g=24$ The equality $k = \frac{2^k - 2}k$ is obviously false. (For example, as you say, for $k = 5$ it does not work.) What do you mean? Fixed my mistake , thanks Note that $18^{18} \equiv 2 \pmod{2^7 - 1}$, so I think your conjecture fails for the case $k = 7$, $g = 18$ that you specify. 1 18 70 117 74 62 100 22 15 16 34 104 94 41 103 76 98 113 I just wrote simple computer script Yes, your computations are correct, and they show that the order of $18$ modulo $127$ is not less than $18$; but, as you can tell by going one step further, it's not $18$, either. In fact, the order of $18$ modulo $127$ is $63$. Correct my bad. Anyway the question is the same. How can the question be the same? You've made a conjecture, and shown that it is false. 24 68 108 52 105 107 28 37 126 103 59 19 75 22 20 99 90 1 The question is the same: how to find generator for this subgroup ? OK, I see that you have changed the question to remove the incorrect conjecture; now it makes sense. The subgroup is cyclic—all finite subgroups of multiplicative groups of fields are—so there is no need to gather evidence for the existence of $g$. My suspicion is that this is probably not much easier than finding a primitive generator for a random $\mathbb Z/p\mathbb Z$, which is hard, but I have no evidence for that. It may be interesting even to see if you can come up with an $\mathrm o(2^k/k)$ algorithm for testing membership in this subgroup (but I'm no expert; it may be known). For example generator for subgroup of order $k$ is clear - it is always 2. May be there is some interesting property for described subgroup ? You might as well ask for a generator $x$ for the whole multiplicative group of the field, and then take $x^{k}$. The whole multiplicative group is actually $C_k x C_{\frac{2^k-2}{k}}$ And I thought that may be there are some interesting properties @LSpice Testing membership is easy, as the group just consists of all $x$ such that $x^{(2^k-2)/k}=1$. Is it possible that $g = 2^{t}*3$ ? @EmilJeřábek, right, but how fast can you perform that exponentiation? (Faster than $\mathrm O(2^k/k)$, obviously, now that I think about it; but, yes, testing that property quickly is what I had in mind.) @LSpice Quite fast (in time $\tilde O(k^2)$ or so). This is just modular exponentiation. @AVT The multiplicative group is not $C_k\times C_{(2^k-2)/k}$ if $k$ is a Wieferich prime; are you saying that this can’t happen? ($2^k-1$ itself cannot be Wieferich, but this is something else.) @EmilJeřábek, right, I know that modular exponentiation is fast. That's why I was suggesting that a less difficult task than finding a generator—I assumed the question was how to find it quickly, since otherwise you can do as @‍GeoffRobinson suggests—might be to seek a faster test for membership than just exponentiation. @Emil Jeřábek Sorry don't understand your point Would be appreciative for such example of $k$ I can’t give you an example, as only two Wieferich primes are known. 
But anyway, the multiplicative group is $C_{2^k-2}$, which is isomorphic to $C_k\times C_{(2^k-2)/k}$ only if $k$ is coprime to $(2^k-2)/k$, and there is no a priori reason this should be the case. Modified question, added condition. What is order of $3$ in such multiplicative groups ? @AVT Can you clarify what you mean by "how to find"? Do you mean a fast algorithm? etc. no, I mean some formula hypothesis: this is $3*2^t$ for some $t$ The multiplicative group can be represented as table Rows: cosets for $C_k$ Columns: cosets for $C_{\frac{2^{k}-2}{k}}$ I know the generator for rows, but don't know the generator for columns if I find it - any element would be represented as $g^{r}*2^{s}$ for some $r,s$
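Since the question is computational, here is a short hedged Python sketch (my own, using sympy, and not part of the thread) that checks the orders of the examples given above and illustrates Geoff Robinson's suggestion: if $x$ generates the whole multiplicative group of $\mathbb{F}_p$ with $p=2^k-1$, then $x^k$ generates the subgroup of order $(2^k-2)/k$. Note that the generator it returns need not be the smallest one.

from sympy.ntheory.residue_ntheory import n_order, primitive_root

def subgroup_generator(k):
    p = 2**k - 1                 # assumed to be a Mersenne prime, as in the question
    m = (p - 1) // k             # order of the target subgroup, (2^k - 2)/k
    x = primitive_root(p)        # generator of the full multiplicative group
    g = pow(x, k, p)             # x^k has order (p - 1)/k = m
    assert n_order(g, p) == m
    return g

# Check the examples from the question: 6 has order 6 mod 31, 24 has order 18 mod 127.
for k, g_claimed in [(5, 6), (7, 24)]:
    p = 2**k - 1
    print(k, n_order(g_claimed, p) == (p - 1) // k, subgroup_generator(k))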
2025-03-21T14:48:31.229398
2020-06-11T14:13:51
362806
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Aaron Bergman", "BinAcker", "Donu Arapura", "Leo Alonso", "Tim", "Will Sawin", "https://mathoverflow.net/users/102114", "https://mathoverflow.net/users/18060", "https://mathoverflow.net/users/4144", "https://mathoverflow.net/users/6348", "https://mathoverflow.net/users/73622", "https://mathoverflow.net/users/947" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630053", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362806" }
Stack Exchange
Recipe for resolving a coherent sheaf Let $X$ be a complex manifold and let $V\subset X$ be a subvariety. Let $F\rightarrow V$ be a holomorphic vector bundle over $V$ and let $\mathcal{S}=\Gamma(F)$ be the sheaf of holomorphic sections of $F$. As $\mathcal{S}$ is coherent over $X$ it admits a resolution by holomorphic vector bundles $E^i$ over $X$. Is there a canonical recipe for building these vector bundles $E^i$? What do you mean by "canonical"? @WillSawin I actually mean just a recipe with the fewest choices. Don’t you need an extra condition on $X$ in there like ‘algebraic’? If $X$ is projective, one can use coproducts of twisted invertible sheaves $\mathcal{O}(n)$, but probably you already know this. Let me summarize the above comments, with some additional remarks: 1) If $X$ is just a complex manifold, then a resolution by bundles on $X$ need not exist. 2) If $X$ is projective with a fixed very ample bundle, and you know the Castelnuovo-Mumford regularity of $V$, then you can give a canonical resolution. But I don't see how you would do it without additional information. @DonuArapura When $X$ is projective with a fixed very ample bundle, do you have a reference where that "canonical" resolution is described? If I understand the comment correctly, then you can read about the global syzygy associated to a coherent sheaf in e.g. Chapter 5, Section 4 ("Global Duality") of Griffiths and Harris' Principles of Algebraic Geometry. Either way, I believe the results about resolutions given the existence of a very ample bundle are found somewhere in this book.
2025-03-21T14:48:31.229540
2020-06-11T14:31:27
362808
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630054", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362808" }
Stack Exchange
Optimal preprocessing in the Kuhn-Munkres algorithm The matrix formulation of the Kuhn-Munkres algorithm for solving the Linear Assignment Problem requires a preprocessing step in which the minimal value of a line is subtracted from every value in that line; here the set $L$ of lines is the union $R\cup C$ of the sets of rows and columns of the cost matrix $\boldsymbol{M}\in\mathbb{R}_{+}^{m\times n},\ m\le n$. In the case of square cost matrices the row minima are subtracted first and then the minima of the columns that don't contain a zero; this is described as steps 1 and 2 in the Wikipedia article. In the case of non-square cost matrices only step 1 is performed, as explained here. In both cases an undesirable outcome of the preprocessing is that the generated zeros may fail to be unique in a line of the cost matrix; if that is the case, further steps with higher complexity are required to resolve conflicts or ambiguities. Questions: what is the optimal preprocessing strategy for the matrix formulation of the LAP, in the sense of maximizing the number of lines with a unique zero? Can the quality of the standard preprocessing be improved by adding random vertex weights to the entries of the cost matrix prior to preprocessing, i.e. replacing $\omega_{ij}$ with $\omega_{ij}+\xi(i)+\eta(j)$, where $\xi()$ and $\eta()$ generate random positive values? My guess is that the optimal strategy for generating the zeros is to select the line without zeros that has the maximal difference between its smallest and second-smallest value, because that apparently minimizes the "danger" of creating multiple zeros in a line.
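For concreteness, here is a small numpy sketch (my own, not part of the question) of the standard step-1/step-2 reduction described above, together with a count of the lines that end up with a unique zero, which is the quantity the question asks to maximize. It could be used to compare the standard preprocessing empirically against variants such as the randomized perturbation or the largest-gap heuristic suggested above.

import numpy as np

def standard_reduction(M):
    M = M.astype(float).copy()
    M -= M.min(axis=1, keepdims=True)                  # step 1: subtract row minima
    zero_free = ~(M == 0).any(axis=0)                  # columns still without a zero
    M[:, zero_free] -= M[:, zero_free].min(axis=0)     # step 2: subtract their column minima
    return M

def lines_with_unique_zero(M):
    zeros = (M == 0)
    return int((zeros.sum(axis=1) == 1).sum() + (zeros.sum(axis=0) == 1).sum())

rng = np.random.default_rng(0)
M = rng.integers(0, 20, size=(6, 6))
R = standard_reduction(M)
print(lines_with_unique_zero(R), "of", sum(M.shape), "lines have a unique zero")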
2025-03-21T14:48:31.229682
2020-06-11T15:47:58
362815
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Monty", "https://mathoverflow.net/users/15629", "https://mathoverflow.net/users/29422", "paul garrett" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630055", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362815" }
Stack Exchange
Question on the residual representation Let $G=SO_n$ and fix a borel subgroup $P_0$ of $G$. Let $P=MN$ be a standard maximal parabolic subgroup $G$ and $\sigma$ a cuspidal representation of $M$ Consider the normalized parabolic induced representation $\text{Ind}_P^G(\sigma|\cdot|^z)$ and for sufficiently large $z$, we can define Eisenstein series $E(z,\phi)$ for $\phi \in \text{Ind}_P^G(\sigma)$. Since $E(z,\phi)$ has a meromorphic continuation, let $z_0$ be a simple pole of $E(z,\phi)$. Put $\mathcal{E}(\phi,z_0)$ a residue of $E(z,\phi)$ at $z=z_0$. I am wondering if there are two $\phi_1,\phi_2 \in \text{Ind}_P^G(\sigma)$ such that $\mathcal{E}(\phi_1,z_0)=\mathcal{E}(\phi_2,z_0)$, then $E(\phi_1,z)=E(\phi_2,z)$ as meromorphic functions on $\mathbb{C}$. Is it right? In the positive cone/half-plane this is essentially never the case, because in that region the map "take residue at $z_o$" is an intertwining map with non-trivial kernel (and image is the smaller quotient repn generated by the residue) from the principal series generated by the Eisenstein series, to the repn generated by the residue. So when two Eisenstein series differ by something in the kernel, they'll have the same residue. This is already visible with $PGL(2)$, where Eisenstein series have at most a pole at $s=1$ (in classical normalization) in the right half-plane, and the residues are constants. At any higher level, even constructed in a classical way, there are several different data (with fixed right $K$-type, etc.), and several linearly independent Eisenstein series, but just one residue possible. In contrast, in the left half-plane and/or in other images of the positive cone, due to zeros of the denominator of the constant term(s), there are many poles of Eisenstein occurring at points where the principal series is irreducible. At those points, the "take residue" map is a $G$-isomorphism. In that case, your desired property holds. @ Paul, Thank you very much for your excellent answer! Then can we also say nothing about the constant terms of Eisenstein series $E_P(\phi_1,s)$ and $E_P(\phi_2,s)$? Even in the positive cone/half-plane case, I still believe that $E(\phi_1,s)$ and $E(\phi_2,s)$ should be related in some concrete way. Well, correct, in the right half-plane, at points where the constant terms have poles, the residues are in a proper quotient of the principal series. Thank you very much! Your comments always help me a lot!
2025-03-21T14:48:31.230236
2020-06-11T17:23:15
362820
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carlo Beenakker", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/118412", "user0735" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630056", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362820" }
Stack Exchange
Eigenvalue inequality Let $g(\boldsymbol{\theta},\boldsymbol{\theta_0}) = trace [ \boldsymbol{\Omega{(\boldsymbol{\theta})}}^{-1} \boldsymbol{\Omega{(\boldsymbol{\theta_0})}}]-ln[det(\boldsymbol{\Omega{(\boldsymbol{\theta})}}^{-1} \boldsymbol{\Omega{(\boldsymbol{\theta_0}))}}]-N $ where $\boldsymbol{\theta} \in \boldsymbol{\Theta}$ with $\boldsymbol{\Theta}$ a compact subset of $R^{n}$, $n$ and $N$ are fixed numbers, and $\boldsymbol{\theta_0}$ belongs to the interior of $\boldsymbol{\Theta}$. Denote the eigenvalues of the symmetric matrix $\boldsymbol{\Omega{(\boldsymbol{\theta})}}^{-1} \boldsymbol{\Omega{(\boldsymbol{\theta_0})}}$ by $\lambda_s$ $(s=1,2,...,N)$ where $\lambda_s>0$ for all $s$. We then have $g(\boldsymbol{\theta},\boldsymbol{\theta_0}) = \sum_{s=1}^N [\lambda_{s}-ln(\lambda_{s})-1] \geq 0,$ with $g(\boldsymbol{\theta},\boldsymbol{\theta_0}) =0$ if and only if $\lambda_{s}=1$ for every $s$. Am I correct in thinking that $g(\boldsymbol{\theta},\boldsymbol{\theta_0}) =0$ if and only if $\lambda_{s}=1$ for every $s$ does not necessarily imply that this equality is true only for $\boldsymbol{\theta}=\boldsymbol{\theta_0}$ and that it can happen also for $\boldsymbol{\theta}\ne\boldsymbol{\theta_0}$? indeed: $g(\theta,\theta_0)=0$ implies $\Omega(\theta)=\Omega(\theta_0)$; there could be $\theta\neq\theta_0$ where this holds. Thank you Carlo for the confirmation and for putting it this way.
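For readers who want to see the inequality numerically, here is a tiny numpy illustration (my own, not part of the post) of the scalar fact $\lambda-\ln\lambda-1\ge 0$ behind the display: $g$ is nonnegative and vanishes exactly when the two matrices coincide, which, as confirmed in the comment, can happen for $\boldsymbol{\theta}\ne\boldsymbol{\theta_0}$ whenever $\boldsymbol{\theta}\mapsto\boldsymbol{\Omega}(\boldsymbol{\theta})$ is not injective.

import numpy as np

def g(Omega, Omega0):
    A = np.linalg.solve(Omega, Omega0)              # Omega(theta)^{-1} Omega(theta0)
    return np.trace(A) - np.log(np.linalg.det(A)) - Omega.shape[0]

rng = np.random.default_rng(1)
N = 4
B = rng.standard_normal((N, N)); Omega0 = B @ B.T + N * np.eye(N)   # two SPD matrices
C = rng.standard_normal((N, N)); Omega1 = C @ C.T + N * np.eye(N)

print(g(Omega1, Omega0))   # strictly positive, since Omega1 differs from Omega0
print(g(Omega0, Omega0))   # ~0 up to rounding: all eigenvalues of the product equal 1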
2025-03-21T14:48:31.230337
2020-06-11T17:53:46
362824
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630057", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362824" }
Stack Exchange
Outer-regular product of $\tau$-additive measures Due to the deficiencies of the simple product measure defined on measurable rectangles, there have been many different constructions of product measures in more specialized circumstances. Originally, it seems that the product of Radon (or regular Borel) measures was defined by defining the product integral, and then constructing a measure from the integral. After the realization of the importance of continuity / smoothness properties of measures beyond $\sigma$-additivity, there were constructions of product measures of $\tau$-additive measures, e.g. as appears in volume 4 of Fremlin. As far as I can tell, these product measures are always constructed in an inner-regular manner, i.e. taking two $\tau$-additive measures inner-regular with respect to a family of sets and producing a new $\tau$-additive measure on the product that is inner-regular on a product family. Is there a naturally outer-regular version of this construction? The construction of an outer-regular Borel measure from a product integral would be better characterized as constructing a product measure on the $\sigma$-ring generated by compact $G_\delta$s, and then extending this measure to an outer-regular Borel measure.
2025-03-21T14:48:31.230445
2020-06-11T18:12:42
362826
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Deane Yang", "Willie Wong", "Zamanyan", "https://mathoverflow.net/users/157028", "https://mathoverflow.net/users/3948", "https://mathoverflow.net/users/613" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630058", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362826" }
Stack Exchange
A basic question about elliptic pde I have (hopefully) a rather basic question about smooth elliptic partial differential equations. Let $L$ be a linear elliptic differential operator with polynomial coefficients in $\mathbb{R}^n, n>1.$ Let $u\in L^{\infty}(\mathbb{R}^n)$ be such that it has compact support and $L(u)$ is supported on a finite set (as a distribution). Then is $u$ necessarily 0? I (naively) hope that the answer is yes, and perhaps could be proved by using some known asymptotics of the fundamental solution of $L$ near the support of $u.$ Any suggestions or references will be greatly appreciated. Could you clarify? If $u$ has compact support, then isn’t $Lu$ always a compactly supported distribution? Right, but question is if $Lu$ can have a support that is a finite set, which is much stronger that having a compact support. you probably want $n > 1$, else the tent function $\psi$ has $\psi''$ supported at exactly three points. // A minor nitpick about your phrasing: since your conclusion is $u \equiv 0$ a fortieri you cannot have $L(u)$ with non-empty support. Maybe better to say that $\mathrm{supp} L(u)$ is contained in a finite set. Anyway: to your question, you maybe able to get what you want using unique continuation. Thanks, I do have that n>1. The answer is Yes and is a consequence of Holmgren's uniqueness Theorem. See for instance Theorem 1.1.4 in this text.. The ellipticity serves here to ensure that there does not exist a characteristic hypersurface. Applying HUT, you obtain that $u\equiv0$ over the connected component of the complement of the support of $Lu$. In your case, this means $u\equiv0$ away from a finite set, but since $u\in L^\infty$, this is $u=0$. Thank you very much!
2025-03-21T14:48:31.230590
2020-06-11T18:19:22
362827
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Anonymous", "R. van Dobben de Bruyn", "Vitay", "https://mathoverflow.net/users/14044", "https://mathoverflow.net/users/159456", "https://mathoverflow.net/users/82179" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630059", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362827" }
Stack Exchange
Frobenius action on de Rham cohomology Let $X$ be a smooth projective $k$-scheme, where $k=\mathbb{F}_p$ and $p$ is prime. We have an identification of the de Rham cohomology of $X$ with $H^*_{crys}(X/k)$: $H^*_{DR}(X/k)\cong H^*_{crys}(X/k)$ (note that I am not considering crystalline cohomology $H^*_{crys}(X/W)=\lim H^*_{crys}(X/W_n)$, where $W=\mathbb{Z}_p$ is the Witt ring of $k$, and $W_n=W/p^n$). My understanding is that $H^*_{crys}(X/k)$ is the cohomology of a sheaf of $k$-algebras $\mathcal{O}$ on a site. Thus, we may consider the Frobenius endomorphism of $\mathcal{O}$: $F:\mathcal{O}\to \mathcal{O}$, $t\mapsto t^p$ for every section $t$. Clearly, $F$ induces a linear (because $k=\mathbb{F}_p$) endomorphism of any injective resolution of $\mathcal{O}$, and so we get an induced endomorphism $F^*$ of $H^*_{DR}(X/k)$. Questions: is $F^*$ always the identity? If not, how to compute $F^*$? For example, what is $F^*$ if $X$ is a projective space (other examples would be welcome too)? I tried to find this information in standard references, but they always consider the case of $X/\mathbb{Z}_p$, and they consider $H^*_{crys}(X/W)[1/p]$. On $\mathbf P^n$, the map is just $[x_0:\ldots:x_n] \mapsto [x_0^p:\ldots:x_n^p]$. In arbitrary characteristic, the map $[x_0:\ldots:x_n] \mapsto [x_0^d:\ldots:x_n^d]$ acts on $H^i(\mathbf P^n,\Omega^i)$ as multiplication by $d^i$, so in this case it is zero for $i > 0$. Another example: if $E$ is an elliptic curve, then the Frobenius action on $H^1(E,\mathcal O_E)$ is $0$ if and only if $E$ is supersingular. More generally, it is related to de Rham–Witt cohomology, which rationally gives the slope filtration on crystalline cohomology. So the answer depends on subtle arithmetic information. Thank you! Do you have a reference which explains the relation to de Rham–Witt cohomology? Expanding on what was already said: the Frobenius map $H^*_{dR}(X) \to H^*_{dR}(X)$ factors as $H^*_{dR}(X) \to H^*(X,O_X) \to H^*(X,O_X) \to H^*_{dR}(X)$, where the first and last map are edge maps for the Hodge/conjugate spectral sequences, and the middle map is the Frobenius on structure sheaf cohomology. So understanding, e.g., the rank of the Frobenius on de Rham cohomology requires knowing the rank of the Frobenius on $O_X$-cohomology as well as the behaviour of certain differentials in the Hodge/conjugate spectral sequences.
2025-03-21T14:48:31.230762
2020-06-11T18:25:39
362829
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gerhard Paseman", "Gerry Myerson", "Random", "Wojowu", "efs", "https://mathoverflow.net/users/109085", "https://mathoverflow.net/users/158000", "https://mathoverflow.net/users/30186", "https://mathoverflow.net/users/3402", "https://mathoverflow.net/users/88679" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630060", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362829" }
Stack Exchange
A paper by W. Ljunggren I am looking for the following paper by Ljunggren, Wilhelm: "Zur Theorie der Gleichung $x^2 + 1 = Dy^4$", Avh. Norske Vid. Akad. Oslo. I., 1942 (5): 27 The main result of this paper which I am interested in is that the solutions in positive integers of the equation $x^2 + 1 = 2y^4$ are $(1,1), (239,13)$, so a reference summarizing Ljunggren's proof would be welcome as well. This comment says "The original proof uses a very delicate variant of Skolem's p-adic method". Also user whose name resembles Jose Stgo. has a post on MathOverflow which summarizes a related paper and may have a link to your paper or near to your paper of interest. So search MathOverflow for It. Gerhard "You're In The Right Place". Paseman, 2020.06.11. Thanks, hopefully I'll find it. An idea that has worked for me is to ask for a copy to someone who has recently cited the article in question. Many people (me included) keep scanned copies of articles which are impossible to find online. Thanks, I'll try that! Two papers that simplify Ljunggren's proof: An Elementary Proof for Ljunggren Equation (2017) Simplifying the Solution of Ljunggren's Equation (1991) I believe that the first paper has an error, specifically the claim at the bottom of page 2 that if (c,b,13k) is a Pythagorean triple then b/c is either 5/12 or 12/5. In any case, I am aware of other proofs but I am interested specifically in the orginal proof. I should probably have emphasized that in the original question. $(16,63,65)$, $b/c=16/63$.
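As a quick sanity check on the statement under discussion (and emphatically not a substitute for Ljunggren's argument, which is what the question asks about), a brute-force Python search over $y$ finds only the two known solutions in a modest range:

from math import isqrt

solutions = []
for y in range(1, 10**5 + 1):
    t = 2 * y**4 - 1
    x = isqrt(t)
    if x * x == t:
        solutions.append((x, y))

print(solutions)   # expected: [(1, 1), (239, 13)]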
2025-03-21T14:48:31.230917
2020-06-11T20:25:59
362833
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Robert Bryant", "Robert Israel", "Roy Goodman", "https://mathoverflow.net/users/13650", "https://mathoverflow.net/users/13972", "https://mathoverflow.net/users/159458" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630061", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362833" }
Stack Exchange
Exact solution to a periodic linear ODE sought We have been studying a Hamiltonian system that possesses a one-parameter family of periodic orbits, depending on the energy level $h$. We "know" via various non-rigorous means that these periodic orbits are stable for $h<\frac{1}{8}$ and unstable for $h>\frac{1}{8}$ but haven't been able to prove it. We know that if the linearization possesses a periodic orbit at the critical value $h=\frac{1}{8}$, then this value of $h$ lies on the boundary between stability and instability. We have numerically calculated this periodic orbit using ultra-high-precision arithmetic and a 30th order ODE solver. We have also used Hill's method of harmonic balance to high order in computer algebra to show that $h_{\rm critical}$ agrees with $1/8$ to double precision, using no ODE solves. If we could simply show that the following ODE has a $2\pi$-periodic orbit, we would have a proof. Does anybody know how find such a solution? Corrected since original posting $$\frac{d}{dt} \vec{x} = A(t) \vec{x},$$ where $$ A(t) = \left( \begin{array}{cc} -\frac{4 \sin (2 t)}{\sqrt{8 \cos (2 t)+17}} & \frac{8 \cos ^2(2 t)-12 \cos (2 t)+3 \sqrt{8 \cos (2 t)+17}-11}{2 (1-\cos (2 t)) \sqrt{8 \cos (2 t)+17}} \\ \frac{-8 \cos ^2(2 t)-4 \cos (2 t)-\sqrt{8 \cos (2 t)+17}+7}{2 (\cos (2 t)+1) \sqrt{8 \cos (2 t)+17}} & \frac{4 \sin (2 t)}{\sqrt{8 \cos (2 t)+17}} \\ \end{array} \right) $$ This would answer the one big question we left unanswered in this paper also available on my website. Are you sure you have written the system correctly? I tried the system numerically in Maple, and it's nowhere near to $2\pi$-periodic. Periodic with period $2\pi$? With initial condition $(1,0)$, Maple gives me $[0.822717619142367, 0.627509890225599]$. It looks to me like (nearly) a period of $21\pi$. Then there's a problem with the formula that I posted above. My student found that the solution returned to within $10^{-120}$ of it's initial condition at $t=2\pi$. We'll look into it. Okay. Here is the corrected matrix $$A(t) = \begin{pmatrix} -\frac{4 \sin (2 t)}{\sqrt{8 \cos (2 t)+17}} & \frac{8 \cos ^2(2 t)-12 \cos (2 t)+3 \sqrt{8 \cos (2 t)+17}-11}{2 (1-\cos (2 t)) \sqrt{8 \cos (2 t)+17}} \ \frac{-8 \cos ^2(2 t)-4 \cos (2 t)-\sqrt{8 \cos (2 t)+17}+7}{2 (\cos (2 t)+1) \sqrt{8 \cos (2 t)+17}} & \frac{4 \sin (2 t)}{\sqrt{8 \cos (2 t)+17}} \end{pmatrix}$$ OK, that one seems to work. @manroygood: You really should correct the formula in the question, not just put the correction in the comments. It's trivial to do, since you have already have the formula in LaTeX. Unfortunately, only you can copy the formula you have in your comment and use it to correct the question. Rather incredibly, your (corrected) system does have a closed-form solution, which I found with Maple's help. $$ x(t) = 1+4\,\cos \left( 2\,t \right) +3\,\sqrt {8\,\cos \left( 2\,t \right) + 17}$$ $y(t)$ is a rather complicated beast that I'll just write in text rather than LaTeX: when pretty-printed, it still doesn't look pretty. 
-16384/(5-(16*cos(t)^2+9)^(1/2))^(1/4)/(4*cos(2*t)-5+(8*cos(2*t)+17)^(1/2))^(1/ 2)/(8*cos(2*t)+17)^(3/4)/((16*cos(t)^2+9)^(1/2)-3)^(1/4)*(cos(t)+1)*((8*cos(2*t )+17)^(1/2)+5)^(3/4)/((16*cos(t)^2+9)^(1/2)+5)^(3/4)/(16*cos(t)^2+9)^(1/4)/((8* cos(2*t)+17)^(1/2)+3)^(1/4)*(-(8*cos(2*t)+17)^(1/2)+5)^(1/4)*((16*cos(t)^2+9)^( 1/2)+3)^(1/4)*(8*cos(t)^2-9+(16*cos(t)^2+9)^(1/2))^(1/2)*(-1/4*sin(2*t)*((5/8* cos(2*t)^3-1/2*cos(2*t)^2-11/32*cos(2*t)-31/64)*(8*cos(2*t)+17)^(1/2)+(cos(2*t) ^3-2*cos(2*t)^2+5/4*cos(2*t)+7/8)*(cos(2*t)+17/8))*(16*cos(t)^2+9)^(1/2)+cos(t) *((cos(2*t)^3-2*cos(2*t)^2+1/8*cos(2*t)+73/32)*(8*cos(2*t)+17)^(1/2)+cos(2*t)^4 -cos(2*t)^3+15/4*cos(2*t)^2-5/4*cos(2*t)-305/32)*sin(t)*(cos(2*t)+17/8))*((8* cos(2*t)+17)^(1/2)-3)^(1/4)*(cos(t)-1)/(4*cos(2*t)^2*(8*cos(2*t)+17)^(1/2)+16* cos(2*t)^3-44*cos(2*t)^2-13*(8*cos(2*t)+17)^(1/2)+20*cos(2*t)+53)/(32*cos(t)^4- 56*cos(t)^2+9+3*(16*cos(t)^2+9)^(1/2)) The solution is manifestly $2\pi$-periodic, with $y(0) = y(2\pi) = 0$. EDIT: Of course $y(t)$ can be obtained from $x(t)$ and $x'(t)$ by looking at the differential equation for $x'(t)$. $$y \left( t \right) =-2\,{\frac { \left( 4\,\sin \left( 2\,t \right) x \left( t \right) + \left( {\frac {\rm d}{{\rm d}t}}x \left( t \right) \right) \sqrt {8\,\cos \left( 2\,t \right) +17} \right) \left( \cos \left( 2\,t \right) -1 \right) }{8\, \left( \cos \left( 2 \,t \right) \right) ^{2}-12\,\cos \left( 2\,t \right) +3\,\sqrt {8\, \cos \left( 2\,t \right) +17}-11}}$$ Thanks. That does it. Mathematica's DSolve fails to integrate it. Is there any way to get an insight into how Maple fountain answer? Solving for $y(t)$ from the first component of the differential equation, and doing a bunch of trigonometric gymnastics yields in the end $$y(t)=-\frac{ \left(1+ 4 \cos {2t}+\sqrt{8 \cos {2t}+17}\right)\sin {2t}}{1 +\cos {2t}}$$ We would like to submit this as an addendum to a published paper. What is the best way to acknowledge your contribution? Just write something like "We thank Robert Israel for ..."
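As a hedged numerical spot-check (my own, not part of the answer), one can verify with sympy that the closed forms quoted above, namely $x(t)=1+4\cos 2t+3\sqrt{8\cos 2t+17}$ together with the simplified $y(t)$, satisfy the corrected system at sample points away from $t=0,\pi/2,\pi$, where entries of $A(t)$ are singular:

import sympy as sp

t = sp.symbols('t', real=True)
S = sp.sqrt(8*sp.cos(2*t) + 17)
x = 1 + 4*sp.cos(2*t) + 3*S
y = -(1 + 4*sp.cos(2*t) + S) * sp.sin(2*t) / (1 + sp.cos(2*t))

A11 = -4*sp.sin(2*t)/S
A12 = (8*sp.cos(2*t)**2 - 12*sp.cos(2*t) + 3*S - 11) / (2*(1 - sp.cos(2*t))*S)
A21 = (-8*sp.cos(2*t)**2 - 4*sp.cos(2*t) - S + 7) / (2*(1 + sp.cos(2*t))*S)
A22 = 4*sp.sin(2*t)/S

r1 = sp.diff(x, t) - (A11*x + A12*y)
r2 = sp.diff(y, t) - (A21*x + A22*y)
for t0 in [0.3, 1.0, 2.0, 2.5]:
    print(t0, abs(r1.subs(t, t0).evalf()), abs(r2.subs(t, t0).evalf()))
# The residuals should come out at roundoff level, consistent with the 2*pi-periodic closed form.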
2025-03-21T14:48:31.231179
2020-06-11T23:03:43
362839
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carnby ", "R. van Dobben de Bruyn", "abx", "https://mathoverflow.net/users/159466", "https://mathoverflow.net/users/40297", "https://mathoverflow.net/users/82179" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630062", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362839" }
Stack Exchange
About the multiplicity of intersection in projective space I found this statement, and I can't get a proof: Given $X\subset \mathbb{P}^{n}$ an irreducible projective set of dimension $d$ and degree $e$, and given $H_{1},...,H_{d}$ hypersurfaces with degrees $e_{1},...,e_{d}$, respectively, such that $X\cap H_{1}\cap...\cap H_{d}$ is a finite set of points, then the number of points is precisely $ee_{1}...e_{d}$, counted with multiplicity. Nothing is assumed about the regularity of the sequence of the polynomials defining the hypersurfaces. Could anyone give me a hint about how can it be proven? Thanks a lot in advance. See for example Fulton's Intersection Theory, Prop. 8.4. If you want to prove it yourself, I think you should be able to do it using Hilbert polynomials (use additivity in short exact sequences). See Harris Algebraic Geometry (a first course), Theorem 18.4 (perhaps less intimidating than Fulton). Thanks! My first approach was using Hilbert polynomial, but I’m not sure how to get rid of the undesirable embedded components that may appear when treating with the sum of the ideals, just to get nice exact sequences that give you the degree of the intersection (counting multiplicity)
2025-03-21T14:48:31.231318
2020-06-12T03:23:15
362848
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Benji", "Brian Hopkins", "Carlo Beenakker", "Gerald Edgar", "J.J. Green", "LSpice", "Michael Engelhardt", "Nik Weaver", "RobPratt", "Steven Landsburg", "Timothy Chow", "Willie Wong", "https://mathoverflow.net/users/10503", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/134299", "https://mathoverflow.net/users/141766", "https://mathoverflow.net/users/14807", "https://mathoverflow.net/users/23141", "https://mathoverflow.net/users/2383", "https://mathoverflow.net/users/3106", "https://mathoverflow.net/users/3948", "https://mathoverflow.net/users/454", "https://mathoverflow.net/users/5734", "https://mathoverflow.net/users/57697", "https://mathoverflow.net/users/75761", "wlad" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630063", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362848" }
Stack Exchange
The name for an assumption made for the sake of contradiction What is the name (or adjective) for an assumption made for the sake of contradiction? To be clear, I'm in search of an expression in the form "a(n) $\underline{\quad \quad \quad \quad}$ assumption". Edit: Why are there votes to close this question? This question at least fits the tag "mathematical-writing" and is pertinent to the writing of mathematical research papers, no doubt. I once heard Robert Solovay refer to an assumption of this type as a "paranoid fantasy" ... but I don"t think that's an official term. I think it is usually better to state the claim you intend to prove and then begin the proof with "Assume the contrary", so that the "contrary" does not need to be explicitly stated --- or referred to by whatever adjective you're seeking. @Steve I agree. Still, occasionally the putative adjective lends to the flow and structure of the sentence. a strawman assumption @Carlo Not quite what I'm looking for. Aside: as an undergraduate I discovered Scott's Group Theory with most of its terse proofs by contradiction beginning "Deny. Then ...", which I took as a model of a perfect proof in subsequent homework assignments. I was rapidly disabused of this, fortunately. Marked: "Let us assume that blah. This assumption is marked for refutation. The marked assumption implies..." Perhaps antithetical? You could call it “not” as in “Suppose not.” counterfactual ... I like this question: I imagine you are asking about proof writing where you want to refer to the "contrary" assumption in the course of the proof to highlight where it is used (similar to how we would refer to the "inductive hypothesis" in a proof by induction). I think "counterfactual" that @MichaelEngelhardt suggested is good, alternatively maybe "specious"? @ogogmad I like "the marked assumption" and found it used a few times in logic papers. "Counterfactual" may have the slight disadvantage that it connotes conditionals other than the material conditional; e.g., the conditionals that arise in possible world semantics. @WillieWong, why not just refer to "the contrary assumption", as you have done? That seems clearer to me. (The counterfactual assumption cannot authoritatively be literally said to be counterfactual until we're done the proof, so it had probably better not be called such during the proof.) @LSpice I can follow your concern about "counterfactual" in a strictly logical sense, but if we, as we usually do if we care about our readers, preface our discussion with the announcement that we will show that the assumption is absurd, there should be no confusion ... or, we could say "putatively counterfactual" ... @LSpice:I agree with your parenthetical; I was going to include that also in my comment but couldn't find a nice concise way of phrasing it, and so gave up. In regards to the "contrary assumption": I feel as if I need to specify an antecedent everytime I use that phrase. This is possibly just me personally. (I think this is because "contrary" refers to the content of the assumption, rather than the method.) Perhaps I would like "contradictive assumption" better. A good question. We have the term "induction hypothesis". A similar similar term for this reductio hypothesis would be useful. I sometimes write: "Assume (for purposes of contradiction) that..." On the other hand, sometimes you are not doing an indirect proof, you are merely proving the contrapositive. That should be announced in a different way. 
Not the name, but a name: "contradictive assumption". Google knows about 80 some odd uses of this in a mathematical context. I like it because the word "contradictive" has a dictionary definition that is more-or-less suited for the job, and the word immediately invokes the method of proof we are using. Especially in the philosophy of religion, the term reductio premise is sometimes used. A Google Scholar search for "reductio premise" (in quotation marks) turns up a few dozen references; one of the most highly cited is Robust vagueness and the forced-march sorites paradox, by Terence Horgan. However, among mathematicians, I don't think there is any standard terminology. I was reminded today that I once used the term "reductio hypothesis" in one of my own papers. Straw Man Proposal A straw-man (or straw-dog) proposal is a brainstormed simple draft proposal intended to generate discussion of its disadvantages and to provoke the generation of new and better proposals. https://en.wikipedia.org/wiki/Straw_man_proposal I liked the suggestion "the marked assumption", found in the comments. Unlike Willie Wong's suggestion the "contradictive assumption", the "marked assumption" only partially suggests the nature of the expression. This is hard to fix unless the expression is made standard. An advantage of the "marked assumption" is that it can serve double-duty: Even in cases where the truth of the assumption is unknown, the expression is still relevant. This is particularly notable in the study of the Riemann Hypothesis where the Riemann Hypothesis is occasionally assumed and consequences derived therefrom. If the marked assumption holds, then all we have done is discover consistent structure. If the marked assumption fails, then our presumptive work can just be seen as a search for a contradiction. In this case, why not just label your assumption "Assumption A" at the beginning of the proof, and refer to it as "Assumption A" throughout the course of the argument? If you are going to mark it, might as well actually mark it. (People already do this for "positive" assumptions that they need to use repeatedly in the same paper.) @WillieWong Of course this is a valid tactic. That said, the whole point of this question is to pindown an acceptable adjective in such cases where such an adjective is desirable. I thought the whole point of the question was to find a name for an assumption made for the sake of contradiction. @NikWeaver Imagine trying to motivate a theory or definition, or even just proceeding through mathematical research. How often is the truth state of an assumption known?
2025-03-21T14:48:31.231795
2020-06-12T03:32:22
362849
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Alexander Chervov", "Pulcinella", "Robert Israel", "Sandeep Silwal", "Zach Teitler", "https://mathoverflow.net/users/10446", "https://mathoverflow.net/users/119012", "https://mathoverflow.net/users/13650", "https://mathoverflow.net/users/83122", "https://mathoverflow.net/users/88133" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630064", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362849" }
Stack Exchange
Why is the Vandermonde determinant harmonic? It can be checked that the Vandermonde determinant defined as $$V(\alpha_1, \cdots, \alpha_n) = \prod_{1 \le i < j \le n}(\alpha_i-\alpha_j) $$ is a harmonic function, that is $\Delta V = 0$ where $\Delta$ is the Laplace operator. Is there a deeper or more intuitive reason why this fact should hold? The straightforward proof of just computing the derivatives and checking doesn't provide any insights. Deeper than what? Which algebraic proof are you thinking of? (Perhaps this is the algebraic proof?) Consider the symmetric group action permuting the variables. The Vandermonde is antisymmetric, meaning it spans an alternating representation—it's invariant under permutations, up to multiplication by the sign of the permutation. And it's the lowest-degree antisymmetric form. Applying any symmetric operator (such as the Laplacian) lowers the degree, but preserves antisymmetry. But the only lower-degree antisymmetric form is just zero. I would argue that provides insights: it generalizes to other reflection groups, which are deeply studied. @RobertIsrael The proof was just computing the second derivatives and working with the messy algebraic expression. Since harmonic functions are so nice and the vandermonde det is so ubiquitous, it is natural to hope for a 'deeper' connection @ZachTeitler Thank you for your response. Can you elaborate on why its the lowest degree antisymmetric form? Yet another fun with Vandermonde of similar spirit: https://mathoverflow.net/q/78540/10446 There's got to be a way of doing this by viewing $\Delta$ as a left invariant differential operator on $\text{GL}_n$ (the Casimir), and the Vandermonde determinant as a ``function'' on $\text{GL}_n$ (i.e. function to $\mathbf{A}^1/ \pm1$), where $\alpha_i$ picks out the $i$th eigenvalue. Consider the symmetric group action permuting the variables. The Vandermonde determinant $V$ is antisymmetric, meaning it spans an alternating representation—it's invariant under permutations, up to multiplication by the sign of the permutation. Applying any symmetric differential operator (such as the Laplacian) preserves antisymmetry, but lowers the degree (as long as the operator doesn't have any constant terms). And $V$ is the lowest-degree antisymmetric form. This is a fun, quick exercise. First note that $\deg V = \binom{n}{2}$, which is $0 + 1 + \dotsb + (n-1)$, and indeed all the monomials appearing in $V$ have the form $x_1^0 x_2^1 \dotsm x_n^{n-1}$, up to permutation and coefficient of $\pm 1$. None of the exponents here are repeated, and we realize that in any lower degree polynomial, there isn't enough room to have distinct exponents. Now if $f$ is any antisymmetric polynomial with a term $c x_1^{a_1} \dotsm x_n^{a_n}$ with a repeated exponent $a_i = a_j$, then permuting by the transposition $(i \, j)$ leaves this term unchanged; but it has to take this term to $-c x_1^{a_1} \dotsm x_n^{a_n}$ in order for $f$ to be antisymmetric; so $c=0$. Only terms with pairwise-distinct exponents can appear in $f$, so $\deg f$ must be at least $\binom{n}{2}$. This actually proves a bit more: up to scalar factor, $V$ is the unique antisymmetric polynomial of degree $\binom{n}{2}$, and in fact any antisymmetric polynomial is divisible by $V$. This also generalizes to other finite reflection groups. You can see, for example, Chapter 20 of Kane's book Reflection groups and invariant theory. But at the moment we just care about the property of having minimal degree. 
Now the point is that applying a symmetric differential operator preserves the antisymmetry property, but lowers the degree. But the only lower-degree antisymmetric form is just zero. I would argue that this approach provides insights: it generalizes to other reflection groups, which are deeply studied. For me, it came up in relation to apolarity and Waring rank, where it was useful to know which differential operators annihilate $V$. (The above shows that symmetric differential operators lie in the ideal of annihilators, and it turns out they generate the ideal.) The harmonicity of $V$ can be understood and placed in a more general context by identifying $V$ as the Doob h-transform$^\ast$ of $n$ independent and identically distributed diffusion processes, see Orthogonal polynomial ensembles in probability theory page 433. The generalization is that ${\cal D}V=0$ when $${\cal D}=\sum_{i=1}^n\biggl[(ax_i+b)\frac{\partial^2}{\partial \alpha_i^2}+c\frac{\partial}{\partial\alpha_i}\biggr]$$ for some $a,b,c\in\mathbb{R}$. This covers the cases of Brownian motion, squared Bessel processes (squared norms of Brownian motions) and generalized Ornstein-Uhlenbeck processes driven by Brownian motion. $^\ast$ A diffusion process of $\alpha_1,\alpha_2,\ldots\alpha_n$, starting at zero, conditioned on $\alpha_1<\alpha_2<\cdots<\alpha_n$. The other answers already answered the question. I just wanted to point out that this type of question is also part of the Dyson Brownian motion topic. You can see that from the answers already stated, for example Carlo Beenakker. If you look up Dyson Brownian motion you can get to this old blog post of Tao https://terrytao.wordpress.com/2010/01/18/254a-notes-3b-brownian-motion-and-dyson-brownian-motion/ Around equation (9) he starts to address the type of algebra you mention. In your notation, his equation (11) says $$ \frac{\partial}{\partial \alpha_i} V\, =\, \sum_{\substack{j\\j\neq i}} \frac{V}{\alpha_i-\alpha_j}\, . $$ Applying that again and the derivative of $1/x$ you get $$ \frac{\partial^2}{\partial \alpha_i^2} V\, =\, \sum_{\substack{j,k\\|\{i,j,k\}|=3}} \frac{V}{(\alpha_i-\alpha_j)(\alpha_i-\alpha_k)}\, . $$ Then see his Exercise 25 and the simple algebraic fact that facilitates it. Of course this might not be the deeper reason you want. And then you have all the other deeper connections mentioned by the other folks who already gave you answers.
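For anyone who wants to see the claim checked symbolically, here is a tiny sympy computation (my own addition, not from the answers) that expands the Laplacian of the Vandermonde product for $n=4$ and confirms it is identically zero:

import sympy as sp

a = sp.symbols('a1:5')                                   # a1, a2, a3, a4
V = sp.Mul(*[a[i] - a[j] for i in range(4) for j in range(i + 1, 4)])
laplacian = sum(sp.diff(V, ai, 2) for ai in a)
print(sp.expand(laplacian))                              # prints 0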
2025-03-21T14:48:31.232286
2020-06-12T05:09:17
362851
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Math Lover", "https://mathoverflow.net/users/129638" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630065", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362851" }
Stack Exchange
Results which are known about ideals of spatial tensor product I am studying ideals of the spatial (minimal) tensor product of $C^{\ast}$-algebras but I did not find any book/paper in which all the results are given. What are some results or folklore which are well known about the ideals (primitive/prime/modular) of spatial tensor products of $C^{\ast}$-algebras? To start with, if $A$ or $B$ is exact then closed ideals of the spatial tensor product $A \otimes B$ are generated by tensor products of two-sided closed ideals. Well, one source I know which "considers ideals of tensor products of $C^*$-algebras" is David McConnell's thesis: $C_0(X)$-structure in $C^*$-algebras, multiplier algebras and tensor products. Thank you Matthew, I will have a look. You can also find some results of that flavor in Closed ideals and Lie ideals of minimal tensor product of certain C*-algebras.
2025-03-21T14:48:31.232379
2020-06-12T05:18:19
362852
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Fedor Petrov", "Timothy Chow", "https://mathoverflow.net/users/122662", "https://mathoverflow.net/users/3106", "https://mathoverflow.net/users/4312", "Đào Thanh Oai" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630066", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362852" }
Stack Exchange
Like Bradley’s conjecture (Four incenters lie on a circle) Please don't close this question. Because there is simple configuration with 57 vote up, and don't close. Why you vote up that question and You vote to close this question? A problem I posed at here since 2014 but no solution: Let $ABCD$ be a bicentric quadrilateral, $O$ is center of circle $(ABCD)$. Then Incenter of four triangles $OAB,OBC,OCD,ODA$ lie on a circle. My question: Could You give a your solution for problem above. The problem like Bradley’s conjecture. You can see Bradley’s conjecture at here and page 73, here Please don't vote to close because why the topic at here 57 vote up and no close https://mathoverflow.net/questions/284458/does-this-geometry-theorem-have-a-name It is solved by A. Zaslavsky http://geometry.ru/olimp/2019/bicentr.pdf Thank You Dr @FedorPetrov @ĐàoThanhOai : The other question asked for the name of the theorem. You're asking for the solution. Without loss of generality assume that $O=(0,0)$ and the ABCD circle has radius 1. Then $$\begin{cases} A=(\cos \alpha,\sin\alpha),\\ B = (\cos\beta,\sin\beta),\\ C = (\cos\gamma,\sin\gamma),\\ D = (\cos\delta,\sin\delta),\\ AB = 2\sin\frac{\alpha-\beta}2,\\ BC = 2\sin\frac{\beta-\gamma}2,\\ CD = 2\sin\frac{\gamma-\delta}2,\\ AD = 2\sin\frac{\alpha-\delta}2,\\ \end{cases} $$ where we assume $2\pi\geq \alpha\geq \beta\geq \gamma\geq\delta\geq 0$, and by Pitot theorem: $$(\star)\qquad\sin\frac{\alpha-\beta}2 + \sin\frac{\gamma-\delta}2 = \sin\frac{\beta-\gamma}2 + \sin\frac{\alpha-\delta}2.$$ For the incircles coordinates we have $$\begin{cases} E = \tan(\frac{\pi}4-\frac{\alpha-\beta}4)\left(\cos\frac{\alpha+\beta}2,\sin\frac{\alpha+\beta}2 \right),\\ F = \tan(\frac{\pi}4-\frac{\beta-\gamma}4)\left(\cos\frac{\beta+\gamma}2,\sin\frac{\beta+\gamma}2 \right),\\ G = \tan(\frac{\pi}4-\frac{\gamma-\delta}4)\left(\cos\frac{\gamma+\delta}2,\sin\frac{\gamma+\delta}2 \right),\\ H = \tan(\frac{\pi}4-\frac{\alpha-\delta}4)\left(\cos\frac{\delta+\alpha}2,\sin\frac{\delta+\alpha}2 \right) \end{cases} $$ The points $E,F,G,H$ are concyclic iff $$\det \begin{bmatrix} OE^2 & E_x & E_y & 1\\ OF^2 & F_x & F_y & 1\\ OG^2 & G_x & G_y & 1\\ OH^2 & H_x & H_y & 1 \end{bmatrix} = 0. $$ In our case it takes the form: $$\det \begin{bmatrix} \tan(\frac{\pi}4-\frac{\alpha-\beta}4) & \cos\frac{\alpha+\beta}2 & \sin\frac{\alpha+\beta}2 & \cot(\frac{\pi}4-\frac{\alpha-\beta}4)\\ \tan(\frac{\pi}4-\frac{\beta-\gamma}4) & \cos\frac{\beta+\gamma}2 & \sin\frac{\beta+\gamma}2 & \cot(\frac{\pi}4-\frac{\beta-\alpha}4)\\ \tan(\frac{\pi}4-\frac{\gamma-\delta}4) & \cos\frac{\gamma+\delta}2 & \sin\frac{\gamma+\delta}2 & \cot(\frac{\pi}4-\frac{\gamma-\delta}4)\\ \tan(\frac{\pi}4-\frac{\alpha-\delta}4) & \cos\frac{\delta+\alpha}2 & \sin\frac{\delta+\alpha}2 & \cot(\frac{\pi}4-\frac{\alpha-\delta}4) \end{bmatrix} = 0, $$ which can be verified routinely under the condition $(\star)$. To verify the identity we can express everything in terms of $X:=e^{I\frac{\alpha-\beta}2}$, $Y:=e^{I\frac{\beta-\gamma}2}$, $Z:=e^{I\frac{\gamma-\delta}2}$, and $T:=e^{I\frac{\delta}2}$. In particular, we have $\sin\frac{\alpha-\beta}2 = \frac{X-X^{-1}}{2I}$, $\tan(\frac{\pi}4-\frac{\alpha-\beta}4) = \frac{(X-I)I}{X+I}$, $\cos\frac{\alpha+\beta}2 = \frac{X(YZT)^2 + X^{-1}(YZT)^{-2}}2$, and so on. The following SageMath code verifies that the determinant as a rational function over variables $X,Y,Z,T$ reduces to $0$ w.r.t. 
the polynomial ideal defined by $(\star)$:

def row(t,z): return [ (t-I)/(t+I)*I, (z + 1/z)/2, (z - 1/z)/2/I, (t+I)/(t-I)/I ]
R.<X,Y,Z,T> = PolynomialRing(QQ[I])
J = ideal( numerator( (X - 1/X) + (Z - 1/Z) - (Y - 1/Y) - (X*Y*Z - 1/(X*Y*Z)) ) )
M = matrix(Frac(R), 4, 4, [row(X,X*(Y*Z*T)^2), row(Y,Y*(Z*T)^2), row(Z,Z*T^2), row(X*Y*Z,X*Y*Z*T^2)] )
print( J.reduce( numerator(det(M)) ) )

The code prints "0" (run it online), thus establishing the determinant identity.
2025-03-21T14:48:31.232608
2020-06-12T05:35:52
362853
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gustave Emprin", "ato_42", "https://mathoverflow.net/users/156168", "https://mathoverflow.net/users/159328" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630067", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362853" }
Stack Exchange
Upper bounding VC dimension of an indicator function class I would like to upper bound the VC dimension of the function class $F$ defined as follows: $$ F := \left\{ (x,t) \mapsto \mathbb{1} \left( c_Q\min_{q \in Q} {\|x-q \|}_1 - t > 0 \right) \; | \; Q \subset \mathbb{R}^{d}, |Q| = k \right\}, $$ where $k$ is a fixed positive integer, $x,q \in \mathbb{R}^d, t \in \mathbb{R}, \mathbb{1}(A) $ denotes the indicator function (=1 if A is true and 0 otherwise), $\| \cdot \| _ 1 $ is the L1 norm, and $c_Q$ is a constant that depends on $Q$. Context: I am studying a grouping procedure that minimizes the $L_1$ norm. In particular, I would like to understand how the complexity of the class of functions $\left\{ c_Q \min_{q \in Q} {\|x-q \|}_1 \; | \; Q \subset \mathbb{R}^{d}, |Q| = k \right\}$ scales with $k$ and $d$ (e.g., $O(d k \log k)$). The above is a generalization of VC-dimension called pseudodimension. I would appreciate any suggestions you might have. Thanks! Not an answer to the question We prove that for $p=2$ $$dim_{VC}\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(dk)\Big)$$ for the family of functions $$A_p=\{(x,t_1,...,t_k)\mapsto1(\min_i||x-q_i||_p-t_i>0)|(q_1,...,q_k)\in(\mathbb R^d)^k\},$$ which is still bigger than the dimension of $$B=\{(x,t)\mapsto1(\min_{q\in Q}||x-q||_p-t>0)|Q\subset\mathbb R^d, \#(Q)=k\}$$ $$=\{(x,c_Qt)\mapsto1(c_Q\min_{q\in Q}||x-q||_p-t>0)|Q\subset\mathbb R^d, \#(Q)=k\}.$$ We write $Norm(\cdot)=||\cdot||_p$. Using $k$ clusters of $(d+1)$ points laid out in simplexes, we can find that $dim_{VC}(A_p)\geq(d+1)k$, so $d=_{k,d\to\infty}o(dim_{VC}(A_p))$. This relation will be of use later. Let $n$ be the VC-dimension of $A_p$ and $x_1,...,x_n\in\mathbb R^d$ a set that we can pulverize (shatter). We first give an upper bound on the cardinality of $\big\{\{x_i\}_{1\leq i\leq n}\cap f^{-1}(\{0\})\big\}_{f\in A_p}$, then use it to derive an upper bound on $n$. For every $1\leq i\not=j\leq n$, we can choose some hyperplane $H_{i,j}=H_{j,i}$ separating the sets $Norm(q-x_i)<Norm(q-x_j)$ and $Norm(q-x_i)>Norm(q-x_j)$. To each $q\in\mathbb R^d\setminus(\bigcup_{i,j}H_{i,j})$, associate $\sigma_q$ such that $\sigma_q(i)<\sigma_q(j)$ iff $q$ and $x_i$ are on the same side of $H_{i,j}$. If $q\in H_{i,j}$, set $\sigma_q$ to be the permutation of a neighbouring region of $\mathbb R^d\setminus(\bigcup_{i,j}H_{i,j})$. If we write $N=\frac{n(n-1)}2$ for the number of hyperplanes, $\mathbb R^d\setminus(\bigcup_{i,j}H_{i,j})$ has at most $P_d(N)=\dbinom{N+1}{0}+\dbinom{N+1}{1}+...+\dbinom{N+1}{d}$ open regions. Note that $P_d$ is of degree $d$ in $N$. For $k$ and $d$ going to $\infty$, we have $d=_{d,k\to\infty}o(dk)=_{d,k\to\infty}o(N)$, so we should have $P_d(N)\sim_{k,d\to\infty}\binom{N+1}d\sim\frac{((N+1)/d-0.5)^de^d}{\sqrt{2\pi d}}.$ Without proof of the former, let us use the much coarser $P_d(N)\leq (d+1)\binom{N+1}d$. For every $q\in\mathbb R^d$, $Norm(q-x_{\sigma_q(i)})$ is non-decreasing in $i$, so when picking a radius $t$, the ball of center $q$ and radius $t$ can have at most $n+1$ intersections with $\{x_1,...,x_n\}$ : no points, $\{x_{\sigma_q(1)}\}$, $\{x_{\sigma_q(1)},x_{\sigma_q(2)}\}$,... or all the $x_i$. For a ball of center $q_i$ and radius $t_i$, choosing $q_i$ gives at most $P_d(N)$ possibilities for $\sigma_{q_i}$, which then translates to at most $(n+1)P_d(N)$ possibilities for $B(q_i,t_i)\cap\{x_i\}_{i\leq n}$. With $k$ balls, we get $\big((n+1)P_d(N)\big)^k$ possible sets.
Since we can pulverize $x_1,...,x_n$, we have \begin{align*} 2^n&\leq\big((n+1)P_d(N)\big)^k\\ n\ln(2)&\leq k\ln\big((n+1)P_d(\frac{n(n-1)}2)\big)\\ n&\leq \frac{k}{\ln(2)}\ln\left((n+1)(d+1)\binom{N}{d}\right)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( k\ln(nd\frac{(\frac{N}d-\frac12)^d\mathrm e^d}{\sqrt d})\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( k\big(\ln(n)+\ln(d)+d\ln(\frac{N}d-\frac12)+d-\frac12\ln(d)\big)\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(\frac{N}d)\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(n)\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(dk\ln(n))\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(dk)\Big)+O\Big(dk\ln(\ln(n))\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(dk)\Big)+o\Big(dk\ln(n)\Big)\\ &\underset{\substack{k\to\infty\\d\to\infty}}=O\Big( dk\ln(dk)\Big) \end{align*} For $k,d\to\infty$, we used an equivalent for $\binom{N}{d}$ when $d=o(N)$, and used the fact that $\ln(N)=O( \ln(n))$. Using the better equivalent $P_d(N)\sim\binom{N+1}d$ (if true) should provide $O(dk\ln(k)).$ The number of open regions $P_d(N)$ was posted by @AnginaSeng at https://math.stackexchange.com/questions/2312255/cutting-a-d-dimensional-space-with-n-hyperplanes Thank you. If I'm correct, your result is for the function class $F := \left\{ \min_{q \in Q} d(x,q) \;|\; Q \subset \mathbb{R}^{d}, |Q| = k \right\},$ where $d(x,y)$ is a distance on $\mathbb{R}^d$. How can one adapt the proof to find the VC dimension of the collection of sets (indexed over $Q$) $ C := \left\{ (x \in \mathbb R^d,t\in \mathbb R),\ \min_{q \in Q} d(x,q) - t > 0 \;|\; Q \subset \mathbb{R}^{d}, |Q| = k \right\} $ (which corresponds to the VC dimension of the indicator function class I defined)? My result should be on the VC dimension of the family of functions $$\{(x,t_1,...,t_k)\mapsto1(\min_i||x-q_i||_2-t_i>0)\,|\,(q_1,...,q_k)\in(\mathbb R^d)^k\},$$ which is still bigger than the dimension of $$\{(x,t)\mapsto1(\min_{q\in Q}||x-q||_2-t>0)\,|\,Q\subset\mathbb R^d, \#(Q)=k\}.$$ In the proof, the $x_i$ are the points to pulverize, and the centers of the balls are $q_1,...,q_k$. It seems to me that $c_Q$ only shifts the value of $t$. The problem with my answer is that I used the Euclidean distance instead of $||\cdot||_1$, so my answer may not have anything to do with your question. I see your updated proof. Which part of the argument requires $p \in \{ 1, 2, \infty\}$? The separation by a hyperplane. I am revising now and I was mistaken: the domains $||q-x_i||_1>||q-x_j||_1$ and its converse are not separated by hyperplanes. The proof thus only holds for $p=2$. The fact that the mediator (bisector set) does not contain a plane changes $P_d(N)$. For example, consider the mediators (in $R^2$) of (-3,-1) , (3,1) and between (-4,1) and (2,3). Together, they define 5 domains in $R^2$, while $P_2(2)$ is only $4$.
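To get a feel for how the final bound scales, one can compute, for a few sample values of $d$ and $k$ (chosen arbitrarily here as an illustration only), the smallest $n$ at which the inequality $2^n\leq\big((n+1)P_d(n(n-1)/2)\big)^k$ first fails, and compare it with $dk\ln(dk)$. A short Python sketch:
import math

def P(d, N):
    # Region-count bound used above: C(N+1,0) + C(N+1,1) + ... + C(N+1,d).
    return sum(math.comb(N + 1, i) for i in range(d + 1))

def vc_upper_bound(d, k):
    # Smallest n with n*ln(2) > k*ln((n+1)*P_d(n(n-1)/2)); the VC dimension lies below it.
    n = 1
    while n * math.log(2) <= k * math.log((n + 1) * P(d, n * (n - 1) // 2)):
        n += 1
    return n

for d, k in [(2, 5), (5, 10), (10, 10), (10, 20)]:
    n = vc_upper_bound(d, k)
    print(d, k, n, round(n / (d * k * math.log(d * k)), 2))   # the ratio to d*k*ln(d*k) stays moderate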
2025-03-21T14:48:31.232941
2020-06-12T06:57:25
362857
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630068", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362857" }
Stack Exchange
Binary and n-ary topological spaces I am interested in various generalizations of the notion of topological space, and also in topologies placed in atypical frameworks, e.g. intuitionistic topological spaces or nano topological spaces. Recently, I've noticed that some authors have started to investigate "binary" and "n-ary" topological spaces. The main idea is: while in the product topology we work with the power set of the Cartesian product (of spaces), in the binary / n-ary case we are interested in the Cartesian product of the power sets. My question is quite a general one: do you think that this whole idea can be reliable or sensible? Explanation: unfortunately, it seems that there are a lot of papers about "int. topo. spaces", "nano topo. spaces", "soft-sets topologies", "generalized topologies" (i.e. families closed under arbitrary unions without any additional conditions), "weak structures" etc., and the same goes for binary topologies, but they are barely scratching the surface of the subject: namely, the authors often limit themselves to the presentation of "generalized" (or "intuitionistic", or "nano", or "binary") analogues of classical notions (various kinds of "continuity" and "convergence"). These papers are fine and I like them... but this whole branch of maths still seems to be at an initial stage. Examples and counter-examples are often based on simple and finite sets etc. Hence, have you encountered more complex applications of these ideas? E.g. in formal logic, analysis, measure theory or in geometric / algebraic topology?
2025-03-21T14:48:31.233065
2020-06-12T07:23:50
362860
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630069", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362860" }
Stack Exchange
What is the maximal weight submodule of $\text{Hom}_{\mathfrak{g}}(M,N)$? Let $\mathfrak{g}$ be a finite-dimensional semisimple Lie algebra over an algebraically closed field $\mathbb{K}$ of characteristic $0$. Fix a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{g}$. For any (not necessarily finite-dimensional) $\mathfrak{g}$-module $X$, write $X^\text{wt}$ for the sum of all $\mathfrak{g}$-submodules of $X$ which are $\mathfrak{h}$-semisimple. Therefore, $X^\text{wt}$ is the maximal $\mathfrak{h}$-semisimple $\mathfrak{g}$-submodule of $X$. In fact, $$X^{\text{wt}}=\bigoplus_{\lambda\in\mathfrak{h}^*}\,X_\lambda\,,$$ where $$X_\lambda:=\Big\{x\in X\,\Big|\,h\cdot x=\lambda(h)\,x\text{ for each }h\in\mathfrak{h}\Big\}$$ for every $\lambda\in\mathfrak{h}^*$. Here is a clarification of some notation. Let $M$ and $N$ be $\mathfrak{g}$-modules. Then, $M^*$ is the usual algebraic dual $\text{Hom}_{\mathbb{K}}(M,\mathbb{K})$ of $M$, and we identify $M^*\underset{\mathbb{K}}{\otimes}N$ as a $\mathfrak{g}$-submodule of $\text{Hom}_{\mathbb{K}}(M,N)$ via the canonical injection given by the linear extension of the assignments $$(f\otimes n)\mapsto \big(m\mapsto f(m)\,n\big)$$ for all $f\in M^*$, $m\in M$, and $n\in N$. For example, we have $$\left(M^*\right)_\lambda=\left(M_{-\lambda}\right)^*$$ for every $\lambda\in\mathfrak{h}^*$, making $$\left(M^*\right)^\text{wt}=\bigoplus_{\lambda\in\mathfrak{h}^*}\,\left(M_{-\lambda}\right)^*\,.$$ Suppose that $M$ and $N$ are $\mathfrak{h}$-weight $\mathfrak{g}$-modules. I would like to know if there has been any study on the $\mathfrak{g}$-module $\big(\text{Hom}_{\mathbb{K}}(M,N)\big)^{\text{wt}}$. If $L$ is also an $\mathfrak{h}$-weight $\mathfrak{g}$-module, then we have the following easy fact: $$\text{Hom}_{\mathfrak{g}}\left(L\underset{\mathbb{K}}{\otimes}M,N\right)\cong \text{Hom}_{\mathfrak{g}}\Big(L,\big(\text{Hom}_{\mathbb{K}}(M,N)\big)^{\text{wt}}\Big)\,.$$ If all the $\mathfrak{h}$-weight spaces of $M$ and $N$ are finite-dimensional, then can we at least expect something like $$\big(\text{Hom}_{\mathbb{K}}(M,N)\big)^{\text{wt}}=\left(M^*\underset{\mathbb{K}}{\otimes}N\right)^\text{wt}\,?$$ Do we know whether $$\left(X\underset{\mathbb{K}}{\otimes} Y\right)^{\text{wt}}=(X^{\text{wt}})\underset{\mathbb{K}}{\otimes} (Y^{\text{wt}})$$ holds for all $\mathfrak{g}$-modules $X$ and $Y$?
2025-03-21T14:48:31.233214
2020-06-12T08:52:41
362862
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Math Lover", "Matthew Daws", "https://mathoverflow.net/users/129638", "https://mathoverflow.net/users/406" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630070", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362862" }
Stack Exchange
Definition of center of ternary ring of operators Let $H$ and $K$ be Hilbert spaces and let $B(H,K)$ denote the space of bounded operators from $H$ to $K$. Recall that a ternary ring of operators (TRO) $V$ is a closed subspace of $B(H,K)$ which is closed under the operation $(x,y,z) \to xy^*z$. Moreover, $V$ is called commutative if $ab^*c=cb^*a$ for all $a,b,c \in V$. Is there any definition of the center of a TRO in the literature? Where is your definition of "commutative" for a TRO from? @MatthewDaws: Page 340 of https://msp.org/pjm/2003/209-2/pjm-v209-n2-p10-p.pdf What about defining $$C = \{ v\in V : av^*c = cv^*a \ (a,c\in V) \}. $$ This is evidently a closed linear subspace of $V$. Then, given $d,e,f\in C$ and $a,c\in V$, $$a(de^*f)^*c = (af^*e)d^*c = cd^*(af^*e) = cd^*(ef^*a) = c(d^*ef^*)a = c(f^*ed^*)a = c(de^*f)^* a$$ using that $d\in C$, then $f\in C$, then $e\in C$. Thus $de^*f\in C$, so $C$ is a sub-TRO of $V$, and clearly $C$ is commutative. If $V$ were itself commutative, then $C=V$. This seems like a reasonable definition of "center" to me. I don't know of a reference... If $V$ is a $C^{\ast}$-algebra, then it is clear that $C$ is contained in the usual center, but it's not clear whether the usual center is part of $C$ or not. Yes, I think I agree. Given that every $C^*$-algebra is a TRO, but not conversely, why do you really expect a notion of "centre" for a TRO to agree with that for a $C^*$-algebra? As I said, I don't have a reference... If they don't agree, then there would be two different notions of center for $C^{\ast}$-algebras. In that sense, a definition of center like the one you defined won't really be a good choice.
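A small numerical illustration of the definitions (not of the open question): the diagonal matrices inside $B(\mathbb{C}^3)$ form a commutative TRO, so for that $V$ the set $C$ above is all of $V$. The following Python sketch just checks the identity $ab^*c=cb^*a$ on random diagonal matrices (the size and seed are arbitrary choices):
import numpy as np

rng = np.random.default_rng(0)

def rand_diag(n=3):
    # random diagonal matrix, viewed as an operator in B(C^3)
    return np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))

a, b, c = rand_diag(), rand_diag(), rand_diag()
lhs = a @ b.conj().T @ c      # the ternary product a b* c
rhs = c @ b.conj().T @ a
print(np.allclose(lhs, rhs))  # True: diagonal matrices give a commutative TRO, so C = V here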
2025-03-21T14:48:31.233341
2020-06-12T09:37:28
362866
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "https://mathoverflow.net/users/142929", "user142929" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630071", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362866" }
Stack Exchange
On variations of a claim due to Kaneko in terms of Lehmer means This post is cross-posted from Mathematics Stack Exchange: because there was a mistake on my part (see the excellent partial answer and my thread of edits of my question on MSE), this post on MathOverflow is slightly different from On variations of a claim due to Kaneko in terms of Lehmer means, asked 30 days ago (May 13) as the question with identifier 3672588 on Mathematics Stack Exchange. For a tuple of positive real numbers $\mathbb{x}=(x_1,x_2,\ldots,x_n)$ we denote its corresponding Lehmer mean by $L_q(\mathbb{x})$, where $-1\lt q\lt 0$; I know the definition of these means from the Wikipedia article with title Lehmer mean. Also, we denote the sum-of-divisors function by $\sigma(n)=\sum_{1\leq d\mid n}d$ for integers $n\geq 1$. The idea of the post was to combine this definition of the Lehmer mean with an equivalent formulation of the Riemann hypothesis; I refer to the last paragraph of [1] (Kaneko's claim, for a suitable choice of the integer $n_0$ from which the inequality in Kaneko's claim is true $\forall n\geq n_0$). From here, my belief is that there should be an integer $n_0>1$ such that $\forall n\geq n_0$ the following inequality holds $$\sigma(n)<\exp\left(\frac{n}{L_q(1,\ldots,n)}\right)\log\left(\frac{n}{L_q(1,\ldots,n)}\right)\tag{1}$$ specifically for real numbers $-1\lt q\lt 0$. Fact. We have from the theory of Lehmer means that we recover the inequality of Kaneko as $q$ tends to $0^{-}$. Sketch of proof. From the section on Properties and Special cases of the linked Wikipedia article, and the continuity of a product of continuous functions (our exponential and logarithm).$\square$ Question. I would like to know what work can be done to prove an inequality of the form $(1)$ for a specific value of $-1<q<0$, ideally for the smallest value of $|q|$ (I mean very close to $0$) for which it is possible to prove that the inequality is true, holding $\forall n\geq n_0$ for a suitable choice of $n_0>1$ corresponding to your specific value of $-1<q<0$. Many thanks. I emphasize that I'm asking what work can be done to prove an example of one of those inequalities $(1)$, for a very small quantity $|q|>0$ with $-1<q<0$, that is true as a relaxation of Kaneko's claim for a segment of integers $n>n_0$. Feel free to add your feedback about whether this type of inequality (my interpretation of a relaxation or variation of Kaneko's claim) and such combinations can be potentially interesting. References: [1] Jeffrey C. Lagarias, An Elementary Problem Equivalent to the Riemann Hypothesis, The American Mathematical Monthly, 109, No. 6 (2002), pp. 534-543. [2] P. S. Bullen, Handbook of Means and Their Inequalities, Springer, (1987). Feel free to add your feedback about the question; what I tried to evoke with my question (once I noticed, from the excellent partial answer on MSE, the possible flaws in my statements) is to try to prove an inequality $(1)$ similar to Kaneko's claim for suitable values of $-1<q<0$ and $n_0$, thus, in my interpretation, a relaxation, in some way, of the Riemann hypothesis. Many thanks for the attention and patience of users, and good day.
As an aside comment (or a companion/comparison reference for this MO post), I've also edited on Mathematics Stack Exchange the post with title Around a weak form of the Riemann hypothesis inspired in the relationship between the Stolarsky means and the logarithmic mean; that is the MSE question with identifier 3626466 (asked Apr 15), which has no available answer (please, if you can, add your feedback with your MSE account in the comments about the question on MSE).
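For what it is worth, inequality $(1)$ is easy to test numerically for a sample value of $q$. The sketch below (the choices $q=-0.1$ and the range $n\le 10^5$ are assumptions made only for illustration, and $L_q$ is taken in the Wikipedia convention $L_q(1,\ldots,n)=\sum_{k\le n}k^{q}\big/\sum_{k\le n}k^{q-1}$) records the largest $n$ in the range that violates $(1)$, which suggests a candidate $n_0$ for that $q$:
import math

q = -0.1            # assumption: one sample value of q in (-1, 0)
N = 10**5           # test the inequality for n = 1, ..., N

sigma = [0] * (N + 1)                 # divisor-sum sieve
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        sigma[m] += d

S_num = S_den = 0.0                   # running sums defining the Lehmer mean L_q(1,...,n)
last_failure = None
for n in range(1, N + 1):
    S_num += n ** q
    S_den += n ** (q - 1)
    ratio = n * S_den / S_num         # n / L_q(1,...,n)
    rhs = math.exp(ratio) * math.log(ratio) if ratio > 1 else 0.0
    if not sigma[n] < rhs:
        last_failure = n
print(last_failure)                   # largest n <= N violating (1); n_0 could be taken just above it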
2025-03-21T14:48:31.233579
2020-06-12T10:11:24
362867
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Derek Holt", "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/35840" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630072", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362867" }
Stack Exchange
Stabilizers in the action of $\mathrm{GL}(n, \mathbb Z)$ on $\mathbb Z^n$ How can we effectively calculate the subgroup of $G: = \mathrm{GL}(n, \mathbb Z)$ which fixes pointwise a given submodule $S$ of $\mathbb Z^n$ in the action of $G$ on $\mathbb Z^n$ by left multiplication? Note: Suppose $S$ is freely generated by $\xi_1, \xi_2, \cdots, \xi_s$; then clearly the problem is equivalent to describing the unimodular matrices fixing each of the tuples $\xi_1, \cdots, \xi_s$. If the $\xi_j$s have a simple form, e.g., if the $\xi_j$s are the standard basis vectors $e_1, \cdots, e_s$ of $\mathbb Z^n$, then the form of the matrices fixing each $\xi_j \ (j = 1, \cdots, s)$ is easy to determine. But in general the method to obtain the stabilizer is not clear. One can boil down to the case of a set of basis elements as follows: let $S'$ be the inverse image of the torsion subgroup of $\mathbf{Z}^n/S$ in $\mathbf{Z}^n$. Then the pointwise stabilizer of $S$ also fixes pointwise $S'$. And $S'$ is a direct factor of $\mathbf{Z}^n$. Hence by a change of basis (i.e., conjugation by some matrix in $\mathbf{GL}_n(\mathbf{Z})$) we can boil down to $S'$ being generated by the first $k$ vectors, which gives an obvious block-triangular decomposition. So the effective part of the question boils down to computing $S'$ and changing the basis so that $S'$ is generated by the first basis vectors. Algorithmically, you would put the matrix defining $S$ into Smith Normal Form.
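For the model case reached after this change of basis (where $S'$ is spanned by $e_1,\ldots,e_k$), the pointwise stabilizer consists exactly of the integer matrices whose first $k$ columns are $e_1,\ldots,e_k$, i.e. block matrices $\begin{pmatrix} I_k & B\\ 0 & D\end{pmatrix}$ with $D\in\mathrm{GL}(n-k,\mathbb Z)$ and $B$ an arbitrary integer block. A small sanity check of this description in Python/SymPy (the particular blocks $B$ and $D$ below are arbitrary choices):
import sympy as sp

n, k = 4, 2
B = sp.Matrix([[3, -1], [2, 5]])     # arbitrary integer block
D = sp.Matrix([[2, 1], [1, 1]])      # any element of GL(n-k, Z); here det D = 1
g = sp.Matrix(sp.BlockMatrix([[sp.eye(k), B], [sp.zeros(n - k, k), D]]))

print(g.det())                                              # 1, so g is unimodular
e = sp.eye(n)
print(all(g * e[:, j] == e[:, j] for j in range(k)))        # True: g fixes e_1, ..., e_k pointwise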
2025-03-21T14:48:31.233719
2020-06-12T13:04:30
362869
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Ben McKay", "Chris Gerig", "Klaus Niederkrüger", "Overflowian", "https://mathoverflow.net/users/12310", "https://mathoverflow.net/users/13268", "https://mathoverflow.net/users/67031", "https://mathoverflow.net/users/99042" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630073", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362869" }
Stack Exchange
Which curves are boundaries of pseudoholomorphic curves? I have posted this on Math Stack Exchange but nobody replied. Consider a loop $\gamma:\mathbb{S}^1\to M^{2n}$ in a symplectic manifold $(M^{2n},\omega)$. Let $J$ be an $\omega$-compatible almost complex structure on $M$. My naive question is: when does $\gamma$ bound a pseudoholomorphic curve? More precisely, when does there exist a Riemann surface with boundary $\Sigma$, and a $J$-holomorphic map $u:\Sigma\to (M,J)$ such that $u|_{\partial \Sigma} \equiv \gamma$? I do not know much about pseudoholomorphic curves. I see that some people require some conditions on the boundary, like contact type or totally real; if you want, you can consider these assumptions or deal with these cases. I don't think this is known. There are conditions called moment conditions, if I remember correctly, that determine which real analytic closed curves are the boundaries of holomorphic curves in complex Euclidean space. But I think they are based on having global coordinates. I don't know if there is an answer to this question even for complete Kaehler manifolds. There always exists a Lagrangian torus which contains a given smooth loop, and in some scenarios the moduli of J-disks with Lagrangian boundary conditions will be empty. Alternatively, if the loop is contained in a contact hypersurface for which it is a Reeb orbit, we can try to get some existence results using Gromov-Witten theory by stretching the neck of the 4-manifold along that hypersurface. These comments might be useless in practice. @ChrisGerig a question: you say in some scenarios we don't have any J-disk; do we get any improvement by considering higher genus surfaces? The 'moment conditions' that Ben McKay mentions are simply this: A closed curve $C$ in $\mathbb{C}^n$ bounds a compact Riemann surface (which might be singular) if and only if the integral around $C$ of any global holomorphic $1$-form on $\mathbb{C}^n$ vanishes. One direction is just Stokes' Theorem: If $\omega$ is a holomorphic $1$-form, then $\mathrm{d}\omega$ is a holomorphic $2$-form, and a holomorphic $2$-form vanishes when pulled back to any (possibly singular) complex curve. Thus, if $C = \partial X$ where $X\subset\mathbb{C}^n$ is a compact Riemann surface (with boundary), then $\int_C\omega = \int_X\mathrm{d}\omega = 0$. The converse, which (I think) is due to Harvey and Lawson (Boundaries of complex analytic varieties, I., Annals of Mathematics 102 (1975), 223–290), is rather deeper. I don't remember whether there is a test for when $C$ bounds an actual holomorphic disk (i.e., a Riemann surface of genus $0$). I believe I remember that when $C$ does bound a compact (singular) holomorphic Riemann surface, it bounds only one. (I don't have access to the Harvey-Lawson paper I cited just now. If you want the definitive statement, I suggest that you check that paper.) This applies to all pseudoconvex domains in complex Euclidean space, because if a closed real curve sits inside the domain, any compact Riemann surface it bounds also lies in that domain, by the maximum principle. I want to explain why one should not expect that a given generic loop is ever the boundary of a holomorphic curve (unless $\dim M = 2$). My claim (or my intuition) depends of course on the definition of "generic", so let me try to justify the statement: Take your loop $\gamma$, then construct a totally real submanifold $L$ that contains $\gamma$.
(If $\gamma$ is embedded the construction of $L$ does not pose any difficulty; and we are not requiring that $L$ is closed or anything similar.) Assuming now that $\gamma$ bounds a smooth map $f\colon \Sigma \to (M,J)$ for some Riemann surface $\Sigma$, you can use the Riemann-Roch formula to compute the "expected" dimension of the space of holomorphic curves that are homotopic to $f$. Assuming that $\gamma$ is injective, and that $J$ is chosen "generically" (which of course is a bit of a mysterious property), you can assume that the expected dimension corresponds to the genuine dimension of the space of holomorphic curves. If the expected dimension is negative, then there will not be any holomorphic curve bounded by $\gamma$ in this homotopy class. (Note that morally the higher the genus of the surface the more negative the dimension! For example, if $S_1,S_2$ are closed Riemann surfaces and the genus of $S_2$ is $\ge 2$, then the expected dimension of a holomorphic curve in $S_1\times S_2$ that is homotopic to $\{p\}\times S_2$ is negative. Obviously if we take the product almost complex structure of $S_1\times S_2$ then the manifold will be foliated by the holomorphic curves $\{p\}\times S_2$, which seems to be a contradiction to what I wrote, but this is because the almost complex structure $j_1\oplus j_2$ is highly non-generic ... as soon as you slightly perturb it, no holomorphic curves in that homotopy class will survive.) Note that up to here, we have not used that $M$ is symplectic, but only that it is almost complex! The Riemann-Roch formula is $$ \operatorname{index} \bar \partial_J = \frac{1}{2}\dim M \cdot \chi(\Sigma) + \mu (f^*TM, f^*TL) , $$ where $\chi(\Sigma)$ is the Euler characteristic of $\Sigma$ and $\mu(f^*TM, f^*TL)$ is the Maslov index of $f$ with respect to $L$ which measures by how much $TL$ "turns" along $\gamma$ with respect to the trivialization of the complex bundle $f^*(TM,J)$. In order for a generic $J$ to admit any holomorphic curve with boundary $\gamma$ it follows that the Maslov class $\mu(f^*TM, f^*TL)$ of the homotopy class of the potential holomorphic curve has to be large enough so that the Fredholm index is positive. For a chosen totally real submanifold $L$ there might thus exist certain homotopy classes that can be represented by holomorphic curves with boundary on $L$, but in the initial question you were not at all interested in a specific $L$, only in $\gamma$! We can instead choose a countable family of totally real submanifolds $L_k$ such that for any smooth $f\colon (\Sigma, \partial \Sigma) \to (M, \gamma)$ we find an $L_k$ in this family such that the index of $f$ with respect to this $L_k$ will be negative (except of course, when $\dim M = 2$, because in this case $L = \gamma$ without choice). We can then choose an almost complex structure $J$ close to the initial one that is regular for all of the countably many totally real submanifolds $L_k$ simultaneously. For this generic $J$ there is no holomorphic curve with boundary in $L_k$ that has negative index (with respect to this $L_k$). But this means that there is no holomorphic curve at all bounded by $\gamma$. This proves that in a certain sense there is never a holomorphic curve with boundary $\gamma$ except for extremely special choices of $\gamma$ and $J$ (this of course is all modulo "genericity" of the almost complex structure, which is not a very user-friendly definition, because you can never check if "your" $J$ is actually "generic".
But this is the way that symplectic topology goes...). @ChrisGerig to construct some $L$, I would simply trivialize the pull-back bundle $f^* TM$ (as a complex bundle). This way I have a trivialization of $TM$ over $\gamma$. Next, I choose a totally real subbundle of $TM|_\gamma$ that contains the direction of the loop. If I apply the exponential map to this subbundle, I get a submanifold, and this submanifold is still totally real close to $\gamma$. (Maybe I need to choose a Riemannian metric such that $\gamma$ is totally geodesic or so?) @ChrisGerig I did not really think through all of the details needed to construct the sequence $\{L_k\}$, but any trivialization of $TM|_\gamma$ is isomorphic to any other one. A priori the only difference is that we have $S^1 \times \mathbb{C}^n$, with a certain section $\sigma$ given by the tangent direction of the loop. I think that you can split off $\mathbb{C}\cdot \sigma$ from $S^1 \times \mathbb{C}^n$; the remaining complementary subbundle is still trivial, and we can then find a totally real subbundle to obtain any desired Maslov index, can't we? :|
2025-03-21T14:48:31.234319
2020-06-12T14:41:55
362877
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630074", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362877" }
Stack Exchange
On the growth and bounds for a certain sequence of integers known as Bogotá numbers A Bogotá number is a non-negative integer equal to some smaller number, or itself, times that number's digital product, i.e. the product of its digits. For example, 138 is a Bogotá number because 138 = 23 x (2 x 3). The Bogotá numbers up to 1000 are 0, 1, 4, 9, 11, 16, 24, 25, 36, 39, 42, 49, 56, 64, 75, 81, 88, 93, 96, 111, 119, 138, 144, 164, 171, 192, 224, 242, 250, 255, 297, 312, 336, 339, 366, 378, 393, 408, 422, 448, 456, 488, 497, 516, 520, 522, 525, 564, 575, 648, 696, 704, 738, 744, 755, 777, 792, 795, 819, 848, 884, 900, 912, 933, 944, 966, 992. It has been shown that the natural density of Bogotá numbers is 0: https://math.stackexchange.com/questions/3713294/on-the-density-of-a-certain-sequence-of-integers. The number of Bogotá numbers, B(n), less than or equal to n, for n = 10^0, 10^1, 10^2, ..., 10^9, is 2, 4, 19, 67, 280, 1166, 4777, 19899, 82278, and 340649, as calculated by Freddy Barrera. Crude estimates and bounds for the value of B(n) are not difficult to come by. How precise can we get? Fix an integer $b > 1$ and let $p_b(n)$ denote the product of the base-$b$ digits of the integer $n$. Then let $\mathcal{B}_b$ be the set of numbers of the form $p_b(n)n$, for some integer $n \geq 0$, and put $\mathcal{B}_b(x) := \mathcal{B}_b \cap [0, x]$ for all $x > 0$. The OP is asking for bounds for $\mathcal{B}_b(x)$ (in the special case $b=10$). I guess that using the same methods of [1], one should be able to prove an upper bound of the form $x^{c_b + o(1)}$ for some constant $c_b$ depending on $b$. The details are many so I refer directly to the paper (the version on arXiv): For a parameter $\alpha > 0$, one splits $\mathcal{B}_b$ into two subsets: $\mathcal{B}_b^\prime$, consisting of the numbers with $p_b(n) > x^\alpha$, and $\mathcal{B}_b^{\prime\prime}$, the remaining numbers with $p_b(n) \leq x^\alpha$. An upper bound for $\mathcal{B}_b^\prime(x)$ is given by considering that each of its elements has a $b$-smooth divisor $> x^\alpha$ (pag. 3, (6) and the three equations before). An upper bound for $\mathcal{B}_b^{\prime\prime}(x)$ is given by estimating a sum of multinomial coefficients (pag. 4-5, (11) and the two equations before). [1] C. Sanna, On numbers divisible by the product of their nonzero base b digits, Quaestiones Mathematicae (in press) https://arxiv.org/abs/1809.05463
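Since the definition is completely explicit, the counts quoted in the question are easy to reproduce; here is a short Python sketch (the cutoff 10^5 is an arbitrary choice) that regenerates the list up to 100 and the values B(10^j) for small j:
from math import prod

def digit_product(m):
    return prod(int(c) for c in str(m))

def bogota_up_to(X):
    # m * p(m) <= X forces m <= X except when p(m) = 0, which only contributes 0.
    vals = {m * digit_product(m) for m in range(X + 1)}
    return sorted(v for v in vals if v <= X)

print(bogota_up_to(100))                                  # 0, 1, 4, 9, 11, 16, 24, 25, 36, ...
print([len(bogota_up_to(10 ** j)) for j in range(6)])     # should give 2, 4, 19, 67, 280, 1166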
2025-03-21T14:48:31.234482
2020-06-12T14:49:47
362879
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Batominovski", "Jeremy Rickard", "https://mathoverflow.net/users/22989", "https://mathoverflow.net/users/33026" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630075", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362879" }
Stack Exchange
References about transfinite socle series I would like to know if there is any literature that discusses "transfinite socle series" of a ring module. Below is my attempt at defining the series. Let $R$ be an associative unital ring and $M$ a unitary left $R$-module. Define the socle of $M$, denoted by $\text{soc}(M)$, to be the sum of all simple $R$-submodules of $M$. We create the transfinite socle filtration $$\text{Soc}(M):=\big(\text{soc}^\alpha(M):\alpha\text{ is an ordinal}\big)$$ of a given $R$-module $M$ as follows. First, define $\text{soc}^0(M):=0$ and $\text{soc}_0(M):=0$. If an ordinal number $\alpha>0$ has an immediate predecessor $\beta$, then $$\text{soc}_\alpha(M):=\text{soc}\big(M\big/\text{soc}^{\beta}(M)\big)\,,$$ and $\text{soc}^\alpha(M)$ is the preimage $\pi_\alpha^{-1}\big(\text{soc}_\alpha(M)\big)$ of the socle factor $\text{soc}_\alpha(M)$ under the canonical projection $\pi_\alpha:M\twoheadrightarrow \big(M\big/\text{soc}^{\beta}(M)\big)$. If $\alpha$ is a limit ordinal, then define $\text{soc}^\alpha(M)$ to be $\bigcup\limits_{\beta<\alpha}\,\text{soc}^\beta(M)$, and set $\text{soc}_\alpha(M):=0$. Finally, define $\overline{\text{soc}}(M)$ to be the union of all submodules $\text{soc}^\alpha(M)$ of $M$. (Clearly, there exists a smallest ordinal $\mu(M)$ such that $\overline{\text{soc}}(M)=\text{soc}^{\mu(M)}(M)$.) I do not have a specific question, but I would like to learn about any information regarding transfinite socle series. For example, if $\omega$ is the least infinite ordinal, then does it hold that $\text{soc}_{\omega+1}(M)=0$ for any $R$-module $M$ (in particular, when $R$ is an algebra over a field $\mathbb{K}$, and maybe when $M$ is a countable-dimensional vector space over the same field $\mathbb{K}$)? If this is not true (for a general $R$, or for the case where $R$ is an algebra over a field), what are counterexamples? Any reference, comment, and knowledge about transfinite socle filtrations will be greatly appreciated. I found quite a lot of literature by Googling "transfinite socle series". @JeremyRickard Not sure what you meant by a lot, but I only found two. (Originally, I called such a filtration simply an "infinite socle series," and Google search was not very helpful. Then, a user suggested that I change the term to "transfinite socle series." Googling this term results in better references, but there are only two viable links.) I'm not sure what you count as "viable", but most of the Google results I get in (at least) the first three pages of results seem relevant. As I said in comments, there is a fair amount of literature to be found by Googling "infinite socle series". More specifically, a module $M$ for which (in the notation of the question) $\overline{\text{soc}}(M)=M$ is called a "semi-artinian module", and a ring $R$ for which every module is semi-artinian (or equivalently for which $R$ is semi-artinian as a module for itself) is called a "semi-artinian ring". There is quite a lot of literature to be found on semi-artinian rings and modules. For the specific question asked at the end of the question, the following example is adapted from Section 5 of Nguyen V. Dung; Smith, Patrick F., On semi-artinian $V$-modules, J. Pure Appl. Algebra 82, No. 1, 27-37 (1992). ZBL0786.16002, although that paper is about a rather specific class of modules, and so it would not surprise me if similar examples were known previously, or if there are simpler examples.
Let $\mathbb{K}$ be a field, let $R_n=\mathbb{K}[t]/(t^n)$ for $n\geq1$, and let $R$ be the subring of $\prod_{n\geq1}R_n$ consisting of elements $(r_n)_{n\geq1}$ such that, for some $a\in\mathbb{K}$, $r_n=a$ for all but finitely many $n$. Then $R$ is a countable dimensional $\mathbb{K}$-algebra, with $\bigoplus_{n\geq1}R_n$ as a codimension one ideal. Let $M=R$, the regular $R$-module. For $k\in\mathbb{N}$, $$\text{soc}^kM=\bigoplus_{n\geq1}\text{soc}_{R_n}^kR_n,$$ and so, since $\text{soc}_{R_n}^n R_n=R_n$, $$\text{soc}^\omega M=\bigoplus_{n\geq1}R_n,$$ and so $M/\text{soc}^\omega M=\text{soc}_{\omega+1}M$ is one-dimensional.
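The finite stages of the filtration in this example can be made very concrete: on $R_n=\mathbb{K}[t]/(t^n)$, multiplication by $t$ acts as a nilpotent shift, and $\text{soc}^k_{R_n}R_n$ is the kernel of multiplication by $t^k$, which has dimension $\min(k,n)$. A quick numerical check of this (with $n=4$, an arbitrary choice):
import numpy as np

n = 4
T = np.eye(n, k=-1)     # multiplication by t on K[t]/(t^n) in the basis 1, t, ..., t^{n-1}
for k in range(n + 1):
    Tk = np.linalg.matrix_power(T, k)
    print(k, n - np.linalg.matrix_rank(Tk))   # dim soc^k(R_n) = min(k, n), so soc^n(R_n) = R_n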
2025-03-21T14:48:31.234746
2020-06-12T15:53:36
362881
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andreas Blass", "https://mathoverflow.net/users/6794" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:630076", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/362881" }
Stack Exchange
Starting from a point and reflecting successively in $2n+1$ points, the midpoint of the final point and the initial point is fixed Given $2n+1$ fixed points $A_1, A_2,\ldots,A_{2n+1}$ and a point $P$, let $B_1$ be the reflection of $P$ in $A_1$, $B_2$ the reflection of $B_1$ in $A_2$, ..., and $B_{2n+1}$ the reflection of $B_{2n}$ in $A_{2n+1}$. My question: Could you show that the midpoint of $PB_{2n+1}$ is a fixed point as $P$ moves? The reflection of a point $p$ in a point $u$ is given (if we identify points with vectors) by $2u-p$. So the composition of two such reflections, say about points $u$ and $v$, is given by the translation $p\mapsto2v-(2u-p)=p+2(v-u)$. By induction, the composition of reflections in $u_1,v_1,u_2,v_2,\dots u_n,v_n$ is the translation $p\mapsto p+2(V-U)$ where $V$ and $U$ are the sums of the $v_i$'s and $u_i$'s, respectively. Composing with one last reflection in a point $w$, we get $p\mapsto 2w-p-2(V-U)$. The midpoint between the initial point $p$ and the result of all these reflections is therefore $w-(V-U)=w+U-V$, which is independent of $p$. The part about identifying points with vectors isn't really needed. All the linear combinations of vectors ($p,u_i,v_i,w$) used in this calculation are in fact affine combinations, i.e., the coefficients add up to 1. So this works in any affine space. We describe the $B_i$'s vectorially. For simplicity I write $AB$ instead of $\vec{AB}$. $$ OB_1=OP+2PA_1= OP+2(OA_1-OP)=2OA_1-OP$$ $$OB_2= OB_1+2(OA_2-OB_1)= 2OA_2-OB_1=2OA_2 -2OA_1+OP $$ $$OB_3=OB_2+ 2(OA_3-OB_2)= 2OA_3-OB_2=2OA_3-2OA_2+2OA_1-OP.$$ The midpoint $M$ of $PB_3$ is given by $$OM=\frac{1}{2}(OP+OB_3)=OA_3-OA_2+OA_1. $$ The same computation works for any odd number of $A$'s.
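In coordinates, the fixed midpoint is just the alternating sum $A_1-A_2+\cdots+A_{2n+1}$, and this is easy to confirm numerically; a short Python sketch with random data (the dimension, the value of $n$, and the seed are arbitrary choices):
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((2 * n + 1, 2))          # the fixed points A_1, ..., A_{2n+1}

def last_reflection(P):
    B = P
    for a in A:
        B = 2 * a - B                            # reflect the running point in the next A_i
    return B

P1, P2 = rng.standard_normal(2), rng.standard_normal(2)
M1 = (P1 + last_reflection(P1)) / 2
M2 = (P2 + last_reflection(P2)) / 2
alt_sum = sum((-1) ** i * A[i] for i in range(2 * n + 1))   # A_1 - A_2 + ... + A_{2n+1}
print(np.allclose(M1, M2), np.allclose(M1, alt_sum))        # True True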