Dataset columns:
  added: string (date, 2025-03-12 15:57:16 to 2025-03-21 13:32:23)
  created: timestamp[us] (date, 2008-09-06 22:17:14 to 2024-12-31 23:58:17)
  id: string (length 1 to 7)
  metadata: dict
  source: string (1 value)
  text: string (length 59 to 10.4M)
2025-03-21T14:48:30.335641
2020-04-16T17:41:37
357677
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Sam Hopkins", "bof", "https://mathoverflow.net/users/154202", "https://mathoverflow.net/users/25028", "https://mathoverflow.net/users/43266", "prideout" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628177", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357677" }
Stack Exchange
Can you color a planar graph given a coloring of its triangulation? Several proofs of the four color theorem (or failed attempts) start with something like "We need only consider triangulations, because every simple planar graph is contained in a triangulation". On an intuitive level, it makes sense. The triangulation is maximally connected. Since it has more edges than the original graph, it is more difficult to find a proper coloring. By solving a harder problem, you're solving the easier problem for free. However, this still seems like an assumption to me. Suppose I have a simple planar graph G and a triangulation of G called G'. Given a proper face coloring of G', how do I construct a proper face coloring of G? If $G'$ and $G$ have the same set of vertices and every edge of $G$ is also an edge of $G'$ then a proper coloring of $G'$ is a proper coloring of $G$. I've edited my question to be more precise, thanks. The 4CT is about face coloring, not vertex coloring. I think you're mis-summarizing the basic proof strategy for the 4CT. The point is that first we observe via planar duality that coloring faces of planar graphs is equivalent to coloring vertices of (dual) planar graphs. Thus the 4CT is the statement that every planar graph has a proper vertex coloring with 4 colors. Then we can assume that the plane graph (i.e., with a fixed embedding into the plane) we are trying to vertex color is triangulated, because addition of edges only makes coloring harder. See e.g., https://en.wikipedia.org/wiki/Four_color_theorem#Summary_of_proof_ideas. Ah, thanks Sam Hopkins. I suppose my question could still be interesting outside the context of the 4CT, but you're totally right. Doh! Since every face of $G'$ borders only $3$ others, a proper face coloring of $G'$ with $4$ colors is trivial; the greedy coloring algorithm works fine. Nothing that easy is going to help you get a proper face coloring of an arbitrary planar graph $G$ with $4$ colors. Actually most triangulated graphs have a proper face coloring with $3$ colors because of Brooks' theorem.
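A minimal illustration of the greedy argument mentioned in the last comment (an editorial sketch, not from the thread; the adjacency list below is a hypothetical dual graph in which every vertex has degree at most 3, so the greedy pass never needs more than 4 colors):

```python
def greedy_coloring(adj):
    """Give each vertex the smallest color not already used by a colored neighbor."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Hypothetical example: the dual graph of a triangulation is cubic (max degree 3),
# so each vertex sees at most 3 used colors and the greedy pass uses at most 4.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(greedy_coloring(adj))  # {0: 0, 1: 1, 2: 2, 3: 3}
```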
2025-03-21T14:48:30.335820
2020-04-16T18:55:27
357684
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "David Ben-Zvi", "Pulcinella", "https://mathoverflow.net/users/119012", "https://mathoverflow.net/users/582" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628178", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357684" }
Stack Exchange
Grothendieck Riemann Roch is abelian localisation on loop spaces Abelian localisation says approximately that for a proper equivariant map $f:X\to Y$ between schemes with a $\mathbf{G}_m$ action, the pushforward on cohomology $f_*\omega$ can be computed by the pushforward on the fixed locus: $$f_{0*}\frac{i_E^*\omega}{e_T(E/X)}\ =\ \frac{i_F^*(f_*\omega)}{e_T(F/Y)},$$ see localization and conjectures from string duality (p. 5); $i_E:E\hookrightarrow X, i_F:F\hookrightarrow Y$ are the fixed loci and $f_0=f\vert_E$. $i_F^*$ is injective, so this formula specifies $f_*\omega$ uniquely. The same paper suggests the following question: For an arbitrary proper map $g:X\to Y$, can we apply abelian localisation to $\mathcal{L}X\to\mathcal{L}Y$ (loop spaces) to get Grothendieck Riemann Roch for $g$? Here (edit: the derived loop space) $\mathcal{L}(-)$ carries the usual rotation action of $S^1$. Apparently this is obvious if you can get abelian localisation to work for loop spaces. How does this ``obvious'' implication work, and does abelian localisation work for loop spaces? Another way of answering it might have something to do with the nice paper by Grigory Kondyrev and Artem Prikhodko, but they don't seem to mention abelian localisation.

What precisely do you mean by loop spaces? For the interpretation of GRR in the Kondyrev-Prikhodko story (see also the related story in https://arxiv.org/abs/1305.7175 and in https://arxiv.org/abs/1804.00879) one uses derived loop spaces, meaning maps from a homotopical version of the circle, and the relevant symmetry is not an ordinary $G_m$ action but an action of this homotopical $S^1$. Localization in this context is studied in Preygel's paper http://www.tolypreygel.com/papers/note_loop_short.pdf

@DavidBen-Zvi This paper looks great, thanks! Do you know of anywhere where abelian localisation for more general derived stacks is proven?
2025-03-21T14:48:30.335964
2020-04-16T18:57:12
357685
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Mohan Swaminathan", "https://mathoverflow.net/users/110236" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628179", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357685" }
Stack Exchange
Descent of vector bundle along branched cover of curve Suppose $\pi:C'\to C$ is a branched cover of compact Riemann surfaces such that the associated extension of function fields is Galois with group $G$ -- so that $\pi$ presents $C$ as the quotient $C'$ by the action of $G = \text{Aut}(C'/C)$. Now, let $\rho:G\to GL(W)$ be a finite dimensional complex representation of $G$. Below, we identify locally free coherent sheaves with holomorphic vector bundles. Let $\underline W$ be the trivial vector bundle on $C'$ with fibre $W$. Define the vector bundle $W^\rho$ on $C$ to be the subsheaf of $\pi_*\underline W$ whose sections over $U\subset C$ are the $G$-equivariant holomorphic functions $U' = \pi^{-1}(U)\to W$. We want to compute $c_1(W^\rho)$ as follows. We have a natural map $\varphi:\pi^*W^\rho\to\underline W$ on $C'$ (coming from the adjunction counit $\pi^*\pi_*\underline W\to\underline W$) which is an injective map of coherent sheaves. Now, by looking at the zeros of the determinant of $\varphi$ (which occur exactly at the critical points of $\pi$), we can figure out the value of $c_1(W^\rho) = \frac1{|G|}c_1(\pi^*W^\rho) = -\frac1{|G|}\cdot\dim H^0(C',\text{coker }\varphi)$. Carrying out this computation explicitly, we seem to get the following answer. Given any branch point $p\in C$ of $\pi$, pick a preimage $p'\in C'$. Let $G_{p'}\subset G$ be the (necessarily cyclic) stabilizer group of $p'$, of order $n_p$. For $0\le i<n_p$, define the numbers $w_{p,i} := \dim\text{Hom}^{G_{p'}}((T_{p'}C')^{\otimes i},W)$ and set $w_p:=\sum_{0\le i<n_p}\frac{i}{n_p}w_{p,i}\in\mathbb Q$. We then get $c_1(W^\rho) = -\sum_p w_p$, where the sum is over the branch points of $\pi$. Is the result of this computation correct and can I verify it by comparing it to some well-known/basic theorem? (I tried to calculate explicitly in local holomorphic coordinates where the map is given by $z\mapsto z^{n_p}$ and got the above answer.) I find it quite interesting that the sum of the rational numbers $w_p$ is an integer. But clearly, these numbers can be defined without referring to the vector bundle $W^\rho$ or its Chern class. Is there some direct way in which we could prove this integrality statement? This question is motivated by trying to understand the index computation (Theorem 4.1) in Chris Wendl's paper on super-rigidity and equivariant transversality (https://arxiv.org/abs/1609.09867). This computation is related to the well-known semiorthogonal decomposition of the $G$-equivariant derived category of $C'$, or equivalently, of the quotient stack $C'/G$. The latter can be thought of as the curve $C$ with a root stack of order $n_p$ structure at each of the branch point $p \in C$. The semiorthogonal decomposition takes the form $$ D(C'/G) = \Big\langle D(p_1),\dots,D(p_1),\dots,D(p_m),\dots,D(p_m), D(C) \Big\rangle, $$ where the component $D(p_i)$ repeats $n_{p_i}-1$ times. The exceptional objects corresponding to the components $D(p_i)$ can be described as follows. Let $X_i = \pi^{-1}(p_i)$ (taken with reduced structure). This is a subscheme of length $|G|/n_{p_i}$ of $C'$. Denote by $E_{i,j}$ the sheaf $\mathcal{O}_{X_i}$ with $G$-equivariant structure corresponding to the action of $G$ on $T_{x_i}C'$, where $x_i \in X_i$ is any point. Then $E_{i,1}, \dots, E_{i,n_{p_i}-1}$ are the required objects. The projection functor to the component $D(C)$ of the above semiorthogonal decomposition is equal to $$ F \mapsto \pi^{\ast}((\pi_{\ast}F)^G). 
$$ Therefore, taking $F = W$ we obtain a distingusihed triangle $$ \pi^\ast((\pi_\ast W)^G) \to W \to W' $$ where $W'$ is the projection of $W$ onto the subcategory generated by $E_{i,j}$. Note that $(\pi_\ast W)^G = W^\rho$. Therefore, $$ c_1(\pi^\ast((\pi_\ast W)^G)) = - c_1(W'). $$ It remains to note that $W'$ is an extension of the sheaves $E_{i,j}$ and the integers $w_{i,j}$ encode the multiplicities. This implies that $$ c_1(\pi^\ast(W^\rho) = \sum \left(jw_{i,j} \frac{|G|}{n_i}\right) $$ (the last factor is the length of $E_{i,j}$). Finally, to obtain $c_1(W^\rho)$ the above expression should be divided by the degree $|G|$ of the map $\pi$. Thanks, this seems to confirm the computation in question. I'll wait and see if someone can explain why the sum of the very concretely defined rational numbers $w_p$ is an integer directly (i.e., without referencing vector bundles, coherent sheaves or their Chern classes) before accepting this answer.
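As an editorial sanity check (not part of the thread): specializing the proposed formula to a hyperelliptic double cover recovers a classical fact. The notation follows the question; the only extra assumption is that the hyperelliptic involution acts on $T_{p'}C'$ by $-1$ at each ramification point.

```latex
% Hyperelliptic illustration: \pi : C' \to C = \mathbb{P}^1 a double cover
% branched at 2g+2 points, G = \mathbb{Z}/2, W = the sign character.
% At each branch point p we have n_p = 2, and
\[
  w_{p,0} = \dim\operatorname{Hom}^{G_{p'}}(\mathbf{1},\,W) = 0, \qquad
  w_{p,1} = \dim\operatorname{Hom}^{G_{p'}}(T_{p'}C',\,W) = 1, \qquad
  w_p = \tfrac12 ,
\]
\[
  c_1(W^{\rho}) \;=\; -\sum_{p} w_p \;=\; -\frac{2g+2}{2} \;=\; -(g+1),
\]
% in agreement with \pi_*\mathcal{O}_{C'} = \mathcal{O} \oplus \mathcal{O}(-g-1),
% whose anti-invariant summand \mathcal{O}(-g-1) is exactly W^\rho.
```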
2025-03-21T14:48:30.336254
2020-04-16T19:17:32
357688
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "YCor", "https://mathoverflow.net/users/14094" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628180", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357688" }
Stack Exchange
Frobenius algebras of small dimensions In Classification of commutative Frobenius algebras, Jeremy Rickard showed that there are infinitely many commutative (local without loss of generality) Frobenius algebras of vector space dimension 14 over an infinite field. Question 1: What is the smallest integer $l$ such that there are infinitely many commutative Frobenius algebras up to isomorphism of dimension $l$ (does this depend on the field?)? In case this is field independent it might be interesting to find all such algebras of a given dimension $\leq l-1$ to do some tests (as there are some open problems on such algebras). This motivates the next question: Question 2: Is there a quick way, using QPA, to obtain all finite dimensional local quiver algebras whose quiver has $r \geq 2$ loops of a given (small) vector space dimension over a finite field (with, let's say, 2 or 3 elements)? This might be used to give a classification of Frobenius algebras over the field with 2 elements for small dimensions.

In analogous problems for other algebraic structures, it often depends on the field. For instance, for nilpotent Lie algebras, in dimension $\le 5$ the classification is finite and uniform in characteristic $\neq 2,3$ for all fields, while in dimension $6$ it is still finite for algebraically closed fields, and some in the list split, over subfields $K$, as a family indexed by $K^*/(K^*)^2$. Then in dimension $\ge 7$ there are families indexed by the field itself. So your question has two distinct aspects, both of interest, one being the algebraically closed case (in large enough char.).
2025-03-21T14:48:30.336387
2020-04-16T19:32:46
357689
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "https://mathoverflow.net/users/112382", "jcdornano" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628181", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357689" }
Stack Exchange
Interpolating multivariate polynomials from their partial derivatives Let $P(x_1,\dots,x_n)$ be a multivariate polynomial over a ground field $K$. For a multi-index $\alpha=(a_1,\dots,a_n)$ we denote the partial derivative $\frac{\partial^{a_1+\dots+a_n}P}{\partial x_1^{a_1}\dots\partial x_n^{a_n}}$ by $\frac{\partial^\alpha P}{\partial\mathbf{x}^\alpha}$. Given points $\mathbf{p}_1,\dots,\mathbf{p}_m\in K^n$ and multi-indices $\alpha_1,\dots,\alpha_m$ such that the pairs $(\alpha_i,\mathbf{p}_i)$ $(1\leq i\leq m)$ are pairwise distinct, I am interested in polynomials $P(x_1,\dots,x_n)$ over $K$ that satisfy $$ \frac{\partial^{\alpha_i} P}{\partial\mathbf{x}^{\alpha_i}}(\mathbf{p}_i)=v_i\quad (1\leq i\leq m) $$ where $v_1,\dots,v_m\in K$ are chosen arbitrarily. Question: What is the minimum degree $d=d(\alpha_1,\dots,\alpha_m)$ with the property that for any choice of the values $v_1,\dots,v_m$ there exists a polynomial $P=P(x_1,\dots,x_n)$ of degree at most $d$ that satisfies these equalities? Notice that each $\frac{\partial^{\alpha_i} P}{\partial\mathbf{x}^{\alpha_i}}(\mathbf{p}_i)=v_i$ defines a hyperplane in the polynomial space since it is linear with respect to the coefficients of $P$. The intersection of these hyperplanes is the affine subspace of desired polynomials in the vector space of polynomials of $n$ variables of total degree at most $d$. Therefore, if these hyperplanes are transversal the answer is given by the smallest $d$ for which $\binom{d+n}{n}\geq m$. Special univariate cases: If $n=1$, $\alpha_1,\dots,\alpha_m$ are all zero and elements $p_1,\dots,p_m\in K$ are distinct, we are in the classical case of Lagrange interpolation; the aforementioned hyperplanes are transversal by a simple application of the Vandermonde determinant. The transversality could also be checked easily when $n=1$, $p_1=\dots=p_m$ and $\alpha_1,\dots,\alpha_m$ are distinct non-negative integers (the Taylor expansion provides a formula for $P$). In both cases the smallest $d$ is $m-1$, i.e. the first degree that satisfies $\binom{d+1}{1}\geq m$. Added: I am looking for the smallest $d$ that works. For instance, when $n=1$ it is not hard to see that large values of $d$ work: Suppose $p_1,\dots,p_m$ are distinct elements of $K$. Consider the linear map $$ P(x)\mapsto\left(\frac{{\rm{d}}^a P}{{\rm{d}}x^a}(p_i)\right)_{0\leq a<l,\, 1\leq i\leq m} $$ from the space of polynomials of degree $<d$ in $K[x]$ to $K^{lm}$. The kernel is given by the multiples of $\prod_{i=1}^m(x-p_i)^l$ which is of dimension $d-lm$ if $d\geq lm$. So the map is onto for $d$ large enough and any arbitrary set of identities $$\frac{{\rm{d}}^{a_i} P}{{\rm{d}}x^{a_i}}(p_i)=v_i\quad (a_i<l)$$ can be realized in that case. maybe this can help https://www.sciencedirect.com/science/article/pii/002186939190294I
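Since each condition $\frac{\partial^{\alpha_i}P}{\partial\mathbf{x}^{\alpha_i}}(\mathbf{p}_i)=v_i$ is linear in the coefficients of $P$, the transversality question can be probed numerically by assembling the corresponding matrix in the monomial basis and checking its rank. A rough editorial sketch (the function names and random test data are mine, not from the thread):

```python
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

def monomials(n, d):
    """Exponent vectors of all monomials in n variables of total degree <= d."""
    exps = []
    for total in range(d + 1):
        for c in combinations_with_replacement(range(n), total):
            e = [0] * n
            for i in c:
                e[i] += 1
            exps.append(tuple(e))
    return exps

def deriv_monomial(exp, alpha, p):
    """Value of the partial derivative d^alpha of the monomial x^exp at the point p."""
    val = 1.0
    for e, a, x in zip(exp, alpha, p):
        if a > e:
            return 0.0
        val *= factorial(e) / factorial(e - a) * x ** (e - a)
    return val

def interpolation_matrix(points, alphas, d):
    """Rows = conditions (alpha_i, p_i); columns = monomials of degree <= d."""
    n = len(points[0])
    exps = monomials(n, d)
    M = np.array([[deriv_monomial(e, a, p) for e in exps]
                  for a, p in zip(alphas, points)])
    return M, exps

# Hypothetical test: n = 2, three conditions. Full row rank means the three
# hyperplanes are transversal, so any values v_i can be matched in degree d.
points = [(0.0, 0.0), (1.0, 2.0), (1.0, 2.0)]
alphas = [(0, 0), (1, 0), (0, 1)]
M, exps = interpolation_matrix(points, alphas, d=1)
print(np.linalg.matrix_rank(M), "of", len(points))   # 3 of 3
```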
2025-03-21T14:48:30.336568
2020-04-16T20:44:05
357694
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Will Sawin", "https://mathoverflow.net/users/18060" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628182", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357694" }
Stack Exchange
Tate Curves and SYZ fibrations I recently looked at some of the work of Nicaise on non-archimedean SYZ, and at the end of this paper arxiv.org/pdf/1708.09637 he constructs $E^{an}$ for $E$ a Tate curve. There is a retraction $\rho : E^{an} \rightarrow S^1 $ given by collapsing the trees down to the circle. The author then remarks that this is the "simplest case of a non-archimedean SYZ fibration". In what sense is this map a fibration? The fibres are either singleton points, or a tree, unless I have misunderstood this retraction map. In the usual case of SYZ fibrations for a torus, we'd expect the fibres to be copies of $S^1$. I am new to studying this area so apologies if I am missing something obvious. $S^1$ is the set of complex numbers with norm $1$. What does the set of elements of a non-archimedean local field with norm $1$ (or the smallest connected subset of the Berkovich space that contains this set) look like? A tree!
2025-03-21T14:48:30.336676
2020-04-16T20:58:57
357695
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Johannes Schürz", "Santi Spadaro", "Taras Banakh", "https://mathoverflow.net/users/11647", "https://mathoverflow.net/users/134910", "https://mathoverflow.net/users/61536" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628183", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357695" }
Stack Exchange
Selecting an almost disjoint family in a given family of sets A family $\mathcal A$ of infinite subsets of $\omega$ is called almost disjoint if for any distinct sets $A,B\in\mathcal A$ the intersection $A\cap B$ is finite. Let $\mathfrak a'$ be the largest cardinal such that for any cardinal $\kappa<\mathfrak a'$ and any family of infinite sets $\{X_\alpha\}_{\alpha\in\kappa}\subseteq[\omega]^\omega$ there exists an almost disjoint family of infinite sets $(A_\alpha)_{\alpha\in\kappa}$ such that $A_\alpha\subseteq X_\alpha$ for all $\alpha\in\kappa$. By Proposition 6.2 in this preprint, $$\mathfrak a\le\min\{\mathfrak a^+,\mathfrak c\}\le\mathfrak a'\le\mathfrak c,$$ where $\mathfrak a$ is the smallest cardinality of a maximal infinite almost disjoint family $\mathcal A\subseteq[\omega]^\omega$. Problem. What can be said about the cardinal $\mathfrak a'$? Is it equal to $\mathfrak c$? Or maybe to $\min\{\mathfrak a^+,\mathfrak c\}$? For problems motivating this question it would be desirable to have $\mathfrak a'>\mathfrak r$. So, let us ask Question. Is $\mathfrak a'\le\mathfrak r<\mathfrak c$ consistent?

Why is it obvious that $\mathfrak{a} \leq \mathfrak{a}'$? @JohannesSchürz I agree, it is not so obvious. The proof of the inequality $\mathfrak a\le\mathfrak a'$ (even in the stronger form $\min\{\mathfrak a^+,\mathfrak c\}\le\mathfrak a'$) can be found in Proposition 6.2 of this preprint: https://www.researchgate.net/publication/340395288_Set-Theoretical_Problems_in_Asymptology Now I will correct the question appropriately. Thank you for the comment. I'm sorry, you are right, it is pretty obvious. For whatever reason, I was missing that $\mathfrak{a}$ of $[\omega]^\omega$ is the same as $\mathfrak{a}_X$ of $[X]^\omega$ for every $X \in [\omega]^\omega$...

Yes, $\mathfrak{a}'=\mathfrak{c}$. This is a special case of Theorem 2.1 in the paper "Weak saturation properties of ideals", by J. Baumgartner, A. Hajnal and A. Maté, https://www.researchgate.net/publication/267113047_Weak_saturation_properties_of_ideals @SantiSpadaro Thank you, Santi, very much for the comment and the link. If you want, you can write it as an answer and I will accept it, which will allow me to close this question as answered.
2025-03-21T14:48:30.336843
2020-04-16T21:14:31
357696
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628184", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357696" }
Stack Exchange
Asymptotics of a certain integral in singularity theory Let $f:\mathbb{C}^2\to \mathbb{C}$ be an isolated plane curve singularity. Consider the versal deformation space $\mathbb{C}^\mu$ parameterizing deformations $f_\lambda$ for $\lambda \in \mathbb C^\mu$. This has discriminant set $D \subset \mathbb C^\mu$: by construction $\lambda \in D$ if and only if $f_\lambda^{-1}(0)$ is singular. On $\mathbb C^2$ we have the standard volume form $dx \wedge dy$, and for any $f_\lambda$, we can consider the associated ``Gelfand-Leray form'' $$ \omega_\lambda = \frac{dx}{(f_\lambda)_y} = - \frac{dy}{(f_\lambda)_x}, $$ which determines a (nowhere-vanishing) holomorphic $1$-form on the level surface $f_\lambda^{-1}(0)$ for any $\lambda \not \in D$. One can study the area of the Gelfand-Leray form as a function of $\lambda$. Define for $\lambda \not \in D$: $$ A(\lambda) = \int_{f^{-1}_\lambda(0)} \omega_\lambda \wedge \overline{\omega_\lambda}. $$ Under reasonable circumstances (e.g. for Brieskorn-Pham singularities; I'm not sure how generally this holds), the area is finite. I'm interested in understanding what happens to $A(\lambda)$ as $\lambda$ tends towards $D$. When the limiting value $\lambda_0$ parametrizes a nodal curve, for instance, I'm pretty sure you can see that $A(\lambda)$ tends to infinity; I can see this same behavior explicitly in some other carefully-constructed examples. My main question is whether this must happen for all degenerations to singular curves. Consider any family $\lambda(t)$ for $t \in [0,1)$ with $\lambda(t) \in D$ if and only if $t = 0$. As $t \to 0$, must $A(\lambda(t)) \to \infty$? I would ideally hope that this question has already been answered in the literature, but I can't find any discussion of this in the book of Arnol'd--Gusein-Zade--Varchenko.
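To make the nodal case mentioned in the question explicit, here is the standard local computation (an editorial addition; the integral is cut off to the unit polydisc, the normalization $\tfrac{i}{2}\,\omega\wedge\overline\omega$ is used, and the constants are not meant to be sharp):

```latex
% Local model of a node: f_\lambda(x,y) = xy - \lambda, so on f_\lambda^{-1}(0)
\[
  \omega_\lambda \;=\; \frac{dx}{(f_\lambda)_y} \;=\; \frac{dx}{x},
\]
% and, restricting to the part of the curve with |x|,|y| \le 1 (i.e. |\lambda| \le |x| \le 1),
\[
  \frac{i}{2}\int_{\{xy=\lambda\},\ |x|,|y|\le 1}
    \omega_\lambda\wedge\overline{\omega_\lambda}
  \;=\; \frac{i}{2}\int_{|\lambda|\le|x|\le 1} \frac{dx\wedge d\bar x}{|x|^2}
  \;=\; 2\pi\,\log\frac{1}{|\lambda|}
  \;\longrightarrow\; \infty \quad (\lambda\to 0),
\]
% the logarithmic divergence expected from the vanishing cycle at the node.
```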
2025-03-21T14:48:30.336987
2020-04-16T21:17:21
357697
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gerry Myerson", "Joseph O'Rourke", "Wojowu", "https://mathoverflow.net/users/112382", "https://mathoverflow.net/users/30186", "https://mathoverflow.net/users/3684", "https://mathoverflow.net/users/6094", "jcdornano" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628185", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357697" }
Stack Exchange
Continuous map from $\mathbb C^4$ to $\mathbb R$ that changes sign under circular permutation of coordinates and that is $0$ only for squares Does there exist a continuous map $f$ from $\mathbb C^4$ to $\mathbb R$ such that: i) there exist four distinct complex numbers $a$, $b$, $c$, $d$ such that $f(a,b,c,d)f(b,c,d,a)<0$; ii) for every $(x,y,z,t)\in \mathbb C^4$, $f(x,y,z,t)=0$ implies that $x$, $y$, $z$, $t$ are the corners of a square in the complex plane? This is related to the inscribed square problem. Indeed, let's say that a polygon defined by $(a,b,c,d)$ is $f$-good if it satisfies condition i); then an approach to the inscribed square problem could be to prove that there exists $f$ such that any Jordan curve contains an $f$-good polygon.

How does $f$ being a polynomial justify the tag "model theory"? I guess for definable sets; do you think I should remove this tag? Whatever definable sets are, I do not think the tag is appropriate. The problem, if $f$ is a polynomial, is related to a first-order formula in the theory of ordered fields, isn't it? Not every question about polynomials is a question about model theory. But what does it mean for $(x,y,z,t)$ to be a square? @GerryMyerson: Perhaps those four complex numbers form the corners of a square in the complex plane? I mean they are the vertices of a square, i.e. they satisfy $|x-y| = |y-z| = |z-t|= |t-x|$ and $|x-z|=|y-t|$. I used $\mathbb C$ instead of $\mathbb R^2$ for presentation, but the same equalities hold by replacing $|a+bi|$ by $a^2+b^2$, so that the problem might be stated in first order in some more general context. See the Inscribed Square Problem: "Does every plane simple closed curve contain all four vertices of some square?" @Joseph O'Rourke, we almost posted at the same time; that is why I didn't mention your comment, but indeed, you had the right interpretation... maybe I should edit the main post to avoid misunderstanding. Thank you Joseph O'Rourke, I mention this problem in the question but I now have a link... which I am going to open right now and add to the question! @Gerry Myerson $f$ is real valued. I realized that I didn't post what I wanted, and that I previously switched the quantifier in i); I'm going to ask a new question, replacing "there exists" by "for all"... or, for an in-between question, replace "there exists" in i) by "for all", but with some condition on the type of the polygon $xyzt$; this condition should be that there is a polygon of this type in any Jordan curve (why not triangles...). I saved the question from uselessness, in case of an eventual trivial positive answer, by an edit at the end that takes into consideration and addresses what I wrote in my comment above. I also removed the tag model theory. @LSpice, thank you very much for the English grammar and presentation edit! I don't understand the upvotes on the comments, especially the one of Gerry Myerson, which does not give an example, because, as I said just after, $f$ is supposed to be real valued... Actually the question can be answered in the negative, and I'm going to explain why in an answer, in a moment. The idea is that if $abcd$ is not a square, one can find a path from it to $bcda$ such that no 4-gon on the path is a square. This is a contradiction because, as required in the hypothesis, $f(a,b,c,d)$ and $f(b,c,d,a)$ have different signs, so $f$, being continuous, has to take the value $0$ for some 4-gon on the path, and this can happen only if the 4-gon is a square.
The disproof was given by the great Anatole Khelif! Anatole has only just answered the question "do you mind if I mention your name?"; by contrast, it took him just one minute to answer, by mail and in the negative, the question of this post!
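For readers experimenting with condition ii), here is a direct transcription of the square criterion stated in the comments (equal cyclic side lengths and equal diagonals); the function name and tolerance are mine, not from the thread:

```python
def is_square(x, y, z, t, tol=1e-9):
    """Criterion from the comments: |x-y| = |y-z| = |z-t| = |t-x| and |x-z| = |y-t|,
    for complex numbers x, y, z, t taken in this cyclic order."""
    sides = [abs(x - y), abs(y - z), abs(z - t), abs(t - x)]
    diagonals = [abs(x - z), abs(y - t)]
    return (max(sides) - min(sides) < tol) and abs(diagonals[0] - diagonals[1]) < tol

print(is_square(0, 1, 1 + 1j, 1j))   # True: the unit square
print(is_square(0, 1, 1 + 1j, 2j))   # False
```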
2025-03-21T14:48:30.337282
2020-04-16T21:17:39
357698
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Mark L. Stone", "https://mathoverflow.net/users/75420" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628186", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357698" }
Stack Exchange
Log Fractional optimization problem Let $\mathbf{x}$ be a vector of $N$ variables. Then, how can I solve the following optimization problem? \begin{align} \max_\mathbf{x}&\quad \sum_{n} \log(1+\frac{x_n}{\alpha+\sum_{m}\beta_m^{(n)}x_m})\\ \text{subject to}&\quad\mathbf{A}\mathbf{x}\leq \mathbf{p}. \end{align} The constraints are linear. What about the objective function? Is it quasiconcave?

The objective is neither concave nor convex. For example, let $n=2$, $x_1 = x_2 = \alpha = \beta_1 = 1$, $\beta_2 = 3$; then the Hessian is indefinite (one positive eigenvalue, one negative eigenvalue). Either use a DC (difference of concave or convex) approach in which the non-convex term is handled iteratively (which at best produces a local optimum), or use a non-convex solver (local or global).

Here is an attempt for a special case. Let me write your problem as the following: $$ \begin{align} \max_\mathbf{x}&\quad \sum_{n} \log\left(1+\frac{x_n}{f_n(x)}\right)\\ \text{subject to}&\quad\mathbf{A}\mathbf{x}\leq \mathbf{p}. \end{align} $$ Assume the following: (i) $x> 0$, (ii) the coefficients of $f_n(x)$ are all positive $\forall ~n$, and (iii) all elements of $A$ and $p$ are positive. Firstly, using the AM-GM inequality: $$ 1+\frac{x_n}{f_n(x)} \geq 2\sqrt{\frac{x_n}{f_n(x)}}. $$ This leads to the relaxed problem (barring constants added/multiplied): $$ \begin{align} \max_\mathbf{x}&\quad \sum_{n} \log\left(\frac{x_n}{f_n(x)}\right)\\ \text{subject to}&\quad\mathbf{A}\mathbf{x}\leq \mathbf{p}, ~\mathbf{x}>0. \end{align} $$ Now, introduce variables $t_n> 0, \forall n$, such that: $$ \frac{x_n}{f_n(x)} \geq \frac{1}{t_n} \Rightarrow x^{-1}_nt_n^{-1}f_n(x) \leq 1. $$ And then the relaxed problem is equivalent to (removing $\log$ as it is monotonic): $$ \begin{align} \min_{\mathbf{x},\mathbf{t}}&\quad \prod_{n} t_n\\ \text{subject to}&\quad\mathbf{A}\mathbf{x}\leq \mathbf{p},~\mathbf{x}> 0\\ &~~~~~\mathbf{t}>0,~x^{-1}_nt_n^{-1}f_n(x) \leq 1~ \forall n. \end{align} $$ Note that the cost function and constraints are all posynomials, with the assumptions made. Thus, this is in the form of a Geometric Program (see https://en.wikipedia.org/wiki/Geometric_programming), which can be converted to a convex program. In fact, software like CVXPY will readily solve a problem in this form. As far as the relaxation gap is concerned, that still needs some thought. Anyway, hope this helps.
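A minimal editorial sketch of the geometric-program formulation above, using CVXPY's disciplined-geometric-programming mode (`gp=True`); the problem size, data, and variable names here are hypothetical and only meant to show the modelling pattern, not the answer's own code:

```python
import cvxpy as cp
import numpy as np

# Hypothetical small instance with positive data (assumptions (i)-(iii)).
rng = np.random.default_rng(0)
N = 3
alpha = 1.0
beta = rng.uniform(0.1, 0.6, size=(N, N))   # beta[n, m] plays the role of beta_m^(n)
A = rng.uniform(0.1, 1.0, size=(2, N))      # positive constraint matrix
p = np.full(2, 5.0)

x = cp.Variable(N, pos=True)
t = cp.Variable(N, pos=True)

# A x <= p, written row-wise so each left side is a posynomial.
constraints = [cp.sum(cp.multiply(A[i], x)) <= p[i] for i in range(A.shape[0])]
for n in range(N):
    f_n = alpha + cp.sum(cp.multiply(beta[n], x))   # posynomial denominator f_n(x)
    constraints.append(x[n]**-1 * t[n]**-1 * f_n <= 1)

# Minimize the monomial prod_n t_n (the relaxed objective from the answer).
prob = cp.Problem(cp.Minimize(cp.prod(t)), constraints)
prob.solve(gp=True)
print("x =", x.value, " objective =", prob.value)
```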
2025-03-21T14:48:30.337437
2020-04-16T23:17:51
357703
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Sho Banno", "Todd Eisworth", "https://mathoverflow.net/users/156285", "https://mathoverflow.net/users/18128" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628187", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357703" }
Stack Exchange
Regular limit points of possible cofinalities Let $A$ be a non-empty set of regular cardinals such that $\vert A\vert <\text{min}\ A$, and let $\{\nu_i\mid i<i_0\}\subseteq \text{pcf}\ A$ be a strictly increasing sequence of limit length $i_0$. Setting $\nu_{i_0}:=\sup\{\nu_i\mid i<i_0\}$, we have $\mathcal{J}_{<\nu_{i_0}}A=\bigcup\{\mathcal{J}_{<\nu_i}A\mid i<i_0\}$, hence there is $\nu\in\text{pcf}\ A$ such that $\nu\geq\nu_{i_0}$. But in this case, if $\nu_{i_0}$ is regular, does $\nu_{i_0}\in\text{pcf}\ A$ hold? Or is there a counterexample under axioms beyond ZFC? I see that in the special case where $A$ is an interval, this holds true. [When $A$ denotes a set of regulars, $\mathcal{J}_{<\nu}A:=\{B\subseteq A\mid (\forall D\ \text{ultrafilter on}\ A), B\in D\Rightarrow \text{cf}\ \prod A/{D}<\nu\}$.]

This is open. In fact, the situation you describe is related to one of the most important open problems in pcf theory: Can there be a progressive set of regular cardinals $A$ (that is, $|A|<\min(A)$) with the property that $pcf(A)$ contains a regular limit point? This is essentially equivalent to the question of whether $pcf(pcf(A))=pcf(A)$ for all sets of regular cardinals $A$ satisfying $|A|<\min(A)$: $pcf(pcf(A))=pcf(A)$ if $pcf(A)$ fails to have a regular (=weakly inaccessible) limit point, and if there is an example where pcf DOES have a weakly inaccessible limit point, then one can force over this model to get a progressive set $A$ for which $pcf(pcf(A))\neq pcf(A)$. Shelah talks a bit about this in his paper [Sh:666], as well as in the Analytical Guide appended to his book Cardinal Arithmetic. Back to your question: If $A$ is an interval of regular cardinals, then it is known that $|pcf(A)|<|A|^{+4}$, so that $pcf(A)$ can never have a regular limit point in this situation, and the special case you mention in your question will never arise. In the more general case, what you are asking is one of an entire family of so-called compactness questions for pcf theory that are still very much open, and seemingly still out of reach. [Sh:666] Shelah, Saharon, On what I do not understand (and have something to say). I, Fundam. Math. 166, No. 1-2, 1-82 (2000). ZBL0966.03044.

What is the idea used to force the existence of a progressive $A$ such that $\text{pcf}\ \text{pcf}\ A\not=\text{pcf}\ A$ from a progressive $B$ with a regular limit point? And where can the proof be found? I do not have access to all of my materials because I am working from home, but there is discussion in the paper I referenced above; see Conjecture 1.10 and the subsequent pages. The idea is that if $\kappa$ is a weakly inaccessible limit point of $pcf(A)$, then you can force (via a mild forcing) a function $f\in\prod pcf(A)\cap\kappa$ dominating the ground model elements of the product.
2025-03-21T14:48:30.337757
2020-04-16T23:35:24
357704
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Benjamin Techer", "Federico Poloni", "https://mathoverflow.net/users/156409", "https://mathoverflow.net/users/1898", "https://mathoverflow.net/users/33741", "leo monsaingeon" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628188", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357704" }
Stack Exchange
Maximize function on rotation matrices Let $A$ be a fixed 3-by-3 matrix and $Q$ be a rotation matrix whose yaw, pitch, and roll angles are $\phi\in[0,\pi]$, $\theta\in[0,\pi]$, and $\psi\in[0,\pi/2]$, respectively: \begin{equation} Q= \begin{bmatrix} \cos\phi \cos\theta \cos\psi - \sin\phi \sin\psi & \sin\phi \cos\theta \cos\psi + \cos\phi \sin\psi & -\sin\theta \cos\psi \\ -\cos\phi \cos\theta \sin\psi - \sin\phi \cos\psi & -\sin\phi \cos\theta \sin\psi + \cos\phi \cos\psi & \sin\theta \sin\psi \\ \cos\phi \sin\theta & \sin\phi \sin\theta & \cos\theta \end{bmatrix} \end{equation} We define the rotational transformation $B=QAQ^\top$ and a scalar \begin{equation} F(\phi,\theta,\psi)=\left|B_{12}^2-B_{21}^2\right|+\left|B_{13}^2-B_{31}^2\right|+\left|B_{23}^2-B_{32}^2\right| \end{equation} The entries $B_{ij}$ depend on $(\phi,\theta,\psi)$. Is it possible to prove the concavity of $F$ for a given $A$, or otherwise to find an example of $A$ for which $F$ is not concave? I can find the maximum of $F$ numerically for a set of random matrices $A$, but I am unable to find a rigorous argument that this maximum is indeed a global or local one.

Welcome to MO. Your question is unclear and needs more explanation: Do you fix $A$ and look at $F=F(Q)$ as a function of $Q$? Or, on the contrary, do you fix $Q$ and look at $F=F(A)$ as a function defined on the whole vector space of matrices? In the first case your question makes no sense, because the set of rotation matrices is not convex. In the second case the answer is no, there is no concavity (take $Q=Id$ and look at matrices with zero coefficients except $A_{12}$, in which case $F(A)=\frac 12 |A_{12}|^2$ is clearly not concave). I vote to close as off-topic, not research level. Thanks @leomonsaingeon for your answer. I will try to make my question clearer. I fix $A$ (which is in my case the velocity gradient tensor at a point $(x,y,z)$ in space), then I look for a new frame, obtained by rotating the laboratory frame (to get a new frame $x^*=Qx$), for which the new velocity gradient $B=QAQ^\top$ maximizes $F$. Practically, $F$ depends on $\phi$, $\theta$ and $\psi$ as variables and on $A$ as a fixed matrix.

Multiplying by $S = \operatorname{diag}(\pm 1, \pm 1, \pm 1)$ (any combination of signs) leaves the objective function unchanged. So this function has multiple maxima: if $Q$ is one, then $SQ$ is another one. If you construct a curve that joins two of these maxima, then either the function is constant on the curve (unlikely) or you'll find out that the function is not concave (more likely). Thanks @Federico for your answer. I think I didn't formulate my question correctly. My question is to find $Q$ that maximizes the objective function $F$. What I do practically is that for a fixed $A$, I write $F(\phi,\theta,\psi)$ and I look for the maximum of $F$. My concern is that I don't know whether this maximum obtained numerically is unique or not. I think what I wrote answers your question as intended. If $Q = Q(\phi_1,\theta_1,\psi_1)$ is a maximum, then $SQ = Q(\phi_2,\theta_2,\psi_2)$ is another maximum. (You will probably have to take $S$ with exactly two minus signs, because what you have is a parametrization of $SO_n$, not of $O_n$.) Just look at your function on a path from $(\phi_1,\theta_1,\psi_1)$ to $(\phi_2,\theta_2,\psi_2)$ and you will probably see a non-concave section.
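In line with the comments, a quick numerical way to see the landscape is multi-start local optimization over the three angles. This is an editorial sketch (random $A$, box bounds as in the question), not code from the thread:

```python
import numpy as np
from scipy.optimize import minimize

def rotation(phi, theta, psi):
    """The yaw-pitch-roll matrix Q(phi, theta, psi) from the question."""
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    return np.array([
        [ cphi*cth*cpsi - sphi*spsi,  sphi*cth*cpsi + cphi*spsi, -sth*cpsi],
        [-cphi*cth*spsi - sphi*cpsi, -sphi*cth*spsi + cphi*cpsi,  sth*spsi],
        [ cphi*sth,                   sphi*sth,                   cth     ],
    ])

def F(angles, A):
    B = rotation(*angles) @ A @ rotation(*angles).T
    return (abs(B[0, 1]**2 - B[1, 0]**2)
            + abs(B[0, 2]**2 - B[2, 0]**2)
            + abs(B[1, 2]**2 - B[2, 1]**2))

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
bounds = [(0, np.pi), (0, np.pi), (0, np.pi / 2)]

# Multi-start local maximization of F (minimize -F from many random starts).
results = [minimize(lambda a: -F(a, A),
                    x0=[rng.uniform(lo, hi) for lo, hi in bounds],
                    bounds=bounds, method="L-BFGS-B")
           for _ in range(30)]
best = min(results, key=lambda res: res.fun)
print("best F found:", -best.fun, "at (phi, theta, psi) =", best.x)
# Different starts typically land on several distinct maximizers, reflecting the
# sign symmetries pointed out in the comments.
```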
2025-03-21T14:48:30.338004
2020-04-17T00:27:17
357707
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "https://mathoverflow.net/users/89429", "user111" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628189", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357707" }
Stack Exchange
Reference request for the integral representation of the Hadamard product of two infinite series Define $F(x) = \sum_{n\geq 1} f_{n}x^n$ and $G(x) = \sum_{n\geq 1} g_{n}x^n$. Then the Hadamard product of $F$ and $G$ is $$H(x):=(F*G)(x) = \sum_{n\geq 1} f_{n}g_{n}x^n.$$ The author of Riesz equivalent of Riemann hypothesis and Hadamard product claims that $$H(x) = \frac{1}{2\pi}\int_{0}^{2\pi} F(\sqrt {x} e^{it})G(\sqrt {x} e^{-it}) \mathrm{d}t.$$ However, no reference/proof of this identity was given. So, does anyone know where I can find the proof/reference of this identity ? E.C. Titchmarsh, The theory of functions, Oxford University Press Section 4.6 Hadamard multiplication theorem, p.158 see also (https://math.stackexchange.com/questions/3526636/hadamard-product-of-two-generating-functions) Two derivations from operstional calculus: For $$A(x) = \sum_{n \geq 0} a_n x^n$$ and $$\widetilde{A}(x) = \sum_{n \geq 0} a_n \frac{x^n}{n!} = e^{a.x}$$ with $(a.)^n = a_n$, the Hadamard product is given by $$\sum_{n \geq 0} a_n x^n \frac{D_{x=0}^n}{n!} G(x)= \sum_{n\geq 0} a_ng_n x^n $$ with $d/dx= D_x$, or more concisely, $$\widetilde{A}(:xD_{x=0}:)G(x)= \exp(a.:xD_{x=0}:)G(x)=G(a.x)= (A*G)(x)$$ with $:xD_x:^n = x^nD_x^n$, by definition, a notational convenience. The derivatives may be coded as the Cauchy contour integrals $$g_n = \frac{D^n_{z=0}}{n!}G(z) = \frac{1}{2\pi i} \oint_{|z|<\epsilon} \frac{G(z)}{z^{n+1}}dz$$ where $\epsilon$ is less than the radii of the circles of convergence of the two series. So, with appropriate changes of variables, $$H(x)= (F*G)(x)$$ $$ = \frac{1}{2\pi i} \sum_{n \geq 0} f_n x^n \oint_{|z|<\epsilon} \frac{G(z)}{z^{n+1}}dz.$$ $$= \frac{1}{2\pi i} \oint_{|z|<\epsilon} \frac{F(\frac{x}{z})G(z)}{z}dz$$ $$= \frac{1}{2\pi i} \oint_{|z|<\alpha} \frac{F(\frac{\sqrt{x}}{z})G(z\sqrt{x})}{z}dz$$ $$= \frac{1}{2\pi} \int_0^{2\pi} F(\sqrt{x}\alpha^{-1}e^{-it})G(\sqrt{x}\alpha e^{it})dt$$ $$= \frac{1}{2\pi} \int_{0}^{2\pi} F(\sqrt{x}e^{-it})G(\sqrt{x}e^{it})dt,$$ assuming both series reps are convergent for $\alpha=1$. The last real integral is convergent for all functions bounded in the segment of integration. For some discussion of the validity of these formulas, see "Hadamard grade of power series" by Allouche and France. Alternatively, note (cf. this MSE answer) $$\exp(txD_x)f(x)=f(e^t x).$$ Then $$\frac{1}{2\pi} \int_{-\pi}^{\pi} F(ue^{-it})G(ve^{it})dt$$ $$= \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-ituD_u}e^{itvD_v} dt F(u)G(v)$$ $$= \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-it(uD_u-vD_v)} dt F(u)G(v)$$ $$=\frac{sin[\pi(uD_u-vD_v)]}{\pi(uD_u-vD_v)}F(u)G(v)$$ $$= \sum_{j,k \geq 0} sinc(\pi(j-k)) f_j g_k u^jv^k$$ $$= \sum_{k \geq 0} f_k g_k (uv)^k.$$
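A quick numerical confirmation of the contour-integral identity (an editorial sketch with hypothetical coefficients $f_n = 1/n$, $g_n = 1/n^2$, the series truncated at $N$ terms, and the integral replaced by an average over a uniform grid in $t$):

```python
import numpy as np

N = 30
n = np.arange(1, N + 1)
f = 1.0 / n          # f_n = 1/n
g = 1.0 / n**2       # g_n = 1/n^2
x = 0.7

def series(coeffs, z):
    k = np.arange(1, len(coeffs) + 1)
    return np.sum(coeffs * z**k)

direct = np.sum(f * g * x**n)                       # sum_n f_n g_n x^n

# (1/2π) ∫_0^{2π} F(√x e^{it}) G(√x e^{-it}) dt, approximated by a grid average.
M = 4096
t = 2.0 * np.pi * np.arange(M) / M
integrand = np.array([series(f, np.sqrt(x) * np.exp(1j * s)) *
                      series(g, np.sqrt(x) * np.exp(-1j * s)) for s in t])
integral = integrand.mean().real

print(direct, integral)   # the two values agree to high precision
```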
2025-03-21T14:48:30.338179
2020-04-17T00:56:22
357708
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "John", "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/174691" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628190", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357708" }
Stack Exchange
Is there a unique way to define an Euclidean Jordan Algebra as the product of two simple Euclidean Jordan Algebras? Let $A$ and $B$ be two simple Euclidean Jordan Algebras. It is well known that $A\times B$ can be made into a Jordan Algebra in a unique way by defining the operation component-wise. Is it true that the product Jordan Algebra can be made Euclidean in a unique way? (the inner product on $A\times B$ is the sum of the corresponding inner products on $A$ and $B$) To expect uniqueness, you probably want the requirement that it extends the original ones $\phi_A$ and $\phi_B$ on both $A\times{0}$ and ${0}\times B$. Otherwise you can take any positive linear combination of $\phi_A$ and $\phi_B$. My guess is that as long you require the inner product of an atomic idempotent with itself to be normalized, that the inner product is unique. This probably follows in some way from the classification theorem for Euclidean Jordan algebras.
2025-03-21T14:48:30.338272
2020-04-17T01:02:36
357709
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Yemon Choi", "https://mathoverflow.net/users/763" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628191", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357709" }
Stack Exchange
Poisson Summation Formula appears to fail when applied to Hermite Functions (why?) I came across an odd circumstance where it appears as though the poisson summation formula fails to yield a correct answer (involving Hermite Functions), and I don't quite understand why this happens. Define the family of Hermite Functions by: $$ \psi_n(x) = \frac{(-1)^n}{\sqrt{2^n n! \sqrt{\pi}}}e^{\frac{x^2}{2}}\frac{d^n}{dx^n}e^{-x^2} $$ And the Fourier Transform by: $$ \hat{f}(\xi) =\mathscr{F}\{f\}(\xi) =\frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{\infty} {f(x)e^{- i x \xi}dx} $$ It's known that Hermite Functions are eigenfunctions of the fourier transform: $$ \mathscr{F}\{\psi_n\}(x) = (-i)^n \psi_n(x) $$ Now the Poisson Summation formula states that: $$ \sum_{n = -\infty}^{\infty}f(n) = \sum_{n=-\infty}^{\infty}\hat{f}(n) $$ However, if we apply this to Hermite Functions, we arrive at: $$ \sum_{n = -\infty}^{\infty} \psi_k(n)=\sum_{n = -\infty}^{\infty} (-i)^k\psi_k(n) = (-i)^k \sum_{n = -\infty}^{\infty} \psi_k(n) $$ We know the sum converges for any choice of $k$, given that $\psi_k(x) \in S(\mathbb{R})$, where $S(\mathbb{R})$ denotes Schwartz space on the real number line (space of rapidly decreasing functions). The problem is introduced when $k = 4m+2$ for some $m \in \mathbb{Z}$. We should arrive at: $$ \sum_{n=-\infty}^{\infty} \psi_k(n) = - \sum_{n = -\infty}^{\infty} \psi_k(n) \\ \implies \sum_{n = -\infty}^{\infty} \psi_{4m + 2}(n) = 0 $$ But manually summing up (for example) $ \psi_2(n) = (4\pi)^{\frac{-1}{4}}(2n^2-1)e^{\frac{-n^2}{2}}$ appears to generate a non-zero number (around $1.331$ for this example), leading to a contradiction. Now my question is, what am I missing here? Is there some restriction on the poisson summation formula I'm not considering, or did I just make an algebra error? Any help would be appreciated. Keith Conrad has given an excellent answer already but I can't resist pointing to https://mathoverflow.net/questions/265299/the-2-pi-in-the-definition-of-the-fourier-transform/265366 (where he also makes some of the same points, but where there is also a wealth of input from some other users) Your mistake is that you are not using the correct form of the Poisson summation formula based on your choice of how to define the Fourier transform. There are multiple conventions on how to define the Fourier transform in terms of where you stick factors of $2\pi$ (in the exponent or as a coefficient) and this leads to multiple versions of the Poisson summation formula, one for each convention. Here is my suggestion: for the rest of your life, define the Fourier transform of a function $f$ in $S(\mathbf R)$ to be $(Ff)(y) = \int_{\mathbf R} f(x)e^{-2\pi ixy}\ dx$. (I can't write this as $\hat{f}(y)$ or as $(\mathscr F f)(y)$ because you decided to use both of those notations for something else.) By using $2\pi$ in the exponent in this way, you will never again be misled about the Poisson summation formula because it has a clean expression: $$ \sum_{n \in \mathbf Z} f(n) = \sum_{n \in \mathbf Z} (Ff)(n). $$ If you translate this back into your convention on what the Fourier transform is then this summation formula becomes an uglier expression: $$ \sum_{n \in \mathbf Z} f(n) = \sqrt{2\pi}\ \sum_{n \in \mathbf Z} \hat{f}(2\pi n). $$ (When you unravel the notation, these two summation formulas say exactly the same thing: $\sum_n f(n) = \sum_n (\int_{\mathbf R} e^{-2\pi inx}f(x)\ dx)$. 
For functions $\mathbf R \rightarrow \mathbf C$ there is just one Poisson summation formula but multiple notations for it in terms of "the" Fourier transform. If you did not have a factor of $1/\sqrt{2\pi}$ in your definition of the Fourier transform then you wouldn't need to multiply on the right side by $\sqrt{2\pi}$ but you'd still have to sum on the right side over Fourier transform values on $2\pi\mathbf Z$ instead of Fourier transform values on $\mathbf Z$. The simplest way to avoid factors of $2\pi$ in awkward places from a choice of how to define the Fourier transform is to place $2\pi$ in the exponent when defining the Fourier transform because when you unravel the notation that is exactly where it appears.) Let's check how this looks with $f(x)$ being your $\psi_2(x) = (2x^2-1)e^{-x^2/2}/\sqrt[4]{4\pi}$. Since $\hat{\psi_2} = -\psi_2$, the corrected Poisson summation formula for $\psi_2$ should say $$ \sum_{n \in \mathbf Z} \psi_2(n) = -\sqrt{2\pi}\sum_{n \in \mathbf Z} \psi_2(2\pi n). $$ When I calculate the sum on the left side and right sides over $-15 \leq n \leq 15$ I get the same number to lots of decimal places: $$1.3313348084818105674376253882631445973\ldots,$$ which you had found to be around $1.331$ on the left side. (The truncated sums on both sides over $-1 \leq n \leq 1$ are not close, being $0.113162\ldots$ on the left and $1.3313348\ldots$ on the right. The sum on the right converges much more rapidly because $\psi_2(2\pi n)$ has the factor $e^{-2\pi^2n^2} = e^{-19.73\ldots n^2}$ while on the left side $\psi_2(n)$ has factor $e^{-n^2/2} = e^{-.5 n^2}$. Truncations of the sum on the right already achieve the digits in the number displayed above when summing over $-2 \leq n \leq 2$, while truncations of the sum on the left don't reach that accuracy until you sum over $-13 \leq n \leq 13$.) Because of the extra factors of $2\pi$ in the ugly version of Poisson summation connected to your definition of the Fourier transform, there is no reason to think the sum of $\psi_2$ over $\mathbf Z$ should be $0$ from $\psi_2$ having its negative as its Fourier transform by your convention for the Fourier transform. Now I'll show you a genuinely vanishing sum from Poisson summation. For $a > 0$ and a nonnegative integer $m$, the $m$th derivative of $e^{-ax^2}$ is a polynomial multiple of $e^{-ax^2}$. Set $$ \frac{d^m}{dx^m}(e^{-ax^2}) = (-1)^mH_{a,m}(x)e^{-ax^2}. $$ For example, $H_{a,0}(x) = 1$, $H_{a,1}(x) = 2ax$, and $H_{a,2}(x) = 4a^2x^2 - 2a$. Here is a "general" Fourier transform formula involving the functions $H_{a.m}(x)e^{-(1/2)ax^2}$: $$ \int_{\mathbf R} H_{a,m}(x)e^{-(1/2)ax^2}e^{-iaxy}\ \frac{dx}{\sqrt{2\pi/a}} = (-i)^mH_{a,m}(y)e^{-(1/2)ay^2}. $$ I like to use $a = 2\pi$, giving us $$ \int_{\mathbf R} H_{2\pi,m}(x)e^{-\pi x^2}e^{-2\pi ixy}\ dx = (-i)^mH_{2\pi,m}(y)e^{-\pi y^2}. $$ So by my preferred definition of the Fourier transform, denoted $F$, the function $G_m(x) = H_{2\pi,m}(x)e^{-\pi x^2}$ has $F(G_m) = (-i)^mG_m$. Then by the cleaner form of Poisson summation associated to the definition of the Fourier transform that I use, $$ \sum_{n \in \mathbf Z} G_m(n) = (-i)^m\sum_{n \in \mathbf Z} G_m(n). $$ Therefore if $m$ is not a multiple of $4$ the sum on the left has to be $0$: $$ \sum_{n \in \mathbf Z} H_{2\pi,m}(n)e^{-\pi n^2} = 0. $$ When $m = 1$ this says $$ \sum_{n \in \mathbf Z} 2(2\pi)ne^{-\pi n^2} = 0, $$ which is obvious since terms at $n$ and $-n$ cancel. 
When $m = 2$ we have $H_{2\pi,2}(x) = 4\pi(4\pi x^2-1)$, so after dividing by $4\pi$ we get $$ \sum_{n \in \mathbf Z} (4\pi n^2 - 1)e^{-\pi n^2} = 0, $$ and it is not obvious that sum should be $0$. But if we calculate the sum over $-2 \leq n \leq 2$ it is already 0 to 9 decimal places, and the sum over $-4 \leq n \leq 4$ is 0 to over 30 decimal places. When $m = 3$ we get a boring sum equal to $0$ since $H_{2\pi,3}(x)$ is an odd polynomial (of degree $3$), and the case $m=5$ is uninteresting for a similar reason, but when $m = 6$ we get something not obvious again: $$ H_{2\pi,6}(x) = 64\pi^3(64\pi^3x^6 - 240\pi^2x^4 + 180 \pi x^2 - 15), $$ so we must have $$ \sum_{n \in \mathbf Z} (64\pi^3n^6 - 240\pi^2n^4 + 180 \pi n^2 - 15)e^{-\pi n^2} = 0. $$ Let $s(N)$ be the partial sum for the left side over $-N \leq n \leq N$. Then I compute $s(1) = -.637\ldots$, $s(2) = -.000001324\ldots$, $s(3)$ is $0$ to 14 decimal places, and $s(4)$ is $0$ to over 25 decimal places. Having tried to show why the version of Poisson summation using the definition of the Fourier transform that I prefer is nicer, I'll admit that the version of the Fourier transform that you prefer also admits a symmetric form of the Poisson summation formula. For $f \in S(\mathbf R)$ and $t > 0$, let $f_t(x) = f(tx)$. Then $\widehat{f_t}(y) = (1/t)\widehat{f}(y/t)$, so the ugly Poisson summation formula with $f_t$ in place of $f$ becomes $$ \sum_{n \in \mathbf Z} f_t(n) = \sqrt{2\pi}\ \sum_{n \in \mathbf Z} \hat{f_t}(2\pi n), $$ which says $$ \sum_{n \in \mathbf Z} f(tn) = \frac{\sqrt{2\pi}}{t}\ \sum_{n \in \mathbf Z} \hat{f}(2\pi n/t). $$ To make this look nice we want $t = 2\pi/t$, so $t = \sqrt{2\pi}$. Using that value for $t$, the previously ugly form of Poisson summation becomes the nicer-looking formula $$ \sum_{n \in \mathbf Z} f(\sqrt{2\pi}n) = \sum_{n \in \mathbf Z} \hat{f}(\sqrt{2\pi}n). $$ If you use $f(x) = \psi_2(x) = (2x^2-1)e^{-x^2/2}/\sqrt[4]{4\pi}$, for which $\widehat{\psi_2} = -\psi_2$ then we must have $$ \sum_{n \in \mathbf Z} \psi_2(\sqrt{2\pi}n) = 0, $$ so that is the vanishing sum you should have found instead of thinking $\sum_{n \in \mathbf Z} \psi_2(n)$ is $0$. And this vanishing sum is something we saw already: $\psi_2(\sqrt{2\pi}x) = (4\pi x^2-1)e^{-\pi x^2}/\sqrt[4]{4\pi}$, which up to scaling by $\sqrt[4]{4\pi}$ is $H_{2\pi,2}(x)$ from above, so this vanishing sum is up to scaling the earlier vanishing sum $\sum_{n \in \mathbf Z} (4\pi n^2-1)e^{-\pi n^2}$.
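A short numerical check of the identities in this answer (an editorial addition; the truncation range is chosen ad hoc):

```python
import numpy as np

def psi2(x):
    """psi_2(x) = (2x^2 - 1) e^{-x^2/2} / (4*pi)^{1/4}, as in the question."""
    return (2.0 * x**2 - 1.0) * np.exp(-x**2 / 2.0) / (4.0 * np.pi) ** 0.25

n = np.arange(-50, 51)

lhs = psi2(n).sum()                                        # ~ 1.3313348...
rhs = -np.sqrt(2.0 * np.pi) * psi2(2.0 * np.pi * n).sum()  # corrected Poisson identity
zero = psi2(np.sqrt(2.0 * np.pi) * n).sum()                # ~ 0, the genuinely vanishing sum

print(lhs, rhs, zero)
```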
2025-03-21T14:48:30.338790
2020-04-17T01:09:58
357711
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "François Brunault", "Wilberd van der Kallen", "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/4794", "https://mathoverflow.net/users/6506", "https://mathoverflow.net/users/91826", "qqqqqqw" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628192", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357711" }
Stack Exchange
Abelianization of general linear group of a polynomial ring For $K$ a field, is it known what the abelianization of $GL_2(K[X])$ is? X is a polynomial variable. K[X] is not a field extension. Is it easy to determine the relations? "Determining the relations", which is a sloppy way to mean describe defining relators with respect to given (not yet here) generators, is much harder than describing the abelianization. One may use Nagao's Theorem. Especially when the field has only two elements, as the easy way is then blocked. Indeed I should have quoted Nagao (I referred to the Serre proof). J-P. Serre duly quotes Nagao (J. Poly. Ozaka Univ. 1959) who originally proved the amalgamation decomposition. Serre (Astérisque 1977) provided a new proof using action on trees (now known as Bass-Serre theory). This is related to algebraic $K$-theory: given a ring $R$, the Whitehead group $K_1(R)$ is defined as the abelianization of $\mathrm{GL}\infty(R) = \bigcup{n \geq 1} \mathrm{GL}_n(R)$. If $R$ is an Euclidean domain then $K_1(R) \cong R^\times$. You can read about these things in Milnor's Introduction to algebraic $K$-theory. There is a discussion on whether $K$ has two elements or is larger, which strongly affects the conclusion. One has the determinant map $\mathrm{GL}_2(K[X])\to K^*$. To show that it's the abelianization, it's enough to show that the kernel $\mathrm{SL}_2(K[X])$ is contained in the derived subgroup of $\mathrm{GL}_2(K[X])$. Since $K[X]$ is a Euclidean domain, $\mathrm{SL}_2(K[X])$ is generated by elementary matrices $e_{12}(y)$ and $e_{21}(y)$ where $y$ ranges over $K[X]$, $e_{ij}(y)=I_2+yE_{ij}$, and $(E_{ij})_{1\le i,j\le 2}$ is the canonical basis of the space of matrices. For $t\in K^*$, write $d_1(t)=tE_{11}+E_{22}$. Then $d_1(t)e_{12}(y)d_1(t)^{-1}=e_{12}(ty)$, and in particular, the commutator (with suitable conventions) is $e_{12}((t-1)y)$. If $K$ is not reduced to the field on 2 elements, one concludes that $e_{12}(y)$ (and similarly $e_{21}(y)$) is a commutator for each $y$, since one can choose $t\in K^*$ such that $t-1$ is a nonzero scalar. In this case, we deduce that: For $|K|\ge 3$ the abelianization of $\mathrm{GL}_2(K[X])$ is the determinant map onto $K^*$. [Side note: the argument also shows that if $|K|\ge 4$ then $\mathrm{SL}_2(K[X])$ is a perfect group: indeed one then chooses $t\in K^*$ with $t^2\neq 1$ and use $c_{12}(t)=tE_{11}+t^{-1}E_{22}$ instead of $d_t$, which satisfies $c_{12}(t)e_{12}(y)c_{12}(t)^{-1}=e_{12}(t^2y)$.] It remains to deal with the case when $K$ is the 2-element field $\mathbf{F}_2$, for which we have $\mathrm{GL}_2(\mathbf{F}_2[X])=\mathrm{SL}_2(\mathbf{F}_2[X])$. In this case the determinant map is trivial. But yet the abelianization map is non-trivial (in contrast to $n\ge 3$ where $\mathrm{SL}_n(K[X])$ is a perfect group for every field $K$). Indeed, it is known (for $K$ any field) that $\mathrm{GL}_2(K[X])$ is the amalgamated product of its subgroups $\mathrm{GL}_2(K)$ and $B(K[X])$ over their intersection $B(K)$. Here $B$ is the upper triangular group. A reference for this nontrivial fact is Serre's Astérisque book "Arbres, amalgames, $\mathrm{SL}_2$", or its English translation "Trees". Let us specify the latter to $K=\mathbf{F}_2$: in this case, $\mathrm{GL}_2(\mathbf{F}_2)$ is a non-abelian group of order $6$, while $B(\mathbf{F}_2[X])$ is the upper unipotent (abelian) group consisting of those matrices $e_{12}(y)$ where $y$ ranges over $K[X]$. 
Let me write presentations for these two groups: $$\mathrm{GL}_2(\mathbf{F}_2)=\langle z,x_0\mid z^2=x_0^2=(zx_0)^3=1\rangle$$ $$B(\mathbf{F}_2[X])=\langle x_i:i\ge 0\mid x_i^2=[x_i,x_j]=1:i,j\ge 0\rangle$$ These presentations are arranged so that $x_0$ represents the element $e_{12}(1)$, which generates the amalgamated subgroup (of order two). Hence a presentation of the amalgamated product is obtained by amalgamating the presentations: $$\mathrm{GL}_2(\mathbf{F}_2[X])=\langle z, x_i:i\ge 0\mid z^2=(zx_0)^3=x_i^2=[x_i,x_j]=1:i,j\ge 0\rangle.$$ To abelianize we just have to read this as presentation of abelian group. We know that $\mathrm{GL}_2(\mathbf{F}_2)$ abelianizes to a cyclic group of order $2$ in which $x_0$ and $z$ are identified, and we get $$\mathrm{GL}_2(\mathbf{F}_2[X])_{\mathrm{ab}}=\langle x_i:i\ge 0\mid x_i^2: i\ge 0\rangle_{\mathrm{AbGrp}}:$$ this freely generated by $(x_i)_{i\ge 0}$ as 2-elementary abelian group, and $x_i$ is the image of $e_{12}(X^i)$ in the abelianization map. [Side note: given that the derived subgroup of the $24$-element group $\mathrm{SL}_2(\mathbf{F}_3)$ has index $3$, similarly using an amalgam decomposition yields that the abelianization of $\mathrm{SL}_2(\mathbf{F}_3[X])$ is the free 3-elementary abelian group on the images of $e_{12}(X^i)$, for $i\ge 0$.] As an alternative to YCor's beautiful answer, one can use the following theorem of P. M Cohn [1, Theorem 9.5]. Theorem. Let $R$ be a ring which is quasi-free for $\text{GE}_2$ and denote by $N$ the ideal generated by all $\alpha - 1$ with $\alpha \in U(R)$. Then there is a split exact sequence $$0 \rightarrow R/N \rightarrow \text{GE}_2(R)^\text{ab} \rightarrow U(R)^\text{ab} \rightarrow 0.$$ and the mapping $\alpha \mapsto \begin{pmatrix} \alpha^\text{ab} & 0 \\ 0 & 1 \end{pmatrix}$ induces a splitting. In the above statement $U(R)$ denotes the unit group of $R$, $G^\text{ab}$ denotes the abelianization of a group $G$ and $g \mapsto g^\text{ab}$ the abelianization homomorphism (in particular in the case of $U(R)\to U(R)^\text{ab}$), $\text{GE}_2(R)$ is the subgroup of $\text{GL}_2(R)$ generated by the elementary matrices $\begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}$, $\begin{pmatrix} 1 & 0 \\ s & 1 \end{pmatrix}$ with $r,s \in R$ and the diagonal matrices $\begin{pmatrix} \alpha & 0 \\ 0 & \beta \end{pmatrix}$ with $\alpha, \beta \in U(R)$. I will not expand the definition of a quasi-free ring for $\text{GE}_2$ but just mention that a discretely normed ring is quasi-free for $\text{GE}_2$. In particular $K[X]$ is quasi-free for $\text{GE}_2$ if $K$ is a field. Note that if $\text{SL}_2(R)$ is generated by the elementary matrices, e.g. $R$ is Euclidean, then $\text{GE}_2(R) = \text{GL}_2(R)$. This holds in particular for $R = K[X]$ with $K$ a field since $K[X]$ is Euclidean (only) in this case. If $K$ is the field with two elements, then the above theorem yields an isomorphism of the additive group of $K[X]$ with $\text{GL}_2(K[X])^\text{ab}$. If $K$ is a field with more than two elements, the same theorem yields an isomorphism of $\text{GL}_2(K[X])^\text{ab}$ with $U(K)$. [1] P. M. Cohn, "On the structure of the $\mathrm{GL}_2$ of a ring", 1966 (MSN).
2025-03-21T14:48:30.339214
2020-04-17T02:04:41
357712
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Wolfgang", "https://mathoverflow.net/users/29783" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628193", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357712" }
Stack Exchange
Bell polynomial with variables 1 and 0 Let $B_{n,k}(x_1,\cdots,x_{n-k+1})$ be the Bell polynomial. If $x_1=\cdots=x_{n-k+1}=1$, we know that $B_{n,k}(x_1,\cdots,x_{n-k+1})=S(n,k)$, where $S(n,k)$ is the Stirling number of the second kind. Now, let $$y_i=\begin{cases}1,& i\leq r\\0,& r<i\leq n-k+1\end{cases} \text{,}$$ where $r$ is an integer with $1\leq r<n-k+1$. Can we find a simple formula to express $G_r(n,k):= B_{n,k}(y_1,\cdots,y_{n-k+1})$ as a $\mathbb{Z}$-linear combination of Stirling numbers of the second kind? Theoretically, we can use the formula $$B_{n,k}=\sum_{i=1}^{n-k+1}\binom{n-1}{i-1}x_iB_{n-i,k-1}$$ recursively, but I wonder if there is a simpler choice. Or, at least, is there any consequence involving this $G_r(n,k)$? The other question is that if $r$ is small enough, one has $G_r(n,k)=0$. Now let $$H(n,k):=\max\{1\leq r<n-k+1 \mid G_r(n,k)=0\} \text{;}$$ what can we say about $H(n,k)$? The trivial bound is $H(n,k)\geq \frac{n}{k}$.

Supposedly, you have found something nice for $r=n-k$ to begin with. May you share it?

From the generating function $$\sum_{n\geq k \geq 0} B_{n,k}(x_1,\ldots,x_{n-k+1}) \frac{t^n}{n!} u^k = \exp\left( u \sum_{j=1}^\infty x_j \frac{t^j}{j!} \right)$$ it follows that $$\sum_{n\geq k \geq 0} G_r(n,k) \frac{t^n}{n!} u^k = \exp\left( u \sum_{j=1}^r \frac{t^j}{j!} \right) = \prod_{j=1}^r \exp\left( u \frac{t^j}{j!} \right),$$ from where we can get an explicit formula: $$G_r(n,k) = \sum_{i_1+\dots+i_r = k\atop i_1+2i_2+\dots+ri_r=n} \frac{n!}{i_1!\cdots i_r!\, 1!^{i_1} 2!^{i_2} \cdots r!^{i_r}}.$$ It also follows that $G_r(n,k)=0$ iff $kr<n$, that is, $H(n,k) = \left\lfloor \frac{n-1}{k}\right\rfloor$.
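A short computational check of the vanishing threshold $H(n,k)=\lfloor (n-1)/k\rfloor$, using the recursion quoted in the question with $x_i=y_i$ (plain Python; the function names are ad hoc):

```python
# Compute G_r(n,k) = B_{n,k}(y_1,...,y_{n-k+1}) with y_i = 1 for i <= r, 0 otherwise,
# via the standard recursion for partial Bell polynomials, and check that the largest
# r in [1, n-k) with G_r(n,k) = 0 equals floor((n-1)/k).
from math import comb
from functools import lru_cache

def G(r, n, k):
    @lru_cache(maxsize=None)
    def B(n, k):
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        # x_i = 0 for i > r, so the sum stops at min(r, n - k + 1)
        return sum(comb(n - 1, i - 1) * B(n - i, k - 1)
                   for i in range(1, min(r, n - k + 1) + 1))
    return B(n, k)

for n in range(2, 12):
    for k in range(1, n):
        zero_rs = [r for r in range(1, n - k + 1) if G(r, n, k) == 0]
        if zero_rs:
            assert max(zero_rs) == (n - 1) // k, (n, k, zero_rs)
print("H(n,k) = floor((n-1)/k) checked for all 2 <= n <= 11")
```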
2025-03-21T14:48:30.339340
2020-04-17T02:32:12
357713
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "VS.", "https://mathoverflow.net/users/136553" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628194", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357713" }
Stack Exchange
Small linear relations in unbalanced diophantine equations from primitive Pythagorean triples $r$ is parameter. Pick coprime $m,n\in[r,2r]$ with $mn$ even. Consider the Linear Diophantine Equation $$a^4u+b^4v+c^2z=0$$ where $a=m^2-n^2$, $b=2mn$ and $c=m^2+n^2$. Is it true that there are constants $$\alpha,\beta,\gamma,\delta>0$$ such that $$|u|,|v|<\alpha r^2\implies|z|>\beta r^6$$ $$|z|<\gamma r^6\implies|u|+|v|>\delta r^2$$ holds? I think above is true for the following reason: $a^4u+b^4v\bmod c^2$ seems to admit enough room to get $|u|,|v|>c>r^2$. Then since $a^4|u|,b^4|v|>r^{10}$ then it seems $r^6$ should be the lower bound for $|z|$. How to show this formally is unclear to me. I tried playing with $a^4=m^8-4m^6n^2+6m^4n^4-4m^2n^6+n^8$ and $b^4=16m^4n^4$ and $c^2=m^4+2m^2n^2+n^4$. I can't seem to nail down enough relations to make a formal proof as done in Small linear relations between primitive Pythagorean triples $\mathsf{II}$. The relations I found gave following basis for solution space to $a^4u+b^4v+c^2z=0$: $$v_1=(u,v,z)=(2m^2n^2,m^4+n^4,-2m^2n^2(m^4+2m^2n^2+n^4))=(2m^2n^2,m^4+n^4,-2m^2n^2(m^2+n^2)^2)$$ $$v_2=(u,v,z)=(8m^2n^2,3(m^4+n^4)-2m^2n^2,-8m^2n^2(m^4+n^4))=(8m^2n^2,2(m^4+n^4)+(m^2-n^2)^2,-8m^2n^2(m^4+n^4)).$$ It is unclear that if these are the shortest basis. It is not clear from this how to prove 1. even though these basis satisfy 1. In general is there algebraic methods to recover formal relations that guarantee reduced basis for $2$ and $3$ dimensional cases which will help looking for the full integral complement in null space so that lattice methods could be utilized as done in Small linear relations between primitive Pythagorean triples $\mathsf{II}$? Lenstra-Lenstra-Lovasz suffices for 2.. However I think it will be an overkill here. Perhaps there is an algebraic technique? Whatever the reduced basis should be is not clear to me. $(3v_1-v_2)/(2m^2n^2)$ gives a shorter basis $(-1,1,(m^4+n^4-6m^2n^2))$. $(3v_1-v_2)/(2m^2n^2)$ is $(-1,1,a^2-b^2)$. Consider the Linear Diophantine Equation $$a^{2t}u+b^{2t}v+c^2z=0$$ where $t\geq2$. There is always a $(u,v,z)\neq(0,0,0)$ solution with $\|(u,v,z)\|_\infty=O(r^{4(t-1)})$ since $c^2|(a^{2t}-b^{2t})$ and we can take $(u,v,z)=(-1,1,\frac{a^{2t}-b^{2t}}{c^2})=(-1,1,\frac{a^{2t}-b^{2t}}{a^2+b^2})$. Thus for $a^{4}u+b^{4}v+c^2z=0$ there is a solution with $|u|,|v|=1$ and $|z|\leq 64r^4$ if $m,n\in\mathbb Z$ are coprime in $[r,2r]$ with $mn$ even and $a=m^2-n^2$, $b=2mn$ and $c=m^2+n^2$.
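A quick numerical check of the $t=2$ case stated at the end: since $a^2+b^2=c^2$, one has $a^4-b^4=(a^2-b^2)c^2$, so $(u,v,z)=(-1,1,a^2-b^2)$ solves $a^4u+b^4v+c^2z=0$ with $|z|\le 64r^4$. A small Python sketch (the helper `sample` is ad hoc; only this special case is verified, not the general claims of the question):

```python
# Verify (u,v,z) = (-1, 1, (a^4 - b^4)/c^2) and the bound |z| <= 64 r^4
# on random coprime m, n in [r, 2r] with mn even.
from math import gcd
from random import randint

def sample(r):
    while True:
        m, n = randint(r, 2 * r), randint(r, 2 * r)
        if gcd(m, n) == 1 and (m * n) % 2 == 0:
            return m, n

for _ in range(1000):
    r = randint(2, 10**4)
    m, n = sample(r)
    a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
    assert a * a + b * b == c * c                      # Pythagorean identity
    z = (a**4 - b**4) // (c * c)
    assert z == a * a - b * b
    assert -a**4 + b**4 + c * c * z == 0               # the linear relation
    assert abs(z) <= 64 * r**4                         # size bound
print("relation and bound verified on random samples")
```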
2025-03-21T14:48:30.339618
2020-04-17T04:07:05
357718
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "DSM", "https://mathoverflow.net/users/155380", "https://mathoverflow.net/users/44191", "user44191" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628195", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357718" }
Stack Exchange
Shortest path in a k-partite graph which passes through each partition exactly once Suppose $G$ is a connected undirected positive edge-weighted k-partite graph, and let $G_1, \cdots, G_k$ be the vertex partitions. Also assume that $G_1$ and $G_k$ have exactly one vertex each, call them $v_1$ and $v_k$ respectively. How does one determine if there exists a path between $v_1$ and $v_k$ which passes through each partition exactly once? And if it's given that such paths exist, how does one find a shortest (sum of edge weights) one?

Presumably, you want some sort of efficient way to do so? Otherwise, brute force is an obvious method. If so, you're going to need to define "efficient". Yes, I am looking for a polynomial time algorithm; better if it can be done in O($n^2$), n being the number of vertices in $G$. Is $k$ presumed constant as $n$ grows, then? If not, then how is the time allowed to depend on $k$? Sorry for the ambiguity in my earlier comment. For a fixed $k$, a quadratic complexity algorithm would also be great. Ideally, I would want the complexity to be polynomial in both $n$ and $k$. There's a trivial $O(n^2 (k - 1)!)$ algorithm: list every permutation of $\{2, 3, \dots, k - 1\}$, and for each of them, only allow paths visiting the partitions in that order. I doubt it's possible to get polynomial in both $n$ and $k$, as that should solve the traveling salesman problem (by letting $k = n$). Thank you. I get your point, although the (k-1)! term is not very encouraging. :)
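A sketch of the trivial $O(n^2(k-1)!)$ algorithm from the comments, reading "passes through each partition exactly once" as "uses exactly one vertex from each partition" (the function and variable names are ad hoc, and the toy instance at the bottom is only illustrative):

```python
# For every ordering of the middle partitions, do a layered relaxation that
# only uses edges between consecutive partitions in that ordering; each
# ordering costs O(n^2), and there are (k-2)! <= (k-1)! orderings.
from itertools import permutations

INF = float('inf')

def shortest_partition_path(parts, weight):
    # parts: list of k vertex lists; parts[0] = [v1], parts[-1] = [vk].
    # weight: dict mapping frozenset({u, w}) -> positive edge weight.
    k = len(parts)
    best = INF
    for order in permutations(range(1, k - 1)):
        layers = [parts[0]] + [parts[i] for i in order] + [parts[-1]]
        dist = {parts[0][0]: 0.0}
        for a, b in zip(layers, layers[1:]):
            dist = {w: min((dist[u] + weight.get(frozenset((u, w)), INF)
                            for u in a if u in dist), default=INF)
                    for w in b}
            dist = {w: d for w, d in dist.items() if d < INF}
        best = min(best, dist.get(parts[-1][0], INF))
    return best  # INF means no such path exists

# toy example: k = 4, partitions {s}, {a, b}, {c, d}, {t}
parts = [['s'], ['a', 'b'], ['c', 'd'], ['t']]
weight = {frozenset(e): w for e, w in
          [(('s', 'a'), 1), (('a', 'c'), 1), (('c', 't'), 1),
           (('s', 'd'), 5), (('d', 'b'), 1), (('b', 't'), 1)]}
print(shortest_partition_path(parts, weight))
```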
2025-03-21T14:48:30.339735
2020-04-17T05:19:40
357720
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ARG", "https://mathoverflow.net/users/18974" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628196", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357720" }
Stack Exchange
relative transformation of coordinates on a flat surface I have a few coordinates that form a triangle. I have a point defined relative to that triangle. If the coordinates get translated to a new triangle I want to calculate the new relative point. How do I do this generally, not only for 2 dimensions but for larger ones too?

# triangle translation
(5, 2) -> (2, -3)
(2, -3) -> (-3, 6)
(-3, 6) -> (6, 5)
# relative point
(5, -7) -> (x, y)

How do I solve for x, y?

EDIT: As I've done more research into basic geometric transformations I can see that I'm asking for a generalization of all possible transformations of the space. This is not merely a rotation, or a dilation, etc. This particular example rotates the space then stretches it at an increasing rate. So what is the generalized solution for calculating spatial transformations?

This question seems to be more appropriate for MathSE. I'm not too sure what you are asking, but a generic affine transformation has the form $\begin{pmatrix}a&b\\c&d\end{pmatrix} \vec{x} + \begin{pmatrix}e\\f\end{pmatrix}$. There are 6 unknown parameters ($a,b,c,d,e$ and $f$). From your three points you get 3 pairs of equations, so you have 6 equations with 6 parameters, and you just need to solve.
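A small numpy sketch of the recipe in the last comment: set up the six linear equations for the unknown affine map from the three vertex correspondences and then apply the fitted map to the extra point (the variable names are ad hoc):

```python
# Fit x -> A x + t from the three vertex correspondences, then map (5, -7).
import numpy as np

src = np.array([[5, 2], [2, -3], [-3, 6]], dtype=float)
dst = np.array([[2, -3], [-3, 6], [6, 5]], dtype=float)

# Unknowns (a, b, e, c, d, f) with  x' = a x + b y + e,  y' = c x + d y + f.
M = np.zeros((6, 6))
rhs = np.zeros(6)
for i, (p, q) in enumerate(zip(src, dst)):
    M[2 * i, 0:3] = [p[0], p[1], 1.0]       # equation for x'
    M[2 * i + 1, 3:6] = [p[0], p[1], 1.0]   # equation for y'
    rhs[2 * i], rhs[2 * i + 1] = q

a, b, e, c, d, f = np.linalg.solve(M, rhs)
A = np.array([[a, b], [c, d]])
t = np.array([e, f])

point = np.array([5.0, -7.0])
print(A @ point + t)   # image of the "relative point" under the fitted map
```

In $d$ dimensions the same setup works with $d+1$ affinely independent correspondences and a $(d^2+d)\times(d^2+d)$ linear system.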
2025-03-21T14:48:30.339833
2020-04-17T06:17:56
357723
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "GH from MO", "Gerry Myerson", "https://mathoverflow.net/users/11919", "https://mathoverflow.net/users/3684" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628197", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357723" }
Stack Exchange
Question about proof of irrationality of $\zeta(3)$ I'm reading this article of Henri Cohen about Apéry's proof of the irrationality of $\zeta(3)$ but I don't really get the details of "THEOREME 1".

My first doubt is about the relation $a_n \sim A \alpha^n n^{-3/2}$. I know that if $a_n$ fulfilled the relation $a_n-34a_{n-1}+a_{n-2}=0$ then, as its characteristic polynomial is $x^2-34x+1$ and as $\alpha$ is one of its roots, if we denote by $\bar{\alpha}$ the second root, then we would have $a_n=A_1\alpha^n+A_2\bar{\alpha}^n$. Then, as $0<\bar{\alpha}<1$, we have that $a_n/\alpha^n \longrightarrow A_1$. However, the relation for $a_n$ is $$a_n-(34-51n^{-1}+27n^{-2}-5n^{-3})a_{n-1}+(n-1)^3n^{-3}a_{n-2}=0$$ and I don't know how we can rigorously take care of the extra terms. Furthermore, how does one get the extra $n^{-3/2}$ term?

Secondly, why does that relation imply that $\zeta(3)-a_n/b_n = O(\alpha^{-2n})?$

After that, it says that it can be shown from the prime number theorem that $\log d_n \sim n$ where $d_n=\text{lcm}(1,2, \cdots, n)$. I've managed to prove that $$\dfrac{\log d_n}{n} \leq \pi(n) \dfrac{\log n}{n} $$ but I'm not able to prove that $\log d_n/n$ converges to $1$.

Lastly, I don't know how from this last result it is true that for any $\varepsilon >0$ we get $$\zeta(3)-\dfrac{p_n}{q_n}=O(q_n^{-r-\varepsilon})$$ I'm not really good at asymptotic behaviour and big-O notation so I would really appreciate if someone could answer with rigorous and detailed explanations. Thank you very much.

There is a difference between "not getting the details" and "having doubt about the proof". @GHfromMO I think it depends on where you are. I think that in Indian English, "I have a doubt" means "I don't understand". See, e.g., https://ell.stackexchange.com/questions/91043/i-have-a-doubt-v-im-in-doubt @GerryMyerson: Thanks for the information!
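To see the recurrence in action numerically: both Apéry sequences satisfy the three-term recurrence quoted above, and $a_n/b_n$ converges to $\zeta(3)$ very fast. A short Python sketch using exact rational arithmetic; the initial values $a_0=0$, $a_1=6$, $b_0=1$, $b_1=5$ are the usual normalization (if Cohen's note uses a different one, only the labels change):

```python
# n^3 u_n = (34 n^3 - 51 n^2 + 27 n - 5) u_{n-1} - (n-1)^3 u_{n-2}
# (the relation in the question, before dividing by n^3).
from fractions import Fraction

def apery(u0, u1, N):
    u = [Fraction(u0), Fraction(u1)]
    for n in range(2, N + 1):
        u.append(((34 * n**3 - 51 * n**2 + 27 * n - 5) * u[n - 1]
                  - (n - 1)**3 * u[n - 2]) / Fraction(n**3))
    return u

N = 10
a = apery(0, 6, N)   # numerators (rational)
b = apery(1, 5, N)   # denominators (integers 1, 5, 73, 1445, ...)
zeta3 = 1.2020569031595943  # reference value of zeta(3)
for n in (2, 5, 10):
    approx = float(a[n] / b[n])
    print(n, approx, abs(approx - zeta3))
```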
2025-03-21T14:48:30.339965
2020-04-17T07:30:24
357727
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628198", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357727" }
Stack Exchange
Reference request: Category of finite dimensional representations of loop algebra is not semisimple For $\mathfrak{g}$ a semisimple Lie algebra, we may define its (untwisted) loop algebra as $L(\mathfrak{g}) = \mathfrak{g} \otimes \mathbb{C}\lbrack t,t^{-1} \rbrack$. Let $\mathcal{F}$ be the category of finite-dimensional $L(\mathfrak{g})$-modules. It is stated throughout the literature (e.g. [1]) that $\mathcal{F}$ is not semisimple. While it is straightforward to find an object in $\mathcal{F}$ that is reducible but not decomposable, I would like to know whether a purely categorical/algebraic (i.e. abstract) proof of this fact is known? [1] Senesi, P. 2009. Finite-dimensional Representation Theory of Loop Algebras: A Survey. (https://arxiv.org/abs/0906.0099).
2025-03-21T14:48:30.340045
2020-04-17T08:22:48
357730
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carlo Beenakker", "Henri Cohen", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/81776" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628199", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357730" }
Stack Exchange
Why is $-\int_{-\infty}^\infty \log\left[1+2f'(x)(1-\cos\phi)\right]\,dx$ equal to $\phi^2$? I came across this integral involving the derivative $f'(x)$ of the Fermi function $f(x)=(1+e^x)^{-1}$: $$I(\phi)=-\int_{-\infty}^\infty \log\left[1+2f'(x)(1-\cos\phi)\right]\,dx.$$ I'm pretty certain that $I(\phi)=\phi^2$ for $|\phi|<\pi$, periodically repeated, as in the plot. It means that only the ${\cal O}(\phi^2)$ term in a Taylor expansion of the integrand has a nonzero contribution, but I am unable to prove this. (Mathematica returns a polylog function, and will not simplify it further.) Any help would be much appreciated, I'm hoping such a simple answer will have a simple derivation --- perhaps without needing special functions? Why do you think it is $\phi^2$ ? Numerically, it does not seem so, I cannot reproduce your graph, and indeed the result is a combination of polylogs. apologies, typo corrected (minus sign); my colleague Omrie Ovdat just noticed that if you replace $\cos\phi$ by $\cos i\phi$, then Mathematica returns $-\phi^2$ --- so the inverted parabola without the periodic repetition. @user82588 -- presumably that is how Mathematica arrives at the answer; since the result is so simple, I was actually hoping there would be a more direct way of obtaining it, without the need for special functions, in particular because I hope to gain understanding for why only the $\phi^2$ survives integration --- in the physics problem where this integral appeared, the quadratic answer is meaningful and I would hope to understand its origin. It is clear that $I(0)=0$, hence by Leibniz's integration rule, it suffices to show that $$\int_{-\infty}^\infty\frac{-2f'(x)\sin\phi}{1+2f'(x)(1-\cos\phi)}\,dx=2\phi,\qquad |\phi|<\pi.$$ We can assume, without loss of generality, that $0<\phi<\pi$. By a bit of algebra, we can rewrite the last equation as $$\int_{-\infty}^\infty\frac{e^x\sin\phi}{e^{2x}+2(\cos\phi)e^x+1}\,dx=\phi.$$ By the change of variables $u=(e^x+\cos\phi)/\sin\phi$, this becomes $$\int_{\frac{\cos\phi}{\sin\phi}}^\infty\frac{du}{u^2+1}=\phi,$$ which in turn can be verified by calculus: $$\int_{\frac{\cos\phi}{\sin\phi}}^\infty\frac{du}{u^2+1}=\arctan(\infty)-\arctan\left(\frac{\cos\phi}{\sin\phi}\right)=\frac{\pi}{2}-\left(\frac{\pi}{2}-\phi\right)=\phi.$$ The proof is complete. We have $$I(t)=-\int_{-\infty}^\infty l_x(t)\,dx,$$ where $$l_x(t):=l(t):=\ln(1+2f'(x)(1-\cos t)).$$ Next, $$l_x''(t)=l''(t)=-\frac{2 e^x \left(c \left(e^{2 x}+1\right)+2 e^x\right)}{\left(2 c e^x+e^{2 x}+1\right)^2},$$ where $c:=\cos t\in(-1,1]$ for $|t|<\pi$. So, by substitution $e^x=u$, for $c\in(-1,1]$, $$I''(t)=-\int_{-\infty}^\infty l''_x(t)\,dx=2,$$ which implies $I(t)=t^2$ for $|t|<\pi$ (since $l_x(0)=0=l'_x(0)$ and hence $I(0)=0=I'(0)$). thank you, and apologies that I can only accept a single answer.
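A quick numerical check of $I(\phi)=\phi^2$ on $(-\pi,\pi)$, complementing the two proofs above (a verification only, not a proof); here $f'(x)=-e^x/(1+e^x)^2=-1/(4\cosh^2(x/2))$, and the second form is used to avoid overflow:

```python
# Numerically evaluate I(phi) = -int log[1 + 2 f'(x)(1 - cos phi)] dx
# and compare with phi^2.
import numpy as np
from scipy.integrate import quad

def I(phi):
    def integrand(x):
        fp = -0.25 / np.cosh(0.5 * x)**2          # f'(x) for the Fermi function
        return -np.log1p(2.0 * fp * (1.0 - np.cos(phi)))
    # the integrand decays like e^{-|x|}, so a finite window suffices
    return quad(integrand, -50.0, 50.0)[0]

for phi in (0.5, 1.0, 2.0, 3.0):
    print(phi, I(phi), phi**2)
```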
2025-03-21T14:48:30.340247
2020-04-17T08:28:18
357733
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ABIM", "Emil Jeřábek", "Jochen Wengenroth", "YCor", "https://mathoverflow.net/users/12705", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/21051", "https://mathoverflow.net/users/36886", "https://mathoverflow.net/users/41291", "მამუკა ჯიბლაძე" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628200", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357733" }
Stack Exchange
Comparison of topology of pointwise convergence and compact-open topologies for Sierpiński space Let $\{0,1\}$ be equipped with the Sierpiński topology $\{\emptyset, \{0,1\},\{1\}\}$, and $\mathbb{R}^d$ with the usual Euclidean topology. Then is the pointwise-convergence (point-open) topology on $C(\mathbb{R}^d,\{0,1\})$ indeed weaker than the compact-open topology? I have in mind the case where $n>1$. It's "Sierpinski" (Sierpiński). Actually "Sherpinski" is probably a better transliteration for pronouncing, but is seldom or never used (for him). But definitely it's not "-isnki". Also "product", not "produce". @YCor I don't know enough English grammar, but I think the correct way is to write it exactly as it is written in Polish. I believe transliteration can be only used for languages with non-latin alphabets. @მამუკაჯიბლაძე But "Sherpinski" does exist (not for the mathematician), so it has most likely been transliterated at some point. Many surnames of foreign origin have been transliterated at some point: one can prefer to change the spelling than the pronunciation. Even in our case, one can see "Sierpinski" as a widespread transliteration of "Sierpiński" (to write the latter I have to copy-paste from a text including the 'ń' character, so don't always do it). I acknowledge that. Let me still add that there are quite subtle aspects to Polish pronunciation. In particular, the starting sound of Sierpiński is ɕ, and it is not the same as ʃ to which the English "sh" corresponds Doesn't the question reduce to the case $n=1$? A function into a product is continuous if and only if so are the components (=compositions with the projections). Half of Poland was a part of Russian empire for some time in relatively recent history, and as a consequence some names of Polish origin also spread among Russians. There is a chance that Sherpinski is in fact a transliteration from the Russian version of the name rather than a direct modification of the Polish name. I feel the question is getting more hits from the etymological discussion on "Sherpinski" than the content . @EmilJeřábek Hard to say. Russian pronunciation of the name Sierpiński has S rather than Sh, it sounds like Serpinskij (Серпинский). Another example - Łoś is properly pronounced like "wash", but with "sh" softened like the first sound in Sierpiński (ɕ). Russian pronunciation is Los' (Лось), with softened s. The word means moose both in Polish and in Russian. But I think you agree that quite probably English pronunciation stems from transliteration rather than from Russian version of the word. So it's not equal to the topology of pointwise convergence, its stronger, but what does it reflect exactly? After your edit there is no more $n$. You thus agree with my older comment (which now looks rather pointless)? Definetly, it was obvious (once you pointed it out ofcourse) Functions $\mathbb R^d\to \{0,1\}$ are indicator functions $f=I_{A}$ with $A=f^{-1}(\{1\})$ and continuity with respect to the Sierpiński topology precisely means that $A$ is open in $\mathbb R^d$. The topology of pointwise convergence on $C(\mathbb R^d,\{0,1\})$ is strictly coarser than the compact-open topology: Since the only interesting open set in $\{0,1\}$ is $\{1\}$ the sets $W(K)=\{I_B: K\subseteq B\}$ ($K$ compact and $B$ open in $\mathbb{R}^d$) form a base of the compact-open topology and the sets $W(E)$ with finite $E$ form base of the point-open topology (that name is probably not very common). 
To show that the claim one has to find a compact set $K$ such that $W(K)$ does not contain any $W(E)$ for a finite set $E$. This is the case, e.g, for the closed unit ball $K$ in $\mathbb R^d$: For any finite $E$ take a union $B$ of smalls open balls centered at the points of $E$ so that the volume of the $B$ is strictly less than the volume of $K$. Then $I_B \in W(E) \setminus W(K)$. What does it exactly mean for a sequence of indicators $I_{B_n}$ to converge to an indicator ($B_n,B$ open subsets of $\mathbb{R}^d$) in $C(\mathbb{R}^d,{0,1})$? I'm having trouble interpreting.
2025-03-21T14:48:30.340892
2020-04-17T09:19:50
357736
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carl-Fredrik Nyberg Brodda", "Florian Lehner", "Pat Devlin", "https://mathoverflow.net/users/120914", "https://mathoverflow.net/users/156157", "https://mathoverflow.net/users/22512", "https://mathoverflow.net/users/97426", "wandering_lambda" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628201", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357736" }
Stack Exchange
Dense graphs where every maximal independent set is large A maximal independent set of a graph $G$ is a subset of vertices $S$ such that each vertex of $G$ is either in $S$ or adjacent to some vertex in $S$, and no two vertices in $S$ are adjacent. Consider graphs on $n$ nodes that are dense, i.e., there are $m$ edges, where $m \ge n^{1+\epsilon}$, for some constant $\epsilon>0$.

Update: As pointed out in the comments, one can take $n/2$ isolated vertices and then any dense graph of girth $\ge 5$ on the remaining $n/2$ vertices. However, I'm more interested in regular graphs where every node has the same degree. I've updated my question as follows: Does there exist a family of regular dense graphs of girth $\ge 5$ where every maximal independent set has a size of at least $\Omega(n)$? Note that the girth $\ge 5$ condition rules out obvious candidates such as the complete bipartite graph which has girth $4$.

You'll want to add the condition that no two vertices in $S$ are adjacent. Couldn't you just take the disjoint union of $n/2$ isolated vertices and any dense graph of girth at least $5$ on $n/2$ vertices? @FlorianLehner: you're right, but actually I'm more interested in regular graphs. I've updated the question. I’m inclined to believe the answer is no, they do not exist. A natural candidate would be the incidence graph of a projective plane. This is a regular girth 6 bipartite graph with $n$ vertices, roughly $n^{3/2}$ edges, but its independence domination number (the parameter you care about) is only order $\sqrt{n}$. https://www.semion.io/doc/domination-in-designs

No, the object you’re looking for does not exist. A result of Harutyunyan, Horn, and Verstraete (see Theorem 4.15 of this survey) states the following Theorem: There is a constant $c > 0$ such that every $d$-regular graph $G$ on $n$ vertices of girth at least 5 satisfies $i(G) \leq \dfrac{n(\log(d)+c)}{d}$. Here, $i(G)$ is the independence domination number, which is the size of the smallest maximal independent set. Setting $m=n^{1+\varepsilon}$, this bound gives $i(G) \leq O(n^{1-\varepsilon} \log(n)) = o(n)$.
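For concreteness, the quantity $i(G)$ in the quoted theorem (a smallest maximal independent set, i.e. a smallest independent dominating set) can be computed by brute force on a small 3-regular girth-5 graph; a Python sketch for the Petersen graph, where the search finds a set of size 3:

```python
# Brute-force i(G) = size of a smallest maximal independent set, Petersen graph.
from itertools import combinations

# Petersen graph: outer 5-cycle 0..4, spokes, inner pentagram on 5..9.
edges = {frozenset(e) for e in
         [(i, (i + 1) % 5) for i in range(5)] +
         [(i, i + 5) for i in range(5)] +
         [(i + 5, (i + 2) % 5 + 5) for i in range(5)]}
V = range(10)

def independent(S):
    return all(frozenset(p) not in edges for p in combinations(S, 2))

def dominating(S):
    return all(v in S or any(frozenset((v, u)) in edges for u in S) for v in V)

def i_G():
    for size in range(1, 11):
        for S in combinations(V, size):
            if independent(S) and dominating(S):   # maximal independent set
                return size, S
    return None

print(i_G())
```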
2025-03-21T14:48:30.341076
2020-04-17T09:53:34
357738
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628202", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357738" }
Stack Exchange
On reflexive bialgebras Let $A$ be a bialgebra. We can consider $A$ as a reflexive algebra (i.e. $A\cong A^{o*}$) or reflexive coalgebra (i.e. $A\cong A^{*o}$ where in each case $o$ denotes what is sometimes called the restricted dual or simply the dual coalgebra of an algebra). Equivalently, one can say that $A$ is reflexive as a coalgebra if and only if every cofinite ideal of $A^*$ is closed. Concerning $A$ considered as a coalgebra, I found a comprehensive study by Heyneman-Radford online which sorts out the question when it is reflexive. It is called "Reflexivity and Coalgebras of Finite Type". It contains for example, as Theorem 4.2.6, the result that an almost connected coalgebra $C$ is reflexive if and only if $C_1$ is finite dimensional, where $C_0\subseteq C_1\subseteq\cdots$ is the coradical filtration of $C$.

My question concerns reflexivity of a bialgebra $A$ considered as both an algebra and coalgebra. When is a bialgebra reflexive considered as an algebra and how does this property relate to $A$ considered as reflexive as a coalgebra? Can we, somehow, connect the properties of $A$ being reflexive as an algebra and $A$ being reflexive as a coalgebra? Is there a similar study to Heyneman-Radford concerning reflexive algebras or bialgebras? An online search gives results about functional analysis which might talk about a different reflexivity notion.

Remark. If it makes a difference that an antipode is present, you can always assume that $A$ is actually a (possibly braided) Hopf algebra and treat the analogous question separately for such a (possibly braided) Hopf algebra.

There shouldn't be any relations in general. Given an algebra $A$, there can be multiple different co-multiplications on $A$ to make it a bialgebra, or even a Hopf algebra. For instance, let $M(G)$ be the set of functions $G\rightarrow k$ where $G$ is a finite group and $k$ is a field. This is a $k$-algebra using pointwise multiplication. Different group structures on $G$ lead to different Hopf algebra structures on $M(G)$.
2025-03-21T14:48:30.341219
2020-04-17T10:04:54
357739
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Markus Shepherd", "https://mathoverflow.net/users/37951" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628203", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357739" }
Stack Exchange
Explicit formula for $n$th prime in terms of Riemann zeros: We all know there exists an explicit formula for the prime counting function in terms of Riemann zeros. I'm wondering if a similar formula exists for the $n$th prime in terms of Riemann zeros? Or any other conjectured explicit formula for the $n$th prime? (Probably analytic.) I'm looking for an analytic function for the $n$th prime. Could you define exactly what you mean by a formula for the $n$th prime?
2025-03-21T14:48:30.341277
2020-04-17T10:16:33
357740
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628204", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357740" }
Stack Exchange
Yoneda extensions in derived categories If given an abelian category $\mathcal{A}$, we can consider the bounded derived category $D^b(\mathcal{A})$. For two objects $A,B \in \mathcal{A}$, we know that there is a natural identification between $$\text{Ext}_{\mathcal{A}}^i(A,B)$$ and $$\text{Hom}_{D^b(\mathcal{A})}(A,B[i])$$ using Yoneda extensions. This is proven in Verdier's thesis (see also group of Yoneda extensions and the EXT groups defined via derived category). I wondered whether one in general can identify the morphisms in the derived category with Yoneda extensions in the following way: Given two complexes $E^\bullet, F^\bullet \in D^b(\mathcal{A})$, can we identify the set of morphisms $$\text{Hom}_{D^b(\mathcal{A})}(E^\bullet, F^\bullet[i])$$ with sequences of morphisms in the derived category $D^b(\mathcal{A})$ $$F^\bullet \to Z_{i-1}^\bullet \to \dots \to Z_0^\bullet \to E^\bullet $$ such that the above sequence breaks into distinguished triangles, i.e. there exist distinguished triangles $$F^\bullet \to Z_{i-1}^\bullet \to G_{i-1}^\bullet, \quad G_{i-1}^\bullet \to Z_{i-2}^\bullet \to G_{i-2}^\bullet, \dots$$ In the case that $E^\bullet,F^\bullet$ are complexes concentrated in degree 0 (i.e. objects in $\mathcal{A}$), this is precisely the Yoneda extension with the $Z_j^\bullet, G_j^\bullet$ also complexes concentrated in degree 0. For $i=1$ and $E^\bullet, F^\bullet$ arbitrary, this also holds true since a morphism $E^\bullet \to F^\bullet[1]$ can be completed to a distinguished triangle $$G^\bullet \to E^\bullet \to F^\bullet[1]$$ which we can rotate to obtain $$F^\bullet \to G^\bullet \to E^\bullet.$$ I am curious about the general case. There is such a sequence, but it's not very interesting. Given an element of $\text{Hom}_{D^b(\mathcal{A})}(E,F[i])$, then in the same way you describe, this gives a distinguished triangle $$F\to Z_{i-1}\to E[-i+1].$$ Then you can just take the distinguished triangles $E[-i+1]\to0\to E[-i+2]$, $E[-i+2]\to0\to E[-i+3]$, $\dots$, up to $E[-1]\to0\to E$, which can be spliced to give a sequence $$F\to Z_{i-1}\to Z_{i-2}\to\dots\to Z_0\to E,$$ where $Z_{i-2}=\dots= Z_0=0$. This might seem like cheating, but I don't think it is. In the case of (a nonzero element of) $\text{Ext}_{\mathcal{A}}^i(A,B)$, the only reason you can't have zero objects in your sequence $$B \to Z_{i-1} \to Z_{i-2}\to\dots \to Z_0 \to A$$ is that you insist that the $Z_j$ are not just objects of $D^b(\mathcal{A})$, but of $\mathcal{A}$. If you drop this requirement, then you just have too much freedom.
2025-03-21T14:48:30.341437
2020-04-17T10:17:17
357741
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Mark", "Nick L", "Sasha", "https://mathoverflow.net/users/150386", "https://mathoverflow.net/users/4428", "https://mathoverflow.net/users/99732" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628205", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357741" }
Stack Exchange
Singular $3$-fold with $b_{2}=1$ Suppose that $X$ is a complex, rational, $\mathbb{Q}$-Gorenstein $3$-fold with at most terminal singularities, and $H_{2}(X,\mathbb{R})=\mathbb{R}$. Is $X$ Fano? By the Kleiman ampleness criterion this is equivalent to $(-K_{X})^3>0$, but the standard argument from the smooth case of using Kodaira dimension doesn't quite work.

A nodal hypersurface of high degree is a variety of general type with $b_2 = 1$. Is it rational? Every nodal hypersurface in $\mathbb{P}^{4}$ of degree at least 5 is non-rational. See remark 3 here https://arxiv.org/pdf/math/0405150.pdf. I required that the variety is rational, so at least in dimension 3 your example doesn't work. Actually dimension $3$ is my main motivation so if you don't mind I will add this assumption. This should be fine. Either $K_X$ is anti-ample, ample, or trivial, since the Picard rank is 1. The latter two are impossible since you're assuming terminal+rational. Thanks. I figured it would be rather strange to have a rational variety with $-K_{X}=0$, or even with $K_{X}$ ample. But I would like to know a proof if it is possible. I guess this just follows from exercise 4.1.9 here https://www.dpmms.cam.ac.uk/~cb496/birgeom.pdf.
2025-03-21T14:48:30.341543
2020-04-17T10:56:15
357744
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628206", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357744" }
Stack Exchange
For which (g,q) does there exist a supersingular curve? We say a curve over a finite field $\mathbb F_q$ is supersingular if it's Frobenius eigenvalues (on $H^1(X,\mathbb Z_\ell)$) are of the form $q^{1/2}\alpha$ for $\alpha$ a root of unity. As far as I know, the question: "For which pairs $(g,q)$ does there exist a supersingular curve of genus $g$ over $\mathbb F_q$" is open. Partial results that I know are: Over characteristic $2$, every genus occurs: van der Geer, van der Vlugt. Genus $4$ in arbitrary characteristic is known: https://arxiv.org/abs/1903.08095 The Hermitian curve $y^q + y = x^{q+1}$ is supersingular for $q = p^n$. slide 20. The Supersingularity of Hurwitz Curves proves for a specific farmily of $(g,q)$. Theorem 1.1 here lists, for $4\leq g\leq 11$, some congruence conditions on $p$ for which there exist supersingular curves. I am sure I have missed a few but I are there are any large families that I have missed? What is the state of the art today?
2025-03-21T14:48:30.341639
2020-04-17T12:03:11
357746
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Sergey Guminov", "Will Sawin", "https://mathoverflow.net/users/131868", "https://mathoverflow.net/users/18060", "https://mathoverflow.net/users/64302", "user2520938" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628207", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357746" }
Stack Exchange
Operations on perverse sheaves on disk The category of perverse sheaves on the disk is isomorphic to the category of diagrams $$E\substack{\substack{c\\\to}\\\substack{v\\\leftarrow}}F$$ With $E,V$ finite dimensional vector spaces, and the maps satisfying $cv+1_F$ and $vc+1_E$ are isomorphisms. We have on the disk maps $f_k:\Delta\to \Delta:z\mapsto z^k$. Given a diagram $D$, corresponding to the perverse sheaf $P_D$, what is the diagram for $Rf_{k*}P_D$,$Rf_{k!}P_D$, $f_k^!P_D$ and $f_k^*P_D$? I'm not sure how the notation you're using matches up to different presentations of this equivalence. I'm going to assume $E$ is nearby cycles, $F$ is vanishing cycles, $c$ is the obvious map from nearby cycles to vanishing cycles arising from the mapping cone, and $v$ is the less-obvious map. I'll write each operator as sending $(E, F, c,v)$ to $(E', F', c', v')$ The stalks at the special point and geometric generic point are both preserved by pullback, so the mapping cone $ E \to_c F$ of the natural map between them is preserved by pullback. Thus $E'=E, F'=F, c'=c$. However, $v$ might be changed. Indeed, the monodromy operators $cv+1$ and $vc+1$ are raised to their $k$th power by pullback along $f_k$. Since the $k$'th power is $\sum_{i=0}^k \binom{k}{i} (cv)^i$ or $\sum_{i=0}^k \binom{k}{i} (vc)^i$ respectively, to obtain these as $c'v'+1= cv'+1$ and $v'c'+1= v'c+1$, we must have $$v' = \sum_{i=1}^k \binom{k}{i} v(cv)^{i-1} = \sum_{i=1}^k \binom{k}{i} (vc)^{i-1}v. $$ So $f_k^*$ sends $(E, F, c,v)$ to $(E, F, c, \sum_{i=1}^k \binom{k}{i} v(cv)^{i-1})$. $f_k^!$ can be obtained by dualizing, applying $f_k^*$, and dualizing again. I guess dualizing sends $E$ to $E^\vee$, $F$ to $F^\vee$, $c$ to $v^T$ and $v$ to $c^T$. Applying this, we see $f_k^!$ sends $(E, F, c,v)$ to $(E, F, \sum_{i=1}^k \binom{k}{i} c(vc)^{i-1},v)$. Now pushforward along $f_k$ preserves the stalk at the special point but not the nearby cycles or vanishing cycles. Rather, it sends nearby cycles to a vector space of dimension $k$ times higher where $(1+v'c')$ acts by the $k$'th root. Thus, we set $E' = E^k$ where we want $1+v'c'$ to act by sending $(e_1,\dots, e_k)$ to $(e_2, e_3,\dots, e_k , (1+vc)e_1$. We can think of each copy of $E$ as representing the stalk at one of the $k$ points in the geometric generic fiber of $f_k$, and since the natural map from the stalk at the special fiber to the stalk at the geometric generic fiber is nontrivial at each of those points, we would like the kernel of $c'$ to include the kernel of $c$ written diagonally. This can be achieved by setting $F' = E^{k-1} \times F$ and letting $c'$ send $(e_1,\dots, e_k)$ to $(e_2-e_1,e_3-e_2, \dots, e_k-e_{k-1}, c e_1)$. This gives the correct kernel and cokernel. To get the correct value for $1+v'c'$, we just need $v'$ to send $(f_1,\dots, f_k)$ to $(f_1, \dots, f_{k-1}, v f_k-f_1 - \dots - f_{k-1})$. This gives $f_{k*}$, and $f_{k!}$ is the same since $f_k$ is finite. I think one can check this is correct using the adjunction, but I haven't worked it out. Thanks for all your help! I will carefully read this over the next couple of days. I think the computation of $v'$ for the case of $f_{k*}$ is off. It seems that the last component of $1+v'c'$ is $e_k+vce_1$ instead of $e_1+vce_1$. Changing the last component of $v'$ from $vf_k$ to $vf_k-f_1-\ldots-f_{k-1}$ seems to work. Also, there is a minor typo, $F'$ should probably be $E^{k-1}\times F$, not $E^k\times F$. This answer was very helpful, by the way! @SergeyGuminov Corrected, thanks. 
I realized that I don't understand one thing: is the functor $f^*$ in this answer the whole inverse image functor, or just the 0-th perverse cohomology? Is there a reason why $f^*$ is $t$-exact in this case (if at all true)?
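The linear-algebra identity behind the formula for $f_k^*$ above can be checked directly on random matrices: with $v'=\sum_{i=1}^k\binom{k}{i}v(cv)^{i-1}$ one has $1+cv'=(1+cv)^k$ and $1+v'c=(1+vc)^k$, i.e. the monodromy is raised to the $k$-th power. A numpy sketch (this only verifies the matrix identity, not any sheaf-theoretic statement; dimensions and seed are arbitrary):

```python
# Check 1 + c v' = (1 + c v)^k and 1 + v' c = (1 + v c)^k on random matrices.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
dimE, dimF, k = 4, 3, 5
c = rng.standard_normal((dimF, dimE))   # c : E -> F
v = rng.standard_normal((dimE, dimF))   # v : F -> E

vp = sum(comb(k, i) * v @ np.linalg.matrix_power(c @ v, i - 1)
         for i in range(1, k + 1))      # the proposed v'

IE, IF = np.eye(dimE), np.eye(dimF)
assert np.allclose(IF + c @ vp, np.linalg.matrix_power(IF + c @ v, k))
assert np.allclose(IE + vp @ c, np.linalg.matrix_power(IE + v @ c, k))
print("monodromy is raised to the k-th power, as claimed")
```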
2025-03-21T14:48:30.342012
2020-04-17T13:23:35
357751
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ARG", "Abdelmalek Abdesselam", "Hermi", "ThighCrush", "gmvh", "https://mathoverflow.net/users/14443", "https://mathoverflow.net/users/168083", "https://mathoverflow.net/users/18974", "https://mathoverflow.net/users/45250", "https://mathoverflow.net/users/516629", "https://mathoverflow.net/users/7410", "kneidell" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628208", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357751" }
Stack Exchange
How to understand the combinatorial Laplacian $\Delta$ which is defined on the graph? I have a question about the combinatorial Laplacian $\Delta$ which is defined by $$\Delta(u,v)=c(u)1_{u=v}-c(u,v)$$ where $u, v$ are some vertices in the graph $G=(V, E)$, and $c(u,v)$ is a conductance function defined on the edge $uv$ (i.e. weighted functions). If I define a function $F: V\to \mathbb{R}$, we can define the gradient $\nabla F(e)$ by $$\nabla F(uv):=c(u,v)(F(v)-F(u))$$. But how to understand the $\Delta F(uv)$ by the combinatorial Laplacian $\Delta$? Actually, textbook claims that $$\nabla \cdot \nabla F= -\Delta F$$ I have no idea to prove that. The divergence $\nabla\cdot f$ is defined by $$\nabla\cdot f(v)=\sum_{e} f(e).$$ So $\nabla\cdot \nabla F(v)=\sum_{xy} c(x, y)(F(y)-F(x))$. Given $F:V\to\mathbb R$, if I understand correctly, $\nabla F$ would be a function on the set of edges. How do you define $\nabla\cdot \nabla F$ then? For the gradient the graph needs to be a digraph, whereas the Laplacian does not need oriented edges. @kneidell The divergence is defined by $\nabla\cdot f(v)=\sum_{e} f(e).$ So $\nabla\cdot \nabla F(v)=\sum_{xy} c(x, y)(F(y)-F(x))$. Just to add an (in my opinion important) piece of information. Say $F$ is a function on the vertices of a graph, so $F:V \to \mathbb{R}$. Then $\nabla F$ is a function from the edges to $\mathbb{R}$ (here I see an edge as a pair of vertices $(x,y)$, so edges are oriented): $$\nabla F (x,y) := F(y) - F(x)$$ Now this definition is very natural in many ways. For example, you would expect that the integral of the gradient of a function along a path is just the difference of the values of the function at the end of this path. And this holds here: if $\vec{p}$ is an oriented path (say from $a$ to $b$) then $\sum_{\vec{e} \in \vec{p}} \nabla F(\vec{e}) = F(b) - F(a)$. You can add a weight to the edges, but this is (in my opinion) not the important point for intuition. Here is the important piece of information: if your graph has bounded degree$^*$, $\nabla$ defines an operator from $\ell^2V$ to $\ell^2E$. (The pairing on $\ell^2V$ is just $\langle f \mid g \rangle_V = \sum_{v \in V} f(v)g(v)$. Same pairing on $\ell^2E$ just that the sum is over the edges) So you may ask, what is the adjoint of this operator? Well the defining property can be tested on Dirac masses (which are a nice basis of our spaces): $$ \langle \nabla^* \delta_{\vec{e}} \mid \delta_x \rangle = \langle \delta_{\vec{e}} \mid \nabla\delta_x \rangle $$ So this is $+1$ if $\vec{e}$ has $x$ as target, $-1$ if $x$ is the source and 0 otherwise. Extended by linearity this gives: (here $G(x,y)$ is a function on the edges) $$ \nabla^* G(x) = \sum_{y \in N(x)} G(x,y) - \sum_{y \in N(x)} G(y,x) $$ where $y \in N(x)$ means $y$ is a neighbour of $x$. (If your edges are not oriented, it is natural to consider only alternating functions on the edges, that is $G(x,y) = -G(y,x)$; the above expression simplifies then a bit) The rest is just a computation: $$ \begin{array}{rl} \nabla^* \nabla F(x) &= \displaystyle \sum_{y \in N(x)} \nabla F(x,y) - \sum_{y \in N(x)} \nabla F(y,x) \\ &= \displaystyle \bigg( \sum_{y \in N(x)} [F(y) - F(x)] \bigg) - \bigg( \sum_{y \in N(x)} [F(x) - F(y)] \bigg) \\ &= \displaystyle 2 \bigg( \sum_{y \in N(x)} [F(y) - F(x)] \bigg) \\ &= \displaystyle 2 \bigg( \big[ \sum_{y \in N(x)} F(y) \big] - \deg(x) F(x) \bigg) \\ \end{array} $$ And that's the formula for the Laplacian (when the conductance is 1). 
Note that I got a difference of a factor of 2 (because my definition of divergence is a bit different). But having a divergence which is the adjoint of the gradient, is a very important point, in my opinion. If you add a weight to the edges, the computation are slightly more complicated, but it's just [possibly painful] bookkeeping. $^*$ if you have weighted edges you could have an infinite number of edges as long as their weight is bounded EDIT: a small addendum, for the case where the edge have a weight, as I realised there are many ways to add a weight in the above setup: you can add it to the definition of the gradient (but then the property that the integral along a curve is the difference of the values at the ends fail) you can add it to the definition of the divergence you can add it to the norm on $\ell^2E$ I would recommend doing using the third one (which is the most natural: since the edge have a weight, incorporate it the norm in $\ell^2E$). This means the inner product on $\ell^2E$ is $$\langle f \mid g \rangle = \sum_{\vec{e} \in E} c(\vec{e}) f(\vec{e}) g(\vec{e}) $$ Because edges can be written as pair of vertices $(x,y)$ this reads $$\langle f \mid g \rangle = \sum_{(x,y) \in E} c(x,y) f(x,y) g(x,y) $$ [In your context, you probably want $c(x,y) = c(y,x)$.] Now if you look at $$ \langle \nabla^* \delta_{\vec{e}} \mid \delta_x \rangle = \langle \delta_{\vec{e}} \mid \nabla\delta_x \rangle $$ then this is $c(y,x)$ if $\vec{e}$ has $x$ as target, $-c(x,y)$ if $x$ is the source and 0 otherwise. Extended by linearity this gives: (here $G(x,y)$ is a function on the edges) $$ \nabla^* G(x) = \sum_{y \in N(x)} c(x,y) G(x,y) - \sum_{y \in N(x)} c(y,x) G(y,x) $$ If you assume $c(x,y) = c(y,x)$ and $G(x,y) = -G(y,x)$ (as you should in the unoriented case), you get: $$ \nabla^* G(x) = 2 \sum_{y \in N(x)} c(x,y) G(x,y) $$ Then, direct computation yields $$ \begin{array}{rl} \nabla^* \nabla F(x) &= \displaystyle 2 \sum_{y \in N(x)} c(x,y) \nabla F(x,y) \\ &= \displaystyle 2 \bigg( \sum_{y \in N(x)} c(x,y) [F(y) - F(x)] \bigg) \\ &= \displaystyle 2 \bigg( \sum_{y \in N(x)} [ c(x,y) F(y) - c(x,y) F(x)] \bigg) \\ &= \displaystyle 2 \bigg( \big[ \sum_{y \in N(x)} c(x,y) F(y) \big] - \big[ \sum_{y \in N(x)} c(x,y) \big] F(x) \bigg) \\ &= \displaystyle 2 \bigg( \big[ \sum_{y \in N(x)} c(x,y) F(y) \big] - c(x) F(x) \bigg) \\ \end{array} $$ where $c(x)$ is a short-hand for $\sum_{y \in N(x)} c(x,y)$. This is the Laplacian (up to a sign). The fact that you put a "$-$" sign or not depends entirely on your taste: if you want a Laplacian with negative spectrum, you should put a "$-$", otherwise don't (it's a standard trick to see that $A^*A$ has positive spectrum). One can get rid of the factor of 2 (which appears in my computation). This factor comes in my construction just because I see $(x,y) $ and $(y,x)$ as two separate edges. To get rid of it, just set for every edge an orientation (that is, either $(x,y) \in E$ or $(y,x) \in E$ but not both). The notation is sometimes a bit more clumsy, but you get the exact same thing (without the factor of 2). See this post for interpretation of the Laplacian Very nice answer, that I am still digesting. However I think there's a minor mistake concerning a minus sign, in that MINUS the divergence is the adjoint of the gradient, and that this leads to the nice formula for the discrete divergence that you give... right? @ThighCrush thanks! yes, switched the sign just after the sentence "extending by linearity" because it fits with what the OP was asking. 
But basically, if you define the divergence as the adjoint of the gradient (which is not what I did here) you get that $\nabla^* \nabla$ is positive definite. I could edit the post, but that might make your comment seem strange... Fix a vertex $v$. Then $$ \nabla F(uv) = c(u,v)\big(F(v)-F(u)\big) $$ for $u$ adjacent to $v$. Now \begin{align*} \nabla\cdot\nabla F(v) &= \sum_{uv} c(v,u)\big(F(u)-F(v)\big)\\ &= -F(v)\left(\sum_u c(v,u)\right) + \sum_u c(v,u) F(u)\\ &= -\sum_u \big(c(u)\mathbb{1}_{u=v}-c(v,u)\big)F(u)\\ &=-\sum_u \Delta(v,u)F(u) \end{align*} where the sums are always over $u$ adjacent to $v$, and I assume $c(u)=\sum_u c(v,u)$. Thanks for your answer. But I am still confused about how to say $\nabla \cdot \nabla F= -\Delta F$ by your final result? What is the $\sum \Delta(v,u)$? Consider $\Delta$ to be a matrix and $F$ to be a vector. Then the final line is just $-\Delta F(v)$. Well, that is my question what is the $\Delta F(v)$? Why the final line is $\Delta F(v)$? We define the $\Delta(u,v)$ but how to define $\Delta F(v)$. I mean why $\sum_u \Delta(v,u)F(u)= \Delta F(u)$? If one defines a linear operator by a kernel (a matrix in this case), I'd think that it is generally understood that it operates on a function by convolution (matrix multiplication in this case).
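In the notation of the question (conductances $c(u,v)$, $\Delta(u,v)=c(u)1_{u=v}-c(u,v)$, $\nabla F(uv)=c(u,v)(F(v)-F(u))$, $\nabla\cdot f(v)=\sum_{u\sim v}f(vu)$), the identity $\nabla\cdot\nabla F=-\Delta F$ is a one-line matrix computation that can be checked numerically. A small numpy sketch, using a random weighted complete graph as a stand-in for a general weighted graph:

```python
# Verify div(grad F) = -Delta F for a random symmetric conductance matrix.
import numpy as np

rng = np.random.default_rng(1)
N = 6
C = rng.uniform(0.0, 1.0, size=(N, N))
C = np.triu(C, 1)
C = C + C.T                        # symmetric conductances c(u,v), zero diagonal
degrees = C.sum(axis=1)            # c(u) = sum_v c(u,v)
Delta = np.diag(degrees) - C       # combinatorial (weighted) Laplacian

F = rng.standard_normal(N)

# div(grad F)(v) = sum_u c(v,u) (F(u) - F(v))
div_grad_F = np.array([sum(C[v, u] * (F[u] - F[v]) for u in range(N))
                       for v in range(N)])

assert np.allclose(div_grad_F, -Delta @ F)
print("div(grad F) = -Delta F verified on a random weighted graph")
```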
2025-03-21T14:48:30.342503
2020-04-17T13:29:36
357753
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Stanley Yao Xiao", "https://mathoverflow.net/users/10898", "https://mathoverflow.net/users/56097", "user56097" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628209", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357753" }
Stack Exchange
Reference request: probability that d numbers are coprime The following theorem can be found in Hardy-Wright (Theorem 459), except that they state it only for $d=2$. Do you know of a reference where the proof of this general statement is written? Theorem: Let $d\ge2$ be an integer. Let $F$ be a bounded subset of $\Bbb R^d$. For every positive real number $r$, denote by $F(r)$ the set of points $x$ of $\Bbb Z^d$ such that $x\over r$ belongs to $F$. Assume that the cardinality of $F(r)$ divided by $r^d$ converges to some non-zero limit when $r$ goes to infinity. Then, when $r$ goes to infinity, the cardinality of the set of $(x_1,\ldots,x_d)$'s in $F(r)$ such that $\operatorname{GCD}(x_1,\ldots,x_d)=1$ is equivalent to $r^d/\zeta(d)$ when $r$ goes to infinity. Thank you very much for your time! Thank you Daniele for the editing! Let $F$ be a bounded subset of $\mathbb R^d$ with $d \geq 2$. We define $F_r := rF \cap \mathbb Z^d$ for any real number $r>0$ and assume that the limit $$ \mathcal V(F) := \lim_{r \to + \infty} \frac{|F_r|}{r^d} $$ exists (for convex subsets, this is the Lebesgue volume of $F$). We rewrite the proof of Theorem 459 of Hardy–Wright so that it yields the following more general result. If $\mathcal V(F)$ is well-defined, then we have $$ \lim_{r \rightarrow + \infty} \frac{\left| \left\{ x \in F_r, \operatorname{gcd}(x_1, \cdots, x_d) = 1 \right\} \right|}{r^d} = \frac{\mathcal V(F)}{\zeta(d)}. $$ Proof. We can and will assume that $0 \notin F$, which will not change any of the limits. We also fix $N$ such that $F \subset [-N,N]^d$. For every rational $r>0$, let $f(r) = \left| \left\{ x \in F_r, \operatorname{gcd}(x_1, \cdots, x_d) = 1 \right\} \right|$. As $0 \notin F$, $|F_r| = f(r)=0$ when $r<1/N$ and $f(r) \leq |F_r| \leq (2rN+1)^d \leq (3rN)^d$ for all $r \geq 1/N$, so $|F_r| \leq (3rN)^d$ in all cases. For any point $x$ of $F_r$, there is a unique integer $k \in \mathbb N$ such that the gcd of the coordinates of $x$ is $k$, and then $x/k$ contributes to $f(r/k)$. Consequently (the right-hand side being in fact a finite sum) $$ |F_r| = \sum_{k=1}^{+ \infty} f(r/k). $$ By Möbius inversion, we then get $$ f(r) = \sum_{k=1}^{+ \infty} \mu(k) |F_{r/k}|. $$ The sum of $\mu(k)/k^d$ converges absolutely towards $1/\zeta(d)$ as $d \geq 2$, so $$ \frac{f(r)}{r^d} - \frac{\mathcal V(F)}{\zeta(d)} = \sum_{k=1}^{+ \infty} \frac{\mu(k)}{k^d} \left( \frac{|F_{r/k}|}{(r/k)^d} - \mathcal V(F) \right). $$ Let $\varepsilon>0$. By definition of $\mathcal V(F)$, we fix $n_0$ such that if $r/k \geq n_0$, $\left| \frac{|F_{r/k}|}{(r/k)^d} - \mathcal V(F) \right| \leq \varepsilon$, which gives the inequality $$ \sum_{k=1}^{\lfloor r/n_0 \rfloor} \frac{1}{k^d} \left| \frac{|F_{r/k}|}{(r/k)^d} - \mathcal V(F) \right| \leq \zeta(d) \varepsilon. $$ On the other hand, the bounds on $|F_{r/k}|$ give $$ \sum_{k > \lfloor r/n_0 \rfloor} \frac{1}{k^d} \left| \frac{|F_{r/k}|}{(r/k)^d} - \mathcal V(F) \right| \leq \left((3N)^d + \mathcal V(F)\right) \times \frac{ (\lfloor r/n_0 \rfloor)^{1-d} }{d-1}. $$ Hence, for $r$ large enough, the absolute value of $\left|\frac{f(r)}{r^d} - \frac{\mathcal V(F)}{\zeta(d)}\right|$ is smaller than $2 \zeta(d) \varepsilon$, which proves the desired convergence. $\blacksquare$ Acknowledgements. This post greatly benefitted from exchanges with Samuel Le Fourn. The proof for general $n \geq 2$ is the same as the $n = 2$ case. For simplicity we shall consider the region $F = [1,X]^n$ where $X$ is some large positive number. 
We let $F_d$ be the subset of $F \cap \mathbb{Z}^n$ consisting of those tuples whose coordinates are all divisible by $d$. Note that $d \leq X$, by definition. We see that $$\displaystyle |F_d| = \frac{X^n}{d^n} + O \left(\frac{X^{n-1}}{d^{n-1}}\right).$$ We now write $F^\ast$ for the subset of $F$ consisting of those tuples whose coordinates are co-prime. Then $$\begin{align*} |F^\ast| & = \sum_{d \leq X} \mu(d) |F_d| \\ & = \sum_{d \leq X} \mu(d) \left(\frac{X^n}{d^{n}} + O \left(\frac{X^{n-1}}{d^{n-1}} \right) \right)\\ & = \prod_{p \leq X} \left(1 - \frac{1}{p^n}\right) X^n + O \left(\sum_{d \leq X} \frac{X^{n-1}}{d^{n-1}}\right) \\ & = \prod_p \left(1 - \frac{1}{p^n} \right) X^n + O \left(X^{n-1} \log X \right) \end{align*}$$ To be more explicit, we consider the product $$ \begin{align*} \prod_{p} \left(1 - \frac{1}{p^n} \right) & = \prod_{p \leq X} \left(1 - \frac{1}{p^n} \right)\prod_{p > X} \left(1 - \frac{1}{p^n} \right) \\ & = \prod_{p \leq X} \left(1 - \frac{1}{p^n} \right) \left(\exp \left(\sum_{p > X} \log \left(1 - p^{-n} \right) \right) \right) \\ & = \prod_{p \leq X} \left(1 - \frac{1}{p^n} \right)\left(1 + O(X^{1-n})\right) \end{align*}$$ which justifies the first calculation above. Finally, by the Euler product of the Riemann zeta function we have $\zeta(n)^{-1} = \prod_{p} (1 - p^{-n})$.

Thank you very much for your time, Stanley. I agree that the same proof works. However, I am not looking for a proof but for a place where it is written so that I can refer to it in a paper: it really is a reference request rather than a mathematical question. Also, I do want the fully general statement. Your particular statement is proved here for example: https://arxiv.org/abs/1602.07180 From my answer you should gain the confidence to simply say "this is proved explicitly for $n = 2$ by Hardy and Wright and the general case follows similarly". The first sentence in my answer says essentially the same thing. You can find the general idea (Möbius inversion) in the book Vinogradov, I. M., Elements of number theory, Dover Publications Inc., 1954 (Ch. 2.3.d and problem 2.17). But it is written in a strange form.
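For the box $F=[1,X]^d$ considered in the second answer, the limiting density $1/\zeta(d)$ of coprime $d$-tuples can be checked empirically with a brute-force count; a short Python sketch (the partial sum for $\zeta$ and the chosen values of $X$ are ad hoc):

```python
# Count d-tuples in [1, X]^d with gcd 1 and compare the proportion with 1/zeta(d).
from math import gcd
from itertools import product
from functools import reduce

def coprime_fraction(d, X):
    total = X**d
    good = sum(1 for t in product(range(1, X + 1), repeat=d)
               if reduce(gcd, t) == 1)
    return good / total

def zeta(s, terms=10**6):
    # crude partial sum; accurate enough for this comparison when s >= 2
    return sum(n**-s for n in range(1, terms + 1))

for d, X in [(2, 200), (3, 60), (4, 25)]:
    print(d, X, coprime_fraction(d, X), 1.0 / zeta(d))
```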
2025-03-21T14:48:30.342865
2020-04-17T13:39:24
357755
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "LFH", "LSpice", "https://mathoverflow.net/users/2383", "https://mathoverflow.net/users/80903" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628210", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357755" }
Stack Exchange
Lifting one parameter subgroup $e^{t K}$ to the universal cover of $\mathrm{Sp}(2N,\mathbb{R})$ I would like to lift an arbitrary one-parameter subgroup $e^{t K}$ with $K\in\mathfrak{sp}(2N,\mathbb{R})$ to the universal cover $\widetilde{\mathrm{Sp}}(2N,\mathbb{R})$ (or at least its two-fold cover, i.e., the metaplectic group). I follow the paper of John Rawnsley On the universal covering group of the real symplectic group, where an element of the universal covering group $\widetilde{\mathrm{Sp}}(2N,\mathbb{R})$ is represented as pair \begin{align} \widetilde{\mathrm{Sp}}(2N,\mathbb{R})=\left\{(g,c)\in\mathrm{Sp}(2N,\mathbb{R})\times\mathbb{R}\,\big|\,e^{ic}=\varphi(g)\right\}\,, \end{align} where $\varphi: \mathrm{Sp}(2N,\mathbb{R})\to S^1\subset\mathbb{C}$ is a normalized circle function defined as follows. We start with a complex structure $J: \mathbb{R}^{2N}\to \mathbb{R}^{2N}$ that is compatible with the symplectic form $\Omega$ on $\mathbb{R}^{2N}$. For every group element $g\in\mathrm{Sp}(2N,\mathbb{R})$, we then define $C_g=\frac{1}{2}(g-JgJ)$, which commutes with $J$. We can therefore identify $C_g$ with a $N$-by-$N$ complex matrix, which we can use to compute a determinant. We then define the circle function as \begin{align} \varphi(g)=\frac{\det{C_g}}{|\det{C_g}|}\,, \end{align} where the determinant is meant in the above sense (of a complex matrix, rather than of real $2N$-by-$2N$ matrix). The universal covering group is then defined with the group multiplication \begin{align} (g_1,c_1)\cdot(g_2,c_2)=(g_1\cdot g_2,c_1+c_2+\eta(g_1,g_2))\,, \end{align} where $\eta:\mathrm{Sp}(2N,\mathbb{R})\times \mathrm{Sp}(2N,\mathbb{R})\to\mathbb{R}$ is the unique smooth function, such that $\varphi(g_1g_2)=\varphi(g_1)\varphi(g_2)e^{i\eta(g_1,g_2)}$ everywhere. My question: How can I find the unique continuous function $c_K: \mathbb{R}\to\mathbb{R}$ that satisfies \begin{align} \varphi(e^{tK})=e^{i c_K(t)}\,. \end{align} Essentially, I would like to lift the curve $e^{tK}$ to its double cover. Of course, I could just numerically evaluate $\varphi(e^{tK})$ and correct by an offset of $2\pi$, whenever I go around the circle, but I am hoping that there is a smarter and MORE EXPLICIT way! More thoughts: I believe $c_K$ should satisfy the differential equation $\dot{c}_K(t)=-i\frac{d}{dt}\log\varphi(e^{tK})$. Maybe this can be solved somehow or used to write a formal solution!? Not on your actual question, but on your presentation of the metaplectic group, you may find Teruji Thomas's papers on the Weil representation useful: see http://users.ox.ac.uk/~mert2060/research, and specifically "The character of the Weil representation" and "The Maslov index as a quadratic space". (I haven't read "Weil representation and transfer factor".) Thanks a lot - I will check out the literature. In fact, I am interested in the question exactly from the perspective of the Weil representation, where I had read Berezin: The method of second quantization and de Gosson: Symplectic Geometry and Quantum Mechanics. I made some progress in the sense that I believe that I could reduce it to a more standard problem: Morally speaking, I have \begin{align} c_K(t)=\mathrm{Im}\log\det\left(\frac{e^{tK}-Je^{tK}J}{2}\right)=\mathrm{Im}\mathrm{Tr}\log\left(\frac{e^{tK}-Je^{tK}J}{2}\right)\,. \end{align} The tricky thing is that $\log{(e^x)}$ is only equal to $x$ in certain patches. Consider the special case, where $J$ commutes with $K$, such that $[J,K]=0$. 
In this case, we can simplify to find \begin{align} c_K(t)=t\,\mathrm{Im}\,\mathrm{Tr}(K) \end{align} and everything is good. However, I'm not aware of a similar simplification for more general expressions. Question: Is there any way to describe $c_K(t)$ more explicitly with analytic functions, rather than just defining it to incorporate the winding number by hand? Ok, I solved the problem. We need to use the cocycle function $\eta(M_1,M_2)$, which is defined to satisfy $\varphi(M_1M_2)=\varphi(M_1)\varphi(M_2)e^{i\eta(M_1,M_2)}$. The idea is that we write $K=u\tilde{K}u^{-1}$, such that $c_{\tilde{K}}(t)=t\,\mathrm{Im}\,\mathrm{Tr}(\tilde{K})$. This can always be found by using a transformation $u$ that brings $K$ into a standard block diagonal form with respect to $J$, i.e., both of them are block diagonal (they may not quite commute, but almost). We can then use the cocycle relation to see that $c_K(t)=c_{\tilde{K}}(t)+\eta(u,e^{K})+\eta(ue^{K},u^{-1})$. This can possibly be simplified further, but the idea should be clear. I hope this helps somebody with a similar problem in the future... I think that this is probably part of the question, rather than an answer. Well, this was just my approach - adding this to the question would make it rather lengthy, and originally I thought that people would probably know what to do. Anyway, now that I solved the problem, my post actually expanded into a brief summary of the full solution...
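For what it's worth, here is a small numerical sketch of the "evaluate $\varphi(e^{tK})$ and unwrap the phase" approach mentioned in the question (my own illustration, not from the thread; the particular $J$, the random $K$, and the sign convention in the complex identification are assumptions, and the time grid must be fine enough that the phase moves by less than $\pi$ between samples):

```python
import numpy as np
from scipy.linalg import expm

N = 2
I = np.eye(N)
Omega = np.block([[np.zeros((N, N)), I], [-I, np.zeros((N, N))]])
J = -Omega                      # a standard complex structure with J^2 = -1

# a random element of sp(2N, R): K = Omega @ S with S symmetric
S = np.random.default_rng(0).normal(size=(2 * N, 2 * N))
S = (S + S.T) / 2
K = Omega @ S

def phase_of_varphi(g):
    # C_g = (g - J g J)/2 commutes with J, so it has block form [[A, B], [-B, A]];
    # read it as the complex N x N matrix A + iB (sign conventions may differ)
    C = (g - J @ g @ J) / 2
    A, B = C[:N, :N], C[:N, N:]
    return np.angle(np.linalg.det(A + 1j * B))

ts = np.linspace(0.0, 10.0, 2000)
phases = np.array([phase_of_varphi(expm(t * K)) for t in ts])
c_K = np.unwrap(phases)         # the continuous lift c_K(t), normalised so c_K(0) = 0
print(c_K[-1])
```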
2025-03-21T14:48:30.343175
2020-04-17T13:46:11
357756
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628211", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357756" }
Stack Exchange
local acyclicity when restricting to a hypersurface Let $X$ be a smooth scheme over $\mathbb{C}$, let $K$ be a constructible sheaf of complex vector spaces on $X\times\mathbb{A}^1$, and let $g:X\rightarrow \mathbb{A}^1$ be a function. Suppose that $K$ is locally acyclic with respect to the composition $g\circ p$, where $p: X\times\mathbb{A}^1\rightarrow X$ is the projection. Is $\sigma^* K$ locally acyclic with respect to $g$, where $\sigma$ is the zero section of $p$?
2025-03-21T14:48:30.343234
2020-04-17T13:58:36
357757
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Jochen Wengenroth", "LL 3.14", "https://mathoverflow.net/users/153203", "https://mathoverflow.net/users/21051" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628212", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357757" }
Stack Exchange
A density question Suppose $\Omega= (0,1)\times(0,1)\subset \mathbb R^2$. Assume that $f, g \in C^{\infty}(\Omega)$ and that $$ \int_\Omega \left(f(x_1,x_2)- \frac{m}{(n+1)}g(x_1,x_2)\right) x_1^n \,x_2^m \,dx_1\,dx_2 = 0 \quad \text{for all $n,m=0,1,\ldots$}.$$ Does it follow that $f \equiv g \equiv 0$ on $\Omega$? Smoothness of $f$ and $g$ does not guarantee that the "moments" $\int f(x_1,x_2)x_1^nx_2^mdx_1dx_2$ exist. The set is bounded here, Jochen Wengenroth Certainly not. Assume $\partial h/\partial x_1=g/x_1$, and assume that $h$ and $g$ have compact support. Then integrate by parts to obtain $$\int\int (f-x_2{\partial h\over\partial x_2})x_1^nx_2^m=0.$$ This implies $f=x_2(\partial h/\partial x_2)$ and nothing more.
2025-03-21T14:48:30.343314
2020-04-17T14:02:42
357758
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gerhard Paseman", "Manfred Weis", "https://mathoverflow.net/users/31310", "https://mathoverflow.net/users/3402" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628213", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357758" }
Stack Exchange
Planarity of a subgraph Given a complete symmetric graph $G(V,E)$ with real-valued edge weights, assume that every $K_4$ that is induced by a quadruplet of $V$'s vertices has a unique perfect matching of maximal weight sum, whose edges have different weights. Question: if we remove from $E$ every edge that is the heavier edge of the heaviest perfect matching of some induced $K_4\subseteq G$, is the resulting graph planar? Probably not. Let $V$ have six vertices, and put small weights on a $K_{3,3}$ subgraph and large weights on the rest of the edges. I suspect there is a weighting that will leave the nonplanar graph after removing the heavy edges. Gerhard "Weighted Graphs Can Be Tricky" Paseman, 2020.04.17. @GerhardPaseman are you trying to enforce a violation of one of Kuratowski's criteria for the planarity of graphs? $K_{3,3}$ is a Moebius ladder graph and thus locally planar, but not globally by virtue of the twist. At least a clever idea for a possible counterexample. I'm suggesting an idea for how to construct a variety of examples. Let some sufficiently complicated graph class be called C. Make your post as above, but replace planar by C. I'm suggesting a counterexample by picking a graph outside of C, putting small weights on the edges, and large weights on the complementary edges. Gerhard "Trying To Throw Off Balance" Paseman, 2020.04.17.
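A small computational experiment in the spirit of Gerhard Paseman's suggestion (my own sketch, not from the thread; it assumes the networkx package, and uses random weights, which are almost surely pairwise distinct, so the uniqueness hypotheses hold generically):

```python
import itertools, random
import networkx as nx

random.seed(0)
n = 8
# random weights on the complete graph K_n
w = {frozenset(e): random.random() for e in itertools.combinations(range(n), 2)}

to_remove = set()
for a, b, c, d in itertools.combinations(range(n), 4):
    # the three perfect matchings of the K_4 induced by {a, b, c, d}
    matchings = [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]
    best = max(matchings, key=lambda m: w[frozenset(m[0])] + w[frozenset(m[1])])
    heavier = max(best, key=lambda e: w[frozenset(e)])
    to_remove.add(frozenset(heavier))

G = nx.Graph()
G.add_edges_from(tuple(e) for e in w if e not in to_remove)
print(len(to_remove), "edges removed; planar:", nx.check_planarity(G)[0])
```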
2025-03-21T14:48:30.343431
2020-04-17T14:16:07
357760
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "DCM", "https://mathoverflow.net/users/61771" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628214", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357760" }
Stack Exchange
Useful notion for locally convex spaces - well known? In my current work the following property of maps between locally convex spaces showed up at several places and proved to be useful. It seems quite elementary to me, so I would like to know whether it already has an established name or is equivalent to some other property. Let $f: U\rightarrow F$ be a (possibly non-linear) map from an open set $U\subset E$ of a locally convex space into another locally convex space $F$. Suppose for every continuous semi-norm $\Vert \cdot \Vert$ on $F$ there exists a non-decreasing function $\omega:[0,\infty)\rightarrow [ 0,\infty)$ and a continuous semi-norm $\Vert \cdot \Vert'$ on $E$ such that $$ \Vert f(x)\Vert \le \omega(\Vert x \Vert')\quad \text{ for all } x \in U.$$ Let's for now call such a map 'robust'. It's clear that the composition of robust maps, when defined, is robust again. Further, linear maps (defined on $U=E$) are robust if and only if they are continuous. My main example of a non-linear robust map is the composition operator $$ C^\infty(M)\rightarrow C^\infty(M), a \mapsto \chi \circ a, $$ where $M$ is a smooth manifold and $\chi:\mathbb{C}\rightarrow \mathbb{C}$ is a smooth function that is well behaved, e.g. all of its derivatives are bounded, but also $\chi(z)=\exp(z)$ works. This has some nice implications: For example the parameter-to-solution map for the initial value problem $$\dot x +ax=0, \quad x(0)=1$$ can be written as $f(a)(t)=\exp(-\int_0^ta)$ and is thus easily seen to define a robust map $f:C^\infty[0,\infty)\rightarrow C^\infty[0,\infty)$. A more sophisticated consequence is that one can construct parametrices of classical pseudodifferential operators in a robust way (compose the principal symbol with $\chi(z)=z^{-1}$ (smoothed near $0$)). In general robustness seems to come up as a property of 'parameter dependence' in a smooth setting. Typically in this setting one already has a continuous dependency, which allows one to make constructions, constants etc. uniform in small neighbourhoods of a fixed parameter. Robustness allows one to get uniformity over large classes of parameters, as long as an appropriate semi-norm ($\Vert \cdot \Vert'$ above) stays bounded. This is relevant for recovering parameters in statistical inverse problems. To summarise, my question is the following: Question. Is robustness equivalent to or related to some well-known property in functional analysis, or has it been studied anywhere? My feeling is that robustness probably doesn't give you enough for it to be the basis of an `interesting' theory. It seems to me you can get some pretty nasty robust mappings even when $U \subset E = F = \mathbb{R}$. Have you considered whether Dini/Lipschitz continuity might be more what you're after here?
2025-03-21T14:48:30.343738
2020-04-17T14:22:19
357761
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628215", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357761" }
Stack Exchange
Upper and lower bounds on the entries of a matrix power Say I have a non-negative square $n\times n$ irreducible stochastic matrix $A$ (i.e., each column sums to 1), for which the following holds: $$A_{ij} > 0 \iff A_{ji} > 0.$$ I know that no more than 15 entries per row are positive, and if $A_{ij} > 0$, then $\frac{1}{15} \leq A_{ij} \leq \frac{6}{11}$ (this last bound is less important, the point is that it's $< 1$). For instance, this implies that $A^2_{ii} \geq 1/15$. It's easy to prove that, for each $k > 0$, the smallest positive entry of $A^k$ is $\geq \frac{1}{15^k}$. It is also easy to prove that the greatest entry of $A^k$ will be $\leq \frac{6}{11}$, as $$A_{ij}^k \leq A^{k-1}_i(A^{k-1})^j \leq \max A_i^{k-1}.$$ However, I would like to either a) give a better bound on the smallest positive entry, b) give a bound on the greatest entry that also varies in the exponent. The reason to do this is to give a good bound on the principal ratio $\gamma$ (defined below) of the Perron vector $x$ of $A$, i.e., the positive vector satisfying $Ax=x$ and $||x||_1 = 1$. We know that there's some exponent $d < n$ such that $A^d$ is positive. By Minc's Nonnegative matrices, Theorem 3.1, one has $$\gamma := \max_{i,j}\frac{x_i}{x_j} \leq \max_{s,t,j}\frac{A^d_{sj}}{A^d_{tj}},$$ but the best we have with the previous discussion is $\gamma \leq 6\cdot 15^d / 11$. So, apart from a better entrywise bound, is it possible that I can get a (perhaps constant?) bound on $\gamma$? Some of the references I'm working with are: Ostrowski, A.M. (1960). On the Eigenvector belonging to the Maximal Root of a Non-negative Matrix. Minc, H. (1974). Nonnegative matrices. Schneider, H. (1958). Note on the fundamental theorem on irreducible non-negative matrices. There are a number of related questions here on MO: What is the Birkhoff norm of a Perron vector? concentration for eigenvectors The height of the Perron-Frobenius eigenvector Lower bound on the entries of the Perron vector
2025-03-21T14:48:30.343899
2020-04-17T14:23:46
357763
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/14443", "kneidell" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628216", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357763" }
Stack Exchange
Bounds on the component group of the center of a reductive group arising as centralizer Let $k$ be an algebraically closed field and let $G$ be a reductive, but not necessarily connected, algebraic group over $k$. (For me: reductive group = no non-trivial connected normal unipotent subgroups.) Let $Z=Z(G)$ and $Z^\circ=Z(G)^\circ$ denote the center of $G$ and its identity component. In the case where $G$ is connected, the order of the component group $Z/Z^\circ$ is bounded in terms of the root datum of $G$; namely, it is known that $Z^\circ$ coincides with the solvable radical of $G$ and that $Z/Z^\circ$ is the center of the semisimple group $G/Z^\circ$, and hence has order no greater than the order of the fundamental group of $G$, which is given in terms of the root datum of $G$ (this is explained, e.g., in chapter 9 of [1]). However, if $G$ is disconnected, it may very well be that the radical contains $Z^\circ$ properly, in which case this type of argument would fail. Assume now that $G=\mathrm{C}_H(g)$ where $H$ is a connected reductive group and $g\in H$ is semisimple. Let $Z$ and $Z^\circ$ be as above. My Question: Is there a way to bound the order of $Z/Z^\circ$ in terms of the root datum of $G^\circ$? Or of $H$? Edit. The original scope of the question was restricted following user YCor's comment. [1] Malle, Gunter; Testerman, Donna, Linear groups and finite groups of Lie type., Cambridge Studies in Advanced Mathematics 133. Cambridge: Cambridge University Press (ISBN 978-1-107-00854-0/hbk). xiv, 309 p. (2011). ZBL1256.20045. You can take $G=S\times F$ for an arbitrary finite abelian group $F$ (of order coprime to the characteristic) and given reductive $S$. For such a group $Z/Z^\circ$ contains $F$ and hence is arbitrary large. So you certainly want to put further restrictions. @YCor certainly, you're correct. I forgot about this.. I changed the question to focus only on the case of disconnected centralizers. I don't know if I can actually give a more generalized setting.. OK it's fine (I don't think you need to emphasize negative assumptions such $G$ being disconnected, just say once that $G$ is not assumed connected). I guess what's important in your new assumption is being the centralizer of a semisimple element in a fixed connected reductive group.
2025-03-21T14:48:30.344064
2020-04-17T14:24:58
357764
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628217", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357764" }
Stack Exchange
A question about Hölder exponents of a function at different points in its domain Suppose that $f(x)$ is continuous on $[0,1]$. We adopt the convention that $f(x)$ satisfies an $\alpha$-Hölder condition at $y$ if there exists an interval $[a,b]\subseteq[0,1]$ containing $y$ such that $f(x)$ satisfies an $\alpha$-Hölder condition on $[a,b]$. Define $$A(y)=\sup\{\alpha : f(x) \text{ satisfies an $\alpha$-Hölder condition at $y$}\}.$$ What will the graph of $A(y)$ look like on $[0,1]$ for various $f(x)$, for example for functions of unbounded variation? Are there any constraints on $A(y)$?
2025-03-21T14:48:30.344136
2020-04-17T14:32:42
357765
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Ahmad Jamil Ahmad Masad", "Aphelli", "Carlo Beenakker", "GH from MO", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/113991", "https://mathoverflow.net/users/11919", "https://mathoverflow.net/users/165657" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628218", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357765" }
Stack Exchange
Thomas Ordowski conjecture at OEIS sequence A007097 There is a conjecture that I found interesting concerning the prime numbers, and my question is to find a proof or a disproof of it. That conjecture was given by Thomas Ordowski at OEIS sequence A007097; you can see it at the following link https://oeis.org/A007097 The conjecture is as follows: let $a(n)$ denote the "primeth recurrence" defined by letting $a(n+1)$ be the $a(n)$-th prime (with initial condition $a(0)=1$); then $\log(a(1)) \log(a(2)) \cdots \log(a(n)) \sim a(n)$. Note: before this bounty, when I asked this question in April 2020, the member "GH from MO" claimed a disproof of the conjecture, which later turned out to be incorrect. Because of that I started this bounty. Thank you. I think you will get a better response if you make the question self-contained. @Carlo Beenakker thank you for the advice, I will do it as soon as possible. @GH from MO: I'm sorry but (I saw this from the followup on MSE) I can't see the contradiction. Why can't $\log{a_n}$ and $\log{a_{n-1}}$ be equivalent? @Mindlack: You are right, I don't know what I was thinking! In fact $a(n)\sim a(n-1)\log a(n-1)$ implies that $\log a(n)\sim\log a(n-1)$. @AhmadJamilAhmadMasad: My argument was incorrect (see the previous remark), so please remove "This conjecture was disproved, see the MathOverflow link" under https://oeis.org/A007097 Thanks in advance! @GH from MO I will do that as soon as possible. @GH from MO, Done. The comment was removed.
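For readers who want to see the conjecture numerically, the following short script (my own, not from the thread) computes the first terms of the primeth recurrence with sympy and prints the ratio $\log(a(1))\cdots\log(a(n))/a(n)$; the ratio creeps upward slowly, which is consistent with (though of course very far from proving) the conjecture.

```python
from math import log, prod
from sympy import prime   # prime(n) = the n-th prime

a = [1]
for _ in range(10):       # a(10) = 648391 already requires the 52711-th prime
    a.append(prime(a[-1]))

for n in range(2, len(a)):
    ratio = prod(log(a[i]) for i in range(1, n + 1)) / a[n]
    print(n, a[n], round(ratio, 4))
```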
2025-03-21T14:48:30.344270
2020-04-17T15:18:16
357769
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Carl-Fredrik Nyberg Brodda", "HJRW", "https://mathoverflow.net/users/120914", "https://mathoverflow.net/users/135406", "https://mathoverflow.net/users/1463", "jpmacmanus" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628219", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357769" }
Stack Exchange
Reference request: Recent progress on the conjugacy problem for torsion-free one-relator groups? I am aware that the Spelling Theorem of B. B. Newman implies that one-relator groups with torsion are hyperbolic, and thus have a solvable conjugacy problem. My understanding is that for one-relator groups without torsion, the conjugacy problem is still open, though the most recent reference I have for this is over 20 years old. Have there been any recent developments in this area? Any references for work in this area would be appreciated. Thanks. This is still open. There was a claimed proof in the 90s, but I believe this is generally accepted to contain gaps. @Carl-FredrikNybergBrodda That's interesting. Is this paper from 1992 the claimed proof you are referring to? Yes, that's the one. At the WOW, I am told the consensus was that the problem remains open. As someone who has actively tried to solve it, I'm certainly under the impression that the problem is open. I'm interested to hear about the claimed solution of Juhasz, which I didn't know about. I notice that the linked paper is only a (self-confessed) sketch. I wonder if a more detailed account was ever written? As mentioned in the comments, this is still considered an open problem. I thought I'd flesh out a few aspects. A solution was claimed in 1992 by Juhasz, but it seems to have failed to convince experts. The small cancellation theory involved in the proof seems to be very intricate, though I am not aware of any concrete example of a gap in the proof. Perhaps someone can fill me in on this. At the Winter One-relator Workshop at the University of East Anglia two years ago, in 2018 (see WOW), this very question was brought up. I was not there myself, but it was organised by my supervisor; I myself asked him this question not long ago, and he conveyed that the consensus was that the problem remained open. As another example from the literature, in these excellent notes by Andrew Putman, one reads "Whether or not torsion-free one-relator groups have a solvable conjugacy problem is a famous and difficult open question". I am not sure exactly when these notes are from, but certainly after 2016. So: the problem remains open.
2025-03-21T14:48:30.344458
2020-04-17T15:57:22
357772
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "LL 3.14", "Michael Engelhardt", "fluxy44", "https://mathoverflow.net/users/134299", "https://mathoverflow.net/users/153203", "https://mathoverflow.net/users/156454", "https://mathoverflow.net/users/33741", "leo monsaingeon" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628220", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357772" }
Stack Exchange
How to compute the integral of a Gaussian over a noncentered ball? Let $\mathcal{B}(x,r)$ be the ball of center $x \in \mathbb{R}^n$ and radius $r>0$ (so $\mathcal{B}(x,r) = \{y \in \mathbb{R}^n : \|y-x\| \leq r\}$, where all norms are $\ell^2$-norms). I would like to express the following integral analytically: \begin{equation} \int_{\mathcal{B}(x,r)} \exp\left(\frac{-\|y-\mu\|^2}{2r^2}\right)dy. \end{equation} For $\mu=x$, there is an easy analytic expression in the literature, using radial symmetry. However, it seems more complicated to me when $\mu \neq x$, and I did not find a lot of literature on this case. Is it possible to express this analytically? How could I proceed? Already when $x=\mu$ there is no analytical solution to my knowledge except a special function ... could you make your claim more precise? yup, no analytical solution even for centered Gaussians, even in dimension 1... vote to close For even $n$, there certainly is a simple analytical solution for $x=\mu $ due to the measure in spherical coordinates. You are right. By analytical I meant expressing the solution in terms of the erf function, for instance (or some other well-known function). Without a real loss of generality, I can assume that $x=0$, $2r^2=1/π$, and I check $$ J(x)=\int_{\vert y\vert\le 1} e^{-π\vert x-y\vert^2} dy=(G\ast\mathbf 1_{\mathbb B^n})(x),\qquad G(x)=e^{-π\vert x\vert^2}. $$ The function $J$ is radial (if $A\in O(n)$, just calculate $J(Ax)$ by the change of variables $y=Az$) so that $J(x)=g(\vert x\vert)$ and thus \begin{align} g(r) &=\int_{\vert y\vert\le 1} e^{-π\vert r e_1-y\vert^2} dy \\ &=\int_{\vert y_1\vert\le 1} \int_{\vert y'\vert^2\le 1-y_1^2}e^{-π(r-y_1)^2-π \vert y'\vert^2} dy\\ &=\int_{\vert t\vert\le 1}e^{-π(r-t)^2} \int_0^{\sqrt{1-t^2}} e^{-π \rho^2} \rho^{n-2} d\rho \vert\mathbb S^{n-2}\vert dt \\ &=\int_{\vert t\vert\le 1}e^{-π(r-t)^2} E_{n-1}(\sqrt{1-t^2})dt, \end{align} where $E_{n-1}$ is an Erf-type function. Not very explicit, but an integral in 1D.
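To complement the answer, here is a numerical sketch of the same reduction (my own script; it assumes $n\ge 2$ and uses scipy, with the inner radial integral expressed through the lower incomplete gamma function, which plays the role of the Erf-type function $E_{n-1}$ above), together with a crude Monte Carlo cross-check:

```python
import numpy as np
from scipy import integrate, special

def gaussian_mass_on_ball(n, r, d):
    # integral of exp(-|y - mu|^2/(2 r^2)) over a ball of radius r whose centre
    # is at distance d from mu; only d matters, by the rotational symmetry above
    surf = 2 * np.pi ** ((n - 1) / 2) / special.gamma((n - 1) / 2)   # |S^{n-2}|
    a = (n - 1) / 2
    def E(R):
        # int_0^R exp(-rho^2/(2 r^2)) rho^(n-2) drho via the incomplete gamma function
        return 0.5 * (2 * r * r) ** a * special.gamma(a) * special.gammainc(a, R * R / (2 * r * r))
    integrand = lambda t: np.exp(-(t - d) ** 2 / (2 * r * r)) * surf * E(np.sqrt(r * r - t * t))
    return integrate.quad(integrand, -r, r)[0]

def mc_check(n, r, d, N=400000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.uniform(-r, r, size=(N, n))          # sample the bounding box [-r, r]^n
    mu = np.zeros(n); mu[0] = d
    inside = np.linalg.norm(y, axis=1) <= r
    vals = np.exp(-np.sum((y - mu) ** 2, axis=1) / (2 * r * r))
    return (2 * r) ** n * np.mean(vals * inside)

print(gaussian_mass_on_ball(3, 1.0, 0.7), mc_check(3, 1.0, 0.7))   # should roughly agree
```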
2025-03-21T14:48:30.344617
2020-04-17T17:32:41
357776
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "YCor", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/142777", "user2679290" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628221", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357776" }
Stack Exchange
Examples of 3-transitive expander families of Schreier graphs What are examples of expander families of Schreier graphs coming from an action that is 3-transitive? It is better to have an option for randomization. We know that choosing 2 elements at random in a simple Lie group leads to an expander family of Cayley graphs. Is the same thing true, for example, if we randomize elements in the Schreier graph of $\mathrm{PSL}_2$ acting on the projective line? Let's ask this formally: Given $\epsilon$, is there a family of bounded degree Schreier graphs with a 3-transitive group action such that if I choose x generators at random, I have $\epsilon$-expansion with probability p independent of n (the number of vertices of the graph)? I am looking for a known result similar to this. Or maybe references. And if not 3, then 2-transitive would be OK. Thanks There are known examples of such families (e.g. using Brooks' spectral gap for congruence actions of $\mathrm{SL}_2(\mathbf{Z})$). Could you be more precise? To me the question is presently written in too informal a style to really know what you wish to have. Added formality. I don't know what results are out there, so I have asked informally. Anything close to what I wrote would be nice. If you need to take specific generators for each $n$, that would be interesting too. So you're looking for a family $(G_i,X_i)$ with $G_i$ a finite group acting on $X_i$ (3-transitively) such that for some $x$ and each $\varepsilon>0$, the probability $p_i=p_i(x,\varepsilon)$ that a random $x$-tuple in $G_i$ (using uniform probability on $G_i^x$) generates $G_i$ with $\varepsilon$ spectral gap on $\ell^2(X_i)$ satisfies $\liminf p_i>0$. (This is not exactly what you're asking, but roughly is it?) Yes, looking for a result or related tools/references. Also, if you have certain predetermined generators, that would be fine. With predetermined generators, I mentioned Brooks' result: hence for each fixed generating subset of $\mathrm{SL}_2(\mathbf{Z})$, you have expansion for the Schreier graphs on $\mathbb{P}^1(\mathbf{Z}/p\mathbf{Z})$ when $p$ ranges over primes. Thanks. Could you be more specific? (a paper) Brooks proved the spectral gap for representations of $\mathrm{SL}_2(\mathbf{Z})$ factoring through a congruence quotient ($\mathrm{SL}_2(\mathbf{Z}/n\mathbf{Z})$ when $n$ ranges over positive integers). In particular, $\ell^2(\mathbb{P}^1(\mathbf{Z}/p\mathbf{Z}))$, when $p$ ranges over primes, has a spectral gap. Couldn't find such a paper. Could you be more specific? Do you remember any detail from the title/abstract? What is the name of the method? I think it's: R. Brooks, The first eigenvalue in a tower of coverings. Bull. Amer. Math. Soc. 13 (1985), no. 2, 137-140. But actually the spectral gap itself for congruence subgroups is rather due to Selberg: On the estimation of Fourier coefficients of modular forms, Proc. Symp. Pure Math. VII, Amer. Math. Soc. (1965), 1-15. Thanks a lot! LOL, I searched for Shimon Brooks whom I know a bit. Well, it was solved. It is a bit long for here, but given $G=SL(2,p)$ with $Cay(G, S)$ an expander, we can see that $Sc:=Sch[G,P^1(F_p),S]$ is also an expander, by comparing their mixing times and finding that the Schreier graph's is less than that of the Cayley graph. The action is 3-transitive because the action of $GL$ is. The idea is that $s_t \cdots s_1$ is mixed after $t$ steps, and that is what acts on any element in the Schreier graph. We divide it into classes in which $s_t \cdots s_1 \cdot x = y$, and use the mixing condition. 
Actually for a field $K$ the $\mathrm{SL}_2(K)$-action on distinct triples of $\mathbb{P}^1(K)$ has its orbits indexed by $K^*/(K^*)^2$, so is not transitive in general (for a finite field of odd characteristic it has 2 orbits). That is, it's not 3-transitive, albeit quite close. If $-1$ is not a square in $K$ (i.e., if $K$ has cardinality in $4\mathbf{Z}+3$), the action is however transitive on 3-element subsets, and also in this case the $\mathrm{GL}_2(\mathbf{Z})$-action is 3-transitive.
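For concreteness, here is a small numerical illustration (my own, not from the thread) of the predetermined-generator case: the Schreier graphs of $\mathrm{SL}_2(\mathbf{Z})$, generated by the two standard unipotent matrices and their inverses, acting on $\mathbb{P}^1(\mathbf{F}_p)$; by the Selberg/Brooks results cited above, their spectral gap is expected to stay bounded away from $0$ as $p$ grows.

```python
import numpy as np

def projective_line(p):
    # points of P^1(F_p): [x : 1] for x in F_p, plus the point at infinity [1 : 0]
    return [(x, 1) for x in range(p)] + [(1, 0)]

def act(g, pt, p):
    a, b, c, d = g
    x, y = pt
    u, v = (a * x + b * y) % p, (c * x + d * y) % p
    return ((u * pow(v, -1, p)) % p, 1) if v else (1, 0)

def schreier_spectral_gap(p):
    gens = [(1, 1, 0, 1), (1, p - 1, 0, 1), (1, 0, 1, 1), (1, 0, p - 1, 1)]
    pts = projective_line(p)
    idx = {pt: i for i, pt in enumerate(pts)}
    A = np.zeros((len(pts), len(pts)))
    for pt in pts:
        for g in gens:
            A[idx[pt], idx[act(g, pt, p)]] += 1    # symmetric: gens closed under inverse
    evals = np.sort(np.linalg.eigvalsh(A / len(gens)))[::-1]
    return 1 - evals[1]                            # gap of the normalised adjacency operator

for p in (101, 211, 401):
    print(p, round(schreier_spectral_gap(p), 3))
```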
2025-03-21T14:48:30.344880
2020-04-17T17:45:16
357778
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628222", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357778" }
Stack Exchange
Generalizations of the Sard-Smale Theorem The Sard-Smale theorem holds for Fredholm maps $f:M\rightarrow N$ between separable Banach manifolds $M,N$. There are some constraints relating the Fredholm index $\operatorname{ind}(f)$ of $f$ to its differentiability class. More precisely, we need to require $f\in C^{r}$, where $r>\max{(\operatorname{ind}(f),0)}$. I would like to know what generalizations of this theorem exist in two directions: assuming weaker regularity on $f$; assuming weaker structure on the spaces $M,N$. For instance, I know that the result is valid for bounded Fréchet manifolds in the case where $f$ is Lipschitz (see here). The result is also valid for some domains with empty interior (see here). I would also like to know if the above (or other) generalizations are maximal, in the sense that there are counterexamples in the case of weaker requirements. Any help is welcome.
2025-03-21T14:48:30.344967
2020-04-17T18:15:14
357780
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "1.414212", "Snacc", "https://mathoverflow.net/users/127239", "https://mathoverflow.net/users/519543" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628223", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357780" }
Stack Exchange
(Explicit) Basis for Kohnen's plus-space of modular forms of half integral weight Sorry if this is trivial, but I could not find any reference. Let $k,a,b$ be integers. The space of modular forms of integer weight $M_k(\text{SL}_2(\mathbb{Z}))$ admits a basis of the form $\{ E_4^aE_6^b : 4a+6b = k \}$ where $E_4$ and $E_6$ are the normalized Eisenstein series of weights 4 and 6, respectively. Similarly, the space of modular forms of half integer weight $M_{k+\frac{1}{2}}(4)$ for $\Gamma_0(4)$ has a basis of the form $ \{ \theta^aF^b : \frac{a}{2}+2b = k+ \frac{1}{2} \},$ where $\theta = 1+2\sum_{n\geq1}q^{n^2}$ and $F=\sum_{n\geq1, n \text{ odd}}\sigma_1(n)q^n$ (see page 254 of Kohnen's paper). Question: Does there exist a similar 'explicit' basis for Kohnen's plus subspace $$M_{k+\frac{1}{2}}^+(4) = \{ f \in M_{k+\frac{1}{2}}(4) : a_f(n) = 0 \text{ if } (-1)^kn\equiv 2,3\pmod4 \,\} ?$$ Extras: What about the generalized cases of forms of level $N$ in the respective spaces? For $M_k(\Gamma_0(N))$ this is answered here. What about $M_{k+\frac{1}{2}}(4N)$ and $M_{k+\frac{1}{2}}^+(4N)$? There does exist an explicit basis when $k$ is EVEN: denote by $E_{k,4}$ the Eisenstein series $E_k(4\tau)$. Then the Rankin-Cohen brackets $[\theta,E_{k-2j,4}]_j$ for $0\le j\le\lfloor k/6\rfloor$ (with a small modification for $k=2$) form a basis of $M^+_{k+1/2}(4)$, where of course $\theta$ is the usual generator of $M_{1/2}(4)$. The case $k$ odd is more difficult. The best I could come up with is to use brackets of $\theta$ with the two Eisenstein series of weight $k$ and quadratic character modulo 4, but the Kohnen space is only a subset of this. Thank you very much. So, at least the question is not trivial, and I think the case for general $N$ is trickier. - I will wait for some more time (no offence!) for other ideas and accept the answer if none comes. Do you have a source I can look at for the proof that $[\theta,E_{k-2j,4}]_j$ for $0\leq j\leq \lfloor k/6\rfloor$ form a basis for $M^+_{k+1/2}(4)$?
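As a quick sanity check of the monomial basis of the full space quoted in the question (my own script, which does not address the plus subspace; it builds the $q$-expansions of $\theta^aF^b$ to finite precision for one sample weight and verifies that they are linearly independent):

```python
import numpy as np

M = 30                                   # q-expansion precision

def mul(f, g):                           # truncated power-series product
    h = [0] * M
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g[:M - i]):
                h[i + j] += fi * gj
    return h

theta = [0] * M                          # theta = 1 + 2*sum_{n>=1} q^(n^2)
theta[0] = 1
n = 1
while n * n < M:
    theta[n * n] = 2
    n += 1

F = [0] * M                              # F = sum over odd n of sigma_1(n) q^n
for n in range(1, M, 2):
    F[n] = sum(d for d in range(1, n + 1) if n % d == 0)

k = 5                                    # weight k + 1/2 = 11/2, so a + 4b = 2k + 1
basis = []
for b in range(k // 2 + 1):
    a = 2 * k + 1 - 4 * b
    if a < 0:
        continue
    f = [1] + [0] * (M - 1)
    for _ in range(a):
        f = mul(f, theta)
    for _ in range(b):
        f = mul(f, F)
    basis.append(f)

# the rank equals the number of monomials, i.e. they are linearly independent
print(len(basis), np.linalg.matrix_rank(np.array(basis, dtype=float)))
```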
2025-03-21T14:48:30.345115
2020-04-17T19:23:26
357785
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Aleksandar Milivojević", "Pavel", "https://mathoverflow.net/users/104342", "https://mathoverflow.net/users/43645" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628224", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357785" }
Stack Exchange
Sullivan minimal model in the case of $H^1(V)\neq 0$ Is there a simple construction of a Sullivan minimal model $\Lambda U \rightarrow V$ in the case that $H^1(V)\neq 0$? Do you have a reference? I envisage a degree-wise construction as in the case of $H^1(V)=0$ explaining how to deal with the new phenomenon. The existence is proven very generally in Felix, Halperin, Thomas, Rational Homotopy Theory but I find the proof very complicated and not transparent enough in this simple case (they prove that relative Sullivan models exist and that any relative Sullivan algebra is isomorphic to a product of a minimal relative Sullivan algebra and an acyclic algebra) This probably isn't the answer you're looking for, but I think the most transparent construction is the one given by Sullivan himself in Infinitesimal Computations, section 5. It is not written in "formal" language, but is clear (and can be formalized). The construction is indeed degree-wise as in the simply connected case, but one might have to add generators to the same degree infinitely many times before moving on to the next degree (for example this happens if you start to model the wedge of two circles). Thanks for the reference! I have to confess that I have never looked in the original... But I will have a look at it when I have more time and try to rewrite it here if nobody else is interested in getting the points :-) Gelfand and Manin explain it very nicely in their book "Methods of Homological Algebra", last chapter.
2025-03-21T14:48:30.345250
2020-04-17T20:14:41
357789
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Matthew Daws", "Nik Weaver", "https://mathoverflow.net/users/23141", "https://mathoverflow.net/users/406" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628225", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357789" }
Stack Exchange
Measure algebra on the Bohr compactification vs the bidual algebras The following question probably reduces to some standard abstract harmonic analysis Twister play, but I'd still welcome some comments on it. Let $G$ be a locally compact Abelian group and let $bG$ denote its Bohr compactification (the Pontryagin dual of $\widehat{G}$ with the discrete topology). Denote by $\mathfrak{A}$ the space $L_1(G)^{**}$ furnished with either Arens product. Is there a canonical action of $M(bG)$ (the measure algebra on $bG$) on $L_\infty(G)$ that would give rise to an isometric homomorphism $M(bG)\to \mathfrak{A}$? But $C(b G)$ isometrically embeds in $L^1(G)^* \cong L^\infty(G)$, so wouldn't you expect $M(b G) \cong C(b G)^*$ to be a quotient of $L^\infty(G)^*$? I'm going to say no. The "canonical" pairing of $M(bG)$ with $L^\infty(G)$ is to integrate a function in $L^\infty(G)$ against the restriction to $G$ of a measure in $M(bG)$. But this is not faithful: any measure supported on $bG\setminus G$ would go to zero in $L^\infty(G)^*$. To pick up mass on this corona we would want to extend functions in $L^\infty(G)$ to $bG$. But you have to be almost periodic to canonically extend to $bG$, so it doesn't seem like there's any way to get what you want. There is at least one case where it is true, though. Suppose $G$ itself is compact (this argument doesn't need $G$ abelian), so that $bG = G$. Then $M(G)$ is the multiplier algebra of $L^1(G)$ and, as $L^1(G)$ has a contractive approximate identity, there is an isometric embedding $M(bG) = M(G) = M(L^1(G)) \rightarrow L^1(G)^{**}$. Let me sketch this. Let $A$ be a Banach algebra with contractive approximate identity $(e_\alpha)$. I will regard the multiplier algebra $M(A)$ as double centralisers: pairs of maps $L,R$ from $A$ to $A$ with $$ L(ab) = L(a)b, \qquad R(ab) = aR(b), \qquad aL(b) = R(a)b \qquad (a,b\in A). $$ It turns out that, using the approximate identity, one can show that $L,R$ are automatically linear, and also (closed graph theorem) that $L,R$ are bounded. (Or make this part of the definition, if you wish.) Turn $A^*$ and $A^{**}$ into $A$-bimodules in the usual way. Given $(L,R)\in M(A)$ let $x^{**}\in A^{**}$ be an accumulation point of the bounded net $(L(e_\alpha))$. For $x^*\in A^*$ and $a\in A$ compute: $$ \langle x^{**} \cdot a, x^* \rangle = \langle x^{**}, a \cdot x^* \rangle = \lim_\alpha \langle a \cdot x^*, L(e_\alpha) \rangle = \lim_\alpha \langle x^*, L(e_\alpha)a \rangle = \lim_\alpha \langle x^*, L(e_\alpha a) \rangle = \langle x^*, L(a) \rangle. $$ Thus $x^{**}\cdot a = L(a)$ (or the canonical image thereof in $A^{**}$). Similarly, $$ \langle a \cdot x^{**}, x^* \rangle = \lim_\alpha \langle x^*, a L(e_\alpha) \rangle = \lim_\alpha \langle x^*, R(a) e_\alpha \rangle = \langle x^*, R(a) \rangle, $$ so that $a\cdot x^{**} = R(a)$. This gives us the required embedding. For those who know about Arens products, there are clear links. I believe this construction is due to McKilligan (MathSciNet or JLMS Article). As Nik Weaver notices, of course $C(bG) = C(G) \subseteq L^\infty(G)$ and so we obtain a quotient map $\theta : L^\infty(G)^{*} \rightarrow M(G)$. Let $\phi:M(G)\rightarrow L^1(G)^{**}$ be the map we just constructed. Let $\mu\in M(G)$, so the associated double centraliser is $L(f) =\mu * f, R(f) = f * \mu$ for $f\in L^1(G)$. 
Then for $F\in C(G)$ (and writing $\cdot$ for the module actions, which are related to but not quite equal to convolution), $$ \langle \theta(\phi(\mu)), F \rangle = \langle \phi(\mu), F \rangle_{L^\infty(G)^*, C(G)}. $$ Now, a bounded approximate identity argument and a calculation show that every $F\in C(G)$ is equal to $f\cdot F'$ for some $F'\in C(G)$ and $f\in L^1(G)$. Thus $$ \langle \theta(\phi(\mu)), F \rangle = \langle F', \mu * f \rangle_{C(G), L^1(G)} = \langle \mu, F \rangle. $$ So $\theta \circ \phi$ is the identity, and hence $M(bG)$ is a complemented subspace of $L^\infty(G)^*$ in this case. I have had a quick think, and I cannot see how to say much in the non-compact case. Edit: Some comments on uniqueness, prompted by interesting questions of Nik Weaver. Given a bounded approximate identity $(e_\alpha)$ let $x_0^{**} \in A^{**}$ be some accumulation point. Then our embedding $M(A)\rightarrow A^{**}$ is $(L,R) \mapsto L^{**}(x^{**}_0)$, which is isometric if $\|x^{**}_0\|=1$. Notice that if $x^{**} = L^{**}(x^{**}_0)$ then $x^{**}\cdot a, a\cdot x^{**} \in A \subseteq A^{**}$ for each $a\in A$. Conversely, if $x^{**}\in A^{**}$ is any element with $x^{**}\cdot a, a\cdot x^{**} \in A$ for each $a\in A$, then we can define linear maps $L,R:A\rightarrow A$ with $L(a) = x^{**}\cdot a$ etc. and then $(L,R)\in M(A)$. It is tempting, but wrong, to think that we have shown that $$ M(A) \cong \{ x^{**}\in A^{**} : A\cdot x^{**}, x^{**}\cdot A \subseteq A \}. $$ What goes wrong is that we can have a non-zero $x^{**}\in A^{**}$ with $A\cdot x^{**} + x^{**}\cdot A =\{0\}$. In the example of $A=L^1(G)$ we know that $A\cdot A^* + A^*\cdot A$ is the sum of the left/right uniformly continuous functions. So any $x^{**}\in L^\infty(G)^*$ which annihilates these, but is non-zero, induces the zero multiplier. Notice that this cannot happen for $C^*$-algebras for example, as here $A^*\cdot A = A\cdot A^* = A^*$. So, the embedding of $M(A)$ into $A^{**}$ depends on the choice of $x_0^{**}$, an accumulation point of a bai of $A$. You can characterise such $x_0^{**}$ as the "mixed identities" of $A^{**}$ (a right identity for the 1st Arens product and a left identity for the 2nd Arens product). Any such mixed identity is some accumulation point of some bai of $A$. But in this case, when $G$ is compact, can't we trivially embed $M(G)$ in $L^\infty(G)^*$ by taking $\mu$ to integration against $\mu$? Seems like this is a homomorphism for either Arens product. @NikWeaver Nope: what about a point mass? $L^\infty(G)$ is only equivalence classes for Haar measure, so singular measures can't be integrated against. So how does a point mass act on $L^\infty(G)$ in your construction? For that matter, how does it correspond to a multiplier of $L^1(G)$? There is another way to view the construction (maybe a better one). If we convolve an $L^1$ and an $L^\infty$ function then we get a uniformly continuous function, which we can integrate against. Then take a limit along a bai in $L^1(G)$. So a point mass $\delta_s$ say "integrates" against an $L^\infty(G)$ function $F$ say by integrating $F$ against $L^1(G)$ functions which ever more closely approximate a point mass at $s$: so sort of an average of the value of $F$ near $s$. I guess it's all rather non-constructive... Does "take a limit" mean Banach limit or something? What you've described doesn't converge in the standard sense. Yes... Well, just some accumulation point. 
I think this slightly subtle point is lost in usual arguments in some phrase like "moving to a subnet if necessary, we may assume that..." I see. So, just to be clear, identifying $M(G)$ with the multipliers of $L^1(G)$ requires a choice of b.a.i. of $L^1(G)$ and a choice of an ultrafilter on $\mathbb{N}$? (Thank you for your patience, this is interesting.) (No need to thank me: these are interesting questions which I've not thought hard about before.) Let me have a think and edit my answer, rather than writing a too-quick comment...
2025-03-21T14:48:30.345806
2020-04-17T21:32:18
357793
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "David Roberts", "Praphulla Koushik", "categoricalimperative", "fosco", "https://mathoverflow.net/users/118688", "https://mathoverflow.net/users/156466", "https://mathoverflow.net/users/4177", "https://mathoverflow.net/users/7952" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628226", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357793" }
Stack Exchange
Kashiwara-Schapira Trilogy I'm going to start reading Kashiwara-Schapira's trilogy Categories and Sheaves, Sheaves on Manifolds, and Perverse Sheaves with someone soon. Flipping through the table of contents for Sheaves on Manifolds (SM), it seems like we could read either Cats n Sheaves or SM first. Which would you recommend we start with? Thank you much! What do you want to learn about? What's your end goal? I want to learn about the ways sheaves appear in and connect different areas of mathematics. Why not also read other sources, such as some SGA/stacks project, Mac Lane and Moerdijk, or even Bredon's "Sheaf theory" (it's old, but takes a different viewpoint to the alg geom school that took sheaves up shortly after)? https://twitter.com/viditnanda/status/1249033158659538945/photo/1 "Sheaves on Manifolds" is a good way to learn... well, how sheaves play a role in manifold theory. Especially if you want to compute cohomology of sheaves, and of other functors on derived categories. Beautiful and inspiring mathematics. But if your aim is "to learn about the ways sheaves appear in and connect different areas of mathematics", you should definitely turn to topos theory and categorical logic. Any prerequisites to start reading topos theory and categorical logic? Can you also share any personal choice of reference for the same? Do you know what a sheaf on a topological space is? I bet you do, since you wanted to read Kashiwara :) what else? Basic category theory (Yoneda lemma, co/limits, adjoints and their properties). First-order logic. Universal algebra. I do not know much about first-order logic/universal algebra. I will read about those first. Thanks.
2025-03-21T14:48:30.345966
2020-04-17T22:21:49
357795
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628227", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357795" }
Stack Exchange
Self-intersecting path of stacked regular tetrahedra (This question occurred to me after reading @IanAgol's reminiscences of Conway's spiral tetrahedron billiard path.) Let $T_i$ be a regular tetrahedron, and $P$ a collection of regular tetrahedra glued together face-to-face. Say that $P$ constitutes a "path of stacked regular tetrahedra" iff two conditions hold: The dual graph (node for each $T_i$, arc if $T_i$ shares a face with $T_j$) is a path. No edge of the construction is incident to more than three tetrahedra. The first condition intuitively insists on a snake-like object. The second condition excludes too many tetrahedra circling about one edge. (Without this, $5$ dihedral angles of $70.5^\circ$ fit into $360^\circ$, but $6$ do not.) My question is: Q. What is the fewest number of tetrahedra in a path $P$ of stacked regular tetrahedra that self-intersects? $P = \cup_i T_i$ self-intersects if a pair of distinct tetrahedra share a point strictly interior to both. So such a self-intersecting snake might be called a tetrahedral ouroboros. This example [1] establishes an upper bound of $31$ tetrahedra (adding one more would self-intersect), but clearly this is not the minimum number of tetrahedra. (This example was aiming toward closure, not self-intersection.) Fig. 6 (detail): $QH_7$: $4L+2=30$. [1] Elgersma, Michael, and Stan Wagon. "The quadrahelix: A nearly perfect loop of tetrahedra." arXiv:1610.00280 (2016). Michael Elgersma, a coauthor on the paper I cited, provided an answer to my question: an $11$-tetrahedron snake suffices to self-intersect. Here is his illustration of the first $10$ tetrahedra: He also sent me a convincing proof of optimality: Theorem: The shortest tetrahedral snake that has self-intersections has length $11$.
2025-03-21T14:48:30.346108
2020-04-17T22:53:26
357797
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628228", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357797" }
Stack Exchange
Characterization of uniformly convergent sequences of functions Let $f_n$ be a sequence of continuous functions from a metric space $(X,d)$ to itself. If $f_n$ converges uniformly to a function $f$ and $\varphi$ is a real-valued uniformly continuous function defined on $X$, then $\varphi\circ f_n$ converges to $\varphi\circ f$ uniformly. Proof: As $\varphi$ is uniformly continuous, given $\epsilon>0$, there exists $\delta>0$ such that $d(x,y)<\delta$ implies $\vert \varphi(x)-\varphi(y)\vert<\epsilon$. As $f_n$ converges uniformly to $f$, there exists $n_0\geq 1$ such that $n\geq n_0$ implies $d(f_n(x),f(x))<\delta$ for all $x\in X$. Thus, for $n\geq n_0$ we have $\vert \varphi(f_n(x))-\varphi(f(x))\vert< \epsilon$, concluding that $\varphi\circ f_n$ converges to $\varphi\circ f$ uniformly. Question: Conversely, if $\varphi\circ f_n$ converges uniformly to $\varphi\circ f$ for every real-valued uniformly continuous function $\varphi$ defined on $X$, does $f_n$ converge uniformly to $f$? It is easy to show that $f_n$ converges to $f$ pointwise. Indeed, for fixed $x$, the function $\delta(y)=d(y,f(x))$ is uniformly continuous. Then, as, in particular, $\delta \circ f_n(x)$ converges to $\delta \circ f(x)$, we have that $f_n(x)$ converges to $f(x)$.
2025-03-21T14:48:30.346193
2020-04-17T23:06:45
357799
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Dr. Pi", "Gerry Myerson", "Mateusz Kwaśnicki", "https://mathoverflow.net/users/108637", "https://mathoverflow.net/users/3684", "https://mathoverflow.net/users/9232" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628229", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357799" }
Stack Exchange
Inequality about exponential integrals I am reading about Dirichlet polynomials in the book Analytic Number Theory by Iwaniec-Kowalski. During the proof of Theorem 9.1, for any positive real numbers $T, N$, they define a piecewise linear and continuous function $f(t)$ which is an upper bound for the indicator function of $[0,T]$ and vanishes whenever $t>T+N$ or $t<-N$. Then they use the following bound without comment, $$ x > 1 \Rightarrow \left| \int_{t \in \mathbb R}f(t) x^{it }\mathrm d t \right| =O\left( \frac{1}{N (\log x )^2} \right) .$$ Can anyone justify this inequality? Integrate by parts twice, to get $(i \log x)^{-2} \int_{\mathbb R} f''(t) x^{it} dt$. Thanks! Can I ask what happens with $f''(t)$ when $t$ is near the points of discontinuity? OK, that was a little bit too fast; but the idea is the same: integrate by parts twice on each interval $(t_{j-1}, t_j)$ where $f$ is linear. The terms with $(i \log x)^{-1} f(t_j) x^{i t_j}$ cancel out, the terms $(i \log x)^{-2} f'(t_j) x^{i t_j}$ are $O(N^{-1} (\log x)^{-2})$, and what remains is an integral involving $f'' = 0$. Great, it works! How could you have predicted that the terms would cancel out without looking at the specific formulas? Edit: I guess this only has to do with the fact that $f$ is continuous. Yes, this is just a matter of continuity. The above calculation is the usual trick for the Fourier transform, re-written in terms of the Mellin transform. "the said authors"???
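For the record, here is a written-out version of the integration-by-parts argument sketched in the comments (my own write-up; the breakpoints of $f$ are denoted $-N=t_0<t_1<\dots<t_m=T+N$, and I assume the standard choice in which $f$ climbs linearly from $0$ to $1$ over an interval of length $N$ and descends symmetrically, so all slopes are $O(1/N)$):
$$\int_{\mathbb R} f(t)\,x^{it}\,\mathrm dt=\sum_{j=1}^{m}\int_{t_{j-1}}^{t_j} f(t)\,x^{it}\,\mathrm dt=\sum_{j=1}^{m}\left[\frac{f(t)\,x^{it}}{i\log x}\right]_{t_{j-1}}^{t_j}-\sum_{j=1}^{m}\int_{t_{j-1}}^{t_j}\frac{f'(t)\,x^{it}}{i\log x}\,\mathrm dt.$$
The boundary terms telescope and vanish because $f$ is continuous and $f(t_0)=f(t_m)=0$. Integrating by parts once more on each interval, where $f''\equiv 0$, only boundary terms survive:
$$\int_{\mathbb R} f(t)\,x^{it}\,\mathrm dt=-\sum_{j=1}^{m}\left[\frac{f'(t)\,x^{it}}{(i\log x)^2}\right]_{t_{j-1}}^{t_j}.$$
Since each one-sided slope of $f$ is $O(1/N)$ and there are boundedly many breakpoints, the right-hand side is $O\!\left(N^{-1}(\log x)^{-2}\right)$, which is the claimed bound.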
2025-03-21T14:48:30.346588
2020-04-17T23:08:39
357800
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Iosif Pinelis", "Thomas Dybdahl Ahle", "https://mathoverflow.net/users/36721", "https://mathoverflow.net/users/5429" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628230", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357800" }
Stack Exchange
Bounding $E[\|\Sigma^{-1/2}(X-\mu)\|_2^3]$ for 2-dimensional Bernoulli Let $X\in\{0,1\}^2$ have mean $\mu=\left[\begin{smallmatrix}p_1\\p_2\end{smallmatrix}\right]$ and $\Pr[X_1 = X_2 = 1] = p\le \min\{p_1,p_2\}$. (Note we must have $1-p_1-p_2+p\ge 0$ for the distribution to be well defined.) We can then compute the covariance matrix $\Sigma = E[(X-\mu)(X-\mu)^T] = \left[\begin{smallmatrix}p_1(1-p_1)&p-p_1p_2\\p-p_1p_2&p_2(1-p_2)\end{smallmatrix}\right]$. I would like to use the Berry–Esseen bound, and for that we need to upper bound the quantity $\gamma=E[\|\Sigma^{-1/2}(X-\mu)\|_2^3]$. I believe one should be able to show $$\gamma \le C\left(\tfrac1{\sqrt{p_1(1-p_1)}}+\tfrac1{\sqrt{p_2(1-p_2)}}+\tfrac1{\sqrt{\min\{p_1,p_2\}-p}}\right) , $$ for some universal constant $C>0$. The symbolic computation of $\Sigma^{-1/2}$ is a bit unwieldy though, so I wonder if there are some tricks I could use to arrive at this result more neatly? Or if not, any proof would be appreciated. Update: Using Pinelis's observation, we can compute $$\begin{align}\gamma |\Sigma|^{3/2} &= p[(1-p_1)(1-p_2)v]^{3/2} \\&+ (p_1-p)[(1-p_1)p_2(1-v)]^{3/2} \\&+ (p_2-p)[p_1(1-p_2)(1-v)]^{3/2} \\&+ (1-p_1-p_2+p)[p_1 p_2 v]^{3/2} , \end{align}$$ where $v = p_1+p_2-2p$ and $|\Sigma|=p_1p_2(1-v)-p^2$. It seems like I forgot a factor $1/\sqrt{p}$. In fact it seems that bounds of $\gamma \le \frac{2} {\sqrt{p\,(\min\{p_1,p_2\}-p)}} $ or $\gamma \le \frac{1} {\sqrt{p}}+ \frac{2} {\sqrt{\min\{p_1,p_2\}-p}} $ are sufficient and correct. Your inequality does not hold in general: You don't need to compute $\Sigma^{-1/2}$, because $\|\Sigma^{-1/2} x\|_2^2=x^T\Sigma^{-1}x$ for all $x\in\mathbb R^2$. Using this simple observation with e.g. $p=0$, $p_1=1/2$, $p_2=1/2-\epsilon$, and $\epsilon\downarrow0$, we find $$\gamma =\frac{\left(1-\epsilon -\epsilon ^2\right)^{3/2}}{\sqrt{\epsilon }}+\frac{1}{2} (1+\epsilon )^{3/2}+\frac{\left(1-\epsilon +2 \epsilon ^2\right)^{3/2}}{2 \sqrt{1-2 \epsilon }}\to\infty,$$ whereas $$\frac1{\sqrt{p_1(1-p_1)}}+\frac1{\sqrt{p_2(1-p_2)}}+\frac1{\sqrt{\min(p_1,p_2)-p}}\to4+\sqrt2.$$ Good point, and good approach. It looks like the term I need is $\frac{1}{\sqrt{(p_1+p_2-2p)(1-p_1-p_2+2p)}}$ rather than $\frac{1}{\sqrt{\min\{p_1,p_2\}-p}}$. What do you think about that version? @ThomasDybdahlAhle : I think the new version requires a rather different approach. Therefore, and because your current question has been answered, the new version should be posted as a separate question. My conjecture was disproved, and you did point in a good direction, but I still don't have a useful upper bound, which was the main point of the question. Do you have a suggestion for how to state the new question without too much overlap with this one? @ThomasDybdahlAhle : Your original question was fully answered. Therefore, I think any additional questions you may have should be posted separately.
In particular, the determinant is 0 exactly in the 1-faces of the simplex (see the picture), and $|\Sigma|^{-1/2}$ approaches those as $1/\sqrt\epsilon$, which $\gamma$ does too. The bound still isn't quite tight though, as it approaches the 0-faces/corners as $1/\epsilon$, while $\gamma$ approaches them as $1/\sqrt\epsilon$. (E.g. if we let $p=\epsilon, p_1=2\epsilon, p_2=3\epsilon$.)
2025-03-21T14:48:30.346820
2020-04-17T23:46:10
357802
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Alexander Chervov", "Joseph Van Name", "KhashF", "Piyush Grover", "YCor", "https://mathoverflow.net/users/10446", "https://mathoverflow.net/users/128556", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/22277", "https://mathoverflow.net/users/30684" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628231", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357802" }
Stack Exchange
Abstract mathematical concepts/tools that have appeared in machine learning research I am interested in knowing about abstract mathematical concepts, tools or methods that have come up in theoretical machine learning. By "abstract" I mean something that is not immediately related to that realm. For instance, a concept from mathematical optimization does not qualify since optimization is directly related to the training of deep networks. In contrast, to me Topological Data Analysis is a non-trivial example of applying algebraic topology to data analysis. Here are a few examples that I have encountered in the literature (all in the context of deep learning). Betti numbers have been utilized to introduce a complexity measure which could be used for comparing deep and shallow architectures: https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2014-44.pdf A connection between Sharkovsky's Theorem and the expressive power of deep networks: https://arxiv.org/pdf/1912.04378.pdf An application of Riemannian geometry: https://arxiv.org/pdf/1606.05340.pdf Algebraic geometry naturally comes up in studying neural networks with polynomial activation functions. This paper discusses functional varieties associated with such networks: https://arxiv.org/abs/1905.12207 I find it useful to compile a list of such research works on ML that draw on pure math. @nbro it seems to me that the question is math-focussed. Hard to answer it since every example you gave arises as 'naturally' in machine learning as optimization does. Possibly relevant https://mathoverflow.net/questions/266028/algebraization-of-bayesian-networks @PiyushGrover In that case, please feel free to include anything that you deem appropriate. I am not well versed in ML and I am mostly familiar with deep learning where optimization by gradient descent algorithms is an essential part of the training process. @nbro The works that I have cited use results from algebraic geometry, algebraic topology and dynamical systems. So I believe that this is the correct forum to ask for similar papers. Related: https://mathoverflow.net/questions/204176/group-theory-in-machine-learning Networks with only ReLU activations and integer weights are just rational functions over tropical semirings. Probably one of the most striking is "UMAP" (Uniform manifold approximation and projection) - a method of dimension reduction in machine learning. The authors of the method use CATEGORY THEORY for its discovery. There have been discussions about to what extent category theory is really required (see John Baez's blog and references therein); still, this is the authors' own account of how the method was discovered. (The algorithm/implementation can be understood without category theory). The method quite quickly became very popular, gaining 748 citations in two years according to Google Scholar. It has found applications in many fields including bioinformatics (UMAP Nature), and is capable of producing beautiful images (MO355631). It is similar to the previously widely used method tSNE (t-distributed stochastic neighbor embedding), but quite often produces better results with less computational effort, thus beating its predecessor in both quality and speed. The documentation can be found here: UMAP docs.
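For readers who want to try the method, here is a minimal usage sketch, assuming the umap-learn package (https://umap-learn.readthedocs.io) and scikit-learn are installed; the parameter values are just illustrative defaults.

```python
import umap
from sklearn.datasets import load_digits

X = load_digits().data                   # 1797 handwritten digits, 64 features each
reducer = umap.UMAP(n_neighbors=15,      # size of the local neighbourhood graph
                    min_dist=0.1,        # how tightly points may be packed together
                    n_components=2,      # target dimension of the embedding
                    random_state=42)
embedding = reducer.fit_transform(X)     # array of shape (1797, 2), ready to plot
print(embedding.shape)
```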
2025-03-21T14:48:30.347059
2020-04-18T00:30:26
357803
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628232", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357803" }
Stack Exchange
Infinite positions in 3D chomp I've recently come back to investigating ordinal chomp. See A winning move for the first player in $3 \times 3 \times \omega$ Ordinal Chomp for a definition. I made a new discovery: the position \begin{pmatrix}\omega&\omega&1 \\ \omega&\omega&1 \\ 2&0&0\end{pmatrix} is a losing position. I found this first by a computer search, and then by a proof showing that all possible moves from this position lead to winning positions (I can outline the proof on request). Positions like this (I call them quadruple $\omega$ positions) seem to be extremely rare, and I haven't been able to find another example so far. My question is: are there other quadruple $\omega$ positions in $3 \times 3 \times \omega$ chomp, and if so, what are they?
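Since the question mentions finding the result first by computer search, here is a rough sketch (my own reconstruction, not the asker's program) of a solver for finite truncations of $3\times3\times N$ chomp. A position is encoded by its $3\times 3$ matrix of column heights, exactly as in the matrix notation above; the $\omega$ entries can only be probed through finite caps $N$, and a verdict for finite $N$ is purely heuristic — it proves nothing about the ordinal position.

```python
# Memoized win/lose search for finite 3x3xN chomp positions (a sketch, not the
# asker's actual program).  A position is a 3x3 tuple of column heights h[i][j],
# weakly decreasing in i and j; the cell (0,0,0) is the poisoned one.
from functools import lru_cache

POISON = (0, 0, 0)

def moves(h):
    """All positions reachable in one legal move (eating the poison is excluded)."""
    for i in range(3):
        for j in range(3):
            for k in range(h[i][j]):
                if (i, j, k) == POISON:
                    continue
                # eating cell (i,j,k) truncates every column (i',j') with i'>=i, j'>=j to height k
                new = [list(row) for row in h]
                for i2 in range(i, 3):
                    for j2 in range(j, 3):
                        new[i2][j2] = min(new[i2][j2], k)
                yield tuple(tuple(row) for row in new)

@lru_cache(maxsize=None)
def mover_wins(h):
    """True iff the player to move wins from h; with no legal move they must eat the poison and lose."""
    return any(not mover_wins(g) for g in moves(h))

# Probe finite truncations of the position in the question (omega replaced by N):
for N in range(3, 8):
    h = ((N, N, 1), (N, N, 1), (2, 0, 0))
    print(N, "P-position (loss for the mover)" if not mover_wins(h) else "N-position (win for the mover)")
```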
2025-03-21T14:48:30.347144
2020-04-18T00:37:27
357804
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Minseon Shin", "Piotr Achinger", "R. van Dobben de Bruyn", "https://mathoverflow.net/users/127776", "https://mathoverflow.net/users/15505", "https://mathoverflow.net/users/3847", "https://mathoverflow.net/users/82179", "user127776" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628233", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357804" }
Stack Exchange
Splitting after pullback under finite flat morphisms I recently became aware of an interesting result which claims that for a smooth projective curve over a finite field, you can pull back any vector bundle, under combinations of Galois covers and Frobenius, so that it splits into a direct sum of line bundles. I wasn't able to find the reference (I'd appreciate it if anyone could provide one). You can find it here, Theorem 5.5. I was wondering whether similar results are known in higher dimensions? Thanks! I'll remove that part of the question. @R. van Dobben de Bruyn After thinking about your argument, I'm a little bit skeptical. Is the Frobenius morphism also faithfully flat? In the paper linked above a short exact sequence is turned into a split one by the action of Frobenius. Sorry, I'm being dumb. Yes, Frobenius is faithfully flat (see also Kunz's theorem), but there is no flat base change happening. That would be the case if we did an extension of the ground field or something. Deleted my comment. Here is a similar result for affine schemes (possibly singular and of arbitrary dimension), if we allow finite flat (possibly ramified) covers: Gabber, Liu, Lorenzini, Hypersurfaces in projective schemes and a moving lemma, Proposition 6.10. As for higher dimensions: first, the proof breaks down because of complications with Chern classes, preventing one from reducing everything to the dynamics of Frobenius on a fixed finite-type moduli space. Second, I would be surprised if this were possible, e.g., for the cotangent bundle on $\mathbf{P}^2$.
2025-03-21T14:48:30.347288
2020-04-18T01:57:51
357805
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gro-Tsen", "Igor Khavkine", "Moishe Kohan", "https://mathoverflow.net/users/109255", "https://mathoverflow.net/users/17064", "https://mathoverflow.net/users/2622", "https://mathoverflow.net/users/39654", "user143410" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628234", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357805" }
Stack Exchange
Lower bound for domain of exponential map on Lorentzian manifolds Let $M$ denote a manifold admitting a Lorentzian metric $g_{ab}$. Essentially, I would like to know the "minimum domain" on which the exponential map is defined at $p\in M$. To make this concrete, let $(M,g_{ab})$ be time-orientable and let $T\in T_pM$ denote a future-directed timelike vector at $p\in M$. Let $B_T(r)$ denote the open (Euclidean) ball of radius $r$ about the origin of $T_pM$ defined with respect to $T$. Of course, there always exists $r>0$, such that the exponential map on this ball $\text{exp}_p:B_T(r)\to M$ is well defined. Question: Are there known lower bounds---depending possibly, e.g., on the curvature in a neighborhood of $p$---for the minimum size of $r$? i.e., loosely speaking, for some "observer" at $p$ associated with $T$, what is the minimum (ball-shaped) domain for which the exponential map is well-defined? Clarification: As Igor Khavkine mentions in his answer, there exists a bound on the "injectivity radius" at $p\in M$ expressed in terms of any $r$ where the exponential map is assumed to be defined. However, I'm interested in a lower bound on $r$ itself. I'm most interested in the Lorentzian case, but pointers to bounds for ordinary Riemannian manifolds (with no need for the auxiliary vector $T$) may also be enlightening. The question as written makes no sense since the Euclidean metric that we are supposed to use in order to estimate the radius of the domain is unrelated to the Lorentzian metric for which the exponential map is defined. Even if you were dealing with a Riemannian metric, curvature tells you absolutely nothing about lower bounds, just think about open subsets of $R^n$. In Riemannian geometry, the largest such $r$ is the injectivity radius. And there are curvature based bounds for it. To make sense of such a radius in Lorentzian geometry, you need also some reference Riemannian metric. This approach is taken in the following reference, where curvature based bounds are also given: Injectivity Radius of Lorentzian Manifolds (2006) by Bing-Long Chen, Philippe G. LeFloch arXiv:math/0612860 I believe there's some confusion. I think the injectivity radius is the largest $r$ such that $\exp_p$ is injective on $B_T(r)$, but this may be smaller than the largest $r$ such that $\exp_p$ is defined, which OP was asking about: on the unit sphere, as on any geodesically complete manifold, $\exp_p$ is defined on the entire tangent space $T_p M$, but it is injective only up to $r = \pi$. I grant that your interpretation is a better literal reading of the OP, but I think there is some liberty in interpreting what "well defined" means. I suspect that I answered the intended question, but I may be wrong. Apologies for any confusion due to imprecise phrasing. However, I'm glad you mention the Chen-LeFloch paper, Igor. Note their injectivity radius bound (Thm 1.1) assumes the exponential map is defined on ball $B_T(r_0)$ for some $r_0>0$. Their bound is then stated in terms of $r_0$. Hence, if one had a lower bound on the (ball-shaped) domain of the exponential map (i.e., a lower bound on $r_0$), Chen-Lefloch would then provide a bound on the minimum possible injectivity radius (for some fixed metric with bounded curvature). So, a bound on $r_0$--which is what I want--would then be very useful. @user143410, consider the example of the interior of a Euclidean half-space. The $r_0$ at $p$ cannot be greater than the distance of $p$ to the (open) boundary of the half space. 
This means that there is no uniform lower bound for $r_0$ on the manifold. Also, whatever the lower bound is at $p$, you cannot get it from local curvature, since the manifold is everywhere flat. The lower bounds that you are looking for somehow depend on the structure of this space "at large". If this example is not what you had in mind, you need some conditions to eliminate it. Right, I wouldn't expect a uniform bound and, as your example indicates, I should have anticipated that global properties would enter. Essentially, the (ball-shaped) domain of the exponential map is determined by the length of the shortest inextendible geodesic passing through $p$. Perhaps proofs for classic singularity theorems will provide some insight for me. I would think there must be a way to estimate the size of $r_0$ for say a spacetime ($R^D$,$g_{ab}$) with $g_{ab}$ non-flat and satisfying some (physically-reasonable) global constraints. You've waited a long time for an answer. And I am surprised that no one has written one. Let $M=\mathbb{R}\times(0,\infty)$. The exponential map isn't defined at $(x,t)$ for vectors $(u,s)$ with $s<-t$. Since there are points with arbitrarily small $t$ there are points with arbitrarily small injectivity radius. Note that this is independent of curvature. This is a result of incompleteness.
2025-03-21T14:48:30.347603
2020-04-18T03:40:55
357810
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "David T.", "Nate Eldredge", "https://mathoverflow.net/users/156476", "https://mathoverflow.net/users/4832" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628235", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357810" }
Stack Exchange
probability a negative drift random walk is positive after certain time Suppose I have a simple continuous-time random walk starting at $0$ at time $0$, with Poisson transition rate $1$, where each jump is $+1$ with probability $p$ and $-1$ with probability $1-p$. Suppose $p < 1/2$, so the drift is negative. I am interested in $q_t$, the probability that the random walk will be nonnegative at some time at or after $t$. Is there a nice closed-form expression? The question is not about the probability of being nonnegative AT a particular time $t$, but AT or AFTER a particular time $t$. Ah, sorry, I misread.
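No closed form is claimed here, but as a sanity check one can estimate $q_t$ numerically. Below is a hypothetical Monte Carlo sketch (plain Python): it simulates the continuous-time walk through exponential waiting times and records whether the path is nonnegative at some time in $[t, T_{\max}]$. Truncating at a finite horizon $T_{\max}$ only approximates the event "nonnegative at some time $\ge t$", and all parameter values are arbitrary.

```python
# Hypothetical Monte Carlo estimate of q_t for the walk described above:
# rate-1 Poisson jump times, each jump +1 with probability p and -1 otherwise.
# The finite horizon t_max makes this an approximation of the true q_t.
import random

def estimate_q(t, p, t_max=400.0, n_paths=5000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        pos, now, ok = 0, 0.0, False
        while now < t_max and not ok:
            nxt = now + rng.expovariate(1.0)        # next jump time
            if nxt > t and pos >= 0:
                ok = True                           # the walk holds a nonnegative value on [now, nxt),
                                                    # an interval containing a time >= t
            pos += 1 if rng.random() < p else -1    # apply the jump
            now = nxt
        hits += ok
    return hits / n_paths

print(estimate_q(t=20.0, p=0.4))                    # rough estimate of q_20 for p = 0.4
```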
2025-03-21T14:48:30.347690
2020-04-18T06:25:13
357819
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "April", "Deane Yang", "Gro-Tsen", "https://mathoverflow.net/users/118316", "https://mathoverflow.net/users/17064", "https://mathoverflow.net/users/35593", "https://mathoverflow.net/users/40804", "https://mathoverflow.net/users/613", "mme", "user35593" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628236", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357819" }
Stack Exchange
How close are the exponential maps on $\mathbb{S}^2$ at two nearby points? Consider the two dimensional sphere $\mathbb{S}^2$ and let $p, q \in \mathbb{S}^2$. Let $\text{exp}_{p}$ and $\text{exp}_{q}$ be the exponential maps on $\mathbb{S}^2$ at points $p$ and $q$ respectively. I am interested in the map $\psi := \text{exp}_{p}^{-1} \circ \text {exp}_{q}$ defined on the unit disc $\mathbb{D} \subset \mathbb{R}^2$. I expect that if $p$ and $q$ are nearby points, then the map $\psi$ is close to the identity map. My question is: Is there a way to quantify this closeness? More precisely, is it possible to write the Taylor series expansion of $\psi = (\psi_1, \psi_2)$ around the origin as \begin{align*} \psi_1(x,y) &= \psi_1(0) + \frac{\partial \psi_1}{\partial x}(0)~x + \frac{\partial \psi_1}{\partial y}(0)~ y + \ldots \\ \psi_2(x,y) &= \psi_2(0) + \frac{\partial \psi_2}{\partial x}(0)~x + \frac{\partial \psi_2}{\partial y}(0)~ y + \ldots \end{align*} and show that the only significant coefficients in the above expansion are $\frac{\partial \psi_1}{\partial x}(0) \approx 1$, $\frac{\partial \psi_2}{\partial y}(0) \approx 1$ and all other partial derivatives $\frac{\partial^{m+n} \psi_i}{\partial^{m} x \partial^{n} y}(0) \approx 0$, for all other choices of $m,n$ and any $i = 1,2$. Edit: After reading the comments/ answers of user35593 and Gro-Tsen , I realized that the following comments are essential regarding the above question: First is regrding the comment of user35593 (please read his/her comment below): The question makes sense only when we specify a certain way of identifying the tangent spaces at the points $p$ and $q$. Consider the rotation $R :\mathbb{R}^3 \rightarrow \mathbb{R}^3$ which maps $p$ to $q$, then it also maps $T_{p}(\mathbb{S}^2)$ to $T_{q}(\mathbb{S}^2)$ and it gives a way of identifying the two tangent spaces. The above question I asked is for the tangent spaces identified in this manner. Second is regarding the answer of Gro-Tsen : One possible interpretation of ``$\frac{\partial^{m+n} \psi_i}{\partial^{m} x \partial^{n} y}(0) \approx 0$'' which I am interested in is: Given $\epsilon >0$, is it possible to find $\delta >0$ such that whenever $d_{\mathbb{S}^2}(p,q)\leq \delta$ we have \begin{align*} \left| \frac{\partial^{m+n} \psi_i}{\partial^{m} x \partial^{n} y}(0) \right| \leq \epsilon \end{align*} whenever $m+n \leq 100$ (except for the cases of $\frac{\partial \psi_1}{\partial x} (0)$ and $\frac{\partial \psi_2}{\partial y} (0)$ which I expect to be $\approx 1$). Thanks! Have you tried writing down explicit formulae? Letting $\mathbb{S}^2 = {x^2+y^2+z^2=1} \subseteq \mathbb{R}^3$, we can assume $q=(0,0,1)$ and $p=(\sin δ,0,\cos δ)$, then $\exp_p(u,v) = (\sin r,\cos θ, \sin r,\sin θ, \cos r)$ where $(r,θ)$ are the polar coordinates of $(u,v)$, and $ψ$ takes $(u,v)$ to $(u',v')$ where $(\sin r',\sin θ', \cos r', \sin r',\cos θ') = (-\sin δ,\cos r + \cos δ,\sin r,\cos θ, \sin r,\sin θ, \cos δ,\cos r + \sin δ,\sin r,\cos θ)$. At least the map $(r,θ)\mapsto (r',θ')$ looks reasonably explicit. Your map maps the tangent space of q to the tangent space of p. You need to specify how you identify the tangent spaces with $\mathbb{R}^2$, i.e. what bases you choose in order to arrive at a map $\mathbb{R}^2\rightarrow \mathbb{R}^2$. You could for example consider the geodesic from p to q. and identify the tangent vectors to the geodesic at p and q with each other. @Gro-Tsen Thanks! Yes, I did try writing out explicitly the exponential map like you suggested. 
But the problem which I faced as well as with what you suggest is that $(r',\theta')$ is only implicitly defined using the above equlation you mentioned. I couldn't make any further progess with this. @user35593 You are right, I should have clarified your point in the question. The tangent spaces at $p$ and $q$ are identified the way you suggested. An alternate description is if $R :\mathbb{R}^3 \rightarrow \mathbb{R}^3$ is the rotation which maps $p$ to $q$, then it also maps $T_{p}(\mathbb{S}^2)$ to $T_{q}(\mathbb{S}^2)$ and it gives a way of identifying the two tangent spaces. What should be true is that for fixed $p$, and $q$ nearby, the maps $F_q = \exp_q^{-1} \exp_p$, considered as a map on (a ball in) $\Bbb R^2$ by using a fixed trivialization of $TS^2$ near $p$. These should vary smoothly in $q$, with $F_p$ given by the identity. Since $D^n F_p = 0$ for $n>1$, and the maps $F_p$ are smooth in $p$, the Taylor approximation should give the existence of $C_n$ so that $|D^n F_q| \leq C_n |p-q|$ for $q$ near to $p$. This seems like the sort of smallness estimate you want. Gro-Tsen's answer implies that $C_n \to \infty$, though. @MikeMiller: Awesome! This perfectly answers my question :) You can argue that your map and therefore also the partial derivatives at 0 depend smoothly on $p, q$. For $p=q$ you get the identity. Then the coefficients are exactly 1 resp. 0. For $p\noteq$ they are of order $d(p,q)$ away from the identity case. It probably is easiest to consider, given a curve $c$ such that $c(0) = p$, the map $$ \psi_t = \exp_p^{-1}\circ\exp_{c(t)}: T_{c(t)}\mathbb{S} \rightarrow T_p\mathbb{S}. $$ In particular, its derivative with respect to $t$ at $t = 0$ is a essentially a Jacobi field and satisfies the Jacobi equation, which here is just $$ J'' + J = 0. $$ I believe the following proves that the partial derivatives of $\psi$ at the origin cannot be bounded by a constant (so they certainly cannot be $\approx 0$ for any reasonable meaning of this symbol). Assume on the contrary that (for some fixed $p,q$ distinct and not antipodal), the partial derivatives of $\psi := \exp_p^{-1} \circ \exp_q$ (defined in some neighborhood of the origin) are all bounded by a constant. Then, by summing the Taylor series expansion of $\psi$ at $0$ we see that $\psi$ extends to a real-analytic function $\psi\colon\mathbb{R}^2 \to \mathbb{R}^2$, which by analytic extension must still satisfy $\exp_p \circ \psi = \exp_q$. Let me argue why this is impossible. We can assume w.l.o.g. that the coordinates $(u,v)$ on the tangent plane to $\mathbb{S}^2$ at $q$ were chosen so that $\exp_q$ maps the axis $(u,0)$ to the great circle connecting $q$ and $p$, and more precisely, if $0<\delta<\pi$ is the distance between $q$ and $p$ on $\mathbb{S}^2$, that $\exp_q$ takes $(\delta,0)$ to $p$. Furthermore, we can similarly assume on the coordinates $(u',v')$ of the tangent plane at $p$ that $\exp_p$ maps the axis $(u',0)$ to the same great circle and takes $(-\delta,0)$ to $q$. Then $\exp_q(u,0)$ is the point obtained by traveling a distance $u$ on $\mathbb{S}^2$ starting from $q$ in the direction of $p$, and $\exp_p(u,0)$ is the point obtained by traveling a distance $u$ on $\mathbb{S}^2$ starting from $p$ in the direction opposite to $p$, thus $\psi(u,0) = (u-\delta,0)$ for $u$ in the neighborhood of $0$, hence everywhere by analytic extension. On the other hand, if $(u,v)$ lies on the circle $C$ with radius $\pi$ around the origin then $\exp_q$ takes $(u,v)$ to the antipode $\tilde q$ of $q$. 
But the inverse image of $\tilde q$ by $\exp_p$ is discrete (since $\exp_p$ is a diffeomorphism outside of circles of radius $k\pi$ around the origin which are mapped to either $p$ or its antipode $\tilde p$, and we are assuming $p,q,\tilde p,\tilde q$ distinct); and $\psi$ must map $C$ (which is connected) inside this inverse image: so $\psi$ must be constant on $C$. But this contradicts the fact that $\psi(\pi,0) = (\pi-\delta,0)$ and $\psi(-\pi,0) = (-\pi-\delta,0)$ (as per previous paragraph) are not equal. Nice answer! Though I only realized after the comment of user35593 that the way we identify the tangent spaces is very important and the question makes sense only if we identify it appropriately - like your observation above shows. Hence I have edited the question. The expectation is $\psi \approx \text{Id}$ only when $T_{p}(\mathbb{S}^2)$ and $T_{q}(\mathbb{S}^2)$ are identified like I have mentioned in the edited version. This is not the way you have identified the tangent spaces though. It is the way I identified the tangent spaces. However, now that you've edited the question to clarify what you mean by $\approx 0$, and the order of the quanrifiers, this doesn't answer it. But it's not related to the identification of the tangent spaces. The derivatives being bounded would not necesssrily imply the Taylor series to coverge unless you mean bounded by a common constant in which case we would only have convergence on a square. Not a solution but too long for a comment. Note that the map depends only on the distance $d(p,q)$ of $p,q$, i.e. we have $\phi(t)\colon \mathbb{R}^2\rightarrow \mathbb{R}^2$ where $t=d(p,q)$. Note further that $\phi(t_1+t_2)=\phi(t_1)\circ \phi(t_2)$, i.e. $\phi(t)$ is a one-parameter group. Let $D=d\phi(t)/dt|_{t=0}$. Note that then $d\phi(t)/dt=D(\phi(t))$. Hence $\phi(T)(u)$ can be found by solving the ODE $f'(t)=D(f(t))$ on the interval $[0,T]$ with initial condition $f(0)=u$. Let us now find $D$ for our case of the sphere. The exponential and its inverse on the sphere are $exp(v)=\begin{pmatrix} sin(|v|)\frac{v}{|v|}\\ cos(|v|) \end{pmatrix}$ and $log \left(\begin{pmatrix} x\\ y\\z \end{pmatrix}\right)=\frac{arccos(z)}{\sqrt{1-z^2}}\begin{pmatrix} x\\ y \end{pmatrix}$. Applying and infinitesimal rotation by $\alpha$ after the exponential map yields $$ \begin{pmatrix} sin(|v|)\frac{v_1}{|v|}-\alpha \cdot cos(|v|)\\sin(|v|)\frac{v_2}{|v|} \\ cos(|v|)+\alpha \cdot sin(|v|)\frac{v_1}{|v|} \end{pmatrix} $$ Now applying the logarithm using $$ d\frac{arccos(z)}{\sqrt{1-z^2}}/dz=\frac{arccos(z)z-\sqrt{1-z^2}}{\sqrt{1-z^2}^3} $$ yields $$ v+\alpha \cdot sin(|v|)\frac{v_1}{|v|} \frac{(|v|cos(|v|)-sin(|v|))}{sin(|v|)^3} sin(|v|)\frac{v}{|v|} -\begin{pmatrix} \alpha \frac{cos(|v|)}{sin(|v|)}|v|\\0\end{pmatrix}\\ =v+\alpha \cdot \frac{(|v|cos(|v|)-sin(|v|))v_1}{sin(|v|)|v|^2} v -\begin{pmatrix} \alpha \frac{cos(|v|)}{sin(|v|)}|v|\\0\end{pmatrix} $$ Hence $$ D(v)=\frac{(|v|cos(|v|)-sin(|v|))v_1}{sin(|v|)|v|^2} v-\begin{pmatrix} \frac{cos(|v|)}{sin(|v|)}|v|\\0\end{pmatrix}. $$
2025-03-21T14:48:30.348388
2020-04-18T09:17:26
357828
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Alexandre Eremenko", "Carlo Beenakker", "Eric", "Kostya_I", "Timothy Budd", "https://mathoverflow.net/users/102097", "https://mathoverflow.net/users/11260", "https://mathoverflow.net/users/25510", "https://mathoverflow.net/users/47484", "https://mathoverflow.net/users/56624", "https://mathoverflow.net/users/75935", "litmus" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628237", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357828" }
Stack Exchange
Intercept the missile A stealth missile $M$ is launched from space station. You, at another space station far away, are trusted with the mission of intercepting $M$ using a single cruise missile $C$ at your disposal . You know the target missile is traveling in straight line at constant speed $v_m$. You also know the precise location and time at which it was launched. $M$, built by state-of-the-art stealth technology however, is invisible (to you or your $C$). So you have no idea in which direction it is going. Your $C$ has a maximum speed $v_c>v_m$. Can you control trajectory of $C$ so that it is guaranteed to intercept $M$ in finite time? Is this possible? I can think of 3 apparent possibilities: Precise interception is possible. (It is possible in two dimensions, by calibrating your missile's trajectory to the parameters of certain logarithm spiral). Precise interception is impossible, but for any $\epsilon\gt 0$, paths can be designed so that $C$ can get as close to $M$ as $\epsilon$ in finite time. There's no hope, and your chance of intercepting or getting close to $M$ diminishes as time goes by. This question is inspired by a similar problem in two dimensions by Louis A. Graham in his book Ingenious Mathematical Problems and Methods. for the two-dimensional problem, see Tanvolutes: Generalized involutes What do you mean by a "cruise missile"? Cruise missiles have wings and travel in the atmosphere. I'm probably missing something here, but I have to ask. It is stated that the stealth missile $M$ is traveling in a straight line, and we know its location and time at launch. If, for instance, both space stations are maintaining same orbit (i.e. their distance is constant); shouldn't that mean that we know the (shortest or longest) path the stealth missile will take assuming the sphere is symmetrical around its x-axis? There is no hope. Timothy Budd already explained why staying on the sphere $S_t$ of possible locations of $M$ will not work; so this is to explain why leaving it will not help. What is enough to show is the following claim: at large times, if $t_1<t_2$ and $C_{t_{1,2}}$ are at $\epsilon$-neighborhoods of $S_{t_{1,2}}$ respectively, then the radial projections of $C_{t_{1,2}}$ onto the unit sphere are at distance at most $c\cdot \log {t_2/t_1}$ from each other. With the claim, essentially the same argument will work. (Details: think about the task in terms of gradually ruling out possible directions in which $M$ can go. One can only exclude a direction if at some time $t$, $C$ is at the $\epsilon$-neighborhood of the corresponding point of $S_t$. Suppose Alice has a radar that she can direct to any point of $S_t$ and rule out the $2\epsilon$-neighborhood of it. Let Bob follow any strategy, and let Alice do this: whenever Bob's $C$ is in the closed $\epsilon$-neighborhood of $S_t$, she directs the radar to the projection of Bob's $C$ on $S_t$; in between those times, she turns her radar at constant speed. Clearly Alice will know more than Bob, yet by the claim, the angular speed of the radar will be bounded by $c/t$, and by Budd's argument, Alice will not be able to rule out everything.) But the claim is fairly obvious: between $t_1$ and $t_2$, $C_t$ cannot get more than $(v_c-v_m)(t_2-t_1)+2\epsilon$ behind $S_t$, otherwise it wouldn't be able to catch up. 
If $t_2-t_1<t_1\cdot \frac{v_c-v_m}{100v_m}$, then this means that $C_t$ is always at distance at least $\frac12v_m t$ from the origin, and then the speed of the projection is bounded by $\mathrm{const}/t$ as in Budd's answer. If $t_2/t_1\geq 1+\frac{v_c-v_m}{100v_m}$, then we can just bound the distance between projections of $C_{t_{1,2}}$ by $2$, and take the constant $c$ to be $2\cdot \log\left(1+\frac{v_c-v_m}{100v_m}\right)$ Well, Timothy has answered that comprehensively: to cover a curve by a curve, you don't need an $\epsilon$-neighborhood at all, just that the length at your disposal is large enough. It's imposible to cover a 2d surface by a curve, you need a neighborhood. In our case, the size of the neighborhood at our disposal decreases too quickly. Yes, now I see why. Budd answered my comment just as I posted another one here so I deleted it. Together your answers are really conclusive. Upon closer inspection, in this argument, why do we have to make the $S_t$ sphere our anchor and reference points in the first place? What about countless other plans that refers not to $S_t$ at all? "What is enough to show is the following claim..." Why is this claim enough? @Eric, I added some explanation, hope that helps! If $C$ stays on the growing sphere of possible locations of $M$ then there seems to be no hope. Suppose $v_m = 1$ and $v_c = \sqrt{2}$, $t$ is the time since launch and we are trying to get to within distance $\epsilon$ from $M$. As soon as $C$ reaches the $t$-multiple of the unit sphere, say at time $t=t_0$, it will follow a trajectory $t\cdot x(t)$ with $\|x(t)\|=1$ on the unit sphere with velocity $\|\dot{x}(t)\|=1/t$ (by Pythagoras). The area of the positions of $M$ that $C$ can exclude by moving on the sphere is roughly $$ \int_{t_0}^t \frac{\mathrm{d}t}{t} \frac{\epsilon}{t} = \frac{\epsilon}{t_0} - \frac{\epsilon}{t} < \frac{\epsilon}{t_0}$$ So to exclude an area of order $4\pi$ we are in trouble if $\epsilon < 4\pi t_0$. Yes, that strategy is the natural one that comes to mind and was suggested by Robin in the comments. It would work if $v_c(t)=ct+k$, i.e., increasing linearly with time, for possibility 2. Oops, somehow I missed that comment.... let me leave the answer as a (trivial) baseline. I thought I understood your argument. But now I'm a little confused. What is $x(t)$? Can you explain a bit more in detail about "it will follow a trajectory $t\cdot x(t)$ ... (by Pythagoras)"? $\mathbb{R}_+\to S^2\subset\mathbb{R}^3: t\to x(t)$ is the position of C projected onto the unit sphere. The speed of C in $\mathbb{R}^3$ is then $v_c^2=|x(t)|^2+t^2|\dot{x}(t)|^2=2$ if $|\dot{x}(t)|=1/t$. But then can't the same function and integration be used to show that it's the same for a unit circle in the two dimensional case? Where's the difference? There is no $\epsilon/t$ in the integral in the two-dimensional case. The excluded length on the unit circle is then $\int_{t_0}^t \mathrm{d}t/t = \log(t/t_0)$.
2025-03-21T14:48:30.348825
2020-04-18T11:16:44
357836
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628238", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357836" }
Stack Exchange
Function of moderate growth: history, motivation, and uses I recently came across functions of moderate growth via Are functions of moderate growth a bornological space? and I was wondering, what are some concrete uses or applications of this space? Where does it appear and why was it introduced historically? I'm most interested in the continuous variant, discussed in Section 5 of Garrett - Examples of function spaces. There is no Wikipedia page so I ask here. Since you've linked to my question, let me give a first answer. The $M$ in the notation $\mathcal{O}_M$ comes from "multiplication". $\mathcal{O}_M$ is the space of functions that can be multiplied with Schwartz functions/distributions to still give Schwartz functions/distributions. Its fourier transform $\mathcal{O}_C'$ is the space of distributions that can be convolved with Schwartz functions/distributions and still produce Schwartz functions/distributions. Once you accept that $\mathcal{S}$ and $\mathcal{S}'$ are useful to study the Fourier transform and its various applications, the subspaces that let you do multiplication and convolution are equally natural. A nice observation is that $\mathcal{O}_M$ and $\mathcal{O}_C'$ are algebras w.r.t. multiplication and convolution respectively. So they are quite natural operator algebras on $\mathcal{S}$ and $\mathcal{S}'$. Useful from time to time is the lemma that convolution of $\mathcal{S}'$ and $\mathcal{S}$ gives you a function in $\mathcal{O}_M$. A small complement to Johannes' excellent answer. If one applies $(-\Delta)^{-\alpha}$ to a function in $\mathscr{S}$, then the result is smooth but the fast decay at infinity is lost. So a convenient space for the codomain of this transform is $\mathscr{O}_{\rm M}$. In other words a (semiregular) kernel like $\frac{1}{|x-y|^{\beta}}$, say with $0<\beta<d$ when working on $\mathbb{R}^d$, lives in $\mathscr{O}_{{\rm M},x}\widehat{\otimes}\mathscr{S}'_{y}$. With multilinear algebra, one can make some constructions like, starting from $$ A\in V\otimes W \otimes X'\otimes Y $$ and $$ B\in V'\otimes X\otimes Z\ , $$ constructing some new element $$ A\bullet B\in W\otimes Y\otimes Z $$ by "contraction of indices" for the dual pairs of vector spaces $V,V'$ and $X,X'$. One of the key points of Schwartz's theory of distributions, is that one can also do this in infinite dimension, provided the spaces used are like $\mathscr{S},\mathscr{S}'$, etc. It also works with $\mathscr{O}_{\rm M}$. I used that a lot, e.g., in my article "A Second-Quantized Kolmogorov–Chentsov Theorem via the Operator Product Expansion", using a technique I call multiply and conquer for dealing with $\mathscr{O}_{\rm M}$ thanks to its multiplier space characterization. See also, my MO answer Can distribution theory be developed Riemann-free? and in particular the "high tech proof" therein of the last fact mentioned by Johannes: convolution of elements in $\mathscr{S}$ and $\mathscr{S}'$ produces an element of $\mathscr{O}_{\rm M}$. This is an elementary example of the "multiply and conquer" method. In addition to @Abdelmalek Abdesselam's and @Johannes Hahn's good points, I can add a few things from my own context: One very non-negotiable thing is that Eisenstein series, with a significant role in the spectral theory of automorphic forms, are not in the relevant $L^2$ space, but are only (in an appropriate sense) "of moderate growth". 
The simplest explanatory analogue is the case of Fourier transforms on the real line, and the very standard apparatus there (as mentioned by Abdelmalek and Johannes). To make one comparison, we can realize that to apply a tempered distribution $u$ to the Fourier inversion integral $f(x)=\int e^{2\pi ixy}\widehat{f}(y)\,dy$ it is not obvious, and, in fact, not necessarily correct, that $u(f)=\int u(e^{2\pi ixy})\widehat{f}(y)\,dy$. Namely, the exponential is not a Schwartz function. Nevertheless, the (purely imaginary…) exponentials are bounded, smooth, etc. So, for example, compactly supported distributions can be sensibly applied to them (compatibly with everything else), and can move inside the Fourier inversion integral. But in many circumstances one wants to have somewhat more general (though still fairly docile) distributions, not just compactly supported ones. Here, already just on the real line (not to mention the automorphic contexts…), it is possible to tell some needless lies. The basis of the possible lies is that (on the real line), continuous, compactly-supported functions are not sup-norm dense in the collection of all bounded continuous functions. (It's not even about measurability….) Thus, the translation action of $\mathbb R$ on bounded continuous functions on it (with sup norm) is not continuous. (E.g., $\sin(x^2)$). This is very bad… and cannot be counted a pathology, but, rather, a misunderstanding of the proper topology. I think the fundamental point is that the sup-norm closure of continuous, compactly-supported functions is continuous functions going to $0$ at infinity. Thus, the translation action of $\mathbb R$ is continuous on such functions, with sup norm. That is, rather than looking just at bounded continuous functions on $\mathbb R$, we look at the spaces $V_t$ (with $t$ real…) of continuous functions $f$ such that $\lim_{x\to \infty}\lvert x\rvert^t\cdot \lvert f(x)\rvert=0$. The continuous, compactly-supported functions are dense in each, so the translation action of $\mathbb R$ is (mercifully) continuous on each. So, the most technically useful, and topologically useful, version of "bounded continuous" is $\bigcap_{t>0} V_t$, which describes a slightly larger space as a (projective) limit…. Hilariously, the colimit over $t$ (an ascending union) does produce the same topology as the naïve version (the latter incorrect on each limitand).
2025-03-21T14:48:30.349229
2020-04-18T11:30:07
357839
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Deane Yang", "Quarto Bendir", "Tobias Diez", "https://mathoverflow.net/users/156492", "https://mathoverflow.net/users/17047", "https://mathoverflow.net/users/613" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628239", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357839" }
Stack Exchange
Smoothness of solution map for PDE I am wondering what sort of results are available for the following sort of problem, or where to look in the literature for work dealing with such problems, especially in the degenerate elliptic context. It seems like there should be a general functional-analytic framework, but I'm not sure where to look. Suppose that one has an order-$N$ (nonlinear) differential operator $P:C^\infty\to C^\infty$ which for any $k\geq N$ extends to a map $C^{k,\alpha}\to C^{k-N,\alpha}$ which is smooth as a map between Banach spaces. Suppose that $P$ has an inverse $Q:C^\infty\to C^\infty$. Suppose that there is $m\in\mathbb{N}$ such that, for any $k\geq 0$, $Q$ has a unique continuous extension to a map $C^{k,\alpha}\to C^{k+m,\alpha}$. Is $Q$ necessarily smooth as a map between these Banach spaces? For elliptic operators, smoothness of the solution operator is established in Section II.3.3 of "Nash-Moser Inverse Fuction Theorem" by R. Hamilton (1982). I'm not very familiar with Hamilton's paper, but doesn't that only cover the case of linear $P$? Yes, he only discusses the linear case but for operators depending on parameters. This allows you then to use the Nash-Moser inverse function theorem to conclude that a non-linear solution operator exists (locally) and is smooth. You might be also interested in my PhD thesis https://arxiv.org/abs/1909.00744 (non-linear elliptic operators are discussed in section 2.2.4) Aren't there still two differences? 1) Hamilton's conclusions assert tameness, but nothing about the degree of tameness; 2) the conclusion on smoothness is as a map $C^\infty\to C^\infty$, which doesn't (?) automatically extend to smoothness as a map $C^{k,\alpha}\to C^{k+m,\alpha}$ How do you establish the existence of $Q$? Did you use a Nash-Moser argument? Or probably something better, because normally the resulting $m$ would be negative and $k$ would have to be sufficiently. large. And perhaps you should edit the title, either removing "elliptic" or inserting "degenerate". The situation is much simpler if the PDE is elliptic, because $m = N$. @DeaneYang Is it so simple even in the elliptic case? In well-known cases like e.g. the Calabi-Yau complex Monge-Ampère equation, even once one has a solution map $C^{k,\alpha}\to C^{k+2,\alpha}$ and the PDE is known to be uniformly elliptic at the solutions, it doesn't seem totally obvious that the map $C^{k,\alpha}\to C^{k+2,\alpha}$ is smooth. (Is it smooth? It seems like it could follow from estimates for the linearized equation, but it isn't obvious to me how. Is it analytic?) In the elliptic case, you have a smooth map $F: A \rightarrow B$, where the linearization $F'(a): A \rightarrow B$ is is smooth and has a bounded inverse. In that case, I believe everything follows by the same argument you would use in the finite dimensional inverse function theorem. It basically follows by implicit differentiation. So if you can find two Banach spaces, where the linearization of your nonlinear PDE is a smooth map between the two spaces and the inverse of the linearization is bounded, then I'm pretty sure the inverse to the nonlinear map is smooth, too. If the inverse maps back into a weaker space, then you need to use the Nash-Moser theorem. You'll get a smooth inverse, but it will be $C^{k,\alpha} \rightarrow C^{k-m,\alpha$ for some large $m$. The argument here is actually the same as for the Banach implicit function theorem. 
Like I asked Tobias in my second comment above, isn't there some technical difficulty about which function spaces one identifies for the inverse map to be smooth? In Hamilton's paper, it seems that the only conclusion is that the map $C^\infty\to C^\infty$ is smooth and tame, and it seems like the Hamilton-Nash proof does not (and possibly is incapable of?) finding any kind of optimal $m$ for $C^{k,\alpha}\to C^{k+m,\alpha}$ to be smooth. But maybe I don't understand it properly. Also I'm curious why you say that $m$ is usually negative? My impression was that the loss of derivatives is typically somewhat small, at least small enough that you gain some regularity in passing from the RHS to the solution. Is this wrong? The Nash-Moser theorem does give an inverse between two Banach spaces, just not the two you start with. It does not give you the optimal one. That probably requires arguments specific to a PDE. And, in most versions of Nash-Moser, if the inverse to the linearization loses derivatives, the inverse to the nonlinear PDE loses a lot of derivatives. A near optimal version is in a paper by Hormander. (http://www.acadsci.fi/mathematica/Vol10/vol10pp255-259.pdf). I just found a nice short proof by Saint Raymond of the Nash-Moser implicit function theorem. https://pdfs.semanticscholar.org/a21a/e45e8e408fe2bc97cde4741b6e3c25360fa2.pdf
2025-03-21T14:48:30.349559
2020-04-18T12:40:59
357841
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Dan Turetsky", "Jiayi Liu", "Johannes Schürz", "Jonathan Schilhan", "https://mathoverflow.net/users/103802", "https://mathoverflow.net/users/134910", "https://mathoverflow.net/users/32178", "https://mathoverflow.net/users/74918" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628240", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357841" }
Stack Exchange
Complexity of a combinatorial constraint For two $k$-partitions $X,Y\in k^\omega$ of $\omega$ (seen as functions $\omega\rightarrow k$), we say $X,Y$ are almost disjoint iff $X^{-1}(i)\cap Y^{-1}(i)$ is finite for all $i<k$. Question: Does there exist a set $Q\subseteq 3^\omega\times (2^\omega)^r$ such that: for every $X\in 3^\omega$, there exists $Y\in (2^\omega)^r$ such that $(X,Y)\in Q$; for every $(X^0,Y^0),(X^1,Y^1)\in Q$, if $X^0,X^1$ are almost disjoint, then $ Y^0_s,Y^1_s$ are almost disjoint for some $s<r$. If $r=1$, then obviously $Q$ does not exist since every three $2$-partitions, there are two of them not almost disjoint and there are three $3$-partitions that are mutually almost disjoint. If we replace $3^\omega$ by $2^\omega$ then such $Q$ obviously exist. It seems, by Cohen forcing, $Q$ (if exist) cannot be $\Sigma_1^1$. Could it be $\Pi_1^1$? In 2, did you mean for some $s < r$, rather than all $s < r$? Because as it is now, making $r$ larger makes it harder for such a $Q$ to exist, and so your argument for $r=1$ settles the question. It is "for some $s<r$". Thanks~ The answer is yes for $r=3$. Take an ultrafilter $\mathcal{U}$ on $\omega$. Define a function $f \colon 3^\omega \rightarrow 3$ such that $f(X):=i$ iff $X^{-1}(i) \in \mathcal{U}$. Note that if $X_1$ and $X_2$ are almost disjoint, then $f(X_1) \neq f(X_2)$. Let $Q:=\{(X,Y) \in 3^\omega \times (2^\omega)^3 \colon \,\, f(X)=i \Rightarrow Y_i=(\emptyset,\omega) \land f(X)\neq i \Rightarrow Y_i=(\omega, \emptyset)\}$. Unfortunately, this $Q$ cannot be $\Pi_1^1$ as Jonathan pointed out. For $r=4$, it seems that you can get $Q$ to be $\Pi^1_1$. Let $\mathcal{U}$ be a $\Sigma^1_2$ ultrafilter (say in L). Then we take exactly the same $Q$ as you did, but simply add an additional coordinate to the $Y$ part for the witness that $f(X)=i$. The $Q$ that you define will never be $\Pi^1_1$, since $x \in \mathcal{U}$ would become $\Pi^1_1$, but this is impossible for an ultrafilter. Concerning your second comment, I had the same idea, just wasn't 100% sure that an ultrafilter cannot be $\Pi_1^1$.... How does this relate to your $\Pi_1^1$ P-point base (in L)? Can't I just say $X \in \mathcal{U}$ iff $\forall Y \in \mathcal{B} \colon X \cap Y \neq \emptyset$? Concerning your first comment, I assume you mean taking a witness from a $\Pi_1^1$ ultrafilterbase in L? However, this $Q$ might not work in $V$ as condition (1) need not be absolute?! The formula $\forall Y \in \mathcal{B}(X \cap Y \neq \emptyset)$ is $\Pi^1_2$ since you need to say $\forall Y (Y \notin \mathcal{B} \vee X \cap Y \neq \emptyset)$. My first comment was just a consistency result. In $L$ it is not hard to construct a $\Sigma^1_2$ ultrafilter. A $\Sigma^1_2$ set is the projection of a $\Pi^1_1$ set $A \subseteq 2^\omega \times 2^\omega$, so if $x \in \mathcal{U}$, then there is a witness $y$ so that $(x,y) \in A$. Yes, you are absolutely right!! Thanks! Thanks for answering~ I didn't expect different $r$ would make a difference. Your construction gives a $Q$ for $r=2$ as following. Put $(X,Y)$ in $Q$ where $Y=(\emptyset,\emptyset), (\emptyset,\omega), (\omega,\emptyset)\in (2^\omega)^2$ depending on $f(X)=0,1,2$ respectively. Where $(\emptyset,\emptyset)$ denotes such $(Y_0,Y_1)\in (2^\omega)^2$ that $Y_0^{-1}(1)=Y_1^{-1}(1)=\emptyset$ (similarly for $(\emptyset,\omega), (\omega,\emptyset)$). So $Q$ could be $\Pi_1^1$ for $r=3$ by Jonathan's comment.
2025-03-21T14:48:30.349821
2020-04-18T13:56:47
357847
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Conrad", "Fedor Petrov", "Max Alekseyev", "Michael Rozenberg", "YCor", "darij grinberg", "https://mathoverflow.net/users/112382", "https://mathoverflow.net/users/133811", "https://mathoverflow.net/users/135040", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/2530", "https://mathoverflow.net/users/4312", "https://mathoverflow.net/users/7076", "jcdornano" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628241", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357847" }
Stack Exchange
Polynomial inequality $n^2\sum_{i=1}^na_i^3\geq\left(\sum_{i=1}^na_i\right)^3$ Let $n\ge 3$ be an integer. I would like to know if the following property $(P_n)$ holds: for all real numbers $a_i$ such that $\sum\limits_{i=1}^na_i\geq0 $ and $\sum\limits_{1\leq i<j<k\leq n}a_ia_ja_k\geq0$, we have $$n^2\sum_{i=1}^na_i^3\geq\left(\sum_{i=1}^na_i\right)^3.$$ I have a proof that $(P_n)$ holds for $3\leq n\leq8$, but for $n\geq9$ my method does not work and I did not see any counterexample for $n\ge 9$. Is the inequality $(P_n)$ true for all $n$? Or otherwise, what is the largest value of $n$ for which it holds? Thank you! Can you be more precise on "it seems that it's wrong for a big value of $n$"? you have an explicit counterexample? for some explicit $n$? @YCor I have no a counterexample. See please better my post. But in your post you say "it seems that it's wrong for a big value of $n$". What makes you believe that it's wrong for some large $n$? you wrote you have no counterexample for $n=9$. @YCor I just think so because I solved during my live one problem or maybe two. Can I think so? You do not allow me? :) Well, it's confusing, as it conveys some wrong information. I edited your post; of course you can write that your guess is that it fails for large $n$, but "it seems" suggested that you have a serious reason to believe so. Than you, Mark! It's OK. For me more interesting, how we can approach to this task. For exponent $2$ it's obvious, of course. I checked that for $n=9$ and $a_2=...=a_9$ it's true. @Mark Sapir When I was in high school, I was taught that nothing does not help, when we need to prove inequalities. Did you try $pqr$-method? (I mean, fix all variables except $a_1,a_2,a_3$, also fix $\sum a_i$ and $\sum\limits_{1\leq i<j<k\leq n}a_ia_ja_k$.) @Fedor Petrov I proved it for $3\leq n\leq 8$ by using of the $uvw$'s method and Rolle For $n\geq9$ it does not work, I think. Maybe I don't see something. With Rolle (applied $n-3$ times to $\prod (x-a_i)$) it gives something which is wrong for large $n$. But I think Rolle does not reduce an inequality to the equivalent one: the antiderivative of a polynomial with real roots only may fail to have real roots only. I suggested something less elegant and straightforward: is three variables are mutually distinct, we may vary them so that the difference LHS-RHS increases. It reduces the problem to the situation when $a_i$'s take only two different values. SageMath (actually just Python) gives $(-2, 1, 1, 1, 1, 1, 1, 1, 1)$ as a counterexample for $n=9$. Can you check? It could be much more harder than olympiad problem if it were true. Thank you very much, Darij !!! For nonnegative arguments, the inequality in question follows from the power mean inequality. @Max Alekseyev Or Holder, or Jensen, or the Vasc's EV, or more and more... Is it obvioussly true if we take $a_0+a_1....+a_n=0$? $111$ taken $9$ times and $-199$ gives a counterexample for $n=10$ with a positive sum of cubes (more generally $a+\frac{1}{10}$ taken $9$ times and $-9a+\frac{1}{10}$ for $a > \frac{3}{80}$ and close to it - $a=\frac{3}{80}+\frac{1}{800}$ and normalizing to integers gives the above @jcdornano yes: $6\sum a_i^3=(\sum a_i)^3-3(\sum a_i)(\sum a_i a_j)+3\sum a_i a_j a_k$. Take $n=3k$, $2k$ variables equal to $3$ and $k$ variables equal to $-5$ for large $k$. Then $\sum a_i=k>0$, and $\sum_{i<j<k} a_ia_ja_k=\frac16 (\sum a_i)^3+O(k^2)=\frac{k^3}6+O(k^2)>0$ for large $k$. But $\sum a_i^3<0$. Thank you, Fedor! 
This is just a long comment, but translating to the notation of symmetric functions, you ask if whenever $e_{111}(x) \geq 0$ and $e_3(x) \geq 0$, we have $$ n^2 p_{(3)}(x) \geq p_{111}(x). $$ This latter is equivalent with $$ n^2 \left( 3e_3-3e_{21}+e_{111} \right) \geq e_{111}. $$ Perhaps one can try different bases and see if something nice pops out... A sort of (partial) explanation for what happens: Let $N \ge 3$ the degree and let $A=\sum{a_k}, B=\sum_{j<k}a_ja_k, C=\sum_{j\ne k\ne m \ne j}a_ja_ka_m $. We are given that $A \ge 0, C \ge 0$ and we need to prove that $N^2(A^3-3AB+3C) \ge A^3$. Now we can assume wlog $A =1$ since if $A=0$ the inequality is obvious and otherwise we can divide by $A>0$ and consider $c_j=\frac{a_j}{A}$ and prove the inequality for them etc So we need to prove $1-3B+3C \ge \frac{1}{N^2}$ under the hypothesis as above (the polynomial $X^N-X^{N-1}+BX^{N-2}-CX^{N-3}+...$ has real roots and $C \ge 0$ Then if we let $b_j=a_j-\frac{1}{N}, A_1,B_1,C_1$ the corresponding symmetric polynomials in $b_j$ we have $A_1=0, B_1=B-\frac{N-1}{2N}=B-\frac {1}{2}+\frac{1}{2N}, C_1=C-B+\frac{2B}{N}+\frac{1}{3}-\frac{1}{N}+\frac{2}{3N^2}$ so the inequality becomes ($C-B=C_1+...$ from the last equality) $1+3C_1-\frac{6B}{N}-1+\frac{3}{N}-\frac{2}{N^2}\ge \frac{1}{N^2}$ and since $\frac{6B}{N}=\frac{6B_1}{N}+\frac{3}{N}-\frac{3}{N^2}$ all reduces to $3C_1-\frac{6B_1}{N} \ge 0$ But now the polynomial $X^N+B_1X^{N-2}-C_1X^{N-3}+...$ has real roots too as they are just $b_k$ and hence $B_1 \le 0, B_1=-B_2, B_2 \ge 0$ so the inequality reduces to $2B_2+NC_1 \ge 0$ and we know that $C_1=C+\frac{N-2}{N}B_2-\frac{(N-1)(N-2)}{6N^2}$ So we need $C_1$ negative but $C \ge 0$ By differentiating $N-3$ times and using Gauss Lucas/Rolle (so the cubic that results which is in standard form) has real roots so $4(-p)^3 \ge 27q^2$, we get some constraints on $B_2, -C_1$ which are enough to give the result for $N \le 6$ with some crude approximations Then if we try easy counterexamples for the $b_k$ of the type $N-1$ $a$ and one $-(N-1)a$ we can solve at $N=10$, $a > \frac{3}{80}$ close enough to it to satisfy the inequality $C>0$ (which is satisifed at $a=\frac{3}{80}$ that one giving equality in the OP inequality as normalizing to integers we get $11$ taken $9$ times, $-19$ taken once and it is easy to see that the constraints are good and $S_1=80, S_3=5120$ and obviously $100\cdot 5120=80^3$ So as noted in the comments taking $9$ of $111$ and one of $-199$ gets a counterexample with a positive sum of cubes (corresponding to $a=\frac{3}{80}+\frac{1}{800}$ normalized to integers)
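For completeness, the counterexamples quoted in the comments are easy to verify with exact integer arithmetic; here is a small self-contained check in Python (the tuples are the ones quoted above, nothing new is claimed).

```python
# Exact check of the counterexamples quoted above.
# Hypotheses:  sum a_i >= 0  and  e_3(a) = sum_{i<j<k} a_i a_j a_k >= 0;
# claimed inequality:  n^2 * sum a_i^3 >= (sum a_i)^3.
from itertools import combinations

def check(a):
    n = len(a)
    s1 = sum(a)
    e3 = sum(x * y * z for x, y, z in combinations(a, 3))
    holds = n**2 * sum(x**3 for x in a) >= s1**3
    return s1, e3, holds

print(check((-2,) + (1,) * 8))       # n = 9:  (6, 0, False)  -> hypotheses hold, inequality fails
print(check((111,) * 9 + (-199,)))   # n = 10: (800, 26613360, False) -> same conclusion
```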
2025-03-21T14:48:30.350352
2020-04-18T13:57:25
357848
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Noam D. Elkies", "Will Jagy", "https://mathoverflow.net/users/14830", "https://mathoverflow.net/users/3324" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628242", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357848" }
Stack Exchange
$3$-ranks of elliptic curves and representations $p=ax^3+by^3$ Let $p$ be a prime with $p\equiv2\pmod3$ and $E_p$ the elliptic curve $y^2=x^3+9p^2$ which has a rational $3$-torsion point. Let $\alpha$ from $E_p(\mathbb Q)$ to $\mathbb Q^*/{\mathbb Q^*}^3$ be the $3$-descent map such that generically $\alpha(P)=y-3p$. Assuming a weak form of BSD (parity) the image of $\alpha$ has cardinality $3$ or $27$, and the following six conditions are equivalent: (1) Cardinality of image of $\alpha$ is $27$. (2) The rank of $E$ is $2$. (3) $p=x^3+6y^3$ (4) $p=2x^3+3y^3$ (5) $p=4x^3+12y^3$ (6) $p=9x^3+18y^3$ always with $x$, $y$ in $\mathbb Q$. My question is this: can one prove the equivalence of (3), (4), (5), and (6) without assuming BSD ? There may be a completely elementary way to do this. just a computer search for small primes, with bounds prescribed for $x,y$ It slowed way down, and you can see that I found just three representations for prime 617, considerably bigger bounds would be needed to finish that one. Plenty of primes $p \equiv 2 \pmod 3$ did not allow any of the four representation with fairly small bounds on $x, y.$ I edited out those primes by hand. Each prime is printed, then the next four lines are i the form A+B x y D which means $$ |A x^3 + B y^3| = p D^3 $$ 2 2+3 1 0 1 4+12 1 1 2 1+6 2 -1 1 9+18 2 -1 3 5 4+12 1 -3 4 9+18 1 -2 3 1+6 1 -1 1 2+3 1 1 1 41 4+12 1 3 2 9+18 5 -1 3 1+6 7 -4 1 2+3 52 1 19 47 1+6 1 -2 1 2+3 4 -3 1 9+18 5 2 3 4+12 67 -41 20 83 9+18 1 -5 3 2+3 1 3 1 4+12 11 -1 4 1+6 67 -38 7 131 2+3 4 1 1 1+6 5 1 1 4+12 7 -3 2 9+18 97 -77 3 173 1+6 5 2 1 4+12 7 1 2 9+18 17 -13 3 2+3 73 -51 13 227 9+18 7 -8 3 4+12 47 35 16 2+3 224 -1047 247 1+6 299 -16 49 311 2+3 7 -5 1 9+18 49 -8 15 4+12 227 105 58 1+6 683 -425 77 359 2+3 2 -5 1 4+12 7 5 2 9+18 29 -43 15 1+6 41 -45 11 383 1+6 1 -4 1 9+18 5 8 3 4+12 61 -39 8 2+3 76 -23 13 401 9+18 11 -4 3 2+3 13 -11 1 1+6 55 -37 7 4+12 511 -3 110 443 9+18 11 -1 3 4+12 25 -17 2 1+6 53 8 7 2+3 556 -987 179 449 9+18 11 2 3 1+6 49 -27 1 2+3 79 5 13 4+12 1129 2015 614 503 2+3 4 5 1 4+12 103 -2119 610 9+18 295 -209 51 1+6 2015 453 259 509 1+6 5 4 1 2+3 29 -73 13 9+18 307 -173 69 4+12 739 -1683 478 617 2+3 31 -101 17 4+12 41 -27 4 9+18 61 13 15 ----------------------------------------- Once you've found representations for any two of the four cubics, you can get the other two using the group law on the elliptic curve. For $p=617$ this yields $37607^3 + 6 \cdot 18247^3 = 5257^3 p$. @NoamD.Elkies that is so nice. It seems pretty rare that the program gave only one of the representations. Eventually you will see cases where only one of the four cubics has a representation within your search limits, and also more $p$ for which you find solutions for two or three but the other(s) is/are out of range (though still accessible via the group law, as happened for $p=617$). Unfortunately i am constrained by the situation to deliver a negative answer. (As it often happens in life, it may be harder to fix the framework and disprove, then to claim and prove.) Because of this nature of the answer, i took the decision to give first examples move (in general) the problem inside $E^p(\Bbb Q)$, and illustrate there inside an equivalent situation. Passing for instance [ from (3) to construct a solution for (4) ] is equivalent to [ starting with one point of infinite order in $E^p(\Bbb Q)$ i.e. the algebraic rank is $\ge 1$ and showing it is $\ge 2$, i.e. finding a further independent point ]. Finally, there will be some claims of positive related structural facts. 
Let us experiment with $p$ among $1979$, $1997$, $2003$. Let $E(a,b,p)$ be the curve given by the affine equation $$ E(a,b,p)\ :\ ax^3+by^3=p\ . $$ In all considered $(a,b)$-cases we will have $36ab=d^3$, for some $d\in\Bbb Q$. Let us record the possible cases in a table: $$ \begin{array}{|r|r|r|r|l|} \hline a & b & c & d & \text{case in OP}\\\hline\hline 1 & 6 & p & 6 & (3) \\\hline 2 & 3 & p & 6 & (4) \\\hline 4 & 12 & p & 12 & (5) \\\hline 9 & 18 & p & 18 & (6) \\\hline \end{array} $$ There is a map from $E(a,b,p)$ : $ax^3+by^3=p$ to $E^p$ : $Y^2=X^3+9p^2$, which is defined on rational points $(x,y)\in E(a,b,p)(\Bbb Q)$ by $$ \begin{aligned} (x,y)&\to (X,Y):=\Big(\ -d\; xy\ ,\ 3(ax^3-by^3)\ \Big)\ ,\\ X &= -d\;xy\ ,\\ Y &= 3(ax^3-by^3)\ ,\\[3mm] x &= +\left(\frac 1{2a}\left(-\frac Y3+p\right)\right)^{1/3}\ ,\\ y &= -\left(\frac 1{2b}\left(-\frac Y3-p\right)\right)^{1/3}\ . \end{aligned} $$ The sum of the two numbers $ax^3$ and $-by^3$ is $\frac Y3$, their product is $\frac {X^3}{36}$, so in all considered cases the two numbers are the roots of the equation in $T$ $$ T^2 -\frac Y3T+\frac{X^3}{36}=0\ . $$ (These are my "human choices", very uncomfortable for the study of each $E(a,b,c)$ alone, but made to put all cases (3), (4), (5), (6) under a common hat. For instance the above equation in $T$ does not depend on $a,b$, so it is better suited as an intermediate between the cases.) Let us check: $$ \begin{aligned} Y^2 &= 9(ax^3-by^3)^2 \\ &= 9(ax^3+by^3)^2-36ab\; x^3y^3 \\ &= 9(ax^3+by^3)^2-d^3\; x^3y^3 \\ &= 9p^2+X^3\ . \end{aligned} $$ At this point I will first run an experiment, and then relate the obtained solutions to the question. It is clear that each $(x,y)$ from one of the cases (3) to (6) induces a $\Bbb Q$-rational point $(X,Y)$ on $E^p$, but conversely, taking one $\Bbb Q$-rational point $(X,Y)$ on $E^p$ does not induce an $(x,y)$ back, except in the case where we can extract the cubic root. This happens at most once, since $a$ must be $\frac 12\left(-\frac Y3+p\right)$ modulo cubes to ensure $x\in\Bbb Q$. (And if this is the case, the cubic root for $y$ gives rise to a $y\in\Bbb Q$.) Consider the Sage code Code 1 at the end of this answer. It gives solutions of $x^3+6y^3=2003$ only for $R$ of the shape $R=Q+(3mP+3nQ)\in Q+3E^p(\Bbb Q)$, where $P,Q$ are Sage's choice of generators for the free part of $E^p(\Bbb Q)$.
We change slightly the code, adapted to fit $p$ and $a,b,d$ covering the other cases that are relevant, and the results are condensed in the following table: $$ \begin{array}{|r|r|r|r|r|c|c|c|} \hline a & b & p & \text{equation} & d & \text{case} & R\text{ mod }3E^p(\Bbb Q) & \alpha(R)\\\hline\hline 1 & 6 & 2003 & x^3 + 6y^3 = 2003 & 6 & (3) & 2Q & 1/6\\\hline 2 & 3 & 2003 & 2x^3 + 3y^3 = 2003 & 6 & (4) & P & 2/3\\\hline 4 & 12 & 2003 & 4x^3 + 12y^3 = 2003 & 12 & (5) & 2P+Q & 4/12\\\hline 9 & 18 & 2003 & 9x^3 + 18y^3 = 2003 & 18 & (6) & P+Q & 9/18\\\hline\hline 1 & 6 & 1997 & x^3 + 6y^3 = 1997 & 6 & (3) & 2P+Q & 1/6\\\hline 2 & 3 & 1997 & 2x^3 + 3y^3 = 1997 & 6 & (4) & 2Q & 2/3\\\hline 4 & 12 & 1997 & 4x^3 + 12y^3 = 1997 & 12 & (5) & P & 4/12\\\hline 9 & 18 & 1997 & 9x^3 + 18y^3 = 1997 & 18 & (6) & P+Q & 9/18\\\hline\hline 1 & 6 & 1979 & x^3 + 6y^3 = 1979 & 6 & (3) & P + T & 1/6\\\hline 2 & 3 & 1979 & 2x^3 + 3y^3 = 1979 & 6 & (4) & P+Q + T & 2/3\\\hline 4 & 12 & 1979 & 4x^3 + 12y^3 = 1979 & 12 & (5) & P+2Q + T & 4/12\\\hline 9 & 18 & 1979 & 9x^3 + 18y^3 = 1979 & 18 & (6) & Q & 9/18\\\hline \end{array} $$ Here $\alpha$ is the $3$-descent morphism mentioned in the OP, mapping a (generic) point $R(X,Y)\in E^p(\Bbb Q)$ to $(Y-3p)$ modulo cubes in the set generated multiplicatively by $2,3,p$, of cardinality $3^3=27$. (For $T=(0,3p)$ the expression $Y-3p$ is "bad", not in $\Bbb Q^\times$, but we set $\alpha(T)$ to be the inverse of $\alpha(-T)=\alpha((0,-3p))=-3p-3p=-6p$. So $2\cdot 3\cdot p$ is always in the image of $\alpha$. To have the full possible image, we will deal with $\langle 2,3\rangle$ inside $\Bbb Q^\times $ modulo cubes, exactly what happens in the OP.) In all cases (3) to (6) we have correspondingly $$ \alpha(R)=6b=\frac ab\text{ modulo cubes.} $$ $R(X,Y)$ is here the point obtained from $(x,y)$ with $ax^3+by^3=p$ via the above map $(x,y)\to(X,Y)$. The relation to the $3$-Selmer group is transparent. Discussion of the experimental results. If we start with $p=2003$ and a special solution for (3), we have equivalently found a point of the shape $Q+3(?)$ on $E^p(\Bbb Q)$. So we have only one point of infinite order on a curve that (conjecturally) should have rank two. Can we construct with only this information (algorithmically) another independent point? (Then we would use the $3$-descent morphism $\alpha$ and get points for the other cases (4), (5), (6), and conversely, each point for the cases (4), (5), (6) gives a further independent point.) This is a highly complex arithmetical task still, and although it is "only half of the work", I don't have / see a construction. (This is only my fault, caused by my missing arithmetical education during all those operator algebra efforts, but from this subjective perspective, I would claim that...) The answer to the question is, from this (constructive) point of view, negative. This state of the art is of course not satisfactory; I will try to insert also some positive statements related to the OP. Consider all values in $G=\langle 2,3\rangle\subset\Bbb Q^\times$ modulo cubes. 
We obtain a multiplicative group isomorphic to the additive group $(\Bbb Z/3,+)^2\cong \Bbb F_3^2$, and the last incarnation is (enriched to) a vector space over $\Bbb F_3$ in a "box" $$ \begin{array}{|c|c|c|} \hline 1 & 2 & 2^2\\\hline 3 & 2\cdot 3 & 2^2\cdot 3\\\hline 3^2 & 2\cdot 3^2 & 2^2\cdot 3^2\\\hline \end{array} $$ We make the following choices of tuples $(a,b)$ in a corresponding "box": $$ \begin{array}{|c|c|c|} \hline - & (9,18) & (18,9)\\\hline (4,12) & (1,6) & (2,3)\\\hline (12,4) & (3,2) & (6,1)\\\hline \end{array} $$ Proposition: Let $(a,b)$ and $(a',b')$ be two tuples, corresponding to a basis of $G$, seen as $\Bbb F^2_3$. Assume we have rational points $(x,y)$ and $(x',y')$ on $E(a,b,p)$ and $E(a',b',p)$. Then we have rational points for all tuples mentioned in the eight entries above. (And $E^p$ has rank two and the image of $\alpha$ is the full group $\langle2,3,p\rangle$ with $27$ elements.) Short proof: $\alpha$ is a morphism. $\square$ A detailed version of this is as follows: Proof: Let $R(X,Y)$, $R'(X',Y')$ be the images of the two points in $E^p(\Bbb Q)$. Then $$ \begin{aligned} \alpha(R) &= Y-3p\\ &=3(ax^3-by^3)-3p\\ &=3(ax^3+by^3-p)-6by^3\\ &=-6by^3\\ &\equiv 6b\equiv \frac ab\text{ modulo cubes} \end{aligned} $$ in all our cases. (The last equivalence holds because of $36ab=d^3$.) Similarly $\alpha(R')=a'/b'$. These two values $a/b$ and $a'/b'$ are linearly independent when seen in $\Bbb F_3^2$, as assumed, so all eight (non-trivial) values in the above "box"$\cong \Bbb F_3^2$ can be reached as a non-trivial linear combination with coefficients $0$ and $\pm 1$. In case of a $-1$ coefficient, replace the corresponding point $R(X,Y)$ or $R'(X',Y')$ by its opposite, $-R$, respectively $-R'$; we then pass from $(a,b)$, respectively $(a',b')$, to $(b,a)$, respectively $(b',a')$. The corresponding values $(a/b)^{\pm 1}$, $(a'/b')^{\pm 1}$ are also a basis when seen in $\Bbb F_3^2$, so we can reduce the analysis to the case when the linear combination has coefficients $0$ and $1$. The only case remaining to be studied is $R+R'$. (The above reduction is not necessary, but for the typist the $\pm 1$ choices in $\pm R\pm R'$ are then no longer needed.) Then $\alpha(R+R')=(a/b)(a'/b')=(aa')/(bb')$ corresponds to a tuple $(A,B)$ in the above "box", $(aa')/(bb')=A/B$ modulo cubes. From $\alpha(R+R')=A/B=6B$ modulo cubes, we see that the $Y$-component $Y''$ of the point $R+R'=(X'',Y'')$ ensures that the cubic root occurring in $$ y'' =\left(\frac 1{2B}\left(p-\frac {Y''}3\right)\right)^{1/3} =\left(-\frac{Y''-3p}{6B}\right)^{1/3} $$ lands in $\Bbb Q$; we then associate the corresponding $x''\in \Bbb Q$, so that we have a solution $(x'', y'')$ for the equation of the tuple $(A,B)$. This completes the proof. $\square$ As a consequence, we see that any two properties among (3), (4), (5), (6) imply all properties (1) to (6). With the same argument, either (1) or (2) implies all of (3), (4), (5), (6). Without further work, this is all that can be said. Note: If we start with one point $(x,y)$ satisfying (3), say, then let $R$ be the induced point in $E^p(\Bbb Q)$. We can build in $E^p(\Bbb Q)$ all points $R+3NR$, $N\in\Bbb Z$, then lift them back to points satisfying (3). We thus get from one point $(x,y)$ countably many points, so the equation (3) defines an elliptic curve of rank at least one. We need more to "produce another point" (of a different nature). 
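To make the Proposition concrete, here is a small Sage sketch in the spirit of Code 1 below (an editorial illustration, not part of the original answer; the helper rational_cbrt and the chosen search window are ad hoc). It takes the two representations of $p=617$ for the tuples $(2,3)$ and $(4,12)$ that can be read off from the numerical search quoted above, maps them into $E^p(\Bbb Q)$, and recovers representations for the remaining tuples, e.g. $(1,6)$ and $(9,18)$, by combining the two points with the group law exactly as in the proof (compare Noam Elkies' comment on $p=617$).

def rational_cbrt(r):
    # cube root in QQ, or None if r is not a rational cube
    r = QQ(r)
    s = 1 if r >= 0 else -1
    try:
        return s * (s*r).nth_root(3)
    except ValueError:
        return None

p = 617
E = EllipticCurve([0, 9*p**2])

def to_Ep(a, b, x, y):
    # the map (x,y) -> (X,Y) = (-d*x*y, 3*(a*x^3 - b*y^3)),  d^3 = 36ab
    d = Integer(36*a*b).nth_root(3)
    return E.point(( -d*x*y, 3*(a*x**3 - b*y**3) ))

def pull_back(A, B, R):
    # try to invert the map for the tuple (A,B); returns (x,y) or None
    X, Y = R.xy()
    xx = rational_cbrt((Y/3 + p)/(2*A))
    yy = rational_cbrt((p - Y/3)/(2*B))
    return (xx, yy) if xx is not None and yy is not None else None

# two known representations of 617, from the search data above:
#   2*(-31/17)^3 + 3*(101/17)^3 = 617   and   4*(41/4)^3 + 12*(-27/4)^3 = 617
R1 = to_Ep(2, 3, -31/17, 101/17)
R2 = to_Ep(4, 12, 41/4, -27/4)

def find_representation(A, B, bound=2):
    # search small group-law combinations m*R1 + n*R2 for a pullback
    for m in range(-bound, bound + 1):
        for n in range(-bound, bound + 1):
            R = m*R1 + n*R2
            if R == E(0):
                continue
            sol = pull_back(A, B, R)
            if sol is not None:
                return sol
    return None

for A, B in [(1, 6), (9, 18), (2, 3), (4, 12)]:
    print((A, B), find_representation(A, B))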
Note: In the case $p=1979$, the generators of the free part of $E^p(\Bbb Q)$ are as follows: sage: P, Q = EllipticCurve([0, 9*1979^2]).gens() sage: Q.xy() (-272, 3889) sage: P.xy() (-18216695/66564,<PHONE_NUMBER>1/17173512) I see no "simple argument" (in both theory and in practice) to produce the above P, having only the knowledge of Q = (-272, 3889). Note: Trying to move the scene inside of the arithmetic of a number field is also a complicated story. The fields $\Bbb Q(6^{1/3})$, $\Bbb Q(2^{1/3}, 3^{1/3})$ have class number one. (See the appended Code 2.) We have a concrete description of their units. But the problem of realizing the norm $p$ in two ways that differ in their nature is the same one. For instance, with $a=6^{1/3}$ and $p=1979$ we find an element of norm $p$ by factorizing in $\Bbb Q(a)$ the $\Bbb Q$-prime $1979$ as $$ p=(337 + 150a + 102a^2) \; (11 + 6a -6a^2)\ . $$ The factor $(11+6a-6a^2)$ has norm $1979$. One solution to our problem (corresponding to a number field element written with missing $a^2$ part) is $$ \frac 1{35}(-263+258a)\ . $$ It is now a non-trivial process to find further solutions (missing $a^2$ part) using units and elements with cubic norm. Code 1: def my_cubic_root(a): """a is in QQ, if it is a cube, we return a^(1/3) in QQ, else None. Needed since we have problems with bool((-27)^(1/3) in QQ)...""" if a == 0: return 0 sign, p, q = 1, a.numerator(), a.denominator() if p < 0: sign, p = -1, -p pp, qq = p^(1/3), q^(1/3) if pp in QQ and qq in QQ: return sign * QQ(pp) / QQ(qq) p = 2003 E = EllipticCurve([0, 9*p^2]) P, Q = E.gens() T = E.point( (0, 3*p) ) a, b = 1, 6 # first case in the table d = QQ( (36*a*b)^(1/3) ) J = [-8..8] for m, n, k in cartesian_product([J, J, [0,1,2]]): R = m*P + n*Q + k*T if R == E.point(0): continue X, Y = R.xy() s, t = (Y/3 + p)/2, (Y/3 - p)/2 x = my_cubic_root(s/a) if x: # then x in QQ and x is not zero y = my_cubic_root(-t/b) # and then also y in QQ # x, y have possibly a big gcd, we want to print... # so it may seem better to force integer numbers... LCM = lcm( x.denominator(), y.denominator() ) x0, y0, z0 = x*LCM, y*LCM, LCM print(f"R = {m}P + {n}Q + {k}T\nx = {x0}\ny = {y0}\nz = {z0}\n") This delivers a long list of some points $R=(X,Y)\in E^p(\Bbb Q)$ ($R=mP+nQ+kT$ with $-8\le m,n\le 8$ found in terms of the sage-generators $P,Q$ and the $T$ torsion point) for which we get back a solution $(x,y)$. Here is a very small part of the long list... 
R = 0P + -7Q + 0T x =<PHONE_NUMBER>7617451806991358756909219533477638105712374213846397555195879982437905172639534240075646580404902724011380073423114348849619768865347595418688900649922166546243453830769840933 y =<PHONE_NUMBER>048955321129903281080010960645348316159791215455141268018661461219887281802896500458956709284909948105938453014382939187977502562188660147114609049623362562200245401625880031 z =<PHONE_NUMBER>517764006897615049800738288846177073891261914077119796463740061635879465681855349469378188013181189812878489256239707989996792109243772929949155694651080013613238673253020821 R = 0P + -4Q + 0T x = -1248383626448011320864639335968278987254059481029611805066801 y =<PHONE_NUMBER>30488404006175343285917721481004636409866227451640 z =<PHONE_NUMBER>42502359685014789309113123529012318539767007228077 R = 0P + -1Q + 0T x = 5593 y = -1969 z = 401 R = 0P + 2Q + 0T x =<PHONE_NUMBER>55957 y =<PHONE_NUMBER>57740 z =<PHONE_NUMBER>4511 R = 0P + 5Q + 0T x = -12730202841795146665456659465095899487115785453015815852478856082252223469530919315332698381397 y =<PHONE_NUMBER>223599750845585053528881247631263458296145320784676818035645960426486076253452910281 z =<PHONE_NUMBER>90808140253477784349063183165963304192648709822885554148067212479172504836399445931 Code 2: sage: R.<x> = PolynomialRing(QQ) sage: K.<a> = NumberField(x^3-6) sage: K.class_number() 1 sage: L.<b,c> = NumberField( [x^3-2, x^3-6] ) sage: L.class_number() 1 sage: xi = (-263 + 258*a)/35 sage: xi.norm() 1979 sage: K(1979).factor() (102*a^2 + 150*a + 337) * (-6*a^2 + 6*a + 11) sage: (11 + 6*a - 6*a^2).norm() 1979
2025-03-21T14:48:30.351279
2020-04-18T14:05:26
357849
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gerry Myerson", "Lo Scrondo", "https://mathoverflow.net/users/141435", "https://mathoverflow.net/users/3684" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628243", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357849" }
Stack Exchange
Generalized eigenvectors product Let's consider a real square matrix $A$ with eigenvalues $\lambda_n$ and eigenvectors $\mathbb x_n$, i.e. $A \mathbb x_n = \lambda_n \mathbb x_n$. Suppose there are some generalized eigenvectors $\mathbb y_n$ too, i.e. $A \mathbb y_n = \lambda_n \mathbb y_n + \mathbb x_n$. Now let a vector product $(. ,. )$ be such that $(\mathbb y_1, \mathbb y_2) = (A \mathbb y_1, A \mathbb y_2) $. We can write down this product as: $$(\mathbb y_1, \mathbb y_2) = \lambda_1 \lambda_2 (\mathbb y_1, \mathbb y_2) + \lambda_1 (\mathbb y_1, \mathbb x_2) + \lambda_2 (\mathbb x_1, \mathbb y_2) + (\mathbb x_1, \mathbb x_2) $$ What I'm struggling to prove is that if $\lambda_1 \lambda_2 \neq 1$, then $(\mathbb y_1, \mathbb y_2) = 0$ and $\lambda_1 (\mathbb y_1, \mathbb x_2) + \lambda_2 (\mathbb x_1, \mathbb y_2) + (\mathbb x_1, \mathbb x_2) = 0$. Someone could help? "vedtor product"? Do you mean an inner product, aka a scalar product (such as, the dot product)? @GerryMyerson as an example let the matrix $A$ be symplectic and the product be the usual symplectic product
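A worked sketch of the standard argument, under the assumption — stronger than what is literally written above, but satisfied in the symplectic example mentioned in the comments — that the invariance $(\mathbb u,\mathbb v)=(A\mathbb u,A\mathbb v)$ holds for all pairs of vectors. Applying it first to $(\mathbb x_1,\mathbb x_2)$ and then to the mixed pairs gives, whenever $\lambda_1\lambda_2\neq 1$, $$ (\mathbb x_1,\mathbb x_2)=(A\mathbb x_1,A\mathbb x_2)=\lambda_1\lambda_2(\mathbb x_1,\mathbb x_2)\ \Longrightarrow\ (\mathbb x_1,\mathbb x_2)=0 , $$ $$ (\mathbb x_1,\mathbb y_2)=(A\mathbb x_1,A\mathbb y_2)=\lambda_1\lambda_2(\mathbb x_1,\mathbb y_2)+\lambda_1(\mathbb x_1,\mathbb x_2)=\lambda_1\lambda_2(\mathbb x_1,\mathbb y_2)\ \Longrightarrow\ (\mathbb x_1,\mathbb y_2)=0 , $$ and in the same way $(\mathbb y_1,\mathbb x_2)=0$. Substituting these into the displayed identity leaves $(\mathbb y_1,\mathbb y_2)=\lambda_1\lambda_2(\mathbb y_1,\mathbb y_2)$, hence $(\mathbb y_1,\mathbb y_2)=0$, and then $\lambda_1 (\mathbb y_1, \mathbb x_2) + \lambda_2 (\mathbb x_1, \mathbb y_2) + (\mathbb x_1, \mathbb x_2)=0$ as well.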
2025-03-21T14:48:30.351369
2020-04-18T14:16:04
357851
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Gerry Myerson", "HenrikRüping", "Jeremy Rickard", "R.P.", "https://mathoverflow.net/users/17907", "https://mathoverflow.net/users/22989", "https://mathoverflow.net/users/3684", "https://mathoverflow.net/users/3969" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628244", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357851" }
Stack Exchange
Finding $q(x)$ such that $p(q(x))$ is reducible over $\mathbb{Q}[x]$ Let $p(x) \in \mathbb{Z}[x]$, such that $\deg (p) \ge 3$. Can we always find $q(x) \in \mathbb{Z}[x]$, such that $\deg (q) < \deg(p)$ and $p(q(x))$ is reducible over $\mathbb{Q}[x]$? Is there any algorithm to find $q(x)$? Note that the degree of $q$ is less than the degree of $p$. I'm looking for a proof or any reference for this result. Any help would be appreciated. What happens for the example $p(x)=x^4+1$? Do you know whether there is such a $q$? @HenrikRüping Take $q(x) = x^3$. Note: in this answer, I have inadvertently disregarded your requirement for $q$ to have integral coefficients. I do however prove that a $q$ with rational coefficients does exist, so I will just let this answer be on here for the moment. To answer your main question, yes this is possible (and simple to prove). We may assume $p$ irreducible, otherwise there is nothing to prove. Let $\alpha$ be a zero of $p$ and consider $K = \mathbb{Q}(\alpha)$. Now the subset $S$ of $K$ consisting of elements $\beta$ such that $\mathbb{Q}(\beta)=K$ is equal to the complement in $K$ of a finite number of lower-dimensional $\mathbb{Q}$-vector spaces; in particular it is non-empty. Likewise, the subset $T$ of $K$ consisting of all elements that are linearly independent over $\mathbb{Q}$ with $\{1, \alpha\}$ is also a complement of a finite number (in this case just one) of lower-dimensional vector spaces (here we need the degree of $p$ to be at least $3$). Now take any $\beta$ in $S \cap T$ (which is non-empty by what I wrote above): since $\mathbb{Q}(\beta)=K$, we have $\alpha=q(\beta)$ for some $q$ in $\mathbb{Q}[X]$ whose degree we can choose to be $< \deg (p)$ by subtracting the appropriate multiple of the minimal polynomial of $\beta$. Since $\beta$ is in $T$, we have that the degree of $q$ is $>1$. Then $\beta$ is a root of the polynomial $p \circ q$, so the latter must have a factor of degree $\deg (p)$, whereas its degree is $\deg(p)\deg(q)>\deg(p)$. Hence it is reducible. When you write "by subtracting the appropriate multiple of $p$", do you mean "by subtracting the appropriate multiple of the minimal polynomial of $\beta$"? Yes, I guess that's what I should have written... Thank you. Also I noticed after writing this that the question asked for $q$ to have integral coefficients; I don't know whether my method can easily give that. Does it not follow that, if $p$ is irreducible and of odd degree, then $q(x)=x^2$ will do? I don't think so, how would it follow? @GerryMyerson For example, if $p(x)=x^3-2$, then $p(x^2)$ is irreducible. Hmm. I was following the argument in the answer. If $\beta=\alpha^2$, then ${\bf Q}(\beta)={\bf Q}(\alpha)$, and $\{1,\alpha,\beta\}$ is linearly independent over ${\bf Q}$, so $\beta$ is a root of $p(x^2)$, so $p(x^2)$ must have a factor of degree equal to the degree of $p$ while it has degree twice that. What goes wrong? Ah! Found it! It's $\sqrt{\alpha}$ that's a zero of $p(x^2)$. Sorry.
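As a concrete illustration of the construction in the answer (an editorial example, not taken from the thread): for $p(x)=x^3-2$ with root $\alpha=2^{1/3}$, the element $\beta=\alpha^2$ lies in $S\cap T$, and $\alpha=q(\beta)$ for $q(x)=x^2/2$, which has rational (not integral) coefficients and degree $2<3$. Then $p(q(x))=x^6/8-2=\tfrac18(x^3-4)(x^3+4)$ is reducible over $\mathbb{Q}$. Note that, in line with the last comments, the polynomial one composes with is the $q$ expressing $\alpha$ in terms of $\beta$ (here $x^2/2$), not $x^2$ itself. A quick sympy check:

# check that p(q(x)) is reducible for p(x) = x^3 - 2 and q(x) = x^2/2
from sympy import symbols, factor, Rational

x = symbols('x')
q = Rational(1, 2) * x**2
print(factor(q**3 - 2))     # expected to factor as (x**3 - 4)*(x**3 + 4)/8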
2025-03-21T14:48:30.351825
2020-04-18T14:38:43
357853
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ABIM", "erz", "https://mathoverflow.net/users/36886", "https://mathoverflow.net/users/53155" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628245", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357853" }
Stack Exchange
Dense stratification of a separable Hilbert space Let $\{X_i\}_{i \in \mathbb{N}} $ be a sequence of $n$-dimensional linear subspaces of the separable Hilbert space $H$ and let $\{\phi_i\}_{i \in \mathbb{N}}$ be a sequence of continuous injective linear maps from $H$ to itself such that: $\phi_i(X_i)=X_{i+1}$, $X_i\cap X_j =\{0\}$ if $i\neq j$. Is there a general criterion under which $ X := \cup_{i \in \mathbb{N}} X_i $ is dense in $H$? Note 1: I don't want to assume that $\phi_i =\phi^i$ where $\phi$ is a hypercyclic operator and $X_0$ contains a hypercyclic vector, since, typically, those objects can rarely be made explicit and rely on existence-type results. Vague intuition: I expect some type of Wiener-Tauberian-like theorem where one loses the ability to take the span but somehow it is generated by iterations of $\phi$... WLOG (if it helps), we may assume that $H=L^2(\mathbb{R})$. But then if $n=1$ you want $X_0$ to contain a supercyclic vector Sure, but if $n$ is large then the condition strictly relaxes May be of interest: https://arxiv.org/abs/1205.3575 Yes, but this doesn't give me an explicit criterion I can check to verify if $X_0$ has this property also.
2025-03-21T14:48:30.351935
2020-04-18T14:40:25
357854
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "AlexArvanitakis", "Jake Wetlock", "Nicola Ciccoli", "darij grinberg", "https://mathoverflow.net/users/153228", "https://mathoverflow.net/users/22757", "https://mathoverflow.net/users/2530", "https://mathoverflow.net/users/6032" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628246", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357854" }
Stack Exchange
Skew differential graded algebra A sigma, or skew, derivation is a natural generalisation of the notion of derivation, depending on an algebra automorphism $\sigma$, which reduces to the usual notion of a derivation when $\sigma = id$. For a precise definition see here https://planetmath.org/SigmaDerivation Does there exist a notion of skew differential graded algebra in the literature? If so, where do these objects arise? EDIT: To confirm, I am asking whether there exists a graded analogue of a skew derivation algebra. So an $\mathbb{N}_0$-graded algebra $A = \bigoplus_{k \in \mathbb{N}_0} A_k$, together with a degree $1$ map $d$ satisfying $d^2 = 0$, and a skew analogue of the graded Leibniz rule: $$ d(a \wedge b) = da \wedge \sigma(b) + (-1)^k \sigma(a)db, ~ a \in A_k $$ @Najib: I have edited the question to include a definition of the object I am wondering about. Is this the kind of construction you're interested in? https://perso.univ-rennes1.fr/bernard.le-stum/Publications_files/TwistedCalculus.pdf Is your wedge graded (anti)commutative? If so then this $d$ seems poorly defined. @Alex: Yes, it is assumed to be anti-commutative. I have adjusted the ansatz. @NicolaCiccoli: The same but with functioning references: https://arxiv.org/abs/1503.05022v1 This edited version of the "skew Leibniz rule" has appeared in geometry: if $\varphi: N\to M$ is a map of (super) manifolds, a section $X$ of the pullback bundle $\varphi^\star TM$ is a linear map $C^\infty(M)\to C^\infty(N)$ satisfying precisely your skew-Leibniz rule: $$ X(fg)=X(f)\varphi^\star g +(-1)^{\deg X\deg f} (\varphi^\star f) X(g)$$ (See J. Nestruev, Smooth manifolds and observables, paragraph 9.47.) So if you put $\mathbb N_0$-gradings on your structure sheaves (what is $\mathbb N_0$? Positive integers?) and pick an $X$ of degree $+1$ which squares to zero you seem to arrive at your setup. (Identifying $\varphi^\star$ with $\sigma$ and $C^\infty(M)$ with $A$.) On the algebraic side there is the (comparatively more obscure) notion of $({\bf l},{\bf r})$-coderivation of Berglund (Definition 3.2) which should be dual to your proposal in the case ${\bf l}={\bf r}=\sigma$. Just like any normal person, I write $\mathbb{N}_0 := \{0,1,2,3,\dots\}$. @JakeWetlock; Oh I've never seen that before. In that case supermanifolds with $\mathbb N_0$-gradings on their structure sheaf are called "N-manifolds", see the original AKSZ paper
2025-03-21T14:48:30.352245
2020-04-18T16:01:41
357862
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Arnold", "Gerry Myerson", "Yemon Choi", "https://mathoverflow.net/users/151209", "https://mathoverflow.net/users/158000", "https://mathoverflow.net/users/3684", "https://mathoverflow.net/users/763" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628247", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357862" }
Stack Exchange
References of research papers which led to the start of sieve theory Question - I am thinking of presenting one or two papers on sieve theory in my master's thesis. I will also present 3 other papers on the Riemann zeta function which I have studied earlier. But I have no previous knowledge of sieve theory. Although I know about some books in sieve theory, like Montgomery (Topics in multiplicative number theory) and Friedlander and Iwaniec (Sieve theory: A musical), my Institute only accepts research papers in a master's thesis. So, can any researcher here suggest 2 research papers which led to the beginning of the subject of sieve theory? I will study them and present them along with the papers I have already studied. I have searched in the past 4-5 days for some papers in different domains of ANT over web pages of different professors and some other websites, but it hasn't led me far, as I am just a beginner in this field. Thanks! "I started working on masters thesis as per guidance of a professor not of my Institute who abandoned me and didn't even bothered to tell." Yes, you've told us this several times now. May I suggest that you find another professor, at your institute, who will give you an idea for a master's thesis & help you with it? Trying to write a thesis without supervision is asking for trouble, and MathOverflow is not a substitute for a supervisor. I have nothing to contribute, concerning sieve theory. Have you considered working in something other than Number Theory? Have you considered transferring to another institute? These may not seem like attractive options to you, but they may be better than what you are trying to do. Flooding MO with edits of your old questions that already have accepted answers seems to me to be an abuse of this website. @GerryMyerson it's just been edited again (2022-01-05 in this timezone) Sieve theory as such is generally considered to have started with Brun's 1915 and 1919 papers. The titles are "Über das Goldbachsche Gesetz und die Anzahl der Primzahlpaare" and "La série $1/5+1/7+1/11+1/13+1/17+1/19+1/29+1/31+1/41+1/43+1/59+1/61+\cdots$, où les dénominateurs sont nombres premiers jumeaux est convergente ou finie". Legendre's sieve predates Brun's sieve, but it seems to be thought of more as a precursor to sieve theory than anything else. Can you please tell me where I can find the originals of these papers, or English translations of both? I searched the internet. I can find a book "Goldbach Conjecture", but it does not contain the original papers, although the 1st paper is given in its references. I have no clue about the 2nd. Can you please tell me where they are available? The 2nd paper is at https://gallica.bnf.fr/ark:/12148/bpt6k486270d/f138.item.r=brun @Gerry Myerson thanks for the help.
2025-03-21T14:48:30.352459
2020-04-18T16:19:04
357865
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "LSpice", "Mikhail Borovoi", "Monty", "Not a grad student", "https://mathoverflow.net/users/2383", "https://mathoverflow.net/users/29422", "https://mathoverflow.net/users/4149", "https://mathoverflow.net/users/64244" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628248", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357865" }
Stack Exchange
Weyl group actions on standard parabolic subgroups of classical groups $\DeclareMathOperator\U{U}\DeclareMathOperator\GL{GL}$Let $E/F$ be a quadratic extension of local fields and $G=U(V)$ a unitary group associated to hermitian space $V$ over $E/F$. We fix a minimal parabolic subgroup $P_0$ of $G$ and call $P=NM$ a standard parabolic subgroup of $G$ if it contains $P_0$. Write $M=\GL_{a_1}(E) \times \dotsb \times \GL_{a_r}(E) \times \U(W)$, where $W$ is a subspace of $V$. Let $A_0$ be the maximal split torus of $G$ and $N_G(A_0)$ and $Z_G(A_0)$ its normalizer and centralizer respectively. If $P'=N'M'$ is another standard parabolic subgroup (i.e., there is a Weyl group element $w \in N_G(A_0)(F) / Z_G(A_0)(F)$ such that $w \cdot P=P'$), then I am wondering whether $M'$ should be of the form $\GL_{b_1}(E) \times \dotsb \times \GL_{b_r}(E) \times \U(W)$ and $\{b_1,\dotsc,b_r\}$ is a permutation of $\{a_1,\dotsc,a_r\}$? And if $\rho=\rho_1 \boxtimes \dotsb \boxtimes \rho_r \boxtimes \tau$ is a representation of $M$, then $w\cdot \rho$ is also the form of $\rho_{S(1)} \boxtimes \dotsb \boxtimes \rho_{S(r)} \boxtimes \tau$, where $S$ is a permutation of $\{1,2,\dotsc,r\}$? Dear Monty, I suggest that you edit your question. Make an effort to explain your notation (and maybe to correct grammar and typos ... ). What is "another associated standard parabolic subgroup"? Associated to what? What are standard parabolic subgroups if you have not chosen a maximal torus and a Borel subgroup? Where does your element $w$ live? In $F$-points or in $\bar F$-points? The answer to your question might depend on all this. @Mikhail, Oh, I have read your comment so late. I am very sorry for not explaing the notations in detail. I corrected my question more precisely. I would appreciate if you see it again. The question is still not clear. First, $w\cdot P$ is not standard. Second, you write: "then I am wondering whether $M'$ should be of the form ${\rm GL}{b_1}(E) \times \dotsb \times {\rm GL}{b_r}(E) \times U(W)$..." What do you mean by "of the form"? Anyway, if you state your question clearly and read the description of the Weyl group on page 1272 of the reference in the answer of "Not a grad student", you can answer your question yourself, at least in the case when $G$ is quasi- split. Concerning the general case, you can ask a question at Math.StackExchange.com for the description of the Weyl group and the root system of your group $G$. @Mikhail, thank you! The all I wanted for is on the reference of Goldberg. Not quite, since the Weyl group of the quasi-split unitary group is not the symmetric group. See page 1272 of Goldberg, "R-Groups and Elliptic Representations of Unitary Groups," http://www.jointmathematicsmeetings.org/proc/1995-123-04/S0002-9939-1995-1224616-6/S0002-9939-1995-1224616-6.pdf The reference in the answer has no page 1272. Thanks. There are several papers by David Goldberg with similar titles. This seems to answer the converse of the question. Thank you very much! The reference you informed me is the very one I am searching for. There is another action like taking contragredient representation. But I am still wondering why for $GL(n)$, there is no such involution action but only permutation. Do you know some intuitive reason for why there does not Weyl group element corresponding to taking $g^{-1}$ to each components of $GL(n_i)$? @May I ask you one more? Is there a similar involution Weyl group action for other classical group like symplectic group?
2025-03-21T14:48:30.352682
2020-04-18T17:02:50
357868
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628249", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357868" }
Stack Exchange
Bound for matrix inner product based on singular values Regarding the matrix inner product based on singular values, Lewis (1995) "The convex analysis of unitarily invariant matrix functions" states the result by von Neumann that $\langle X,Y \rangle \leq \langle \sigma_X ,\sigma_Y \rangle$. Does anyone know any easy proof or reference for it. I couldn't understand the reference cited in the paper. For a "pedagogical" proof, see A Note on von Neumann's Trace Inequality by Rolf Dieter Grigorieff. It has been remarked in the literature that "unexpectedly, finding a decent proof of this seemingly simple result turns out to be anything but trivial". The aim of the present note is to present still a further proof which seems to be elementary enough to correspond to the simplicity of the statement in von Neumann's Theorem. The original proof by von Neumann is in "Some matrix-inequalities and metrization of metric-space," Tomsk Univ. Rev. 1 (1937) 286-300 -- which I have not found online.
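The statement is also easy to test numerically. Here is a small check (an editorial illustration only, using the trace pairing $\langle X,Y\rangle=\operatorname{tr}(X^{\mathsf T}Y)$ for real matrices and singular values taken in the same, decreasing, order):

# numerical sanity check of von Neumann's trace inequality <X,Y> <= <sigma_X, sigma_Y>
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    X = rng.standard_normal((5, 5))
    Y = rng.standard_normal((5, 5))
    lhs = np.trace(X.T @ Y)
    rhs = np.sum(np.linalg.svd(X, compute_uv=False) *
                 np.linalg.svd(Y, compute_uv=False))
    assert lhs <= rhs + 1e-9
print("inequality verified on 1000 random pairs")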
2025-03-21T14:48:30.352777
2020-04-18T17:09:15
357869
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "ABIM", "Jochen Glueck", "https://mathoverflow.net/users/102946", "https://mathoverflow.net/users/36886" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628250", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357869" }
Stack Exchange
Bound on number of linearly independent eigenvectors of adjoint of composition operator Fix $N>1$. Let $f\in C^{\infty}(\mathbb{R},\mathbb{R})$ be such that the composition operator $C_f$, defined via $$ \begin{aligned} C_f:C^{\infty}(\mathbb{R},\mathbb{R}) &\rightarrow C^{\infty}(\mathbb{R},\mathbb{R}) \\ g & \mapsto g \circ f, \end{aligned} $$ is a bounded operator. Here the topology on $C^{\infty}(\mathbb{R},\mathbb{R})$ is the Whitney topology. Let $C^f$ denote the adjoint of the operator $C_f$. Is there a criterion on $f$ to verify when $C^f$ has at most $N$ linearly independent eigenvectors? What topology do you use on $C(\mathbb{R}, \mathbb{R})$ (so that it makes sense to talk about a bounded operator and about the adjoint operator)? @JochenGlueck I meant $C^{\infty}$; also, the topology is the Whitney one. Thank you for the clarification. My pleasure :) Thanks for pointing it out.
2025-03-21T14:48:30.352854
2020-04-18T17:18:37
357871
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628251", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357871" }
Stack Exchange
On the equivalence of the two definitions of Hadamard differentiability Sorry if this question is not eligible. I have posted it on Math.SE, but hasn't received any responses. Let $X$ be a Hausdorff locally convex space. As it is known [e.g., see Yamamuro S. (1974). Differential Calculus in Topological Linear Spaces. Springer-Verlag. P. 9], the following two definitions of Hadamard differentiability of a function $f:X\to \mathbb R$ at a point $c$ are equivalent. (1) There is a continuous linear functional $A \in L(X,\mathbb R)$ such that for any convergent sequence $t_n \rightarrow 0$ of positive real numbers and a convergent sequence $x_n \rightarrow x$ in $X$, $\lim_{n \to +\infty} \frac {f(c+t_nx_n)-f(c)} {t_n} = A(x)$. (2) There is a continuous linear functional $A \in L(X,\mathbb R)$ such that for any continuous function $g:\mathbb R \to X$ such that $g(0)=c$ and the derivative $g'(0)$ exists, the function $f \circ g$ is differentiable at $0$ and $(f \circ g)'(0)=A(g'(0))$. Averbukh & Smolyanov [Averbukh V.I., Smolyanov O.G. (1968). The various definitions of the derivative in linear topological spaces // Russian Math. Surveys 23:4. P.67–113. Theorem 3.2, 3.3] showed that the condition of continuity of $g$ in (2) can be strengthened to differentiablility without changing the result. I wonder can differentiablility of $g$ be further strengthened to continuous differentiablility without changing the result? Put differently, the question can be stated as follows. Given a sequence $t_n \rightarrow 0$ of pairwise distinct positive real numbers and a sequence $x_n \rightarrow x$ in $X$, is there a continuously differentiable function $g:\mathbb R \to X$ such that $g(t_n)=t_nx_n$ and $g'(0)=x$? Thank you.
2025-03-21T14:48:30.352972
2020-04-18T17:58:39
357874
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Deane Yang", "abx", "alberth", "https://mathoverflow.net/users/103164", "https://mathoverflow.net/users/150722", "https://mathoverflow.net/users/40297", "https://mathoverflow.net/users/613", "user347489" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628252", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357874" }
Stack Exchange
Grothendieck decomposition of locally free sheaves Let $f:\mathbb{P}^1 \rightarrow X$ be a morphism with $X$ a smooth projective algebraic variety of dimension $n$, then by Grothendieck $\mathcal{N}_{f}=\bigoplus_{i=1}^{n-1}\mathcal{O}_{\mathbb{P}^1}(a_i)$, where $\mathcal{N}_{f}$ is the normal sheaf associated to $f$. Is there some relation between those $a_{is}$ and the properties of the morphism $f$? and what is the geometric interpretation?. ..for example if $X$ has dimension $2$ then $\mathcal{N}_{f}=\mathcal{O}_{\mathbb{P}^1}(a_1)$ thus $a_{1}=c_{1}(\mathcal{N}_{f})$ (the first Chern class of the normal sheaf) and it is the autointersection of $f(\mathbb{P}^1)\subset X$ , but in higher cases $c_{1}(\mathcal{N}_{f})=\sum_{i=1}^{n-1}a_i$ and then I don't know what is the relation between them... I wanna find a geometric interpretation of those $a_is$ in $f$. Anyone could help me with this ? There is a huge literature about this. You might start with Kollár's Rational curves on algebraic varieties. Yes i am reading this book and Debarre , but i didnt find some about this coefficients Come on! The coefficients tell you how your map deforms, a fundamental tool in algebraic geometry. Read better. for example i kow that $H^{0}(\mathcal{N}_f)$ and $H^{1}(\mathcal{N}_f)$ tell me first order deformations and obstruction , if $a_i >-1$ for all $i$ then $f(\mathbb{P}^1)$ does not has obstruction, but which is the relation between $a_i$ and the way of deformate $f$ @abx, I don't think the "Come on!" and "Read better" are appropriate or needed in your comment. Looking at the proof of the result should answer this. On th eother hand, I hope we get an enlightening answer from a member of the community :) @user347489 thank u ..do u have any reference for this ?, because i know that all locally free sheaf over $\mathbb{P}^1$ is a sum of line bundles and all line bundles over $\mathbb{P}^1$ looks like $\mathcal{O}(d)$ for d an integer...i thought that in the prooff dont construct this line bundles only use the fact that how they looks like..
2025-03-21T14:48:30.353139
2020-04-18T19:21:27
357879
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "FourierFlux", "https://mathoverflow.net/users/123273" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628253", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357879" }
Stack Exchange
Proof of extended supermartingale convergence theorem There is a supermartingale convergence theorem which is often cited in texts which use Stochastic Approximation Theory and Reinforcement Learning, in particular the famous book "Neuro-dynamic Programming" the theorem is: "Let $Y_t, X_t, Z_t, t = 1,2,3,....$ be three sequences of random variables and let $\mathcal{F_t}$ be sets of random variables such that $\mathcal{F_t} \subset \mathcal{F_{t+1}}$ for all t, suppose that: (a) The random variables $Y_t, X_t, Z_t$ are nonnegative and are functions of the random variables in $\mathcal{F}_t$ (b) For each $t$ we have $E[Y_{t+1}|\mathcal{F_t}] \leq Y_t - X_t +Z_t$ (c) $\sum_{t=0}^\infty Z_t \lt \infty$ Then: $\sum_{t=0}^{\infty}X_t \lt \infty $ and there exists a nonnegative random variable $Y$ such that $Y_t \rightarrow Y$ with probability 1." The problem is, I can't find any proofs of this anywhere. Most texts on probability theory prove the standard Martingale Convergence Theorem but I feel this is a big step from that. Can anyone direct me to a source which proves the above or write out a proof from the standard convergence theorem? Here's one approach. First notice that $$ R_t: = Y_{t} + \sum_{i=1}^{t-1} X_i - \sum_{i=1}^{t-1} Z_i $$ is a supermartingale, since $$ R_{t+1} - R_t = Y_{t+1} - Y_t + X_t - Z_t, $$ giving $$ E(R_{t+1} - R_t | \mathcal{F}_t) = E (Y_{t+1} | \mathcal{F}_t) - Y_t + X_t - Z_t $$ which is less than or equal to $0$ with probability $1$, by (b). Now, we don't have a fixed lower bound for the supermartingale $R$, so we can't apply the convergence theorem directly. However, for any $a>0$, consider the stopping time $$ \tau_a = \inf\{t: \sum_{i=1}^t Z_i>a\}, $$ with $\tau_a=\infty$ if $\sum_{i=1}^t Z_i\leq a$ for all $t$. We can define $$ R^{(a)}(t):=R(t\wedge \tau_a)= \begin{cases} R_t&\text{ if }t<\tau_a\\ R_{\tau_a}&\text{ if }t\geq \tau_a \end{cases}. $$ $R^{(a)}$ is also a supermartingale for any $a$, and $R^{(a)}(t)$ is bounded below by $-a$. So by the martingale convergence theorem, for any given $a$, $R^{(a)}(t)$ converges to some finite limit with probability $1$. By countable additivity, we get that with probability $1$, $R^{(a)}(t)$ converges to a finite limit for all $a\in\mathbb{Z}$. But if $\sum_{i=0}^\infty Z_i<\infty$, which from (c) we assume happens with probability $1$, then for all large enough $a\in\mathbb{Z}$, we have $\tau_a=\infty$, and so $R^{(a)}(t)=R(t)$ for all $t$. Since we know $R^{(a)}(t)$ converges, we also get that $R(t)$ converges. Finally, since $R(t)$ converges and $\sum_{i=0}^{t-1} Z_i$ converges, we also have that $Y_t+\sum_{i=1}^{t-1} X_i$ converges. Since $\sum_{i=1}^{t-1} X_i$ is non-decreasing in $t$, and $Y_t$ is non-negative for all $t$, the only way this can happen is if $Y_t$ and $\sum_{i=1}^{t-1} X_i$ both converge, as required. Thanks, excellent! After a little digging one other alternative way is to reference a super-martingale convergence theorem for the sum missing the $-X_t$ term to get convergence and then this forces the $\sum X_t$ to be finite A.S. As far as I know this statement was first proved by Robbins and Siegmund in their paper "A convergence theorem for non negative almost supermartingales and some applications" from 1971.
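For intuition, the theorem is easy to see in action numerically. The following toy simulation (an editorial illustration only, not part of either answer; the particular recursion and step sizes are ad hoc) runs noisy gradient descent on $f(w)=w^2/2$, which fits the template with $Y_t=w_t^2$, $X_t=a_t(2-a_t)w_t^2$ and $Z_t=a_t^2\sigma^2$, where $a_t=t^{-0.7}$ so that $\sum_t Z_t<\infty$; one observes $Y_t\to 0$ while the partial sums of $X_t$ and $Z_t$ stay bounded, as the theorem predicts:

# toy Robbins-Siegmund illustration: SGD on f(w) = w^2/2 with noisy gradients
import numpy as np

rng = np.random.default_rng(1)
w, sigma = 5.0, 1.0
sum_X, sum_Z = 0.0, 0.0
for t in range(1, 50001):
    a = t ** -0.7                    # sum a_t = inf,  sum a_t^2 < inf
    sum_X += a * (2 - a) * w**2      # X_t, measurable with respect to the past
    sum_Z += a**2 * sigma**2         # Z_t, summable by the choice of a_t
    w -= a * (w + sigma * rng.standard_normal())
print("Y_t = w_t^2 =", w**2, " sum X_t =", sum_X, " sum Z_t =", sum_Z)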
2025-03-21T14:48:30.353343
2020-04-18T21:56:06
357889
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628254", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357889" }
Stack Exchange
Usefulness of the $\sigma(L^\infty,L^1)$ topology in the context of differential equations In Brezis's Functional Analysis book, through chapters 3-4, I've seen the $\sigma(L^\infty,L^1)$ topology on $L^\infty$ but did not see (so far) any application of it in differential equations. Is there an example of a differential equation where this topology might be helpful, in analyzing the $L^\infty$ norm of the solution for example? Or maybe some other thing I don't know about? This topology is ubiquitous in modern analysis of PDEs, typically in conjunction with the Banach-Alaoglu theorem. (Bounded sets in $L^\infty$ are relatively compact for the predual $\sigma(L^\infty,L^1)$ / weak-$\ast$ topology.) Here is one (not so) specific scenario: When trying to prove existence of solutions to non-linear PDEs, one possible and classical approach is to approximate the equation by a sequence of better-behaved ones, for which one can construct a solution $u_n$. If the approximation is well constructed (typically the "structural" properties that one expects formally from the original equation should be reflected in the approximation procedure) then the approximate solutions usually satisfy some "nice" estimates. For example, if $\{u_n\}$ satisfies Lipschitz bounds uniformly in $n$ then one can conclude that the derivatives $ \nabla u_n\overset{\ast}{\rightharpoonup} \nabla u$ for the weak-$\ast$ ($\sigma(L^\infty,L^1)$) topology. The Arzelà-Ascoli theorem moreover gives strong uniform convergence $u_n\to u$, which helps passing to the limit in non-linear terms. For example, if the PDE is quasi-linear elliptic of the form $div(A(u) \nabla u)=f(u)$ then typically one approximates $A_n,f_n\approx A,f$ (e.g. by truncation $A_n(u):=A(\min(n,u))$). The above procedure allows one to pass to the limit in the weak formulation $$ \int A_n(u_n)\nabla u_n\cdot \nabla\varphi \to \int A(u)\nabla u\cdot \nabla\varphi $$ and $$ \int f_n(u_n)\varphi\to \int f(u)\varphi $$ for any fixed reasonably smooth test-function $\varphi$. Another such example is the theory of viscosity solutions due to Crandall-Lions. Here the Hamilton-Jacobi PDE $$ \partial_t u+H(x,u,\nabla u)=0 $$ is usually approximated by a "viscous" Hamilton-Jacobi-Bellman equation $$ \partial_t u_\epsilon+H(x,u_\epsilon,\nabla u_\epsilon)=\epsilon\Delta u_\epsilon, $$ and the solution is constructed as $u=\lim\limits_{\epsilon\to 0}u_\epsilon$. This is really a comment but will be too long for that. It is also admittedly rather tangential to your question but I hope it can add some context to it and provide useful information. Some of the most important spaces in analysis consist of spaces of bounded functions or operators with the supremum norm. These are often non-separable and non-reflexive Banach spaces. A priori that is not a disadvantage but they do have further negative properties concerning the interaction of their functional analytical properties with those of the underlying spaces: density properties, representability of the dual, tensor-product representations for spaces on products, stability and continuity of certain natural functors. All of these suggest that the norm topology is, in a certain sense, too strong and this has led to the introduction of certain weaker lc topologies which, however, retain the all-important property of completeness. 
At first sight these constructions might appear to be rather ad hoc but in fact they are all examples of so-called mixed topologies (Wiweger) and they have been intensively studied by many mathematicians under a host of names (strict topologies, two-normed spaces, Saks spaces). The general idea is that one replaces the norm by the finest lc topology which agrees on the unit ball with a suitable natural topology on the unbounded elements. Examples of spaces where this has been applied are: bounded continuous functions on a completely regular spaces, bounded holomorphic functions on a complex domain, bounded operators on a Banach spaces (or even between two such), in particular the case of Hilbert spaces and, more generally, von Neumann algebras. Your case is, of course, that of bounded measurable functions—here one can mix the norm with the weak $\ast$- or the $L^1$-topology (the latter was the original application of this method by Saks—hence the name “Saks space”). The main importance of this to differential equations is that it can happen that the unit ball in such a resulting space is compact, something which can never happen in infinite dimensional Banach spaces, and this is vital in many arguments in ode’s and pde’s. For example, the natural extension of the Peano existence theorem fails for fields with values in a Banach space but there is a version which holds for certain Saks spaces, in particular for the one implicit in your question.
2025-03-21T14:48:30.353674
2020-04-18T21:58:28
357890
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andy Sanders", "Eduardo Longa", "Robert Bryant", "https://mathoverflow.net/users/13972", "https://mathoverflow.net/users/49247", "https://mathoverflow.net/users/85934" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628255", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357890" }
Stack Exchange
Every homotopy class contains at least one harmonic representative Let $(M^3,g)$ be a closed, connected and oriented Riemannian $3$-manifold. A circle-valued map $v : M \to S^1$ is harmonic iff the gradient $1$-form $\omega_v = v^* d\theta \in \Omega_1(M)$ is harmonic in the Hodge sense: $d \omega_v = 0$ and $\delta \omega_v = 0$. It can be seen that this happens precisely when $v$ minimizes the Dirichlet energy in its homotopy class $[v] \in [M:S^1]$. Thus, by Hodge theory, each homotopy class of a circle-valued map contains a harmonic representative. My question is whether every homotopy class of $S^2$-valued maps contains a harmonic representative. More precisely: given a smooth map $u : M \to S^2$, does there exist a harmonic map $u_0 : M \to S^2$ such that $u_0$ is smooth and homotopic to $u$? A parallel question: if $u_0 : M \to S^2$ is any harmonic map, can we say that $u_0^* \sigma \in \Omega_2(M)$ is a harmonic $2$-form, where $\sigma$ is the area form of (the round) $S^2$? No, the theory for targets with some positive curvature is very complicated. For example, there is no harmonic map from the 2-torus to the 2-sphere of degree 1 (for any choice of metrics). As Andy says, the answer is 'no': It is known that there is no harmonic map of degree $1$ from the torus to the $2$-sphere. I forget who first observed this. (Amended after Andy's comment: It's originally due to J. C. Wood in the early 1970s, see Andy's comment for the exact reference.) If I have time, I can put in the argument, but the essential outline of the argument is this: There are two kinds of harmonic maps from the torus to the $2$-sphere: those that are conformal and those that are not. If it is conformal, then, up to reversing the orientation on the torus, it is a holomorphic map, and it is well-known that a non-constant holomorphic map from the torus to the $2$-sphere has degree at least 2. (In fact, there is such a holomorphic map of any degree $d\ge 2$.) If it is not conformal, then a simple calculation shows that the degree of the mapping is zero. (Essentially, one produces an explicit $1$-form on the torus whose differential is the pullback of the area form on the $2$-sphere.) Thus, there is no harmonic map of degree 1 from the torus to the $2$-sphere.
2025-03-21T14:48:30.353898
2020-04-18T22:04:22
357892
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Pat", "Vesselin Dimitrov", "https://mathoverflow.net/users/151501", "https://mathoverflow.net/users/26522" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628256", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357892" }
Stack Exchange
Sections of infinite order of elliptic surfaces Let $X\to \mathbb{P}^1$ be a non-isotrivial elliptic surface over $\mathbb{C}$ with a section and with $X$ a smooth projective connected surface over $\mathbb{C}$. Let $\sigma:\mathbb{P}^1\to X$ be a section of infinite order. Let $U\subset \mathbb{P}^1$ be the maximal dense open over which $f$ is smooth. The fibres over points in $U$ are naturally ellipic curves. Does there exist a point $P$ in $U$ such that $\sigma(P)$ is torsion in the elliptic curve $X_P$? Note that the non-isotriviality condition is necessary. Otherwise the answer is clearly negative. Yes, there is are infinitely many such special parameters $P$ at which $\sigma(P)$ becomes torsion in its fibre. Moreover, all large enough orders are realizable for the torsion point. This comes by a version of the implicit function theorem. When the data is over the algebraic numbers, one further knows that these special parameters $P$ are a set of bounded height (so they are rather sparse). See Masser and Zannier's paper Torsion anomalous points and families of elliptic curves and, for a generalization, Prop. 3.1 in Habegger's paper Special points on fibered powers of elliptic surfaces. @VesselinDimitrov Thank you for this. Can you make this into an answer?
2025-03-21T14:48:30.354135
2020-04-18T22:36:16
357895
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Brendan McKay", "Luis Ferroni", "Penelope Benenati", "https://mathoverflow.net/users/115803", "https://mathoverflow.net/users/147861", "https://mathoverflow.net/users/9025" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628257", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357895" }
Stack Exchange
Algorithmic combinatorial discrete problem (randomized lazy update?) We are given a vector $\mathbf{b}$ of size $h$. Initially we have $\mathbf{b}_i=1$ for all $i\in \{1, 2, \ldots, h\}$. In a sequential fashion, at each time step $t=1, \ldots, n$, an index $j(t)$ is (adversarially) selected from $\{1, 2, \ldots, h\}$, $\mathbf{b}_{j(t)}$ is incremented by $1$, and the only information we receive is index $j(t)$. We are also given a vector of integers $\mathbf{a}$ of size $h$. Our goal is to maintain over time vector $\mathbf{a}$, such that, at each time step for all $i\in \{1, 2, \ldots, h\}$ we have that $\mathbf{a}_i$ is equal to $\sum_{k=1}^i \mathbf{b}_k$ up to a constant factor (even a logarithmic factor in $n$ or $h$ would be acceptable). Initially (at time step $0$) each integer $\mathbf{a}_i$ is set to be equal to $\sum_{k=1}^i \mathbf{b}_k$ which in turn is equal to $i$. Question: How can we build an algorithmic (possibly randomized) strategy to maintain over time vector $\mathbf{a}$ with a total (expected if randomized) number of elementary operations (i.e., the algorithmic time complexity) equal to $\mathcal{O}(n\log n)$ (or, if possible, $\mathcal{O}(n)$)? If I understood correctly, one structure that helps you to do so is a Persistent Segment Tree. Thank you Luis. With Persistent Segment Trees one can answer queries related to the sum of intervals in an array, which in this case corresponds to the the array $B[i]$ for $i\in{1,2,\ldots,h}$ of bins. Here, instead, we want to maintain $A[\cdot]$ updated overtime (up to a constant factor as I wrote above the question). This seems much more difficult, because, for any given approximation threshold, adding just one ball can require to simultaneously update $\Theta(h)$-many records of $A[\cdot]$. If you have any idea, I would be very happy to know it! I think there is a trick for range updates on Segment Trees called "lazy propagation" which is useful. Thanks. I will take a look at that. I just changed array $A[\cdot]$ and $B[\cdot]$ into vectors $\mathbf{a}$ and $\mathbf{b}$ only for notational purpose. Do you want all of $\boldsymbol a$ to exist all the time, or do you just want to find $a_i$ when you need it? The whole vector $\mathbf{a}$ with all its $h$ components, all the time.
2025-03-21T14:48:30.354309
2020-04-18T23:23:35
357896
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Ben Wieland", "Jesse Wolfson", "Kevin Casto", "Phil Tosteson", "https://mathoverflow.net/users/4639", "https://mathoverflow.net/users/5279", "https://mathoverflow.net/users/52918", "https://mathoverflow.net/users/9581" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628258", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357896" }
Stack Exchange
Mixed Hodge structures on (infinite) covers of complex varieties? Let $X$ be a complex variety, and let $\tilde{X}\to X$ be a covering map. Does the singular cohomology $H_\ast(\tilde{X};\mathbb{Z})$ carry a natural mixed Hodge structure? If the cover is finite, then $\tilde{X}$ is also a complex variety (by generalized Riemann existence), so the answer is yes. I'm hoping to understand what we can say for infinite sheeted covers, perhaps under some 'niceness' assumptions on $X$, e.g. $X$ is smooth (for the examples I'm interested in, I cannot take $X$ compact). If the answer in general is no, is there a natural class of counterexamples? Are there reasonable assumptions we can put on $X$ that fix these? What about the infinite-genus surface as a cover of a finite-genus one? In this case $H_1$ is infinitely generated. Technically, there is nothing that prevents a mixed hodge structure from being infinitely generated. But I would worry about examples like $\mathbb C/ \mathbb Z \to \mathbb C/\mathbb Z^2$ which maps $\mathbb G_m$ to an elliptic curve. Should this map be compatible with Hodge structures? In a $p$-adic analytic setting, this sort of example is OK-- i.e. the map on etale cohomology is compatible with Galois representations, so perhaps there is something one can say. Also, if you haven't seen it already, this paper seems relevant. https://www.math.ucla.edu/~totaro/papers/public_html/sing.pdf @PhilTosteson, thanks for the reference. Theorem 2.2 of Totaro's talk is especially relevant (I'd remembered some inkling of this, maybe from the last time I looked over Totaro, but it's nice to have it clearly stated). @PhilTosteson, your example with $\mathbb{G}m$ mapping to an elliptic curve is actually quite close to the type of examples I'm interested in (the difference being that my infinite covers are not in general algebraic). For $X\Gamma$ a Shimura variety, with Hermitian locally symmetric space $\mathcal{X}$, let $\Gamma'\subset \Gamma$ be a lattice of lower rank. Then I'm interested in finite etale maps over $\mathcal{X}/\Gamma'$ (and really only those that are pullbacks of finite etale maps over $X_\Gamma$). Hodge structures are only for algebraic varieties, not analytic varieties, so it has to depend on the pair $(Y\to X)$. 2. Not every cover works; eg, a $\mathbb Z$ cover of an elliptic curve has no compatible Hodge structure. 3. But some covers are OK, such as the cover corresponding to a map to a semiabelian variety. This is only for normal subgroups and heads in a nilpotent direction. See Richard Hain, eg, this. 4. Your case sounds tough. If your sublattice is Shimura, just use that Shimura variety as a proxy and if it isn't, I'm skeptical. @BenWieland, could you explain a little more what you mean by "Hodge structures are only for algebraic varieties"? The original examples are compact complex manifolds, and as Totaro notes in his ICM talk, once you have resolution of singularities (due to Guillen-Navarro-Aznar), you get weights on the de Rham cohomology of any compact analytic space, by the same argument Deligne uses in the algebraic case. Hodge structures are for complete varieties. Algebraic varieties have essentially unique completions, thus Hodge structures. Analytic varieties do not. An infinite cover of an algebraic variety does not have a chosen completion and thus does not have a Hodge structure. If you want to assign it a Hodge structure, it will have to depend on the variety downstairs having a Hodge structure. 
But an analytic variety with a chosen completion is fine. @BenWieland, thanks for clarifying. Now I'm more on board, though I'd say it's more correct to say that Hodge structures are for complex analytic spaces with an equivalence class of compactifications. For algebraic varieties or compact spaces, there is a unique equivalence class. For others, not so much. But, one could imagine that given a cover of an algebraic variety, there might be some natural compactification, bootstrapping from the compactification downstairs. This is what Phil's comments and Totaro's paper helped me explore in my examples. Since the compactification upstairs does not map to the compactification downstairs (consider the case that downstairs is already compact!), there is no reason to think it relevant. Again, consider a $\mathbb Z$ cover of an elliptic curve. There is a unique compactification, but no Hodge structure can be compatible with the map, not that one nor any other. Similarly, Schottky uniformization. The Hain link I gave above was about Torelli space, which has no compactification, not being homotopy equivalent to a finite complex. I think that is typical of useful examples.
2025-03-21T14:48:30.354618
2020-04-18T23:41:03
357899
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Dustin Clausen", "happymath", "https://mathoverflow.net/users/3931", "https://mathoverflow.net/users/70889" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628259", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357899" }
Stack Exchange
Descent for $K(1)$-local spectra For odd primes, we have an equalizer diagram for the $K(1)$- local sphere given by $$L_{K(1)}S \rightarrow K{{ \xrightarrow{\Psi^g}}\atop{\xrightarrow[i_K ] {}}} K$$ where $g$ is a topological generator of $\mathbb{Z}_p^{\times}$ and $\Psi^g$ denotes the Adams operation. Now we can apply the functor $Mod(-): \textrm{Spectra} \to \textrm{Symmetric monoidal infinity categories} $ and then apply the functor $L_{K(1)}$. This gives us a diagram of the form $$L_{K(1)}Sp \rightarrow Mod_{K(1)}(K){{ \xrightarrow{\Psi^g}}\atop{\xrightarrow[i_K ] {}}} Mod_{K(1)}(K)$$ where $L_{K(1)}Sp$ are the $K(1)$-local spectra and $Mod_{K(1)}(K)$ are the $K(1)$-local $K$ modules. Now is this an equalizer diagram? I have been told that this is most likely true, so I was just wondering if there was a reference for this statement. Thanks in advance. It's not quite true: need to require a $p$-adic continuity condition for the $\Psi^g$-semilinear automorphism of the $K(1)$-local $K$-module. You can see https://arxiv.org/pdf/2001.11622.pdf Proposition 3.10 for a slight variant which also works at the prime 2. Thanks! I just have a question regarding terminology. So what does mod p homotopy groups mean? Are they maps from Mod p Moore spectrum? That's right, at least up to a shift. I usually think of it as the honest homotopy groups of the (homotopy) cofiber of the multiplication by p map. P.S.: if you found the proof of Proposition 3.10 a little hard to follow, I think a couple more details will be added in an updated version :) Thanks a lot! An updated version would be very helpful :)
2025-03-21T14:48:30.354739
2020-04-18T23:44:15
357900
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Yi Wang", "https://mathoverflow.net/users/152963" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628260", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357900" }
Stack Exchange
Is there a method to find the order of a lift of an element of order 2 to the Schur cover? Let $G$ be a finite non-abelian simple group, $M.G$ the Schur covering group of $G$. Is there a method to find the order of a preimage of an element of order 2 in the natural homomorphism $\pi: M.G\rightarrow G$? We can find the result using the 'Atlas of finite groups' for some finite simple groups, but is there a general method? Thank you very much!
2025-03-21T14:48:30.354805
2020-04-18T23:51:45
357901
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Anton Petrunin", "abx", "https://mathoverflow.net/users/1441", "https://mathoverflow.net/users/40297" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628261", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357901" }
Stack Exchange
Riemannian manifold as a metric space I am looking for a reference to the following simple statement; it must be classical. (It is easy to prove, but I want to have a reference.) A metric space $X$ that corresponds to a Riemannian manifold $(M,g)$ completely determines the underlying smooth manifold $M$ and the metric tensor $g$. Isn't this the Myers-Steenrod theorem? "If $(M,g)$ and $(N,h)$ are connected Riemannian manifolds and $f:(M,d_g)\to(N,d_h)$ is an isometry, then $f:(M,g)\to(N,h)$ is a smooth isometry" It was proven by Dick Palais: Palais, Richard S., On the differentiability of isometries, Proc. Amer. Math. Soc. 8 (1957), 805–807. MR0088000, DOI: 10.2307/2033302. According to Palais, if I read his paper correctly, Myers and Steenrod proved the differentiability of isometries but Palais obtained an explicit description of smooth functions on the manifold from the metric geometry. A small precision: what Palais shows is that one can reconstruct canonically the Riemannian metric from the distance. The theorem about isometries as stated by @Quarto Bendir is indeed due to Myers-Steenrod. The theorems seem to be equivalent.
2025-03-21T14:48:30.354936
2020-04-19T00:50:49
357903
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "D.S. Lipham", "Wlod AA", "https://mathoverflow.net/users/110389", "https://mathoverflow.net/users/95718" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628262", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357903" }
Stack Exchange
Perfect images of complete Erdős space Let $\mathbb P$ denote the space of irrational numbers. In an answer to this question, Taras Banakh showed that the perfect images of $\mathbb P$ are precisely the Polish spaces with no compact neighborhoods. Here, perfect means a continuous, closed, surjective mapping with compact point pre-images. Increasing the dimension slightly, we go from $\mathbb P$ to complete Erdős space $$\mathfrak E_{\mathrm{c}}=\{x\in \ell^2:x_n\in \mathbb P\text{ for all }n<\omega\}.$$ Here, $\ell^2$ is the Hilbert space of square-summable sequences of real numbers. Question 1. Is every perfect image of $\mathfrak E_{\mathrm{c}}$ homeomorphic to $\mathfrak E_{\mathrm{c}}$? Question 2. Is $\mathbb P$ a perfect image of $\mathfrak E_{\mathrm{c}}$? What about the original Erdos space? (Does everybody but I know?) @WlodAA That one is usually more difficult to work with because it does not have as many representations, is not Polish, etc. But that may be a good follow-up question. It seems that the answer to Question 1 is "no". According to this paper, the Julia set of $f(z)=\pi\sinh(z)$ is equal to the entire complex plane $\mathbb C$, and is the perfect image of a "Cantor bouquet". The endpoint set of any Cantor bouquet is homeomorphic to $\mathfrak E_{\mathrm{c}}$. But according to the image below (from the same paper), these endpoints are mapped to a dendritic connected set (see the dark lines including the imaginary axis).
2025-03-21T14:48:30.355069
2020-04-19T01:10:43
357904
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Will Sawin", "dragoboy", "https://mathoverflow.net/users/100578", "https://mathoverflow.net/users/18060" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628263", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357904" }
Stack Exchange
Sato-Tate for modular forms Let $f$ be a non-CM holomorphic modular form of weight $k\geq 2$, and let $a(p)$ be its $p^{th}$ Fourier coefficient. The Sato–Tate conjecture is known to be true in this case; I want to know whether such a result is known for the $a(p^n)$'s (where $n$ is any natural number) as well. The statement follows from the usual Sato-Tate conjecture and the Hecke relations between $a(p)$ and $a(p^n)$. However, it will be a different measure. Got it, thanks!
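For context, here is the standard shape of the Hecke relations the answer alludes to (a sketch, not spelled out in the thread): for a prime $p$ not dividing the level one has the recursion $a(p^{n+1})=a(p)\,a(p^n)-p^{k-1}a(p^{n-1})$, so writing $a(p)=2p^{(k-1)/2}\cos\theta_p$ gives by induction $$\frac{a(p^n)}{p^{n(k-1)/2}}\ =\ \frac{\sin\big((n+1)\theta_p\big)}{\sin\theta_p}\ =\ U_n(\cos\theta_p),$$ where $U_n$ is the Chebyshev polynomial of the second kind. Equidistribution of the angles $\theta_p$ (Sato–Tate) therefore determines the limiting distribution of the normalized $a(p^n)$ as the pushforward of the Sato–Tate measure under $U_n$, which is the "different measure" mentioned in the answer.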
2025-03-21T14:48:30.355138
2020-03-30T21:49:17
357905
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Emolga", "Lutz Lehmann", "Pachirisu", "https://mathoverflow.net/users/113612", "https://mathoverflow.net/users/35593", "https://mathoverflow.net/users/47293", "https://mathoverflow.net/users/54608", "user35593" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628264", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357905" }
Stack Exchange
A (discontinuous) Initial value problem with exactly two solutions It is an old result of Kneser that if $f$ is a continuous function, and there are two solutions to the IVP $$y'=f(x,y), \quad y(x_0)=y_0,$$ then there are uncountably many solutions. I am interested in the general case, where $f$ is not continuous. Is there an IVP with exactly two solutions? So at some point $x_1$ you have two solution values $y_l(x_1)$ and $y_u(x_1)$. Now consider the initial value problem with $y(x_1)=(1-c)y_l(x_1)+cy_u(x_1)$, $0<c<1$, and trace its solution(s) back to $x_0$. What value $y(x_0)$ can you hope to achieve? Can you avoid $y_0$, and by what mechanism? @LutzLehmann Why is there a solution to the IVP with initial condition $y(x_1)=(1-c)y_l + cy_u$? Because on the segment between $x_0$ and $x_1$ it is bounded by $y_l(x)\le y(x)\le y_u(x)$ unless it crosses one of these solutions. As the right side is continuous everywhere, that means that a local solution can always be continued until it leaves that region. But at the crossing point one can then continue with the bounding solution to still get a full solution that ends in $y_0$. @Lutz The right side is not continuous. I lack the imagination of how that could happen. You still need to account for what these inner solutions do if you trace them towards $x_0$. Their domain might end before that, which is quite typical of a discontinuous right side if you do not generalize to sliding-mode solutions. If that does not happen, they have to cross one of the original solutions, and you need something that would prevent the inner solution from continuing along the other one. I do not think that that is possible, but I also have no formal proof of it. The ODE $y'=\begin{cases} 2x & \text{if } y=x^2\\ 0 & \text{if } y=0\\ 1 & \text{otherwise} \end{cases}$ with $y(0)=0$ seems to satisfy your conditions.
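As a quick check on the last example (a verification sketch, using only the piecewise definition above): both $y_1\equiv 0$ and $y_2(x)=x^2$ solve the IVP, since $$y_1'(x)=0=f(x,0)\qquad\text{and}\qquad y_2'(x)=2x=f(x,x^2),$$ so there are at least two solutions through $(0,0)$. The delicate point, which the comments debate, is whether any further solution can pass through $(0,0)$, since away from these two curves the slope is forced to equal $1$.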
2025-03-21T14:48:30.355289
2020-04-16T07:47:38
357906
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628265", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357906" }
Stack Exchange
Does the functor sending a DGA to its zeroth component admit a right adjoint? Let $A$ be a ring and write $\underline{A}^\bullet$ for the associated trivial DGA. We have a functor $$\mathrm{ev}_0\colon\mathbf{dgAlg}_{\underline{A}^\bullet}\longrightarrow\mathbf{Alg}_A$$ sending a DGA $(B^\bullet,\mathrm{d}^\bullet,\mu^{\bullet,\bullet})$ to the $A$-algebra $(B^0,\mu^{0,0})$. This functor has a left adjoint: the “de Rham complex relative to $A$ functor” $$\Omega^\bullet_{(-)/A}\colon\mathbf{Alg}_A\longrightarrow\mathbf{dgAlg}_{\underline{A}^\bullet}$$ sending an $A$-algebra $B$ to $\Omega^\bullet_{B/A}$. Q: Does $\mathrm{ev}_{0}$ also admit a right adjoint?
2025-03-21T14:48:30.355360
2020-04-19T02:42:53
357908
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Andreas Blass", "Anixx", "Mike Battaglia", "https://mathoverflow.net/users/10059", "https://mathoverflow.net/users/24611", "https://mathoverflow.net/users/6794" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628266", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357908" }
Stack Exchange
Is standard, affine infinity of extended reals quite small on the scale of infinities? Some time ago I had a conversation with a guy who was into surreal numbers and he said that in surreal numbers the affine infinity is quite minor entity compared to the ordinality of natural numbers $\omega$ (although itself affine infinity is not a surreal number, it can be somehow "inserted" between them). Lets consider possible generalizations of derivatives of otherwise non-differentiable functions. First funtion is peacewise, $$f_1(x)= \begin{cases} x/2-\pi/2, & \mbox{if } x < 0 \\ x/2+\pi/2, & \mbox{if } x > 0 \\ 0, & \mbox{if } x = 0 \end{cases}$$ We can formally express its derivative at $0$ as $w_1=\pi\delta(0)+1/2$. In this expression the $1/2$ would be regular part and defined as the limit of the derivative from the left and right and the delta part would be responsible for the jump of $\pi$. Since $w_1$ can be formally (via Fourier transform) be represented as $\sum_{k=0}^\infty 1$, this makes a sense of counting natural numbers. Now consider another function, $f_2(x)=\sqrt[3]{\left| x\right| } \text{sgn}(x)$ Its derivative at $0$ is infinite as well. Let designate it as $w_2$. It has no jump, thus there is no delta function part. What can we say about those values $w_1$ and $w_2$? They have different algebraic properties. If we add a linear function $g(x)=5x$ to $f_1(x)$ and $f_2(x)$, $f_1'(0)$ will change: its finite part would increase by $5$. On the other hand $f_2'(0)$ will remain the same. Similarly, if we multiply $f_1(x)$ by a factor, say $5$, both the finite part of $f_1'(0)$ and the jump will increase, while if we multiply $f_2(x)$, $f_2'(0)$ will remain unchanged. In other words, $w_1$ behaves more like a number: $2w_1\ne w_1$, $w_1+5\ne w_1$ while $w_2$ behaves weirdly: $a w_2= w_2\, (a>0)$ and $w_2+a=w_2$. These are the properties only known to be satisfied by affine infinity... On the other hand, we should presume that $w_1>w_2$ because, well, the function $f_1(x)$ grows more at $0$ than $f_2(x)$. We see that there are two infinite values such that the first one is greater than the second one, but still first one behaves more like a number but the second one behaves more like $\infty$. Can we thus conclude that putting the object named $\infty$ (the one which is invariant regarding addition or multiplication by positive reals) before some other infinite quantities be natural? UPDATE After thinking of it some more, I think the second example is not perfect here since its delta part is not zero. It is $\frac{\left(\pi \delta (0)+\frac{1}{2}\right)^{5/3}-\left(\pi \delta (0)-\frac{1}{2}\right)^{5/3}}{\frac{5}{3}!}$. The power of $\delta(0)$, $2/3$ is too small to make a jump here. A better example would be a function $f_3(x)=-x \ln \left| x\right|,$ whose derivative at zero has truly zero delta-part and infinite regular part. Moreover, purely formally we can see that $w_3=f_3'(0)=-\ln 0$, which formally satisfies $-a\ln 0 +b=-\ln (e^b 0^a)=-\ln 0$ (for real $a>0$). These are the qualities usually attributed to $\infty$. I'm surprised by your statements that $f'_2(0)$ will remain the same when you add $5x$ to $f_2(x)$ and when you multiply it by$5$. Since these statements are counterintuitive (for me), I'd appreciate an explanation for them. @AndreasBlass the limits of $f_2'(x)$ from left and right remain infinite before and after you add or multiply. There is no jump, so the delta-function component also does not change... Yes, I agree, this claim is arguable after all... 
@AndreasBlass and, after thinking of it some more, I think the second example is not perfect here. The delta part of the derivative is not zero. It is $\frac{\left(\pi \delta (0)+\frac{1}{2}\right)^{5/3}-\left(\pi \delta (0)-\frac{1}{2}\right)^{5/3}}{\frac{5}{3}!}$ The power of $\delta(0)$, $2/3$ is too small to make a jump here. A better example is function $f_3(x)=-x \ln \left| x\right|$, whose derivative has truly zero delta-part and infinite regular part. And... Indeed we can see that $w_3=\ln 0$, which satisfies ($a \ln 0 +b=\ln (e^b 0^a)=\ln 0$) FWIW, Conway had a notion of $\infty$ as a "surreal gap," which is after all the natural numbers but before any infinite quantities. If I recall correctly, the addition rules on surreal gaps are different from the regular surreal numbers, so that you do get the familiar $\infty + n = \infty$ for any natural $n$. So perhaps in that sense, you could identify "affine infinity" with that particular surreal gap (and this seems to be something like the way he was thinking, anyway). @MikeBattaglia thanks. This is indeed what I was talking about, and found a graphic model that illustrates the situation. @MikeBattaglia are there multiple surreal gaps (it seems, in derivative model, there are multiple, such as $-\ln 0$ and $\ln (-\ln 0)$, which are not equal but obey similar addition/multiplication rules). @MikeBattaglia interestingly enough, those "surreal gaps" have addition invariance only regarding real numbers, when added to a transfinte irregular number they do not remain the same. In a sense, one can think of them as irregular numbers whose regular part became infinite, without affecting the irregular part (an analogy in complex numbers would be $i+\infty$).
2025-03-21T14:48:30.355688
2020-04-19T03:12:42
357911
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Jeff ", "https://mathoverflow.net/users/17773", "https://mathoverflow.net/users/91273", "kodlu" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628267", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357911" }
Stack Exchange
counting number of circulant subsequences Let $k\ge1$ and $m\ge1$ be given integers. For any $x=(x_1,\ldots,x_k)\in\{\pm 1\}^k$, define $f(x)=\#\{1\le j\le k: x_j=x_{j+1}=\cdots=x_{j+m-1}\}$. Question: given $0\le l\le k$, for how many $x\in\{\pm 1\}^k$ does $f(x)=l$? Here, for notational simplicity, let $x_{k+1}=x_1,x_{k+2}=x_2,\ldots,x_{k+m-1}=x_{m-1}$. For example, suppose $k=4$ and $m=3$: if $x=(+1,+1,+1,+1)$ or $x=(-1,-1,-1,-1)$, then $f(x)=4$; if $x=(+1,+1,+1,-1)$, then $f(x)=1$. There are two $x$'s such that $f(x)=4$, eight $x$'s such that $f(x)=1$, and six $x$'s such that $f(x)=0$. It would be great to have a general and explicit formula for the number of $x\in\{\pm 1\}^k$ such that $f(x)=l$, and the formula should depend on $m,k,l$. Or some references that could help? Thank you. If you used $x_j$ instead of $i_j$ this would be much more readable. What you can use is Polya's theory of counting, for alphabet size $k=2.$ An $(n, k)$-necklace is an equivalence class of words of length $n$ over an alphabet of size $k$ under rotation. The basic enumeration problem is: For a given $n$ and $k,$ how many $(n, k)$-necklaces are there? Equivalently, we are asking how many orbits the cyclic group $C_n$ has on the set of all words of length $n$ over an alphabet of size $k.$ Denote this value by $a(n, k).$ Theorem: $$a(n,k)=\frac{1}{n}\sum_{d|n} \phi(d) k^{n/d}.$$ Have fun! Edit: In case it is unclear, you want the orbit sizes of each one of these equivalence classes. Thanks Kodlu! Your idea of using Polya's Enumeration Theorem sounds like a right track. However, I still didn't see how it applies to my problem. PET can count the number of colorings up to rotation and/or reflection. In my problem, we are counting the number of colorings subject to the constraint f(x)=l, and such a constraint seems not equivalent to either rotation or reflection. It is clear that if x is a rotation or reflection of y then f(x)=f(y), but not vice versa. PET still sounds promising and it is just unclear how to view the set {x: f(x)=l} as an equivalence class. Sorry if I miss sth.
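A brute-force enumeration makes the small example easy to check and tabulates the distribution of $f$ for any small $k,m$ (a minimal sketch; the function and variable names here are mine):

```python
from itertools import product
from collections import Counter

def f(x, m):
    """Number of cyclic positions j with x_j = x_{j+1} = ... = x_{j+m-1}."""
    k = len(x)
    return sum(all(x[(j + i) % k] == x[j] for i in range(m)) for j in range(k))

def distribution(k, m):
    """Map l -> number of x in {+1,-1}^k with f(x) = l."""
    return Counter(f(x, m) for x in product((1, -1), repeat=k))

print(distribution(4, 3))   # Counter({1: 8, 0: 6, 4: 2}), matching the example
```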
2025-03-21T14:48:30.355859
2020-04-19T03:32:17
357913
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Iosif Pinelis", "Thomas Kojar", "https://mathoverflow.net/users/36721", "https://mathoverflow.net/users/40793" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628268", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357913" }
Stack Exchange
Process with covariance $E[Y_{t}Y_{s}]=a_{1}-a_{2}|t-s|$ We have a centered Gaussian process $X_{t}$ where we have exact equality $$E[X_{t}X_{s}]=a_{1}-a_{2}|t-s|$$ for $|t-s|<\epsilon_{0}\ll \frac{a_{1}}{a_{2}}$ and $a_{i}>0$. Q: I am curious if there is any other concrete Gaussian process $(Y_{s})_{s\in [0,\epsilon_{0}]}$ out there with the same exact covariance when $|t-s|<\epsilon_{0}$ for some $\epsilon_{0}>0$ (not asymptotical behaviour with error term, but exact equality). It will be interesting if $Y_{t}$ is in terms of some known process like a functional of Brownian motion or a stationary solution of some SDE. We are not concerned with $Y_{t}$ having different distribution than $X_{t}$(even though they do looked as Gaussian processes over $t\in [0,\epsilon']$ for $\epsilon'$ small enough). Our main concern is if such covariances have been studied in the literature or if we can devise one. Some idea: start from $Y_{t}=\int_{0}^{t}f(r,t)dW_{r}$ and try to find a deterministic $f(r,t)$ with the desired covariance: by Ito isometry $\int_{0}^{s}f(r,s+h)f(r,s)ds=a_{1}-a_{2}h$. Our process Let $X_{\epsilon}(x)\sim N(0,\ln\frac{1}{\epsilon})$ with covariance: For simplicity above we suppressed the $\epsilon$ and just let $X_{t}:=X_{\epsilon}(t)$. Our particular process. Consider the hyperbolic measure $\lambda:=\frac{1}{y^{2}}dx dy$ in the upper half-plane and a White noise process W indexed by Borel sets of finite hyperbolic area: $$\{A\subset \mathbb{H}: \lambda(A)<\infty; \sup_{(x,y),(x',y')\in A}|x-x'|<\infty\}$$ with covariance: $$E[W(A_{1})W(A_{2})]:=\lambda(A_{1}\cap A_{2}).$$ Then let $X_{t}=W(V_{\epsilon}+t)$ for $$V_{\epsilon}:=\{(x,y)\in \mathbb{H}: x\in [-1/4,1/4]\text{ and }max(2|x|,\epsilon)\leq y<1/2\}.$$ By Pólya’s theorem, any even real-valued function $f$ on $\mathbb R$ with $f(\infty-)=0$ which is convex on $[0,\infty)$ is positive definite. So, any such function is the (auto)covariance function of a stationary Gaussian process; see e.g. Section "Properties of the Autocovariance Function", page 2. Now just take any two different functions, $f_1$ and $f_2$, of the Pólya class such that $f_2(t)=1-|t|=f_2(t)$ for $|t|\le1/2$. Then the corresponding stationary Gaussian processes, say $(X_{1,t})$ and $(X_{2,t})$, with the covariance functions $f_1$ and $f_2$ will have different distributions. Therefore, these two processes will be different from each other. To be more specific, note first here that, by vertical and horizontal re-scaling, without loss of generality $a_1=a_2=1$, so that $$EX_sX_t=1-|t-s|\quad\text{if}\quad|t-s|\le u, \tag{1}$$ where $u\in(0,1)$. Let then $$Y_t:=B_{t+1}-B_t=\int_t^{t+1}dB_s,$$ where $(B_t)_{t\in\mathbb R}$ is the standard Brownian motion with $B_0=0$. Then $$EY_sY_t=1-|t-s|\quad\text{if}\quad|t-s|\le 1$$ (with $EY_sY_t=0$ if $|t-s|>1$), so that $$EY_sY_t=EX_sX_t\quad\text{if}\quad|t-s|\le u,$$ as desired. For more examples, take any $h\in(0,1)$ and let $$U_t:=\frac1{\sqrt2}\,(Y_{(1-h)t}+Z_{(1+h)t}),$$ where $(Z_t)$ is an independent copy of the Gaussian process $(Y_t)$. Then $$EU_sU_t=1-|t-s|=EY_sY_t \quad\text{if}\quad|t-s|\le1/(1+h)$$ and hence $$EU_sU_t=EX_sX_t \quad\text{if}\quad|t-s|\le\min[u,1/(1+h)],$$ as desired. Thank you Iosif. The focus of the question was on finding some known process with the above mentioned covariance. Also I didn't write $\leq u$, I wrote $<u$. The distribution comment was just auxiliary, not the focus of the question. 
If you like in your setting, you should think that the underlying interval we study is $[0,u]$ for $u<\frac{1}{2}$ and so we get the same distribution there by uniqueness of covariance for Gaussian processes. @OOESCoupling : Your question was "if there is any other Gaussian process $Y_s$ out there with the same exact covariance when $|t-s|<\epsilon_0$ for some $\epsilon_0>0$". This question was fully answered by just modifying the process you had with such a covariance and thus using no additional knowledge. Your comment about the distinction between $<u$ and $\le u$ looks strange to me, since the covariance function of any 2nd-order process continuous at 0 is continuous everywhere, by Bochner's theorem. Anyhow, now I have given another version of the answer, based on Pólya's theorem. Thank you Iosif. We are only looking at them as Gaussian processes restricted over $[0,\frac{1}{2}]$ ignoring the rest. Not as processes over the entire positive line, which one can devise many examples as you wrote. The focus of the question is to construct/find some concrete known Gaussian process over the interval $[0,\frac{1}{2}]$ like an SDE solution. @OOESCoupling : You keep changing the question. It was "if there is any other Gaussian process [with certain general properties]". Now, after getting full answers to the previous versions of your questions, you change the question to "It will be interesting if $Y_t$ is in terms of some known process like a functional of Brownian motion or a stationary solution of some SDE." An appropriate thing to do in such situations is, not to invalidate valid answers, but to post the additional questions separately. Otherwise, what did I spend my time for? Thank you Iosif. I had this line "It will be interesting if Yt is in terms of some known process like a functional of Brownian motion or a stationary solution of some SDE" from the very beginning as you can check in the edit history. If you don't mind, can you modify/delete the above answer to avoid confusion? Thank you Iosif. Indeed, you had "It will be interesting ..." from the very beginning. I admit I had not noticed that. Almost all my attention was paid to your highlighted question: "if there is any other Gaussian process $(Y_{s}){s\in [0,\epsilon{0}]}$ out there with the same exact covariance when $|t-s|<\epsilon_{0}$ for some $\epsilon_{0}>0$". That question was fully answered. Why do you want me to delete the answer? @IosifPineils Thank you Iosif. The above answers the different question of showing that $Y_{t}$ is not equal in distribution to $X_{t}$. That was not required in the question. The focus was to construct a concrete $Y_{t}$ that gives the covariance from the highlighted question and the title. I wrote the "it will be interesting.." to contextualize the question's focus. So if you don't mind, please modify/delete the above answer to avoid confusion for others. Thank you Iosif. @OOESCoupling : "The above answers the different question of showing that $Y_t$ is not equal in distribution to $X_t$. That was not required in the question." -- In a strange sense, you are correct here: my answers answer a significantly more nontrivial question than the one you asked. Indeed, your question was only "if there is any other Gaussian process" with certain general properties (say P) of its covariance function. Previous comment continued: Such a question could have been answered trivially, by taking, e.g. an independent copy $(Y_t)$ of your process $(X_t)$. 
Then $(Y_t)$ would obviously be different from $(X_t)$ and it would have the same covariance function, trivially with the same property P. So, both of my answers gave you much more than this trivial answer -- which latter would already, at least formally, completely answer your question. Previous comment continued: Namely, both of my answers gave you Gaussian processes $(Y_t)$ that are, not only different from $(X_t)$, but also different from $(X_t)$ in distribution. Therefore, each of my answers answers your question more than completely. @OOESCoupling : As for your "it will be interesting ...": I admitted that I had not noticed that. But, essentially, this does not matter much. What matters foremost is your highlighted, quite explicit question "Q: I am curious if there is any other Gaussian process [...]". The addition "it will be interesting ..." looks like just a wish which would be good to also have, in addition to the answer to the main question. Perhaps, you meant to make this additional wish your main question, but in fact you did not. I hope you can take a responsibility for that. @OOESCoupling : I have added specific examples, in terms of stochastic integrals of the Brownian motion. By vertical and horizontal re-scaling, without loss of generality $a_1=a_2=1$, so that $$EX_sX_t=1-|t-s|\quad\text{if}\quad|t-s|\le u, \tag{1}$$ where $u\in(0,1)$. Take any $h\in(0,1)$ and let $$U_t:=\frac1{\sqrt2}\,(X_{(1-h)t}+Y_{(1+h)t}),$$ where $(Y_t)$ is an independent copy of your Gaussian process $(X_t)$. Then $$EU_sU_t=1-|t-s|=EX_sX_t \tag{2}$$ if $|t-s|\le u/(1+h)$, as desired. Moreover, letting $u$ take the largest possible value such that (1) still holds, we will see that the first equality in (2) will fail to hold if $|t-s|=u$, which shows that the distribution of $(U_t)$ is different from that of $(X_t)$. Therefore, the process $(U_t)$ is different from the process $(X_t)$.
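A quick numerical sanity check of the unit-increment construction $Y_t=B_{t+1}-B_t$ is easy to set up (a sketch with made-up sample sizes; it only uses the joint Gaussianity of Brownian increments). The Monte Carlo estimates below should match $1-|t-s|$ up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)

def increment_cov(s, t, n_paths=200_000):
    """Monte Carlo estimate of Cov(B_{s+1} - B_s, B_{t+1} - B_t)."""
    times = np.unique([0.0, s, s + 1.0, t, t + 1.0])
    # sample Brownian motion exactly at the needed times
    steps = rng.normal(scale=np.sqrt(np.diff(times)),
                       size=(n_paths, len(times) - 1))
    B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)])
    # the same float expressions are reused, so dictionary lookups are exact
    idx = {u: i for i, u in enumerate(times)}
    Ys = B[:, idx[s + 1.0]] - B[:, idx[s]]
    Yt = B[:, idx[t + 1.0]] - B[:, idx[t]]
    return float(np.mean(Ys * Yt))  # both increments are centered

for s, t in [(0.0, 0.3), (0.2, 0.9), (0.0, 1.0)]:
    print(s, t, increment_cov(s, t), "exact:", 1 - abs(t - s))
```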
2025-03-21T14:48:30.356412
2020-04-19T03:32:21
357914
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Chris Gerig", "https://mathoverflow.net/users/12310" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628269", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357914" }
Stack Exchange
A clarification on why the injectivity radius is involved in Lemma 10.7 of Compactness results in Symplectic Field Theory by B.-E.-H.-W.-Z I'm trying to understand why in the following lemma (Lemma 10.7 of [BEHWZ]), the upper bound on the $L^{\infty}$-norm of the differential is given in terms of the injectivity radius w.r.t. a specific metric on the domain. Let me give the full statement of the Lemma here and explain the notation: Lemma 10.7 There exists an integer $K=K(E)$ which depends only on the energy bound $E$ such that, by adding to each marked point set $M_n$ a disjoint set $Y_n$ of cardinality $2K$, we can arrange a uniform gradient bound $$ \| \nabla F_n(x) \| \leq \dfrac{C}{\rho(x)}, \quad x \in \dot{S_n} \setminus Y_n, $$ where the gradients are computed with respect to the cylindrical metric on $\Bbb R \times V$ associated with a fixed Riemannian metric on $V$, and the hyperbolic metric on $\dot{S_n}\setminus Y_n$, and where $\rho(x)$ is the injectivity radius of this hyperbolic metric at the point $x \in \dot{S_n}\setminus Y_n$. Here $S_n$ is a closed Riemann surface and $F_n$ is a pseudoholomorphic map defined on $S_n \setminus Z_n$, where $Z_n$ is some discrete set of points that act as punctures. We assume that, up to adding some more punctures, we have a hyperbolic metric on $\dot{S_n}:= S_n\setminus Z_n$. Why can't they prove, with the very same proof, the existence of some uniform bound independent of the injectivity radius? Since this is a stronger statement it could be that it doesn't apply in this case, but it's not clear to me where exactly in the proof we have to introduce the injectivity radius. I feel it might be because the metric they're working with might "change" when doing the usual rescaling argument to have a sphere/plane bubbling off. I used the word "change" because I'm not able to make it more precise and I'd like to understand it better. Thanks in advance for any insights on this proof. REFERENCES [BEHWZ] Bourgeois, F.; Eliashberg, Y.; Hofer, H.; Wysocki, K.; Zehnder, E. Compactness results in symplectic field theory. Geom. Topol. 7 (2003), 799--888. Can you find those holomorphic charts $\psi_n$ (in the proof) whose bound on $\nabla\psi_n$ is independent of the injectivity radius (at the specified points)? If not, that looks like the reason.
2025-03-21T14:48:30.356607
2020-04-19T05:38:21
357918
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "InsideOut", "Moishe Kohan", "Vladimir Dotsenko", "YCor", "https://mathoverflow.net/users/1306", "https://mathoverflow.net/users/14094", "https://mathoverflow.net/users/39654", "https://mathoverflow.net/users/69185" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628270", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357918" }
Stack Exchange
Finding invariant closed subspace which are also subgroups for the action of $\text{SL}(2, \Bbb Z)$ on $\Bbb R^n\times \Bbb R^n$ I recently came across to the following action of $\text{SL}(2,\Bbb Z)$ on the space $\Bbb R^n\times\Bbb R^n$ defined as $$ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\cdot \big(v,\,w\big)\mapsto \big(dv-cw,\,-bv+aw\big)=\big(v,\,w\big) \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} $$ where $v=(v_1,\dots,v_n),\, w=(w_1,\dots,w_n)\in \Bbb R^n$. I would like to determine the $\text{SL}(2,\Bbb Z)$-invariant closed subsets. Some examples Of course the trivial subspaces $\{0\}$ and $\Bbb R^n\times\Bbb R^n$ are among these. One can easily check that every linear subspace of the form $V\times V$, where $V<\Bbb R^n$ is invariant under these action. On the other hand, if $V,W<\Bbb R^n$ are different subspaces, then $V\times W$ is not invariant. Similarly, if $\Lambda$ is any lattice in $\Bbb R^n$, then $\Lambda\times\Lambda$ is also invariant. Not only, considering $\Bbb R^n\times\Bbb R^n$ as $\Bbb R^2\times\cdots\times\Bbb R^2$, where $\big(v,\,w\big)$ is identified with $\big(\big(v_1,\,w_1\big),\dots,\big(v_n,\,w_n\big)\big)$, if $\Lambda$ is a lattice in $\Bbb R^2$, the $n$-times product $\Lambda\times\cdots\times\Lambda$ is $\text{SL}(2,\Bbb Z)$-invariant. These are some examples of invariant closed subspaces I have found so far. I would like to determine all of them. Any idea, hint or reference is very appreciated! Edit: I was thinking this problem from a generic point of view. However, I am mainly interested on invariant closed subset of $\Bbb R^n\times\Bbb R^n$ which are also subgroups. It's not an unusual action. First since this is the restriction of an algebraic action of $G=\mathrm{SL}_2(\mathbf{R})$ in which it's Zariski dense, its invariant subspaces are the same as for the $G$-action and since $G$ is semisimple this behaves well. Also this action, call it $W_n$, obviously splits coordinate-wise as $W_1^{\otimes n}$, and $W_1$ is absolutely irreducible. Describing invariant subspaces is then an exercise. Hi @YCor, thanks for the answer first. I have a couple of doubts. Why do you say that the invariant subspaces for the $\text{SL}(2\Bbb Z)$ action are the same as for the $G$-action? The $n$-times product of lattice is not $G$-invariant. Am I missing something? The second doubt is about the splitting, the action is diagonal on $\Bbb R^2\times\cdots\times \Bbb R^2$. So the splitting may contain factors greater than $W_1$. When you say "subspace", what exactly do you mean? Presumably not vector subspace (because you mention lattices) - that was probably the context of the answer of @YCor . So what is it? Subset ? Algebraic subvariety? Yes, maybe the term "subspace" has been misleading. I actually mean topological subspaces, where $\Bbb R^n\times \Bbb R^n$ is endowed with the product of the standard euclidean topology. The point Is that I have examples with linear subspaces and lattices. Topological subspace just means any subset (with the endowed topology, but this is irrelevant to the question). You really want no further assumption such as "closed"? Yes, I am interested in closed subspaces in particular. Right, I forgot to write it. Have you looked at the case $n=1$? It's already non-trivial. In general, the question is equivalent to describe closed subsets of the (usually non-Hausdorff) quotient space. I did. In that case we have the action on $\text{SL}(2,\Bbb Z)$ on $\Bbb R\times\Bbb R\cong\Bbb R^2$. So there are lattices of course. 
If you take a vector $v\in\Bbb R^2$ such that $v$ is not a multiple of an integral vector (all entries are integers) then the orbit is dense in $\Bbb R^2$. If the entries are rational multiples of integers, instead, the orbit is discrete. Once $n\ge 3$ (maybe even $2$), this is hopeless since the action has nonempty domain of proper discontinuity.
2025-03-21T14:48:30.356890
2020-04-19T07:39:27
357923
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "User", "abx", "https://mathoverflow.net/users/156533", "https://mathoverflow.net/users/40297" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628271", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357923" }
Stack Exchange
On dimension estimate of Brill-Noether Locus Let $C$ be a smooth curve of genus $g$ on a $K3$ surface $X$ such that $\text{gon}(C) =k$ and $\text{cliff}(C) =k-2$. Then we have the following dimension estimate result: If the linear system $|C|$ is base point free with the corresponding Brill-Noether number $\rho(g,1,k) \leq 0$, then for $d \leq g-k+2$ every irreducible component of $W^1_d(C)$ has dimension at most $d-k$. At this point my question is: Does there exist an analogous result for smooth curves $C$ lying on a general surface in $\mathbb P^3$ of degree $\geq 5$? Any help from anyone is welcome. I have doubts about your statement for K3. Can you give a reference? @abx, The reference that I have got is theorem $3.7$ of the article link Thanks! Indeed the paper you quote gives this statement. But they refer to a paper of Aprodu-Farkas, who prove it for a general curve in $|C|$. I have counter-examples for special curves. @abx, Is there any analogous result for a general curve in $|C|$ in the case of hypersurfaces of degree bigger than $5$ in $\mathbb P^3$? It might be true, but I don't think it is known. Note that since your surface is general your curve is a complete intersection. Some things are known about these (e.g. the Clifford index), but I don't think such a precise result is known.
2025-03-21T14:48:30.357030
2020-04-19T07:47:23
357924
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Anixx", "Ben McKay", "Dabed", "David Loeffler", "Wojowu", "https://mathoverflow.net/users/10059", "https://mathoverflow.net/users/13268", "https://mathoverflow.net/users/142708", "https://mathoverflow.net/users/2481", "https://mathoverflow.net/users/30186" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628272", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357924" }
Stack Exchange
Are the shapes of the $\mathbb{R}^2$ plane and a disk of infinite radius different? Or otherwise, why their areas differ by $\frac\pi{12}$? The calculation of the area of the $\mathbb{R}^2$ plane depends on filtering used. I think, the most natural filtering is along the radius in polar coordinates: $$S_{\mathbb{R}^2}=\int_0^\infty 2\pi r dr=2\pi\left(\frac{\tau^2}2+\frac1{24}\right)=\pi\tau^2+\frac\pi{12}$$ where $\tau=\int_0^\infty dx$. The regularized value of this area is $0$. On the other hand, the area of a disk with radius $\tau$ (equal to the length of the real semi-axis) is $S_c=\pi\tau^2$, and its regularized value is $-\frac\pi{12}$. Thus, $S_{\mathbb{R}^2}-S_c=\frac\pi{12}$. I wonder, where this area difference comes from? Does it originate from the fact that the plane should not be considered a disk of infinite radius? Or it is some glitch of integration technique? How on earth can it be meaningful to attach any value to this integral except $\infty$? How do you get the integral equal to $2\pi\left(\frac{\tau^2}2+\frac1{24}\right)$? @Wojowu the formula appears in this post of OP, I got lost before arriving to it so I cannot comment further on it. @Anixx: You ask quite a few questions about your own theory of divergent integrals, which surely very few people are qualified to answer. Is there a good reference where your theory is developed in full rigour and detail, so that it might be possible for mathematicians other than you to answer your questions? @BenMcKay for now, I think, there is no theory because I found that the proposed approach was non-natural definition. It seems, the Levi-Civita type of construction is more natural. On the other hand, I am looking at whether one can be merged/embedded into the other. If you want an outdated text, I can provide a reference... Apparently, the discrepancy comes from my use of non-natural definition of multiplication of divergent integrals. When using a more natural and intuitive Levi-Civita field kind of construction, the multiplication gives $\int_0^\infty dx\cdot \int_0^\infty dx=\omega^2=2\int_0^\infty x dx$. So, the areas in both cases are $\pi\omega^2$, where $\omega=\int_0^\infty dx$.
2025-03-21T14:48:30.357494
2020-04-19T08:02:49
357925
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Denis Nardin", "Marco Farinati", "Pedro", "https://mathoverflow.net/users/155668", "https://mathoverflow.net/users/21326", "https://mathoverflow.net/users/43054", "https://mathoverflow.net/users/98863", "user155668" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628273", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357925" }
Stack Exchange
Hochschild homology of acyclic complex Let $A$ be a differential graded algebra over a commutative ring $R$. Suppose that $H_*(A)=0$, i.e. $A$ is acyclic. Question: Does this imply that the Hochschild homology $HH_*(A)$ also vanishes identically? If not, what hypotheses does one need to impose on $R$ and $A$? (In the case I am mostly interested in, $R$ is a polynomial ring in one variable.) Remark: if we assume that $R$ is a field, then it's a general fact that one can cook up an $A_{\infty}$-algebra structure on $H_*(A)$ such that $A \to H_*(A)$ is an $A_{\infty}$ quasi-isomorphism. Moreover, $A_{\infty}$-algebra quasi-isomorphisms are invertible and $HH_*(-)$ is functorial. So it follows that $HH_*(A)=0$ if $H_*(A)=0$. So the result should be true in this case. I guess the answer depends on how precisely you define the Hochschild homology over a general ring $R$ (i.e., are you using the derived tensor product or not?). I think I'd like to define $HH_*$ just using the bar complex (i.e. the same way as I would define it over a field). However, I'm happy to assume that $A$ is flat over $R$, so I presume I don't have to derive anything? (In fact, in the case I'm interested in, $A$ is a free associative algebra and $R$ is a polynomial ring in one variable.) If you use the derived tensor product (which coincides with the classical one if $A$ is flat over $R$), then the definition of $HH_\ast(A)$ is manifestly invariant under quasi-isomorphism of algebras. In particular the map $A\to 0$ is a quasi-isomorphism... @DenisNardin Sorry, but while it's clear that a morphism of dg-algebras $A \to B$ induces a morphism $HH_*(A) \to HH_*(B)$, it's not clear why a quasi-isomorphism of dg-algebras induces an isomorphism on Hochschild homology. Maybe I'm using a bad definition for $HH_*(-)$? Could you say a few words about why this is clear? Both derived tensor products and geometric realizations of simplicial objects preserve quasi-isomorphisms, and the bar complex is obtained by composing the two operations. I'm not sure I can say much more than what is in Weibel's Homological algebra. Hochschild homology is a derived invariant, and in particular a quasi-isomorphism invariant, since quasi-isomorphic algebras are derived equivalent. Your algebra is non-unital, I assume, since $1$ is usually a non-trivial cycle in general. In any case, however, you can consider the Hochschild cyclic complex of $A$, call it $C_*(A)$, which has an internal differential coming from $A$ and an external one coming from the usual Hochschild differential. This is a bicomplex and so there is a spectral sequence with second page $HH_*(H_*(A))$ that converges (perhaps you need to check this part carefully) to $HH_*(A)$. The Hochschild homology of $k$ is $k$, and hence so is that of $A$. If $A$ is non-unital and $H_*(A) = 0$, you would get that $HH_*(A) = 0$, too. $K[x]/x^2$ with $dx=1$ is a unital dg algebra that is acyclic. @MarcoFarinati I guess I am too used to considering weight graded algebras. :)
2025-03-21T14:48:30.357739
2020-04-19T09:13:09
357928
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Joseph O'Rourke", "Manfred Weis", "https://mathoverflow.net/users/31310", "https://mathoverflow.net/users/6094" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628274", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357928" }
Stack Exchange
Restrictions on crossing edges in Delaunay triangulations what can be said about crossing edges in Delaunay triangulations, i.e. about pairs of edges that constitute the heaviest perfect matching in the $K_4$ induced by the quadruplet of adjacent vertices: does their existence imply that all edges of their induced $K_4$ are also edges of the DT? what is the maximal number of pairs of crossing edges a given edge can belong to? Could you please explain what is a crossing edge in a DelTri? I don't understand the definition in terms of "the heaviest perfect matching in ... $K_4$ ..." Are the endpoints of crossing edges cocircular? @JosephO'Rourke if the four vertices are in convex configuration then the two diagonals have maximal weight sum and thus resemble the pair of crossing edges; if they are however in deltoid configuration then the pair of edges with maximal weight sum doesn't geometrically intersect but, by definition, is crossing. If a DT contained a pair of crossing edges that are adjacent to vertices $(A,C)$ and $(B,D)$ but e.g. edges $(A,B),(C,D)$ were not in the DT, replacing edges $(A,C),(B,D)$ with $(A,B),(C,D)$ would reduce the overall sum of edge lengths in the resulting graph Thanks for trying, but I think we must not share common assumptions and language on this topic. I'm going to let this go .... @JosephO'Rourke my apologies for my inability to provide a better explanation; maybe I will manage that in some future edit. I appreciate your honest feedback.
2025-03-21T14:48:30.357875
2020-04-19T09:39:16
357929
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "Yahav Boneh", "https://mathoverflow.net/users/156518" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628275", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357929" }
Stack Exchange
Relation of row sums to largest eigenvalue I know that the largest eigenvalue of a graph is bounded between the minimal and maximal row sum of the matrix. If I have a $0-1$ symmetric matrix (an adjacency matrix) and I know $k$ of the rows have at least $k$ zeros in them (more specifically, I know $k$ is the maximum size of an all-zeros principal sub-matrix of the adjacency matrix), is there anything else I can tell about the largest eigenvalue (or other eigenvalues)? Is there anything I can tell about the eigenvalues which implies I have that kind of principal sub-matrix? Thanks! You are asking for relationships between the maximum independent set and the eigenvalues. If you search with those terms you will find several. For example, Haemers proved that the maximum size of an independent set is bounded above by $$ n\frac{-\lambda_1\lambda_n}{\delta^2-\lambda_1\lambda_n},$$ where $\lambda_1,\lambda_n$ are the largest and smallest eigenvalues and $\delta$ is the minimum degree. Thank you! Do you know the name of the article?
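As an illustration of how tight this bound can be, here is a small self-contained check on the Petersen graph (an example of my choosing, not taken from the thread): there $n=10$, $\delta=3$, $\lambda_1=3$, $\lambda_n=-2$, so the quoted bound evaluates to $10\cdot 6/(9+6)=4$, which equals the independence number of the Petersen graph.

```python
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2):
# vertices = 2-element subsets of {0,...,4}, edges between disjoint subsets.
V = list(combinations(range(5), 2))
n = len(V)
A = np.zeros((n, n))
for i, u in enumerate(V):
    for j, w in enumerate(V):
        if i != j and not set(u) & set(w):
            A[i, j] = 1.0

eig = np.sort(np.linalg.eigvalsh(A))
lam_n, lam_1 = eig[0], eig[-1]
delta = int(A.sum(axis=1).min())

bound = n * (-lam_1 * lam_n) / (delta ** 2 - lam_1 * lam_n)
print("spectrum:", np.round(eig, 6))          # -2 (x4), 1 (x5), 3 (x1)
print("ratio-type bound:", round(bound, 6))   # 4.0 = independence number
```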
2025-03-21T14:48:30.357975
2020-04-19T10:20:26
357931
{ "all_licenses": [ "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/" ], "authors": [ "darij grinberg", "https://mathoverflow.net/users/2530" ], "include_comments": true, "license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/", "provenance": "stackexchange-dolma-0006.json.gz:628276", "site": "mathoverflow.net", "sort": "votes", "url": "https://mathoverflow.net/questions/357931" }
Stack Exchange
Express inclusion-exclusion principle in terms of matrix operations First of all, I denote $\{1,2,3,...,m\}$ by $[m]$. Let there be a collection of sets $\alpha=\{A_{1},A_{2},...,A_{m}\}$ such that $\bigcup_{i\in[m]}A_{i}\subseteq [n]$. Consider any function $f:\mathcal{P}([n])\rightarrow \mathbb{C}$. It is well known that $$f(\bigcup_{i\in[m]}A_{i})=\sum_{i=1}^{m}(-1)^{i-1}\sum_{I\subseteq[m],|I|=i}f(\bigcap_{l\in I}A_{l})$$ That formula is called the inclusion-exclusion principle. But what if we associate to $\alpha$ a matrix $\beta=(x_{A_{i}}(j))_{i\in[m],j\in[n]}$, where $x_{A_{i}}(j)=\begin{cases} 1 &\text{when } j\in A_{i}\\0 &\text{otherwise } \end{cases}$ How can we express the inclusion-exclusion principle using operations on the matrix $\beta$? By an operation I mean, for example, adding or multiplying rows or columns, taking the determinant, etc. In particular I am interested in the case when $f$ is a probability measure $P$. Here is what I have done in this case: Consider a random subset of $[n]$ where each element is included independently, each with probability $p$. I noticed that $f(A\cap B)=P(A\cap B)=p^{|A\cup B|}$ (where $A$ stands both for the set and for the event that every element of $A$ is picked, and similarly for $B$). But how to use a matrix there? Row $i$ is the $\{0,1\}$-vector corresponding to the set $A_i$. Thus, multiplying rows $i$ and $j$ entrywise yields the $\{0,1\}$-vector corresponding to the set $A_i \cap A_j$, and similarly for larger products. But multiplying rows of a matrix entrywise is not a standard matrix operation, so I'm not sure how much you gain from this point of view. Also, the inclusion-exclusion principle doesn't hold for any arbitrary $f$. You need $f$ to be modular (i.e., obtainable by summing values at single elements). As darij grinberg noted, the inclusion-exclusion principle holds only if $f$ is additive over $[n]$ or, equivalently, is a (signed) measure over $[n]$, and then $$f(A)=\int x_A\,df$$ for any $A\subseteq[n]$. Now, to get the inclusion-exclusion principle from properties of indicators $x_A$ of sets $A$, write \begin{align}x_{\bigcup_1^m A_j}&=1-\prod_1^m(1-x_{A_j}) \\ &=\sum_{i=1}^m(-1)^{i-1}\sum_{J\in\binom{[m]}i}\prod_{j\in J} x_{A_j} \\ &=\sum_{i=1}^m(-1)^{i-1}\sum_{J\in\binom{[m]}i}x_{\bigcap_{j\in J} A_j}, \end{align} where $\binom{[m]}i$ is the set of all sets $J$ such that $J\subseteq[m]$ and $|J|=i$. Integrating now with respect to $df$, we get the inclusion-exclusion principle: \begin{align}f(\bigcup_1^m A_j)&= \sum_{i=1}^m(-1)^{i-1}\sum_{J\in\binom{[m]}i}f(\bigcap_{j\in J} A_j). \end{align}
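To make the comments concrete, here is a small numerical sketch (the names and the random test data are mine): entrywise products of rows of $\beta$ give the indicator vectors of intersections, and for an additive $f$, say $f(A)=\sum_{j\in A}c_j$, both sides of the inclusion-exclusion identity can be evaluated straight from the matrix.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

n, m = 8, 4
beta = rng.integers(0, 2, size=(m, n))   # row i = indicator vector of A_i inside [n]
c = rng.random(n)                        # weights defining an additive f

def f(ind):
    """f(A) = sum of c_j over j in A, with A given by its indicator vector."""
    return float(c @ ind)

union_ind = 1 - np.prod(1 - beta, axis=0)      # indicator of the union of all A_i
lhs = f(union_ind)

rhs = 0.0
for i in range(1, m + 1):
    for I in combinations(range(m), i):
        # entrywise product of the selected rows = indicator of the intersection
        inter_ind = np.prod(beta[list(I)], axis=0)
        rhs += (-1) ** (i - 1) * f(inter_ind)

print(lhs, rhs)   # the two values agree (up to floating point)
```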