| added (string, 2025-03-12 15:57:16 to 2025-03-21 13:32:23) | created (timestamp[us], 2008-09-06 22:17:14 to 2024-12-31 23:58:17) | id (string, length 1 to 7) | metadata (dict) | source (string, 1 class) | text (string, length 59 to 10.4M) |
---|---|---|---|---|---|
2025-03-21T14:48:29.834017
| 2020-02-11T14:28:59 |
352452
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Manfred Weis",
"Todd Trimble",
"https://mathoverflow.net/users/2926",
"https://mathoverflow.net/users/31310"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626277",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352452"
}
|
Stack Exchange
|
Early examples of proof appraisals
What are the earliest known examples of proofs being described as 'deep', 'elegant' or 'beautiful' (or their equivalents in other languages)?
Gauß, for example, called one of his results 'remarkable' (the Theorema Egregium) -- are there earlier examples of such proof attributes?
I thought of this when I saw the article Beauty Is Not Simplicity: An Analysis of Mathematicians' Proof Appraisals.
@Matt: thanks for being faster with editing the question than I was.
The Gauss example was not his calling his proof remarkable (elegant, etc.) but rather calling the phenomenon itself, an objective "fact of nature" as it were, remarkable.
Probably, one of the earliest examples is the famous appraisal made by Plutarch (c. AD 46 – c. 120) of Archimedes' geometric work. An English translation is as follows:
It is not possible to find in all geometry more difficult and intricate questions, or more simple and lucid explanations. Some ascribe this to his natural genius; while others think that incredible effort and toil produced these, to all appearances, easy and unlaboured results. No amount of investigation of yours would succeed in attaining the proof, and yet, once seen, you immediately believe you would have discovered it; by so smooth and so rapid a path he leads you to the conclusion required [...] His discoveries were numerous and admirable; but he is said to have requested his friends and relations that, when he was dead, they would place over his tomb a sphere containing a cylinder, inscribing it with the ratio which the containing solid bears to the contained.
|
2025-03-21T14:48:29.834188
| 2020-02-11T15:05:12 |
352457
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Doug Liu",
"Felipe Voloch",
"https://mathoverflow.net/users/153360",
"https://mathoverflow.net/users/2290"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626278",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352457"
}
|
Stack Exchange
|
Good reduction of finite etale covers of abelian varieties
Let $R$ be a dvr (whose residue characteristic is zero if it helps) with fraction field $K$.
Let $A$ be an abelian variety over $K$ with good reduction over $R$. Let $X\to A$ be a finite etale morphism with $X$ geometrically connected over $K$.
Does $X$ have good reduction over $R$? (That is, does $X$ have a smooth proper model over $R$?)
By Lang-Neron, there is a finite field extension $L/K$ such that $X_L$ is an abelian variety over $L$. In this case $X_L\to A_L$ is an isogeny and it follows from Neron-Ogg-Shafarevich that $X_L$ has good reduction over $R_L$ as well. Thus, $X$ has potential good reduction over $R$, i.e., there is a finite extension $L/K$ such that $X_L$ has a smooth proper model over $R_L$, where $R_L$ is the integral closure of $R$ in $L$.
I fear that my question has a negative answer, but I can't think of an explicit example. Can one find an elliptic curve $E$ over $\mathbb{Q}$ with good reduction at a prime $p$, an $E$-torsor $T$ with bad reduction at this prime $p$, and a finite etale cover $T\to E$? But I do not see how.
Try the cover of $y^2=x^3-x$ given by $z^2=3x$ with $p=3$.
Sorry, but I am confused. By $z^2=3x$ you mean a projective quadratic curve of genus $0$? How to cover a genus $1$ curve by another of genus $0$?
|
2025-03-21T14:48:29.834303
| 2020-02-11T15:43:05 |
352460
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Taras Banakh",
"YCor",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/61536"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626279",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352460"
}
|
Stack Exchange
|
Are separability and autoseparability equivalent for (locally) compact topological groups?
Definition. A topological group $G$ is called autoseparable if there exists a countable subset $S\subset G$ and a sequence $(f_n)_{n\in\omega}$ of automorphisms of $G$ such that for any neighborhood $U\subseteq G$ of the unit of $G$ we have $G=\bigcup_{n=0}^\infty f_n(US)$.
It is clear that each separable topological group is autoseparable.
The converse is not true as shown by the following
Example. Each topological vector space $X$ is autoseparable, which is witnessed by the set $S=\{0\}$ and the sequence $(f_n)_{n\in\omega}$ of automorphisms $f_n(x)=nx$.
Problem 1. Are separability and autoseparability equivalent for (locally) compact topological groups?
Added in Edit. It can be shown that for any finite field $\mathbb F$ and any cardinal $\kappa$ the Tychonoff power $\mathbb F^\kappa$ is autoseparable, which means that the answer to the above problem is negative. So, Problem 1 transforms into
Problem 2. Which compact topological groups are autoseparable?
Added in the next Edit. The answer to this MO-question implies that for any infinite cardinal $\kappa$ the compact topological group $\mathbb T^\kappa$ is autoseparable, which implies that each ($\sigma$-compact locally) compact abelian topological group embeds into an autoseparable (locally) compact abelian topological group.
Just a matter of taste: why is one countable set written as a countable subset, while the other is indexed as a sequence?
Such a group is clearly $\sigma$-compact. I guess the main case to look at is the case of compact groups, and compact abelian groups would be a good start.
@YCor You are right: I have corrected $S\subset X$ to $S\subset G$. Concerning the different notations: in this way I avoided explaining what $A(US)$ is for a subset $A$ of the automorphism group of $G$. Concerning your last comment, indeed you are right and this was my initial intention. But I wanted to define autoseparability as a property related somehow to separability.
Your edit led me to wonder whether there is a $\sigma$-compact group that is not automorphism-separable (what you call "autoseparable"). Indeed there's a compact abelian one. Namely, let's use that there exists a torsion-free abelian group $D$ of cardinality $>2^c$ with countable automorphism group. For a topological group with countable automorphism group, clearly automorphism-separable means separable. Hence, the Pontryagin dual of $D$ being non-separable, it is not automorphism-separable.
@YCor What about the autoseparability of uncountable powers of the circle group? I have written a corresponding question here: https://mathoverflow.net/q/352604/61536
Extended comment. Let me first rephrase the question. Say that a topological group $G$, a subgroup $A$ of $\mathrm{Aut}(G)$, and a subset $S$ of $G$ satisfy (*) if for every neighborhood $U$ of $1$ in $G$ we have $G=\bigcup_{f\in A,x\in S}f(Ux)$. Say that $G$ has Property (V) if it has such $A$, $S$ countable, and (V') if it has such $A$ countable with $S=\{1\}$.
You're asking when $G$ locally compact has Property (V). It clearly implies $\sigma$-compactness and is implied by separability, and more specifically the question asks about equivalence with being separable.
Let me do two reductions.
(1) If there's a non-separable locally compact abelian group $G$ satisfying (V), then there is another one (a quotient of $G$) satisfying (V').
Indeed, the quotient by the closure of the subgroup generated by $\bigcup_{f\in A}f(S)$ works.
(2) If there's a non-separable locally compact group $G$ satisfying (V'), then there is a compact one (a compact normal subgroup of $G$).
Proof: since $G$ is $\sigma$-compact, there is a compact normal subgroup $K$ of $G$ such that $G/K$ is second-countable. Then $W=\bigcap_{f\in A}f(K)$ is such that $G/W$ is second-countable, and in addition is $A$-invariant. Hence if $G$ is not separable, then $W$ is not separable. Also $W$ satisfies (V'): indeed, let $U$ be a neighborhood of $1$ in $W$; then there exists a neighborhood $U'$ of $1$ in $G$ such that $U=U'\cap W$. Then $G=\bigcup_{f\in A}f(U')$. Intersecting with $W$, this yields $W=\bigcup_{f\in A}f(U)$.
The latter proof shows:
If $G$ satisfies (V') for some $A$ and $W$ is an $A$-invariant closed normal subgroup, then $W$ also satisfies (V').
Combining all this, if there is a non-separable locally compact abelian group with (V'), then there is one that in addition is compact, and either connected or profinite.
|
2025-03-21T14:48:29.834592
| 2020-02-11T16:22:30 |
352462
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626280",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352462"
}
|
Stack Exchange
|
Compact $G$-ENR's and Euler characteristic computed with Alexander-Spanier cohomology with compact support
Let $(Z,A)$ be a compact ENR pair; then
$$\chi(Z)=\chi_c(Z-A)+\chi(A)$$
where $\chi_c$ is the Euler characteristic taken in Alexander-Spanier cohomology with compact support (ENR means euclidean neighborhood retract).
According to the book Transformation Groups by Tammo tom Dieck, page 230, proposition 1.12
$$\chi(X^{K}/NK)=\displaystyle\sum_{(H_i)}{\chi_c(X^{K}_{(H_i)}/NK)}$$
where $X$ is a compact $G$-ENR, with $G$ a compact Lie group, $K$ is a (closed?) subgroup of $G$ and $H_i$'s run through a complete set of conjugacy classes of (closed?) subgroups of $G$.
Each $X^{K}_{(H_i)}/NK$ is supposed to be a difference $X_i-Y_i$, where $(X_i,Y_i)$ is a compact ENR pair, but I do not see what those spaces are. Any suggestions, please? Do $K$ and each $H_i$ need to be closed?
|
2025-03-21T14:48:29.834682
| 2020-02-11T16:27:34 |
352463
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dan Petersen",
"David E Speyer",
"Harry Gindi",
"LSpice",
"Pace Nielsen",
"https://mathoverflow.net/users/1310",
"https://mathoverflow.net/users/1353",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/297",
"https://mathoverflow.net/users/3199",
"https://mathoverflow.net/users/44191",
"user44191"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626281",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352463"
}
|
Stack Exchange
|
The Octahedral Axiom in group theory
$\require{AMScd}$Here are two results about groups:
(The third isomorphism theorem) Suppose that I have $A \triangleleft B \triangleleft C$ and $A \triangleleft C$. Then $C/B \cong (C/A)/(B/A)$.
(An exercise I just assigned my students) Suppose that we have $X \triangleleft Z$ and $Y \triangleleft Z$ with $X \cap Y = 1$. Then $(Z/X)/Y \cong (Z/Y)/X$.
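(A throwaway numerical sanity check, not part of the original question: both statements can be spot-checked on a small cyclic group, here $Z=\mathbb{Z}/12\mathbb{Z}$, an arbitrary choice, and only at the level of orders, which for quotients of a cyclic group already pins down the isomorphism type.)
```python
# Spot-check of the two quotient statements on Z = Z/12Z. Cosets are
# represented as frozensets of integers mod 12; only orders are compared.
n = 12
Z = set(range(n))

def subgroup(gen):
    """Cyclic subgroup of Z/nZ generated by gen."""
    return {(gen * k) % n for k in range(n)}

def cosets(G, H):
    """Cosets of the subgroup H inside the subgroup G (H <= G)."""
    return {frozenset((g + h) % n for h in H) for g in G}

# Third isomorphism theorem: A <= B <= C (all normal, since Z is abelian).
A, B, C = subgroup(6), subgroup(3), Z                    # orders 2, 4, 12
assert len(cosets(C, B)) == len(cosets(C, A)) // len(cosets(B, A)) == 3

# The exercise: X, Y normal in Z with X ∩ Y = 1 (here X ∩ Y = {0}).
X, Y = subgroup(4), subgroup(6)                          # orders 3, 2
assert X & Y == {0}
XY = {(x + y) % n for x in X for y in Y}                 # the subgroup XY
# Both double quotients have order |Z|/|XY|, hence are isomorphic (cyclic).
assert len(cosets(Z, X)) // len(cosets(XY, X)) == len(Z) // len(XY)
assert len(cosets(Z, Y)) // len(cosets(XY, Y)) == len(Z) // len(XY)
```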
Vague question a student just asked me: Is there some general context in which to think of these results and why they look similar?
We can write these as diagrams with exact rows and columns:
\begin{gather}
\begin{CD}
@. @. 1 @. 1 \\
@. @. @VVV @VVV \\
1 @>>> A @>>> B @>>> B/A @>>> 1 \\
@. @| @VVV @VVV \\
1 @>>> A @>>> C @>>> C/A @>>> 1 \\
@. @. @VVV @VVV \\
@. @. C/B @>\cong>> (C/A)/(B/A) \\
@. @. @VVV @VVV \\
@. @. 1 @. 1
\end{CD} \\
\begin{CD}
@. @. 1 @. 1 \\
@. @. @VVV @VVV \\
@. @. X @= X \\
@. @. @VVV @VVV \\
1 @>>> Y @>>> Z @>>> Z/Y @>>> 1 \\
@. @| @VVV @VVV \\
1 @>>> Y @>>> Z/X @>>> (Z/X)/Y \cong (Z/Y)/X @>>> 1 \\
@. @. @VVV @VVV \\
@. @. 1 @. 1
\end{CD}
\end{gather}
If we were in an abelian category, these would be two forms of the octahedral axiom.
Vague but more technical question: Is there something like a semi-abelian category which includes the case of groups, and where we have something like an octahedral axiom?
I guess this looks a lot like the $3 \times 3$ lemma, and maybe that is the answer, but that usually comes with an assumption that the diagram commutes and deduces exactness, whereas here I have a bunch of exact sequences and the theorem is that there is an isomorphism making the diagram commute.
On thinking about this more, the right answer to the student's question is probably the $3 \times 3$ diagram built from a group $G$ and two normal subgroups $N_1$ and $N_2$, whose entries are $N_1 \cap N_2$, $N_1$, $N_1 N_2/N_2$, $N_2$, $G$, $G/N_2$, $N_1 N_2/N_1$, $G/N_1$ and $G/(N_1 N_2)$, and I was distracted by the octahedron. But I'll leave this up and see what someone else has to say.
This resemblance is because the octahedral axiom is a shadow of the exact same theorem in a stable ∞-category, where you replace cokernels by homotopy cofibres. See Theorem <IP_ADDRESS> of Jacob Lurie's Higher Algebra for a proof that the octahedral axiom arises naturally in this way
@Harry Gindi In what stable oo-category would a general (nonabelian) group be an object?
@DanPetersen None of them, but analyzing the proof of TR4 from stability, it seems like you can get away with a bit less.
@LSpice Sure, go ahead.
David, doesn't the exercise you assigned only look like the third isomorphism theorem because (a) it fundamentally uses it, and (b) you used the lattice isomorphism theorem to write it in a "slicker" (but technically incorrect) manner? What I mean is that $X$ is not a subgroup of $Z/Y$, so $(Z/Y)/X$ literally makes no sense. Making things technical, you are claiming $(Z/Y)/(XY/Y) \cong (Z/X)/(XY/X)$, which is a simple consequence of (two applications of) the 3rd isomorphism theorem. [Of course, you are correct that $XY/Y\cong X$ when $X\cap Y=1$, etc...]
Is there an easy way to separate the two diagrams a bit more? As currently shown, it's a bit difficult to keep track of what's going where.
@user44191, that's my fault; I edited to convert to AMScd, but didn't separate them enough. I won't edit again because I don't know what triggers CW status, but probably just putting them in two separate environments rather than in one big {gather} as I did would be enough.
|
2025-03-21T14:48:29.835039
| 2020-02-11T17:58:15 |
352471
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Captain Lama",
"Denis Nardin",
"https://mathoverflow.net/users/43054",
"https://mathoverflow.net/users/68479"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626282",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352471"
}
|
Stack Exchange
|
Multiplicative structure of the K-theory of Severi-Brauer varieties
There is a well-known result by Quillen stating that if $X_A$ is the Severi-Brauer variety of a central simple algebra $A$ of degree $d$ over a field $k$, then its (Quillen) K-theory decomposes as
$$K_\bullet(X_A) \simeq \bigoplus_{i=0}^{d-1}K_\bullet(A^{\otimes i}).$$
In every text I've seen mentioning it, the decomposition is stated at the level of $K_n$, and is just taken as a decomposition of abelian groups. But since $X_A$ is a scheme, $K_\bullet(X_A)$ is a graded-commutative ring, and this induces a multiplicative structure on $\bigoplus_{i=0}^{d-1} K_\bullet(A^{\otimes i})$.
It seems almost compelling that this structure should come from
$$K_p(A^{\otimes i})\otimes K_q(A^{\otimes j})\to K_{p+q}(A^{\otimes i+j})\to K_{p+q}(A^{\otimes r})$$
where $r\equiv i+j$ modulo $d$, and the last map is the isomorphism given by Morita equivalence.
Nonetheless, I couldn't see any reference to that fact in the literature I've read. Is it a "folklore" result? Is it actually stated/proved somewhere and I've missed it?
Actually, even in the split case, where we get $K_\bullet(\mathbb{P}^{d-1} _k)\simeq K_\bullet(k)^d$, I haven't seen stated that this gives an isomorphism with the group ring $K_\bullet(k)[\mathbb{Z}/d\mathbb{Z}]$.
I don't think it is true: for the projective space the decomposition is given by the fact that $K(\mathbb{P}^n_S)$ is a free $K(S)$-module with basis $\{\mathcal{O}(i)\}_{i=0,\dots,d-1}$ and it's not true that $[\mathcal{O}(d)]=1$ in $K_0(\mathbb{P}^n_S)$ (rather, it is given by a polynomial in $[\mathcal{O}(1)]$). Presumably the case of a general Severi-Brauer variety is similar.
@DenisNardin You are perfectly right, thank you. Actually, looking at the Koszul sequence $0\to \mathcal{O}\to \dots\to \mathcal{O}(i)^{\binom{d}{i}}\to\dots\to \mathcal{O}(d)\to 0$ shows that $\mathcal{O}(1)$ satisfies $(X-1)^d=0$, so $K_0(\mathbb{P}_k^{d-1})\simeq \mathbb{Z}[X]/(X-1)^d$ is not reduced, whereas $K_0(k)[\mathbb{Z}/d\mathbb{Z}]$ is reduced (so they are really not isomorphic at all; this is not just a matter of a wrong basis choice).
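(A tiny symbolic cross-check of that last point, added here and not part of the original exchange, for the arbitrary small value $d=4$: $\mathbb{Z}[X]/(X-1)^d$ contains a nonzero nilpotent, e.g. the class of $(X-1)^{d-1}$, while $X^d-1$ is squarefree, so $\mathbb{Q}[X]/(X^d-1)$, and hence the group ring sitting inside it, has no nonzero nilpotents.)
```python
# Symbolic check with SymPy: Z[X]/((X-1)^d) has a nonzero nilpotent,
# while X^d - 1 is squarefree (gcd with its derivative is 1).
from sympy import symbols, rem, gcd, diff

X = symbols('X')
d = 4                                    # arbitrary small degree

modulus = (X - 1) ** d
nil = (X - 1) ** (d - 1)
print(rem(nil, modulus, X))              # nonzero in the quotient ring
print(rem(nil ** 2, modulus, X))         # 0: it squares to zero, so it is nilpotent

f = X ** d - 1
print(gcd(f, diff(f, X)))                # 1: X^d - 1 is squarefree
```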
|
2025-03-21T14:48:29.835198
| 2020-02-11T18:58:20 |
352476
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"R. van Dobben de Bruyn",
"anon",
"https://mathoverflow.net/users/108274",
"https://mathoverflow.net/users/149169",
"https://mathoverflow.net/users/82179",
"user267839"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626283",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352476"
}
|
Stack Exchange
|
Locally free group scheme étale
Let $R$ be a commutative ring, $p >0$ prime and $G$ a finite, locally free group scheme over $R$ of rank $p^n$; $n \in \mathbb{N}_{\ge 1}$. Assume $p \in R^*$ (i.e. is a unit in $R$).
Question: Why does this condition on the rank imply that $G$ is étale?
By definition, étale is equivalent to flat and unramified. As $G$ is locally free, it is obviously flat. Being unramified is also a local condition. Thus we can translate the problem into commutative algebra and ask why the free $R$-module $R^{p^n}$ is unramified at a prime $\mathfrak{q} \subset R$ if $p \in R^*$.
This is not so easy, but relies on a well-known structure theorem for connected group schemes over a perfect field.
Lemma 1. A finitely presented morphism $Y \to X$ of schemes is unramified if and only if $Y_x \to x$ is unramified for all $x \in X$.
Proof. See [EGA IV$_4$, Cor. 17.4.2]. $\square$
Thus, we may reduce to the case $R = k$ for $k$ a field, and by flat descent to the case where $k$ is algebraically closed. Then unramified for a finite extension means (geometrically) reduced.
Lemma 2. A finite group scheme over an algebraically closed field is a semidirect product of an étale group scheme and a connected group scheme.
Proof. See for example [Wat, §6.8]. The étale part is $\pi_0(G)$ and the connected part is $G^0$. $\square$
Theorem. If $G$ is a geometrically connected finite group scheme over a perfect field $k$ of characteristic $p > 0$, then $\Gamma(G,\mathcal O_G)$ is isomorphic to $k[X_1,\ldots,X_n]/(X_1^{p^{e_1}},\ldots,X_n^{p^{e_n}})$ for some $e_1,\ldots, e_n \in \mathbf Z_{>0}$.
Proof. See for example [Wat, §14.4]. I don't know a quick summary of why this is supposed to be true (I would be interested if someone does), but the proof is not that hard. $\square$
Theorem. If $G$ is group scheme over a field of characteristic $0$, then $G$ is geometrically reduced.
Proof. See for example [Wat, §11.4]. $\square$
In particular, if the rank of $G$ is not divisible by $p$ (e.g. $p = 0$), then $G^0$ has to be trivial and $G$ is étale. $\square$
References.
[EGA IV$_4$] A. Grothendieck, Éléments de géométrie algébrique. IV: Étude locale des schémas et des morphismes de schémas (Quatrième partie).. Publ. Math., Inst. Hautes Étud. Sci. 32, p. 1-361 (1967). ZBL0153.22301.
[Wat] W.C. Waterhouse, Introduction to affine group schemes. Graduate Texts in Mathematics 66 (1979). ZBL0442.14017.
Your final note I do not fully understand. By your reduction steps we apply two base changes: first we take a prime $\mathfrak{q}_x \subset R$ and change base along $R \to R_{\mathfrak{q}_x} \to R_{\mathfrak{q}_x}/\mathfrak{q}_x=k(x)$ (we can now apply Lemma 1), and then change base to the algebraic closure via $k(x) \to \overline{k(x)}$ (by the flat descent argument, as you said). Set $k=k(x)$; we are now in the setting of the Theorem, i.e., set $G=G^0$ (only the connected component), $\Gamma(G,\mathcal O_G)= k[X_1,\ldots,X_n]/(X_1^{q^{e_1}},\ldots,X_n^{q^{e_n}})$, where $q$ is the characteristic of $k$. Obviously $q$ is prime to $p$, as $p \in R^*$ by assumption.
If $q>0$: the rank of $k[X_1,\ldots,X_n]/(X_1^{q^{e_1}},\ldots,X_n^{q^{e_n}})$ is a power of $q$ but also of $p$, a contradiction. That's fine. But what about the case $q=0$? Then the theorem isn't applicable...
@TimGrosskreutz: right, I forgot to mention the characteristic 0 situation. There everything is reduced, so a finite group scheme is always étale.
One remark: could you give a reference for the statement that, in the case where $k$ is algebraically closed, for a finite $k$-algebra, unramified is equivalent to (geometrically) reduced?
See for example Tags 00U3 and 030W (but the latter deals with separable transcendental extensions as well).
It is obvious that a finite locally free morphism is unramified if it is at every point (no need to appeal to EGA).
|
2025-03-21T14:48:29.835449
| 2020-02-11T19:35:05 |
352479
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"AHusain",
"Eric",
"https://mathoverflow.net/users/3664",
"https://mathoverflow.net/users/69850"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626284",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352479"
}
|
Stack Exchange
|
Positive definite kernels on categories
I'm wondering if there is any work on studying positive definite kernels on (the objects of a) category. By this I mean for a category $\mathcal{C}$, find a function
$$
K: Ob\mathcal{C} \times Ob\mathcal{C} \rightarrow \mathbb{R}
$$
such that for any finite subset $\{ A_i \}_{i \in I} \subset Ob\mathcal{C}$, the matrix $K[I] = [K(A_i, A_j)]_{i, j \in I}$ is positive definite over the reals. Searching the internet for this has been tricky because of the existing notion of a kernel in category theory.
I think this would be interesting because we could then construct the reproducing kernel Hilbert space (RKHS) $H$, and essentially be able to think of the objects of $\mathcal{C}$ as elements of $H$.
Graph kernels are an area of study in machine learning, so there is some precedent for this kind of thing.
I know there is nothing stopping me from studying such a thing, but I didn't know if there was any literature on the subject. Or is there maybe a theorem showing that any such $K$ are trivial, or none exist for certain classes of categories?
For an example, I think we can form a positive definite kernel if we have a category $\mathcal{C}$ with a finite (and nonzero) number of objects. For $A, B \in Ob\mathcal{C}$, set
$$
K(A, B) = \max\{ \#Ob\mathcal{C'}/\#Ob\mathcal{C} \mid \mathcal{C'} \text{ is a full subcategory of } \mathcal{C} \text{ containing } A \text{ and } B \text{ s.t. } Hom_{\mathcal{C'}}(A, -) \simeq Hom_{\mathcal{C'}}(B, -) \}
$$
where the $\simeq$ here denotes a natural isomorphism between functors. Then we have that $0 \leq K(A, B) \leq 1 \hspace{3pt} \forall A, B \in Ob\mathcal{C}$, and $K(A, B) = 1 \iff A \simeq B$ in $\mathcal{C}$. Thus if we have a subset $\{ A_i \}_{i \in I} \subset Ob\mathcal{C}$, then $K[I]$ is a real-valued symmetric matrix, and thus has real eigenvalues. For a fixed $i \in I$, we have that $0 \leq \sum_{j \neq i} |K(A_i, A_j)| \leq \#I - 1$. Then by the Gershgorin circle theorem, the eigenvalues of $K[I]$ lie between $2 - \#I$ and $\#I$. Then define the function
$$
\tilde{K}(A, B) = \begin{cases}
\#Ob\mathcal{C} \text{ if } A \simeq B \\
K(A, B) \text{ otherwise}.
\end{cases}
$$
The eigenvalues of $\tilde{K}[I]$ are (again by Gershgorin circle theorem) between $\#Ob\mathcal{C} - \#I + 1$ and $\#Ob\mathcal{C} + \#I -1$, making it a positive definite matrix, and $\tilde{K}$ a positive definite kernel.
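Here is a small numerical illustration of the diagonal-boosting step (my own, detached from any actual category; sizes and entries are arbitrary): with off-diagonal entries below $1$, replacing the diagonal by $\#Ob\mathcal{C} \geq \#I$ forces positive definiteness via Gershgorin, even when the unboosted matrix is indefinite.
```python
# Gershgorin diagonal-boosting illustration with a random symmetric "kernel"
# matrix; num_objects plays the role of #Ob(C) and I the size of {A_i}.
import numpy as np

rng = np.random.default_rng(0)
num_objects, I = 20, 8

K = rng.uniform(0.0, 1.0, size=(I, I))
K = (K + K.T) / 2                         # symmetric, off-diagonal entries in [0, 1)
np.fill_diagonal(K, 1.0)                  # K(A, A) = 1
print("min eigenvalue of K:      ", np.linalg.eigvalsh(K).min())   # may well be < 0

K_tilde = K.copy()
np.fill_diagonal(K_tilde, num_objects)    # the boosted kernel \tilde{K}
# Gershgorin: every eigenvalue is >= num_objects - (I - 1) > 0.
print("min eigenvalue of K_tilde:", np.linalg.eigvalsh(K_tilde).min())
assert np.linalg.eigvalsh(K_tilde).min() > 0
```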
The function $\tilde{K}$ is contrived and probably extremely difficult to compute. But are there other similar constructions? Or assumptions/structures on the category $\mathcal{C}$ that allow more practical kernels (e.g. Abelian, monoidal, enriched)? Or is the study of kernels on categories better handled as a case by case scenario?
$K$ doesn't even have to use any of the data of the category other than the objects? I can give any positive definite kernel on a set of the right cardinality?
Hmm, that's a good point. I guess ideally I'd want $K$ to use the morphisms in some way. The entry $K(A, B)$ is usually quantifying some sort of notion of similarity between $A$ and $B$. I'm not sure what constraints to impose in general though.
|
2025-03-21T14:48:29.835676
| 2020-02-13T11:55:36 |
352613
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Deane Yang",
"Mateusz Kwaśnicki",
"https://mathoverflow.net/users/108637",
"https://mathoverflow.net/users/143284",
"https://mathoverflow.net/users/613",
"trisct"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626285",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352613"
}
|
Stack Exchange
|
How can I prove this Weitzenbock formula
Update: It is almost sure that the expression of $\kappa$ in coordinates given in the book, namely
$$\kappa(u_t)=h_{\alpha\beta}\frac{\partial u_t^\alpha}{\partial x^i}\frac{\partial u_t^\beta}{\partial x^j}$$
is a typo. I have edited the question and given my version of this below.
In the book Variational Problems in Geometry by Seiki Nishikawa, two Weitzenbock formulae are given (Proposition 4.2, page 124):
Let $u\in C^\infty(M\times(-\delta,\delta),N)$, where $M,N$ are Riemannian manifolds, be a solution to the equation $\frac{\partial u}{\partial t}=\tau(u_t)$, where $\tau(u_t)$ is the tension field of $u_t(-):=u(-,t):M\to N$. Then we have
$$\frac{\partial e(u_t)}{\partial t}=\Delta e(u_t)-|\nabla\nabla u_t|^2-\sum_{i=1}^m\left\langle du_t\left(\sum_{j=1}^mRic^M(e_i,e_j)e_j\right),du_t(e_i)\right\rangle\\
+\sum_{i,j=1}^mR^N(du_t(e_i),du_t(e_j),du_t(e_j),du_t(e_i))$$
$$\frac{\partial\kappa(u_t)}{\partial t}=\Delta\kappa(u_t)-\left|\nabla\frac{\partial u_t}{\partial t}\right|^2+\sum_{i=1}^mR^N\left(du_t(e_i),\frac{\partial u_t}{\partial t},\frac{\partial u_t}{\partial t},du_t(e_i)\right)$$
where $Ric^M$ is the Ricci tensor on $M$, $R^N$ the curvature tensor on $N$, $\Delta$ the Laplace operator on $M$, $e_i$ an orthonormal basis of tangent spaces of $M$, and $\left\langle,\right\rangle$ is the inner product on the tangent spaces of $N$.
The quantities $e(u_t)$, $\kappa(u_t)$ are defined on page 123 of the book
$$e(u_t)=\frac{1}{2}|du_t|^2=\frac{1}{2}g^{ij}h_{\alpha\beta}\frac{\partial u_t^\alpha}{\partial x^i}\frac{\partial u_t^\beta}{\partial x^j}$$
$$\kappa(u_t)=\frac{1}{2}\left|\frac{\partial u_t}{\partial t}\right|^2=\frac{1}{2}g^{ij}u_{t;ij}^\alpha g^{rs}u_{t;rs}^\beta h_{\alpha\beta}$$
where $g_{ij}$, $h_{\alpha\beta}$ are the components of the metric tensor on $M$, $N$ respectively, and a semicolon together with subscripts denotes covariant derivatives. Note that the coordinate representation of $\kappa(u_t)$ is not given as a definition, but a result of calculation under the assumption that
$$\frac{\partial u_t}{\partial t}=\tau(u_t)=g^{ij}u_{t;ij}^\alpha\frac{\partial}{\partial y^\alpha}$$ holds.
Question:
The formula about $e(u_t)$ has a complete proof in the book. The book says the second one can be proven in a manner similar to the first one. However, the original coordinate representation of $\kappa(u_t)$ given in the book must be wrong (see the top of this post), since the RHS has indices $i,j$ while the LHS does not. I have given my version of $\kappa(u_t)$ in coordinates in the question, which looks nothing like the expression of $e(u_t)$ in coordinates. And I now really don't know how to prove the second Weitzenbock formula "in a similar manner to the first one". Can you provide a sketch of the proof of the second formula? A reference is also okay.
I have not looked at this carefully, but if you believe the coordinate-free definition of $\kappa$, then isn't the correct version in coordinates $$ \frac{1}{2}h_{\alpha\beta}\partial_tu^\alpha\partial_tu^\beta? $$ That's in fact consistent with the second Weitzenbock formula.
@DeaneYang I think so too, the book may have made a typo while writing down the coordinate representation of $\kappa$. But this is not the main problem. I am struggling to prove the second Weitzenbock formula, even after fixing the proof. What do you mean by "consistent with", are you saying there is an easy way to see why the second formula holds?
$\newcommand{\pa}{\partial}$Edit: The answer is now LaTeXified.
I happen to have written notes on this. :)
\begin{aligned}
\frac{\pa\kappa(u_t)}{\pa t}&=h_{\alpha\beta}(u_t)\frac{\pa^2 u_t^\alpha}{\pa t^2}\frac{\pa u_t^\beta}{\pa t}\\&=\Big\langle\frac{\pa^2u_t}{\pa t^2},\frac{\pa u_t}{\pa t}\Big\rangle.
\end{aligned}
By Ricci's identity, $$\nabla_i\nabla_t\nabla_ju_t^\alpha-\nabla_t\nabla_i\nabla_ju_t^\alpha=R^{\alpha}_{\beta\gamma\delta}(u_t)\frac{\pa u_t^\beta}{\pa x^i}\frac{\pa u_t^\gamma}{\pa t}\frac{\pa u_t^\delta}{\pa x^j}.$$ Thus \begin{aligned}
\Delta_g\kappa(u_t)&=g^{k\ell}\nabla_k\nabla_\ell\kappa(u_t)\\&=g^{k\ell}\nabla_k\Big(h_{\alpha\beta}(u_t)\nabla_\ell\nabla_tu_t^\alpha\nabla_t u_t^\beta\Big)\\&=g^{k\ell}h_{\alpha\beta}(u_t)\bigg(\nabla_k\nabla_\ell\nabla_tu_t^\alpha\frac{\pa u_t^\beta}{\pa t}+\nabla_\ell\frac{\pa u_t^\alpha}{\pa t}\nabla_k\frac{\pa u_t^\beta}{\pa t}\bigg)\\&=g^{k\ell}h_{\alpha\beta}(u_t)\nabla_k\nabla_t\nabla_\ell u_t^\alpha\frac{\pa u_t^\beta}{\pa t}+\Big|\nabla\frac{\pa u_t}{\pa t}\Big|^2\\&=g^{k\ell}h_{\alpha\beta}(u_t)\nabla_t\nabla_k\nabla_\ell u_t^\alpha\frac{\pa u_t^\beta}{\pa t}\\&\phantom{{}={}}+g^{k\ell}h_{\alpha\epsilon}(u_t)R^{\alpha}_{\beta\gamma\delta}(u_t)\frac{\pa u_t^\beta}{\pa x^k}\frac{\pa u_t^\gamma}{\pa t}\frac{\pa u_t^\delta}{\pa x^\ell}\frac{\pa u_t^\epsilon}{\pa t}+\Big|\nabla\frac{\pa u_t}{\pa t}\Big|^2\\&=h_{\alpha\beta}(u_t)\nabla_t\big(g^{k\ell}\nabla_k\nabla_\ell u_t^\alpha\big)\frac{\pa u_t^\beta}{\pa t}\\&\phantom{{}={}}-g^{k\ell}R_{\beta\gamma\epsilon\delta}(u_t)\frac{\pa u_t^\beta}{\pa x^k}\frac{\pa u_t^\gamma}{\pa t}\frac{\pa u_t^\epsilon}{\pa t}\frac{\pa u_t^\delta}{\pa x^\ell}+\Big|\nabla\frac{\pa u_t}{\pa t}\Big|^2.
\end{aligned} This gives the second formula.
Can you change the image into LaTeX? Doing so will enable search engines to index your answer properly.
|
2025-03-21T14:48:29.836228
| 2020-02-13T12:03:27 |
352615
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"LSpice",
"Martin Ortiz",
"efs",
"https://mathoverflow.net/users/109085",
"https://mathoverflow.net/users/142589",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/84768",
"reuns"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626286",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352615"
}
|
Stack Exchange
|
Analytic continuation of $f(x)=\sqrt{\frac{1-x}{1-x^p}}$ over the p-adics
Consider the power series $f(x)=\sqrt{\frac{1-x}{1-x^p}}$ over the algebraic closure of $\mathbb{Q}_p$, defined by $f(0)=1$.
What can be said about an analytic continuation "in the form of Mittag-Leffler"? By that I mean that $f(x)$ can be expressed as
$f(x)=g(x)+\sum_{n=0}^{\infty} \frac{a_n(x)}{h(x)^n}$, where $g(x)$ converges in a disk greater than the unit ball, $h(x)$ is a polynomial of norm $\le 1$, and the $a_n(x)$ are polynomials satisfying $\text{ord}_p(a_n)\ge rn-m$ for some $r>0$ and a constant $m \in \mathbb{R}$. Here the order of a polynomial is the minimum order of its coefficients.
I am particularly interested in the values $f(\lambda)$, for Teichmüller representatives $\lambda$, which satisfy $\lambda^q=\lambda$ for some $q=p^r$. Thus I would like $h(x)$ not to have a zero at the Teichmüller representatives. Apparently the values at the Teichmüller representatives are related to classical results on Gauss sums, but I couldn't find anything in the literature.
By "analytic continuation", you mean that the power series converges also outside the unit ball, or analytic in the sense of Krasner?
@EFinat-S, I have clarified what I mean by analytic continuation, I think it corresponds to the latter.
Ok, that makes an important difference. I'll delete my answer.
Sorry for another question: is the condition $\operatorname{ord}_p(a_n) \ge r n$ a condition on some coefficient of $a_n(x)$, all coefficients of $a_n(x)$, some value of $a_n(x)$, or all values of $a_n(x)$, or is it something else?
I don't know if this will help you, but the book "Analytic elements in $p$-adic analysis" by Alain Escassut has a chapter called "Composition of analytic elements". Perhaps this is useful since you are composing an analytic function with a rational function.
Your question doesn't make sense because $f(\zeta_p+p x)$ is not meromorphic at $0$
@reuns the sum is an infinite sum of rational functions, so I don't see where is the problem.
|
2025-03-21T14:48:29.836393
| 2020-02-13T12:20:13 |
352617
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"5th decile",
"Mateusz Kwaśnicki",
"https://mathoverflow.net/users/108637",
"https://mathoverflow.net/users/33927"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626287",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352617"
}
|
Stack Exchange
|
“Chapman-Kolmogorov”-convolution vs. smoothness
Let $K:\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be a so-called "integral-kernel": we certainly require $K(x,.)$ and $K(.,y)$ to be Lebesgue measurable for almost all $x,y \in \mathbb{R}^n$. An integral kernel may fail to be translation-invariant, meaning that the equation $K(x,y)=K(x+\Delta,y+\Delta)$ fails on a set of positive Lebesgue measure. A natural way to define the self-convolution $K*K$ of $K$ (with itself) is
$$K*K: \mathbb{R}^n\times \mathbb{R}^n \to \mathbb{R}\cup\{+\infty\}:\\(x_1,x_2) \mapsto \int_{\mathbb{R}^n} dy\,K(x_1,y)K(y,x_2)\text{ if well-defined, otherwise }+\infty.$$
Is it not true that this type of self-convolution inherits some of the smoothing properties of ordinary convolution? 'Rephrasing' the question towards a concrete application: can one not prove the Weyl lemma/hypo-ellipticity by merely analysing the repercussions of the Chapman-Kolmogorov equation (associated to a given time-homogeneous Markov process)?
Extra notes/context:
Intuitively, there's still the picture that this extended form of convolution is about blurring and since blurring and smoothing are nearly synonymous, one would a priori expect affirmative answers to my question.
If $K(x,.)\geq 0$ and $\|K(x,.)\|_1=1$ for all $x \in \mathbb{R}^n$, then $(K*K)(x,y)$ is obviously finite for a.e. $y\in \mathbb{R}^n$, $(K*K)(x,.)\geq 0$ and $\|(K*K)(x,.)\|_1=1$.
If $K$ is translation-invariant, then my question essentially boils down to asking for smoothing-properties of ordinary convolution, which are well-documented and well-known.
As for the application in Markov-processes, self-convolution occurs when writing the following instance of the Chapman-Kolmogorov equation of a time-homogeneous Markov-process:
$$p(x_1|x_2,2t)=\int_{\mathbb{R}^n}dy\,p(x_1|y,t)p(y|x_2,t)$$
One could generalize the question to distributional $K$ and/or $K$ with co-domain $\mathbb{R}^{d\times d}$ (defining $(K * K)_{ij}(x_1,x_2)=\int dy \,\sum_{m=1}^dK_{im}(x_1,y)K_{m j}(y,x_2)$). In this extended setting, I can think of obvious counterexamples (i.e. examples where $K*K$ is not "smoother" than $K$).
This question has a counterpart on MSE, but as so often these days the indifference on that platform is biting. On the other hand, I hope that my question is sufficiently research-level to be acceptable for this venue.
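As a concrete illustration of the definition above (my own toy discretization, not part of the question): for the Gaussian transition kernel of Brownian motion, the self-convolution should reproduce the kernel at doubled time, which is exactly the Chapman-Kolmogorov instance quoted in the notes.
```python
# Discretized check that (K_t * K_t)(x1, x2) ≈ K_{2t}(x1, x2) for the
# Brownian transition density; grid, step and test points are arbitrary.
import numpy as np

def K(x1, x2, t):
    """Transition density of standard Brownian motion: Gaussian with mean x1, variance t."""
    return np.exp(-(x2 - x1) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

t = 0.5
y = np.linspace(-10.0, 10.0, 2001)        # integration grid
dy = y[1] - y[0]

x1, x2 = 0.3, -1.2
composed = np.sum(K(x1, y, t) * K(y, x2, t)) * dy   # ∫ K(x1,y) K(y,x2) dy
print(composed, K(x1, x2, 2 * t))                   # the two values nearly agree
```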
This is a delicate issue: such kernels can well decrease regularity. For example, if $K(x, y) = k_1(x) k_2(y)$, then $K f(x) = \int K(x,y) f(y) dy$ is exactly as smooth as $k_1$, no matter how smooth $f$ is. Do you have a specific class of kernels $K$ in mind?
@Mateusz: fair point... On first sight it strikes me that your counterexample violates the property $\forall x \in \mathbb{R}^n:|K(x,.)|_1=1$. Could such a constraint make it harder for you to come up with counterexamples?
Not really: just set $K(x,y) = k_1(x)k_2(y) + (1-k_1(x))k_3(y)$, where $0<k_1(x)<1$ and $k_2, k_3$ are arbitrary probability density functions.
Okay, your example is also a positive kernel... Well, you mentioned that it's a delicate issue. So, you're aware of already-existing discussions, debates, treatments in the literature? Note that I tagged my question with the "reference-request"-label: you could write an answer outlining these counterexamples and giving literature references?
I am aware of a large number of papers where people prove some regularity (say: Hölder regularity) of heat kernels for various operators, which are likely not smooth. I do not know if anyone was interested in proving rigorously how irregular these heat kernels are, though. Transition probabilities of any non-Feller (but still Markov) process should also work as a counter-example; many such processes are known (for example the usual Brownian motion reflected at $0$, but running in all of $\mathbb{R}$). It is difficult to give references without knowing what exactly you are looking for.
It doesn't surprise me that (in contrast to ordinary convolution) you can't go beyond proving (Hölder-)continuity in this way. On the other hand, I must have been deluded in thinking that my Kolmogorov-Chapman-convolution would be universally smoothing. That is: I didn't think of a more specific set of assumptions. What if you assume a uniform-in-$x$ bound on the second moment $\int dy\,K(x,y)(x-y)^2$? Does that accomplish something?
Or what about imposing continuity-in-$x$ of $\int dy\,K(x,y)\chi_{y\in O}$ for all open $O$?
Again: I do not really know what kind of kernels you have in mind, but there are at least two standard notions. A kernel is Feller if it maps $C_0(\mathbb{R}^n)$ into itself (so in some sense at least it does not break continuity). A kernel is strong Feller if it maps bounded Borel functions into continuous ones (so it provides some minimal smoothing). Both are well-studied, and both (particularly the former one) have various variants. None of them imply any further regularity of iterated compositions of $K$ without (severe) further restrictions.
|
2025-03-21T14:48:29.836711
| 2020-02-13T12:24:59 |
352618
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dirk Werner",
"F. Carbon",
"https://mathoverflow.net/users/127871",
"https://mathoverflow.net/users/131781",
"https://mathoverflow.net/users/49733",
"user131781"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626288",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352618"
}
|
Stack Exchange
|
type and cotype of spaces of continuous functions
I've recently read about the notion of (Rademacher) type and cotype of a Banach space in some article. In the books I checked afterwards, typical examples studied were $L^p$-spaces or the Schatten classes but nothing was said about spaces of continuous functions. As these are arguably one of the most important examples of Banach spaces, I wonder why this is.
So here is my question: Is anything known about the type and/or cotype of the Banach space $C(K)$ with suitable $K$?
I'm particularly interested in the case where $K$ is a compact interval on the real line.
It is known that $C(K)$, for infinite $K$, contains a copy of $c_0$, hence it does not have nontrivial type (meaning $>1$) or nontrivial cotype (meaning $<\infty$).
@DirkWerner Thank you very much for this comment, so this means I could not find it in the books as the answer is trivial (to somebody who knows more functional analysis than I do...). If you want you could extend your comment to an answer and I will accept it.
This is just an addendum to Dirk Werner’s definitive answer but it might add useful information: every Banach space is isometrically isomorphic to a subspace of a $C(K)$-space (even $C([0,1])$ if separable) so there is no point in examining special properties of the latter if they are inherited by subspaces.
@user131781 this is a very useful comment, thank you very much!
It is known that $C(K)$, for infinite $K$, contains a copy of $c_0$, hence it does not have nontrivial type (meaning $>1$) or nontrivial cotype (meaning $<\infty$).
|
2025-03-21T14:48:29.836831
| 2020-02-13T12:59:16 |
352620
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Hans",
"Math_Y",
"https://mathoverflow.net/users/128639",
"https://mathoverflow.net/users/35593",
"https://mathoverflow.net/users/68835",
"user35593"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626289",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352620"
}
|
Stack Exchange
|
Minimax optimization of diagonal entries of function of matrix
Let $\mathbf{A}$ and $\mathbf{U}$ be arbitrary complex $M\times N$ and $N\times M$ matrices, respectively. Let the superscripts $(\cdot)^{\dagger}$ and $(\cdot)^{\mathrm{H}}$ denote the pseudo-inverse and conjugate-transpose operations, respectively. Then, is there any closed-form solution for the following minimax optimization problem:
$$\min_{\mathbf{U}}\max_n [(\mathbf{A}^{\dagger}+\mathbf{B}\mathbf{U})(\mathbf{A}^{\dagger}+\mathbf{B}\mathbf{U})^{\mathrm{H}}]_{n,n},$$
where $\mathbf{B}=\mathbf{I}-\mathbf{A}^{\dagger}\mathbf{A}$, $[\cdot]_{n,n}$ denotes the $n$th diagonal entry of a matrix, and $\mathbf{I}$ is the identity matrix.
Is there any proposal for choosing $\mathbf{U}$ such that
$$\max_n [(\mathbf{A}^{\dagger}+\mathbf{B}\mathbf{U})(\mathbf{A}^{\dagger}+\mathbf{B}\mathbf{U})^{\mathrm{H}}]_{n,n}<\max_n [\mathbf{A}^{\dagger}(\mathbf{A}^{\dagger})^{\mathrm{H}}]_{n,n}.$$
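As a rough numerical sketch of the objective (my own setup; sizes, seed and scaling are arbitrary, and this is only an evaluation, not a solver), one can compare $\mathbf{U}=\mathbf{0}$ against random perturbations:
```python
# Evaluate max_n [(A^+ + B U)(A^+ + B U)^H]_{n,n} for a random complex A,
# comparing U = 0 with a few random choices of U.
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 6
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
A_pinv = np.linalg.pinv(A)                    # N x M pseudo-inverse
B = np.eye(N) - A_pinv @ A                    # projector onto the kernel of A

def objective(U):
    C = A_pinv + B @ U                        # N x M
    return np.max(np.real(np.diag(C @ C.conj().T)))

print("U = 0   :", objective(np.zeros((N, M), dtype=complex)))
for _ in range(3):
    U = 0.1 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
    print("random U:", objective(U))
```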
Maybe it would be helpful to include some motivation for this problem?
is $U$ $N\times N$? Otherwise there seems to be a problem.
$U=0$ seems to work.
Sorry I meant $M \times N$
I corrected the problems pointed out.
|
2025-03-21T14:48:29.837050
| 2020-02-13T13:18:58 |
352621
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alexandre",
"Ron P",
"https://mathoverflow.net/users/111000",
"https://mathoverflow.net/users/85550"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626290",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352621"
}
|
Stack Exchange
|
Limit of a ratio of harmonic numbers?
Is there any way to find the following limit
$$R(n,m)=\lim_{N\to\infty}\frac{H_{nN,m}}{H_{N,m}}$$
which involves harmonic numbers (generalized if $m\neq 1$)
$$H_{N,m}=\sum_{k=1}^N k^{-m}\qquad ?$$
I am more specifically looking for a convenient way to compute it numerically for $m<1$ (if it converges to something other than 1, of course).
From numerical experiments in Mathematica for $m \leq 1$, I can guess
$$R(n,m)=n^{1-m} \quad .$$
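For what it's worth, the guess is easy to probe numerically in Python as well (my own quick sketch; the values of $N$, $n$, $m$ below are arbitrary, and convergence is slow for $m$ close to $1$):
```python
# Compare H_{nN,m} / H_{N,m} with n^(1-m) for a large N.
import numpy as np

def H(N, m):
    k = np.arange(1, N + 1, dtype=float)
    return np.sum(k ** (-m))

N = 10**6
for n in (2, 3, 5):
    for m in (-0.5, 0.0, 0.5):
        print(n, m, H(n * N, m) / H(N, m), n ** (1 - m))
```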
Did you try replacing the sums with integrals?
No. Do you know an integral formula for the generalized harmonic number?
Suppose first that $0\le m<1$. Then, using the inequality $k^{-m}\ge\int_k^{k+1} x^{-m}\,dx$ for $k>0$, we have
$$H_{N,m}\ge\int_1^{N+1}x^{-m}\,dx=\frac{(N+1)^{1-m}-1}{1-m}\sim\frac{N^{1-m}}{1-m}\tag{1}$$
(as $N\to\infty$). Similarly, using the inequality $k^{-m}\le\int_{k-1}^k x^{-m}\,dx$ for $k>1$, we have
$$H_{N,m}\le1+\int_1^{N}x^{-m}\,dx=1+\frac{N^{1-m}-1}{1-m}\sim\frac{N^{1-m}}{1-m}.\tag{2}$$
So, here $H_{N,m}\sim\frac{N^{1-m}}{1-m}$. Similarly, $H_{nN,m}\sim\frac{(nN)^{1-m}}{1-m}$. Thus, we confirm that $H_{nN,m}/H_{N,m}\sim n^{1-m}$, if $0\le m<1$.
The case $m<0$ is similar, now with the inequalities in (1) and (2) going in the opposite direction.
Finally, for $m=1$ we have $H_{N,m}\sim\ln N$, whence $H_{nN,m}/H_{N,m}\to1= n^{1-m}$.
|
2025-03-21T14:48:29.837152
| 2020-02-13T13:43:10 |
352623
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626291",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352623"
}
|
Stack Exchange
|
Entropy spectrum is not concave
Let $T:[0, 1]\rightarrow [0, 1]$ be the map defined by $T(x)=4x(1-x)$. For any $\alpha \in \mathbb{R}$, we define the level set as follows:
$$F(\alpha)=\{x\in [0,1]: \lim_{n\rightarrow \infty}\frac{1}{n}\log |(T^{n})^{'}(x)|=\alpha\}.$$
Is $\alpha \mapsto h_{top}(F(\alpha))$ not concave? Here $h_{top}(\cdot)$ is defined in this paper.
I do not know how to prove it.
The only values of $\alpha$ for which $F(\alpha)$ is non-empty are $-\infty$ and $\log 2$. This is because the map is conjugate (by the conjugacy $H\colon x\mapsto \sin^2(\pi x/2)$) to the full tent map $S$ (that is, $T\circ H=H\circ S$).
In particular, $T^n\circ H=H\circ S^n$, so that for any $x$, $|(T^n)'(H(x))|H'(x)=H'(S^nx)\cdot 2^n$, so that for any $x$ away from the endpoints,
$$
|(T^n)'(x)|=2^n\frac {H'(S^n(H^{-1}x))}{H'(H^{-1}x)}.
$$
If $x$ is any element of the co-countable set of points whose orbit does not hit the point 0, then the limit superior of $|(T^n)'(x)|^{1/n}$ is $2$.
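A quick floating-point experiment (my own addition, not part of the answer) is consistent with this: for a generic starting point, the Birkhoff averages of $\log|T'|$ settle near $\log 2$. The guard below is only there because finite-precision orbits of the full logistic map can occasionally collapse onto a fixed point.
```python
# Estimate lim (1/n) log |(T^n)'(x)| for T(x) = 4x(1-x) along one orbit.
import math, random

random.seed(0)
x = random.random()
total, steps = 0.0, 0
for _ in range(10**6):
    total += math.log(abs(4 - 8 * x))     # log |T'(x)|, since T'(x) = 4 - 8x
    steps += 1
    x = 4 * x * (1 - x)
    if x in (0.0, 1.0):                   # guard against floating-point collapse
        break
print(total / steps, math.log(2))
```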
|
2025-03-21T14:48:29.837242
| 2020-02-13T13:55:21 |
352624
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626292",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352624"
}
|
Stack Exchange
|
powers of linear forms in projections of complete intersections in codimension 3
Let $I\subset \mathbb{C}[x_0,x_1,x_2]=:A$ be a complete intersection, $I=(p_1,p_2,p_3)$, with the $p_i$ homogeneous, all of the same degree $d$ for some $d>2$.
Let $l$ be a general linear form and let $J$ be the image of $I$ in $A/(l)$. Then $J$ is still minimally generated
by three elements of degree $d$ by Green's hyperplane restriction theorem.
Is it true that if $I$ contains no $d$-th power of a linear form, then the same holds for $J$ (in $A/(l)$)?
|
2025-03-21T14:48:29.837307
| 2020-02-13T14:24:26 |
352626
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"LSpice",
"Sasha",
"https://mathoverflow.net/users/2095",
"https://mathoverflow.net/users/2383"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626293",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352626"
}
|
Stack Exchange
|
Reference request - Weyl's integration formula
Is there a reference discussing in an organized way (with a proof) the Weyl integration formula for a reductive group over a local field (Archimedean or not), expressing the Haar integral on the group as a sum over Levi's of integrals over the elliptic elements in the Levi of orbital-like integrals?
Thank you!
Sasha
Section 7 of Kottwitz's article in the 2003 Clay proceedings here has what you are looking for.
Thank you very much! Looks like a good reference.
I had no idea that those proceedings were online! Thanks!
|
2025-03-21T14:48:29.837379
| 2020-02-13T15:36:42 |
352632
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Michael Albanese",
"Oscar Randal-Williams",
"Paweł Czyż",
"RBega2",
"Ryan Budney",
"https://mathoverflow.net/users/123144",
"https://mathoverflow.net/users/127803",
"https://mathoverflow.net/users/1465",
"https://mathoverflow.net/users/21564",
"https://mathoverflow.net/users/318"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626294",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352632"
}
|
Stack Exchange
|
Embeddings of exotic spheres
Let $\{S^n_u \mid u\in \text{some indexing set} \}$ be the set of all non-diffeomorphic $n$-spheres. Define $k_u$ to be the smallest number such that there exists a smooth embedding $S^n_u\hookrightarrow \mathbb R^{k_u}$.
I am looking for the distribution of $k_u$ depending on the dimension $n$. Surely there are obvious bounds $$n+1\le \inf k_u \le \sup k_u \le 2n$$
but I'm looking for something more precise. (For trivial cases in which $n$-dimensional exotic spheres don't exist we simply have $k_u=n+1$, but I can't go any further).
For what it's worth, my (extremely limited) understanding is that if an exotic 4-sphere exists it can be embedded in $\mathbb{R}^5$ (see https://arxiv.org/pdf/1208.5988.pdf).
Using embedding calculus you can argue $k_u \leq n+3$. Presumably there are better estimates known -- I would look back to old papers of Haefliger and Hirsch on embeddings of highly-connected manifolds.
Thanks! You shed some light on it.
According to Moishe Kohan's answer here, if $\Sigma$ is an exotic sphere of dimension $n \geq 7$, then it cannot embed into $\mathbb{R}^{n+1}$. So, provided an exotic sphere exists in dimension $n \neq 4$, then $\sup k_u \geq n + 2$ (note that there no exotic spheres in dimensions $n = 5, 6$).
@RyanBudney I'm pretty sure that is not true, see e.g. Theorem 5.2 of https://arxiv.org/pdf/2006.03109.pdf
@OscarRandal-Williams, thanks, that's a little unexpected (for me). Apologies Pawel, I thought the bound $k_u \leq n+3$ was easily deduced. Randal-Williams and Kupers have been thinking about this problem more carefully than I have. The reason behind my mistake looks to be an important one. I have been hoping to use embedding calculus to distinguish exotic spheres, although not in this way. This gives me a little more hope than I already had.
|
2025-03-21T14:48:29.837533
| 2020-02-13T15:49:38 |
352633
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Irddo",
"Sebastian",
"https://mathoverflow.net/users/4572",
"https://mathoverflow.net/users/74747"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626295",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352633"
}
|
Stack Exchange
|
Isometric immersions and metrics in the same conformal class
Let $\phi:\Sigma^2\to M^3$ be a conformal isometric immersion into a Riemannian 3-manifold $(M,g)$.
I would like to know what kind of information about the immersion is preserved when we replace $g$ by $e^f g$.
For example, since $g$ is in the same conformal class as $e^fg$, $\phi:\Sigma^2\to M^3$ is also a conformal isometric immersion into $(M,e^fg)$.
If we suppose that $\Sigma$ is an oriented Riemannian surface, is it possible to relate the normal vector field of $\phi(\Sigma)$ in $(M,g)$ and in $(M,e^f g)$?
If $\phi:\Sigma^2\to (M^3,g)$ has constant mean curvature, does $\phi:\Sigma^2\to (M^3,e^fg)$ have constant mean curvature too?
I appreciate any help and book recommendations.
In general, having constant mean curvature is not preserved. This can be seen by a direct computation, or by looking at examples: the round metric on the 3-sphere without a point is conformal to flat 3-space. The Clifford torus in the 3-sphere (even without a point) is minimal, hence of constant mean curvature, and embedded. After stereographic projection the Clifford torus is not of constant mean curvature, as you can prove by computation, or from Alexandrov's theorem: there are no compact embedded CMC surfaces in euclidean 3-space besides the round sphere.
The functional which is invariant under conformal change of the ambient metric is the Willmore functional (and not the area functional):
$$\mathcal W(\phi)=\int_\Sigma (H^2-K+\bar K )dA,$$
where $H$ is the mean curvature, $K$ is the curvature of the induced metric, $\bar K$ is the sectional curvature of the tangent plane of $\phi$, and $dA$ is the area form, all w.r.t. the metric $g.$ But in fact, the integrand $(H^2-K+\bar K )dA$ is invariant under conformal changes of the metric.
This was known to Sophie Germain, Blaschke and others for Moebius transformations, but first shown in the general case by "B.Y. Chen, Some conformal invariants of submanifolds and their applications, Bol. Un. Mat. Ital. (1974)"; see also Joel Weiner, On a problem of Chen, Willmore, et al., Indiana Math. Journal (78).
As a first consequence, being a Willmore surface, i.e., a critical point for the Willmore functional, is invariant under conformal changes of the metric. It is possible to define the Willmore functional for surfaces into the conformal 3-sphere without any reference to a Riemannian metric on the 3-sphere, see e.g. the work of Burstall, Pinkall and Pedit.
Big thanks for your comments. Do you have any information about the normal vector field? Because, when $\Sigma$ is oriented, we can suppose that the normals of the two immersions are the same, I guess.
The proof of the invariance of the Willmore functional under conformal changes of the ambient metric is based on the fact that the oriented normals of the surface with respect to two conformal metrics are scalings of each other.
|
2025-03-21T14:48:29.837723
| 2020-02-13T16:32:47 |
352637
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626296",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352637"
}
|
Stack Exchange
|
What is a suitable state feedback adaptive law in discrete time?
I am trying to find a discrete adaptive law and prove stability in an analogous way to a well-known case in continuous time. Let me give a rough explanation of the background.
In continuous time, let us consider a LTI system:
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = Ax + Bu(t)\label{1}\tag{1}
$$
where $x$ is a $n \times 1$ vector; $A$ is $n \times n$ matrix, whose elements are unknown; $B$ is a known $n \times m$ matrix; $u$ (control input) is a $m \times 1$ vector. If all states are accessible, and if a so-called matching condition exists,
it is possible to design the control input $u$ such as to force system \eqref{1} to behave like a desired reference model. Let this reference model be written as :
$$
\frac{\mathrm{d}x_m}{\mathrm{d}t} = A_mx_m + Br(t)\label{2}\tag{2}
$$
where dimensions of $x_m, A_m, r$ are similarly defined.
To force system \eqref{1} to behave like system \eqref{2}, we select $u$ as follows:
$$
u(t) = \Theta^*x(t) + r(t)\label{3}\tag{3}
$$
where $\Theta^*$ is an $m \times n$ matrix. It is easy to check that substituting \eqref{3} into \eqref{1} gives exactly \eqref{2} if $A_m = A + B\Theta^*$ (this is the so-called matching condition).
Since we don't know the entries of $A$ (that is the whole point of adaptive control), we do not know $\Theta^*$ exactly a priori. We only know it exists. It will be the job of the adaptive controller to compute this using an adaptive law. So it is more appropriate to re-write \eqref{3} as:
$$
u(t) = \big(\Theta^* + \Theta(t)\big)x(t) + r(t)\label{4}\tag{4}
$$
where $\Theta$ is an $m \times n$ matrix representing the error in estimating $\Theta^*$, which the adaptive controller should eventually force to 0.
A suitable adaptive law is:
$$
\frac{\mathrm{d}\Theta(t)}{\mathrm{d}t} = -B^T\!Pe(t)x(t)^T \label{5}\tag{5}
$$
where $e(t) = x(t) - x_m(t)$ is the $n \times 1$ error vector and $P$ is a $n \times n$ symmetric positive definite matrix (sometimes also called a Lyapunov matrix).
The adaptive law \eqref{5} can be shown to be valid by checking the stability of the whole system. A Lyapunov candidate is selected:
$$
V = e^T\!Pe + \mathrm{Trace}(\Theta^T\Theta)\label{6}\tag{6}
$$
It can be shown (I will not write the proof here for the sake of brevity) that $\mathrm{d}V/\mathrm{d}t < 0$, and with Barbalat's lemma one can conclude that the system consisting of \eqref{1}, \eqref{2}, \eqref{5}, and $e$ is asymptotically stable.
I am attempting a similar derivation in discrete time. Re-write the LTI system \eqref{1} as:
$$
x(k+1) = Ax(k) + Bu(k)\label{7}\tag{7}
$$
Let the reference model be:
$$
x_m(k+1) = A_mx_m(k) + B_mr(k)\label{8}\tag{8}
$$
where $A, B, A_m, B_m$ are the discretized matrices, with the same dimensions as before.
Let the input be chosen as:
$$
u(k) = (\Theta^* + \Theta(k))x(k) + r(k)\label{9}\tag{9}
$$
The error vector is:
$$
e(k) = x(k) - x_m(k)\label{10}\tag{10}
$$
$$
\implies e(k+1) = x(k+1) - x_m(k+1) = A_me(k) + B\Theta(k)x(k)\label{11}\tag{11}
$$
Before I pick the adaptive law, I want to point out something on the Lyapunov candidate function. I picked a function:
$$
V(k) = e^T(k)Pe(k) + \mathrm{Trace}\big(\Theta(k)^T\Theta(k)\big)\label{12}\tag{12}
$$
$$
\implies V(k+1) = e^T(k+1)Pe(k+1) + \mathrm{Trace}\big(\Theta(k+1)^T\Theta(k+1)\big)\label{13}\tag{13}
$$
The discrete 'derivative' of $V$ is:
$$
\begin{split}
\Delta V &= V(k+1)-V(k) \\
& = e^T(k+1)Pe(k+1) + \mathrm{Trace}\big(\Theta(k+1)^T\Theta(k+1)\big) \\
&\qquad - e^T(k)Pe(k) - \mathrm{Trace}\big(\Theta(k)^T\Theta(k)\big)
\end{split}\label{14}\tag{14}
$$
A few manipulations will lead to:
$$
\begin{split}
\Delta V & = e^T(k)(A_m^TPA_m - P)e(k) + 2e^T(k)A_m^TPB\Theta(k)x(k) \\
& \;\;+ x(k)^T\Theta^T(k)B^TPB\Theta(k)x(k) + \mathrm{Trace}(\Theta(k+1)^T\Theta(k+1) - \Theta(k)^T\Theta(k))
\end{split}\label{15}\tag{15}
$$
The goal is to show that $\Delta V < 0$. The first term in \eqref{15} is equivalent to $-e^T(k)Qe(k) < 0$ where $Q$ is Positive Definite. So the 1st term is good. So I must pick an adaptive law such that the rest of the terms also add up to $< 0$. Several tries later, the closest I was able to get to cancel those terms is with the adaptive law:
$$
\Theta(k+1) = \Theta(k) - B^TPA_me(k)x(k)^T\label{16}\tag{16}
$$
Substituting into \eqref{15}, and after a few manipulations, we get (I'll drop the $k$ since it adds no new information):
$$
\Delta V = -e^TQe + x^T\Theta^TB^TPB\Theta x + x^Txe^TA_m^TPBB^TPA_me\label{17}\tag{17}
$$
Alas, the gods of Lyapunov functions have not been kind to me because I am unable to eliminate the 2nd and 3rd terms in \eqref{17} above. For now I have run out of adaptive law candidates. Or maybe I picked a bad Lyapunov function?
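For what it's worth, here is the minimal numerical sanity check I run on candidate laws like \eqref{16} before attempting a stability proof. It is only an illustration, not a proof: the plant $A$, input matrix $B$ and reference model $A_m$ are arbitrary hypothetical choices satisfying the matching condition, and the extra adaptation gain $\gamma$ is my own knob, not part of the derivation (setting $\gamma = 1$ recovers \eqref{16} verbatim).
```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical illustrative plant/reference model (n = 2, m = 1), chosen so that
# the matching condition A_m = A + B*Theta^* holds exactly.
A  = np.array([[0.5, 0.1],
               [0.2, 0.9]])                  # "unknown" to the controller
B  = np.array([[0.0],
               [1.0]])
Theta_star = np.array([[-0.2, -0.5]])        # m x n, makes A_m below Schur stable
Am = A + B @ Theta_star                      # reference model, eigenvalues 0.5 and 0.4

# P solves the discrete Lyapunov equation A_m^T P A_m - P = -Q.
Q = np.eye(2)
P = solve_discrete_lyapunov(Am.T, Q)

x, xm = np.zeros((2, 1)), np.zeros((2, 1))
Theta = -Theta_star.copy()                   # start with zero effective feedback gain
gamma = 0.05                                 # extra gain; gamma = 1 is Eq. (16) verbatim

for k in range(5000):
    r = np.array([[np.sin(0.05 * k)]])       # bounded reference input
    e = x - xm
    u = (Theta_star + Theta) @ x + r         # Eq. (9)
    Theta = Theta - gamma * (B.T @ P @ Am @ e @ x.T)   # candidate law, Eq. (16)
    x  = A @ x + B @ u                       # Eq. (7)
    xm = Am @ xm + B @ r                     # Eq. (8) with B_m = B

print("final |e| =", float(np.linalg.norm(x - xm)),
      " final |Theta| =", float(np.linalg.norm(Theta)))
```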
Does anyone have experience with discrete time state feedback adaptive control? Any pointers to picking the adaptive law and/or Lyapunov function?
I've looked at Gang Tao's Adaptive Control Design and Analysis book. Chapter 4 deals with continuous time adaptive state feedback, but it falls short of the discrete analysis, instead going for discrete output feedback.
Thanks,
Ed.
|
2025-03-21T14:48:29.838014
| 2020-02-13T16:44:00 |
352639
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Johannes Ebert",
"Pierre PC",
"https://mathoverflow.net/users/129074",
"https://mathoverflow.net/users/9928"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626297",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352639"
}
|
Stack Exchange
|
Spin structure using flag manifolds instead of a Riemannian metric
Let $(M,g)$ be an oriented Riemannian manifold of dimension $n$, and denote by $P_{\mathrm{SO}}\to M$ its oriented frame bundle. The usual definition of a spin structure is the data of a principal $\mathrm{Spin}(n)$ bundle $P_{\mathrm{spin}}$ together with a bundle morphism $P_{\mathrm{spin}}\to P_{\mathrm{SO}}$ such that a certain diagram commutes (the actions of the Lie groups on the principal bundles must be compatible with the bundle morphism). We say that a (Riemannian oriented) manifold is spin if it admits such a spin structure.
Of course it is well-known that if a manifold is spin, then the spin structure is essentially unique, and that the existence is a topological property (vanishing of the second Stiefel–Whitney class). In particular, it does not involve the Riemannian structure, and many applications in index theory implicitly carry a ‘this is in fact independent of the Riemannian structure’ subtext.
Let $E\to M$ be the fibre bundle over $M$ such that $E_x$ is the complete oriented flag manifold of $T_xM$, i.e. a point in $E_x$ is an increasing sequence of oriented subspaces of dimensions 1 to $n$. The model fibre $F$ is the complete oriented flag manifold of $\mathbb R^n$. Then it is my understanding that $E$ is canonically isomorphic to $P_{\mathrm{SO}}$, at least as a fibre bundle. In the same way that $\mathrm{SO}(n)$ admits a unique connected double cover $\mathrm{Spin}(n)$, $F$ admits a unique connected double cover $F'$, and we can ask whether there exists a fibre bundle $E'\to M$ with model fibre $F'$ together with a bundle morphism $E'\to E$ such that $E'_x\to E_x$ is conjugate to $F'\to F$ for all $x\in M$.
The question of existence as it stands makes no reference to the Riemannian structure. If $(M,g)$ admits a spin structure, then of course forgetting the structure of principal bundle of $P_{\mathrm{SO}}\to M$ gives such an $E'$. I would bet that the existence of $E'$ ensures that the manifold is spin. In fact, I would take my chances saying that it gives, for any Riemannian metric on $M$, a canonical structure of $\mathrm{Spin}(n)$-bundle on $E'$ that makes $E'\to E$ into a spin structure (once $E$ is canonically identified with $P_{\mathrm{SO}}$).
Question
Is my understanding correct? In other words, is the loss of information due to the actions of $\mathrm{Spin}(n)$ and $\mathrm{SO}(n)$ irrelevant in purely topological terms?
Is this approach documented? Are there advantages to it, for instance does it smooth out some ‘this is in fact independent of the Riemannian structure’ arguments?
As far as I know, there is no natural group structure on $F$, and obviously we like group structures and principal bundles. In particular, many constructions of adjoint bundles are relevant to index theory. Short of being a principal bundle, is $E$ an adjoint bundle for some group, with an action that does not depend on the metric structure? In that case, we could maybe construct, say, spinor bundles as adjoint bundles with respect to this group.
You can instead formulate the spin condition topologically as follows. The group $GL_n(\mathbb{R})^+$ ( matrices of positive determinant) has a unique twofold cover $G$, and a topological spin structure on a vector bundle is a reduction of the structure group to $G$. This is not of much help for index theory, as the spin representation does not extend to $G$.
@JohannesEbert Ah, true, thank you! And in fact the flag bundle $E$ should be the adjoint bundle associated to the action of $\mathrm{GL}^+(n)$ on the flag manifold of $\mathbb R^n$, when considering the whole oriented frame bundle $P_{\mathrm{GL}^+}$. Just to be sure I understand your claim about the structure group, are you saying that the bundle is spin iff its representation as a Cech cocycle with values in $\mathrm{GL}^+(n)$ lifts to a cocycle with values in $G$? Am I right in thinking that this is the standard argument giving the obstruction class for the existence of a spin structure?
|
2025-03-21T14:48:29.838261
| 2020-02-13T18:29:20 |
352643
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626298",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352643"
}
|
Stack Exchange
|
Possible flaw in the proof of the Eells-Sampson theorem on harmonic maps in Nishikawa's book
Background:
I am reading the book Variational Problems in Geometry by Seiki Nishikawa. The main purpose of this book is to prove the existence of harmonic maps $M\to N$ between two compact Riemannian manifolds $M$ and $N$ with the target manifold being nonpositively curved. Since a harmonic map is defined as a minimizer for the energy functional $E$, the idea is to deform an existing map $u:M\to N$ along the gradient lines of $E$, which is equivalent to solving an equation:
$$\frac{\partial u_t}{\partial t}=\tau(u_t)\quad\quad\text{Eq.(1)}$$
where $u_t$ is a smooth variation of some given $u_0$, and $\tau$ denotes the tension field (along this direction, $E$ decreases fastest). By embedding $N$ into $\mathbb R^q$ using the Nash embedding theorem, we can regard $N$ as a submanifold and $u$ as a vector-valued function to simplify certain statements. After this embedding, the above equation takes a new form:
$$(\Delta-\partial_t)u_t=\Pi(u_t)\quad\quad\text{Eq.(2)}$$
I'll omit the definition of $\Pi(u_t)$. Just know that they are equivalent in the sense that their solutions coincide, if given identical initial conditions. The difference is:
Eq.(1) is intrinsic and is an equation in $\Gamma(u_t^{-1}TN)$;
Eq.(2) is an equation in $\mathbb R^q$.
Theorem:
One of the main theorems is the existence of a solution to any one of the equations above defined for $t\in[0,+\infty)$, given some $u_0$ as the initial condition. Suppose now we have already established:
local (w.r.t time) existence of a solution
certain estimates
The proof goes as follows:
Proof. (The part where there's no problem, so I'll give a sketch only) Since we have established the local existence, for a given $u_0$, we can find a smooth $u:M\times[0,T)\to N$ for some $T>0$ such that it solves (1) and (2) and $u(\cdot,0)=u_0$. Now suppose $T_0$ is the supremum of all such $T$. We wish to prove $T_0=+\infty$. If not, we take an increasing sequence $t_i\to T_0$. By certain estimates, we know that $\{u_{t_i}\}$ and $\{\partial_tu_{t_i}\}$ are uniformly bounded and equicontinuous, respectively in the Holder spaces $C^{2+\alpha}(M,\mathbb R^q)$ and $C^{\alpha}(M,\mathbb R^q)$, for some $0<\alpha<1$.
Proof. (The part where I am confused. I'll write exact words) By the Ascoli-Arzela theorem, there exists a subsequence $\{t_{i_k}\}$ of $\{t_i\}$ and functions$^1$
$$u(\cdot,T_0)\in C^{2+\alpha}(M,\mathbb R^q)\quad\text{and}\quad\partial_t u(\cdot,T_0)\in C^\alpha(M,\mathbb R^q)$$
such that the subsequences
$$\{u(\cdot,t_{i_k})\}\quad\text{and}\quad\{\partial_tu(\cdot,t_{i_k})\}$$
respectively, converge uniformly to $u(\cdot,T_0)$ and $\partial_tu(\cdot,T_0)$, as $t_{i_k}\to T_0$. Since for each $t_{i_k}$, we have
$$\partial_tu(\cdot,t_{i_k})=\tau(u(\cdot,t_{i_k}))\quad\quad\text{Eq.(3)}$$
we also get at$^2$ $T_0$
$$\partial_tu(\cdot,T_0)=\tau(u(\cdot,T_0))\quad\quad\text{Eq.(4)}$$
consequently$^3$, we see that (1) has a solution in $M\times[0,T_0]$. Using $u(\cdot,T_0)$ again as the initial condition to solve (1), we extend the solution to $M\times[0,T_0+\epsilon)$ for some $\epsilon>0$, contradicting $T_0$ being the supremum. Hence $T_0=\infty$.
Questions:
How do we know that the limits do not depend on the choice of $t_i$? I think it can be argued using the uniform boundedness and equicontinuity. Am I right?
How can we go from (3) to (4)? Eq. (3) is clearly obtained by (1), which is an equation in $\Gamma(u_t^{-1}TN)$. However, (3)$\implies$(4) would require the convergence of $\partial_tu(\cdot,t_{i_k})$ to $\partial_tu(\cdot,T_0)$. But this convergence is only in $C^{\alpha}(M,\mathbb R^q)$, not in $\Gamma(u_t^{-1}TN)$. Although I think I can fix this by writing them in the form of (2) to start with.
How can we conclude from (4) that (1) has a solution in $M\times[0,T_0]$. Note that $\partial_tu(\cdot,T_0)$ is so far only a notation for the limit of $\partial_tu(\cdot,t_{i_k})$, we have not proved that it is actually the (one-sided) derivative of $u$ at $t=T_0$. How can I fix this?
$\newcommand{\R}{\mathbb{R}}\newcommand{\pa}{\partial}$Edit: The answer is now LaTeXified.
Below are my notes on this. I reworked the proof:
Proof. Let $S:=\big\{T\in[0,\infty):$ the equation has a solution in $C^{2+\alpha,1+\alpha/2}(M\times[0,T],N)\big\}$. Let $T_0:=\sup S$. By existence of local solution, $T_0>0$. We claim that $T_0=\infty$.
Suppose $T_0<\infty$. By uniqueness of solution and definition of $T_0$, we have a solution $u\in C^{2,1}(M\times[0,T_0),N)$. Take $\alpha<\alpha'<1$. By the a priori estimate above, $u_t$ is uniformly bounded in $t$ in $C^{2+\alpha'}(M,\R^L)$.
Define $$u(x,T_0):=\int_0^{T_0}\pa_tu(x,t)\,dt+u(x,0).$$ For any sequence $t_k\nearrow T_0$, $(u_{t_k})_{k=1}^\infty$ has a subsequence that converges in $C^{2+\alpha}(M,N)$ by Arzelà–Ascoli, and its limit is necessarily $u_{T_0}$. Thus $u_{T_0}\in C^{2+\alpha}(M,N)$, and any such sequence must in fact converge to $u_{T_0}$ in $C^{2+\alpha}(M,N)$. In other words, $u\in C^{2+\alpha,0}(M\times[0,T_0],N)$, or equivalently, $t\mapsto u_t$ is continuous as a map $[0,T_0]\to C^{2+\alpha}(M,N)$.
Since $\pa_tu_t=\tau(u_t)$, we see that $t\mapsto\pa_tu_t$ has a continuous extension $[0,T_0]\to C^{\alpha/2}(M,\R^L)$. So in fact $u\in C^{2+\alpha,1+\alpha/2}(M\times[0,T_0],N)$, i.e., $T_0\in S$. Now, existence of local solution implies that $u$ can be extended to a solution on $[0,T_0+\varepsilon]$, which is a contradiction. $\square$
The a priori estimate refers to the one as given in the book: For $0<\alpha<1$, $$\sup_{t\in[0,T)}\Big(\big\|u_t\big\|_{C^{2+\alpha}(M,\R^L)}+\big\|\pa_tu_t\big\|_{C^\alpha(M,\R^L)}\Big)\leq C(M,N,f,\alpha).$$
|
2025-03-21T14:48:29.838601
| 2020-02-13T19:28:25 |
352645
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"C.Niculescu",
"Sasha",
"https://mathoverflow.net/users/111845",
"https://mathoverflow.net/users/2095"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626299",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352645"
}
|
Stack Exchange
|
Understanding the geometric fibre twisted differential operators
Let $\mathfrak{g}$ be a complex semisimple Lie algebra with triangular decomposition $\mathfrak{n}^- \oplus \mathfrak{h} \oplus \mathfrak{n}$. For $\lambda \in \mathfrak{h}^*$, we let $\mathcal{D}_{\lambda}$ be the sheaf of $\lambda$-twisted differential operators on the flag variety $X = G/B$ and $\chi_{\lambda}$ the corresponding central character. Further, we define $U^{\lambda}=U(\mathfrak{g})/\ker_{\chi_\lambda}U(\mathfrak{g})$ to be the central reduction.
Assume that $\lambda$ is $\rho$-dominant; let $i:eB \to X$ denote the standard inclusion, and define the right module:
$$\mathcal{M}_{\lambda}=i_* \mathcal{O}_{eB} \otimes_{\mathcal{O}_X} \mathcal{D}_\lambda.$$
This is a right $\mathcal{D}_{\lambda}$-module and I am trying to compute its global sections $\Gamma(X, \mathcal{M}_{\lambda})$; by construction this should be a right $U^{\lambda}$-module.
On the other hand, $\Gamma(X,\mathcal{M}_{\lambda}) \cong \Gamma(eB,i^* \mathcal{D}_{\lambda})$, the geometric fibre of $\mathcal{D}_{\lambda}$ at $eB$, which is the right Verma module $M(\lambda)^r:=\mathbb{C}_{\lambda} \otimes_{U(\mathfrak{b})} U(\mathfrak{g})$.
What I do not understand is how $U^{\lambda}$ is supposed to act on $M(\lambda)^{r}$ as the obvious action fails.
Any hints or references will be greatly appreciated.
I must admit that I could not understand your question. $U(\mathfrak{g})$ acts on the Verma module. What does it mean "the obvious action fails"?
Sorry, I should have made that clearer. What I mean is that if $U(\mathfrak{g})$ acts from the right on $M(\lambda)^r$, $\ker_{\chi_{\lambda}}$ does not annihilate the module, so I do not obtain a $U^{\lambda}$-action.
Why not? With the correct normalization, $M(\lambda)$ will indeed be annihilated by a suitable central ideal. For that you need to recall Harish-Chandra's isomorphism and look at how the center acts on the highest weight vector.
Yes, I understand that $\ker_{\chi_{\lambda}}$ annihilates the left Verma module $M(\lambda)=U(\mathfrak{g}) \otimes_{U(\mathfrak{b})} \mathbb{C}_{\lambda}$, but I am looking at $\mathbb{C}_{\lambda} \otimes_{U(\mathfrak{b})} U(\mathfrak{g})$; I understand that this too will be annihilated by a suitable central ideal, I do not see why this is $\ker_{\chi_{\lambda}}$.
I can't say precisely on the spot, as I am always confused initially by the $\rho$-shift, $\lambda \mapsto -\lambda$ and $\lambda \mapsto w_0 \lambda$, but it is easy to find by hand the central character via which the right action goes - you use the Harish-Chandra map $Z(\mathfrak{g}) \to \mathfrak{n} U(\mathfrak{g}) \cap U(\mathfrak{g}) \mathfrak{n}^- + U(\mathfrak{h})$, apply this to the highest weight vector and see what you get (but it is always more correct to put in the $\rho$-shift, and one needs to be careful with it, since $\rho$ for $\mathfrak{n}$ is $-\rho$ for $\mathfrak{n}^-$, and so on)
and also it is always a bit confusing until you get used to it, that you also have similarly Harish-Chandra map $Z(\mathfrak{g}) \to \mathfrak{n}^- U(\mathfrak{g}) \cap U(\mathfrak{g}) \mathfrak{n} + U(\mathfrak{h})$, but $\rho$-shift takes care of the discrepancy
A different way of thinking would be to use the equivalence between left $\mathfrak{g}$-modules and right $\mathfrak{g}$-modules, induced by $X \mapsto -X$ on $\mathfrak{g}$ itself. Then your right Verma will become a suitable left Verma, and it is easy to compute all the swaps that take place in this situation.
Thank you very much for the hints.
|
2025-03-21T14:48:29.838941
| 2020-02-13T21:02:01 |
352648
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626300",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352648"
}
|
Stack Exchange
|
basic question on quotient stacks
Let $X$ be a scheme over $S$, and $G$ be an affine group scheme over $S$ acting on $X$. This Wikipedia article (or also this related MO question) defines a quotient stack $[X/G]$ as a category of principal $G$-bundles fibred over Sch/S.
When they discuss the map $[X/G] \to X/G$ (when $X/G$ exists as an algebraic space, or let's say a scheme for convenience if one prefers), they say "complete the diagram", i.e. if $D\to T$ is a principal $G$-bundle corresponding to a $T$-point $T\to [X/G]$, then we can complete the diagram
$T \leftarrow D\to X \to X/G$ (which I understand as inducing a map $T \to X/G$).
It is not clear to me how one can induce a map $T\to X/G$ using the fact that $D\to X$ is $G$-equivariant.
Q1. How do we get the induced map $T\to X/G$?
Q2. If we have a $T$-point of $X/G$, pulling back along the diagram $T\to X/G \leftarrow X$ gives a principal $G$-bundle $D\to T$ and a $G$-equivariant map $D\to X$. Then what is the obstruction to this map $X/G \to [X/G]$ being the quasi-inverse of $[X/G]\to X/G$?
The map $D\to X$ induces a map $D/G\to X/G$. Since $D\to T$ is a principal $G$-bundle the induced map $D/G\to T$ is an isomorphism (e.g. by checking locally over $T$ and reduce to the case of the trivial bundle) and so we get a map $T \to X/G$. This is the map they refer to.
For the second, in general the map $X\to X/G$ is not a principal bundle: some fibers of the map $X\to X/G$ might not carry a free transitive action of $G$. A prototypical example is the action of $\mathbb{G}_m$ on $\mathbb{A}^1$ by homotheties, where the quotient scheme is a single point and of course carries no corresponding principal bundle.
|
2025-03-21T14:48:29.839082
| 2020-02-13T21:14:28 |
352649
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"ABIM",
"Jochen Wengenroth",
"Yemon Choi",
"https://mathoverflow.net/users/21051",
"https://mathoverflow.net/users/36886",
"https://mathoverflow.net/users/763"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626301",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352649"
}
|
Stack Exchange
|
Continuous function on colimit
Let $X$ be a Banach space and $f:X\rightarrow \mathbb{R}$ be continuous. Suppose that $\{X_n\}_{n \in \mathbb{N}}$ is a strictly nested sequence of Banach subspaces for which $\cup_{n \in \mathbb{N}} X_n$ is dense in $X$. Then the colimit is an LF-space, which is not metrizable.
Since $f|_{X_n}$ is continuous to $\mathbb{R}$, by the universal property of the colimit (in the category LCS of locally convex spaces and continuous linear maps) it should extend to $f':X\rightarrow \mathbb{R}$ (where now $X$ is considered with the colimit topology and not its original Banach space topology).
What is $f'$ (explicitly, in terms of the $f|_{X_n}$)? My intuition is that it is either $f$ or an infinite sum of the $f|_{X_n}$.
Note: In particular it shouldn't be $f$ because the colimit topology is strictly finer.
Do you mean that $\bigcup_n X_n$ is dense in $X$ or equal to $X$? I think the latter scenario is ruled out (assuming the $X_n$ are strictly increasing) by a Baire category argument
Also, colimit in what category?
Yes, I oversimplified things. Dense and also in the category LCS (with continuous linear maps as morphisms).
I don't understand what you mean by "$X$ is considered with the colimit topology": only $Y=\bigcup_n X_n$ has a colimit topology and (as Yemon mentions) $Y$ is hardly ever equal to $X$. And I don't see how density of $Y$ in $(X,|\cdot|)$ should help -- this does not mean that $X$ is a completion of $Y$ (which need not be complete but easily can be, e.g., if all $X_n$ are closed subspaces of $X$ or if all $X_n$ are reflexive).
I did not fully understand your setting (for example, are the $X_n$ given the subspace topology, or is the embedding map $X_n \to X$ merely required to be continuous... Is $f$ linear... etc.). But the colimit topology on $X$ will be finer than the original topology on $X$, and therefore the functional $f$, being continuous w.r.t. the original topology, will also be continuous w.r.t. the colimit topology, and hence it satisfies the demand of the universal property - the unique continuous map from the colimit whose restrictions to the $X_n$'s are the given ones... So yes, I think $f^{\prime}$ will be equal to the original $f$.
|
2025-03-21T14:48:29.839266
| 2020-02-13T21:31:36 |
352651
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"YCor",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/95282",
"user95282"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626302",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352651"
}
|
Stack Exchange
|
Decomposition into distal and proximal
For a topological group $G$ and a bounded real- or complex-valued function $f$ on $G$, the orbit closure of $f$ is the pointwise closure in the space of all bounded functions on $G$ of the orbit of $f$ under right translations. For a bounded function $f$ on $G$ whose orbit closure consists of continuous functions, say that $f$ is distal if its orbit closure is a distal $G$-flow; say that it is proximal if its orbit closure is a proximal $G$-flow. See e.g. section 4.6 in Berglund, Junghenn and Milnes, "Analysis on semigroups" (1989).
Question: Is there another characterization of the functions of the form $d+p$ where $d$ is distal and $p$ is proximal? If not in general then at least for some groups?
it would be useful to at least provide references to your terminology (distal $G$-flow, proximal $G$-flow) over an arbitrary topological group – also I assume you have in mind real/complex-valued functions.
@YCor Good idea. It's done.
|
2025-03-21T14:48:29.839364
| 2020-02-13T22:31:46 |
352653
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"J. De Ro",
"Yemon Choi",
"https://mathoverflow.net/users/470427",
"https://mathoverflow.net/users/763"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626303",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352653"
}
|
Stack Exchange
|
Hopf C-star algebra/comodules using a Fubini tensor product rather than the minimal tensor product?
Throughout, $\otimes$ denotes the minimal tensor product of $\newcommand{\Cst}{{\rm C}^{\ast}}\Cst$-algebras, and $\odot$ denotes the tensor product of (underlying) vector spaces.
Given two $\Cst$-algebras $A$ and $B$, with respective $\Cst$-subalgebras $C$ and $D$, the Fubini tensor product of $C$ and $D$ (relative to $A$ and $B$) is defined to be the following subset of $A\otimes B$:
$$C\otimes_{\mathcal F} D = \{ w \in A \otimes B \colon (\phi\otimes\iota)(w) \in D, (\iota\otimes\psi)(w)\in C \;\hbox{for all}\; \phi\in A^*\;\hbox{and all}\;\psi\in B^*\}.$$
This always contains $C\otimes D$, but examples are known where it is strictly bigger; it does coincide with $C\otimes D$ if both $C$ and $D$ are nuclear, for instance.
I have a very hazy recollection of seeing some papers, possibly survey articles, where one is dealing with a Hopf $\Cst$-algebra A and wants to relax the usual definition of comodule $D$ so that the coaction takes values in $A\otimes_{\mathcal F} D$ — possibly Kirchberg's name came up, either for the technical prerequisites or as someone who had proposed a similar construction. Can anyone confirm if such a use of the Fubini tensor product has been tried before, and if so, whether it has gone anywhere? Mainly I want to quickly check if some ideas I am playing with are rediscovering old things or known not to work.
2 years later, I have the same question. Did you perhaps find other references than the ones in the answer below?
@QuantumSpace I'm afraid that I became busy with other projects and haven't given this further thought. I would still be interested to know of any other references in the literature
Alright, I'll let know if i come across anything useful!
Although i do not know much more to say, i recall i have seen this variant of the definition of the comodule you are describing, used in the context of Hopf-von Neumann algebras. See for example:
Crossed products of dual operator spaces by locally compact groups, Dimitrios Andreou, arXiv:1910.00433 [math.OA] (see the def in p.3)
and also:
Masamichi Hamana: Injective envelopes of dynamical systems, Toyama Math. J., Vol. 34(2011), 23-86 https://toyama.repo.nii.ac.jp/?action=repository_action_common_download&item_id=3070&item_no=1&attribute_id=18&file_no=1
(see def 2.1, p.31)
Thanks for these references. A quick look suggests that both of these are considering a variant of the Fubini tensor product involving embedding into B(H) and using normal slices - possibly this is the same as what I have defined above, using embedding of a Cstar algebra into its bidual, but it's not quite clear to me. But thank you for these references, which I was not aware of or had forgotten, they may be helpful for me
Also worth noting that both of these seem to be using the Hopf-von Neumann perspective, whereas the work which motivated my question is really about the Hopf-Cstar perspective (for better or for worse)
|
2025-03-21T14:48:29.839587
| 2020-02-13T23:29:53 |
352656
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Abdelmalek Abdesselam",
"Igor Rivin",
"Powerspawn",
"YCor",
"https://mathoverflow.net/users/11142",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/152336",
"https://mathoverflow.net/users/7410"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626304",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352656"
}
|
Stack Exchange
|
Can the degree of an affine variety increase after intersecting with a hyperplane?
Say we have an affine variety $V \subset \mathbb{C}^n$, and suppose we intersect $V$ with a hyperplane $H$, possibly not in general position. Is it possible for the degree of $V \cap H$ to be larger than the degree of $V$?
How do you define the degree of $V$ if $V$ is not a hypersurface?
The definition I'm using is that if $V$ has dimension $d$, then the degree of $V$ is the number of intersection points of $V$ with $d$ hyperplanes in general position (including intersections at infinity and counted with intersection multiplicity).
you mean $n-d$ hyperplanes
@AbdelmalekAbdesselam The hyperplanes have codimension $1$, so each intersection reduces the dimension of $V$ by $1$, so we need to intersect with $d$ hyperplanes total.
oops you're right got mixed up between what is dimension and what is codimension
Answered here? https://math.stackexchange.com/questions/428205/degree-and-dimension-of-intersection-of-projective-variety-and-hypersurface
Yes, it is possible for the degree to increase. Say $V \subset \mathbb{C}^3$ is reducible: a union of a curve of degree $d$ plus one more point that doesn't lie on the curve. Then $V$ has degree $d$. But if $H$ is any plane through that extra point, then the intersection of $V$ with $H$ has degree $d+1$: the one point, plus $d$ points of intersection with the curve.
More generally, degree only sees the highest-dimensional component of $V$, but intersection with a hyperplane can bring up lower-dimensional components, raising the degree.
|
2025-03-21T14:48:29.839714
| 2020-02-14T00:23:23 |
352659
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Derek Holt",
"https://mathoverflow.net/users/35840"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626305",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352659"
}
|
Stack Exchange
|
Systems of imprimitivity for irreducible subgroup of GU(n,q)
My question is similar to this one but about finite field case.
So, the set up is the following:
Let $G$ be $GU_n(q)$ acting on the unitary space $(V, {\bf f})$, where $V=\mathbb{F}_{q^2}^n$ and ${\bf f}:V \times V \to \mathbb{F}_{q^2}$ is a non-degenerate unitary form. Let $H \le G$ be irreducible and imprimitive with a system of imprimitivity:
$$V=V_1 \oplus \ldots \oplus V_k.$$
Now, since $H$ is irreducible, the $V_i$ are either all non-degenerate or all totally isotropic. Let us, for simplicity, stick with the first option, so assume that all the $V_i$ are non-degenerate subspaces of $V$.
My question is: is it true that ${\bf f}(V_i,V_j)=0$ for $i \ne j$?
An additional question is: If the statement in the first question is false, can I find another imprimitivity system of $H$ for which it is true?
The proof in here works only for algebraically closed fields. In particular, if $H$ is absolutely irreducible, then the answer is yes.
I don't know the answer. The answer to the corresponding question for the symplectic groups with $V_1$ a totally isotropic subspace is no. An example is ${\rm Sp}(6,13)$, which has an imprimitive subgroup $H \cong {\rm SL}(2,5)$ which acts imprimitively on six 1-dimensional subspaces, but does not preserve a symplectic system of imprimitivity.
|
2025-03-21T14:48:29.839821
| 2020-02-14T00:33:32 |
352660
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626306",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352660"
}
|
Stack Exchange
|
Dirichlet problem for a subharmonic function
Suppose $K$ is a compact subset of $\mathbb R^n$, and let $V_0$ and $V_1$ be the complements of $K$ in $\mathbb R^n$ and $\mathbb R^n_\infty$ (the one-point compactification), respectively. Let $u$ be subharmonic on $V_0$ and let $H$ be the generalized solution of the Dirichlet problem for $u$ on $V_1$. So in particular $H$ is harmonic on $V_1$; meaning it is harmonic in the usual sense on any open subset of $V_1$ that does not contain infinity, and if $W$ is an open subset of $V_1$ that contains infinity, then $H$ is continuous at infinity and $H(\infty)$ equals the mean value of $H$ over any ball $B$ whose closure is contained in $W$ (see Helms, "Introduction to Potential Theory", chapter on the Dirichlet problem for unbounded domains). My question is: can we say $u\leq H$ on $V_0$?
The examples given in Helms's book already answer your question in the negative: if $K$ is the unit ball and we prescribe zero boundary values, then we have $$H(x) = 0,$$ but we can have $$u(x) = c (1 - |x|^{2 - n})$$ for any $c \in \mathbb{R}$. (If $n = 2$, set $u(x) = c \log |x|$ instead.)
|
2025-03-21T14:48:29.839976
| 2020-02-14T04:25:40 |
352668
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Brendan McKay",
"N math",
"Tony Huynh",
"https://mathoverflow.net/users/152342",
"https://mathoverflow.net/users/2233",
"https://mathoverflow.net/users/9025"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626307",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352668"
}
|
Stack Exchange
|
Cocktail party and tripartite graphs are DS?
Are cocktail party graphs and $K_{n,n,n}$ (a complete tripartite graph) determined by the spectra of their adjacency matrices? I think they are DS (determined by the adjacency spectrum), but I can't find a reference for that. Does anyone know if there is a reference for this?
Thanks to everyone for the help.
Yes, the cocktail party graphs and $K_{n,n,n}$ are determined by the spectrum of their adjacency matrix. See, for example, Proposition 6 of the paper Which graphs are determined by their spectrum? by van Dam and Haemers, which shows that the disjoint union of any collection of cliques is DS. You also need the fact that a regular graph $G$ is DS with respect to the adjacency matrix if and only if its complement is DS with respect to the adjacency matrix. Since the cocktail party graphs and $K_{n,n,n}$ are both regular and the complement of a disjoint union of cliques, the result follows.
Since the paper appears to be behind a paywall, I will include a proof of Proposition 6 here.
Every graph which is the disjoint union of cliques is determined by the spectra of its adjacency matrix.
Proof. Let $G=K_{n_1} \sqcup \dots \sqcup K_{n_k}$ where $n_1 + \dots +n_k=n$. The spectrum of $G$ is $n_1-1, \dots, n_k-1, -1, \dots, -1$, where $-1$ occurs $n-k$ times. Let $H$ be a graph with the same spectrum and let $A$ be the adjacency matrix of $H$. Since all eigenvalues of $A+I$ are nonnegative, $A+I$ is positive semidefinite. Therefore, $A+I=BB^T$ for some matrix $B$. Because the diagonal entries of $A+I$ are all $1$, each column of $B$ is a unit vector. Moreover, since $A+I$ is $0/1$-valued, it follows that if $x$ and $y$ are columns of $B$, then either $x=y$ or $x$ and $y$ are orthogonal. By grouping identical columns of $B$, we see that $A+I$ can be put in block diagonal form, where each block is an all-ones matrix. It follows that $H$ is also a disjoint union of cliques. Finally, it is easy to see that two graphs which are both disjoint unions of cliques have the same spectrum if and only if they are isomorphic.
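As a quick numerical sanity check (my own NumPy sketch, not part of the proof above), one can verify the claimed spectrum of a disjoint union of cliques, and of a cocktail party graph obtained by complementation:
```python
import numpy as np

def cliques_adjacency(sizes):
    """Adjacency matrix of a disjoint union of cliques K_{n_1}, ..., K_{n_k}."""
    n = sum(sizes)
    A = np.zeros((n, n))
    start = 0
    for s in sizes:
        A[start:start + s, start:start + s] = 1
        start += s
    np.fill_diagonal(A, 0)
    return A

sizes = [4, 3, 2, 2]
A = cliques_adjacency(sizes)
eig = np.sort(np.linalg.eigvalsh(A))[::-1]
expected = sorted([s - 1 for s in sizes], reverse=True) + [-1] * (sum(sizes) - len(sizes))
print(np.allclose(eig, expected))          # True: spectrum is the n_i - 1 together with -1's

# Cocktail party graph CP(m) = complement of m disjoint copies of K_2.
m = 5
CP = 1 - np.eye(2 * m) - cliques_adjacency([2] * m)
print(np.sort(np.linalg.eigvalsh(CP)))     # 2m-2 once, 0 (m times), -2 (m-1 times)
```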
Great, thanks for your answer. You mean that since these graphs are regular, we can work with $\overline{A}$, the adjacency matrix of the complement, instead of $A$, right?
Yes, that's correct.
Thanks for your answer.
You're welcome. I also added a proof of Proposition 6 here. If you are satisfied with the answer, you can click on the green check mark to show that it has been answered.
A tiny bit is missing in order to go to the complement. Namely, a regular graph cannot be cospectral to an irregular graph. This is an old result of Sachs: a graph is regular iff the sum of the squares of the eigenvalues equals $n$ times the largest eigenvalue.
@BrendanMcKay Yes, I swept that part under the rug. Thanks for your comment!
@Tony Huynh Thanks for your proof.
|
2025-03-21T14:48:29.840211
| 2020-02-14T05:03:32 |
352670
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"MaoWao",
"Yemon Choi",
"https://mathoverflow.net/users/152346",
"https://mathoverflow.net/users/763",
"https://mathoverflow.net/users/95776",
"user152346"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626308",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352670"
}
|
Stack Exchange
|
Characterizing spectral radius using invertible elements in unital C* algebra
Consider a unital C*-algebra $A$. I want to show that the spectral radius of $a \in A$ satisfies the following: $r(a)=\inf_{b\in \mathrm{Inv}(A)}\|bab^{-1}\|$, where $\mathrm{Inv}(A)$ is the set of invertible elements in $A$.
So far I can only see one direction, namely $r(a)\leq \inf_{b\in \mathrm{Inv}(A)}\|bab^{-1}\|$. I'm wondering how to prove the other direction.
Thank you very much for the help.
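To convince myself that the formula is at least plausible, I checked it numerically for $A = M_2(\mathbb{C})$ (a small illustration only, with an explicitly chosen family of scaling matrices $b$; it is not part of any proof):
```python
import numpy as np

a = np.array([[0.0, 4.0],
              [0.0, 0.5]])                    # non-normal: ||a|| is about 4.03 but r(a) = 0.5
r = max(abs(np.linalg.eigvals(a)))            # spectral radius

for t in [1.0, 0.1, 0.01, 0.001]:
    b = np.diag([t, 1.0])                     # invertible scaling matrix
    print(t, np.linalg.norm(b @ a @ np.linalg.inv(b), 2))   # operator norm of b a b^{-1}

print("spectral radius:", r)                  # the norms above decrease towards r(a)
```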
If I recall correctly, this is an exercise in Murphy's book. Where did you come across this, and what's the context in which you need to know the answer?
Yes this is a problem in Murphy's book. I'm a theoretical physicist and need some knowledge in C* algebra. I got stuck on this problem.
This question was crossposted on MSE: https://math.stackexchange.com/questions/3546121/characterizing-spectral-radius-using-invertible-elements-in-unital-c-algebra.
By the way I should remark that I don't think this is a trivial question - without the hint given by the first part of the exercise, I remember being stuck on this for a while, a long time ago - but probably MSE is a better home for the question, now that it is getting feedback there.
I did solve the first part. Now I see the second part. Thank you both very much for the help.
I'm voting to close this question because it was subsequently asked on MSE and got feedback there which led to a resolution
|
2025-03-21T14:48:29.840340
| 2020-02-14T05:06:24 |
352672
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dan Petersen",
"https://mathoverflow.net/users/1310",
"https://mathoverflow.net/users/27004",
"wonderich"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626309",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352672"
}
|
Stack Exchange
|
(1/2) K3 surface or half-K3 surface: Ways to think about it?
I heard from string theorists thinking of the so-called "(1/2) K3 surface" or "half-K3 surface" as follows:
Let $T^2 \times S^1$ be a 3-torus with spin structure periodic in all directions. $T^2 \times S^1$, with this spin structure, is the boundary of a “1/2-K3 surface,” that is, a four-manifold $M^4$ that maps to a disc $D$ with generic fiber an elliptic curve. In particular, the map
$$ M^4 \to D$$
has a section
$$s : D \to M^4.$$
(possibly missing contexts)
...
Are these mathematically clear? Or do we require more clarification?
What are some other Mathematical way"s" to think about or to define "(1/2) K3 surface"?
Some K3 surfaces are elliptic, meaning that they are fibered over $\mathbb P^1$ with generic fiber an elliptic curve. Since $\mathbb P^1$ is glued from two disks it seems natural to consider the inverse image of (say) the unit disk to be "1/2" of the K3. This would somehow motivate their definition of a 1/2-K3 surface to be just any old elliptic fibration over a disk, except for the fact that an elliptic surface fibered over $\mathbb P^1$ is not in general a K3; the part of the quoted section after "that is" seems like it could just as well deserve to be called a "1/2-Enriques surface"...
Of course what you've written is too vague to be a definition, but I can guess what they're talking about. In low-dimensional topology there's a 4-manifold called $E(1)$; this is a rational complex surface obtained from $CP^2$ by blowing up the nine basepoints of a cubic pencil. This manifold is fibred by elliptic curves (the proper transforms of the cubics). If you cut out one of these fibres, you're left with a 4-manifold whose boundary is the 3-torus (because the elliptic fibres have trivial normal bundle in the blown up surface). If you glue two of these 4-manifolds together along their common torus boundary, you get a 4-manifold called $E(2)$, also known as the K3 surface. As Dan Petersen mentions in the comments, this is the same as decomposing an elliptically-fibred K3 into two pieces by taking the preimages of the two hemispheres under the elliptic fibration, but additionally making sure that the two halves are diffeomorphic (each containing 12 of the 24 nodal fibres (with vanishing cycles a,b,a,b,a,b,a,b,a,b,a,b) if your fibration is Lefschetz).
thanks +1 for the answer!
|
2025-03-21T14:48:29.840561
| 2020-02-14T05:49:31 |
352675
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Brian Hopkins",
"Martin Brandenburg",
"https://mathoverflow.net/users/14807",
"https://mathoverflow.net/users/2841"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626310",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352675"
}
|
Stack Exchange
|
Counting self avoiding walks in a strip
Consider the strip $\{0,1,\ldots, n\}\times\{0,1,2\}$ in $\mathbb{N}^2$. Is a formula known for the total number of self-avoiding walks in this strip starting at $(0,0)$, in terms of the parameter $n$?
Edit: I mean all the walks that take steps of $(\pm 1,0),(0,\pm1)$ as long as they are confined to that strip.
Note: Asked on math.stackexchange (see here) a week ago with no specific answer.
Yes. Do you know the transfer-matrix method?
I know. This is why the method has to be applied to a suitable graph. See Faase, "On the number of specific spanning subgraphs of $G \times P_n$". Here it is explained how to count self-avoiding paths from one point on the "left" to one point on the "right", but something similar works for general end-points, and then we just sum up.
A special case of this was Problem A2 of the 2005 Putnam exam. It was in the context of a rook's tour on 3 by $n$ chessboard, counting paths from $(1,1)$ to $(n,1)$ that visited each square exactly once. For the current problem, these are maximal SAWs with a specified endpoint.
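For small $n$ the counts can also be generated by brute force; here is a minimal Python sketch (my own, only for producing terms to test a conjectured transfer-matrix recurrence against; it counts every non-empty self-avoiding walk, so including the empty walk would shift each value by one):
```python
def count_saws(n, height=3):
    """Count non-empty self-avoiding walks starting at (0,0) in {0,...,n} x {0,...,height-1}."""
    total = 0

    def dfs(x, y, visited):
        nonlocal total
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx <= n and 0 <= ny < height and (nx, ny) not in visited:
                total += 1                     # every extension is a distinct walk
                visited.add((nx, ny))
                dfs(nx, ny, visited)
                visited.remove((nx, ny))

    dfs(0, 0, {(0, 0)})
    return total

print([count_saws(n) for n in range(6)])       # first few terms of the counting sequence
```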
|
2025-03-21T14:48:29.840678
| 2020-02-14T06:41:39 |
352678
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626311",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352678"
}
|
Stack Exchange
|
Persistent homology stability results (query about Lipschitz functions)
One of the beneficial properties of persistent homology is its stability results (so called robustness to noise).
Usually the referenced paper is this paper
titled "Lipschitz functions have $L_p$-stable persistence".
According to the authors, the Lipschitz condition is crucial (otherwise the results don't hold).
My understanding is that the Lipschitz function $f$ is used to construct the filtration $X_a = f^{−1}(−\infty, a] $ in the context of persistent homology.
My question is how "universal" is this assumption of Lipschitz-ness?
That is, for a "typical" filtration $K_1\subseteq K_2 \subseteq \dots K_m$, can it be expressed in the form of $K_i=f^{−1}(−\infty, a] $ for some Lipschitz function $f$?
If no, doesn't the oft-cited statement that persistent homology is "robust to noise" fail to hold?
Thanks a lot. That is my main question after reading the referenced paper since it is written in a very general viewpoint (more general than the context of persistent homology).
The Lipschitz property (together with a bound on the dimension of the domain, roughly speaking) allows to prove stability of persistence diagrams in the Earth mover distance (aka Wasserstein distance).
The general stability result (for bottleneck distance) does hold even if the function is not Lipschitz.
|
2025-03-21T14:48:29.840820
| 2020-02-14T09:49:11 |
352682
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"https://mathoverflow.net/users/111897",
"https://mathoverflow.net/users/130022",
"red_trumpet",
"user130022"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626312",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352682"
}
|
Stack Exchange
|
complete intersection curves with large Hilbert scheme of points
Let $X$ be a very general hypersurface of degree $6$ in $\mathbb{P}^3$. Fix an integer $d$.
Define $Y:= \{ C \in \mathbb{P}(H^0(\mathcal{O}(3))) \text{ such that } \text{dim}(\text{ Hilb}^d(X \cap C)) > d \}$, where $\text{Hilb}^d(X \cap C)$ denotes the Hilbert scheme of zero-dimensional subschemes of length $d$. Note that if the intersection $X \cap C$ is smooth then $\text{Hilb}^d(X \cap C)$ has dimension $d$.
My question is the following:
What is the dimension of $Y$ ?
Can we give an effective bound on the dimension of $Y$ ?
Out of curiosity: Why is $Y$ non-empty? Do you have an example?
I do not have any particular example. But if the intersection has some bad singularity, then it is possible to have a large-dimensional Hilbert scheme.
I believe that in characteristic zero for smooth $X$ we always have $\dim \text{Hilb}^d(X\cap C) = d$.
Sketch of proof: Let $Y = X\cap C$. For a point $p\in Y$ and any $e\in \mathbb{N}$ denote by $\text{Hilb}^e(Y, p) \subset \text{Hilb}^e(Y)$ the locus consisting of subschemes supported only at $p$. Since $X\subset \mathbb{P}^3$ is smooth, the singularities of $Y$ are planar, so that for every $p\in Y$ the locus $\text{Hilb}^e(Y, p)$ can be identified with a subscheme of $\text{Hilb}^e(\mathbb{A}^2, 0)$.
Briançon proved that $\dim \text{Hilb}^e(\mathbb{A}^2, 0) = e-1$, so $\dim \text{Hilb}^e(Y, p)\leq e-1$. (Here we need characteristic zero.)
Now fix a partition $\lambda = (\lambda_1,\ldots, \lambda_m)$ of $d$ and consider the locus $S_{\lambda}$ in $\text{Sym}^d(Y)$ consisting of cycles of the form $\sum \lambda_i p_i$ for distinct $p_i\in Y$. By the above estimate, the fiber of the Hilbert–Chow morphism over any element of $S_{\lambda}$ has dimension at most $d-m$. The locus $S_{\lambda}$ has codimension $d-m$ in $\text{Sym}^d(Y)$, so we get that its preimage has dimension at most $d$; summing over all $\lambda$ we get the claim.
thank you very much for the answer.
|
2025-03-21T14:48:29.841132
| 2020-02-14T10:13:11 |
352683
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Manfred Weis",
"gmvh",
"https://mathoverflow.net/users/31310",
"https://mathoverflow.net/users/45250"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626313",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352683"
}
|
Stack Exchange
|
Non-polynomial splines, a non-linear problem
I'm looking for references on how to construct spline-like functions from a basis that does not include piecewise polynomials.
To be specific, given a class of functions such as "decaying exponentials" or "sines and cosines" (which are parameterized by a single parameter, e.g. the decay rate or the frequency), is there an efficient and numerically stable method to construct a function that is piecewise a linear combination of $N$ such functions (whose parameters are to be determined), interpolates given data $(x_k,f_k)$ [which are assumed to be such that they can be interpolated using the given function class, i.e. e.g. monotonically decreasing for decaying exponentials] and has continuous derivatives at $x_k$ up to order $2N-2$ (in order to fix $N$ linear coefficients and $N$ non-linear parameters)?
I can of course write down the explicit equations needed to satisfy these conditions, but directly solving those using a non-linear equation system solver does not look too promising as an approach.
What literature I could find so far on splines with non-polynomial components referred to spaces spanned by polynomials and some given non-polynomial. Here I'm looking for the case where there are no polynomials and the non-polynomial functions are parameterized by a parameter whose values are to be determined by the interpolation and smoothness conditions.
one strategy that comes to mind is to proceed analogously to the B-spline technique, i.e. construct basis functions of minimal support with the smoothest possible transition to $f(x)\equiv 0$ and attempt to meet the goal of making the collection of basis functions a partition of unity. Then solving the interpolation task amounts to evaluating the basis functions at the knots and solving a system of linear equations.
here is a download link describing exponential splines https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=2ahUKEwiD_JGNxNPnAhXmw8QBHW1hApAQFjABegQIBBAB&url=https%3A%2F%2Fcore.ac.uk%2Fdownload%2Fpdf%2F82112125.pdf&usg=AOvVaw0nvVKzO_qYZXqCnrjlGOqE
Thanks for the comment and the link. I was aware of the McCartin paper, which to my understanding deals with spline interpolation in $\mathrm{span}{1,x,e^{px},e^{-px}}$, which is why I stated that existing work seems to deal with mixed polynomial-exponential bases. The problem I'm asking about is different in that the function space to which the splines piecewise belong doesn't contain polynomials of any degree (except possibly constants), and that the non-linear parameter(s) $p$ are to be determined by the interpolation and continuity conditions.
Could you provide an exemplary example of what kind of functions you have available and which parameters have to be determined as part of the interpolation task? My assumption was that the set of functions is fixed up to multiplication with a constant, but apparently you need something that goes beyond smoothly piecing a linear combination of given functions.
I've edited the question for clarity. What I'm looking for is a way to both pick a finite-dimensional subspace of an infinite-dimensional space and the unique element of that subspace that satisfies the given conditions. E.g. from "decaying exponentials", i.e. ${e^{-\alpha x}|\alpha>0}$, four conditions pick an element $s(x)=Ae^{-\alpha x}+Be^{-\beta x}\in\mathrm{span}{e^{-\alpha x},e^{-\beta x}}$. The problem is precisely the non-linearity of the task.
Just a remark: if the $1$ and $x$ are in the span, then at least granting continuity of the resulting curve poses no problem; otherwise already that would pose severe difficulties for arbitrary non-polynomial function bases.
@ManfredWeis I agree that arbitrary non-polynomial bases will be a nightmare in general. I may have overgeneralized my actual problem ... for decaying exponentials or cosines, I see no problem with continuity.
This is not a full answer, but I guess it is more than a comment.
One way to reduce the task is to apply the spirit of VARPRO to separate the linear coefficients and the non-linear parameters, i.e. alternatingly solve $N$ of the conditions as a linear equation system for the coefficients of the solution in a given $N$-dimensional subspace, and solve the remaining $N$ conditions as a non-linear system for $N$ parameters giving a different subspace (in which the coefficients, being kept fixed, are those of the solution to those other $N$ conditions). This reduces the problem size by a factor of two.
For the case of $N=2$, there is a somewhat canonical-looking way to split the equations by solving the interpolation conditions as linear equations for the coefficients, and the smoothness conditions as non-linear equations for the parameters, but sadly this does not generalize to $N\ge 3$, where the split between the equations appears to be necessarily arbitrary.
regarding the functions $Ae^{ax}$ it should be noted that $A$ and $a$ do not give two degrees of freedom, because multiplying by $A$ is a "shape-preserving" shift of $e^{ax}$ along the $x$-axis. For $C^1$ continuous splining of that function class at least 3 functions $e^{a_ix}, e^{b_ix},e^{c_ix}$ are required on $[x_i,x_{i+1})$; the solution could be calculated symbolically and the $\lbrace a_i,b_i,c_i\rbrace$ chosen to optimize the solution w.r.t. further properties like curve length etc.
The interpolant
$$A\left(e^{\frac{a}{A}(x-x_i)}-1\right)+B\cdot\left(\cosh\left(\frac{a}{A}(x-x_i)\right)-1\right)+y_i\ =\ \left(A+\frac{B}{2}\right)\mathbf{e^{\frac{a}{A}(x-x_i)}}+\frac{B}{2}\cdot \mathbf{e^{-\frac{a}{A}(x-x_i)}}-(A+B)+y_i,\\
a=\frac{d}{dx}S(x_i),\\
\Delta x_i=x_{i+1}-x_i,\\
\Delta y_i=y_{i+1}-y_i,\\
B=\frac{\Delta y_i-A\left(e^{\frac{a}{A}\Delta x_i}-1\right)}{\cosh\left(\frac{a}{A}\Delta x_i\right)-1}$$
The construction of the spline $S(x)$ happens from left to right and requires knowledge of the slope at $x_0$ to be able to calculate the interpolant that connects $(x_0,y_0)$ with $(x_1,y_1)$; knowing the interpolant we can determine the slope at $x_1$ and we are in the same situation as before, so that eventually $S(x)$ is determined.
The underlying ideas that led to identifying the interpolant for $C^1$ continuous interpolation can most likely be generalized to higher degrees of smoothness, but I'm still working on that.
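For concreteness, here is a minimal Python sketch of that left-to-right construction (my own illustration, not part of the derivation above); the sample data, the choice $A=1$ per segment and the initial slope are arbitrary assumptions, and the formula degenerates if the slope $a$ vanishes exactly:
```python
import numpy as np

# Sample data to interpolate (made up for illustration)
xs = np.array([0.0, 1.0, 2.5, 4.0])
ys = np.array([1.0, 2.0, 0.5, 1.5])

def segment(xi, yi, xi1, yi1, a, A=1.0):
    """One piece A*(exp(u)-1) + B*(cosh(u)-1) + yi with u = (a/A)*(x - xi);
    returns the piece as a callable plus the slope it produces at xi1,
    which seeds the next segment."""
    dx, dy = xi1 - xi, yi1 - yi
    k = a / A
    B = (dy - A*(np.exp(k*dx) - 1.0)) / (np.cosh(k*dx) - 1.0)
    piece = lambda x: A*(np.exp(k*(x - xi)) - 1.0) + B*(np.cosh(k*(x - xi)) - 1.0) + yi
    slope_right = A*k*np.exp(k*dx) + B*k*np.sinh(k*dx)   # derivative at xi1
    return piece, slope_right

pieces, a = [], 0.7            # 0.7 = assumed slope at x_0 (a free parameter)
for i in range(len(xs) - 1):
    piece, a = segment(xs[i], ys[i], xs[i+1], ys[i+1], a)
    pieces.append(piece)

print(pieces[1](2.0))          # evaluate the spline inside the second segment
```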
Addendum:
parameter $A$ provides limited control over the shape of the interpolant:
if we want the "purest" expontial functions, the $A$ should be chosen to minimize $B$
if we strive for preserving shape we can chose $A$ to control the abscissa of the (unique) local extremum in cases of $y_i\lt y_{i+1}\land y_{i+2}\lt y_{i+1}$
align it with the vertex of the parabola through $\left(\,(x_i,y_i),\,(x_{i+1},y_{i+1}),\,(x_{i+2},y_{i+2})\,\right)$
align it with the abscissa of the intersection of the line with slope $a$ that contains $(x_i,y_i)$ and the line through $(x_{i+1},y_{i+1})$ and $(x_{i+2},y_{i+2})$
Your interpolant does not appear to depend on $x$ at all. Is that right?
@gmvh you are right; the definitions of $\xi$ and $\eta$ were flawed; I have corrected that; thanks for bringing it to my attention
if the functions to be "splined" are invertible on the intervals $[x_i,x_{i+1}]$, and if for two such functions $f_{i-1}(x)$ and $f_i(x)$, invertible on $[x_{i-1},x_i]$ resp. on $[x_i,x_{i+1}]$, additionally $f_{i-1}^{-1}(x_i)=f_i^{-1}(x_i)$ holds,
then $$S(x)\ :=\ \mathcal{F}^{-1}\Big(\mathcal{S}\big(x;\,\{(x_i,\,\mathcal{F}(y_i+\gamma(x_i)))\}\big)\Big)-\gamma(x) $$
yields an entire class of interpolating non-polynomial splines. I used the "strange" notation in order to stress the analogy to signal processing: if the original interpolation task is non-polynomial in the "time"-domain then "transforming" the $y_i$ via the inverse functions makes the interpolation polynomial in the "frequency" domain; interpreting interpolation as filtering completes the analogy to signal processing.
The key concept is that of a homomorphism; it was introduced to signal processing in R.W. Schafer, "Echo removal by discrete generalized linear filtering", Res. Lab. Electron., MIT, Tech. Rep. (1969).
The $\gamma(x)$ is a way to allow for mixed interpolation, e.g. exponentials plus polynomials; the case $\gamma(x)\equiv \mathrm{const}$ seems interesting in its own right when investigating the spline's limit behavior for e.g. $\gamma(x)\equiv \mathrm{const}\,\to\,\infty$, especially in the case of exponential splines.
Maybe it's worth mentioning, albeit trivial, that this "homomorphic" splining carries over directly to higher dimensions or to parametric interpolation, and one isn't limited to polynomials in the "frequency" domain; rational interpolants would be the next bigger thing to use.
Addendum:
the notation I used is aimed at emphasizing the analogy to the transformation from the time domain to the frequency domain, e.g. via a Fourier transform $\mathcal{F}$. In the analogy, interpolating $(x_i,y_i)$ with exponential splines $S(x)\in C^{k-1}$ of the form $e^{\sum_{j=0}^k a_{ij}x^j}$, $S(x_i)=y_i$, means applying the analogue of the Fourier transformation $\mathcal{F}$, i.e. the inverse $\ln(\cdot)$ of $e^{(\cdot)}$, to the ordinates $y_i$ of the data to be interpolated and then calculating the interpolating polynomial spline with these transformed ordinates; that spline is denoted by $\mathcal{S}(x)$ because it is calculated in the analogue of the frequency domain.
The final step is to go back to the analogue of the time domain by applying the analogue of the inverse Fourier transformation $\mathcal{F}^{-1}$, i.e. $e^{(\cdot)}$, to $\mathcal{S}(x)$, which yields the interpolating spline $S(x)$ in the analogue of the time domain.
The $\gamma(x)$ allows for modeling situations where the data is "almost" polynomial: if for some order $h$ the magnitudes of the divided differences of order $h$ become small, then it may, depending on the model of the origin of the data, improve the quality of the interpolating spline if the values of a "lower polynomial hull" at the abscissas $x_i$ are subtracted from the ordinates prior to the transition to the analogue of the frequency domain.
Another use case for introducing $\gamma(x)$ is if one is only interested in the shape of the interpolating curve but the ordinates of the data are outside the value range of the interpolation curve, e.g. negative $y_i$ in the case of exponential splines; then adding $\gamma(x)$ can fix these issues.
An example may make things clearer:
suppose we have sampled an empirical distribution which we assume to be a sum of Gaussians that we would like to recover.
In that case an Ansatz could be to take the logarithm of the ordinates, calculate a quadratic spline, and take the abscissas of the apices of the parabolas with negative leading coefficient as initial guesses for the central values of the Gaussians.
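A minimal Python sketch of that Ansatz (my own illustration; the synthetic two-Gaussian data are made up, and I use SciPy's cubic spline instead of a quadratic one purely for convenience):
```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic samples of a two-bump "sum of Gaussians" (made up for illustration)
centers, widths, weights = [1.0, 4.0], [0.5, 0.7], [1.0, 0.6]
xs = np.linspace(-1.0, 6.5, 40)
ys = sum(w*np.exp(-((xs - c)/s)**2 / 2) for c, s, w in zip(centers, widths, weights))

# "Homomorphic" step: spline the logarithms with an ordinary polynomial spline
log_spline = CubicSpline(xs, np.log(ys))

# Local maxima of exp(log_spline) are the local maxima of log_spline itself;
# their abscissas serve as initial guesses for the Gaussian centers
d1, d2 = log_spline.derivative(), log_spline.derivative(2)
guesses = [t for t in d1.roots(extrapolate=False) if d2(t) < 0]
print(guesses)   # should be close to the true centers 1.0 and 4.0
```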
Sorry, I don't quite understand your notation here -- what precisely are $\mathcal{F}$ and $\mathcal{S}$ ? And how is $\gamma$ to be chosen?
@gmvh Thanks for coming back at me; I have added an explanation of the notation and also an example. I hope I managed to make things clearer.
I'm sorry, I still don't quite get it: Taking the log and then splining gives a piecewise polynomial, whose exponential won't typically be a linear combination of piecewise exponentials. So while this is some kind of non-polynomial spline, I don't think it is the solution to the problem from the OP, unless I'm missing something.
@gmvh agreed; the splines that are described do not depend on "horizontal-scaling" factors whose calculation is part of the interpolation task. For the case of exponentials that task seems to be explicitly solvable; I will eventually provide details.
For the exponential case with $N=2$, I've worked out a solution in terms of the Lambert $W$-function. If you can provide an explicit solution for the exponential case with $N>2$, that would be tremendous!
|
2025-03-21T14:48:29.842189
| 2020-02-14T11:08:25 |
352685
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"YCor",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/152357",
"sharl"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626314",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352685"
}
|
Stack Exchange
|
Advantage of fractional Fourier transform over multiscale wavelet
What is the best argument for the fractional Fourier transform over multiscale wavelets for data analysis purposes?
Optimization of a good time-frequency domain parameter? Here "good" would mean: find the %time-%frequency domain that minimizes the spectral entropy of the data.
Real physical-world behavior, since the particle path is locally described by the fractional Fourier transform in the N-slit problem context?
Are wavelets just a fast log2 implementation of the fractional Fourier transform?
Continuous transformation from time to frequency... but for what purpose?
I will take any strong argument for the fractional Fourier transform over wavelets.
Please don't overuse abbreviations.
Performance of wavelet, fractional Fourier and fractional cosine transform in image compression compares these techniques. Wavelets perform better at lower compression ratios, whereas the fractional Fourier transform provides good results at higher compression ratios.
Here is an example at compression ratio 10:1, where wavelet clearly gives better results than the fractional Fourier transform.
Yes, I often notice that wavelets perform better than fractional transforms in various signal processing fields! I argue that this is because signals are Brownian motion (frequencies decrease as $\frac{1}{x}$), so the log-scale power spectrum is constant, and hence naturally well suited to dyadic wavelet quantization. In fact, even for fractional Brownian signals, wavelets perform better than all classes of fractional transforms at the same bitrate. At high compression levels, it's not so clear that the opposite is true. I am looking for a more "philosophical" way to use the fractional transform, or a strong reason/property in its favor.
|
2025-03-21T14:48:29.842339
| 2020-02-14T11:51:45 |
352687
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alex Dugas",
"David E Speyer",
"Hugh Thomas",
"Mare",
"Mike Pierce",
"https://mathoverflow.net/users/11791",
"https://mathoverflow.net/users/297",
"https://mathoverflow.net/users/468",
"https://mathoverflow.net/users/61949",
"https://mathoverflow.net/users/64073"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626315",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352687"
}
|
Stack Exchange
|
What's an illustrative example of a tame algebra?
A finite-dimensional associative $\mathbf{k}$-algebra $\mathbf{k}Q/I$ is of tame representation type if for each dimension vector $d\geq 0$, with the exception of maybe finitely many representations*, the indecomposable representations of $\mathbf{k}Q/I$ with that dimension vector can be described up to isomorphism as finitely many one-parameter families, the parameter coming from $\mathbf{k}$.
What is an illustrative example of a tame algebra? Specifically, what's an example of a quiver $Q$ and admissible ideal $I$ such that (1) for some dimension vectors $d$ the indecomposables can't be described as finitely many one-parameter families*, and (2) for some dimension vectors there is more than just one one-parameter family.
I ask because my go-to example of a tame algebra now is the path algebra of the Jordan quiver, the quiver having one vertex and one loop, over an algebraically closed field. But this example doesn't utilize all the wiggle-room that the definition of a tame algebra allows. So I'm hoping there is a better quintessential example to keep in mind.
* Note that, as it was originally written, this was not the correct definition of an algebra having tame representation type, and actually condition (1) is not possible. See the comments below for the correct definition.
Part 3 of the book by Skowronski and Simson has a chapter on this topic, including some other tame examples.
I think this definition of tame is not quite standard (although it is perhaps equivalent to the standard definition). There usually is not an exception for any dimension vectors $d$. Instead it is required that for every dimension $d$, the indecomposable reps of dimension $d$, with finitely many exceptions, are described by a finite number of one-parameter families. If one allows constant one-parameter families, then the phrase "with finitely many exceptions" can be removed.
@AlexDugas Yeah, the definition is non-standard, because it's not correct. ;) Reading the definition here I misunderstood; there are not finitely many dimension vectors excepted, but finitely many indecomposables that live outside of a one-parameter family per dimension vector, just as you say. And according to what Hugh Thomas is saying below here and here, this is not equivalent to the correct definition.
I think the following is an example of a tame algebra where there is more than one component to a moduli space of fixed dimension. I don't know any examples where there are dimension vectors with moduli of dimension $>1$.
Take a quiver with two vertices $1$ and $2$, two arrows $x_1$ and $x_2$ from $1$ to $2$ and two arrows $y_1$ and $y_2$ from $2$ to $1$. Impose the relations $x_i y_j = 0$ and $y_j x_i = 0$ for $1 \leq i,j \leq 2$. I believe every indecomposable representation either satisfies $x_1=x_2=0$ or $y_1=y_2=0$. Thus, every indecomposable representation is a representation of either the Kronecker quiver $1 \rightrightarrows 2$ or else $1 \leftleftarrows 2$, both of which are tame, so this is tame.
For each dimension vector of the form $(n,n)$, we get two families of indecomposable representations, coming from choosing whether to make $x_1=x_2=0$ or else $y_1=y_2=0$.
Very nice! And you can't do better: as soon as there is a 2-dimensional family, the representation theory must be wild.
For the Kronecker quiver (two vertices, two arrows in the same direction) and dimension vector (1,1), over an algebraically closed ground field, the indecomposables are naturally parameterized by points in $\mathbb P^1(k)$. (The representation with the two maps given by $a$ and $b$ is sent to $[a:b]$.)
For other tame quivers with no relations over an algebraically closed ground field, the situation is slightly worse: the natural indexing set for the representations whose dimension vector is the null root is $\mathbb P^1(k)$ with some points (up to three of them) counted more than once (but finitely many times). This happens in the example Bugs gave: there are three inhomogeneous tubes, each of width two, each containing two representations of dimension vector the null root, whereas the other points of $\mathbb P^1(k)$ each correspond to one representation. (With the all-inward orientation, the reason for the indexing by $\mathbb P^1(k)$ is that the moduli space of 4 points on $\mathbb P^1$—equivalent to representations with dimension vector $(1,1,1,1,2)$, i.e., the null root—is again $\mathbb P^1$.)
I am not quite sure what you mean by the extra wiggle room of type (1). Are these supposed to be dimension vectors that have only finitely many indecomposables? I would usually think that in that case, they can also be described by one-parameter families: just make the families constant.
Any quiver whose graph is affine Dynkin graph $\tilde{D}_4$, $I=0$. If all arrows look at the center, this is related to the 4-subspace problem, which is tame.
This has the same features the OP didn't like for the Jordan quiver over an algebraically closed field. For every dimension vector, the family of indecomposable representations is either $0$ or $1$ dimensional. The latter case only occurs for dimension vectors of the form $(n,n,n,n,2n)$; the $1$-dimensional family of modules represented by the following four maps $k^{2n} \to k^n$: $(\mathrm{Id}_n, 0)$, $(0, \mathrm{Id}_n)$, $(\mathrm{Id}_n, \mathrm{Id}_n)$, $(\mathrm{Id}_n, J_n(\lambda))$ where $J_n(\lambda)$ is the $n \times n$ Jordan block with $\lambda$ on the diagonal.
@DavidESpeyer: I think perhaps you didn't write what you meant. That the families of indecomposable representations be zero or one-dimensional is equivalent to being tame. It is true that the example Bugs gives, and the more general example I gave, can be described as a single one-dimensional family plus finitely many extra points (which you disregard in your comment). I guess it should be possible to find algebras with honestly more than one one-dimensional family of indecomposables of some dimension, but we would have to leave hereditary algebras.
@HughThomassupportsMonica It sounds like you are saying that the OP's request (1) is impossible: To get a tame algebra where some of the moduli spaces have dimension $>1$? I thought that too, but I wasn't confident enough to say so. It looks like Bugs' example doesn't achieve the OP's (1) or (2).
|
2025-03-21T14:48:29.842761
| 2020-02-14T12:09:34 |
352688
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Ashot Minasyan",
"YCor",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/7644"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626316",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352688"
}
|
Stack Exchange
|
Embedding into a restricted direct product of finite groups
Residually finite groups are those groups embeddable in a direct product of a family of finite groups. What happens if we consider only restricted direct product?
Subgroups of restricted direct products have very restrictive properties: they are locally finite, and they are FC-groups (i.e., all conjugacy classes are finite, or equivalently, every element has a centralizer of finite index). This restricts to locally finite residually finite FC-groups. Whether every such group embeds into a restricted direct product of finite groups, I don't see immediately.
@MarkSapir it can't be true, since there are (abelian) locally finite FC-groups that are not residually finite.
@MarkSapir your comment looks like the assertion that groups that are (E) embeddable into restricted direct products of finite groups are exacty those locally finite FC-groups. Possibly you were just saying that groups (E) are locally finite FC-groups, in which case it's a trivial remark which doesn't need a reference.
I don't actually know whether every residually finite abelian $p$-group $A$ embeds into a direct sum of finite (abelian) groups. For instance, what about the torsion subgroup of $\prod_n C_{p^n}$?
@Yves: direct sum of finite abelian groups is isomorphic to a direct sum of finite cyclic groups. Kulikov gave a general criterion for an abelian $p$-group to be a subgroup of a direct sum of cyclic groups: see Theorems III.17.1 and III.18.1 in the book "Infinite Abelian Groups" by Laszlo Fuchs. This criterion implies that the torsion subgroup of the Cartesian product $\Pi_{n \in \mathbb{N}} C_{p^n}$ cannot be embedded in a direct sum of cyclic groups.
|
2025-03-21T14:48:29.842899
| 2020-02-14T13:33:47 |
352692
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Burnie",
"GH from MO",
"Mateusz Kwaśnicki",
"https://mathoverflow.net/users/108637",
"https://mathoverflow.net/users/11919",
"https://mathoverflow.net/users/146060"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626317",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352692"
}
|
Stack Exchange
|
A proposition about power series
Is this proposition established?
Suppose that $0<\nu<1$, $x\in[0,1]$, and consider the absolutely convergent power series
$$p(x)=\sum_{n=0}^\infty a_nx^n,$$
$$P(x)=\sum_{n=0}^\infty \frac{\Gamma(n+1)}{\Gamma(n+1+\nu)}a_nx^{n+\nu}.$$
Suppose that $p'(x),P'(x)$ don't exist. For any $a\in[0,1)$ and any sufficiently small $\delta>0$, there exists a certain $C>0$ such that
$$\sup_{x,y\in[a,a+\delta]}|P(x)-P(y)|\geq C\sup_{x,y\in[a,a+\delta]}|p(x)-p(y)|\delta^{\nu}. $$
What is $C$ supposed to depend on? As stated, $C$ may depend on $(a_n)$, $a$ and $\delta$, in which case such $C$ obviously exists.
It does not make sense to assume that $p'(x)$ does not exist. If $p(x)$ converges in $(0,1)$, then the function defined by it is infinitely differentiable in $(0,1)$. In fact $p(x)$ extends holomorphically to the unit disk $\{z\in\mathbb{C}:|z|<1\}$. This is one of the basic theorems in complex analysis. See https://en.wikipedia.org/wiki/Power_series#Differentiation_and_integration. See also my last comment under my response.
If $P'(a)=0\neq p'(a)$, then there is no such constant. Indeed, in this situation, we have for sufficiently small $\delta$,
\begin{align*}
\sup_{x,y\in[a,a+\delta]}|P(x)-P(y)|&\ \ll_a\ \delta^2\\
\sup_{x,y\in[a,a+\delta]}|p(x)-p(y)|\delta^{\nu}&\ \gg_a\ \delta^{1+\nu}.
\end{align*}
These bounds follow readily from the Taylor series expansion of $P(x)$ and $p(x)$ around $a$. In particular, the ratio of the left hand sides tends to zero under $\delta\to 0+$, hence it is not bounded away from zero.
What if $P'(x)$ and $p'(x)$ do not exist?
@Burnie: They always exist for $x \ne 0$, do they not?
the convergence of $\sum a_n$ does not imply the convergence of $\sum n\,a_n$. @Mateusz Kwaśnicki
@Burnie: Any power series is differentiable in the interior of its set of convergence. Hence $P'(a)$ and $p'(a)$ exist. More generally, if $u_n$ are holomorphic functions on an open set $M\subset\mathbb{C}$, and the function series $\sum u_n$ converges locally uniformly on $M$, then the series defines a holomorphic function on $M$, and its derivative equals $\sum u_n'$ on $M$, which itself converges locally uniformly on $M$.
|
2025-03-21T14:48:29.843067
| 2020-02-14T14:02:37 |
352695
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"MLPhysics",
"Michael Engelhardt",
"https://mathoverflow.net/users/128247",
"https://mathoverflow.net/users/134299"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626318",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352695"
}
|
Stack Exchange
|
Equivalence of solutions to Sturm-Liouville problem after translation of boundary conditions
My doubt is related to the equivalence between solutions of the following Sturm-Liouville problem:
\begin{equation}
r^{2}f''(r) + 2rf'(r) + \{\omega^{2}r^{2} - [j(j+1)-|q|^{2}]\}f(r)=0\,,\label{SL1}
\end{equation}
where $r\in\mathbb{R}$ (actually $r\in [0,+\infty)$) and $j$ can take the values $j=|q|-1, |q|, |q|+1,\ldots$. The boundary conditions (BC) are $f(r_{0})=0=f(r\to\infty)$.
It is relevant to stress that for $r<r_{0}$, where $r_{0}$ stands for a constant, this function $f(r)$ is identically zero (by definition of my physical problem). Thus, on the one hand, I'm considering that the solution is given by
$$f(r)=\begin{cases}0, \text{for } r<r_{0}\,,\\
f_{r_{0}}\,, \text{for } r>r_{0}\,,\end{cases}$$
where $f_{r_{0}}$ is the solution to my first eq. with the aforementioned BCs.
On the other hand, I could simply proceed with a simple change of variables such that
$$\bar{r} = r - r_{0}\,,$$
which implies that my ODE turns out to be
$$(\bar{r}+r_{0})^{2}f''(\bar{r}) + 2(\bar{r}+r_{0})f'(\bar{r}) + \{(\bar{r}+r_{0})^{2}\omega^{2} - [j(j+1)-|q|^{2}]\}f(\bar{r})=0\,,$$
where, now, primes denote derivative with respect to $\bar{r}$ and the BCs were shifted to
$$f(\bar{r}=0)=0=f(\bar{r}\to\infty)\,.$$
My question is: Are these two solutions equivalent? Are there any problems related to my first "method"? I mean, when I find $f_{r_{0}}$ with the boundary condition applied at $r_{0}\neq 0$, I may have ignored the behavior of the function on the interval $0 \leq r < r_{0}$, right? This looks confusing since by definition I have a null function on this interval.
Thanks!
It seems to me that in your second method, you'll be allowing a few more solutions than in the first. Say you find an $f_{r_0} $ that behaves linearly for small $r-r_0 $. Then, in your first formulation, your derivative operator will generate a $\delta $-function at $r=r_0 $, and you will discard this solution, since $f$ doesn't solve your equation at $r=r_0 $ (though it does everywhere else). In your second method, you disregard this consideration at $r=r_0 $ and you will accept $f_{r_0 } $ as a solution.
Dear @MichaelEngelhardt, I cannot see why there is a solution with linear behavior in this limit. Besides that, the boundary condition at $r=r_0$ has been chosen so that both side limits go to zero. That is, I was looking for some smooth solution of the ODE, which means that the first derivative acting on this radial function must not be a Dirac delta-"function". But, I may be wrong and I would be delighted if you could explain your opinion in more details. Finally, I forgot to mention that without the translation, the ODE can be put in the form of a Bessel equation.
I would have thought that your typical $f_{r_0 } $ would be a Bessel function with a node at $r_0 $ - so it would be proportional to $r-r_0 $ close to $r_0 $. Then the proposed solution has a discontinuous slope at $r_0 $, and the second (not the first) derivative generates a $\delta $-function, so it actually does not solve the ODE at $r_0 $ - you would presumably discard this. In your second method, you lose this information, and you'd presumably have to reintroduce it in terms of a more stringent boundary condition at $\bar{r} =0$.
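For what it's worth, here is a small numerical illustration of that derivative mismatch (my own sketch; $\omega$, the separation constant and $r_0$ are arbitrary assumptions): integrating the radial equation outward from $r_0$ with $f(r_0)=0$, $f'(r_0)=1$ and gluing with the zero function below $r_0$ produces a kink at $r_0$, visible as a spike in the discrete second derivative.
```python
import numpy as np
from scipy.integrate import solve_ivp

omega, lam, r0, R = 1.0, 2.0, 1.0, 10.0   # lam plays the role of j(j+1) - |q|^2

def rhs(r, y):                             # r^2 f'' + 2 r f' + (omega^2 r^2 - lam) f = 0
    f, fp = y
    return [fp, -2.0*fp/r - (omega**2 - lam/r**2)*f]

sol = solve_ivp(rhs, (r0, R), [0.0, 1.0], dense_output=True, rtol=1e-10, atol=1e-12)

r = np.linspace(0.2, R, 2001)
F = np.where(r < r0, 0.0, sol.sol(np.clip(r, r0, R))[0])   # glue with 0 below r0
d2 = np.gradient(np.gradient(F, r), r)
print("max |F''| within 0.01 of r0:", np.abs(d2[np.abs(r - r0) < 0.01]).max())
print("max |F''| away from r0     :", np.abs(d2[np.abs(r - r0) > 0.05]).max())
```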
|
2025-03-21T14:48:29.843297
| 2020-02-14T15:13:09 |
352699
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Ethan Bolker",
"Gerry Myerson",
"Luis Ferroni",
"Martin Brandenburg",
"Zach Teitler",
"https://mathoverflow.net/users/147861",
"https://mathoverflow.net/users/2841",
"https://mathoverflow.net/users/3684",
"https://mathoverflow.net/users/45581",
"https://mathoverflow.net/users/88133"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626319",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352699"
}
|
Stack Exchange
|
Fermat's Last Theorem for integer matrices
Some years ago I was asked by a friend if Fermat's Last Theorem was true for matrices. It is pretty easy to convince oneself that it is not the case, and in fact the following statement occurs naturally as a conjecture:
For all integers $n,k\geq 2$ there exist three square matrices $A$, $B$ and $C$ of size $k\times k$ and integer entries, such that $\det(ABC)\neq 0$ and:
$$A^n + B^n = C^n$$
Of course, the case $k=1$ is just Fermat's Last Theorem, but in that case the conclusion is the opposite for $n>2$.
I think that I read somewhere that it is known that the above assertion is true (I do not remember exactly where, and haven't found anything on Google, apart from this old question on MSE, which has an old reference that I think does not answer this).
Two observations that are pretty straightforward to verify: the $2\times 2$ and $3\times 3$ cases solve the general $k\times k$ case by putting suitable small matrices on the diagonal.
Also, as it is stated in a comment on that question, if the exponent $n$ is odd then the case $2\times 2$ can be solved by this example:
$$\begin{bmatrix} 1 & n^\frac{n-1}{2}\\ 0 & 1\end{bmatrix}^n + \begin{bmatrix} -1 & 0\\n^\frac{n-3}{2} & -1\end{bmatrix}^n = \begin{bmatrix} 0 & n\\1& 0\end{bmatrix}^n$$
Does anybody know of such examples in the $2\times 2$ case for even $n$, and the general $3\times 3$ case?
More clearly: are there easy and explicit examples for each $n$ and $k$ for the above conjecture?
Can we even demand $\det(A)=\det(B)=\det(C)=1$?
Hmm, it may still be true with that assumption. However notice that the example here and in Robert’s answer do not work in that case.
Of possible interest, Niven, Ivan. Fermat’s theorem for matrices. Duke Math. J. 15 (1948), no. 3, 823--826. doi:10.1215/S0012-7094-48-01574-9. https://projecteuclid.org/euclid.dmj/1077475036, Mathematical Reviews number (MathSciNet) MR0026672, Zentralblatt MATH identifier 0032.00102
There's also Schwarz, Štefan, Fermat's theorem for matrices revisited. (English). Mathematica Slovaca, vol. 35 (1985), issue 4, pp. 343-347
MSC: 11C20, 15A33, 20M10 | MR 820630 | Zbl 0584.15007 at https://dml.cz/handle/10338.dmlcz/136402
E. Bolker, "Solutions of $A^k + B^k = C^k$ in $n \times n$ integral matrices”, American Mathematical Monthly, 75, 1968, 759-760.
Thanks Ethan, I've looked at your paper and it's very nice. For the sake of completeness I just state here that Ethan proved that the equation $A^{2n}+B^{2n}=C^{2n}$ has an $n\times n$ solution. Something that just caught my attention was the fact that your suspicion in Remark 3 turned out to be wrong, since the example here above settles it. Nevertheless, thank you for sharing this beautiful article. Let's hope someone will complete the remaining cases.
A JSTOR link to Ethan Bolker's article is here: https://www.jstor.org/stable/2315199
This problem is addressed in "On Fermat's problem in matrix rings and groups," by Z. Patay and A. Szakács, Publ. Math. Debrecen 61/3-4 (2002), 487–494, which summarizes previous work on the topic and gives some new results. It seems that the problem is not completely solved.
When $k=2$, Khazanov showed that there are solutions in $SL_2(\mathbb Z)$ if and only if $n$ is not a multiple of 3 or 4, but I couldn't immediately find any statement anywhere about the case $4\mid n$ and $2\times 2$ integer matrices with nonzero determinant.
Khazanov also proved that $GL_3(\mathbb Z)$ solutions do not exist if $n$ is a multiple of either 21 or 96, and $SL_3(\mathbb Z)$ solutions do not exist if $n$ is a multiple of 48.
Patay and Szakács give explicit solutions for $SL_3(\mathbb Z)$ when $n=\pm 1\pmod 3$ as well as for $n=3$. Here's a solution for $n=3$:
$$\pmatrix{0& 0&1\\ 0 &-1& 1\\ 1 & 1 & 0}^3 + \pmatrix{0&1&0\\ 0&1&-1\\ -1&-1&0}^3 = \pmatrix{0&1&1\\0&0&1\\1&0&0}^3.$$
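This is easy to confirm by machine; a quick SymPy check (my own, assuming SymPy is available):
```python
from sympy import Matrix

A = Matrix([[0, 0, 1], [0, -1, 1], [1, 1, 0]])
B = Matrix([[0, 1, 0], [0, 1, -1], [-1, -1, 0]])
C = Matrix([[0, 1, 1], [0, 0, 1], [1, 0, 0]])

assert A**3 + B**3 == C**3                    # the identity displayed above
assert A.det() == B.det() == C.det() == 1     # all three matrices lie in SL_3(Z)
print("A^3 + B^3 = C^3 holds in SL_3(Z)")
```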
For $k=2$ with $n \equiv 2 \mod 4$:
$$ \pmatrix{1 & (-1)^{(n-2)/4} 2^{n/2} n^{n-1} \cr 0 & 1}^n + \pmatrix{n & -n\cr n & n\cr}^n = \pmatrix{1 & 0\cr (-1)^{(n-2)/4} 2^{n/2} n^{n-1} & 1\cr}^n $$
An example for $n=4$ is
$$ \pmatrix{3 & -2\cr 1 & 2\cr}^4 +
\pmatrix{2 & -4\cr 2 & 0\cr}^4 =
\pmatrix{1 & 2\cr -1 & 2\cr}^4 $$
but I don't have a generalization.
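Both the general family above and the sporadic $n=4$ example are easy to verify mechanically; a short SymPy check (my own, assuming SymPy is available):
```python
from sympy import Matrix, Integer

# The n = 2 (mod 4) family, for a few values of n
for n in (2, 6, 10):
    c = Integer(-1)**((n - 2)//4) * Integer(2)**(n//2) * Integer(n)**(n - 1)
    A, B, C = Matrix([[1, c], [0, 1]]), Matrix([[n, -n], [n, n]]), Matrix([[1, 0], [c, 1]])
    assert A**n + B**n == C**n

# The isolated n = 4 example
A, B, C = Matrix([[3, -2], [1, 2]]), Matrix([[2, -4], [2, 0]]), Matrix([[1, 2], [-1, 2]])
assert A**4 + B**4 == C**4
print("all identities check out")
```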
|
2025-03-21T14:48:29.843722
| 2020-02-14T15:24:37 |
352700
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"YCor",
"https://mathoverflow.net/users/14094"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626320",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352700"
}
|
Stack Exchange
|
Quotient of Euclidean space with maximal volume growth
Let $\Gamma$ be a discrete subgroup of the isometry group of $\mathbb R^n$ and $O=\mathbb R^n/\Gamma$ is the orbifold.
If there exists a point $p \in O$ such that
$$
\lim_{r \to \infty}\frac{\text{Vol} B(p,r)}{r^n} >0,
$$
can we prove that $\Gamma$ must be finite?
Every discrete subgroup $\Gamma$ is virtually isomorphic to $\mathbf{Z}^k$ for some $k$ (which is also the dimension of the unique minimal $\Gamma$-invariant affine subspace $V_\Gamma$), and the given ratio grows as $r^{-k}$, and in particular has positive limsup iff $k=0$, i.e., if $\Gamma$ is finite. I'm just not sure what is the quickest proof. It is useful to have in mind that $\Gamma$ acts cocompactly, and virtually by translations on $V_\Gamma$, but does not always virtually act by translation on all $\mathbf{R}^n$.
(Unlike what I said in my previous comment, $V_\Gamma$ is not always unique, e.g., when $\Gamma=1$ and $n>0$. Still it is true that $\Gamma$ acts cocompactly on every minimal $\Gamma$-invariant affine subspace.)
|
2025-03-21T14:48:29.843818
| 2020-02-14T15:29:15 |
352702
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626321",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352702"
}
|
Stack Exchange
|
Bounding and dominating numbers for preordering $\leq^*$ on bijections $f:\omega\to\omega$
For functions $f, g:\omega\to\omega$ we write $f \leq^* g$ if $\{x\in\omega: f(x)> g(x)\}$ is finite.
Let $S_\omega$ denote the collection of bijections $\varphi:\omega\to\omega$. Similarly to the bounding number and the dominating number, respectively, we define
${\frak b}^{\text{bij}} = \min\{|B|: B\subseteq S_\omega \land \forall f\in S_\omega\; \exists b\in B(b\not\leq^* f)\}$, and
${\frak d}^{\text{bij}} = \min\{|D|: D\subseteq S_\omega \land \forall f\in S_\omega\; \exists d\in D(f\leq^* d)\}$.
Do we have ${\frak b}^{\text{bij}}={\frak b}$? And what about ${\frak d}^{\text{bij}}={\frak d}$?
Of course $\mathfrak{b}^\mathrm{bij} \geq 2$ and it is not hard to see that $\{\mathrm{id}_\omega,f\}$ is an unbounded family if $f$ is the function that permutes each even number with its successor. So $\mathfrak{b}^\mathrm{bij} =2$.
I suspect $\mathfrak{d}^\mathrm{bij}= \mathfrak{c}$ but I don't have time to check this.
|
2025-03-21T14:48:29.843916
| 2020-02-14T15:32:54 |
352703
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Abdelmalek Abdesselam",
"MannyC",
"darij grinberg",
"https://mathoverflow.net/users/144413",
"https://mathoverflow.net/users/2530",
"https://mathoverflow.net/users/7410"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626322",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352703"
}
|
Stack Exchange
|
Curious identity between the two kinds of Chebyshev polynomials
I have found, by accident, an identity that relates a sum of Chebyshev polynomials of the first kind to a Chebyshev polynomial of the second kind. It goes as follows:
Given an integer partition of $n$, let $g_a$ be the number of times $a$ appears in said partition.${}^1$ Then the following identity holds for all $n \in\mathbb{N}$:
$$
U_n(x) = \sum_{\substack{n_i>0\\ \sum_i n_i = n}} \frac{1}{\prod_{a\in \{n_i\}} g_a!} \prod_{i} \frac{2}{n_i} T_{n_i}(x)\,.
$$
The sum is over all integer partitions of $n$, the product is on all $n_i$'s in the partition, with repetitions.
I have a very roundabout way to prove this identity (I'll skip the details). The left hand side is obtained by contracting two symmetric traceless tensors of $SO(4)$. That is, letting $|x|=|y|=1$ and $x,y\in \mathbb{R}^4$ then
$$
(x^{i_1}\cdots x^{i_n} - \mathrm{traces}) (y_{i_1}\cdots y_{i_n} - \mathrm{traces}) \propto U_n(x\cdot y)\,.
$$
The right hand side instead comes from the same contraction but in spinor notation. Namely we let
$$
\mathrm{x} = \left(\begin{matrix}x_3-x_4 & x_1 - i x_2 \\ x_1 + i x_2 & -x_3-x_4\end{matrix}\right)\,,\quad \bar{\mathrm{x}} = \epsilon\, \mathrm{x}\, \epsilon^T\,,
$$
and $\mathrm{y}$ in a similar way ($\epsilon$ is the Levi-Civita tensor). Then we introduce two-dimensional spinors $\eta,\tilde{\eta}$, let $\partial_{\eta^\alpha}\eta^\beta = \delta_\alpha^\beta$ (similarly for $\tilde{\eta}$), and finally
$$
(\partial_\eta \mathrm{x} \partial_{\tilde{\eta}})^n (\eta \mathrm{y}\tilde{\eta})^n \sim \sum_{\substack{n_i>0\\ \sum_i n_i = n}} \# \prod_{i} \mathrm{tr}\,((\mathrm{x}\bar{\mathrm{y}})^{n_i})\,.
$$
The sum over partitions comes from a combinatoric argument. Then it's a simple exercise to show that $\mathrm{tr}\,((\mathrm{x}\bar{\mathrm{y}})^n) \propto T_n(x\cdot y)$.
My questions are
Is this identity known already?
If not, could you come up with some more direct argument to prove it?
$\;{}^1$ For example, $(1,1,1,2,2,3)$ is an integer partition of $n=10$ with $g_1 =3,\, g_2=2,\,g_3=1$.
Here is how to prove it with more standard methods. First of all, let me
restate your identity:
Definition. Let $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $. A
partition shall mean an integer
partition, i.e., a
weakly decreasing finite list of positive integers. If $\lambda$ is a partition
and $i$ is a positive integer, then $m_{i}\left( \lambda\right) $ shall mean
the number of times that $i$ appears as entry of $\lambda$. (For example,
$m_{3}\left( \left( 4,3,3,1\right) \right) =2$ and $m_{2}\left( \left(
4,3,3,1\right) \right) =0$.) The size $\left\vert \lambda\right\vert $ of
a partition is defined to be the sum of all entries of $\lambda$. If
$n\in\mathbb{N}$, then a partition of $n$ means a partition of size $n$. We
write "$\lambda\vdash n$" for "$\lambda$ is a partition of $n$".
Definition. We let $T_n\left(x\right)$ denote the Chebyshev polynomials of the first kind, which can be defined (e.g.) by the recurrence $T_0\left(x\right) = 1$ and $T_1\left(x\right) = x$ and $T_{n+1}\left(x\right) = 2x T_n\left(x\right) - T_{n-1}\left(x\right)$. We let $U_n\left(x\right)$ denote the Chebyshev polynomials of the second kind, which can be defined (e.g.) by the recurrence $U_0\left(x\right) = 1$ and $U_1\left(x\right) = 2x$ and $U_{n+1}\left(x\right) = 2x U_n\left(x\right) - U_{n-1}\left(x\right)$.
Theorem 1. For any $n\in\mathbb{N}$, we have
\begin{equation}
U_{n}\left( x\right) =\sum_{\lambda=\left( \lambda_{1},\lambda_{2}
,\ldots,\lambda_{k}\right) \vdash n}\left( \prod_{i=1}^{\infty}\dfrac
{1}{i^{m_{i}\left( \lambda\right) }m_{i}\left( \lambda\right) !}\right)
\cdot\prod_{j=1}^{k}\left( 2T_{\lambda_{j}}\left( x\right) \right) .
\end{equation}
To prove this, I will use two well-known generating-function identities for
Chebyshev polynomials, both of which appear on the
Wikipedia:
\begin{equation}
\sum_{n=0}^{\infty}T_{n}\left( x\right) t^{n}=\dfrac{1-tx}{1-2tx+t^{2}}
\label{darij1.eq.T-gen}
\tag{1}
\end{equation}
and
\begin{equation}
\sum_{n=0}^{\infty}U_{n}\left( x\right) t^{n}=\dfrac{1}{1-2tx+t^{2}
}.
\label{darij1.eq.U-gen}
\tag{2}
\end{equation}
These are identities in the ring $\left( \mathbb{Q}\left[ x\right] \right)
\left[ \left[ t\right] \right] $ of formal power series in the variable
$t$ over the polynomial ring $\mathbb{Q}\left[ x\right] $. Both identities can easily be derived from the above recurrent definitions of $T_n\left(x\right)$ and $U_n\left(x\right)$.
Now, subtracting the equality $\underbrace{T_{0}\left( x\right) }
_{=1}\underbrace{t^{0}}_{=1}=1$ from the identity \eqref{darij1.eq.T-gen}, we obtain
\begin{equation}
\sum_{n=1}^{\infty}T_{n}\left( x\right) t^{n}=\dfrac{1-tx}{1-2tx+t^{2}
}-1=t\cdot\dfrac{x-t}{1-2tx+t^{2}}.
\end{equation}
Dividing both sides of this by $t$, we obtain
\begin{equation}
\sum_{n=1}^{\infty}T_{n}\left( x\right) t^{n-1} = \dfrac{x-t}{1-2tx+t^{2}}.
\end{equation}
Integrating both sides of this equality over $t$, we find
\begin{align}
\sum_{n=1}^{\infty}T_{n}\left( x\right) \dfrac{t^{n}}{n} & =\int\dfrac
{x-t}{1-2tx+t^{2}}dt\nonumber\\
& =\dfrac{1}{2}\log\dfrac{1}{1-2tx+t^{2}}
\label{darij1.eq.T-ge2}
\tag{3}
\end{align}
(as you can easily check by differentiation). (Note that this identity also
appears on the Wikipedia, under the guise of $\sum_{n=1}^{\infty}T_{n}\left(
x\right) \dfrac{t^{n}}{n}=\log\dfrac{1}{\sqrt{1-2tx+t^{2}}}$, apparently
because someone finds square roots simpler than division by $2$.)
Multiplying both sides of the equality \eqref{darij1.eq.T-ge2}
by $2$, we obtain
\begin{equation}
2\sum_{n=1}^{\infty}T_{n}\left( x\right) \dfrac{t^{n}}{n}=\log\dfrac
{1}{1-2tx+t^{2}}.
\end{equation}
Hence,
\begin{equation}
\log\dfrac{1}{1-2tx+t^{2}}=2\sum_{n=1}^{\infty}T_{n}\left( x\right)
\dfrac{t^{n}}{n}=\sum_{n=1}^{\infty}2T_{n}\left( x\right) \dfrac{t^{n}}{n},
\end{equation}
so that
\begin{equation}
\dfrac{1}{1-2tx+t^{2}}=\exp\left( \sum_{n=1}^{\infty}2T_{n}\left( x\right)
\dfrac{t^{n}}{n}\right) .
\end{equation}
Hence, \eqref{darij1.eq.U-gen} becomes
\begin{equation}
\sum_{n=0}^{\infty}U_{n}\left( x\right) t^{n}=\dfrac{1}{1-2tx+t^{2}}
=\exp\left( \sum_{n=1}^{\infty}2T_{n}\left( x\right) \dfrac{t^{n}}
{n}\right) .
\label{darij1.eq.T-ge3}
\tag{4}
\end{equation}
Now, we recall one of the staple formulas of algebraic combinatorics (probably
in EC or Wilf or similar sources):
Proposition 2. Let $R$ be a commutative $\mathbb{Q}$-algebra (for example,
$\mathbb{Q}$ or $\mathbb{Q}\left[ x\right] $). Let $b_{1},b_{2},b_{3}
,\ldots\in R$ and $c_{0},c_{1},c_{2},\ldots\in R$ be such that
\begin{equation}
\sum_{n=0}^{\infty}c_{n}t^{n}=\exp\left( \sum_{n=1}^{\infty}b_{n}\dfrac
{t^{n}}{n}\right)
\end{equation}
in the ring $R\left[ \left[ t\right] \right] $ of formal power series.
Then,
\begin{equation}
c_{n}=\sum_{\lambda=\left( \lambda_{1},\lambda_{2},\ldots,\lambda_{k}\right)
\vdash n}\left( \prod_{i=1}^{\infty}\dfrac{1}{i^{m_{i}\left( \lambda\right)
}m_{i}\left( \lambda\right) !}\right) \cdot\prod_{j=1}^{k}b_{\lambda_{j}}
\end{equation}
for each $n\in\mathbb{N}$.
Proof of Proposition 2. An infinite sequence $\left( k_{1},k_{2}
,k_{3},\ldots\right) \in\mathbb{N}^{\infty}$ of nonnegative integers will be
called a weak composition if all but finitely many $i\geq1$ satisfy
$k_{i}=0$. There is a bijection
\begin{align}
\left\{ \text{partitions}\right\} & \rightarrow\left\{ \text{weak
compositions}\right\} ,\nonumber\\
\lambda & \mapsto\left( m_{1}\left( \lambda\right) ,m_{2}\left(
\lambda\right) ,m_{3}\left( \lambda\right) ,\ldots\right)
\label{darij1.pf.p2.1}
\tag{5}
\end{align}
(since any partition $\lambda$ is uniquely determined by the numbers
$m_{1}\left( \lambda\right) ,m_{2}\left( \lambda\right) ,m_{3}\left(
\lambda\right) ,\ldots$ which record how often each positive integer appears
in $\lambda$). We notice that any partition $\lambda$ satisfies
\begin{equation}
1m_{1}\left( \lambda\right) +2m_{2}\left( \lambda\right) +3m_{3}\left(
\lambda\right) +\cdots=\left\vert \lambda\right\vert
\label{darij1.pf.p2.2}
\tag{6}
\end{equation}
(because $\left\vert \lambda\right\vert $ is the sum of all entries of
$\lambda$, while $1m_{1}\left( \lambda\right) +2m_{2}\left( \lambda\right)
+3m_{3}\left( \lambda\right) +\cdots$ is what becomes of this sum after
equal addends are bunched together). Moreover, any partition $\lambda=\left(
\lambda_{1},\lambda_{2},\ldots,\lambda_{k}\right) $ satisfies
\begin{equation}
\prod_{i=1}^{\infty}b_{i}^{m_{i}\left( \lambda\right) }=\prod_{j=1}
^{k}b_{\lambda_{j}}
\label{darij1.pf.p2.3}
\tag{7}
\end{equation}
(for a similar reason: the product $\prod_{i=1}^{\infty}b_{i}^{m_{i}\left(
\lambda\right) }$ is what will become of the product $\prod_{j=1}
^{k}b_{\lambda_{j}}$ if you bunch factors corresponding to equal entries of
$\lambda$ together).
We have the following product rule (i.e., analogue of the distributivity law)
for infinite products of infinite sums: If $\left( a_{i,k}\right)
_{i\geq1\text{ and }k\geq0}$ is a family of elements of $R\left[ \left[
t\right] \right] $ satisfying $a_{i,0}=1$ for each $i\geq1$, then
\begin{equation}
\prod_{i=1}^{\infty}\sum_{k=0}^{\infty}a_{i,k}=\sum_{\substack{\left(
k_{1},k_{2},k_{3},\ldots\right) \text{ is a}\\\text{weak composition}}
}\prod_{i=1}^{\infty}a_{i,k_{i}},
\label{darij1.pf.p2.prodrule}
\tag{8}
\end{equation}
provided that everything formally converges (i.e., for each given
$N\in\mathbb{N}$, all but finitely many pairs $\left( i,k\right) \in\left\{
1,2,3,\ldots\right\} ^{2}$ satisfy
$t^N \mid a_{i,k}$ in $R\left[\left[t\right]\right]$).
We have
\begin{align}
\sum_{n=0}^{\infty}c_{n}t^{n} & =\exp\left( \sum_{n=1}^{\infty}b_{n}
\dfrac{t^{n}}{n}\right) =\prod_{n=1}^{\infty}\underbrace{\exp\left(
b_{n}\dfrac{t^{n}}{n}\right) }_{\substack{=\sum_{k=0}^{\infty}\dfrac{1}
{k!}\left( b_{n}\dfrac{t^{n}}{n}\right) ^{k}\\\text{(since }\exp
z=\sum_{k=0}^{\infty}\dfrac{1}{k!}z^{k}\text{)}}}\nonumber\\
& \qquad \left(\text{since $\exp\left(\sum_{i\in I} a_i\right) = \prod_{i\in I} \exp a_i$ for any family $\left(a_i\right)_{i\in I}$}\right)
\nonumber\\
& =\prod_{n=1}^{\infty}\sum_{k=0}^{\infty}\dfrac{1}{k!}\left( b_{n}
\dfrac{t^{n}}{n}\right) ^{k}=\prod_{i=1}^{\infty}\sum_{k=0}^{\infty}\dfrac
{1}{k!}\underbrace{\left( b_{i}\dfrac{t^{i}}{i}\right) ^{k}}_{=\dfrac
{b_{i}^{k}t^{ik}}{i^{k}}}\nonumber\\
& \qquad\left( \text{here, we have renamed the index }n\text{ as }i\right)
\nonumber\\
& =\prod_{i=1}^{\infty}\sum_{k=0}^{\infty}\dfrac{1}{k!}\cdot\dfrac{b_{i}
^{k}t^{ik}}{i^{k}}=\prod_{i=1}^{\infty}\sum_{k=0}^{\infty}\dfrac{b_{i}
^{k}t^{ik}}{i^{k}k!}\nonumber\\
& =\sum_{\substack{\left( k_{1},k_{2},k_{3},\ldots\right) \text{ is
a}\\\text{weak composition}}}\prod_{i=1}^{\infty}\dfrac{b_{i}^{k_{i}}
t^{ik_{i}}}{i^{k_{i}}k_{i}!}\nonumber\\
& \qquad\left( \text{by the product rule \eqref{darij1.pf.p2.prodrule},
applied to }a_{i,k}=\dfrac{b_{i}^{k}t^{ik}}{i^{k}k!}\right) \nonumber\\
& =\sum_{\lambda\text{ is a partition}}\underbrace{\prod_{i=1}^{\infty}
\dfrac{b_{i}^{m_{i}\left( \lambda\right) }t^{im_{i}\left( \lambda\right)
}}{i^{m_{i}\left( \lambda\right) }m_{i}\left( \lambda\right) !}}_{=\left(
\prod_{i=1}^{\infty}\dfrac{1}{i^{m_{i}\left( \lambda\right) }m_{i}\left(
\lambda\right) !}\right) \left( \prod_{i=1}^{\infty}b_{i}^{m_{i}\left(
\lambda\right) }\right) \left( \prod_{i=1}^{\infty}t^{im_{i}\left(
\lambda\right) }\right) }\nonumber\\
& \qquad\left(
\begin{array}
[c]{c}
\text{here, we have substituted }\left( m_{1}\left( \lambda\right)
,m_{2}\left( \lambda\right) ,m_{3}\left( \lambda\right) ,\ldots\right) \\
\text{for }\left( k_{1},k_{2},k_{3},\ldots\right) \text{ in the sum, due to
the bijection \eqref{darij1.pf.p2.1}}
\end{array}
\right) \nonumber\\
& =\sum_{\lambda\text{ is a partition}}\left( \prod_{i=1}^{\infty}\dfrac
{1}{i^{m_{i}\left( \lambda\right) }m_{i}\left( \lambda\right) !}\right)
\left( \prod_{i=1}^{\infty}b_{i}^{m_{i}\left( \lambda\right) }\right)
\underbrace{\left( \prod_{i=1}^{\infty}t^{im_{i}\left( \lambda\right)
}\right) }_{\substack{=t^{1m_{1}\left( \lambda\right) +2m_{2}\left(
\lambda\right) +3m_{3}\left( \lambda\right) +\cdots}\\=t^{\left\vert
\lambda\right\vert }\\\text{(by \eqref{darij1.pf.p2.2})}}}\nonumber\\
& =\sum_{\lambda\text{ is a partition}}\left( \prod_{i=1}^{\infty}\dfrac
{1}{i^{m_{i}\left( \lambda\right) }m_{i}\left( \lambda\right) !}\right)
\left( \prod_{i=1}^{\infty}b_{i}^{m_{i}\left( \lambda\right) }\right)
t^{\left\vert \lambda\right\vert }.
\label{darij1.pf.p2.6}
\tag{9}
\end{align}
Now, let $n\in\mathbb{N}$. Comparing coefficients of $t^{n}$ on both sides of
the equality \eqref{darij1.pf.p2.6}, we obtain
\begin{align*}
c_{n} & =\underbrace{\sum_{\substack{\lambda\text{ is a partition;}
\\\left\vert \lambda\right\vert =n}}}_{\substack{=\sum_{\lambda\vdash
n}\\\text{(since the partitions of }n\\\text{are precisely the partitions
}\lambda\\\text{with }\left\vert \lambda\right\vert =n\text{)}}}\left(
\prod_{i=1}^{\infty}\dfrac{1}{i^{m_{i}\left( \lambda\right) }m_{i}\left(
\lambda\right) !}\right) \prod_{i=1}^{\infty}b_{i}^{m_{i}\left(
\lambda\right) }\\
& =\sum_{\lambda\vdash n}\left( \prod_{i=1}^{\infty}\dfrac{1}{i^{m_{i}\left(
\lambda\right) }m_{i}\left( \lambda\right) !}\right) \prod_{i=1}^{\infty
}b_{i}^{m_{i}\left( \lambda\right) }\\
& =\sum_{\lambda=\left( \lambda_{1},\lambda_{2},\ldots,\lambda_{k}\right)
\vdash n}\left( \prod_{i=1}^{\infty}\dfrac{1}{i^{m_{i}\left( \lambda\right)
}m_{i}\left( \lambda\right) !}\right) \underbrace{\left( \prod
_{i=1}^{\infty}b_{i}^{m_{i}\left( \lambda\right) }\right) }
_{\substack{=\prod_{j=1}^{k}b_{\lambda_{j}}\\\text{(by
\eqref{darij1.pf.p2.3})}}}\\
& =\sum_{\lambda=\left( \lambda_{1},\lambda_{2},\ldots,\lambda_{k}\right)
\vdash n}\left( \prod_{i=1}^{\infty}\dfrac{1}{i^{m_{i}\left( \lambda\right)
}m_{i}\left( \lambda\right) !}\right) \prod_{j=1}^{k}b_{\lambda_{j}}.
\end{align*}
This proves Proposition 2. $\blacksquare$
Proof of Theorem 1. Recall the identity \eqref{darij1.eq.T-ge3}. Thus,
Proposition 2 (applied to $R=\mathbb{Q}\left[ x\right] $ and $c_{n}
=U_{n}\left( x\right) $ and $b_{n}=2T_{n}\left( x\right) $) yields that
\begin{equation}
U_{n}\left( x\right) =\sum_{\lambda=\left( \lambda_{1},\lambda_{2}
,\ldots,\lambda_{k}\right) \vdash n}\left( \prod_{i=1}^{\infty}\dfrac
{1}{i^{m_{i}\left( \lambda\right) }m_{i}\left( \lambda\right) !}\right)
\cdot\prod_{j=1}^{k}\left( 2T_{\lambda_{j}}\left( x\right) \right)
\end{equation}
for each $n\in\mathbb{N}$. This proves Theorem 1. $\blacksquare$
After your proof I realized that this could be trivially generalized to general Gegenbauer polynomials by letting $U_n(x)\to G_n^{(\alpha)}(x)$ and $2T_{\lambda_j}(x) \to 2\alpha T_{\lambda_j}(x) $. Furthermore, for $\alpha = (d-2)/2$, $d\in\mathbb{N}$ I can prove it in my way too.
Why only for the canonical scaling dimension $\alpha$? There is no reason to be afraid of fractional free fields.
I was about to show the details of my proof in a self-answer but I did not have time today. Essentially I want to express the left hand side as a contraction of $d$ dimensional, rank $n$ symmetric traceless tensors and the right hand side as traces of products of matrices belonging to the Clifford algebra in $d$ dimensions. While I believe these concepts can be generalized to any $d$, I have no rigorous way of doing so.
I think I can sketch a shorter proof.
Let $z_j = x_j+x_j^{-1}$, and let $p_m$ and $h_m$ denote the power-sum and complete homogeneous symmetric polynomial.
Then (see e.g p.3 in this preprint)
$$
2 T_m(z_j/2) = p_m(x_j,x_j^{-1})
\text{ and }
U_m(z_j/2) = h_m(x_j,x_j^{-1})
$$
Now, we can use the Newton identities, to express $h_m$
in terms of the power-sum symmetric functions.
This gives a relation between the $U_m$ and the $T_m$.
Looking at your formula, it is very similar to the Newton identity.
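These two building blocks are easy to check symbolically; here is a quick SymPy sketch (my own check, assuming SymPy) of the $p_m$/$h_m$ dictionary above, with a single variable $x_j = t$:
```python
from sympy import symbols, chebyshevt, chebyshevu, expand, Add

t = symbols('t', positive=True)
zh = (t + 1/t)/2                   # this is z_j/2 with x_j = t

for m in range(1, 8):
    p_m = t**m + t**(-m)                                # power sum p_m(t, 1/t)
    h_m = Add(*[t**(m - 2*j) for j in range(m + 1)])    # complete homogeneous h_m(t, 1/t)
    assert expand(2*chebyshevt(m, zh) - p_m) == 0
    assert expand(chebyshevu(m, zh) - h_m) == 0
print("2*T_m = p_m and U_m = h_m check out for m = 1, ..., 7")
```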
+1. I agree this is probably the best way to understand the identity. Another relevant reference is the article "Miraculous cancellations for quantum SL2" by Francis Bonahon, https://arxiv.org/abs/1708.07617
Ah -- I actually got my proof by reverse engineering a symmetric functions argument, but I wasn't aware of you proving the necessary formulas in your preprint!
|
2025-03-21T14:48:29.844642
| 2020-02-14T15:35:23 |
352704
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Sergei Akbarov",
"https://mathoverflow.net/users/18943",
"https://mathoverflow.net/users/58682",
"yada"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626323",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352704"
}
|
Stack Exchange
|
Equality of topologies on the space of positive Radon measures
Let $X$ be a locally compact Hausdorff space, $C_0(X)$ the Banach space of continuous functions vanishing at infinity, $M(X) := C_0(X)'$ the space of Radon measures and $M^+(X) \subseteq M(X)$ the positive finite Radon measures.
On $M(X)$, denote by $w^*$ the weak$^*$ topology (relative to $C_0(X)$) and by $\tau$ the topology of uniform convergence on norm compact sets of $C_0(X)$, so that $w^* \subseteq \tau$. It is known that $\tau$ coincides with the topology of uniform convergence on norm null sequences (by a theorem of Grothendieck, every norm compact set in a Banach space is contained in the absolutely convex closure of a norm null sequence).
Is it true that on $M^+(X)$ it holds $w^* = \tau$?
So, we have to show that for nets $\mu_\alpha, \mu \in M^+(X)$ with $\mu_\alpha f \to \mu f$ for each $f \in C_0(X)$ it also holds $\sup_n |(\mu_\alpha - \mu) f_n| \to 0$ for each sequence $f_n \in C_0(X)$ with $f_n \geq 0$ and $\lVert f_n \rVert \to 0$.
I have read somewhere that this is true for a compact space $X$. So, it may be also true for a locally compact Hausdorff space. But since the above condition involves sequences, I think, one has to restrict to $\sigma$-compact or paracompact spaces $X$.
Edit: Here is the proof for compact $X$:
Let $\mu_\alpha \to \mu$ for $w^*$ in $M^+(X)$. Since $X$ is compact, $1_X \in C_0(X) = C(X)$. From $\mu_\alpha 1_X \to \mu 1_X$, there is $\alpha_0$ such that $0 \leq \mu_\alpha 1_X \leq \mu 1_X + 1 =: c$ for all $\alpha \geq \alpha_0$.
Then for any $f \in C(X)$: $|\mu_\alpha f| \leq \lVert \mu_\alpha \rVert \cdot \lVert f \rVert = \mu_\alpha 1_X \cdot \lVert f \rVert \leq c \lVert f \rVert$ for all $\alpha \geq \alpha_0$. Therefore, $\{ \mu_\alpha \mid \alpha \geq \alpha_0 \}$ is $w^*$-bounded. By Banach-Alaoglu, this set is $w^*$-relatively compact and since $\tau$ and $w^*$ coincide on $w^*$-compact sets (because $C(X)$ is complete) it follows that $\mu_\alpha \to \mu$ for $\tau$.
For non-compact $X$, I think, $1_X$ should be replaced by some strictly positive function in $C_0(X)$, and these do exist, if $X$ is paracompact - have to think about it.
Is this true for compact $X$?
@SergeiAkbarov I added the proof for compact $X$.
yadaddy, it seems to me it would be better to write $\mu(f)$ instead of $\mu f$, because the latter can be confused with the measure $f(x)\mu(dx)$.
For measures with density I see people often writing $f \mu$.
If they distinguish $f\mu$ and $\mu f$, this also looks confusing.
Can we say that if $\mu_\alpha \to \mu$ for $w^*$ and $\mu_\alpha \geq 0$ that $\mu_\alpha(X)$ is Cauchy, i.e. converges to some point? If this is true, then we can proceed as in the compact case: $\mu_\alpha(X)$ is eventually bounded so that then $\{ \mu_\alpha \mid \alpha \geq \alpha_0 \}$ is $w^*$-bounded for some $\alpha_0$, hence $w^*$-relatively compact and then $\mu_\alpha \to \mu$ for $\tau$ because $C_0(X)$ is complete. Note that in general, convergent nets of signed measures (in $M(X)$) need not be bounded and moreover not even eventually bounded. But here, we have positive measures.
It would be enough to just deduce $\mu_\alpha(X)$ eventually bounded from $\mu_\alpha \to \mu$ for $w^*$, $\mu_\alpha \geq 0$ - Cauchyness of $\mu_\alpha(X)$ is not required.
For a general locally compact space $X$ the condition $\mu_\alpha \to \mu$ for $w^*$, $\mu_\alpha \geq 0$ does not imply that $\mu_\alpha$ is eventually uniformly bounded, see here: https://math.stackexchange.com/a/3602015/167838
For the case $\mu = 0$ one can proceed as follows.
For a given sequence $f_n \in C_0(X)$, $f_n \geq 0$ with $\lVert f_n \rVert \to 0$ construct a function $g \in C_0(X)$ such that $f_n \leq g$ for all $n$.
Then $|\mu_\alpha f_n| = \mu_\alpha f_n \leq \mu_\alpha g$ for each $n$ since $\mu_\alpha \geq 0$ and $f_n \geq 0$. It follows that $\sup_n |\mu_\alpha f_n| \leq \mu_\alpha g \to 0$.
Construction of $g$: From $\lVert f_n \rVert = \sup_{x \in X} |f_n(x)| \to 0$ we can iteratively construct a sequence of indices $0 \leq n_1 < n_2 < n_3 < \dots$ such that $\lVert f_n \rVert \leq \frac{1}{k}$ for all $n \geq n_k$.
(0) For the finite initial part $f_0, \dots, f_{n_1-1} \in C_0(X)$ there is a compact $K_0 \subseteq X$ such that $f_0, \dots, f_{n_1-1} \leq 1$ on $X \setminus K_0$ and $\leq M$ on $K_0$ for some $M \geq 1$.
Define $g_0(x) := M$ for all $x \in X$. Then $f_n \leq g_0$ for all $n \in \mathbb{N}$.
(1) For $n \geq n_1$ we know that $f_n \leq 1$ on $X$. Take any relatively compact open $U_0 \supseteq K_0$. Define $g_1 : X \to \mathbb{R}$ as follows. For $x \in K_0$ set $g_1(x) := g_0(x)$. For $x \in X \setminus U_0$ set $g_1(x) := 1$. Extend the so-defined function $g_1$ on $K_0 \cup (X \setminus U_0)$ to a continuous function $g_1$ defined on $X$ satisfying $1 \leq g_1 \leq g_0$ as follows: there is a continuous function $\psi_1 : X \to \mathbb{R}$ such that $\psi_1 = 1$ on $K_0$, $\psi_1 = 0$ on $X \setminus U_0$ and $0 \leq \psi_1 \leq 1$. Then $g_1(x) := g_0(x) \cdot \psi_1(x) + 1 \cdot (1-\psi_1(x))$ defined for $x \in X$ is the desired continuous extension. It holds $1 \leq g_1 \leq g_0$ on $X$, $g_1 = g_0$ on $K_0$ and $g_1 = 1$ on $X \setminus U_0$. Observe that $f_n \leq g_1$ on $X$ for all $n \in \mathbb{N}$.
(2) For $n \geq n_2$ we know that $f_n \leq \frac{1}{2}$ on $X$. For the finite collection $f_0, \dots, f_{n_2-1} \in C_0(X)$ there is a compact $K_1 \subseteq X$ such that $f_0, \dots, f_{n_2-1} \leq \frac{1}{2}$ on $X \setminus K_1$. We can assume that $U_0 \subseteq K_1$ by potentially enlarging $K_1$. Take any relatively compact open neighborhood $U_1$ of $K_1$. Define $g_2 : X \to \mathbb{R}$ as follows. For $x \in K_1$ set $g_2(x) := g_1(x)$. For $x \in X \setminus U_1$ set $g_2(x) := \frac{1}{2}$. Extend the so-defined function $g_2$ on $K_1 \cup (X \setminus U_1)$ to a continuous function $g_2$ defined on $X$ satisfying $\frac{1}{2} \leq g_2 \leq g_1$ as in step (1).
Observe that $f_n \leq g_2$ on $X$ for all $n \in \mathbb{N}$. In fact, from $f_n \leq g_1$ on $X$ for all $n \in \mathbb{N}$ we get $f_0, \dots, f_{n_2-1} \leq g_1 = g_2$ on $K_1$, so that with $f_0, \dots, f_{n_2-1} \leq \frac{1}{2} \leq g_2$ on $X \setminus K_1$
we get $f_0, \dots, f_{n_2-1} \leq g_2$ on $X$. For $n \geq n_2$ we already know that $f_n \leq \frac{1}{2} \leq g_2$ on $X$.
We can now proceed iteratively. This yields a sequence $g_k \in C_b(X)$ satisfying $0 \leq g_k \leq M$ on $X$. Since $g_k$ is pointwise decreasing it follows that $g_k(x)$ converges for any $x \in X$. Define $g(x) := \lim_{k \to \infty} g_k(x)$ for any $x \in X$. From $f_n \leq g_k$ on $X$ for all $n$ and all $k$ it follows that $f_n \leq g$ for all $n$. Finally, to see that $g \in C_0(X)$ let $\varepsilon > 0$ and take any $k > \frac{1}{\varepsilon}$. By construction, we have $g_k = \frac{1}{k}$ on $X \setminus U_{k-1}$. Then from $K_k \supseteq U_{k-1}$ it follows that $g \leq g_k = \frac{1}{k} < \varepsilon$ on $X \setminus K_k$.
EDIT: It would be interesting to know whether a similar proof applies for a general $\mu \geq 0$ - I expect that we then need also to approximate the integrals $\mu f$ in a suitable way (this is obviously not necessary for the case $\mu = 0$).
|
2025-03-21T14:48:29.845157
| 2020-02-14T15:53:03 |
352707
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626324",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352707"
}
|
Stack Exchange
|
Logarithmic Sobolev growth of time-space-periodic Schrödinger solutions
Consider the following Schrödinger equation
$$i\partial_t \psi (t,x) + \Delta \psi - V(t,x) \psi = 0 \, ,$$
where $x\in \mathbb{T}^d$ and $V(t,\cdot)$ is real, smooth, and periodic (with a diophantine condition on the periodicity wrt $\mathbb{T}^d$).
Question: Is there a potential $V$, a dimension $d$, a constant $s>0$, and an initial condition $\psi _0 (x)\in H^s(\mathbb{T}^d)$ such that $\|\psi(t,\cdot)\|_{H^s} \sim \log[2+t]^{C(s)}$ for all $t>0$?
Background:
Bourgain, Wang and others showed that while this equation preserves the $L^2_x$ norm, under some general conditions, $H^s$ norms with $s>0$ can grow at most logarithmically, e.g., in $d=1$
$$\|\psi(t,\cdot) \|_{H^s_x} \lesssim \log [2+t]^{C s} \, ,$$
for some $C>0$. A similar statement holds for small potentials in $d=2$.
The best result in ways of a lower bound is in Bourgain too: for $d=1$ and any $s>0$, there is a sequence of $t_j\to \infty$, $ V_{j}(t,x)$, and $\psi_{0,j} (x)$ such that $\|\psi_j (t_j, \cdot)\|_{H^s} \sim \log[2+t_j]^{C(s)}$, where $\psi_j$ is the solution corresponding to the potential $V_j$ with initial condition $\psi_{0,j}$. See remark 1.11 at Maspero and Robert. Whether such growth is possible for a fixed equation is unclear.
|
2025-03-21T14:48:29.845272
| 2020-02-14T15:54:28 |
352708
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Johannes Hahn",
"Nate Eldredge",
"Piotr Hajlasz",
"https://mathoverflow.net/users/121665",
"https://mathoverflow.net/users/3041",
"https://mathoverflow.net/users/4832",
"https://mathoverflow.net/users/95282",
"user95282"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626325",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352708"
}
|
Stack Exchange
|
Any kind of result giving a sufficient condition for when a measure arises from the Riesz representation theorem?
Is there any sort of result known that gives a set of conditions on a measure space which are sufficient for it to be such that it arises from a linear functional on a locally compact Hausdorff space via the Riesz representation theorem? Or more generally is there a good place for me to look for work on classification of measure spaces?
I guess for starters, you need some condition for a measurable space to be the Borel $\sigma$-algebra of some LCH topology. I would be interested in that in itself.
The "bible" of measure theory is Fremlin's book. It's dense but you could look there.
@PiotrHajlasz I think the question is about the "if you have a measure on a LCH space" part. How do you know that you are in that situation? If I were to give you any measure space, how would you decide whether or not the sigma algebra is the borel-sets of some LCH space? And can you decide it in a way that gives you enough information to see if my measure is Radon or not?
@PiotrHajlasz How do get $C_0(X)$, if you only know $(X,\Sigma,\mu)$ ? In particular: How do you get the topology from the measure? In general you can't, since the topology is not uniquely determined from the measure space and there are measure spaces that do not come from a topological space at all. So how do you decide whether you're in such a case? And if you're not, how do you decide if one of the possible compatible topologies is LCH ?
@JohannesHahn OK. I misunderstood the question. I was not a careful reader. I am deleting my stupid comments.
One necessary condition is that the measure must be perfect; or, if we are dealing with not necessarily finite measure spaces, the restriction to every set of finite measure must be perfect. For a completion of a countably generated space with a finite measure this is also sufficient. For more on perfect measures, see Chapter 52 in Fremlin's Measure Theory.
|
2025-03-21T14:48:29.845409
| 2020-02-14T17:02:57 |
352712
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626326",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352712"
}
|
Stack Exchange
|
Describing modifications using limits
It is well know that in (1-)category theory, one can describe the set of natural transformations between two functors by an end formula. I would like to know whether some similar description is available for modifications between two pseudonatural transformations. More precisely:
Let $\mathcal{C}$ and $\mathcal{D}$ be (small) bicategories, $F,G:\mathcal{C}\rightarrow \mathcal{D}$ be pseudofunctors and $a,b:F\Rightarrow G$ be two pseudonatural transformations.
Is there a way to describe the set of modifications from $a$ to $b$ using some end formula?
|
2025-03-21T14:48:29.845476
| 2020-02-14T17:06:37 |
352713
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"ManUtdBloke",
"Nemo",
"https://mathoverflow.net/users/152373",
"https://mathoverflow.net/users/82588"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626327",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352713"
}
|
Stack Exchange
|
Trying to bound the generalized hypergeometric function ${}_2F_3(x+1,x+1;1,1,1;\alpha)$ as $x\to \infty$?
(See also edit below)...
I am trying to get a nice, explicit, bound on the hypergeometric function
$$
{}_2F_3(a_1,a_2;b_1,b_2,b_3;\alpha),
$$
in the case of a large parameter. In particular I am interested in the case where
$$
{}_2F_3(x+1,x+1;1,1,1;\alpha), \quad \quad x \to \infty.
$$
I found this paper that shows how we can decompose hypergeometric functions into sums of hypergeometric functions of lower orders, when they have integer parameters differences.
Using the method in the paper I decomposed the above function as
\begin{align*}
{}_2F_3(x+1,x+1;1,1,1;\alpha) & = \sum_{j=0}^x {x\choose j} {}_1F_2(x+1+j;1+j,1+j;\alpha) \frac{(x+j)!}{x!(j!)!} \alpha^j \\
\end{align*}
I then bound the binomial coefficient with ${x\choose j}\le (ex)^j(1/j)^j$, and applied Sterling's approximation $(1/j)^j \le 1/(j!e^j)$ to get:
\begin{align*}
{}_2F_3(x+1,x+1;1,1,1;\alpha) & \le \sum_{j=0}^x x^j {}_1F_2(x+1+j;1+j,1+j;\alpha) \frac{(x+j)!}{x!(j!)^4} \alpha^j \\
\end{align*}
However, while the hypergeometric function has now been reduced, I don't seem to be any closer to being able to get a nice explicit expression.
Somewhat surprisingly, upon running some numerics, simply neglecting the ${}_2F_1$ seems to still give an excellent approximation (yellow line in image):
\begin{align*}
{}_2F_3(x+1,x+1;1,1,1;\alpha) & \lessapprox \sum_{j=0}^x x^j \frac{(x+j)!}{x!(j!)^4} \alpha^j \\
\end{align*}
Here, $\alpha=1/4$ was used.
But even with that simplification, I don't seem to be much better off when it comes to letting $x\to \infty$ and finding an explicit bound; I am looking for bound that doesn't feature a series/integral/special function.
The function looks pretty straightforward on the plot so I would hope it is possible to get a nice bound for it. Apart from my attempt above, I have looked through a lot of literature for useful identities/techniques but haven't been able to find anything.
So is it possible to get a nice explicit bound on this function?
EDIT:
Ok, it seems there is another simplification (numerically at least for the moment); it seems we can 'transfer' the $x$ and reduce (blue line in image)
$$
{}_2F_3(x+1,x+1;1,1,1;\alpha)
$$
to (green line in image):
$$
{}_0F_3(;1,1,1;x^2\alpha)
$$
So this suggests that it may be possible to get a nice bound on ${}_2F_3(x+1,x+1;1,1,1;\alpha)$ if we can find an asymptotic representation of ${}_0F_3(;1,1,1;x^2\alpha)$ for large x?
Note that ${}_0F_3(;1,1,1;x^2\alpha) = \sum_{j=0}^\infty (1/j!)^4 (x^2\alpha)^j$.
You can find the asymptotics for large $x$ by using the method of steepest descent.
You have done much of the work for yourself. Here is the last missing step:
Writing the $_2 F_3$ as integral (see, e.g., http://dlmf.nist.gov/16.5.E1):
$$
_2 F_3(x+1,x+1;1,1,1;\alpha)=\Gamma(x+1)^{-2}\ \frac{1}{2 \pi i}\int_{\cal{L}} d s \frac{\Gamma(x+1+s)^2}{\Gamma(s+1)^3} \ \Gamma(-s) \ (-\alpha)^s,
$$
where the path $\cal{L}$ is described in the source given above. Then we insert the approximation for large $x$ (see, e.g., http://dlmf.nist.gov/5.11.E12)
$$
\left(\frac{\Gamma(x+1+s)}{\Gamma(x+1)}\right)^2 \sim x^{2 s} .
$$
Therefore for large $x$
$$
_2 F_3(x+1,x+1;1,1,1;\alpha)\sim \int_{\cal{L}} d s \ \Gamma(s+1)^{-3} \ \Gamma(-s) \ (-\alpha x^2 )^s = \ _0 F_3(;1,1,1;x^2 \ \alpha),
$$
which gives you the desired asymptotic similarity.
Thanks, I also found an alternative explanation in the meantime. To get the relationship, we can apply the formula in 16.8(iii) on this page twice.
I found large argument asymptotics in for ${}_0F_3$ in this book, specifically Formula 16.11.9.
Using these, I obtained
\begin{align}
{}_2F_3(x+1,x+1;1,1,1;\alpha) \approx {}_0F_3(;1,1,1;x^2\alpha) \sim Ce^{4(x^2 \alpha)^{1/4}}, \quad x\to \infty,
\end{align}
where $C > 0$. Numerics confirm it.
Now I just need to find out why ${}_2F_3(x+1,x+1;1,1,1;\alpha) \approx {}_0F_3(;1,1,1;x^2\alpha)$...
|
2025-03-21T14:48:29.845719
| 2020-02-14T17:40:39 |
352717
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626328",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352717"
}
|
Stack Exchange
|
Is each preseparable topological group narrow?
A topological group $G$ is defined to be
$\bullet$ precompact if for any neighborhood $U\subseteq G$ of the unit there exists a finite subset $F\subseteq G$ such that $G=UF$;
$\bullet$ narrow if for any neighborhood $U\subseteq G$ of the unit there exists a countable subset $S\subseteq G$ such that $G=US$;
$\bullet$ separable if there exists a countable subset $S\subseteq G$ such that for any neighborhood $U\subseteq G$ of the unit we have $G=SU$;
$\bullet$ preseparable if there exists a countable subset $S\subseteq G$ such that for any neighborhood $U\subseteq G$ of the unit there exists a finite subset $F\subseteq G$ such that $G=SUF$.
Let us observe the following facts concerning those concepts:
A topological group is preseparable if it is precompact or separable.
More generally, a topological group $G$ is preseparable if $G$ contains a separable closed normal subgroup $H$ whose quotient group $G/H$ is precompact. In the latter case the group $G$ is also narrow.
Each preseparable abelian topological group is narrow.
For any cardinal $\kappa>\mathfrak c$, the Tychonoff power $\mathbb R^\kappa$ is an example
of a narrow abelian topological group which is not preseparable.
Problem. Is each preseparable topological group narrow?
Jan Pachl has informed me that the answer to this problem is affirmative and can be derived from the following helpful fact, proved in Lemma 3.31 of his book "Uniform spaces and measures". I also remember that a similar theorem was proved in the book "Topologies on groups determined by sequences" by I.Protasov and E.Zelenyuk.
Theorem. If a group $G$ is written as $G=\bigcup_{i=1}^nU_iA$ for some sets $U_1,\dots,U_n,A\subset G$, then $G=U_i^{-1}U_iB$ for some $i\in\{1,\dots,n\}$ and some set $B\subseteq G$ of cardinality $|B|\le f(n,|A|)$, where the function $f(n,\kappa)$ is defined by the recursive formula: $f(1,\kappa)=\kappa$ and $f(n,\kappa)=f(n-1,\kappa+\kappa^2)$. In particular, $f(n,\kappa)=\kappa$ for any infinite cardinal $\kappa$ and any $n\in\mathbb N$.
Proof. The proof is by induction on $n$. For $n=1$ it is trivial. Assume that the theorem is proved for all $k<n$. Write $G$ as $G=\bigcup_{i=1}^nU_iA$ for some sets $U_1,\dots,U_n,A\subset G$. If $U_n^{-1}U_nA=G$, then we are done. If $U_n^{-1}U_nA\ne G$, then we can choose a point $x\in G\setminus U_n^{-1}U_nA$ and conclude that $U_nx\cap U_nA=\emptyset$ and hence $U_nx\subset \bigcup_{i=1}^{n-1}U_iA$. Then $U_nA\subset \bigcup_{i=1}^{n-1}U_iAx^{-1}A$ and $G=\bigcup_{i=1}^{n-1}U_i(A\cup Ax^{-1}A)$. By the induction hypothesis, there exists $i\in\{1,\dots,n-1\}$ and a set $B\subset G$ of cardinality $|B|\le f(n-1,|A\cup Ax^{-1}A|)\le f(n-1,|A|+|A|^2)=f(n,|A|)$ such that $G=U_i^{-1}U_iB$.
|
2025-03-21T14:48:29.845887
| 2020-02-14T17:52:15 |
352720
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"LSpice",
"Sandeep Silwal",
"aglearner",
"asv",
"fedja",
"https://mathoverflow.net/users/1131",
"https://mathoverflow.net/users/13441",
"https://mathoverflow.net/users/16183",
"https://mathoverflow.net/users/17773",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/83122",
"kodlu"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626329",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352720"
}
|
Stack Exchange
|
Reference to a conjecture on unit vectors in Euclidean space
I have heard that there exists the following conjecture (if I am not mistaken).
Let $u_1,\dots,u_n$ be unit vectors in an $n$-dimensional Euclidean vector space. Then there exists another unit vector $x$ such that
$$\sum_{i=1}^n |( x,u_i)|\geq \sqrt{n}.$$
I am looking for a reference for this conjecture. Also I will be happy to know what is known about it.
@AlexandreEremenko, $4/\pi < \sqrt2$, so doesn't the bound you quote for $n = 2$ imply the one that @MKO wants?
That isn't a conjecture but a routine exercise assigned after the students learn about Bang's solution of the Tarski plank problem. The proof goes in 2 steps:
1) Consider all sums $\sum_j \varepsilon_i u_i$ with $\varepsilon_i=\pm 1$ and choose the longest one. Replacing some $u_j$ with $-u_j$ if necessary, we can assume WLOG that it is $y=\sum_i u_i$. Comparing $y$ with $y-2u_i$ (a single sign flip) we get
$$
\|y\|^2\ge \|y-2u_i\|^2=\|y\|^2-4\langle y,u_i\rangle+4\|u_i\|^2
$$
whence $\langle y,u_i\rangle\ge 1$ for all $i$. (That part is the main step in the solution of the plank problem).
2) Now we have $\|y\|^2=\sum_i\langle y,u_i\rangle\ge n$, so for $x=\frac y{\|y\|}$, we get
$$
\sum_i\langle x,u_i\rangle=\sqrt{\sum_i\langle y,u_i\rangle}\ge \sqrt n
$$
The End :-)
Thanks! It seems you even did not use that the dimension is equal to $n$.
What are the courses in which students learn about Bang's solution of the Tarsky plank's problem? Is there some online course like that?
@aglearner I sometimes include it into "History of Mathematics" (just because I believe it is more interesting and useful than, say, Roman numerals). "Explorations in modern math." might be another good place for it but there one cannot be sure that the students know vectors well enough. There is also "Euclidean geometry for teachers" but I taught that one only once and did something else. As to online courses, I just don't know.
That's very interesting. Do you write down notes for such courses?
@aglearner Alas, no. I'm too lazy :-)
@fedja see please related unanswered question https://mathoverflow.net/questions/352746/a-conjecture-or-theorem-on-unit-vectors-in-a-euclidean-space/353034#353034
(Too long for a comment).
Here is a way to get $\ge c \sqrt{n}$ for some constant $c$: First pick $x$ uniformly at random from the sphere and consider $\mathbf{E}|\langle x,u_1 \rangle|$. We can assume the first vector of the basis is $u_1$ and form the rest of the orthonormal basis. Then the expected value is just the absolute value of the first coordinate $|x_1|$.
To calculate this, we note that we can generate a random vector by taking a random gaussian and normalizing it. This means that
$$\mathbf{E}|\langle x,u_1 \rangle| = \int_0^{\infty} \mathbf{P}(|x_1| \ge t) \ dt \approx \int_0^{\infty} \mathbf{P}(g \ge t \sqrt{n})\ dt $$
where $g$ is a standard normal random variable. In the approximation step, we use strong concentration of chi-squared random variables to say the norm of a random gaussian vector concentrates around $\sqrt{n}$ (the details need to be spelled out but they should be straightforward). Finally, the tail of the gaussian tells us that $\mathbf{P}(g \ge t \sqrt{n}) \le \exp(-t^2n)$ so the integral evaluates to $c/\sqrt{n}$ for some fixed constant $c$.
Since the expected value is at least $c \sqrt{n}$, this tells us that there exists a $x$ for which the bound holds.
I don't understand the claim in your last sentence. How do you go from $c n^{-1/2}$ to $c n^{1/2}$?
$n^{-1/2}$ was for one particular $u$. By linearity of expectations we can multiply by $n$.
|
2025-03-21T14:48:29.846155
| 2020-02-14T19:15:50 |
352724
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626330",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352724"
}
|
Stack Exchange
|
Concentration in Markov chains
Consider a discrete state space $\mathcal{X}$. The expander Chernoff inequality gives subgaussian concentration for the sample mean $\frac1n \sum_{t=1}^n f(X_t)$ for some function $f : \mathcal{X} \to [0,1]$, where $X_t$ follows a discrete time stationary Markov chain. The variance parameter of the subgaussian concentration itself is proportional to the spectral gap of the chain. A small spectral gap would be implied by bottlenecks in the Markov chain (Cheeger's inequality) which would in turn make convergence slow.
Would I be right to think that the existence of bottlenecks would only be problematic if the chain is stationary? By this I mean: assume that the Markov chain is always started from a fixed state $X$, and compare $\frac1n \sum_{t=1}^n f(X_t)$ with its expectation, $\frac1n \mathbb{E}_{\substack{X_1 = X,\\ X_{i+1} \sim P(\cdot | X_i)}} \left[\sum_{t=1}^n f(X_i) \right]$. Can one expect subgaussian concentration with a variance parameter that does not depend on the spectral gap of the chain?
In the extreme case, where the Markov chain has two disjoint components, the stationary chain will never converge, since it may never reach the vertices in the other component. However, when started in a fixed state, the 2nd disjoint component can be thrown away since it appears neither in the observed sample paths, nor in the expectations.
Your ``extreme case'' is not a good indication of what happens in a slightly less extreme case. If the chain has a narrow bottleneck between two strongly connected components, for some time it will stay on one side of the bottleneck and you will see the sample mean approach the average of $f$ over this side; but at some random time the chain will cross and the sample mean will move away toward the average of $f$ over the other side. The sample mean should navigate between the two side averages, with only a very slow convergence to the overall average - the behavior will be pretty much the same as for the simplest such case, with two states $a,b$, a probability to stay in one state of $1-\epsilon$ and a probability to switch state of $\epsilon$. Now, the expectation assuming a fixed starting point $x_0$ will (slowly) converge to the overall average, but will not exhibit these oscillations since it averages over all possible switch times. Therefore, whenever the chain gets stuck long enough on one side, you will see a significant gap with the expectation. Such events will occur from time to time, and you cannot expect a strong concentration.
You might be able to get good concentration for a (random) large subset of times, though.
Also note that when your $f$ is specified, you can in some cases improve the spectral gap by choosing appropriately the norm you use on your function space.
|
2025-03-21T14:48:29.846368
| 2020-02-14T19:18:28 |
352725
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Moishe Kohan",
"Rahul Sarkar",
"https://mathoverflow.net/users/151406",
"https://mathoverflow.net/users/39654"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626331",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352725"
}
|
Stack Exchange
|
Cut out an open ball from a 2-manifold and glue the boundary
I have a possibly elementary question. Let $\mathcal{M}$ be a manifold with $\text{dim} \; \mathcal{M} = 2$. Let $U \subseteq \mathcal{M}$ be homeomorphic to $\overline{\mathcal{B}(0,1)}$, and let $\partial U = U \backslash \text{int} \; U$. Construct the topological space $\mathcal{N}$ by removing $\text{int} \; U$ and then identifying all points on $\partial U$. Is $\mathcal{N}$ homeomorphic to $\mathcal{M}$? Can someone provide a proof if this is true, or provide a counterexample?
The issue in my mind is that I am facing complications with is the fact that $\partial U$ is only homeomorphic to $S^{1}$, and not necessarily a $C^1$ curve (or even piecewise $C^1$). Note that it is well known that all 2 manifolds admit smooth structures, so it makes sense to talk about differentiability.
Hint: Try to prove this first for $M={\mathbb R}^2$ and $U$ a round disk. Then use the fact that $\partial U$ has a collar in $M$.
@MoisheKohan the existence of a collar neighborhood is precisely my issue. is it true that this collar exists in the complement of $\text{int} U$?
The following is theorem A1 in the paper by David Epstein, "Curves in 2-manifolds and isotopies", Acta Math, 1966.
Theorem. Let $M$ is a surface equipped with a PL structure. Then every topological embedding $f: S^1\to M$ is isotopic to a PL embedding. Moreover, isotopy takes place in an arbitrarily small neighborhood of $f(S^1)$.
Now, every topological surface $M$ admits a PL structure (Rado). Thus, we see every subset $A\subset M$ homeomorphic to $S^1$ has a collar: A neighborhood $N$ (which can be chosen arbitrarily close to $A$) homeomorphic to the annulus or the Moebius band, where $A$ is the "core curve". (Taking a suitable regular neighborhood of a PL curve isotopic to $A$.)
If $A$ bounds a topological disk in $M$, then the collar cannot be a Moebius band. Hence, in your situation, if $U\subset M$ is a subset homeomorphic to the closed disk, then $\partial U$ admits an annular collar. From this, it is easy to conclude that $(M- int(U))/\partial U$ is homeomorphic to $M$.
Note that this fails in dimensions $n\ge 3$: The quotient is not always a manifold. However, if you assume that $U\subset M$ has locally flat boundary, then $\partial U$ again admits a collar. This is Brown's theorem:
Morton Brown, "Locally flat imbeddings of topological manifolds", Annals of Mathematics, Vol. 75 (1962), p. 331-341.
|
2025-03-21T14:48:29.846566
| 2020-02-14T19:46:16 |
352726
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dmitry Vaintrob",
"Martin Brandenburg",
"https://mathoverflow.net/users/2841",
"https://mathoverflow.net/users/7108"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626332",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352726"
}
|
Stack Exchange
|
Free augmented algebras
What is the "correct" definition of a free augmented commutative algebra?
At least two definitions come to my mind:
Fix a commutative ring $k$. We need elements $\lambda_1,\dotsc,\lambda_n \in k$. They define an augmentation on the polynomial algebra $k[X_1,\dotsc,X_n]$ via $\varepsilon(X_i) := \lambda_i$. Let us denote this augmented commutative algebra by $k[X_1^{[\lambda_1]},\dotsc,X_n^{[\lambda_1]}]$. This satisfies the universal property (for every augmented commutative algebra $A$)
$$\mathrm{Hom}(k[X_1^{[\lambda_1]},\dotsc,X_n^{[\lambda_1]}],A) \cong \{a \in A^n : \varepsilon(a_1)=\lambda_1,\dotsc,\varepsilon(a_n)=\lambda_n\}.$$
So (in contrast to commutative algebras) there is no free augmented commutative algebra with $n$ generators: we need to know their values under the augmentation, and for each list of values there is a different universal solution. This is somewhat similar to the definition of free graded algebras, where for each generator we have to know its degree.
On the other hand, the category of augmented commutative algebras is equivalent to the category of non-unital commutative algebras: We map $A \mapsto \ker(\varepsilon)$, and $B \mapsto B^{+}$ (unitalization) in the other direction. The category of non-unital commutative algebras is finitary algebraic and hence has free objects in the usual way. Specifically, they are algebras of polynomials without a constant term, let's denote them by $k[X_1,\dotsc,X_n]_+$. The corresponding augmented commutative algebra is just $k[X_1,\dotsc,X_n]$ with $\varepsilon(X_i)=0$, so it is $k[X_1^{[0]},\dotsc,X_n^{[0]}]$ with the above notation. It is kind of strange that we only get this special case. Right?
Anyway, my motivation for asking is basically that I need a small-as-possible dense subcategory of the category of augmented commutative algebras. What is a good choice here? By the second approach above, the $k[X_1^{[0]},\dotsc,X_n^{[0]}]$ should be sufficient, but it obviously leaves out elements with non-zero augmentation. How can you explain this?
For any choice of $\lambda_1,\dots,\lambda_n$ there is an isomorphism:
$$ k[X_1^{[\lambda_1]},\dots,X_n^{[\lambda_n]} ] \simeq k[Y_1^{[0]},\dots,Y_n^{[0]} ] $$
Which is given by $X_i \leftrightarrow Y_i+e\lambda_i$ where $e$ is the unit.
So the two constructions actually give you the same objects.
This is what I was looking for :). It is quite easy, but for some reason I didn't see it.
The free functor is left adjoint to the forgetful functor. There are two forgetful functors from augmented algebras to vector spaces. One views the algebra as a vector space, the other removes the identity summand first. The second one is probably more natural (in particular, it commutes with products, which you would like to have if you are to define a left adjoint), and its adjoint is the ordinary free commutative algebra with standard augmentation.
+1 for the hint about products. So the "naive" forgetful functor here has actually no left adjoint. This is why I agree that the other forgetful functor, which maps $A \mapsto \ker(\varepsilon)$, is more natural. But still it is kind of strange that the "elements" of my augmented algebra should therefore be only those elements of augmentation $0$. In some sense, we are not allowed to treat the unit as an element...
Well there is an alternative point of view you might like better, which is to consider the forgetful functor from augmented algebras to augmented vector spaces (vector spaces with a map to k). This functor also commutes with products and its adjoint is the ordinary free commutative algebra, $V\mapsto k[V]$ (with natural augmentation). I suspect this may answer your original question better, since this augmentation is precisely your list of $\lambda_i.$
|
2025-03-21T14:48:29.846853
| 2020-02-14T20:08:09 |
352727
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Ben",
"David E Speyer",
"https://mathoverflow.net/users/150898",
"https://mathoverflow.net/users/297"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626333",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352727"
}
|
Stack Exchange
|
Set of orthogonal complements to open set in $Gr(k,\mathbb{C}^n)$ open in $Gr(n-k,\mathbb{C}^n)$?
$\DeclareMathOperator\Gr{Gr}$Consider $\mathbb{C}^n$ endowed with the Hermitian inner product $\langle u,v\rangle=u^*v$, and let $U \subseteq \Gr(k,\mathbb{C}^n)$ be a Zariski open dense subset of the Grassmannian of $k$ planes in $\mathbb{C}^n$. Is the set
\begin{align}
V=\{u^{\perp} | u \in U\}\subseteq \Gr(n-k,\mathbb{C}^n)
\end{align}
of orthogonal complements (under $\langle\cdot,\cdot\rangle$) open dense in $\Gr(n-k,\mathbb{C}^n)$? Or does it at least contain an open dense subset of $\Gr(n-k,\mathbb{C}^n)$?
If the bijection $\Gr(k,\mathbb{C}^n)\leftrightarrow \Gr(n-k,\mathbb{C}^n)$ given by $u \leftrightarrow u^\perp$ were an isomorphism of algebraic varieties then this would be obvious, but unfortunately it appears to only be an isomorphism when these are viewed as varieties over the reals.
Another idea is to somehow use Chevalley's theorem, although this result doesn't seem to hold over the reals.
Judging by the question you link to, your $u^{\perp}$ is orhogonality with respect to the Hermitian inner product. If it were orthogonality by the standard complex-linear inner product, then you would have an isomorphism of varieties as described that question. But the Hermitian orthogonal complement is the composition of the complex linear orthogonal complement and complex conjugation! Both are automorphisms of the Zariski topology, so the answer is yes!
@DavidESpeyer Ahh... so simple. Thank you!
|
2025-03-21T14:48:29.846988
| 2020-02-14T20:52:18 |
352730
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"JustWannaKnow",
"https://mathoverflow.net/users/150264"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626334",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352730"
}
|
Stack Exchange
|
Gaussian measure on function spaces
I'm reading this classic work and I'd like to get deeper inside some of its techniques. In particular, the authors state: "We construct a Gaussian measure $d\mu_{0}(\phi)$ on a measure space of continuous functions $\phi(x), x\in \Lambda \subset \mathbb{R}^{3}$ with covariance $u$:
\begin{eqnarray}
\int d\mu_{0}(\phi)e^{i\int f\phi} = e^{-\frac{1}{2}\int f u f} \tag{1}\label{1}
\end{eqnarray}
It is then straightforward to show that:
\begin{eqnarray}
e^{-\beta U} = \int d\mu_{0}(\phi) e^{i\sqrt{\beta}\sum_{\alpha}e_{i(\alpha)}\phi(x_{\alpha})}" \tag{2}\label{2}
\end{eqnarray}
First of all, how to construct such a Gaussian measure $d\mu_{0}$ on a space of continuous functions? Is it defined by condition (\ref{1}) or does (\ref{1}) follow as a consequence? Besides, how can we prove existence? Does anyone know any reference on this construction?
Second, equation (\ref{2}) seems to follow by taking $f = \sum e_{i(\alpha)}\delta(x_{\alpha})$. But how can we take such an $f$ is $f$ must be a continuous function rather than a distribution?
You should have a look at the book by Gelfand and Vilenkin
Generalized functions. Vol. 4: Applications of harmonic analysis
where they describe how to construct Gaussian measures on (duals) of nuclear spaces.
Thus, given an open set in $\newcommand{\bR}{\mathbb{R}}$ $D\subset \bR^n$ one begins by constructing a measure on the space $C^{-\infty}(D)$ of generalized functions on $D$. If the covariance kernel is sufficiently regular then this measure is concentrated one on a much smaller subspace.
Also, if you read French, I recommend this 1967 paper by Xavier Fernique. It is not the most comprehensive but I found it very helpful.
Finaly, there is V. Bogachev's book Gaussian Measures.
Just a quick answer for now. I would need to read carefully the definitions in the paper to be more precise.
In general you need the Bochner-Minlos Theorem which says there is a unique probability measure on Schwartz distribution for which (1) is satisfied. You can then convolve your random distribution $\psi$ by some nice continuous or smooth function to get a random disribution $\phi$ with law $\mu_0$. This relies on say $u$ being a convolution square.
Then to prove (2) you can use (1) for the law of $\psi$ and not $\phi$. The mollifier then hits the $\delta(x_{\alpha})$'s.
Also, one may construct $\phi$ directly as $\sum_{i} Z_i h_i$ where the $Z_i$ are iid standard Gaussians and the $h_i$ are suitable functions like perhaps eigenfunctions for the Laplacian.
Can you elaborate a little more, when you have a chance? It seems that this "space of continuous functions" must be, in fact, a schwartz space. I didn't follow you when you said to convolve $\psi$ with some function to get $\phi$.
|
2025-03-21T14:48:29.847325
| 2020-02-14T21:07:21 |
352731
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Benoît Kloeckner",
"Thomas",
"https://mathoverflow.net/users/102458",
"https://mathoverflow.net/users/4961"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626335",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352731"
}
|
Stack Exchange
|
Is the radial projection map area increasing?
Let $S$ be a hypersurface enclosed inside the unit sphere in $R^n$. Assume that every ray $\{t x: t \geq 0 \}$ intersects $S$ at most once.
Is it always true that ${\rm Area}(S) \leq {\rm Area}(P(S))$?
Here $P$ is the radial projection map onto $S^{n-1}$, i.e. $P(x) = x/\|x\|$.
(I am mostly interested in the 2-dimensional case.)
Thanks.
The answer is negative: the area of $P(S)$ is at most the area of the unit sphere, while the area of $S$ can be made arbitrarily high.
An $S$ contained in the unit sphere and star-shaped at $0$ can be parametrized by the radius in polar coordinates: $S=\{\phi(u)u : \lVert u\rVert=1\}$ where $\phi$ is any smooth function from the unit sphere to $(0,1)$. Now, the area of $S$ is something like
$$\int \phi^{n-1}\sqrt{\lVert \nabla \phi\rVert^2+1}$$
(a bit late here, so I might have gotten the formula wrong but in any case the integrand goes to infinity with $\nabla\phi$).
Taking $\phi$ with value in say $[\frac13,\frac23]$ and with a lot of variation (e.g. making fingers or wrinkles) we can easily make the area of $S$ arbitrarily high.
Yea, definitely. I was hoping that the extra condition on $S$ can avoid those 'fingers'.
If the fingers are straight, slightly conical, with axis containing the origin, then $S$ is star-shaped at $0$ as you asked.
Actually, you can make things more explicit, I'll edit my answer.
I think you are right. I need a stronger condition on S in order to have a hope.
Adding curvature bounds would certainly do the trick.
Here is another counterexample. This is in $\mathbb{R}^3$ for simplicity, but the same argument works in any dimension. Let
$$
S_\epsilon=\{(x,y,z):\, x^2+y^2\geq 0.01,\ z=\epsilon,\ z^2+y^2+z^2\leq 1\}.
$$
This is a disc parallel to the equator plane, $\epsilon$ above the equator, and with a small disc of radius $0.1$ removed. $P(S_\epsilon)$ is a small strip above the equator so $\operatorname{Area}(P(S_\epsilon))\to 0$ as $\epsilon\to 0^+$. Therefore
$$
\lim_{\epsilon\to 0^+}\frac{\operatorname{Area(P(S_\epsilon))}}{\operatorname{Area}(S_\epsilon)}=0.
$$
|
2025-03-21T14:48:29.847513
| 2020-02-14T21:57:08 |
352734
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Michael Engelhardt",
"https://mathoverflow.net/users/134299"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626336",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352734"
}
|
Stack Exchange
|
Schrodinger operator with magnetic field: eigenvalues
Consider the self-adjoint operator on $L^{2}(\mathbb{R}^{N})$,
$$H=-\frac{1}{2}(\nabla-iA)^{2}+V,$$
where $A\in C^{\infty}(\mathbb{R}^{N}, \mathbb{R}^{N} )$, $V\in C^{\infty}(\mathbb{R}^{N})$, $V\geq 0$ and $V(x)\rightarrow\infty$ as $|x|\rightarrow\infty$.
Does H have a purely discrete spectrum?
In the examples that immediately come to my mind, say $N=2$, $A=(y,-x)B/2$ (constant magnetic field), and $V=C(x^2 +y^2 )$ (harmonic oscillator), the spectrum is indeed discrete. I don't know where to point you for a general statement, though. What if $A$ is strong enough to overwhelm $V$ for $|x|\rightarrow \infty $? That might be a way to get a continuous part of the spectrum. It could be you need more conditions in that respect.
|
2025-03-21T14:48:29.847876
| 2020-02-14T22:14:17 |
352735
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626337",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352735"
}
|
Stack Exchange
|
Right approximations for special modules in Frobenius algebras
Let $A$ be a commutative Frobenius algebra (we can assume $A$ is also local) given by quiver and relations.
Let $M_i=A/p_iA$ be a module where $p_i$ is a path in Q.
Let $N:=A \oplus \bigoplus\limits_{i=1}^{r}{M_i}$.
Question: Is there a nice way to describe a minimal $add(N)$ right approximation of an indecomposable $A$-module $X$ (we can assume $X$ is a submodule of $X$ in case that helps)?
|
2025-03-21T14:48:29.847937
| 2020-02-14T22:53:41 |
352739
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"YCor",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/40804",
"mme"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626338",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352739"
}
|
Stack Exchange
|
Is the natural action of the monoid of endomorphisms is a complete invariant for group?
Let $\alpha$ and $\beta$ be actions of semigroups $A$ and $B$ on sets $X$ and $Y$ respectively. Recall that $\alpha$ and $\beta$ are called isomorphic if there exists an isomorphism $\phi$ between the semigroups and a bijection $\psi$ between the sets such that $\forall f \in A, x \in X : \psi (f (x))) = (\phi (f)) (\psi (x))$.
Let $G$ and $H$ be groups. Consider the natural actions of endomorphism monoids $\operatorname{End}(G)$ and $\operatorname{End}(H)$ on $G$ and $H$ respectively. Is it true that if these actions are isomorphic, then $G$ and $H$ are isomorphic?
I think there are many torsion-free abelian groups $A$ of given cardinal $\alpha$ (countable, or continuum) whose endomorphism ring is reduced to $\mathbf{Z}$ (see the references at this MathSE answer. I guess it can also be arranged that the maximal locally cyclic subgroups are cyclic . As a $(\mathbf{Z},\times)$-set, $A$ is then just a disjoint copy of ${0}$ and $\alpha$ copies of $\mathbf{Z}$ (the maximal cyclic subgroups) with the standard action.
The question seems interesting for finite groups, though.
For finite groups: I guess that for large $p$ (say $p\ge 7$) one can find examples among groups of exponent $p$ and order $p^7$ (or $p^8$). Some computer help would be useful. Namely take some known 1-parameter family of 7-dimensional Lie algebras, and take it over $Z/pZ$. Computer can help described the endomorphism semigroup and converge to the right example. Also there are lists of finite $p$-groups and computer can help too (e.g., among groups of order 16, 32, 64, etc).
|
2025-03-21T14:48:29.848070
| 2020-02-11T19:52:18 |
352483
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"David Pepper",
"LSpice",
"Piyush Grover",
"https://mathoverflow.net/users/134093",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/30684"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626339",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352483"
}
|
Stack Exchange
|
What kind of differential equation problem is this?
I have a function $f(x,t;k)$, a starting point $x_0$, a gradient $\operatorname{Grad}(f)$, and an equilibrium point $x^*$. I can adjust the parameter $k$ freely, and I know that for any $k$ the process will eventually reach $x^*$.
I want to know which value of $k$ will produce a trajectory that takes me to $x^*$ in the shortest time possible.
What kind of a differential equation problem is this? Is there a canonical solution? Is there a reference somewhere I can check to get more information?
No, I have to set k at the beginning of the process and then keep it constant. Otherwise, yes, it's a standard control problem.
What is 'the process' in "for any $k$ the process will eventually reach $x^*$"?
x starts at x_0 and then follows its gradient every period.
Ok, I misunderstood your post, nevermind. What you are looking for is steepest descent, which is a heavily studied topic. Without more knowledge of how k appears, it is hard to give a reference.
|
2025-03-21T14:48:29.848171
| 2020-02-11T20:06:47 |
352484
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Mateusz Kwaśnicki",
"Peter",
"Sangchul Lee",
"fedja",
"https://mathoverflow.net/users/108637",
"https://mathoverflow.net/users/1131",
"https://mathoverflow.net/users/15602",
"https://mathoverflow.net/users/82510"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626340",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352484"
}
|
Stack Exchange
|
Multidimensional random walk falling pointwise below some threshold
Consider a random walk $X_t = \sum_{s=1}^t D_s$ with i.i.d. increments $D_t \in \mathbb{R}^n$,such that $X$ is a martingale $\mathbb{E}[D_t]=\vec{0} \in \mathbb{R}^n$, the support of $D_t$ is bounded, and $D_{t,i}$ has strictly positive variance for all $i \in \{1,\ldots,n\}$.
Is it true that the probability that the random walk eventually pointwise falls below some threshold $k < 0$ equals $1$, i.e.
$$
\mathbb{P}[ \exists t \text{ such that } X_{t,i} \leq k \text{ for all } i \in \{1,\ldots,n\} ] = 1 \,.
$$
for all $k<0$?
Also posted here the question here
What do you mean by "$D_t$ has strictly positive variance?" Does it mean that the covariance matrix of $D_t$ is positive-definite?
By the central limit theorem and Borel-Cantelli, the probability that this happens infinitely often is positive, and by Kolmogorov's 0-1 law it is necessarily one.
@sangchulLee thanks a lot for this question that was indeed unclear. I meant the componentwise variance and adjusted the question.
If that means component-wise positive variance, it is not enough to capture the dimensionality of your random walk, and so, the answer may be false. For instance, consider the $n=2$ case with the step distribution $$\mathbb{P}(D_t=(1,-1))=\mathbb{P}(D_t=(-1,1))=\frac{1}{2}.$$ Then $\mathbb{E}[D_t]=\vec{0}$ and $\operatorname{Var}(D_{t,i})=1$ for each $i=1,2$, but the walk $(X_t)$ is constrained on the line $x+y=0$, which never touches the region $(-\infty,k)^2$ if $k<0$.
@SangchulLee Thanks a lot, I should have thought of this example. Do you know if positive definiteness or a full rank condition is enough to rule this out?
@MateuszKwaśnicki Indeed. But what happens if only the expectation of the increment exists? (assuming that $D$ is not contained in any hyperplane, of course).
@fedja: Good question! Obviously – yes, if $n = 1$ (the random walk is then recurrent). Also – yes, if the distribution of $X_1$ is in the domain of attraction of a stable law, by exactly the same argument as in my previous comment. I bet the answer is the same in the general case, but at this moment I fail to see a proof.
@fedja: I was wrong! The answer is no. A counter-example is too long for a comment; see my answer below.
An extended comment, answering the question asked by fedja in one of the comments. (Edited: In order to simplify the example I chose to work in dimension two, but this example requires dimension at least three.)
My bet was incorrect: I am very surprised to find that the answer is negative! In order to construct a counterexample (in dimension 3, for simplicity), consider two independent random walks: $A_n$ the simple random walk in $\mathbb{Z}$, and $B_n$ a random walk with symmetric $\alpha$-stable distribution in $\mathbb{R}^2$ for some $\alpha \in (1, 2)$. Fix $\epsilon > 0$. We have $$|B_n| \geqslant n^{1/\alpha - \epsilon}$$ for all $n$ large enough (see Corollary 2 in Takeuchi, doi:10.2969/jmsj/01620109). Furthermore, by LIL, $$A_n \ge -n^{1/2 + \epsilon}$$ for all $n$ large enough. Choosing $\epsilon > 0$ small enough, we find that for an arbitrary constant $p > 0$, $$p A_n + |B_n| \geqslant 0$$ for all $n$ large enough. In other words, with probability one the random walk $(A_n, B_n)$ visits the cone $\{(a, b) \in \mathbb{R} \times \mathbb{R}^2 : p a + |b| < 0\}$ finitely many times.
It remains to choose $p$ and rotate $(A_n, B_n)$ appropriately, so that the above cone fits into the negative octant. To be specific, we set $$X_n = A_n \vec{v}_1 + B_{n,1} \vec{v}_2 + B_{n,2} \vec{v}_3, $$ where $\vec{v}_1 = \tfrac{1}{\sqrt{3}} (1, 1, 1)$ and otherwise $\vec{v}_1, \vec{v}_2, \vec{v}_3$ are arbitrary orthonormal vectors in $\mathbb{R}^3$. Then, by an easy calculation, $$ \max\{X_{n,1}, X_{n,2}, X_{n,3}\} \geqslant \tfrac{1}{\sqrt{3}} A_n + \tfrac{1}{\sqrt{6}} |B_n| \ge 0 $$
for all $n$ large enough (by choosing $p = \sqrt{2}$ above). This means that there is $k < 0$ such that with positive probability we have $X_{n,j} \geqslant k$ for every $j = 1, 2, 3$ and every $n = 0, 1, \ldots$
|
2025-03-21T14:48:29.848492
| 2020-02-11T20:07:16 |
352485
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Carlo Beenakker",
"Display Name",
"Dmitry Vaintrob",
"Exit path",
"Francois Ziegler",
"Jannik Pitt",
"LSpice",
"Michael Engelhardt",
"Phil Tosteson",
"SBK",
"Sandeep Silwal",
"Sergei Akbarov",
"Sylvain JULIEN",
"Tom Copeland",
"Vincent",
"chiliNUT",
"https://mathoverflow.net/users/101861",
"https://mathoverflow.net/users/11260",
"https://mathoverflow.net/users/117393",
"https://mathoverflow.net/users/12178",
"https://mathoverflow.net/users/122587",
"https://mathoverflow.net/users/134299",
"https://mathoverflow.net/users/13625",
"https://mathoverflow.net/users/140919",
"https://mathoverflow.net/users/152248",
"https://mathoverflow.net/users/18943",
"https://mathoverflow.net/users/19276",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/41139",
"https://mathoverflow.net/users/52918",
"https://mathoverflow.net/users/7108",
"https://mathoverflow.net/users/83122"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626341",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352485"
}
|
Stack Exchange
|
Why is the Fourier transform so ubiquitous?
Many operations and equivalences in mathematics arise as some sort of Fourier transform. By Fourier transform I mean the following:
Let $X$ and $Y$ be two objects of some category with products, and consider the correspondence $X \leftarrow X \times Y \to Y$. If we have some object (think sheaf, function, space etc.) $\mathcal{P}$ over $X \times Y$ and another, say $\mathcal{F}$ over $X$, assuming the existence of suitable pushpull and tensoring operations, we may obtain another object over $Y$ by pulling $\mathcal{F}$ back to the product, tensoring with $\mathcal{P}$, then pushing forward to $Y$.
The standard example is the Fourier transform of functions on some locally compact abelian group $G$ (e.g. $\mathbb{R}$). In this case, $Y$ is the Pontryagin dual of $G$, $\mathcal{P}$ is the exponential function on the product, and pushing and pulling are given by integration and precomposition, respectively.
We also have the Fourier-Mukai functors for coherent sheaves in algebraic geometry which provide the equivalence of coherent sheaves on dual abelian varieties. In fact, almost all interesting functors between coherent sheaves on nice enough varieties are examples of Fourier-Mukai transforms. A variation of this example also provides the geometric Langlands correspondence
$$D(Bun_T(C)) \simeq QCoh(LocSys_1(C))$$
for a torus $T$ and a curve $C$. In fact, the geometric Langlands correspondence for general reductive groups seems to also arise from such a transformation.
By the $SYZ$ conjecture, two mirror Calabi-Yau manifolds $X$ and $Y$ are dual Lagrangian torus fibrations. As such, the conjectured equivalence
$$D(Coh(X)) \simeq Fuk(Y)$$
is morally obtained by applying a Fourier-Mukai transform that turns coherent sheaves on $X$ into Lagrangians in $Y$.
To make things more mysterious, a lot of these examples are a result of the existence of a perfect pairing. For example, the Poincaré line bundle that provides the equivalence for coherent sheaves on dual abelian varieties $A$ and $A^*$ arises from the perfect pairing
$$A \times A^* \to B\mathbb{G}_m.$$
Similarly, the geometric Langlands correspondence for tori, as well as the GLC for the Hitchin system, arise in some sense from the self-duality of the Picard stack of the underlying curve. These examples seem to show that non degenerate quadratic forms seem to be fundamental in some very deep sense (e.g. maybe even Poincaré duality could be considered a Fourier transform).
I don’t have a precise question, but I’d like to know why we should expect Fourier transforms to be so fundamental. These transforms are also found in physics as well as many other “real-world” situations I’m even less qualified to talk about than my examples above. Nevertheless I have the sense that something deep is going on here and I’d like some explanation, even if philosophical, as to why this pattern seems to appear everywhere.
Maybe one should not consider time and frequency like independent entities, but rather like complementary aspects of the same underlying "time-frequency", say like space and time in physics. As the Fourier transform is an automorphism of the Schwartz space, it may play a role analogous to the one played by Lorentz transformations in relativity.
one answer you may like: Fourier transform reduces the non-abelian world of linear, translation-invariant operators and matrices to the abelian world of scalars
I wonder if you also count the double fibration transforms of integral geometry (e.g. Radon, Penrose: pdf) as “Fourier” instances.
I do not understand your definition of Fourier transform in a category (the scheme with $X$, $Y$, $X\times Y$, etc.). Could you explain this in detail?
One point is that all "linear" transformations from things on X to things on Y, should take the form pull-tensor-push. This is just a generalization of the fact that every linear transformation between vector spaces is given by a matrix (think of X and Y as finite sets, and "things" as functions). Similar statements hold for "linear" functors between derived categories, or linear transformations between function spaces.
@PhilTosteson can this be formalized. at least for the simplest cases like $\mathbb T$ and $\mathbb Z$?
What do you mean by "morally" obtaining?
@Sergei A formalization for functions is: https://en.m.wikipedia.org/wiki/Schwartz_kernel_theorem. For categories of sheaves, there is a derived variant of the Eilenberg Watts theorem-- though it requires a dg enhancement or equivalent data to state it.
@SylvainJULIEN Also in quantum mechanics the Fourier transform is used to transform position into momentum.
@PhilTosteson this looks like a misunderstanding, there is no a formalization of what leibnewtz speaks about in your link. Is there a text with an accurate description of Fourier transform in categories, or at least with the idea illustrated by examples?
@Sergei Akbarov I don’t claim that there is a way to make the procedure absolutely rigorous. It just seems to be a pattern. For example, with quasicoherent sheaves over schemes $X$ and $Y$ the pullback and push forward operations are the usual ones and the tensor product is just the one for quasicoherent sheaves
leibnewtz, I actually have a very weak intuition in sheaves... What does this mean for functions? From what is written I have a feeling that one can formalize this more or less easily in simple examples, but I don't understand the idea. If $G$ is a finite abelian group, what are your constructions in this case?
@SergeiAkbarov For $f:G \to H$ a morphism of finite groups, pullback of functions is just precomposition with $f$ and pushforward will be adding up the fibers. That is, for a function $g: G \to k$ for some ring $k$, its pushforward $f_* (g)$ will be defined by the following formula: $f_*(g)(x)=\sum_{y \in f^{-1}(x)}g(y)$. The tensor product is just multiplication of functions. Notice this generalizes to locally compact abelian groups by replacing the sum with an integral (as long as we restrict to functions integrable on the fibers). In particular we recover the ordinary Fourier transform
I think it can be formalized in particular cases, I’m just not sure how to construct a definition that encompasses all cases
@chiliNUT the reason I say “morally” is because the fibrations might have singular fibers, so there are technical instructions to defining the Fourier transform - or so I’m told
leibnewtz, the impression is that this construction will present the Fourier transform only if you have two different functions ${\mathcal P}$ on $X\times Y$ for the operations of passage from $k^X$ to $k^Y$, and from $k^Y$ to $k^X$. This will correspond to the multipliers $e^{ixy}$ and $e^{-ixy}$ in the Fourier integrals.
@SergeiAkbarov Each one of those kernels will provide inverse Fourier transforms; one from $k^X$ to $k^Y$, and the other from $k^Y$ to $k^X$
From a probability perspective, the Fourier transform is the characteristic function which uniquely determines a distribution (under some mild constraints). The 'addition to multiplication' property of the Fourier transform then makes it easy to prove results about the sums of random variables, such as CLT.
question about fourier transform....Oh cool OK I'm an analyst maybe I'll check out this questi-oooooh boy. I mean this in a lighthearted way, but this is almost parody levels of MathOverflowness, a Fourier transform question about category theory, mirror symmetry, geoemtric langlands.
@T_M I know basically nothing about analysis, but the ordinary Fourier transform of $L^2$ functions is also an example of this pattern. I'd be happy to learn more about it
I wonder if the question could be extended to an algebro-geometric discussion of the ubiquity of special functions in general in mathematical physics and other realms of applied math along the lines followed in "A Catalogue of Sturm-Liouville differential equations" by W.N. Everitt. (Related background: "The influence of elasticity on analysis: The classic heritage" by C. Truesdell and "PDEs, ODEs, Analytic Continuation, Special Functions, Sturm-Liouville Problems and All That" by Burgess.)
From the point of view of physics, Fourier transforms are ubiquitous because they are expansions in eigenfunctions of the derivative operator - and the derivative operator is fundamental in many aspects. Just to give two examples: The derivative operator is the generator of translations (in space or time), and to learn about the natural world, it is crucial that translations are symmetry operations - how would we learn about the natural world if we couldn't reproduce experiments at different times in different places? Secondly, field theories rely on locality, i.e., degrees of freedom only interact with their immediate neighbors - this naturally leads to dynamics described by derivatives.
To add to this answer, exponential functions are also eigenfunctions of the time shift operator. This means that any operator that commutes with the time shift is diagonalized in a basis of exponentials, or in other words that the Fourier (or Laplace, which includes exponential growth and decay) transform is appropriate for all linear, time invariant systems. Every system is linear for small enough amplitudes and time-invariant for small enough times, so there you go.
@DisplayName - indeed, in physics, one often refers to "time shifts" as "translations in time" ...
There's something disturbing about the statement "every system is time-invariant for small enough times" that somehow doesn't disturb me about "every system is linear for small enough amplitudes."
@LSpice - Yes, that statement also struck me as a bit odd. Of course, there are plenty of systems that are neither time-invariant even for arbitrarily small times nor harmonic even for arbitrarily small amplitudes, so the "every system is" should be qualified in any case. But if we stick with the most common examples, which are indeed harmonic for small amplitudes, they usually are time-invariant altogether, with no "for small enough times" qualifier needed at all. And of course, as far as time evolution goes, we don't get to stick around at our favorite stationary point in time.
@MichaelEngelhardt Sticking around our favorite stationary point in time works if we are allowed to pick several stationary points in sequence and patch the local solutions together. When the excitation frequencies are much higher than the frequency of time varying this can even lead in the infinite limit to an important approximation technique that I forget the name of.
@DisplayName - What you describe sounds like what I would call an adiabatic approximation. Indeed, there are important systems for which this is applicable. It does require specific conditions to be fulfilled, as you note - so this is a bit different from saying, "Every system ..."
To add a representation theory perspective: if $G$ is a Lie group, and $f$ is a function (or more precisely a distribution) on $G$ then (under certain mild conditions on $f$ and $G$), the function $f$ is uniquely determined by its unitary matrix coefficients, i.e. the coefficients of the matrix $\rho(f)$ where $\rho:G\to GL_n$ goes over all isomorphism classes of unitary and irreducible representations. This perspective should be understood as a change of basis that reveals the "underlying equivariant properties" of a function $f$, i.e. the properties important from the point of view of representation theory.
Now the unitary irreducible representations of the additive group $\mathbb{R}$ are one-dimensional representations $\rho_\alpha: t\mapsto e^{i\alpha t}$ indexed by $\alpha\in \mathbb{R}$, and so the matrix coefficient decomposition of a function is precisely its Fourier transform. This hints that whenever you are interested in problems with additive equivariance (action by $\mathbb{R}$), you should expect to see Fourier transforms.
Your Fourier-Mukai example is an example of the same phenomenon "one category level higher". Namely, coherent sheaves over an algebraic group $G$ form a monoidal category under convolution. A partial analogue of "function $f$ on $G$ acts on line bundles on the stack $BG$" (i.e. invertible representations) is "coherent sheaf $F$ on $G$ acts on gerbes on $BG$". In the case of abelian varieties, gerbes on $BG$ are (more or less) the dual variety, and the "matrix coefficients" of this action turn out to be precisely a change of basis (in this case, an equivalence of categories, now given by Fourier-Mukai). For nonabelian groups, the situation is more complicated, since it's not enough to consider gerbes, and it's tricky to say exactly what is an irreducible module category over a monoidal category... but for any reasonable extension of this picture there will always be a "matrix coefficient" functor.
This is very interesting! Could you explain what you mean by $\rho(f)$? On the other hand, I'd be very interested in a definition of irreducible categorical representation (say for the category $D(G)$ of $D$-modules on $G$ acting on some linear category $\mathcal{C}$). Do you know if such a definition has been attempted in the literature?
$\rho$ is a function from elements of $G$ to matrices, and $\rho(f)$ is defined as $\int_G f(g)\rho(g)\, dg$ (the "weighted action" by $f$).
I don't know if anyone has tried to formulate what an irreducible categorical representation would be, though highest weight representations do have a clear categorification (equivariant categories of sheaves on $G/U$).
From the point of view of engineering, sin and cos are eigenfunctions of LTI (linear time-invariant) systems, which makes the Fourier transform eminently important for system theory - and thus for control theory, signal processing and many other fields that make use of LTI systems
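A minimal numerical sketch (my own, not part of the original answer) of the eigenfunction property: a complex exponential sent through a circular convolution, i.e. a discrete LTI system, comes back as the same exponential scaled by the corresponding DFT coefficient of the impulse response.

import numpy as np

# Complex exponentials are eigenvectors of circular convolution (a discrete
# LTI system): the output equals the input scaled by the frequency response.
N, k = 64, 5
n = np.arange(N)
x = np.exp(2j * np.pi * k * n / N)              # input exponential at frequency k

rng = np.random.default_rng(0)
h = rng.standard_normal(N)                      # some impulse response

y = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x))  # circular convolution h * x
H = np.fft.fft(h)                               # frequency response
print(np.allclose(y, H[k] * x))                 # True: y = H[k] * x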
Someone had to say it! +1
LTI systems can also have exponentially growing eigenfunctions, which is why the Laplace transform is used instead of the Fourier transform in many situations.
Another perspective on the connections among "Fourier" transforms and physical applications can be gleaned from the discussion in the MO-Q Explaining the Fourier-Mukai transform physically on constructive and destructive interference and the general relations among Green (or Green's) functions, associated convolutions/integral transforms, and impulse responses of a physical system as presented in the Wikipedia entry. A particularly far-reaching example of destructive and constructive interference in a superposition of elements of an eigenbasis is Huygens' principle. (For a quick intro to the connections between input and output of a system through convolutions and integral transforms, see Sec. 3.5, pp.53-65, of Mathemagics: A tribute to L. Euler and R. Feynman by Cartier on Heaviside's magic trilogy.)
More circuitously, on the differential/algebraic geometry side, the Laplace transform, a close relative of the classical Fourier transform with its own convolution theorem, is intimately related to compositional inversion and, therefore, the Legendre-Fenchel transform, both of which figure in Koszul duality of quadratic operads, diffeomorphism relations between quantum fields, the combinatorics of associahedra, cumulant-expansion theorems, free probability theory, and general algebraic geometry.
Another variant is the Mellin transform which plays key roles in analytic number and interpolation theory and the realm of finite differences.
A very important example of a Green/impulse response function is the sinc function, a central character in the Shannon sampling theorem and Cesaro summation of divergent series.
In addition to the Frenkel ref on the Langlands program in the linked MO-Q related to the Fourier-Mukai transform, there is the more recent paper "An analytic version of the Langlands correspondence for complex curves" by Etingof, Frenkel, and Kazhdan (https://arxiv.org/abs/1908.09677) eschewing sheaves for functions.
In Mathemagics, eqns. 51, 52, and 53 should have zero explicitly as the lower limits of integration of the integrals, identifying 51 as a Laplace convolution, 52 as a Laplace transform, and 53 as a Mellin transform evaluated at positive integers.
Central to the mystery of quantum mechanics is the interference of complex wave functions representing probability amplitudes. To speak in generalities of linear transformations, groups, and symmetries without accounting for interference effects is a sterile exercise w.r.t. characterizing QM and, more prosaically, coherent imaging systems. (When discussing conservation laws, invariants, and equivalencies, symmetries come to the forefront.)
Related: https://mathoverflow.net/questions/9834/heuristic-behind-the-fourier-mukai-transform?noredirect=1&lq=1
A signature property of the Fourier transform is that it converts convolution into multiplication. (This is crucial for real-world applications such as image processing, especially as in the discrete case the Fourier transform can be computed very quickly using the FFT.)
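As a quick illustration (my own sketch, not from the original answer), the convolution theorem can be checked numerically: circular convolution computed directly agrees with the inverse DFT of the pointwise product of the DFTs.

import numpy as np

# Circular convolution computed directly vs. via the DFT.
rng = np.random.default_rng(0)
N = 256
a, b = rng.standard_normal(N), rng.standard_normal(N)

direct = np.array([sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)])
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

print(np.allclose(direct, via_fft))   # True; the FFT route costs only O(N log N)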
Another significant property of the Fourier transform is that it measures how randomly a finite set is distributed, via the size of its Fourier coefficients.
This is applied in Roth's proof of the length-three case of Szemerédi's theorem: one can show that if all the Fourier coefficients of a set are small, then the set has approximately the right number of arithmetic progressions of length three.
Similarly, counting solutions to algebraic equations over finite fields uses the fact that the values of polynomials, including simple powers, are randomly distributed, and this can be measured using the Fourier transform and exponential sum estimates.
I think this will be unexpected, but from the point of view of stereotype theory, the Fourier transform is ubiquitous because it is an example of a general categorical construction -- the envelope. This is a formal construction that describes in the language of category theory different mathematical operations of "taking the nearest exterior of a given class" (in contrast to the dual construction of taking the "interior enrichment", which is called refinement). The examples of envelopes are
the completion of a locally convex space,
the Stone–Čech compactification of a topological space,
the Arens-Michael envelope of a topological algebra, etc.
The Fourier transform is also an example, because in different "big geometries" it turns out to be a special case of the key envelopes used in the construction of these geometries. In particular, on the "branch of this tree" that can be called "topology" we obtain the following result:
for each locally compact abelian group $G$ the Fourier transform ${\mathcal F}:{\mathcal C}^\star(G)\to {\mathcal C}(\widehat{G})$ is a continuous envelope of the stereotype group algebra ${\mathcal C}^\star(G)$ of measures with compact support on $G$.
The same results are true in other big geometries: in differential geometry (with the smooth envelope as the key construction) and in complex geometry (with the Arens-Michael envelope, see details here and here).
|
2025-03-21T14:48:29.849851
| 2020-02-11T21:31:44 |
352490
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Anthony Quas",
"Christian Remling",
"Rahul Sarkar",
"Wojowu",
"https://mathoverflow.net/users/11054",
"https://mathoverflow.net/users/151406",
"https://mathoverflow.net/users/30186",
"https://mathoverflow.net/users/48839"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626342",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352490"
}
|
Stack Exchange
|
Find a circle that intersects the image of $[0,1]$ in a manifold $\mathcal{M}$ at only 1 point
Let $\gamma : [0,1] \rightarrow \mathcal{M}$ be a continuous map so that $[0,1]$ is homeomorphic to $\gamma([0,1])$, where $\mathcal{M}$ is a manifold (Hausdorff, second countable, and locally Euclidean). Using a chart containing $\gamma(0)$, I think it is always possible to find a circle centered at $\gamma(0)$ that intersects the curve $\gamma([0,1])$ at a single point. Can someone help me prove this?
The intuition I have is that it should be possible to choose a radius $r$ small enough so that $\gamma([0,1])$ intersects the circle exactly once. However, I'm not sure how to reason that the curve is not like a "space-filling curve" locally around $\gamma(0)$. I know this has something to do with $[0,1]$ (hence $\gamma([0,1])$) being compact, but I am not sure how the argument should go.
My understanding is that $\operatorname{dim}\mathcal{M}=2$, since we are talking about circles. Also that the precise formulation of the question is:
There is a coordinate system $\phi:U\subset\mathcal{M}\to\mathbb{R}^2$ and $r>0$ such that $\gamma(0)\in U$ and the set
$$
\phi(\gamma([0,1])\cap U)\cap \underbrace{S^1(\phi(\gamma(0)),r)}_{circle}
$$
consists of exactly one point.
Yes, that is possible.
Here is a sketch of the argument. We can assume that we are in a coordinate chart homeomorphic to $\mathbb{R}^2$ (since it suffices to consider a small piece of the curve that is near $\gamma(0)$). Then $\gamma([0,1])$ can be extended to a closed Jordan curve, see https://mathoverflow.net/a/75350/121665. By the Schoenflies theorem there is a homeomorphism of the chart that maps the Jordan curve to a circle. Therefore you can find a coordinate chart in which your curve near $\gamma(0)$ is an arc of a circle or even a straight segment. Then the result is obvious.
I'm not sure why the last sentence is right "Then the result is obvious". The problem is that your coordinate chart doesn't preserve circles.
I think you are assuming that $\gamma$ is a homeomorphism, but the OP doesn't say this. Clearly the claim is false under the assumptions the OP stated. (I'm also wondering why $\dim \mathcal M =2$, though perhaps the use of the word circle suggests this.)
@AnthonyQuas $\mathcal M$ is just a manifold, so doesn't have an intrinsic notion of a circle (like Riemannian manifolds, say). I suspect OP means topological circles.
@Wowoju: I don't think so: notice that the OP talks about the radius of the circle...
@AnthonyQuas My apologies for not being precise with the terminology. The precise version of the question is given in the answer above by Piotr Hajlasz (though it should also add "there exists an r"). I meant radius of the circle in the coordinate system containing $\gamma(0)$. So yes, upstairs in the manifold it is indeed a topological circle.
|
2025-03-21T14:48:29.850075
| 2020-02-11T21:49:56 |
352491
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"dohmatob",
"https://mathoverflow.net/users/78539"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626343",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352491"
}
|
Stack Exchange
|
Smallest eigenvalue for large kernel matrix
I am interested in the asymptotics of the minimum eigenvalue $\lambda_n^n$ of a class of kernel matrices $P = [ K(x_i - x_j) ]_{i,j}$, with $x_i$ equally spaced in the unit cube of $\mathbb{R}^d$.
Here the kernel $K$ is positive symmetric with finite smoothness, i.e. the Fourier transform $$\widehat{K}(\omega) \sim ||\omega||^{-\beta - d},$$
where $\beta >0$ is the smoothness parameter, and $d$ is the dimension.
According to 'Error estimates and condition numbers for radial basis function interpolation' (Schaback), the minimum eigenvalue satisfies
$$c n^{-\beta/d}\le \lambda^n_n \le C n^{-\beta/d} \quad \mbox{for some } c, C >0.$$
My question is whether there is any result regarding the convergence of $n^{\beta/d} \lambda_n^n$ ? i.e. $n^{\beta/d} \lambda_n^n \rightarrow A$ as $n \rightarrow \infty$ ? Is there any way to prove this result ?
There is a closely related topic on the eigenvalues of the continuous operator, say $Tf: = \int K(x - y)f(y) dy$. The kernel matrix can be regarded as a discretization of the continuum operator.
Let $\lambda_1 > \lambda_2 \ldots$ be the eigenvalues of $T$.
It is known that $\lambda_i$ can be written as a Kolmogorov $n$-width, and classical results of Joseph Jerome imply that
$$\lambda_i \sim Ci^{-(\beta+d)/d} \quad \mbox{for some } C >0.$$
So it is natural to expect a similar result for the kernel matrix.
Also there has been some work on quantifying $|\lambda^n_i/n - \lambda_i|$, e.g. 'Accurate Error Bounds for the Eigenvalues of the Kernel Matrix' (Braun). However, the estimates are too large to conclude.
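Not an answer, but here is a quick numerical sketch one could use to probe the conjectured convergence in the simplest case $d=1$, $\beta=1$ (the exponential kernel $K(x)=e^{-|x|}$, whose Fourier transform decays like $|\omega|^{-2}$); all choices below are mine, for illustration only.

import numpy as np

# d = 1, K(x) = exp(-|x|): Fourier transform ~ |w|^{-2}, i.e. beta = 1, so
# Schaback's bounds give lambda_min of order n^{-1}; the printout lets one
# eyeball whether n * lambda_min appears to converge to a constant.
for n in [100, 200, 400, 800, 1600]:
    x = np.linspace(0.0, 1.0, n)
    P = np.exp(-np.abs(x[:, None] - x[None, :]))
    lam_min = np.linalg.eigvalsh(P)[0]
    print(n, n * lam_min)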
Maybe this can help: Spectrum of Kernel Random Matrices https://projecteuclid.org/download/pdfview_1/euclid.aos/1262271608
|
2025-03-21T14:48:29.850216
| 2020-02-11T21:53:50 |
352492
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Ian Agol",
"https://mathoverflow.net/users/1345"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626344",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352492"
}
|
Stack Exchange
|
Lattices of minimal covolume in $\operatorname{SL}_2(\mathbb{R}) \times \operatorname{SL}_2(\mathbb{R})$
What are the (uniform/non-uniform) irreducible lattices of minimal (or even small) covolume in $\operatorname{SL}_2(\mathbb{R}) \times \operatorname{SL}_2(\mathbb{R})$?
Context: Such a lattice will be an order in a quaternion algebra over a totally real number field that (the algebra) splits at precisely two infinite places. Its covolume can be computed from number theoretic data [Prasad]. For the groups $\operatorname{SL}_2(\mathbb{R})$ and $\operatorname{SL}_2(\mathbb{C})$ people have worked out the lattices of minimal covolume explicitly. Is there something similar for $\operatorname{SL}_2(\mathbb{R})\times \operatorname{SL}_2(\mathbb{R})$?
Some info here: https://arxiv.org/abs/1501.06443
|
2025-03-21T14:48:29.850288
| 2020-02-12T01:47:38 |
352504
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"ABIM",
"Dirk",
"Iosif Pinelis",
"Raghav",
"https://mathoverflow.net/users/36721",
"https://mathoverflow.net/users/36886",
"https://mathoverflow.net/users/69849",
"https://mathoverflow.net/users/9652"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626345",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352504"
}
|
Stack Exchange
|
$2$-Wasserstein distance between mixtures
I am stuck on the following problem. I have a discrete distribution $\mu_0$ (it is actually an empirical distribution). I have some $\mu_i$ (again discrete, each an empirical distribution). I have some bound on the Wasserstein distance $W_2(\mu_0, \mu_i).$ I now want to consider a simple mixture of the $\mu_i,$ that is, $\nu=\sum\limits_{i=1}^{m}\lambda_i\mu_i$ where $\sum\lambda_i=1, \lambda_i>0.$
My goal is to bound $W_2^2(\mu_0, \nu).$ I felt that it would be easy to get a bound on $W_2^2(\mu_0, \nu)$ in terms of $W_2^2(\mu_0, \mu_i),$ but I am unable to prove anything. I want something like $$W_2^2(\mu_0, \nu)\le \sum \lambda_i^2 W_2^2(\mu_0, \mu_i).$$
This does not look terribly hard, but I am stuck. Can anyone please say if it is true or not? If anyone can give a simple demonstration of why this is true, it would be great.
$\newcommand\Ga{\Gamma}$ $\newcommand\ga{\gamma}$ $\newcommand\la{\lambda}$
Let $\Ga(\mu,\rho)$ denote the set of all measures with marginals $\mu$ and $\rho$. For each $i$, take any real $c_i>W_2(\mu_0,\mu_i)^2$, so that
$$\int d(x,y)^2\ga_i(dx\times dy)<c_i$$
for some $\ga_i\in\Ga(\mu_0,\mu_i)$. Let
$$\ga:=\sum_i\la_i\ga_i.$$
Then $\ga\in\Ga(\mu_0,\nu)$ and hence
$$W_2(\mu_0,\nu)^2\le\int d(x,y)^2\ga(dx\times dy)
=\sum_i\la_i \int d(x,y)^2\ga_i(dx\times dy)<\sum_i\la_i c_i.$$
Letting now $c_i\downarrow W_2(\mu_0,\mu_i)^2$ for each $i$ such that $W_2(\mu_0,\mu_i)^2<\infty$, we get
$$W_2(\mu_0,\nu)^2\le\sum_i\la_i W_2(\mu_0,\mu_i)^2.$$
The inequality you proposed,
$$W_2(\mu_0,\nu)^2\le\sum_{i=1}^k\la_i^2 W_2(\mu_0,\mu_i)^2,\tag{1}$$
cannot hold in general. Indeed, suppose that for some probability measure $\rho$ we have $0<W_2(\mu_0,\rho)<\infty$. Let $\mu_i:=\rho$ and $\la_i:=1/k$ for all $i=1,\dots,k$. Then $\nu=\rho$ and the left-hand side of (1) is a constant $>0$, whereas its right-hand side goes to $0$ as $k\to\infty$.
Thanks! Yes, I realised that we can not hope for $\lambda_i^2$ in the right hand side. I could finally get the result with $\lambda_i$ but your argument is neater.
Also note that this argument works for any cost function!
If I'm not mistaken this argument works for any mixtures in $W_2$ no?
@Dirk : That's right.
@Zorn'sLama : That's right.
Is this a well-known result? That is, do you know a citeable reference to this?
@Elbebe : No, I don't know any reference to this. You can cite MathOverflow questions and answers by click on the "cite" buttons below them.
The argument should work for $W_p$ for any $p\in [1,\infty)$ no?
@BLBA : That's right.
|
2025-03-21T14:48:29.850472
| 2020-02-12T02:41:57 |
352506
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Wojowu",
"https://mathoverflow.net/users/30186",
"https://mathoverflow.net/users/84768",
"reuns"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626346",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352506"
}
|
Stack Exchange
|
irrationality of Bessel function in $p$-adic
Let $J_0(z)=\sum_{n\ge 0}\frac{(-1)^n}{n!^2}\left(\frac z2\right)^{2n}$ be the Bessel function considered in $\mathbb C_p$. Let $\alpha\in\mathbb Q^*$ be in the convergence disk of $J_0$. Is $J_0(\alpha)$ irrational? That sounds like a pretty natural question, but I found nothing about this when googling.
Any answer will be welcome.
$J_0(1/k)$ is irrational. What does this have to do with the $p$-adics?
@reuns You can consider the sum of the provided series in the $p$-adic numbers (as long as it converges). Irrationality of the value when considered in real numbers doesn't necessarily have any connection to whether the $p$-adic sum is in $\mathbb Q$.
|
2025-03-21T14:48:29.850552
| 2020-02-12T04:33:54 |
352510
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Daniele Tampieri",
"Deane Yang",
"Hollis Williams",
"https://mathoverflow.net/users/113756",
"https://mathoverflow.net/users/119114",
"https://mathoverflow.net/users/613"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626347",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352510"
}
|
Stack Exchange
|
References for systems of elliptic PDEs
I was wondering if there were any recent references dealing with the theory of systems of elliptic PDEs: in particular, someone was telling me about something which sounded like 'Schur complementarity' and reducing a system of elliptic PDEs down to one PDE. Does anyone know of a recent reference which explains this theory?
I did try the book by Ladyzhenskaya, but there was not that much on systems of PDEs; perhaps this is quite an old reference now, though.
A not-so-recent reference is the series of papers by Agmon, Douglis, and Nirenberg.
Since, for the part on the Schur complement, @DenisSerre has adequately answered, I'd like to point out the book of Carlo Miranda, Partial differential equations of elliptic type, 2nd rev. ed. Translated from the Italian by Zane C. Motteler. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 2, Berlin-Heidelberg-New York: Springer-Verlag pp. XII+370 (1970), MR0284700, ZBL0198.14101.
While the book of Ladyzhenskaya et al. is more oriented to the then recently introduced regularity theory of De Giorgi, and therefore to the analysis of single, divergence-type partial differential equations, this book tries to embrace the whole field of elliptic PDEs and thus is focused more on systems than on single equations. It is an "old" reference, thus there are problems and methods not even touched nor imagined at the time of its writing: nevertheless, it is worth reading.
@Deane Yang: Were there any papers in particular which you would recommend?
There are papers written jointly by Agmon, Douglis, Nirenberg.
In matrix analysis, the Schur complement is an object that you obtain after eliminating a part of the unknowns. It works as follows: you have to solve $Mx=b$ where $M$ is a square, invertible matrix. You write the system in block form
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\binom{y}{z}=\binom{c}{d},$$
where you are lucky enough that $A$ is invertible too. Then elimination of $y$ yields a system $(D-CA^{-1}B)z=b'$. The Schur complement of $A$ is precisely $D-CA^{-1}B$. Remark that we have the formula $\det M=\det A\cdot\det (D-CA^{-1}B)$.
The situation is similar if you have an elliptic system of PDEs. It can be written in an abstract way as a matrix $M$ of differential operators (including the information of their domains). Then write $M$ blockwise as above, with $A$ an elliptic operator, invertible over its domain. Then the Schur complement is $D-CA^{-1}B$, with domain that of $D$.
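To make the block-elimination step concrete, here is a minimal numerical sketch (mine, and only for the plain matrix case, not for differential operators):

import numpy as np

# Solve M [y; z] = [c; d] by eliminating y via the Schur complement S = D - C A^{-1} B.
rng = np.random.default_rng(0)
k, l = 3, 2
A = rng.standard_normal((k, k)) + 5 * np.eye(k)   # assumed invertible
B = rng.standard_normal((k, l))
C = rng.standard_normal((l, k))
D = rng.standard_normal((l, l)) + 5 * np.eye(l)
c, d = rng.standard_normal(k), rng.standard_normal(l)

M = np.block([[A, B], [C, D]])
S = D - C @ np.linalg.solve(A, B)                             # Schur complement of A
z = np.linalg.solve(S, d - C @ np.linalg.solve(A, c))         # eliminated system
y = np.linalg.solve(A, c - B @ z)                             # back-substitution

print(np.allclose(M @ np.concatenate([y, z]), np.concatenate([c, d])))
print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S)))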
|
2025-03-21T14:48:29.850758
| 2020-02-12T04:44:31 |
352512
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626348",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352512"
}
|
Stack Exchange
|
Is the symplectic quotient $\mu^{-1}(0)/G$ unique up to something?
Given a Hamiltonian action of a compact Lie group $G$ on a symplectic manifold $(M,\omega)$, we may choose a moment map $\mu \colon M\to \mathfrak{g}^* $ and obtain the symplectic reduction $M/\!\!/G = \mu^{-1}(0)/G$. This construction clearly depends on the choice of moment map. However, I wonder if it is still unique up to some sort of (very?) weak equivalence in the symplectic category?
Given a Hamiltonian group action, moment maps may only differ by constant addition. So you seem to be comparing the reduced spaces at different levels. Let me state the two extreme cases.
When $G$ is a torus, any constant addition to a moment map is also a moment map. In the paper "Birational equivalence in the symplectic category" (1989), Guillemin and Sternberg showed that reduced spaces at regular levels are related by blowing up and down. I do not know the recent progress though. It might be helpful to read papers citing their paper.
The other extreme case is when $G$ is semisimple. In this case, the moment map, which is $G$-equivariant, is unique by the semisimplicity. Then the reduced space is unique and there is nothing to do.
|
2025-03-21T14:48:29.850870
| 2020-02-12T04:55:49 |
352513
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Per Alexandersson",
"https://mathoverflow.net/users/1056",
"https://mathoverflow.net/users/152241",
"user11566470"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626349",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352513"
}
|
Stack Exchange
|
cycloid-based polynomials
Take the set of polynomials of the form $(a+b)^n$ and generalize them: let $P_f(a,b;n)$ be a sequence of polynomials, where $f:(-c,c)\to \mathbb{R}^+$ is a function with $\int_{-c}^c f=1$ and $c$ may be infinite. Further, $P_f(a,b;1)=a+b$ and the polynomial always has degree $n$, for all $f$. The area under $f$ describes the coefficients of the resulting polynomials. For example, $f(x):={1\over \sqrt{2\pi}}e^{-x^2\over 2}$ with $c$ infinite gives the above $P_f(a,b;n)=(a+b)^n$.
Now set $f$ to be the closed-form expression of an upside-down cycloid, shifted horizontally by half of its period length and shifted up by $2r$, normalized so that the area under it is 1, and let $c$ be half the cycloid's period length so that the cusps are excluded from the function. What is $P_f(a,b;n)$? What if the cycloid were shifted up further, so that it doesn't touch the $x$-axis? Can further insights be gained by looking at this from a projective perspective (with homogeneous polynomials in 3 variables, where each term has equal degree)?
Wikipedia says a cycloid is $x(t)=r(t-\sin t)$, $y(t)=r(1-\cos t)$ and that the area under one arch is $3\pi r^2$. Because the cusps have infinite derivative, there is no closed-form expression...
It is not at all clear how you define the family of polynomials...
If I knew how $P$ is defined, I'd plug in the formulas and would know the family of polynomials. All I know is that the binomial numbers "n choose k" converge to the bell curve, and the question is about another sequence that converges to an upside-down cycloid and, when used as coefficients in a bivariate polynomial, gives something interesting...
|
2025-03-21T14:48:29.851010
| 2020-02-12T06:09:01 |
352515
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"ASillyGuy",
"Andrej Bauer",
"David Roberts",
"https://mathoverflow.net/users/1176",
"https://mathoverflow.net/users/152246",
"https://mathoverflow.net/users/4177"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626350",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352515"
}
|
Stack Exchange
|
Is there a standard way to relativize algorithmic complexity constructively?
Given an index set $A$ of indices that compute some (class of) structures, such that $A$ is complete in the class $\Pi^0_n$ of the arithmetical hierarchy, let’s say we want to determine the algorithmic complexity of some index set $B$ of indices that compute some (class of) structures within the class we get from $A$. Let’s say that we could somehow determine that, absolutely, this algorithmic complexity for $B$ (taking structures from $A$ as our “domain of discourse”) were itself $\Pi^0_m$-complete.
Is there some meaningful way we can relate these two complexities so as to remain within the more general class $A$ and work “relative to $A$”? Maybe something like $\Pi^{\overline{\textbf{0}^{(m)}}}_n$ ?
Please don't use LaTeX/MathJax to add emphasis to text. That's what Markdown is for.
Thanks, now I can format more germanely.
Why is the word "constructively" in the title? To lure constructivists into classical mathematics? But more seriously: what does "compute structures" mean, and what does it mean to compute structures "within a class we get from $A$". I find the question confusing.
I will admit it is partially done to lure, but it is somewhat intended to suggest a “building upon previous work” taste that is not evident in usual computability (AFAIK). Also we don’t use choice so it’s not really all that classical, in my defense, although I guess LEM is present. “Compute structures” means that an index in the set is that of the $e^{th}$ program that computes the atomic diagram of a structure. This, I think, is fairly standard in the theory of computable models. And the last question is more or less “restriction” to the indices of $A$, but I am open to suggestions.
To clarify further, I say I’m “open to suggestions” because your last question is, in some sense, a refinement of my own question. How can we make sense of “computing structures relative to some class of structures”, assuming there is some “compatibility” of how these things are generated. Can it be done? It may be an elementary question, but I can’t seem to find a satisfactory answer in the literature. Otherwise, if some examples are in order I could probably conjure them.
|
2025-03-21T14:48:29.851197
| 2020-02-12T06:18:59 |
352516
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626351",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352516"
}
|
Stack Exchange
|
When flatness of $S$ over $R_i$ implies flatness of $S$ over the ring generated by $R_1,R_2$
I have asked the following question on MSE, but have not received an answer, so I ask it here; I really apologize if it is not suitable for MO.
Let $k$ be a field of characteristic zero and let $R_1,R_2,S$ be three commutative $k$-algebras with $R_1 \subseteq S$ and $R_2 \subseteq S$.
Assume that $S$ is flat over $R_1$ and also $S$ is flat over $R_2$.
Denote by $R$ the ring generated by $R_1$ and $R_2$. Of course, $R$ is a subring of $S$.
Are there conditions that will guarantee flatness of $S$ over $R$?
Example 1: $R_1=k[x], R_2=k[y], R=k[x,y], S=k[x,y,z]$;
$k[x,y,z]$ is free over $k[x]$ and over $k[y]$, hence flat over $k[x]$ and over $k[y]$. Also, $k[x,y,z]$ is free over $k[x,y]$, so it is flat over $k[x,y]$.
However, generally $S$ may not be flat over $R$:
(Counter)example 2: $R_1=k[x^2]$, $R_2=k[x^3]$, $R=k[x^2,x^3]$, $S=k[x]$. Clearly, $k[x]$ is free over $k[x^2]$ and over $k[x^3]$, hence flat over $k[x^2]$ and over $k[x^3]$. But $k[x]$ is not flat over $k[x^2,x^3]$.
Notice the difference between the two examples: In the first example $R_1 \cap R_2 =k$, while in the second (counter)example $R_1 \cap R_2=k[x^6] \supsetneq k$.
Is there another counterexample, but this time with $R_1 \cap R_2=k$?
See also this relevant question about flatness over tensor products.
Thank you very much!
|
2025-03-21T14:48:29.851307
| 2020-02-12T06:26:18 |
352517
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Adi",
"SBK",
"https://mathoverflow.net/users/100801",
"https://mathoverflow.net/users/122587"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626352",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352517"
}
|
Stack Exchange
|
Joining Hölder continuous functions on Whitney covering
Let $u$ be a bounded function and let a closed set $E$ be given. The complement of $E$ can be covered with a Whitney-type covering $B_i$ such that the following are satisfied:
1) $E^c \subset \bigcup 4B_i$
2) $16B_i \cap E \neq \emptyset$
3) $\sum \chi_{4B_i}(x) \leq C(n)$
4) If $r_i$ is the radius of $B_i$, and if $B_i \cap B_j \neq \emptyset$, then $r_i \leq 2 r_j \leq 4r_i$.
The rest of the properties of the standard Whitney covering also hold.
I have the following uniform $C^{\alpha}$ bound: $$|u(x) - u(y)| \leq C |x-y|^{\alpha}$$ for all $x,y \in 8B_i$ with the constant $C$ and $\alpha$ being independent of the Whitney covering.
Question: How do I show that my function $u$ is Hölder continuous on $E^c$ with a uniform bound?
Do you know for sure that there is such a bound?
In my very specific situation, I get such a bound.
Your function need not be Hölder continuous. Let $\Omega$ be the union of two exponential cusps with a common vertex and let $E$ be the complement of these cusps. Let $u=1$ in the upper cusp and $u=0$ in the lower cusp. Because the cusps are "sharp", if $B_i$ is on one cusp, $8B_i$ will not intersect the other cusp, so the function $u$ will satisfy the condition
$$
|u(x)-u(y)|\leq C|x-y|^\alpha,
\quad
x,y\in 8B_i
$$
since the left-hand side will be equal to zero. However, $u$ is not Hölder continuous.
|
2025-03-21T14:48:29.851438
| 2020-02-12T06:58:30 |
352518
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Karl Schwede",
"https://mathoverflow.net/users/3521",
"https://mathoverflow.net/users/72288",
"user237522"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626353",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352518"
}
|
Stack Exchange
|
Special irreducible polynomials in $k[x,y]$
I have asked the following question on MSE, where it received one comment.
Hopefully it is OK to ask it here as well.
Let $k$ be a field of characteristic zero, $n \in \mathbb{N}$.
Definitions:
(1) $0 \neq f \in k[x_1,\ldots,x_n]$ is always irreducible, if for every $\lambda \in k$, $f+\lambda$ is irreducible in $k[x_1,\ldots,x_n]$.
(2) $0 \neq f \in k[x_1,\ldots,x_n]$ is infinitely irreducible, if for infinitely many $\lambda \in k$, $f+\lambda$ is irreducible in $k[x_1,\ldots,x_n]$; we call those $\lambda$'s for which $f+\lambda$ is irreducible good scalars.
(3) $0 \neq f \in k[x_1,\ldots,x_n]$ is never irreducible, if there exist no $\lambda \in k$ for which $f+\lambda$ is irreducible in $k[x_1,\ldots,x_n]$.
Examples:
(i) In $\mathbb{R}[x]$, $x$ is always irreducible, $x^2$ is infinitely irreducible with good scalars $\in (0,\infty)$.
(ii) In $\mathbb{C}[x]$, $x$ is always irreducible, $x^2$ is never irreducible.
Question 1: Is it possible to somehow characterize all always irreducible polynomials in $\mathbb{C}[x,y]$?
Question 2: Is there a way to distinguish between always irreducibles and infinitely irreducibles?
Examples of always irreducible polynomials in $\mathbb{C}[x,y]$ are:
(a) $\lambda x- \mu$, where $\lambda,\mu \in \mathbb{C}$.
(b) $\lambda y- \mu$, where $\lambda,\mu \in \mathbb{C}$.
(c) $\lambda x + H(y)$, where $\lambda \in \mathbb{C}$, $H(y) \in \mathbb{C}[y]$.
(d) $\lambda y + H(x)$, where $\lambda \in \mathbb{C}$, $H(x) \in \mathbb{C}[x]$.
Actually, (c) includes (a) and (d) includes (b).
If I am not wrong, (c) and (d) can be proved by Eisenstein's criterion. One has to be careful: for example, for $x+y^2$, in Wikipedia's notation we should take $p=x$, not $p=y$.
(e) By the fourth answer to this question, $f=g(x)-h(y)$ is irreducible when $\gcd(\deg(g),\deg(h))=1$; in particular, taking $g$ linear yields (c), and taking $h$ linear yields (d).
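As a small sanity check of (e), here is a sympy sketch of my own; note that it factors over $\mathbb{Q}$ only, and irreducibility over $\mathbb{Q}$ is merely a necessary consequence of irreducibility over $\mathbb{C}$:

from sympy import symbols, factor_list, Rational
import random

# Check that f + lambda = x^3 - y^2 + lambda has a single irreducible factor
# over Q for a few random rational lambda (here gcd(deg g, deg h) = gcd(3,2) = 1).
x, y = symbols("x y")
f = x**3 - y**2
random.seed(0)
for _ in range(5):
    lam = Rational(random.randint(-50, 50), random.randint(1, 10))
    _, factors = factor_list(f + lam)
    print(lam, len(factors) == 1 and factors[0][1] == 1)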
If I am not wrong, in $k[x,y]$:
If $(f,g)$ is an automorphic pair,
then $f$ (and $g$) is always irreducible, where $(f,g)$ is an automorphic pair if $k[x,y]=k[f,g]$ or, equivalently, if $(x,y) \mapsto (f,g)$ is an automorphism of $k[x,y]$.
Moreover, if $(f,g)$ is a Jacobian pair,
then $f$ (and $g$) is always irreducible, where $(f,g)$ is a Jacobian pair if $\operatorname{Jac}(f,g):=f_xg_y-f_yg_x$ belongs to $k-\{0\}$.
Indeed, $\frac{k[x,y]}{\langle f \rangle}$ is an integral domain (I can add an argument for this later), so $\langle f \rangle$ is a prime ideal, hence by the second link below, $f$ is irreducible. Repeat this argument for $f + \lambda$ for every $\lambda \in k$, and get that $f + \lambda$ is irreducible for every $\lambda \in k$.
Please see the following related questions: Irreducibility of polynomials in two variables, What do prime ideals in $k[x,y]$ look like?, Irreducibility of Polynomials in $k[x,y]$.
Thank you very much!
Half an hour ago I have received a second comment in MSE, which says the following: "With $f \in \mathbb{C}[x_1,…,x_n]$ then $f+a$ is irreducible for some $a \in \mathbb{C}$ iff $(f+t)$ is a prime ideal of $K[x_1,…,x_n]$ where $K=\bar{\mathbb{C}(t)}$ in which case $f+a$ is irreducible for all but finitely many $a$. Moreover those $a$ can be found in term of the zeros of the discriminants in each variable. This follows from that the map sending $z$ to one of the roots of $g(z,y) \in \mathbb{C}[z][y]$ is analytic away from the $z$ where $g(z,y)$ has a double root."
I'd be tempted to view the problem a bit more geometrically. Consider the flat map $k[t] \to k[t, x_1, \dots, x_n]/(f(x_1, \dots, x_n) - t)$. You are asking that all or some fibers are integral schemes. Presumably if the geometric generic fiber is integral, then an open dense set of fibers are also integral. The sorts of fibers you are pointing out are not geometrically irreducible.
@KarlSchwede, thank you for you comment. You can write it as an answer, if you like.
|
2025-03-21T14:48:29.851783
| 2020-02-12T07:08:30 |
352519
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Gabe K",
"Mateusz Kwaśnicki",
"https://mathoverflow.net/users/108637",
"https://mathoverflow.net/users/125275",
"https://mathoverflow.net/users/16934",
"kaleidoscop"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626354",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352519"
}
|
Stack Exchange
|
How often a random walk with irrational increments is close to 0?
Let $\omega$ be an irrational number, and $X$ a random variable taking values $1,-1,\omega,-\omega$ each with probability $1/4$. Let then $X_i$ be iid variables with the same law as $X$ and $S_n=\sum_{i=1}^n X_i,n\in \mathbb N$ be the corresponding random walk.
Is it possible to have a precise asymptotics for $P(|S_n|<\epsilon)$ for $\epsilon>0$? Ultimately I would like to know the behaviour of
$$\sum_{n=1}^\infty n^{-3/2} P(|S_n|<\epsilon)$$ as $\epsilon\to 0$.
I feel like the diophantine properties of $\omega$ are relevant for this asymptotics.
How would you proceed to get such an estimate? Ideally I would like to consider $X$ with any discrete law, possibly with infinitely many atoms.
EDIT: to be clear, I think there are ad-hoc methods to solve this kind of problem, as Mateusz shows below. I want to make sure not to miss any kind of general theory in the theory of random walks that solves this kind of problem.
Ok, thanks! But the sum is at least bounded by the sum of the $n^{-3/2}$, so it should be finite. I expect it at least to go to $0$ with $\epsilon$, as your estimate suggests. Or maybe I misunderstood your answer (and btw $\epsilon$ is not necessarily irrational, but $\omega$ is)
One thing that may or may not be relevant is the following. If we think of this as a random walk on the lattice $\mathbb{Z} + \sqrt{-1} \omega \mathbb{Z} $, then this is a two dimensional random walk, as noted by Mateusz Kwaśnicki. Random walks in two dimensions are recurrent, so we expect $S_n$ to be zero infinitely often. As such, even for $\epsilon=0$, we expect there to be a lower bound on the sum you are studying. Of course, when we consider this as a 1-dimensional random walk, it will be close to the origin more frequently, but that seems much more complicated.
As a follow-up, one further thing to notice is that getting estimates also depends on the size of $\omega$. In particular, if $\omega \ll \epsilon \ll 1$, then the behavior will be very similar to a $1$-dimensional random walk (because you need many steps in the $\omega$ direction to get something $O(\epsilon)$. On the other hand, if $\omega \gg 1 \gg \epsilon$, the behavior will be more like a two dimensional random walk, where the steps in the $\omega$ direction have to cancel out to get back to the origin.
Ok thanks, but here $\omega$ is fixed and $\epsilon$ goes to $0$
Right, the point is only that the size of $\omega$ will affect the behavior and that when $\epsilon$ is sufficiently small, it's almost equivalent to $\epsilon =0$.
Just an extended comment. Let $X_n$ be the simple random walk in $\mathbb{Z}^2$. Then $$\mu(\{x\}) = \sum_{n = 1}^\infty n^{-3/2} P(X_n = x)$$ is comparable with $$\sum_{n = 1}^\infty n^{-3/2} \times n^{-1} \exp(-|x|^2 / (2 n)) \approx (1 + |x|)^{-3}.$$ So your question boils down to estimating $$\sum_{k \in \mathbb{Z}} \frac{1}{(1 + |k|)^3} \, \mathbb{1}_{(-\epsilon, \epsilon)}(k \omega - \lfloor k \omega\rfloor) . $$ This indeed seems closely related to how well one can approximate $\omega$ with rationals.
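Out of curiosity, here is a quick numerical sketch (mine; the choice $\omega=$ golden ratio and the truncation are arbitrary) of how the reduced sum behaves as $\epsilon\to 0$, using the distance of $k\omega$ to the nearest integer:

import numpy as np

# Evaluate sum_{k>=1} (1+k)^{-3} * 1{ dist(k*omega, Z) < eps } for a few eps;
# the k = 0 term of the full sum contributes a constant 1 and is omitted, and
# the factor 2 accounts for k and -k.
omega = (1 + np.sqrt(5)) / 2           # golden ratio, as an example
k = np.arange(1, 10**6, dtype=float)
frac = k * omega % 1.0
dist = np.minimum(frac, 1.0 - frac)    # distance of k*omega to the nearest integer
weight = 1.0 / (1.0 + k) ** 3
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(eps, 2 * weight[dist < eps].sum())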
Ok, thanks for this nice approach! So you transfer the problem to a random walk in higher dimensions. I was actually interested in general results from the theory of random walks: is it possible to relate the distribution of the increments to how often the chain will pass close to some state? That might directly lead to a condition on the law of the $X_i$ (instead of some estimates on $\omega$).
@kaleidoscop: Sorry, I do not quite understand your comment. I do not think there is a "soft" way to answer this kind of questions, so I suppose that one cannot simply link the rate of convergence of the sum to zero with distributional properties of the increment $X_i$.
Well, you actually understood my comment, because that is exactly what I was trying to get :)
|
2025-03-21T14:48:29.852055
| 2020-02-12T07:34:05 |
352520
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Abdelmalek Abdesselam",
"Giovanni Moreno",
"Robert Bryant",
"https://mathoverflow.net/users/13972",
"https://mathoverflow.net/users/22606",
"https://mathoverflow.net/users/7410"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626355",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352520"
}
|
Stack Exchange
|
Invariants of symmetric forms with respect to the symplectic group
Take a 6-dimensional vector space $V$ (for simplicity, over $\mathbb{C}$) and play the following game (for example, by employing the online Lie program): consider the 21-dimensional space $S^2V^*$ of symmetric two-forms on $V$ and decompose the space $S^k(S^2V)$ of degree-$k$ homogeneous polynomials on $S^2V^*$ into irreducible $\mathsf{SL}_6$-modules and, simultaneously, into irreducible $\mathsf{Sp}_6$-modules, with $k=1,2,3,4,5,6$. The number of one-dimensional constituents you'll obtain is the following:
For $\mathsf{SL}_6$ there is a unique one-dimensional constituent $\langle d\rangle$, that appears when $k=6$;
For $\mathsf{Sp}_6$ the first one-dimensional constituent $\langle p\rangle$ pops up with $k=2$, then a second one $\langle q\rangle$ with $k=4$, accompanied by $\langle p^2\rangle$, and, finally, for $k=6$, there are three one-dimensional constituents: $\langle p^3\rangle$, $\langle p q\rangle$ and $\langle d\rangle$.
Now it is well known that $d$ is the determinant.
QUESTION: what about the $\mathsf{Sp}_6$-invariants $p$ and $q$ of a symmetric two-form $\alpha$ on $V$? Can we read them off from the characteristic polynomial of a suitable endomorphism of $V$ related to $\alpha$? Does anybody know where precisely in the literature this is discussed? (Should be classical.)
In particular, I'm interested in the normal forms of elements $\alpha\in S^2V^*$ with respect to the symplectic group: in the case of the linear group, the normal form of $\alpha$ is simply a diagonal matrix with as many 1's on the diagonal as the rank of $\alpha$ - but if the group is smaller I expect a more involved outcome.
Isn't this just the standard result from Lie theory? It's well-known that, in this case, $S^2(V)\simeq S^2(V^*)$ is isomorphic as an $\mathsf{Sp}(V)$-module to the adjoint representation on $\mathfrak{sp}(V)$ itself. Since this is a simple Lie algebra, the ring $R$ of polynomial invariants $p(x)$ for $x\in\mathfrak{sp}(V)$ is generated by the coefficients of the characteristic polynomial of $\mathrm{ad}(x)$. In particular, the generic element is conjugate to an element of a maximal torus, and this immediately gives that $R$ is freely generated by $n$ elements of degrees $2,4,\ldots, 2n$.
@RobertBryant you're right, as usual: I forgot that $S^2V^*$ is nothing but the Lie algebra of $\mathsf{Sp}(V)$; this answers the question about the ring of polynomial invariants. Nevertheless, what interests me more is a list of normal forms: from Collingwood and McGovern's book "Nilpotent Orbits in Semisimple Lie Algebras" I've learned that there are 8 nilpotent orbits and a three-parametric family of semi-simple ones; however, nowhere in the literature have I found them recast in terms of quadratic forms on $V$, nor (which is the true problem) any hint about the "mixed" orbits (those ...
... that are neither semi-simple nor nilpotent). Do you know of some reference where I can see an explicit list of such normal forms, or some paper/book explaining how to classify these "mixed" orbits?
Nice question!
More generally, let $V=\mathbb{C}^{2n}$. Consider the $2n\times 2n$ matrix
$$
\varepsilon=\begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix}
$$
and the symplectic group ${\mathsf{S}\mathsf{p}}_{2n}$ which preserves the fundamental alternating bilinear form with matrix $\varepsilon$. An element $F$ of the symmetric power $S^p(V^{\vee})$ can be seen as a homogeneous polynomial $F(x)$ of degree $p$ in the variable $x=(x_1,\ldots,x_{2n})$.
It also corresponds to a unique symmetric array
$$
(F_{i_1,\ldots,i_p})_{(i_1,\ldots,i_p)\in [2n]^p}
$$
where $[2n]$ denotes the set of allowed index values $\{1,2,\ldots,2n\}$.
Symmetric means the entries stay the same if one permutes the $p$ indices.
The correspondence is so that the identity
$$
F(x)= F_{i_1,\ldots,i_p} x_{i_1}\cdots x_{i_p}
$$
holds. Note that I used Einstein's convention where indices $i_1,\ldots,i_p$ are to be summed independently over the set $[2n]$. I will keep using this convention below.
Now for integers $q,r,\ell$ with $0\le \ell\le\min(q,r)$, one can define a "symplectic transvectant" which is a ${\mathsf{S}\mathsf{p}}_{2n}$-equivariant map
$S^q(V^{\vee})\times S^r(V^{\vee})\rightarrow S^{q+r-2\ell}(V^{\vee})$. To a pair of forms $F$, $G$, we associate the new form
$$
H(x)= F_{i_1,\ldots,i_q} G_{j_1,\ldots,j_r} \varepsilon_{i_1,j_1}\cdots
\varepsilon_{i_{\ell},j_{\ell}}\ x_{i_{\ell+1}}\cdots x_{i_q}\
x_{j_{\ell+1}}\cdots x_{j_r}
$$
I will write $(F,G)_{\ell}$ for this new form $H$.
Now suppose $p$ is even. Then for any $m\ge \frac{p}{2}$, one has a linear endomorphism
$$
\begin{array}{cccc}
\mathcal{L}_{n}^{F}: & S^{m}(V^{\vee}) & \longrightarrow & S^{m}(V^{\vee}) \\
\ & G & \longmapsto & (F,G)_{\frac{p}{2}}
\end{array}
$$
which depends on the choice of $F$.
Let $\mathscr{H}_{m,s}(F)$ denote the coefficient of $\lambda^s$ in essentially the characteristic polynomial ${\rm det}(Id-\lambda \mathcal{L}_{n}^{F})$.
Alternatively, let $\mathscr{P}_{m,s}(F)$ denote the trace of the $s$-th power
of $\mathcal{L}_{n}^{F}$. It is not hard to see that $\mathscr{H}_{m,s}(F)$
and $\mathscr{P}_{m,s}(F)$ are ${\mathsf{S}\mathsf{p}}_{2n}$-invariants of $F$. They give you
one-dimensional submodules in $S^{s}(S^{p}(V))$.
The above is a trivial generalization to the symplectic context of a construction in the invariant theory of binary forms (the ${\mathsf{S}\mathsf{p}}_{2}={\mathsf{S}\mathsf{L}}_{2}$ case) due to Hilbert in his Königsberg Habilitationsschrift. I studied these concrete invariants in my recent article
"An algebraic independence result related to a conjecture of Dixmier on binary form invariants" in Res. Math. Sci. 2019.
The preprint version is here.
The main result I proved in that article is that for $n=1$, and for $p=2k$ with $k$ even, the invariants $\mathscr{P}_{k,2},\mathscr{P}_{k,3},\ldots,\mathscr{P}_{k,k+1}$ are algebraically independent. Note that this trivially shows the same holds true for any $n\ge 1$, by specializing to a generic form $F$ which only depends on the variables $x_1,x_{n+1}$.
Note that one can also represent the invariants graphically, as in the picture
which is taken from the above article. In the left picture, the lines with arrows correspond to $\varepsilon$'s, and the boxes correspond to symmetrizations.
Now take $n=3$, $p=2$, $m=\frac{p}{2}=1$, which gives $\mathscr{P}_{1,s}(F)={\rm tr}((\varepsilon F)^s)$, where $F$ is viewed as a $6\times 6$ symmetric matrix. These are the invariants you see in the Lie program calculations. Clearly, they vanish unless $s\ge 2$ is even.
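Here is a small numerical sanity check of this invariance (my own sketch; the symplectic matrix is produced by exponentiating an element of $\mathfrak{sp}_6$, and the scaling factor is only there to keep the numbers moderate):

import numpy as np
from scipy.linalg import expm

# n = 3, p = 2: check that tr((eps F)^s) is unchanged under F -> M^T F M
# for a random symplectic M (obtained as exp of an element of sp_6).
rng = np.random.default_rng(0)
n = 3
eps = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

F = rng.standard_normal((2 * n, 2 * n)); F = F + F.T    # a symmetric form
S = rng.standard_normal((2 * n, 2 * n)); S = S + S.T
M = expm(0.1 * eps @ S)   # eps @ S lies in sp_6, so its exponential is symplectic
Ft = M.T @ F @ M          # transformed form

for s in [2, 4, 6]:
    a = np.trace(np.linalg.matrix_power(eps @ F, s))
    b = np.trace(np.linalg.matrix_power(eps @ Ft, s))
    print(s, np.isclose(a, b))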
For $p=2$, general $n$. The first fundamental theorem (FFT) of invariant theory for ${\mathsf{S}\mathsf{p}}_{2n}$ easily implies that the particular invariants $\mathscr{P}_{1,s}$, $s\ge 1$ generate the ring of invariants. Because of the relations between power sum symmetric functions, and the remark about parity, one has for this ring the list of generators
$$
\mathscr{P}_{1,2},\mathscr{P}_{1,4},\mathscr{P}_{1,6},\ldots,\mathscr{P}_{1,2n}.
$$
They are algebraically independent. Indeed, take $F$ to be the quadratic form with matrix
$$
\begin{pmatrix}
0 & D \\
D & 0
\end{pmatrix}
$$
where $D$ is the diagonal matrix with entries $y_1,\ldots,y_n$. Then the above invariants specialize to the power sums in the variables $y_1^2,\ldots,y_n^2$.
So this gives a complete description of the ring of invariants.
For a quick sketch of a proof of the FFT for ${\mathsf{S}\mathsf{p}}_{2n}$
see:
Invariants for the exceptional complex simple Lie algebra $F_4$
It proceeds by reduction to the FFT for ${\mathsf{S}\mathsf{L}}$ and/or ${\mathsf{G}\mathsf{L}}$ which are proved in
How to constructively/combinatorially prove Schur-Weyl duality?
and
How to constructively/combinatorially prove Schur-Weyl duality?
That's one hell of an answer! Two minor questions: when you wrote "for any $m>0$" you actually meant $m\geq\tfrac{p}{2}$, right? and in the formula of the characteristic polynomial $\lambda$ should be in front of $Id$, or not? Frankly speaking, I was expecting something more digestible than Hilbert's habilitation thesis - but it's good to know where it all began: now I'll try to dig out what I need. If you have some remark about the normal form issue, I'll be happy to see it!
Yes, $m\ge p/2$; otherwise one can define the transvectant to just be identically zero and this becomes a dissertation about zero. 2) Normally $\lambda$ goes with $Id$, and that's why I said "essentially", but this would just change the labeling. I want the subscript $s$ to correspond to the degree of the invariant in $F$. BTW, I just realized I had this labeling wrong in my paper for the H's, but not the P's.
|
2025-03-21T14:48:29.852625
| 2020-02-12T08:52:11 |
352521
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Brendan McKay",
"François Jurain",
"Steve Huntsman",
"Szabolcs Horvát",
"Todd Trimble",
"apg",
"https://mathoverflow.net/users/149149",
"https://mathoverflow.net/users/1847",
"https://mathoverflow.net/users/2926",
"https://mathoverflow.net/users/8776",
"https://mathoverflow.net/users/9025",
"https://mathoverflow.net/users/90619"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626356",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352521"
}
|
Stack Exchange
|
When is a large graph with a given degree sequence likely to be connected?
Are there any results on whether a large random graph with a given degree distribution is likely to be connected?
In Erdős-Rényi graphs with $n$ vertices and $m$ edges, we have two sudden transitions (for large $n$):
A giant component appears above the threshold $m/n = 1/2$.
The graph becomes connected above the threshold $m/n = (\ln n)/2$.
There is a result analogous to (1) above by Molloy and Reed for random graphs with a given degree distribution. If $d$ denotes the vertex degree and $\langle \cdot \rangle$ denotes the average, then the quantity of interest is $Q = \langle d^2 \rangle - 2\langle d \rangle$. A giant component suddenly appears above the threshold $Q > 0$.
Question: Is there a result analogous to (2) for random graphs with a fixed degree sequence, in the large graph limit? Is there a quantity that can be computed from the degree distribution, and when it crosses a threshold, the graph suddenly becomes connected (in the $n\rightarrow\infty$ limit)? Let us assume that there are no isolated vertices ($d\ne 0$).
Clarification update: Let me try to give a more precisely specified version of the problem. Suppose we have $n$ vertices. Of these, precisely $n_d = f_d n$ have degree $d$: thus we have a degree sequence
$$(
\overbrace{0,\dots,0}^{\text{$n_0$ times}},\;
\overbrace{1,\dots,1}^{\text{$n_1$ times}},\;
\overbrace{2,\dots,2}^{\text{$n_2$ times}}, \dots).
$$
Choose one simple (labelled) graph with this degree sequence uniformly at random.
What conditions do we need to have on the $f_d$ (the degree distribution), or on $n_d$, so that in the $n \rightarrow \infty$ limit the graph is connected with probability 1?
Clearly, if $f_0 \ne 0$, then there are isolated vertices and the graph is not connected. Therefore, one condition is that $f_0 = 0$.
Re: 1), see also https://arxiv.org/abs/1601.03714
Re: 2), you may be interested in the Chung-Lu model, see, e.g. https://doi.org/10.1007/PL00012580
@SteveHuntsman The Chung-Lu model is not quite the same kind of thing though because it is not for exact degrees, but expected ones. If I (numerically) generate a large graph from the Chung-Lu model with all expected degrees being 3, I get actual degrees between roughly 0 and 10, and in practice I always get a disconnected graph. If I generate random cubic graphs (exact degrees) by applying repeated random degree-preserving edge switches, in practice I always get a connected graph.
Consider a sequence of degree sequences $\boldsymbol{d}(n)=(d_1(n),d_2(n),\ldots,d_{n-1}(n))$. I believe that the probability of connectedness converges to 1 if $d_1(n) = o(\sqrt n)$ and $d_2(n)=o(n)$. I'm not giving this as an answer because I don't recall where it was studied, but I'm sure it has been.
My understanding of the question, which is different from the understanding of all answers so far, is that we are given a degree sequence, then we must generate a graph uniformly at random from all graphs with that degree sequence. Is that correct?
@BrendanMcKay Your understanding is correct: I meant to consider fixed degree sequences. Since such a result would apply in the limit of "large graphs", I phrased it as degree distribution, not degree sequence. Sorry, several answers and comments came in suddenly quite a while after I originally posted, and I haven't gotten around to addressing them. I will do so soon.
@BrendanMcKay Could you please clarify the notation from your first comment? I don't quite follow. Do you mean that if there are two vertices (of $n$ in total) so that the degree of the first scales as $\sqrt{n}$ and the degree of the second scales as $n$, then the probability that the graph is connected goes to 1 as $n\rightarrow\infty$?
Sorry, I should have defined my notation. $d_i(n)$ is the number of vertices of degree $i$ in the $n$-th degree sequence (not the degree of vertex $i$). Also, what I suggested needs some modification--for now I'll only claim that it is true if the maximum degree is bounded. In that case, if the number of vertices of degree 1 is $\omega(\sqrt n)$, there is probably an isolated edge (similar to birthday paradox) and if the number of vertices of degree 2 is $\Omega(n)$ there is probably an isolated cycle. This might not be the full story, but it goes something like that.
Please note that my answer given below is not the same as what I wrote here, though it is similar.
Second edition
This is a partial answer to the question per the "Clarification Update", but first I'll generalize a little. Suppose that for each $n$ we have a degree sequence $n_0,n_1,n_2,\ldots$, where $n_d=n_d(n)$ means the number of vertices of degree $d$. Also let the number of edges be $m=m(n)$ and the maximum degree be $\varDelta=\varDelta(n)$. Now we take a random simple graph $G=G(n)$ with this degree sequence, each such graph being equally likely. We seek to know if $G$ is connected. Take $n_0=0$ from now on.
This type of random graph has been extensively studied. I'll just make some simple observations using Theorem 2.1 of this paper.
By Theorem 2.1 the expected number of isolated edges is
$$\binom{n_1}{2}\frac{1+O(\varDelta/m)}{2m}$$ if $\varDelta=o(m)$.
Assuming the latter condition, the expected number of isolated edges goes to $0,\infty$ according as $n_1^2/m$ goes to $0,\infty$, respectively. This doesn't imply instantly that $n_1\approx \sqrt{m}$ is the threshold for having an isolated edge, but it is true (use the second moment method).
So now assume $n_1=o(m)$.
I thought a combination of degrees 1 and 2 might be an issue, but the most likely isolated component, a path of two edges, is unlikely if $n_1=o(\sqrt m)$.
(So, if these components are likely, so are isolated edges.)
Now consider isolated cycles. The expected number of isolated cycles of length $k$ is $$\frac{(n_2)_k(1+O(k\varDelta/m))}{2k\,(m)_k},$$ where $(x)_k=x(x-1)\cdots(x-k+1)$, provided $k\varDelta=o(m)$.
Since $n_2\le m$, this never goes to infinity for fixed $k$, but
the sum over an increasing number of $k$ values does go to infinity
if $n_2=(1-o(1))m$. In the other direction, if $n_2=o(m)$ then
the expectation goes to 0 for each $k$ and moreover the terms
appear to be decreasing exponentially as $k$ increases. Here
there is a gap in the proof because $k\varDelta=o(m)$ might not
be true for very large $k$ unless also $\varDelta=O(1)$. This gap can be filled but I won't go
into it. Modulo some things I haven't quite proved, the probability of connectedness goes to 1 if $n_2=o(m)$ and to 0 if $n_2=(1+o(1))m$.
In the intermediate range, for example if $n_2=cm$ for $0\lt c\lt 1$, I believe that the distribution of the number of isolated cycles will be Poisson with constant mean.
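For concreteness, a minimal Julia sketch evaluating these two expectations (ignoring the $1+O(\varDelta/m)$ correction factors; the sample parameters are arbitrary) might look as follows:

falling(x, k) = prod(x - i for i in 0:k-1)          # falling factorial (x)_k

# E[# isolated edges] ≈ C(n1,2) / (2m)
expected_isolated_edges(n1, m) = binomial(n1, 2) / (2 * m)

# E[# isolated k-cycles] ≈ (n2)_k / (2k (m)_k)
expected_isolated_cycles(n2, m, k) =
    falling(big(n2), k) / (2 * k * falling(big(m), k))

# e.g. with n2 = m/2 the expected number of isolated triangles is already small:
Float64(expected_isolated_cycles(500, 1000, 3))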
Beyond this, I'm reluctant to reinvent the wheel because someone
must have done this before except possibly in the case that some
degrees are very low and others very high. There are no component
types that are likely to occur under conditions when isolated edges
or cycles are unlikely to occur. The fact that random
regular graphs of degree at least 3 are almost always connected
was proved by Wormald in the 1970s. I hypothesize that $n_0=0$,
$n_1=o(\sqrt m)$ and $n_2\le cm$ for some $c\le 1$ are necessary
and sufficient conditions for almost sure connectivity.
The question also asks us to consider the case that there are constants
$f_0,f_1,\ldots$ independent of $n$ such that $n_d(n)=f_d\,n$ for
all $n,d$. Translating what is above, the condition for connectivity
is $f_0=f_1=f_2=0$. Clearly forcing $f_d$ to be independent of $n$ loses a lot of detail.
Revised Edition to converge to the OP's notations, and slightly augmented.
Let $f_d$ be the distribution of the degrees, and $gF(u)= \Sigma_{0 \le d} f_d u^d $ its generating function; then I think the indicator you seek is $f_1 \over gF'(1)$, with consideration of $f_2 \over gF'(1)$ if necessary.
Edited after reading Prof. McKay's comments & the OP's clarifications. First, choose some $N \gg 1$, $n_d$'s all $ \ge 0 $ adding up to $N$ for the number of nodes of degree $d \le N - 1 $, and let $f_d = n_d / N $, so that $\Sigma_{1 \le d \le N-1} f_d = 1$. Then, pick a simple graph (no loops, no duplicate edges) uniformly at random amongst all those having $n_d = f_d N$ nodes of degree $d$ for each $d$, the degree sequence.
Alternatively, you may want to choose the $f_d$'s, the degree distribution, before settling on a graph size; if so, relax the normalization to $gF(1) = 1 + o(1)$, then constrain $N$ by $\Sigma_{d \ge N-O(1)} f_d = o(1)$, choose $ n_d = f_d (N + O(1))$ for each $d \le N - 1$ and pick your random graph, of size $\Sigma_{1 \le d \le N-1} n_d = N + O(1)$.
By convention $gF(1) = 1$ and $f_0 = 0$. Assume further the Molloy-Reed criterion is met: $gF''(1)/gF'(1)$ is at least $ 1 + \Omega(1)$, else there will be no giant component to speak of. In particular $2 f_2$ is $(1 - \Omega(1)) gF'(1) $.
Consider 1st the case I'm familiar with, that $gF$ is a polynomial of degree $ O(1)$. Then things are mighty clear:
if $f_1 = 0$, then w. p. $1 - o(1)$, a node chosen at random amongst those of the chosen graph is in the giant component. I think it is even true that w. p. $1 - o(1)$, the chosen graph has only $O(1)$ nodes outside the g. c. On the other hand, the condition does not ensure exactly 1 c. c.:
(1.a) if $f_2 = \Omega(1)$, then a node chosen at random amongst those of degree 2 will belong w. p. $ \sim {1 \over N} {1 \over {gF'(1) - 2 f_2}}$ in a ring of length $O(1)$. Except I forbade multiple edges, so the ring must be of length $ \ge 3$ and the probability $ \sim {1 \over N} {{2 f_2} \over gF'(1) } {1 \over {gF'(1) - 2 f_2}}$ , so w. p. $ \sim 1 - e^{- {2 f_2 \over gF'(1)} {f_2 \over {gF'(1) - 2 f_2 }}} = \Omega(1)$, the chosen graph will contain $\ge 1$ such c. c.;
(1.b) if $f_2 = o(1)$, then it remains true that w. p. $\Omega( {({f_2 \over gF'(1)})}^2) = \Omega({f_2 \over gF'(1)})$, at least 1 c. c. of the chosen graph is a ring of size $O(1)$; whereas I think w. p. $1 - o(1)$, all nodes of degree $\ge 3$ are connected to the g. c.; thus, it is now the case that w. p. $1 - o(1)$, the graph coincides with the g. c.
If $f_1 = o(1) > 0$, then cutting all edges incident to leaves will not change $gF$ in any appreciable way, and the same conclusion applies as in the case $f_1 = 0$: w. p. $1 - o(1)$, a fraction $1 - o(1)$ of the nodes is in the g. c.
(2.a) if $f_2 = \Omega(1)$, then w. p. $\Omega(1)$, at least $\Omega(1)$ nodes are in other c. c.'s than the giant one; on the other hand, if $f_2 = o(1)$, then the graph is connected, right? Well, not so fast. A fraction $\sim gF(p) = o(1)$ of the nodes is still in trees of size $O(1)$, with $p \sim { f_1 \over {gF'(1) - 2 f_2}} $ the fixed point of ${1 \over gF'(1)} gF' $; so:
(2.b) if $f_2 = o(1)$ and $ gF(p) \sim {f_1}^2/gF'(1) \ge \Omega(1/N)$, then w. p. $\Omega(1)$ the graph is not connected; if $ {f_1}^2/gF'(1) \gg 1/N $, the number of stray nodes is even $ \gg 1 $;
(2.c) if $f_2 = o(1)$ and $ {f_1}^2/gF'(1) = \Omega(1/N)$ however, then w. p. $1 - o(1)$ only $O(1)$ nodes are in other components than the g. c.;
(2.d) if $f_2 = o(1)$ and $ {f_1}^2/gF'(1) = o(1/N)$, then w. p. $1 - o(1)$ the graph is connected.
If $f_1 = \Omega(1)$, then w. p. $\Omega(1)$, a node chosen at random is in a connected component of size $O(1)$; this c. c. will be a tree having $k = O(1)$ leaves with probability ${f_1}^k O(1)$. So, a fraction ${f_1 \over {gF'(1) - 2 f_2}} + O({f_1}^2) $ of the leaves will not be in the giant component; the main contribution $ f_1 \over {gF'(1) - 2 f_2} $ comes from the c. c.'s that are chains of length $O(1)$ with 2 leaves at their ends, and is $\Omega(f_1)$.
These conclusions extend to any $gF(u)$, even though $gF'(1)$ is not guaranteed to remain $O(1)$ as in the polynomial case: just consider $f_1 \over gF'(1)$ instead of $f_1$ and $ f_2 \over gF'(1)$ instead of $f_2$ when deciding in which of the 7 cases above your $gF$ stands.
To summarize:
the giant component of the chosen graph contains almost every node w. p. $1 - o(1)$ iff $ {f_1 \over gF'(1) } = o(1)$. Else, w. p. $\Omega(1)$, a fraction $\Omega(1)$ of the nodes will be in components of size $O(1)$.
moreover, the number of nodes disconnected from the giant component shrinks from $o(N)$ to $\Omega(1)$ w. p. $1 - o(1) $ iff $ { {f_1}^2 \over gF'(1) } = \Omega({1 \over N})$, equivalently, $ { f_1 \over gF'(1) } = \Omega({1 \over \sqrt{N gF'(1)}})$.
The giant component contains each and every node w. p. $1 - o(1)$ iff, in addition, $ { f_2 \over gF'(1) } = o(1)$ and $ { {f_1}^2 \over gF'(1) } = o({1 \over N})$ ; equivalently, $ { f_1 \over gF'(1) } = o({1 \over \sqrt{N gF'(1)}})$. Else, w. p. $\Omega(1)$, the graph will also contain $ \ge 1$ component disconnected from the g. c.: rings of total size $O(1)$ if the condition on $f_2$ is not met, else $O(1)$ pairs of leaves.
Note that the average node degree is $ \sim gF'(1)$ and the number of edges is $ E \sim {1 \over 2} N gF'(1)$; so instead of $ f_1 \over gF'(1)$, you may want to compare $ n_1 \over E $ w.r.t. $ o(1)$ and $ \Omega(1/\sqrt E)$, and $ n_2 \over E $ rather than $ f_2 \over gF'(1) $ to $ o(1)$.
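As a quick way to apply this summary to a concrete degree sequence, one can tabulate the two ratios from the counts $n_d$; a minimal Julia sketch (the example degree counts are arbitrary):

function connectivity_diagnostics(n_d::Dict{Int,Int})
    E  = sum(d * c for (d, c) in n_d) / 2      # number of edges, E ~ (1/2) N gF'(1)
    n1 = get(n_d, 1, 0)
    n2 = get(n_d, 2, 0)
    return (edges = E, r1 = n1 / E, r2 = n2 / E, threshold = 1 / sqrt(E))
end

# e.g. 10^4 nodes of degree 3 plus a handful of leaves:
connectivity_diagnostics(Dict(3 => 10_000, 1 => 5))

Here r1 is to be compared with the threshold $1/\sqrt E$ (and with $o(1)$), and r2 with $o(1)$, as in the summary above.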
I think I'll let my keyboard cool down some, to Mr Trimble's satisfaction. Looks like I took great pains to reinvent the wheel; so much for not reading the classics.
Every edit bumps the post to the top of the stack. We're up to 28 now. Please avoid making tiny edits; make each edit really count.
@Todd Trimble hadn't noticed; I'll see to it.
I think for general random graphs with (very) high probability there cannot be two giant components. For the graph to be connected it should be enough to prove that no small cluster with just a handful of vertices appears.
Your condition $d\neq 0$ prevents all clusters of size 1.
For clusters of size $2$, we have to make sure that no pair of vertices with $d=1$ is connected. This leads to the condition $|\{i:d_i=1\}|<\sqrt{m}$ because $$\mathbb{P}(\exists i,j,\ d_i=d_j=1 \text{ and } (i,j)\text{ connected} )\leq \sum_{i,d_i=1}\sum_{j,d_j=1} \mathbb{P}((i,j)\text{ connected} ) = m^{-1}|\{i:d_i=1\}|^2$$
Considering a vertex with degree $d_i>1$, we have to make sure it doesn't belong to a small cluster. We run the exploration process, visiting the connected vertices one after another and counting their degrees. This creates a Galton–Watson tree. We denote by $X_k$ the number of outgoing edges from the set of visited vertices, with $X_0=d_i$. Each time we visit a vertex, $$X_{k+1}=X_k +d_{y_{k+1}}-2 $$ where $y_{k+1}$ is the visited vertex. We have a cluster of size $k$ if $X_k = 0$.
$$X_{k+1}-X_k = \begin{cases}-1 & \text{ with probability } q = m^{-1}|\{i:d_i=1\}| \\ 0 &\text{ with probability }2 m^{-1}|\{i:d_i=2\}| \\ \geq 1 & \text{with probability larger than } p =m^{-1}|\{i:d_i>2\}| \end{cases}$$
For $p>q$ one can check that $N_n := \left(\frac{q}{p}\right)^{X_n} $ is a positive supermartingale and then that $$\mathbb{P}(X_k=0)=\mathbb{P}(N_k=1)\leq\mathbb{E}(N_k)\leq \mathbb{E}(N_0)=\left(\frac{q}{p}\right)^{d_i}\leq \left(\frac{q}{p}\right)^{2} $$
Therefore no small cluster appears if $\left(\frac{q}{p}\right)^{2}\leq n^{-1}$. To conclude, I claim that the graph is connected if
1. $$|\{i:d_i=1\}|<\sqrt{m}$$
2. $$\frac{|\{i:d_i=1\}|}{|\{i:d_i>2\}|}< n^{-1/2}$$
Remark: the second condition could be improved and probably be made optimal with a better estimate of $\mathbb{P}(X_k=0)$.
With the highest probability, to wit, $1 - o(1)$, the g. c. is unique; doesn't rank very high on my own scale. Just being pedantic, I know... couldn't resist.
If you assume, as you do, there are no isolated vertices with high probability, you most likely have connectivity with high probability.
It is an interesting question to ask what conditions on the graph lead to exceptions to this coincidence between connectivity and isolated vertices. A 1d RGG is an example.
If the graph (of $n$ nodes) is formed non-randomly, the only way connectivity can fail is if the number of edges in the graph is strictly less than ${n \choose 2} - (n-2)$, since only then is there at least the possibility of an isolated node (otherwise, packing the edges into their ${n \choose 2}$ possible slots means you have at worst a complete graph $K_{n-1}$ and a straggler node, connected by at least one edge to the main body).
If the graph is formed randomly, with some degree distribution (not necessarily with a specific, fixed degree sequence), the lack of isolated nodes is sufficient for connectivity in most cases. This is true in the Erdős-Rényi graph, and in various random geometric graphs. There is most likely the same coincidence in random graphs with unusual degree distributions.
Two clusters of nodes eventually have a bridge unless you force them not to, particularly if a singleton cluster is required to connect to another cluster with probability one from the outset.
There is an issue with exactly what the question is. It is certainly not true that for a fixed degree sequence without 0 a random graph with that degree sequence is probably connected. Consider if every degree is 1, or even if every degree is 2.
If it has a fixed degree sequence, I think it is different to fixed degree distribution. The second case (fixed degree distribution) is the random graph which I think is probably connected without isolated nodes. Otherwise it is definitely not necessarily connected, even when built randomly, as you say.
I will edit the answer to clarify what I mean by random graph.
Isn't "all degree 1" a distribution? I don't know what you mean by "with some degree distribution". Also, see OP's answer to SteveHuntsman.
Such as scale free, rather than Poisson. Those are different degree distributions.
All 1 would be a rare result of a random graph with some degree distribution. The degree has to be random I think? That is the subtle lack of definition in the question.
I clarified the question to note that it's about fixed degree sequences. BTW it took me a while to figure out who you are with the funky new username ;-)
I think Brendan McKay in fact answers this in a comment on the original post. Also the name is temporary, but I apparently can only change it once every 30 days!
|
2025-03-21T14:48:29.854251
| 2020-02-12T08:56:43 |
352522
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Sasha",
"https://mathoverflow.net/users/4428",
"https://mathoverflow.net/users/9449",
"roy smith"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626357",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352522"
}
|
Stack Exchange
|
Reference request: Singular curves
I'm interested in coherent sheaves on a singular curve (for example, global dimension, Serre duality, the Riemann–Roch theorem for singular curves, etc.).
I can find a treatment of this only in Hartshorne's exercises.
Are there general treatments of singular curves? And are there good references on singular curves?
Try to look into Burban, Igor; Kreußler, Bernd Derived categories of irreducible projective curves of arithmetic genus one. Compos. Math. 142 (2006), no. 5, 1231–1262.
Serre's book: Groupes algébriques et corps de classes, or Algebraic Groups and Class Fields, chapter IV, is a standard reference.
|
2025-03-21T14:48:29.854337
| 2020-02-12T09:09:41 |
352525
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Mare",
"YCor",
"abx",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/40297",
"https://mathoverflow.net/users/61949"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626358",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352525"
}
|
Stack Exchange
|
Tensor product of fields 2
Let $K_1, K_2$ be finite field extensions of a field $k$.
Question: Is it true that $A=K_1 \otimes_k K_2$ is isomorphic to a product of group algebras over fields?
Question 2: In case the answer is negative, we still have that $A$ is a symmetric Hopf algebra. What is the group of group-like elements?
Previous question with same title: Tensor products of fields
I am not sure I understand the question. Take $K_2=k$, do you really believe that every finite field extension is a product of group algebras?
@abx In that case $A=K_1 G$ for $G$ the trivial group, right? I think it always works in case $A$ is semi-simple.
Oh, I see, you allow a group algebra over a field extension. Sorry I missed that.
|
2025-03-21T14:48:29.854424
| 2020-02-12T09:57:00 |
352528
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Leonardo",
"Wojowu",
"https://mathoverflow.net/users/132140",
"https://mathoverflow.net/users/30186"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626359",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352528"
}
|
Stack Exchange
|
Analytic function whose derivatives and primitives are independent from a given set of countable cardinality
Let $L=(l_j)_{j\in\mathbb{N}}$ be a set of countably many independent real analytic functions on $[0,2\pi]$. Here and in the following, independent means that a function cannot be written as a finite linear combination of the others.
Can I always find an analytic function on $[0,2\pi]$ such that the function itself, all its derivatives and its primitives (computed as the indefinite integral $\int_0^x$) are independent from $L$? How do I construct it?
For example, if $L=(l_j):=(x^j)_{j\in\mathbb{N}}$, then $e^x$, all its derivatives and all its primitives (in this case $e^x$ as well) are independent from $L$, since $e^x$ is not a polynomial. How do I approach the problem in general? Thanks.
No matter what $L$ is, $e^{ax}$ will work for some $a$.
How does one see that in a rigorous way?
We may assume $L$ contains $x^j$ for all $j$. If $e^{ax}$ or any of its integrals or derivatives was linearly dependent on $L$, then $e^{ax}$ would belong to the linear span of $L$. But the latter has countable dimension, while $e^{ax}$ form an uncountable linearly independent set.
|
2025-03-21T14:48:29.854533
| 2020-02-12T10:41:50 |
352532
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Carlo Beenakker",
"Luis Ferroni",
"https://mathoverflow.net/users/11260",
"https://mathoverflow.net/users/147861"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626360",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352532"
}
|
Stack Exchange
|
Reference for Dedekind's problem
Dedekind's problem is about enumerating antichains in the Boolean lattice.
Is there an explicit reference where Dedekind stated this problem?
Is there a good motivation to study this problem except that it is an old open problem stated by a famous mathematician?
I don't know an exact reference (probably OEIS may provide some). However, for motivation, I can say that I struggled with some computations that have this problem as a particular case. For example, see here https://arxiv.org/abs/1902.00864 an article on which Bruns, Garcia and Moci try to compute the number of irreducible elements on the monoid Q(M) when M is a uniform matroid (I can't add the details here, but if M is the uniform matroid with rank n and n elements, this number reduces to Dedekind's problem). Being able to compute them would be very interesting.
Richard Dedekind, Über Zerlegungen von Zahlen durch ihre größten gemeinsamen Teiler, Gesammelte Werke, 2, pp. 103–148 (1897). regrettably behind a paywall
Speaking as a nonexpert, the enumeration and classification of monotone Boolean functions can give insight into optimization problems in logic, for instance by considering how far off an arbitrary function is from a monotone one. Doing a web search should reveal other motivations for studying Dedekind's problem.
Gerhard "Who Doesn't Like Enumeration Problems?" Paseman, 2020.02.12.
|
2025-03-21T14:48:29.854680
| 2020-02-12T10:46:58 |
352533
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dieter Kadelka",
"Julian Karch",
"Manfred Weis",
"Steve Huntsman",
"https://mathoverflow.net/users/100904",
"https://mathoverflow.net/users/1847",
"https://mathoverflow.net/users/31310",
"https://mathoverflow.net/users/76238"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626361",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352533"
}
|
Stack Exchange
|
Numerical problems floating-point arithmetic
I am trying to calculate the following function in floating-point arithmetic.
$$f(c,z)=\frac{(c-1)z}{(z-1)^2}\left( \sum_{k=2}^{c-1}\frac{1}{c-k}\left(\frac{z-1}{z}\right)^k-\left(\frac{z-1}{z}\right)^c\log(1-z)\right)$$
where $z\in(0,1)$ and $c \in \mathbb{N}$, and $c>1$.
The following implementation, which I display as Matlab code by way of example, works for some inputs.
function res = hypergeo(c,z)
theSum = 0;
shared = (z-1)/z;
for k=2:(c-1)
theSum = theSum+shared^k/(c-k);
end
prefactor = (c-1)*z*(z-1)^(-2);
res = prefactor*(theSum-shared^c*log(1-z));
end
However, for example for $c=100$, and $z=0.1$, it returns -4.5288e+79, which is clearly wrong. I know that the correct answer for this case is $1.001002$.
The problems seem to occur if $z$ is small or $c$ is large. This leads to the terms $\left(\frac{z-1}{z}\right)^k$ and $\left(\frac{z-1}{z}\right)^c$ becoming quite large. For the example, $\left(\frac{0.1-1}{0.1}\right)^{100}=(-9)^{100}=9^{100}$, which is enormous. This leads me to believe that the reason for the function to return the wrong result is some kind of error accumulation due to the finite precision of floating-point arithmetic. Since I have many subtractions in the formula it might be a loss of significance (https://en.wikipedia.org/wiki/Loss_of_significance).
Does anybody see a way to transform the expression such that those numeric problems do not occur anymore? I tried the brute-force solution of increasing the number of bits used (in the R implementation) but this did not resolve the problem. My intuition is that I should somehow avoid those exponential terms but I do not know how.
UPDATE:
I updated the code according to the suggestions of @ManfredWeis. It now reads
function res = hypergeo(c,z)
theSum = 0;
for k=2:(c-1)
theSum = theSum + (z-1)^(k-2)/(z^(k-1)*(c-k));
end
res = (c-1)*(theSum-(z-1)^(c-2)/z^(c-1)*log(1-z));
end
Unfortunately, this did not help much. For $c=100,z=0.1$, I get -4.58e+79.
One possible solution may be to use the module mpmath in Python. See http://mpmath.org or http://code.activestate.com/recipes/576938-numerical-inversion-of-the-laplace-transform-with-/
The first observation is that you can remove the denominator outside the brace by decreasing the exponents in the brace by $2$. The second observation is that it is better to take the difference of two sums than to sum over differences; summing what is to the right of the minus sign amounts to multiplying it by the number of summands.
If you further do the exponentiation separately for numerator and denominator, then you can reduce the powers of the denominator by $1$ if you also pull the $z$ factor of the numerator outside the summation. As a general piece of advice: simplify as much as possible before looking for more elaborate techniques.
https://en.wikipedia.org/wiki/Interval_arithmetic
@ManfredWeis: Thanks for your suggestions! I tried implementing your suggestions for reducing the term outside the brackets. Unfortunately, it does not seem to help much. I am not sure I understand your second observation but just to clarify, I only subtract the term with the log once at the end.
@DieterKadelka: As far as I understand, this allows increasing the number of bits right? It tried that using an R implementation but it did not help, which I still find quite surprising.
@SteveHuntsman This seems like a useful concept. Could you elaborate on how I can use it for my problem?
It is actually pretty simple if you are comfortable with Taylor series (definitely not MO level, so ask on MSE next time). Let $w=\frac{z-1}{z}$. If $|w|<1$, you are in no trouble computing the expression as it is. So let's consider the case $|w|>1$. Then $\frac1{1-z}=1-\frac 1w$, so your expression in parentheses (the one you really have trouble with) becomes
$$
w^c\Big(\frac 1w+\frac 1{2w^2}+\dots+\frac 1{(c-2)w^{c-2}}+\log\big(1-\frac 1w\big)\Big)
\\
=-w^c\sum_{k=c-1}^\infty \frac 1{kw^k}
=-w\sum_{m=0}^\infty \frac 1{(m+c-1)w^m}
$$
If $|w|>2$, say, the series converges pretty fast, so your real trouble is not $z=0.1$ but $z\approx \frac 12$, where the series converges not so fast. However, let's say that you have $15$ decimal digit float point precision. Then your error with direct computation will be, roughly speaking $10^{-15}c|w|^c$ and the number of terms in the series that you should take to make the error coming from the truncation about $10^{-15}$ is going to be $8c$ if $|w|^c>100$, say. So I suggest as a rule of thumb comparing $|w|^c$ to $100$ and if it is less than that, then do the direct computation but if it is above that to take $8c$ terms in the infinite series. The guaranteed (relative) error is then about $10^{-13}c$ which is $<10^{-8}$ (the handheld calculator precision) for all $c<10^5$. If that is not enough, implement higher precision arithmetic yourself or use some ready package and adjust the splitting into cases accordingly.
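For what it's worth, this rule of thumb fits in a few lines of Julia; the following is only a sketch in plain Float64 arithmetic (the cutoff $|w|^c<100$ and the $8c$ terms are the ones suggested above):

function hypergeo_stable(c::Integer, z::Real)
    w = (z - 1) / z
    if c * log(abs(w)) < log(100)       # |w|^c < 100: direct evaluation is safe
        bracket = sum((w^k / (c - k) for k in 2:c-1); init = 0.0) - w^c * log(1 - z)
    else                                # |w|^c large: use the tail series instead
        bracket = -w * sum(1 / ((m + c - 1) * w^m) for m in 0:8*c)
    end
    return (c - 1) * z / (z - 1)^2 * bracket
end

hypergeo_stable(100, 0.1)   # ≈ 1.001002, the value quoted in the question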
Great, this solves my problem thanks! I have no clue about the first equal sign, which translates the finite into an infinite sum. What is happening there? Also, I do not understand how you arrive at the errors. I guess this is related to my lack of knowledge about the Taylor series. I tried reading up on that but was unsuccessful. Could you point me to resources that will allow me to understand your answer?
Oh, and sorry for posting on MathOverflow. I mixed up the two. I actually wanted to post on MSE. I am aware that this is not research level.
|
2025-03-21T14:48:29.855062
| 2020-02-12T12:48:59 |
352539
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Hollis Williams",
"camel8899",
"https://mathoverflow.net/users/119114",
"https://mathoverflow.net/users/152244"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626362",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352539"
}
|
Stack Exchange
|
Expected value of absolute value of shifted binomial distribution
Recently my research requires calculating the closed form of $\mathsf{E}[|X-\frac{n}{2}|]$ where $X$ follows a binomial distribution with parameters $(n,p)$. When $p=\frac{1}{2}$, this is just the mean absolute deviation (MAD) and has a closed form; see this paper for more details. But when $p\neq\frac{1}{2}$ the closed form seems to become tricky. I came up with the idea that we can try to calculate $\lim_{t\rightarrow 2}\mathsf{E}[(X-\frac{n}{2})^\frac{2}{t}]$, but I'm also not familiar with fractional moments. Any references or ideas would be appreciated.
Thanks in advance.
I am not entirely sure what the question is asking, could you give some more details of what you are trying to achieve?
@Tom I'm trying to find the closed form of the expected value for any $p$; if there's no closed form, a sharp approximation would be appreciated, thank you.
Mathematica can only produce a useless, tautological expression for $E|X-n/2|$ in terms of the hypergeometric function. Using Lemma 1 (Todhunter's Formula) in the paper you linked and the expression of the binomial distribution function in terms of the incomplete beta function (see e.g. Lemma 1), one can easily get an expression of $E|X-n/2|$ in terms of the incomplete beta function.
However, an apparently better way to deal with this problem is to provide the following approximation of $E|X-n/2|$, which will be very close to $E|X-n/2|$ if $p$ is not too close to $1/2$. Indeed, for any real $u$ we have $|u|=u-2u\,1_{u<0}$, which implies
$$E|X-n/2|=E(X-n/2)+R_n=n(p-1/2)+R_n,$$
where
$$R_n:=E(n/2-X)1_{X<n/2}.$$
Assuming now $p>1/2$ and using Hoeffding's inequality, we have
$$0\le R_n\le(n/2)P(X<n/2)\le R_n^*:=(n/2)e^{-2n(p-1/2)^2}.$$
The case $p<1/2$ is similar. So, we have
$$|E|X-n/2|-n|p-1/2||\le R_n^*.$$
In particular, this implies that for $n\to\infty$
$$E|X-n/2|\sim n|p-1/2|$$
if $p\ne1/2$ is fixed or, more generally, if $p=p_n$ varies with $n$ so that
$$\liminf_{n\to\infty}\frac{|p_n-1/2|}{\sqrt{(\ln n)/n}}>\frac12.$$
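A quick numerical sanity check of this (with arbitrarily chosen parameters) can be done by summing over the binomial pmf directly; a minimal Julia sketch, using BigInt binomial coefficients to avoid overflow:

function exact_abs_dev(n::Integer, p::Real)
    pmf(k) = Float64(binomial(big(n), k)) * p^k * (1 - p)^(n - k)
    return sum(abs(k - n / 2) * pmf(k) for k in 0:n)
end

n, p = 100, 0.6
exact_abs_dev(n, p)                      # exact E|X - n/2|
n * abs(p - 1 / 2)                       # leading term n|p - 1/2| = 10.0
(n / 2) * exp(-2 * n * (p - 1 / 2)^2)    # R_n^*, the Hoeffding bound on the remainder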
|
2025-03-21T14:48:29.855232
| 2020-02-12T13:10:11 |
352540
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Geva Yashfe",
"Mark Grant",
"Martin Brandenburg",
"Nick Gill",
"Robbie Lyman",
"Siddharth Bhat",
"https://mathoverflow.net/users/123769",
"https://mathoverflow.net/users/135175",
"https://mathoverflow.net/users/2841",
"https://mathoverflow.net/users/75344",
"https://mathoverflow.net/users/801",
"https://mathoverflow.net/users/8103"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626363",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352540"
}
|
Stack Exchange
|
The geometry of the action of the semidirect product
I'm going by the maxim
Groups, like men, are known by their actions
This naturally leads one to ask "given groups $G, H$ which act on sets $S, T$ and the semidirect product $G \rtimes H$, how does one visualize the action of $G \rtimes H$? What does it act on? Some combination of $S$ and $T$? ($S \times T$ perhaps?)
I know some elementary examples, like $D_n \simeq \mathbb Z_n \rtimes \mathbb Z_2$. However, given an unknown situation, I am sure I cannot identify whether it is a semidirect product that is governing the symmetry.
The best responses on similar questions like intuition about semidirect product tend to refer to this as some kind of "direct product with a twist". This is shoving too much under the rug: the twist is precisely the point that's hard to visualize. Plus, not all "twists" are allowed --- only certain very constrained types of actions turn out to be semidirect product. I can justify the statement by noting that:
the space group of a crystal splits as a semidirect product iff the space group is symmorphic --- this is quite a strong rigidity condition on the set of all space groups.
This question on the natural action of the semidirect product identifies one choice of natural space for the semidirect product to act on, by introducing an unmotivated (to me) equivalence relation, which "works out" magically. What's actually going on?
The closest answer that I have found to my liking was this one about discrete gauge theories on physics.se, where the answer mentions:
If the physical space is the space of orbits of $X$ under an action of $H$, i.e., the physical space is $P \equiv X / H$, and this space $P$ is then acted upon by $G$, then to extend this to an action of $G \rtimes H$ on $X$ we need a connection.
This seems to imply that the existence of a semidirect product relates to the ability to consider the space modulo some action, and then some action per fiber. I feel that this also somehow relates to the short exact sequence story (though I don't know exact sequences well):
Let $1 \rightarrow K \xrightarrow{f}G \xrightarrow{g}Q \rightarrow 1$ be a short exact sequence. Suppose there exists a homomorphism $s: Q \rightarrow G$ such that $g \circ s = 1_Q$. Then $G = im(f) \rtimes im(s)$. (Link to theorem)
However, this is still to vague for my taste. Is there some way to make this more rigorous / geometric? Visual examples would be greatly appreciated.
(NOTE: this is cross posted from math.se after getting upvotes but no answers)
About the quote: why only men?
One point that might interest you: if $H$ is a $G$-group and $X$ is an $H$-set and a $G$-set such that $g\cdot(h\cdot x)=(g\cdot h)\cdot(g\cdot x)$, then $X$ is an $H\rtimes G$-set with action $(h,g)\cdot x=h\cdot(g\cdot x)$.
Your first sentence after the maxim is slightly problematic: you can't talk about the semidirect product of $G\rtimes H$ as there are, in principle many (as many as there are homomorphisms $G\to Aut(H)$). So, to form your question correctly, you need THREE inputs: the action of $G$ on $S$, the action of $H$ on $T$, and the action of $G$ on $H$ described by whichever homomorphism $G\to Aut(H)$ that you choose.
This is a bit of a special example in a couple ways, but maybe it might be useful for building intuition.
Suppose $1 \longrightarrow H \longrightarrow G \longrightarrow \mathbb{Z} \longrightarrow 1$ is a short exact sequence of groups. The algebraic situation you describe at the end of your question is sometimes described as saying “the sequence splits.” It’s a fact that’s not hard to prove that any sequence of the above form splits.
Algebraically, there is an element $\Phi\in\operatorname{Aut}(H)$ such that if $t$ is a generator for $\mathbb{Z}$ and $g \in H \le G$, we have $tgt^{-1} = \Phi(g)$.
Topologically, such a short exact sequence of groups corresponds to a fiber bundle over the circle, in the sense that if $E \longrightarrow S^1$ is a bundle with fiber $F$, there is such a short exact sequence with $\pi_1(F)$ playing the role of $H$ and $\pi_1(E)$ playing the role of $G$.
In good situations, (e.g. if $F$ is a $K(H,1)$), we can realize the algebraic picture topologically: if there is some map $f\colon F \to F$ such that the action of $f$ on the fundamental group yields $\Phi$, then we can build the bundle $E$ in the following way:
$$ E = (F \times [0,1])/_{(x,1) \sim (f(x),0)}.$$
In other words, $E$ is built by taking the product of $F$ with an interval (think of the interval as the “vertical” direction), and then gluing the “top” of the product to the “bottom” via $f$. We call $E$ the “mapping torus” of $f$.
Thinking geometrically may not always be clear from here. E.g. if $F$ is a manifold admitting a Riemannian metric of non-positive curvature and $f\colon F \to F$ is a diffeomorphism, the bundle $E$ may or may not also admit a Riemannian metric of non-positive curvature. The product $E = F\times S^1$ always does in this situation, of course, but for instance when $F$ is a $2$-torus, the only other examples of bundles $E$ admitting a non-positively curved metric are finitely covered by the product.
What’s going on here is that especially in low dimensions, the geometry of $E$ is really intimately tied up with the dynamics of the action of $f$ (or $\Phi$) on $F$ (or $H$).
Thank you for the answer, I'll look into this! Forgive me, I don't know some of the terms you're using. What's $K(H, 1)$? what does it mean for the action on the fundamental group to yield $\Phi$? do you mean that $f$ acting on the fundamental group "looks like" $\phi$?
@SiddharthBhat No worries, I was a little fast and loose. The unhelpful definition is that a $K(H,1)$ is a topological space with fundamental group $H$ and all other homotopy groups trivial.
The point is that if $X$ is a $K(H,1)$, any automorphism $\Phi\colon H\to H$ may be represented by a homotopy equivalence $f\colon X\to X$ in the following sense:
We need to suppose that $f$ fixes a point $p$ of $X$. In this case, $f$ sends loops based at $p$ to loops based at $p$, so there is a well-defined map $f_\sharp\colon\pi_1(X,p) \to \pi_1(X,p)$. The claim that $f$ is a homotopy equivalence says that $f_\sharp$ is an automorphism, and I can choose it so that $f_\sharp = \Phi$ under the identification $\pi_1(X,p) = H$.
The case of the $2$-torus is especially nice: not only can I choose $f$ to be a homotopy equivalence, I actually can choose it to be a linear diffeomorphism.
Suppose $K\circlearrowright X$ is an action of $K$ on some set. We have the structure homomorphism $\varphi:K\rightarrow \text{Sym}(X).$
Let $H$ act on $K$ by automorphisms, and suppose these automorphisms can be realized as inner automorphisms within $\text{Sym}(X)$. That is, the action is given by some $\theta:H\rightarrow\text{Sym}(X)$ such that conjugations by $\theta(H)$ leave $K$ invariant. Equivalently, the $H$-action takes stabilizer subgroups of $K\circlearrowright X$ to stabilizers (it may permute them nontrivially).
Then we can construct an action of $K\rtimes H$ on $X$ by $(k,h).x = \varphi(k)\cdot\theta(h).x$, where the product $\varphi(k)\theta(h)$ is just taken in $\text{Sym}(X)$. Here, the multiplication rule for $K\rtimes H$ is
$$(k_1, h_1)\cdot (k_2,h_2) = (k_1 \cdot (h_1.k_2), h_1 h_2). $$
Any action arises in this way, since $H$ acts on $K$ by conjugation in the semidirect product $K\rtimes H$ and therefore it also acts by conjugation in the image under the structure morphism $K\rtimes H \rightarrow \text{Sym}(X)$ of an action of the semidirect product on $X$. This explains when an action of $K$ can be extended to an action of $K \rtimes H$.
It is not clear how to visualize the above. So let's pass to a nice special case.
Given another action of $H$ on a set $Y$, we can extend $H\circlearrowright Y$ to an action $K\rtimes H \circlearrowright Y$ (let the latter act via the quotient $K\rtimes H\twoheadrightarrow H$). Then we can produce an action of $K\rtimes H \circlearrowright Y \times X$. This action descends to the quotient $H\circlearrowright Y$ so that $K$ fixes each fiber $\{y\}\times X$, and $H$ acts by permuting fibers and "twisting" $X$.
If the $H$-action is faithful, the action of any element $(k,h)$ can be nicely separated into an $h$-part and a $k$-part, the $h$-part being uniquely identified by the action on $Y$. Thus given a permutation $\sigma$ of $Y\times X$ we can write that it is of the form $(k,h)$ for a known $h$, and then compute the permutation $(y,x)\mapsto \sigma\left((e,h^{-1}).(y,x)\right)$, which acts the same as $(k,h)\cdot(e,h^{-1}) = (k,e)$.
One gets some examples which appear different, but are isomorphic to these, by choosing different identifications between fibers than $\text{id}_X : \{y_1\}\times X \rightarrow \{y_2\}\times X$. These identifications may be analogous to the connection described in the question.
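As a toy sanity check of the construction in the first half of this answer, take the familiar example $D_n\simeq \mathbb{Z}_n\rtimes\mathbb{Z}_2$ acting on the $n$ vertices of a polygon (this particular example is chosen only for illustration): $K=\mathbb{Z}_n$ acts by rotations, $H=\mathbb{Z}_2$ by a reflection, and $H$ acts on $K$ by $k\mapsto -k$, which is exactly conjugation by the reflection inside $\text{Sym}(X)$. A minimal Julia sketch verifying that $(k,h).x=\varphi(k)\cdot\theta(h).x$ really is an action for the semidirect multiplication rule:

n = 7
rot(k, x)  = mod(x + k, n)                # φ(k): rotation by k on the vertices 0,…,n-1
refl(h, x) = h == 0 ? x : mod(-x, n)      # θ(h): identity or the reflection
act(k, h, x) = rot(k, refl(h, x))         # (k,h).x = φ(k)·θ(h).x

twist(h, k) = h == 0 ? k : mod(-k, n)     # the H-action on K by automorphisms
mult(k1, h1, k2, h2) = (mod(k1 + twist(h1, k2), n), mod(h1 + h2, 2))

# check (k1,h1).((k2,h2).x) == ((k1,h1)·(k2,h2)).x for all choices
all(act(mult(k1, h1, k2, h2)..., x) == act(k1, h1, act(k2, h2, x))
    for k1 in 0:n-1, h1 in 0:1, k2 in 0:n-1, h2 in 0:1, x in 0:n-1)   # true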
About the first half: There is nothing specific about the symmetric group here, you can replace it by an arbitrary group and have the description of the usual universal property of the semidirect product, i.e. the functor $\hom(K \rtimes H,-)$.
@MartinBrandenburg - it is just that group actions $G\circlearrowright X$ come with a map $G \rightarrow \text{Sym}(X)$, which is convenient to use here. Maybe I misunderstand your point.
|
2025-03-21T14:48:29.855863
| 2020-02-12T13:21:19 |
352541
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Bartosz Bartmanski",
"Carl-Fredrik Nyberg Brodda",
"Geva Yashfe",
"https://mathoverflow.net/users/120914",
"https://mathoverflow.net/users/152265",
"https://mathoverflow.net/users/75344"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626364",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352541"
}
|
Stack Exchange
|
Find the largest subset of all binary arrays of length $n$ with $r$ ones which have pairwise distance greater than $m$
Let $\Omega = \left\lbrace x : |x| = r \, \text{for} \, x \in \mathbb{Z}_2^n \right\rbrace$ for $r \in \mathbb{N}$. We want to find the biggest subset of $\Omega$, $\Gamma = \left\lbrace x \in \Omega : |x - y| \geq m ,\, \forall x, y \in \Gamma \right\rbrace$ for a given value of $m \in \mathbb{N}$.
Where does this problem originate? Is this not saying to find the biggest subset of $\Omega$ which is an $(m-1)$-error detecting code?
This problem was given to me by biologists who were interested in improving experimental design. It's about organising samples and getting the most out of a single run of experiments
I don't know whether there is an "explicit" solution in terms of the size of $\Gamma$ in full generality, but depending on how much information you have about $\Omega$, you may be able to deduce some bounds on the size of $\Gamma$ by using the packing radius (and the associated sphere packing).
See https://en.wikipedia.org/wiki/Constant-weight_code
What you are looking for is precisely the optimal (largest cardinality) constant weight (this weight is $r$ in your case) binary code with length $n$ and distance $m$, which is a well-researched and very difficult problem in general.
Let this quantity be denoted $A(n,m,r)$ in your terminology. In fact normally, this is denoted $A(n,d,w)$ with $d$ the minimum distance and $w$ the constant weight. There are tables of upper (see here) and lower (see here) bounds for small values of the parameters.
Clearly, a constant weight code in general will have fewer codewords than an unrestricted code. So, general upper bounds on code cardinality will also upper bound a constant weight code.
Let $A(n,d)$ be the largest possible number of codewords in such an unrestricted binary code with length $n$ and minimum distance $d.$ Then by the fact that $r-$spheres around codewords must be disjoint where $r=\lfloor (d-1)/2\rfloor,$
such a code $\Omega$ must obey
$$
\#\Omega\leq \frac{2^n}{\sum_{k=0}^r \binom{n}{k}}
$$
where the denominator is the volume of the Hamming sphere of radius $r.$
|
2025-03-21T14:48:29.856154
| 2020-02-12T14:11:21 |
352544
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Goldstern",
"Hannes Jakob",
"Noah Schweber",
"https://mathoverflow.net/users/138274",
"https://mathoverflow.net/users/14915",
"https://mathoverflow.net/users/8133"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626365",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352544"
}
|
Stack Exchange
|
Amoeba forcing adds a null set covering all old null sets
I am currently reading a paper by Goldstern, Kellner and Shelah, in which they, pretty nonchalantly, state "Amoeba forcing will add a null set covering all old null sets", without proving this fact or giving a reference. The only thing I could find that would prove this statement was in the Bartoszynski book "Set Theory: On the structure of the Real Line", where he states that any two Amoeba forcings are equivalent, which would mean that, even when forcing with just one Amoeba forcing, for any $n\in\omega$, there would be a set of measure $<\frac{1}{n}$ (by a standard density argument), covering all old null sets, which would imply the statement. But I cannot seem to understand his proof, therefore I am wondering if perhaps there is an easier way, i.e. simply constructing an $\mathbb{A}_{1/2^{n+1}}$-generic Filter from a $\mathbb{A}_{1/2^n}$-generic filter.
The conditions in Amoeba forcing are open sets of measure less than $1/2$. (Say, in $2^\omega$.)
A condition $q$ is stronger than $p$ iff $q \supseteq p$. (Alternatively, use closed sets of measure greater than $1/2$. Then stronger conditions will be smaller.)
For a generic filter $G$ let $U_G$ be the union of all sets in $G$. An easy density argument shows that $U_G$ is an open set of measure $1/2$. (I can elaborate if necessary)
Another easy density argument shows that every ground model null set is contained in $U_G$. More precisely, whenever $c$ is a (natural) code for a $G_\delta$ null set, then the $G_\delta$ null set $B_c$ described by $c$ (in $V[G]$) is a subset of $U_G$.
Every "rational" infinite $01$-sequence (i.e., with only finitely many $1$'s) defines a measure-preserving translation map $x \mapsto x+t$ on $2^\omega$.
Every rational translate of $U_G+t $ has the same property as $U_G$: measure $1/2$, and it covers every ground model $G_\delta$ null set.
(Proof: given such a null set $N$, also the set $N-t$ is null, hence covered by $U_G$, so $N$ is covered by $U_G+t$.)
Now we get to the main point: How to get from $1/2$ down to $0$?
Let $U'_G:= \bigcap_t (U_G + t)$, where $t$ ranges over all rational sequences. Then $U'_G$ is a $G_\delta$ set, its measure is at most $1/2$, but by Kolmogorov's 0-1-law, its measure must be $0$. But $U'_G$ still covers all ground model null sets.
Remark: If you do an iteration of Amoeba forcing (of limit length) rather than a single forcing, you could (but why would you?) replace the above argument by the following: the first Amoeba forcing gives you an open set of measure 1/2, the second a new open set of measure 1/3, etc. Now take the intersection of all these sets.
I'm sorry for this question coming so late, but I'm not quite sure how you can apply Kolmogorov's 0-1 law. The statement of this law that I read on Wikipedia requires random variables that are independent. If we take $X_t$ to be the uniform distribution on $U_G+t$, then I don't know how to prove independence, since this does not hold for arbitrary sets, even of measure 1/2: for example, take $U:=[(0,0)]\cup[(1,0)]$. Then $U+(1)=U$, but since $U$ has measure 1/2, $U$ and $U+(1)$ cannot be independent.
First, here is a link to the wikipedia article. Second: Consider the space $2^\omega$, with the random variables $X_n:2^\omega\to 2$ defined by $X_n(x)=x(n)$. They are independent and generate the $\sigma$-algebra of Borel sets. For any positive measurable set $S\subseteq 2^\omega$ (in particular, for any open set of measure $1/2$), the set $\bigcap _t S+t$ is a tail event.
That clears it up, thank you very much. And by the way, it is a great feeling to be asking a question about a paper only to have it answered by one of the authors!
I believe the following works:
Let $C$ be Cantor space and for $\sigma\in 2^{<\omega}$ let $C_\sigma=\{f\in C: \sigma\prec f\}$. There is a canonical bijection $i_\sigma:C_\sigma\rightarrow C$ given by cutting off the initial $\sigma$.
Suppose $G$ is amoeba-generic over $V$. For $\sigma\in 2^{<\omega}$ let $G_\sigma=i_\sigma[G\cap C_\sigma]$. The point is:
By "spending measure elsewhere," for each $\epsilon>0$ there will be some $\sigma$ with $m(G_\sigma)<\epsilon$.
But by the usual "engulfing" argument, we'll have $N\subseteq G_\sigma$ whenever $N$ is null in the ground model.
So the $\Pi^0_2$ set $\bigcap_{\sigma\in 2^{<\omega}}G_\sigma$ is null and covers all ground model null sets.
EDIT: Of course the first bulletpoint above is the heart of the argument, so let me explain why it's true.
First, note that by genericity it's enough to prove the following:
$(*)$ Suppose $A\subseteq C$ is open with $m(A)<{1\over 2}$. Then for all $\delta>0$ there is some $\sigma\in 2^{<\omega}$ such that ${m(A\cap C_\sigma)\over m(C_\sigma)}<\delta$.
This implies the first bulletpoint, and is what "spending measure elsewhere" refers to: supposing we have a condition $A$ and an $\epsilon>0$, let $\sigma$ be the string gotten by applying $(*)$ with $\delta={\epsilon\over 2}$. Then we consider some larger open $A'$ with $A'\cap C_\sigma=A\cap C_\sigma$ and ${1\over 2}-m(A')<{m(C_\sigma)\epsilon\over 2}$. We'll have that if $G$ extends $A'$ then $m(i_\sigma[G_\sigma])<\epsilon$ as desired.
So it just remains to prove $(*)$. For this, look at the complement $A^c$ of our set and note that it is a non-null set and hence on some interval has relative measure arbitrarily close to $1$.
Note that we really do need to think about intervals specifically - or at least some canonical countable collection of opens - since at the end we need a countable intersection to get the desired result. The variation of $(*)$ gotten by shifting from $C_\sigma$s to arbitrary open sets is trivial but unhelpful.
I am not quite sure how we can show that for some $\sigma$, $m(G_{\sigma})<\epsilon$, since $i_{\sigma}[M]$ can be of measure larger than $M$ (for example, set $\sigma={(0,1)}$ and $M=C_{\sigma}$; then $\mu(M)=1/2$, but $\mu(i_{\sigma}[M])=\mu(2^{\omega})=1$), so I think that some problems arise if $G$ is very uniform.
@HannesJakob I've edited to include this argument.
But how can you assume that G extends $A'$? The set of $B$ s.t. $B\cap C_{\sigma}=A\cap C_{\sigma}$ is not dense below $A$, since, if $C$ satisfies $C\cap C_{\sigma}\supsetneq A\cap C_{\sigma}$, then the same is true for any condition stronger than C and allowing the $\sigma$ to change can again bring drastic changes in the measure of $A\cap C_{\sigma}$ relative to $C_{\sigma}$.
|
2025-03-21T14:48:29.856585
| 2020-02-12T15:19:29 |
352548
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Aidan Rocke",
"JCK",
"https://mathoverflow.net/users/130706",
"https://mathoverflow.net/users/17773",
"https://mathoverflow.net/users/56328",
"kodlu"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626366",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352548"
}
|
Stack Exchange
|
analytic approximations of the min and max operators
Question:
What is the state of the art on analytic approximations of $\min$ and $\max$? My hunch is that numerical analysts probably have a better solution than the one I propose here.
For any $\epsilon$, I'd like analytic approximations $f$ that
Are accurate to within $\epsilon$, in the sense that
$$|f(V)-\max(V)|<\epsilon$$
for any vector $V$ in $[-1000,1000]^{100}$, and
Minimize the expected relative error:
$$E\left[\left|\frac{f(V)-\max(V)}{\max(V)}\right|\right]$$
where $V$ is a random vector uniformly distributed in $[-1000,1000]^{100}$.
Motivation:
Within the context of optimisation, differentiable approximations of the min and max operators are very useful. In particular I am looking for $f_N,g_N \in C^\infty$:
\begin{equation}
f_N: \mathbb{R}^n \rightarrow \mathbb{R}
\end{equation}
\begin{equation}
g_N: \mathbb{R}^n \rightarrow \mathbb{R}
\end{equation}
where $\forall x \in \mathbb{R}^n \forall \epsilon >0 \exists N \in \mathbb{N}$:
\begin{equation}
\forall m > N, \max(\lvert f_m(x)-\max(x) \rvert,\lvert g_m(x)-\min(x) \rvert) < \epsilon
\end{equation}
I found a few proposed solutions to a related question on MathOverflow but when I tested these methods I found that none of them were numerically stable with respect to the relative error $\frac{\lvert \delta x \rvert}{\lvert x \rvert}$. Without much loss of generality, I shall focus on approximations of the max operator.
In total, I analysed three different analytic approximations to the max operator $\forall X \in \mathbb{R}^n, \forall N \in \mathbb{N}$:
The generalised mean:
\begin{equation}
GM(X,N)= \big(\frac{1}{n}\sum_{i=1}^n x_i^N\big)^{\frac{1}{N}} \tag{1}
\end{equation}
Exponential generalised mean:
\begin{equation}
EM(X,N)= \frac{1}{N} \cdot \log \big(\frac{1}{n}\sum_{i=1}^n e^{N \cdot x_i}\big) \tag{2}
\end{equation}
The smooth max:
\begin{equation}
SM(X,N)= \frac{\sum_{i=1}^n x_i \cdot e^{N \cdot x_i}}{\sum_{i=1}^n e^{N \cdot x_i}} \tag{3}
\end{equation}
and found that all of these methods were vulnerable to overflow errors. In fact, I created an IJulia notebook where I analysed each method.
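For instance, the overflow in (2) shows up already for a two-element vector, since $e^{N\cdot x_i}$ is infinite in double precision once $N\cdot x_i$ exceeds roughly $709$; the usual log-sum-exp shift (subtracting the largest entry before exponentiating, which is similar in spirit to the rescaling proposed below) keeps the evaluation finite. A small Julia illustration:

X, N = [999.0, 1000.0], 1
log(sum(exp.(N .* X)) / length(X)) / N               # Inf: naive evaluation of (2)

m = maximum(X)                                        # shift by the largest entry
m + log(sum(exp.(N .* (X .- m))) / length(X)) / N     # ≈ 999.6, finite and close to max(X)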
My proposed method:
This motivated me to come up with my own solution inspired by the properties of the infinity norm where I first rescale the vectors so they have zero mean and unit variance:
\begin{equation}
AM(\hat{X},N) = \sigma_X \cdot \big(\frac{1}{N} \log \big(\sum_{i=1}^n e^{N\cdot \hat{x_i}} \big)\big) + \mu_{X} \tag{4a}
\end{equation}
\begin{equation}
\hat{X} = \frac{X - 1_n\cdot \mu_X}{\sigma_X} \tag{4b}
\end{equation}
whose partial derivative with respect to $\hat{x_i}$ is simply the softmax:
\begin{equation}
\frac{\partial}{\partial \hat{x_i}} AM(\hat{X},N) = \frac{e^{N \cdot \hat{x_i}}}{\sum_{i=1}^n e^{N\cdot \hat{x_i}}} \tag{5}
\end{equation}
This may be be used to approximate both the min and max operators on $\mathbb{R}^n$ in the Julia programming language:
using Statistics
function analytic_min_max(X::Array{Float64, 1},N::Int64,case::Int64)
"""
An analytic approximation to the min and max operators
Inputs:
X: a vector from R^n where n is unknown
N: an integer such that the approximation of max(X)
improves with increasing N.
case: If case == 1 apply analytic_min(), otherwise
apply analytic_max() if case == 2
Output:
An approximation to min(X) if case == 1, and max(X) if
case == 2
"""
if (case != 1)*(case != 2)
return print("Error: case isn't well defined")
else
## q is the degree of the approximation:
q = N*(-1)^case
mu, sigma = mean(X), std(X)
## standardise the vector so it has zero mean and unit variance:
Z_score = (X.-mu)./sigma
exp_sum = sum(exp.(-Z_score*q))
log_ = log(exp_sum)/q
return (log_*sigma)+mu
end
end
and as expected it passed the numerical stability test that I defined:
function numerical_stability(method,type::Int64)
"""
A simple test for numerical stability with respect to the relative error.
Input:
method: the approximation used
type: 1 for min() and 2 for max()
Output:
Check that the average relative error is less than 10%.
"""
## test will be run 100 times
relative_errors = zeros(100)
for i = 1:100
## a vector sampled uniformly from [-1000,1000]^100
X = (2*rand(100).-1.0)*1000
## the test for min operators
if type == 1
min_ = minimum(X)
relative_errors[i] = abs(min_-method(X,i))/abs(min_)
## the test for max operators
else
max_ = maximum(X)
relative_errors[i] = abs(max_-method(X,i))/abs(max_)
end
end
return mean(relative_errors) < 0.1
end
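For instance, since the test calls method(X, i) with two arguments, the three-argument function above can be passed in through an anonymous wrapper (my own usage sketch):
numerical_stability((X, N) -> analytic_min_max(X, N, 1), 1)   # test the min approximation
numerical_stability((X, N) -> analytic_min_max(X, N, 2), 2)   # test the max approximation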
References:
Wikipedia contributors. Smooth maximum. Wikipedia, The Free Encyclopedia. March 25, 2019, 21:07 UTC. Available at: https://en.wikipedia.org/w/index.php?title=Smooth_maximum&oldid=889462421. Accessed February 12, 2020.
alwayscurious (https://stats.stackexchange.com/users/194748/alwayscurious), What is the reasoning behind standardization (dividing by standard deviation)?, URL (version: 2019-03-18): https://stats.stackexchange.com/q/398116
Sergey Ioffe & Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015.
J. Cook. Basic properties of the soft maximum. Working Paper Series 70, UT MD Anderson Cancer
Center Department of Biostatistics, 2011. https://www.johndcook.com/soft_maximum.pdf
M. Lange, D. Zühlke, O. Holz, and T. Villmann, "Applications of lp-norms and their smooth approximations for gradient based learning vector quantization," in Proc. ESANN, Apr. 2014, pp. 271-276.
Aidan Rocke. analytic_min-max_operators (2020). GitHub repository, https://github.com/AidanRocke/analytic_min-max_operators
At least from the deep learning side, I think the practice is to avoid overflow concerns by making sure the input to the loss has been passed through a sigmoid function, or some other order-preserving normalization.
@MattF. That is correct.
@MattF. I appreciate your edit and I think I will add comments to the code as well. And yes, I am looking for functions that are ideally analytic so I like the way you framed the question.
Your link to the J Cook paper seems to be dead
@kodlu Thank you for pointing this out. It should work now.
|
2025-03-21T14:48:29.856980
| 2020-02-12T16:19:21 |
352551
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Martin Tancer",
"Moishe Kohan",
"https://mathoverflow.net/users/15650",
"https://mathoverflow.net/users/39654"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626367",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352551"
}
|
Stack Exchange
|
Contractibility and orientation double cover
Question. Let $M$ be a triangulated non-orientable 3-manifold with non-orientable boundary. (It is possible to assume that the boundary is the Klein bottle.) Let $\ell$ be a non-orientable loop on the 1-skeleton of the boundary (the regular neighborhood of this loop inside the boundary is the Möbius band and the regular neighborhood of this loop considered in $M$ is the solid Klein bottle). Let me also assume that the resulting space $M/\ell$ after contracting the loop $\ell$ is a contractible space. (Equivalently, after gluing a disk along $\ell$, we get a contractible space.)
Let $M'$ be the orientation double cover of $M$ and $\ell'$ be the loop which covers $\ell$ (this is a single loop by the assumptions on $\ell$). Is it true that after contracting $\ell'$ in $M'$, we get a contractible space?
Remarks and background. This question is an easy-to-formulate special case of something more general that I would like to be true. It concerns a certain analysis of some contractible singular 3-manifolds. (I am interested in the special cases when the contractibility can be recognized algorithmically.)
In this special case, the only example of $M$ and $\ell$ satisfying the assumptions that I am aware of is the following: $M$ is the solid Klein bottle and $\ell$ is the loop which is homotopic to the core curve of this solid Klein bottle.
The homology of $M'/\ell'$ should be OK. The difficulty, in my opinion, is the fundamental group $\pi(M'/\ell')$. There is a well-established theory for computing $\pi(M')$ from $\pi(M)$. However, the trouble is that $\pi(M)$ is not completely known.
You get more examples as follows: Take a knot $K\subset S^3$ which is invariant under an (orientation-reversing) involution $\tau$ with exactly two fixed points, both of which lie on $K$. Then take $M=Ext(K)/\tau$, where $Ext(K)$ is the complement of an open tubular neighborhood of $K$. However, in this example $M'/\ell'$ is still contractible. (All examples with $M'/\ell'$ contractible are obtained this way.)
Thanks! This is a useful example.
|
2025-03-21T14:48:29.857160
| 2020-02-12T16:25:49 |
352552
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dieter Kadelka",
"Mark",
"Nate Eldredge",
"Robert Furber",
"https://mathoverflow.net/users/100904",
"https://mathoverflow.net/users/117762",
"https://mathoverflow.net/users/131781",
"https://mathoverflow.net/users/4832",
"https://mathoverflow.net/users/61785",
"https://mathoverflow.net/users/95282",
"user131781",
"user95282"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626368",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352552"
}
|
Stack Exchange
|
Is the separability of the space needed in the proof of the Prohorov's theorem?
The Section 5 of the book:
Billingsley, P., Convergence of Probability Measures, 1999,
studies Prohorov's theorem. A short reminder is given below.
Let $\Pi$ be a family of probability measures on $(S,\mathcal{F})$. We call $\Pi$ relatively compact if every sequence of elements of $\Pi$ contains a weakly convergent subsequence. The family $\Pi$ is tight if for every $\epsilon$ there is a compact set $K$ such that $P(K)>1-\epsilon$ for every $P$ in $\Pi$.
The direct half of Prohorov's theorem is given in Theorem 5.1: If $\Pi$ is tight, then it is relatively compact.
The converse half of Prohorov's theorem is given in Theorem 5.2: Suppose that $S$ is separable and complete. If $\Pi$ is relatively compact, then it is tight.
My question: In the proof of Theorem 5.2 (i.e. relatively compact $\Rightarrow$ tight), we use separability and completeness of the space $S$. On the other hand, in the proof of Theorem 5.1 (i.e. tight $\Rightarrow$ relatively compact), I know that we do not need completeness of $S$, but I do not know whether we need separability. I didn't find a place where separability is used in the proof of Theorem 5.1. So my question is: do we need the separability of the space $S$ in the direct half of Prohorov's theorem or not?
Remarks:
I know of proofs of the same theorem that use separability (e.g. Note).
In most books Prohorov's theorem is given as one theorem on Polish spaces, so they assume separability in both halves. It usually goes like this: Let $S$ be a Polish space and $\Pi$ a collection of probability measures on $S$. Then $\Pi$ is tight if and only if it is relatively compact.
The reason I am asking is that I would like to use the direct half of Prohorov's theorem on the problem I am currently working on. The space $S$ in my case is complete but not separable.
Help with this would be great and needed. Thanks in advance.
Note that if $\Pi = \{\mu\}$ then $\Pi$ is compact but without additional assumptions $\mu$ is not tight!
@DieterKadelka, thank you for the comment. This is true, and it is useful to keep in mind for Theorem 5.2 (relatively compact $\Rightarrow$ tight). It explains why we need separability and completeness there (if I am not mistaken).
You are correct that separability is not needed. However, there is also not really any loss of generality in assuming it. For suppose that $\Pi$ is tight. Then for every $n$ there exists a compact set $K_n$ such that $\mu(K_n) > 1-\frac{1}{n}$ for all $\mu \in \Pi$. So if we set $S_0 = \bigcup_{n=1}^\infty K_n$, then $S_0$ is separable and $\mu(S_0) = 1$ for all $\mu \in \Pi$. We can now view $\Pi$ as a set of probability measures on $S_0$, and it is still tight (since the $K_n$ are also compact in $S_0$). The separable case of the theorem then implies that $\Pi$ is weakly relatively compact in $\mathcal{P}(S_0)$, i.e. every sequence in $\Pi$ has a subsequence converging weakly in $\mathcal{P}(S_0)$, and you can easily check that such a subsequence also converges weakly in $\mathcal{P}(S)$. So $\Pi$ is weakly relatively compact in $\mathcal{P}(S)$, as desired.
In other words, once you have a tight family, then all those measures live on a separable subset of $S$ anyway, so the rest of the space is irrelevant and might as well not be there.
Nate Eldredge, thank you for the answer. The last sentence explains it all. In the problem I am currently working on, $S$ is $C([0,T];\mathcal{M})$ or $L^{p}(0,T;\mathcal{M})$ or something similar. Here $\mathcal{M}$ represents the space of Radon measures, some subspace of it, or some other similar space of measures, and I use weak convergence of measures. So, if I am not mistaken, I could apply the direct half of Prohorov's theorem (i.e. Theorem 5.1) in those cases?
@Mark: Hm, could you explain more precisely what spaces you have in mind? If $\mathcal{M}$ is really all the Radon measures on some space, with the weak topology, then that is not metrizable, and so neither is $C([0,T],\mathcal{M})$. On the other hand, if $\mathcal{M}$ is just the probability measures, then it is Polish and so is $C([0,T];\mathcal{M})$; in particular it's separable and the issue doesn't arise. And in either case I don't know how to define $L^p([0,T];\mathcal{M})$ because the weak topology on $\mathcal{M}$ isn't a normed space.
Nate Eldredge, I work on a problem where I have one process that is in $L^{\infty}\cap BV$ and another process that is in $C([0,T];H^k)$, $k>2$, $k$ an integer. I should compare them on a space that contains both of them, and later apply Prohorov's theorem on that new space. One of my coworkers recommended trying spaces such as $C([0,T];\mathcal{M})$ where $\mathcal{M}$ is some subspace of the space of Radon measures, $C([0,T];\mathcal{D^{'}})$, or $L^p([0,T];\mathcal{M})$.
Also, it is important to note that the two processes I mentioned above are stochastic. If $\mathcal{M}$ is the space of probability measures, this might work for the probability laws of the two processes I mentioned, but not for the processes themselves. In that case I could take $\mathcal{M}$ to be the space of probability measures with finite first moment and use the Wasserstein metric $W_1$ on it. Also, as I recall, the space of distributions $\mathcal{D}^{'}$ isn't metrizable either, so the space $C([0,T];\mathcal{D}^{'})$ shouldn't work.
With regard to your question about Prohorov spaces, quite a lot of work was done on this in the 70’s—I’m not sure about more recent work. It seems to have more to do with completeness properties than with separability. Topologically complete spaces have this property but there are separable metrisable spaces which fail. The references that I know are D. Preiss (Z. Wahrscheinlichkeitstheorie 34 (1973) 109-126) and F. Topsøe (Math. Scand. 34 (1974) 187-210). There is also material in the two volume “Measure Theory” by V. Bogachev.
By the way, if you are interested in function spaces with values in the space of measures, then it seems natural to me to consider the vector space of signed measures. This is a Banach space but the norm topology is much too strong for most purposes. However, it has a natural complete lc topology which is just right in the sense that the underlying space embeds into the measures with a suitable universal property. The catch is that this space doesn’t fit into the mainstream classes of lcs’s—it is a Waelbroeck space when the topological space is compact, a CoSaks space otherwise.
@user131781 thanks for the references. I wasn't aware of the two works you suggested. The space of signed measure is of course one of the options, but I do not know how would I use it (also I have never heard of Waelbroeck and CoSaks spaces). I am sure that it is possible to view $L^{\infty}\cap BV$ and $H^k$-valued spaces as the signed measures but I am not sure how.
I meant this in the sense that these spaces are complete locally convex spaces and so the extension of various concepts of smoothness of scalar-valued functions to ones with values in spaces of measures are well established. I thought that this was what you were after.
@user131781, thanks for the clarification. I will try to implement everything I learned from this page during the next week. Hope that it will help me solve my problem. Thanks again.
Separability is not necessary. In fact, tightness of a family of Borel probability measures implies relative compactness in the vague/weak-* topology on any completely regular space. For instance, this can be found in volume 4 of Fremlin's Measure Theory. Specifically Proposition 437U (b) shows that tight families are compact in the narrow topology, and 437K (c) shows that for completely regular spaces, the narrow topology agrees with the weak-* topology.
My original answer below answers the wrong question - the question is about whether tight implies relatively compact, rather than the other way.
Let $\kappa$ be a real-valued measurable cardinal, and $\mu : \mathcal{P}(\kappa) \rightarrow [0,1]$ a probability measure vanishing on singletons. Consider $\kappa$ to be a discrete metric space. Then the 1-element family $\{\mu\}$ is compact, because it is a singleton, but it is not tight because all compact subsets of $\kappa$ are finite sets, so have measure zero.
You say that completeness is not necessary, but (unless you are making an extra assumption) it is, for essentially the same reason - there are separable metric spaces with Borel measures on them that are not tight.
@NateEldredge You are quite right - how silly of me!
Robert Furber, thank you for the answer. The spaces $S$ that I have in my problem are for example $C([0,T];\mathcal{M})$ or $L^{p}(0,T;\mathcal{M})$ where $\mathcal{M}$ represents space of Radon measures or some subset of it, and I use weak convergence. If I understood correctly I could apply Theorem 5.1 on it?
This is just an addition to the answers below but might be of interest. There is a natural lc topology on $C^b(S)$, the strict topology (R.C. Buck), for which the dual is the space of bounded (signed) tight measures and the uniformly tight sets are just the equicontinuous ones. Hence they are automatically relatively weakly compact. The converse implication is much more delicate but it is true for paracompact, locally compact spaces. Of course, there are stronger results for certain non-locally compact spaces if one confines attention to probability measures.
@RobertFurber Your answer is for a general completely regular space, so you have to be careful about the definition of a relatively compact set of probability measures. The definition given by OP is equivalent to the usual one (used by Fremlin) when $S$ is metrizable but not in general.
@user131781, thanks for the interest in this. Could you give an examples for your last sentence (if one confines attention to probability measures)?
@user95282 Clearly I was not having a good reading comprehension day when I wrote the answer. However, I don't want to re-bump the question with an edit to my answer, and I should say that when I say "compact", I never mean sequentially compact (unless the two are equivalent, of course). Incidentally, if you know of a space that can be proven to be compact, but not sequentially compact, without using the axiom of choice, then it would help here.
@Mark Even though I'm not 100% sure of how $L^p([0,T];\mathcal{M})$ is defined because $\mathcal{M}$ is not a Banach space in the weak-* topology, it will be a completely regular space because it's a topological vector space, and likewise for $C([0,T];\mathcal{M})$. So the theorem applies, with the caveat (pointed out by user95282) that it is for compactness, not necessarily the sequential compactness that you used as the definition of compactness in the question.
@RobertFurber, thanks for the follow up. I was pretty sure that Prohorov's theorem should work on $C([0,T];\mathcal{M})$. And I saw a few papers that use the spaces similar to $L^p([0,T];\mathcal{M})$, where instead of $\mathcal{M}$ it could be the subset of some negative Sobolev spaces or the space of distributions etc., but they didn't use Prohorov's theorem. I was thinking of using similar technique like them but on my problem (and for that I need that on those spaces Prohorov's theorem apply).
|
2025-03-21T14:48:29.857941
| 2020-02-12T16:39:52 |
352556
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Deane Yang",
"https://mathoverflow.net/users/613"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626369",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352556"
}
|
Stack Exchange
|
Gauss curvature of a fibre as a submanifold in a Riemannian warped product
Consider the Riemannian warped product $M^{n+1}=I\times\mathbb{S}^n$ with metric
\begin{align}
g=dt\otimes dt+f(t)^2g_{\mathbb{S}^n}
\end{align}
where $I\subseteq\mathbb{R}$ is some open interval and the warping function $f:I\to\mathbb{R}$ is smooth and positive. Fix a $t_0\in I$. Following O'Neill's terminology, I will call $\Sigma_0:=\{t_0\}\times\mathbb{S}^n$ a fiber of $M$, and view it as a Riemannian submanifold of $M$. Let me denote the induced metric by $g_0$. In this case, we have
\begin{align}
g_0=f(t_0)^2g_{\mathbb{S}^n}
\end{align}
Denote the components of $g_0$ by $g_{ij}$.
After some computations, the second fundamental form of $\Sigma_0$ is given by
\begin{align}
h_{ij}=\frac{f'(t_0)}{f(t_0)}g_{ij}
\end{align}
where $f'=\frac{df}{dt}$. I will omit $t_0$ in below. From this we deduce that
$\Sigma_0$ is totally umbilical.
The mean curvature is given by
\begin{align}
H=\lambda_1+\cdots+\lambda_n=\frac{nf'}{f}
\end{align}
The Gauss curvature is given by
\begin{align}
K=\lambda_1\cdots\lambda_n=\left(\frac{f'}{f}\right)^n
\end{align}
On the other hand, since $g_0$ is just a scalar multiple of the round metric, the fiber $(\Sigma_0,g_0)$ is a round sphere of radius $f(t_0)$, and so we should have
\begin{align}
K=\frac{1}{f^n}
\end{align}
instead.
This yields two different answers for the Gauss curvature, so something must be wrong. Hence I would like to know where the error is.
Any comments, hints, and answers are greatly appreciated.
You're missing a term involving the intrinsic (Riemann) curvature of $M$.
More specifically, the intrinsic nature of the Gauss curvature is derived from the formula: $$ h_{ik}h_{jl} - h_{il}h_{jk} . = R_{ijkl} - \tilde{R}_{ijkl}, $$ where $R$ is the Riemann curvature of $\Sigma_0$ and $\tilde{R}$ is the Riemann curvature of $M$.
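To spell the comment out for this example (a sketch; the warped-product curvature formula below is standard, e.g. in O'Neill, and is quoted here rather than derived): for an orthonormal pair $e_i,e_j$ tangent to the fiber, the ambient sectional curvature is
\begin{align}
K_M(e_i,e_j)=\frac{1-f'(t_0)^2}{f(t_0)^2},
\end{align}
and the Gauss equation above then gives the intrinsic sectional curvature of $\Sigma_0$:
\begin{align}
K_{\Sigma_0}(e_i,e_j)=K_M(e_i,e_j)+h_{ii}h_{jj}-h_{ij}^2=\frac{1-f'^2}{f^2}+\frac{f'^2}{f^2}=\frac{1}{f^2},
\end{align}
which is exactly the curvature of the round sphere of radius $f(t_0)$. The product $\lambda_1\cdots\lambda_n=(f'/f)^n$ is the extrinsic Gauss–Kronecker curvature, which need not agree with any intrinsic curvature of $(\Sigma_0,g_0)$ when the ambient space is curved.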
|
2025-03-21T14:48:29.858205
| 2020-02-12T16:46:25 |
352557
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dieter Kadelka",
"https://mathoverflow.net/users/100904",
"https://mathoverflow.net/users/121692",
"πr8"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626370",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352557"
}
|
Stack Exchange
|
Monotonicity of $\mathbf{P} ( \bar{X}_N > 0 )$ in $N$
Let $X$ be a real-valued random variable with positive expectation (wlog, $\mathbf{E}[X] = 1$, say).
For $N \in \mathbf{N}$, let $X_1, \cdots, X_N$ be independent, identically-distributed copies of $X$, and let $\bar{X}_N = \frac{1}{N} \sum_{i =1}^N X_i$ be their sample mean. Now, consider the quantity
$$p_N \triangleq \mathbf{P} ( \bar{X}_N > 0 ) \in [0, 1].$$
My question is: Is it known whether $p_N$ is increasing with $N$?
Intuitively, it seems like it ought to be (edit, added after answer: I meant to say `eventually' here). If it can be proved with some moment assumption on $X$, I would also be happy with that, though it would be nice to do so without this assumption. A counter-example would also be interesting.
The answer is that $p_N$ is not necessarily increasing. Note that $\mathbb{P}(\bar X_N > 0) = \mathbb{P}(X_1 + \ldots + X_N > 0)$. Put $\mathbb{P}(X=1) = 0.99$ and $\mathbb{P}(X=-98) = 0.01$. Then $\mathbb{E}X_1 = 0.01 > 0$, but $p_2 < p_1$.
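For the record, the numbers behind this counterexample (a trivial check in Julia):
p1 = 0.99          # P(X̄_1 > 0): only the outcome X_1 = 1 is positive
p2 = 0.99^2        # P(X̄_2 > 0): need X_1 = X_2 = 1, since 1 - 98 < 0 and -196 < 0
println((p1, p2))  # (0.99, 0.9801), so p_2 < p_1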
Thank you, that makes sense. Do you get the sense that there would be any nontrivial assumptions one could make on $X$ which would cause it to be increasing?
Only an idea: Assume that $X$ has symmetric distribution, i.e. $X \sim -X$ and $X_i \sim X + \mu$ with $\mu > 0$.
|
2025-03-21T14:48:29.858317
| 2020-02-12T16:51:27 |
352558
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"ABIM",
"Luc Guyot",
"https://mathoverflow.net/users/36886",
"https://mathoverflow.net/users/84349"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626371",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352558"
}
|
Stack Exchange
|
Non-existent matrices with "essential zeros"
Is there a non-constant continous function $f:\mathbb{R}\rightarrow \mathbb{R}$ and matrices $A=\begin{pmatrix}
a_1 & 0\\
0 & a_2\\
\end{pmatrix}$ and
$B=\begin{pmatrix}
b_1 & 0\\
0 & b_2\\
\end{pmatrix}$ for which there does not exist any matrices $C\in \mathrm{Mat}_{d\times 2},D \in \mathrm{Mat}_{2\times d}$ and vectors $c \in \mathbb{R}^d,e\in \mathbb{R}^2$ such that:
$
D f_d\left(C
\begin{pmatrix}
x\\
x
\end{pmatrix}
+c
\right) + e = Af_2
\left(B
\begin{pmatrix}
x\\
x
\end{pmatrix}
\right) \qquad (\forall x \in \mathbb{R})
$
$D,e,C,$ and $c$ only have non-zero entries,
where $f_m(x)= (f(x_1),\dots,f(x_m))$ denotes the componentwise application of $f$.
Since $C(x , x)^t = xC(1, 1)^t$, that is $x$ times the sum of the two columns of $C$, we may replace $C$ with a fully unconstrained $d$-dimensional vector $c'$.
Don't $f(x) = \text{exp}(x)$ and $A = B = Id_2$ fit the bill?
@LucGuyot Is this only because $exp(x)$ is non-affine and injective?
No, the function $x \mapsto x^3$ does not fit your requirements.
Assuming that you allow $C(1, 1)^t$ to be zero, it is equivalent to ask whether there is a non-zero continuous function $f$ which doesn't lie in the linear span of $\{x \mapsto f(cx + c')\}_{c \in \mathbb{R},\, c' \in \mathbb{R} \setminus \{0\}}$. Therefore $x \mapsto \text{exp}(x)$ is not an example; I retract my suggestion. But $x \mapsto \text{exp}(-1 /x^2)$ does the job.
So actually it seems any non-analytic function would work...
Yes, we can find such a triple $(f, A, B)$.
Let us first observe that OP's question can be rephrased as follows.
Question. Find a continuous function $f: \mathbb{R} \rightarrow \mathbb{R}$
such that $f$ does not lie in the $\mathbb{R}$-linear span $L(f)$ of $\{ x \mapsto f(cx + c') \}_{(c, c') \in \mathbb{R} \times \mathbb{R} \setminus \{0\}}$.
For instance, the continuous extension $f$ of $x \mapsto \text{exp}(- 1 / x^2)$ is such that $f \notin L(f)$. Indeed, any function in $L(f) $ is analytic in a neighbourhood of 0 whereas $f$ isn't. I believe that many analytic functions, including $f(x) = \text{exp}(x^2)$, can be shown to satisfy $f \notin L(f)$, but a simple proof of this fact still eludes me.
By contrast, if $f$ is a real-valued polynomial function over $\mathbb{R}$ then $L(f)$ is the $\mathbb{R}$-vector space of the polynomial functions of degree at most $\deg(f)$, so that $f \in L(f)$. To see this, one may use the Taylor series of $f(x + c)$ together with a well-known result on Vandermonde matrices.
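To spell out the Vandermonde step (my own phrasing of the argument just sketched): writing $d = \deg(f)$, for every $c' \neq 0$ we have
$$f(x + c') = \sum_{k=0}^{d} \frac{(c')^k}{k!} f^{(k)}(x).$$
Choosing $d+1$ distinct non-zero values $c'_0, \dots, c'_d$ yields a linear system whose matrix $\big((c'_j)^k/k!\big)_{j,k}$ has non-zero determinant (it is a rescaled Vandermonde matrix), so every derivative $f^{(k)}$, and in particular $f$ itself, lies in the linear span of the functions $x \mapsto f(x + c'_j)$, hence in $L(f)$. Since $f, f', \dots, f^{(d)}$ have degrees $d, d-1, \dots, 0$, they span the space of polynomials of degree at most $d$; the reverse inclusion $L(f) \subseteq \{\text{polynomials of degree at most } d\}$ is clear.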
It is also immediate to check that periodic functions and the exponential function $f(x) = \text{exp}(x)$ satisfy $f \in L(f)$. They can be used to build algebras of functions satisfying this property.
|
2025-03-21T14:48:29.858523
| 2020-02-12T16:58:32 |
352559
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626372",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352559"
}
|
Stack Exchange
|
Spectrum of large random asymmetric matrices with correlation
Background:
In their paper, Sommers, Crisanti, Sompolinsky and Stein derive the spectral distribution of large random matrices $\mathbf{J}$ by studying the following integral:
\begin{equation}
I=\left[\int\left(\prod_i \frac{d^{2} z_{i}}{\pi}\right) \exp \left\{-\epsilon \sum_{i}\left|z_{i}\right|^{2}-\sum_{i, j, k} z_{i}^{*}\left(\omega^{*} \delta_{i k}-J_{k i}\right)\left(\omega \delta_{k j}-J_{k j}\right) z_{j}\right\}\right]_J
\end{equation}
Where the brackets $[\dots]_J$ denote ensemble average with the following measure:
$$
P(\mathbf{J})\propto \exp\left\{-\frac{N}{2\left(1-\tau^{2}\right)} \sum_{i j} J_{i j}^{2}+\frac{\tau N}{2\left(1-\tau^{2}\right)} \sum_{i j} J_{i j} J_{j i}\right\}
$$
The parameter $-1<\tau<1$ represents the correlation between the off-diagonal elements $J_{ij}$ and $J_{ji}$. When $\tau=1$ the matrix is symmetric.
By carrying out the average over the distribution of $J_{ij}$ and neglecting $\mathcal{O}(1/N)$ terms they find:
$$
I=\int\left(\prod_i \frac{d^{2} z_{i}}{\pi}\right) \exp \left\{-N\left(\epsilon r+\ln (1+r)+\frac{r x^{2}}{1+r(1+\tau)}+\frac{r y^{2}}{1+r(1-\tau)}\right)\right\}.
$$
where $Nr\equiv\sum_i|z_i|^2$
My issue:
When I carry out the average over the $\mathbf{J}$ distribution I also find the $(1+r)$ term, but what escapes my understanding is how they obtain $(1+r(1+\tau))$ and $(1+r(1-\tau))$ in the denominators rather than $(1+r)$.
When I set $\tau=0$ I recover their results, but when $\tau\neq0$ I cannot see how the value in the denominator ends up different from the value in the $\ln$. If someone understands, or can guess, how they did it, I would appreciate an explanation.
Of course, any remark or advice is always appreciated. Thank you.
(Outline of my work):
If this helps, here are the main steps of my derivation.
I expand the terms in the integral:
$$\begin{equation}
I= \int\left(\prod_{i} \frac{d^{2} z_{i}}{\pi}\right) \exp \left\{-\epsilon \sum_{i}\left|z_{i}\right|^{2}-\sum_i |\omega|^2|z_i|^2+\sum_{ij}z_i^*\omega^*J_{ij}z_j+\sum_{ij}\omega z_i^* J_{ji}z_j-\sum_{i,j,k}z_i^*J_{ki}J_{kj}z_j -\frac{N}{2(1-\tau^2)}\sum_{ij}J_{ij}^2+\frac{\tau N}{2(1-\tau^2)}\sum_{ij}J_{ij}J_{ji}\right \}
\end{equation}
$$
Then, following their advice, I decouple the terms quadratic in $J_{ij}$ with a complex Gaussian transformation (based on the Hubbard–Stratonovich method):
$$
\begin{equation}
\exp \left(-\sum_{i,j,k}z_i^*J_{ki}J_{kj}z_j\right)=\exp \left(-\sum_{k}\left | \sum_j z_jJ_{kj}\right|^2\right)= \int \left(\prod_{k} d^2m_k\right) \exp \left(- \sum_k m_k m_k^* \pm i \sum_{kj}z_j^*J_{kj} m_k \pm i \sum_{kj}z_jJ_{kj} m_k^*\right )
\end{equation}
$$
Then, I can perform the average over $\mathbf{J}$:
\begin{equation}
I\propto \int\int\left(\prod_{i} \frac{d^{2} z_{i}}{\pi}\right)\left(\prod_{i} d^{2} m_{i}\right) \exp \left\{-\sum_i|z_i|^2(\epsilon+|\omega|^2)-\sum_i|m_i|^2+\frac{1}{2N}\sum_{i j}b_{ij}^2 + \frac{\tau}{2N}\sum_{ij}b_{ij}b_{ji} \right \}
\end{equation}
where:
\begin{equation}
b_{ij}=(\omega^*z_i^*z_j+\omega z_iz_j^*+\mathrm{i}(m_iz_j^*+m_i^*z_j))
\end{equation}
Ignoring the $\mathcal{O}(1/N)$ terms, I integrate over the $\mathrm{d}^2m_i$ and finally obtain:
\begin{equation}
= \int \exp \left\{-N\left ( \epsilon r+\log(1+r) +\frac{rx^2(1-r\tau + \tau r^2 (\tau +1))}{1+r}+\frac{ry^2(-1+r\tau + \tau r^2 (\tau -1))}{1+r}\right) \right \}
\end{equation}
This is correct for $\tau=0$ but clearly wrong when $\tau\neq0$. I went carefully over my computations, so my mistake is more likely one of understanding than mere negligence.
Again, any remark or criticism that would help me improve is appreciated. Thank you for your time and your help!
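One way to sanity-check any proposed resolution is to sample the ensemble numerically: under $P(\mathbf{J})$ the off-diagonal pairs $(J_{ij},J_{ji})$ are jointly Gaussian with variance $1/N$ and covariance $\tau/N$, so they can be generated directly. A rough Julia sketch (the construction below is only illustrative, with the scaling read off from the measure above):
using LinearAlgebra

## sample J with E[J_ij^2] = 1/N and E[J_ij J_ji] = tau/N for i != j
function sample_J(N, tau)
    A = randn(N, N) / sqrt(N)
    B = randn(N, N) / sqrt(N)
    J = zeros(N, N)
    for i in 1:N, j in (i+1):N
        J[i, j] = A[i, j]
        J[j, i] = tau*A[i, j] + sqrt(1 - tau^2)*B[i, j]
    end
    for i in 1:N
        J[i, i] = sqrt(1 + tau)*A[i, i]   # Var(J_ii) = (1+tau)/N under the same measure
    end
    return J
end

lambda = eigvals(sample_J(1000, 0.5))
## scatter(real.(lambda), imag.(lambda)) should fill, approximately, an ellipse
## with semi-axes 1+tau and 1-tau, the support derived in the paper.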
|
2025-03-21T14:48:29.858753
| 2020-02-12T17:24:28 |
352563
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Gerhard Paseman",
"Nandakumar R",
"Yaakov Baruch",
"https://mathoverflow.net/users/142600",
"https://mathoverflow.net/users/2480",
"https://mathoverflow.net/users/3402"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626373",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352563"
}
|
Stack Exchange
|
From a given triangle, to cut 2 mutually congruent convex pieces that together 'use' maximum area of the triangle
Two planar regions are congruent if one can be made to perfectly coincide with the other by translation, rotation or reflection (flipping over).
The Problem: Given a triangular region T, how will we cut from it, 2 mutually congruent convex pieces that together use the highest fraction of the area of T? Let us call such a partition, the best congruent 2-partition of T.
And which is the specific shape of triangle T such that the largest fraction of its area gets left over (’wasted’) under its best congruent 2-partition?
Note: One can generalize from T and ask for a general convex region C that 'wastes' the largest fraction of its area when given its best congruent 2-partition.
An attempt on this question was posted in
R Nandakumar, Cutting Mutually Congruent Pieces from Convex Regions, arXiv:1012.3106.
Please note that the document discusses only partitions into convex pieces. I am not aware of any conclusive results on this question.
A variant (this bit is being added following this later post: On finding optimal convex planar shapes to cover a given convex planar shape): One can also try to find 2 - or in general n - mutually congruent convex regions of maximum perimeter that can be drawn inside a given triangle T or general convex region C, and compare this optimal region with the maximally area-utilizing one drawn inside T.
This formulation is unclear. If you divide a region into pieces, how does that not cover all of the region? Are you looking for the largest duomino that fits in a triangle? I am inventing the term duonimo to describe the disjoint union of two convex regions which are congruent. Gerhard "Duonimo Should Be A Word" Paseman, 2020.02.12.
Thank you. I meant how to cut FROM T two convex pieces that together 'use' the highest fraction of the area of T. Edited question to reflect this better.
I was wondering if this problem is equivalent to finding the largest isosceles triangles fitting inside the given one, but it seems that the linked paper offers clever counterexamples to that.
Yes, experiments indicate one can 'use' more of the given triangle if the two congruent pieces are allowed to have more than 3 sides, at least for some input triangles. But in that document, I couldn't prove much conclusively.
|
2025-03-21T14:48:29.858951
| 2020-02-12T17:53:45 |
352564
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Mario Giambarioli",
"Tony Huynh",
"https://mathoverflow.net/users/152281",
"https://mathoverflow.net/users/2233"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626374",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352564"
}
|
Stack Exchange
|
Odd cycle transversal
Suppose we have a graph G. Let B be a fundamental basis of the cycle space of G. Let LP be a linear programming problem with a variable for each vertex of G; each variable can take values $\geq 0$, and for each odd cycle of B we add to LP the constraint $x_{a} + x_{b} + x_{c} + ... + x_{i} \geq k$, where $x_{a},x_{b},x_{c},...,x_{i}$ are the vertices of the cycle and $k$ is the number of vertices of the cycle. The objective function of LP is $\min\sum\limits_{i=1}^{n}{x_{i}}$.
Let S be an optimal solution of LP. Can we say that each vertex whose variable takes a value $\gt 0$ in S is a vertex of at least one minimum odd cycle transversal of G?
No. Let $G$ be the graph obtained by gluing a $3$-cycle $abc$ and a $5$-cycle $cdefg$ together at vertex $c$. Then $(x_a, x_b, x_c, x_d, x_e, x_f, x_g)=(0,0,3,1,1,0,0)$ is an optimal solution of the LP. However, neither $d$ nor $e$ are contained in a minimum odd cycle transversal of $G$, since $\{c\}$ is the unique minimum odd cycle transversal of $G$.
The answer is also no to the updated question in the comment below. Here is a counterexample for both versions. Let $C_n$ be an odd cycle with $n \geq 5$. Fix $e=ab \in E(C_n)$ and let $T=C_n - e$. Let $G$ be the graph obtained from $C_n$ by adding all edges $f$ such that $T \cup f$ contains an even cycle. Then, the fundamental basis of $G$ with respect to $T$ contains exactly one odd cycle, namely $C_n$. Thus, $x=(\frac{1}{n}, \dots, \frac{1}{n})$ is an optimal solution to the revised LP (see comment below) and $x=(1, \dots, 1)$ is an optimal solution to the original LP. However, it is not true that every vertex of $G$ is in a minimum odd cycle transversal, because the only minimum odd cycle transversals of $G$ are $\{a\}$ and $\{b\}$.
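(A quick numerical check of the first counterexample, assuming the JuMP and GLPK packages are available; the vertex labels $a,\dots,g$ are mapped to indices $1,\dots,7$:)
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x[1:7] >= 0)                               # (x_a, ..., x_g)
@constraint(model, x[1] + x[2] + x[3] >= 3)                 # odd 3-cycle abc
@constraint(model, x[3] + x[4] + x[5] + x[6] + x[7] >= 5)   # odd 5-cycle cdefg
@objective(model, Min, sum(x))
optimize!(model)
objective_value(model)   # 5.0, attained e.g. by (0,0,3,1,1,0,0), yet {c} is the only minimum transversal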
First of all, thank you for the answer. Then I would like to change the question just a little bit: what if we replace the constraint x_a+x_b+x_c+...+x_i >= k with x_a+x_b+x_c+...+x_i >= 1 for each odd cycle of B?
I am sorry, but I do not understand your counterexample: if we suppose n = 5, C_n is the odd cycle with edges ab, bc, cd, de, ea. T is the graph with edges bc, cd, de, ea. G is the graph with edges ab, ad, bc, bd, cd, ce, de, ea. G has 5 odd cycles: abcde, abd, bcd, cde, dea. Now it is not true that a is an odd cycle transversal of G, because if we remove the vertex a, the odd cycle cde is still alive (the same if we remove the vertex b). I am sorry if I have misunderstood your counterexample.
You are supposed to add the edges $be$ and $ac$ (not $ce, bd$, and $ad$) to $C_5$.
Ok, I think that if we add the constraint $x_{a} + x_{b} + x_{c} + ... + x_{i} \geq 1$ for each odd cycle of G (not only for the odd cycles of B), then we can say that each vertex whose variable takes a value $\gt 0$ in an optimal solution of LP is a vertex of at least one minimum odd cycle transversal of G. Is that right?
|
2025-03-21T14:48:29.859157
| 2020-02-12T19:04:34 |
352569
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626375",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352569"
}
|
Stack Exchange
|
Expansion of product of simple Lie group
(a quite technical question if you want to skip).
I am looking at the paper Breuillard, Green, Guralnick, and Tao - Expansion in finite simple groups of Lie type; specifically, Proposition 8.4.
Proposition 8.4. Let $r \in \mathbf{N}$ and $\varepsilon>0$. Suppose $G=G_{1} G_{2}$, where $G_{1}$ and $G_{2}$ are products of at most $r$ finite simple (or quasisimple) groups of Lie type of rank at most $r$. Suppose that no simple factor of $G_{1}$ is isomorphic to a simple factor of $G_{2}$. If $x_{1}=x_{1}^{(1)} x_{1}^{(2)}, \ldots, x_{k}=x_{k}^{(1)} x_{k}^{(2)}$ are chosen so that $\left\{x_{1}^{(1)}, \ldots, x_{k}^{(1)}\right\}$ and $\left\{x_{1}^{(2)}, \ldots, x_{k}^{(2)}\right\}$ are both $\varepsilon$-expanding generating subsets in $G_{1}$ and $G_{2}$ respectively, then $\left\{x_{1}, \ldots, x_{k}\right\}$ is $\delta$-expanding in $G$ for some $\delta=\delta(\varepsilon, r)>0$.
There are two cases. The first one is OK. In the second one, $\lvert G_2\rvert \ge \lvert G_1\rvert^{\beta/5}$. In this case, $m = O_c(\log \lvert G\rvert)$, and $\delta$ seems to be a constant, independent of $\epsilon$.
My question is whether there is anything we can say about $\delta$ besides $\delta>0$. For example, $\delta>1/2$ would be nice, or $\delta> \lvert G_1\rvert/\lvert G_2\rvert$ or $\delta > C\epsilon$ (that would be best).
To put it in more concrete terms, I don't see how $m$ is obtained from a $\lvert G\rvert^\nu$-approximate group using the weighted Balog–Szemerédi–Gowers theorem.
Another question which I don't understand is: it is implied that $(1/2-\beta/5) > 0$ in the first case. Why is that?
|
2025-03-21T14:48:29.859285
| 2020-02-12T19:53:17 |
352570
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"YCor",
"https://mathoverflow.net/users/14094"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:626376",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/352570"
}
|
Stack Exchange
|
Does Higman's embedding theorem hold inside group varieties?
Suppose $\mathfrak{U}$ is a variety of groups. Let $F_n(\mathfrak{U})$ denote the relatively free group of rank $n$ in $\mathfrak{U}$.
Suppose $G \in \mathfrak{U}$ is a finitely generated group. We call $G$ finitely presented in $\mathfrak{U}$ iff $\exists n \in \mathbb{N}$ and finite $A \subset F_n(\mathfrak{U})$ such that $G \cong \frac{F_n(\mathfrak{U})}{\langle \langle A \rangle \rangle}$. We call $G$ recursively presented in $\mathfrak{U}$ iff $\exists n \in \mathbb{N}$ and recursively enumerable $A \subset F_n(\mathfrak{U})$ such that $G \cong \frac{F_n(\mathfrak{U})}{\langle \langle A \rangle \rangle}$.
My question is:
Is it true, that a finitely generated group is recursively presented in $\mathfrak{U}$ iff it is isomorphic to a finitely generated subgroup of a group finitely presented in $\mathfrak{U}$?
This fact is true for the variety of abelian groups thanks to linear algebra; it was proved for the variety of all groups by Higman, and for the Burnside varieties by Olshanski.
However, I do not know, whether it is true in general.
This question on MSE
You probably need to choose recursively enumerable $A\subset F_n$ rather than $F_n(\mathfrak{U})$ where it does not make sense a priori. For Burnside varieties, I guess you mean "Burnside varieties of large enough exponent".
|