added (string; 2025-03-12 15:57:16 to 2025-03-21 13:32:23) | created (timestamp[us]; 2008-09-06 22:17:14 to 2024-12-31 23:58:17) | id (string; length 1 to 7) | metadata (dict) | source (string; 1 class) | text (string; length 59 to 10.4M)
---|---|---|---|---|---|
2025-03-21T14:48:31.263415
| 2020-06-18T06:23:43 |
363405
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Federico Poloni",
"Glorfindel",
"https://mathoverflow.net/users/1898",
"https://mathoverflow.net/users/70594",
"https://mathoverflow.net/users/74539",
"lcv"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630177",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363405"
}
|
Stack Exchange
|
Is the matrix $A_{nm} = 1/(2-a_n-a_m)$ with $0 \le a_n < 1$ for $n = 1,2,\ldots,N$ positive?
Is the $N \times N$ matrix $A_{nm} = 1/(2-a_n-a_m)$, where $0 \le a_n < 1$ for $n=1,2,\ldots,N$, non-negative? For the $2 \times 2$ case the answer is yes: the diagonal entries are positive and the determinant is non-negative.
I have played around with the $3 \times 3$ case and have found no counterexamples. I think the result may be true in general; if so, this must be well known. This came up in the theory of $E_0$-semigroups.
Thanks for listening.
Reminds me of a Hilbert or Cauchy matrix.
Could you please use LaTeX math for better readability?
@YCor ah, right. The problem with MathJax in titles is that you don't see a preview, and the necessary {} around nm was already present in the body but not in the title.
The answer is Yes for the following reason. Use the fact that
$$\frac1K=\int_0^1t^{K-1}dt.$$
Then
$$A=\int_0^1S(t)dt$$
where
$$S(t)={\rm Mat}(t^{1-a_n-a_m})=tV(t)\otimes V(t),\qquad V(t)=\begin{pmatrix} t^{-a_1} \\ \vdots \\ t^{-a_N} \end{pmatrix}.$$
Since each $S(t)$ is symmetric positive semi-definite, so is $A$. Actually, when the $a_n$ are pairwise distinct the vectors $V(t)$ span ${\mathbb R}^N$, and therefore $A$ is positive definite.
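Not part of the original thread, but the integral-representation argument above can be sanity-checked numerically; a minimal sketch with NumPy, assuming the $a_n$ are sampled (and hence pairwise distinct almost surely):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
a = rng.uniform(0.0, 1.0, size=N)  # 0 <= a_n < 1, pairwise distinct a.s.

# A_{nm} = 1/(2 - a_n - a_m) = \int_0^1 t^{1 - a_n - a_m} dt
A = 1.0 / (2.0 - a[:, None] - a[None, :])

# an integral of rank-one positive-semidefinite matrices S(t):
# A should come out symmetric positive definite
eigs = np.linalg.eigvalsh(A)
print(eigs.min() > 0)
```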
Notation: I will use the symbols $A>0$ and $A\geq 0$ to denote that $A$ is positive definite / semidefinite respectively (Löwner order).
Let $D > 0$ be the diagonal matrix with $D_{nn} = 1-a_n$. Then, direct verification shows that
$$DA+AD = E,$$ where $E$ is the matrix of all ones. This is a Lyapunov equation for $A$, and it is known that its solution is positive semidefinite when $D>0$ and $E \geq 0$. This result can also be proved directly in this case: let $v$ be an eigenvector of $A$ with eigenvalue $\lambda$ (real because $A$ is symmetric); multiply the equation on the left by $v^*$ and on the right by $v$ to get
$$
2\lambda(v^*Dv) = v^*Ev.
$$
Hence
$$
\lambda = \frac{v^*Ev}{2\,v^*Dv} \geq 0.
$$
Thus the eigenvalues of $A$ are non-negative. Note that $A>0$ does not hold in general; there are counterexamples (e.g., when all the $a_i$ are zero, $A = E/2$ has rank one).
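As a quick numerical check (not from the thread), the Lyapunov identity and the resulting sign of the spectrum can be verified with NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
a = rng.uniform(0.0, 1.0, size=N)
A = 1.0 / (2.0 - a[:, None] - a[None, :])
D = np.diag(1.0 - a)   # D_{nn} = 1 - a_n > 0
E = np.ones((N, N))    # matrix of all ones

# the identity DA + AD = E holds entrywise:
# (1-a_n)/(2-a_n-a_m) + (1-a_m)/(2-a_n-a_m) = 1
print(np.allclose(D @ A + A @ D, E))

# consequently all eigenvalues of A are non-negative
print(np.linalg.eigvalsh(A).min() >= -1e-12)
```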
|
2025-03-21T14:48:31.263591
| 2020-06-18T07:41:40 |
363410
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630178",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363410"
}
|
Stack Exchange
|
Do asymptotically conformal maps converge to a weakly conformal map?
$\newcommand{\CO}{\text{CO}_2}$
$\newcommand{\SO}{\text{SO}_2}$
$\newcommand{\dist}{\text{dist}}$
$\newcommand{\M}{\mathcal{M}}$
$\newcommand{\N}{\mathcal{N}}$
Let $\M,\N$ be two-dimensional smooth, compact, connected, oriented Riemannian manifolds, with or without boundaries.
Let $f_n \in W^{1,2}(\M,\N)$ with $Jf_n \ge 0$. Suppose that $f_n \rightharpoonup f$ in $W^{1,2}$, and that $\int_{\M} \dist^2( df_n,\CO ) \to0$. Must $df \in \CO$ a.e.?
If the answer is positive, then known regularity results imply that $f$ is smooth, and so $df \in \CO$ everywhere. Thus, it must be either constant or a local diffeomorphism.
Here $\CO =\{\lambda R : R \in \SO,\ \lambda \ge 0\} $, where the "copies" of $\SO,\CO$ at each point implicitly depend on the metrics of $\M,\N$ at these points**, so
$$2\dist^2(df,\CO)=|df|^2-2Jf=(\sigma_1-\sigma_2)^2,$$ where $\sigma_1, \sigma_2$ are the singular values of $df$.
** For $p \in \M$, $df_p \in \text{Hom}(T_p\M,T_{f(p)}\N)$, and the notion of "$\SO$" depends on the metrics on $T_p\M,T_{f(p)}\N$, and in particular on the image point $f(p)$: $$\text{SO}(T_p\M,T_{f(p)}\N) \subseteq \text{Hom}(T_p\M,T_{f(p)}\N).$$
Here is a proof for the case where $\M=\Omega_1,\N=\Omega_2$ are nice Euclidean domains:
Let $K \subseteq \Omega_1$ be compact. The "higher integrability of Jacobians" implies that $ Jf_n \rightharpoonup Jf $ in $L^1(K)$. Thus,
$$
\lim_{n\to \infty} \|df_n\|_{L^2(K)}^2=2\lim_{n\to \infty} \int_K Jf_n=2 \int_K Jf \le\|df\|_{L^2(K)}^2
$$
Since $df_n \rightharpoonup df$ in $L^2$, and the $L^2$-norm is weakly lower semicontinuous, $\|df\|_{L^2(K)}=\lim_{n\to \infty} \|df_n\|_{L^2(K)}$.
In particular, we have $2 \int_K Jf =\|df\|_{L^2(K)}^2$ which implies $f$ is conformal.
The problem with generalizing this argument to manifolds is that the weak $L^1$ convergence of the Jacobians no longer holds.
|
2025-03-21T14:48:31.263724
| 2020-06-18T10:30:34 |
363419
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"YCor",
"https://mathoverflow.net/users/14094"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630179",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363419"
}
|
Stack Exchange
|
Are all intermediate growth branch groups just-infinite?
Are all branch groups of intermediate growth just-infinite? I can't seem to find an answer to this one way or another; the question is motivated by the fact that all examples of intermediate-growth branch groups I know of are just-infinite, and by this paper by Nekrashevych in which he constructs a family of branch groups, all but one of which are just-infinite and of intermediate growth, the remaining one being neither.
This paper by Bartholdi, Grigorchuk and Sunic is my main reference on branch groups so far.
I asked this question on math.SE two days ago without success, so I'm reposting it here hoping I'll be luckier.
It is useful context that a branch group is always just-non-virtually-abelian (this was said in a deleted answer). In particular, the question is equivalent to whether a branch group of intermediate growth can have an infinite virtually abelian quotient.
MathSE original post: https://math.stackexchange.com/questions/3721841/
|
2025-03-21T14:48:31.263814
| 2020-06-18T10:39:24 |
363421
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alexander Chervov",
"Mark Wildon",
"https://mathoverflow.net/users/10446",
"https://mathoverflow.net/users/7709"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630180",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363421"
}
|
Stack Exchange
|
Lie theoretic meaning to $e^{\text{cycle}} = \text{permutation}$?
It is well known that exponentiating the EGF (exponential generating function) for cycles gives the EGF for permutations: link here. This is usually summarized under the catchy slogan all = exp(connected).
I wonder if it is possible to give a Lie-theoretic explanation of this phenomenon: the similarity to group = exp(algebra) is tantalizing.
Is there some way to relate the counting done by the EGF to an actual exponential map between the 'algebra of cycles' and the group $S_n$? Perhaps there is some way to use the representation theory of $S_n$ to establish some connection? Or is this one of those near-misses that holds no deep content?
Some other questions on somewhat similar "exponential formula": https://mathoverflow.net/questions/272045/q-and-other-analogs-for-counting-index-n-subgroups-in-terms-of-homs-to-s?rq=1
I think if there is a deep answer to this question, it might involve a combinatorial interpretation of the BCH formula.
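The slogan can be checked on this very example with a short SymPy computation (an illustration, not part of the post): the EGF for cycles is $\sum_{n\ge1} x^n/n = -\log(1-x)$, and its exponential should be $1/(1-x)$, the EGF for permutations.

```python
from sympy import Rational, exp, factorial, series, symbols

x = symbols('x')
n_max = 8

# EGF for cycles: sum_{n>=1} (n-1)! x^n / n! = sum_{n>=1} x^n / n
cycles = sum(Rational(1, n) * x**n for n in range(1, n_max))

# exponentiate and read off the counts n! [x^n] of the resulting EGF
perms = series(exp(cycles), x, 0, n_max).removeO()
counts = [perms.coeff(x, n) * factorial(n) for n in range(n_max)]
print(counts)  # expect n! for each n: the number of permutations of an n-set
```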
|
2025-03-21T14:48:31.263929
| 2020-06-18T11:08:17 |
363423
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Branimir Ćaćić",
"Cameron Zwarich",
"Konstantinos Kanakoglou",
"Matthew Daws",
"Maxime Lucas",
"Simon Henry",
"Todd Trimble",
"Yemon Choi",
"https://mathoverflow.net/users/104710",
"https://mathoverflow.net/users/22131",
"https://mathoverflow.net/users/2926",
"https://mathoverflow.net/users/406",
"https://mathoverflow.net/users/68468",
"https://mathoverflow.net/users/6999",
"https://mathoverflow.net/users/763",
"https://mathoverflow.net/users/85967",
"https://mathoverflow.net/users/88855",
"https://mathoverflow.net/users/99234",
"hänsel",
"xuq01"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630181",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363423"
}
|
Stack Exchange
|
Why do some tricks in homological algebra work over the category of C*-algebras?
The category of $C^*$-algebras is not abelian (a "proof" that it is pre-abelian can be found here, but it does not seem correct; I can't find any authoritative sources). However, it's possible to do K-theory over the category of $C^*$-algebras, and one can often directly use tricks and techniques from homological algebra, especially when working with exact sequences. Sometimes people even use directly definitions and terms in homological algebra that are not really well-defined if the underlying category is not abelian.
The question is, why does this "just work"? Is it that the category of $C^*$-algebras has some good properties that allow one to use techniques from homological algebra, or is it that a lot of homological algebra doesn't really require the category one is working in to be abelian (e.g., pre-abelian is actually enough)?
I apologize if this question is not research-level enough.
Could you perhaps add a specific example?
The 6-term exact sequence is proved using techniques in homological algebra, IIRC. Another example would be the proof of $K_0(\mathcal{C}_0) \oplus K_0(\mathcal{C}_1) \cong K_0(\mathcal{C}_0 \oplus \mathcal{C}_1)$ which uses the five lemma.
One should pay attention to the comments on the linked math.SE post: the hom sets for $C^*$-algebras (that is, $*$-homomorphisms) do not form an abelian group, and so the category is not additive.
This is way outside my comfort zone, but I wouldn't be surprised if the heart of the matter is simply that most of the functors you care about from $C^\ast\texttt{Alg}$ are functors to additive categories with certain special properties (e.g., preserving split short exact sequences) of a nature that permits homological algebra on the target categories to “interact” with phenomena in $C^\ast\texttt{Alg}$.
It’s a result of Higson’s—it seems from his MSc thesis!—that Kasparov’s $KK$-theory yields the universal target category for such functors.
@BranimirĆaćić Yes, in a sense, that's exactly what I'm curious about. Why do these functors we care about all have such good properties?
I agree with the comment of Matthew Daws. The "proof" in the SE linked post does not appear to be correct: the hom-sets cannot be abelian groups -at least not under usual addition of morphisms. The sum of algebra morphisms is in general not an algebra morphism.
@KonstantinosKanakoglou I changed the question description to explain that the proof appears to be incorrect.
I think I disagree with your assessment that these are techniques specific to abelian categories. The category of spaces is even 'less abelian' than the category of $C^*$-algebras, and most of these results still hold for many invariants on the category of spaces (the K-theory of $C^*$-algebras is partly inspired by the K-theory of spaces...). To me these are methods from abstract homotopy theory (which happens to include homological algebra).
A long long time ago I remember seeing the assertion that the category of Cstar-algebras and star-homomorphisms is semi-abelian in the sense of Borceux and Bourn, ( https://ncatlab.org/nlab/show/semi-abelian+category ). I don't know if this is really relevant to K-theory, E-theory, KK-theory and so on
The semi-abelianness of $\mathrm{C}^*\mathrm{Alg}$ is shown in Semi-abelian monadic categories by Gran and Rosický. Since $\mathrm{C}^*\mathrm{Alg}$ is monadic, has finite coproducts and a zero object, semi-abelianness boils down to inheriting the split short five lemma from the category of general $*$-algebras.
And TIL that $\mathrm{C^\ast Alg}$ is, in particular, a homological category, so that it admits “[m]any of the standard results of classical homological algebra[…]: the five lemma, the nine lemma, the snake lemma, long exact sequence in homology, the Noether isomorphism theorem.”
((This is not an answer, but I want to be able to delete my post, which is why I do not use comments: $KK$-theory is an abelianization of the $C^*$-category: it is the universal matrix-stable (more precisely: compact-operators-stable), homotopy-invariant, split-exact category formed from the $C^*$-category. Hence it is additive and abelian. If you refer to $K$-theory (recall: $K(A) = KK(\mathbb{C},A)$), then it is clear. Other categories of algebras than $C^*$-algebras can also be made abelian in this way.))
You can delete comments, @hänsel. Your answer will now be moved to a comment.
|
2025-03-21T14:48:31.264727
| 2020-06-18T11:24:44 |
363424
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630182",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363424"
}
|
Stack Exchange
|
Solve linear overdetermined system from subsystems that compose it
This is my first MathOverflow post; I apologize if my message is lacking something. I also posted this question on Mathematics Stack Exchange, but as I haven't seen an answer I am posting it here.
Suppose you have a system of equations:
$$Ax = b \rightarrow [A_{1}, A_{2},...,A_{M}]^{T}x = [b_{1},...,b_{M}]^{T} \: \: \: (1)$$
where each matrix $A_{1},\ldots,A_{M}$ lies in $\mathbb{R}^{S \times L}$. We call this system of equations the whole system.
In the same manner, each $b_{1},\ldots, b_{M} \in \mathbb{R}^{S \times 1}$ and $x \in \mathbb{R}^{L \times 1}$.
As you see in (1), $A$ is composed of the matrices $A_{1},\ldots,A_{M}$, just as $b$ is composed of the vectors $b_{1},\ldots,b_{M}$. For each of these $A_{i}$ and $b_{i}$ you can write down the equations:
$$A_{i}x_{i} = b_{i}, \: \; i = 1,...,M \: \: \: (2)$$
We call each of these equations a partial system.
By least squares you know the solutions $x_{1},\ldots,x_{M}$ to each of the partial systems in (2). However, due to restrictions you CANNOT directly solve the whole system of equations in (1) through least squares.
Let's suppose we know the error of each of the partial solutions $x_{i}$ with respect to the whole solution $x$. That is to say that we know:
$$ e(x_{i}) = || x_{i} - x||_{p} \: \: \: (3)$$
where in (3) $||\cdot||_{p}$ is some norm to be defined.
We want to build $x$ from $x_{1},\ldots,x_{M}$ in such a way that the more $x_{i}$ we include in the solution we build, the smaller the error with respect to $x$, with all the $x_{i}$ having the same importance.
In other words, we want to produce a sequence of approximations $y_{1},\ldots,y_{M}$ as follows:
$$y_{1} = f(x_{k_{1}}), \: \: \: f: \mathbb{R}^{L} \rightarrow \mathbb{R}^{L} $$
$$y_{2} = f(x_{k_{1}},x_{k_{2}}), \: \: \: f: \mathbb{R}^{L} \times \mathbb{R}^{L} \rightarrow \mathbb{R}^{L}$$
$$y_{3} = f(x_{k_{1}},x_{k_{2}},x_{k_{3}}), \: \: \: f: \mathbb{R}^{L} \times \mathbb{R}^{L} \times \mathbb{R}^{L} \rightarrow \mathbb{R}^{L}$$
$$\vdots$$
$$y_{M} = f(x_{k_{1}},...,x_{k_{M}}) = x, \: \: \: f: \mathbb{R}^{L} \times ... \times \mathbb{R}^{L} \rightarrow \mathbb{R}^{L}$$
such that $e(y_{M}) < e(y_{M-1}) < ... < e(y_{2}) < e(y_{1})$.
The indices $k_{1},\ldots,k_{M}$ indicate which of the solutions $x_{i}$ are taken into account to build each $y$. We use these indices to say that no partial solution $x_{i}$ is more important than any other.
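The post leaves $f$ unspecified. Purely as an illustration (an assumption on my part, not something the question states): if the per-block quantities $A_i^{T}A_i$ and $A_i^{T}b_i$ can be stored, the partial systems can be aggregated through accumulated normal equations, and including all $M$ blocks reproduces the least-squares solution of the whole stacked system.

```python
import numpy as np

rng = np.random.default_rng(0)
L, S, M = 4, 6, 5
blocks = [rng.standard_normal((S, L)) for _ in range(M)]
x_true = rng.standard_normal(L)
bs = [A_i @ x_true for A_i in blocks]  # consistent toy data

# y_j solves (sum_{i<=j} A_i^T A_i) y = sum_{i<=j} A_i^T b_i
G = np.zeros((L, L))
h = np.zeros(L)
ys = []
for A_i, b_i in zip(blocks, bs):
    G += A_i.T @ A_i
    h += A_i.T @ b_i
    ys.append(np.linalg.lstsq(G, h, rcond=None)[0])

# the final aggregate matches the least-squares solution of the
# whole stacked system (1)
x_full = np.linalg.lstsq(np.vstack(blocks), np.concatenate(bs), rcond=None)[0]
print(np.allclose(ys[-1], x_full))
```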
|
2025-03-21T14:48:31.264905
| 2020-06-18T12:25:27 |
363427
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Carlo Beenakker",
"Jose Javier Gonzalez Ortiz",
"https://mathoverflow.net/users/11260",
"https://mathoverflow.net/users/69657"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630183",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363427"
}
|
Stack Exchange
|
Solve equation involving binomial coefficient
I have a problem that leads to the following equation:
$${x \choose k} = N$$
For some unknown $x$ and known constants $k$ and $N$. Here all numbers are natural numbers. I can solve this analytically for $k=1$ and $k=2$ but I can't find a general formula for any $k$. However, I was able to show that $ x \lt k N^{1/k} $.
Is there a way to solve for $x$ analytically? If not, is there a better way of finding $x$ than binary search?
this is a polynomial equation in $x$ of degree $k$, for which there is no general algebraic solution if $k>4$.
Yes, for a generic polynomial equation there isn't an algebraic solution, however I thought it might be the case this family of polynomials could be factored in some way that led to the roots. Do you know why that is not the case here?
Solving exactly will be tough. There are shockingly simple things about binomial coefficients that we don’t know (but that we would know if we could solve that explicitly for $x$). See for instance Singmaster’s conjecture, which is the assertion that for any fixed value of $N > 1$, that equation has at most $100$ solutions. This is asking if there are any numbers other than 1 that show up more than 100 times in Pascal’s triangle (which is still an open problem!). [Of course, this is asking to solve for integer values of $x$. So it’s conceivable a formula might be findable but verifying whether or not it gives you an integer is tricky... Unsure...]
That said, if you want asymptotics, try
$ \left( \dfrac{x}{k} \right) ^k \leq {x \choose k} \leq \left( \dfrac{ex}{k} \right) ^k$.
This gives $k N^{1/k}/e \leq x \leq k N^{1/k}$.
If that’s not good enough for you, you can use some tighter bounds on the binomial coefficients. What range do you want?
Or there are also the simple bounds
$\dfrac{(x-k)^k}{k!} \leq {x \choose k} \leq \dfrac{x^k}{k!}$,
which give us
$(N k!)^{1/k} \leq x \leq (N k!)^{1/k} + k$.
We could get closer still if that’s what you’re into. Lemme know.
If you want an algorithm to find $x$ given $N$ and $k$, binary search isn’t bad (you’d need to test $\log(k)$ terms by using the above). You could also try something involving modular arithmetic, but it’s gonna be hard to beat $\log(k)$ anyway since it takes that long just to read the number $k$. If interested in an algorithm, lemme know roughly how $k$ grows with $N$, and lemme know what parts of the computation are “expensive” for you.
Thanks! This is pretty much what I was looking for. With the last set of bounds the bisection should solve it pretty fast.
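A minimal sketch of that bisection (the helper name is mine, not from the thread), bracketing $x$ with the bounds $(Nk!)^{1/k} \le x \le (Nk!)^{1/k} + k$ from the answer, padded slightly against floating-point error:

```python
from math import comb, factorial

def solve_binomial(N, k):
    """Return the integer x >= k with comb(x, k) == N, or None."""
    base = int((N * factorial(k)) ** (1.0 / k))
    lo, hi = max(k, base - 2), base + k + 2  # padded bracket
    while lo < hi:  # comb(x, k) is increasing in x for x >= k
        mid = (lo + hi) // 2
        if comb(mid, k) < N:
            lo = mid + 1
        else:
            hi = mid
    return lo if comb(lo, k) == N else None

print(solve_binomial(comb(37, 5), 5))
```

For very large $N$ the $k$-th root should be taken in integer arithmetic (e.g. a Newton iteration on integers) rather than floating point.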
|
2025-03-21T14:48:31.265119
| 2020-06-18T12:43:31 |
363430
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Gerhard Paseman",
"Kevin",
"Pat Devlin",
"Richard Stanley",
"chasmani",
"https://mathoverflow.net/users/126262",
"https://mathoverflow.net/users/141277",
"https://mathoverflow.net/users/22512",
"https://mathoverflow.net/users/2807",
"https://mathoverflow.net/users/3402"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630184",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363430"
}
|
Stack Exchange
|
Permanent of a matrix with duplicate rows/columns
I'm trying to find an efficient algorithm/technique to calculate, or approximate, the permanent of a matrix. After reading some literature, it seems nothing exists faster than Ryser's algorithm in the general case. Unfortunately this is too slow for my purposes at $O(2^{n-1}n)$.
My matrix has a lot of duplicate columns, and so I was wondering if there was a way to use that structure to improve the efficiency of the calculation, or make an approximation. My matrix looks a bit like this:
$$
A =
\begin{bmatrix}
A^2 & A^2 & A & A & A & 1 & 1 & 1 & 1\\
B^2 & B^2 & B & B & B & 1 & 1 & 1 & 1\\
C^2 & C^2 & C & C & C & 1 & 1 & 1 & 1\\
D^2 & D^2 & D & D & D & 1 & 1 & 1 & 1\\
E^2 & E^2 & E & E & E & 1 & 1 & 1 & 1\\
F^2 & F^2 & F & F & F & 1 & 1 & 1 & 1\\
G^2 & G^2 & G & G & G & 1 & 1 & 1 & 1\\
H^2 & H^2 & H & H & H & 1 & 1 & 1 & 1\\
J^2 & J^2 & J & J & J & 1 & 1 & 1 & 1\\
\end{bmatrix}
$$
Any ideas? Or alternatively a proof that the computation cannot be made more efficient would be very useful.
Can you describe a bit how the above shape generalizes to the n by n case? Perhaps it has at most k distinct columns of degree at most d?
The actual matrices have at most F distinct columns and have size W. F is around 1000 and W is around 20000 or more, the maximum exponent of a column is around 1000. But I might be able to break the problem down so that there are only 3 distinct columns, just like above, but the matrix is about 1000 columns long. So there will be a few hundred of the columns with elements squared, a few hundred to the power of 1 and a few hundred to the power of zero i.e. 1.
If you expand an $n\times n$ permanent by the first row, then you will get a sum of $n$ $(n-1)\times (n-1)$ permanents. In your example, this reduces to a linear combination of three $8\times 8$ permanents. You can then iterate. This will save some time, but maybe not enough for you.
Yes, the idea of @RichardStanley gives you something like order $F^n$ time to compute an $n \times n$ matrix having $F$ distinct columns.
In the paper "Two Algorithmic Results for the Traveling Salesman Problem" by A. Barvinok, an $n^{O(r)}$ time algorithm is given for computing the permanent of an $n \times n$ rank $r$ matrix (Thm. 3.3). In particular, this gives a polynomial time algorithm in the case when you only have 3 distinct columns.
Aha! Here we go.
Just use Ryser's formula exactly as is, but be clever not to redo work you've already done. If it's an $n \times n$ matrix with $F$ distinct columns, you'll be able to compute its permanent in time roughly $F n^{1+F}$ ish (which would be great if you can get $F$ down).
More details:
Recall Ryser's formula is
$$\text{perm}(A) = (-1)^n \sum_{S \subseteq [n]} (-1)^{|S|} \prod_{i=1} ^{n} \sum_{j \in S} a_{i,j}.$$
The problem is that if we don't do anything clever, then there are too many terms of this sum. But! In our case, many of these terms are equal. The thing we're adding up depends on $S$, but it actually just depends on how many columns of each type are in $S$. [If this is already clear, then don't bother reading the rest]
Let's say that the distinct columns of $A$ are $\vec{x}^{(1)}, \vec{x}^{(2)}, \ldots , \vec{x}^{(F)}$ and that $\vec{x}^{(j)}$ appears $f_j$ times. Then the above sum is equal to
$$\text{perm}(A) = (-1)^n \sum_{(s_1, s_2, \ldots, s_F)}(-1)^{s_1 + s_2 + \cdots + s_F} \prod_{j=1} ^{F} {f_j \choose s_j} \prod_{i=1} ^{n} \sum_{k = 1} ^{F} s_k \vec{x}^{(k)} _{i},$$
where the sum is taken over all non-negative vectors [summing to at most $n$, where each coordinate $s_i$ is at most $f_i$]. A crude bound is that there are at most $\mathcal{O}(n^F)$ such terms, giving this a running time of at most like $\mathcal{O}(n^{F+1} F)$ or whatever.
Of course, this doesn't use anything about what the columns look like (and the above sum might be easier to compute than just adding up each term). But it's a start.
(Added as per suggestion) Working out $F=3$ Suppose we are in the optimistic case that the matrix has $n$ columns but only $3$ distinct columns. Let's call these columns $\vec{x}, \vec{y}$, and $\vec{z}$, and suppose each appears $f_x, f_y,$ and $f_z$ times (respectively). [So we have $f_x + f_y + f_z = n$]
Then we have
$$\text{perm}(A) = (-1)^n \sum_{(a, b, c)}(-1)^{a+b+c} {f_x \choose a} {f_y \choose b} {f_z \choose c} \prod_{i=1} ^{n} (a \vec{x}_{i} + b \vec{y}_i + c \vec{z}_i),$$
where the sum is taken over all triples $(a,b,c)$ summing to at most $n$.
Said slightly differently, for a vector $\vec{u} \in \mathbb{R}^n$, let $V(\vec{u}) = u_1 u_2 \cdots u_n$ be the product of its coordinates. [So $V(\vec{u})$ is the (signed) volume of the axis-parallel box with one corner at the origin and an antipodal corner at $\vec{u}$] Then the above formula is just
$$\text{perm}(A) = (-1)^n \sum_{(a, b, c)}(-1)^{a+b+c} {f_x \choose a} {f_y \choose b} {f_z \choose c} V(a \vec{x} + b \vec{y} + c \vec{z}),$$
which looks a little nicer to me and feels more geometric. [I'm then tempted to write this linear combination as a product of a matrix and the vector $(a,b,c)$, but let's not]
(If you'd like me to sketch some code or work out an example or something, lemme know)
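Along those lines, here is one possible sketch of the grouped sum (function names are mine), checked against a brute-force permanent on a small matrix with $F=2$ distinct columns:

```python
import numpy as np
from itertools import permutations, product
from math import comb

def perm_grouped_ryser(cols, mult):
    """Permanent of the n x n matrix whose j-th distinct column cols[j]
    appears mult[j] times, via Ryser's formula grouped by column type."""
    n = sum(mult)
    total = 0
    # iterate over how many columns of each type enter the subset S
    for s in product(*(range(m + 1) for m in mult)):
        weight = (-1) ** sum(s)
        for s_j, f_j in zip(s, mult):
            weight *= comb(f_j, s_j)  # number of subsets S with these counts
        prod_rows = 1
        for i in range(n):  # row sums depend only on the type counts s
            prod_rows *= sum(s_k * int(c[i]) for s_k, c in zip(s, cols))
        total += weight * prod_rows
    return (-1) ** n * total

def perm_naive(A):
    """Brute-force permanent, for checking on small matrices."""
    n = len(A)
    return sum(
        int(np.prod([A[i, p[i]] for i in range(n)]))
        for p in permutations(range(n))
    )

rng = np.random.default_rng(0)
cols = [rng.integers(1, 5, size=5) for _ in range(2)]
A = np.column_stack([cols[0]] * 3 + [cols[1]] * 2)  # 5x5, two column types
print(perm_grouped_ryser(cols, [3, 2]), perm_naive(A))
```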
For the above example, the sum is pretty clear: take all possible five letter subsets of the first eight letters, for each pick all subsets of size two to square, and note that this can be done in twelve ways for each monomial. Then multiply by 24. The result is 288 times a certain monic symmetric function in all the letters. The original poster might appreciate a specialization of your solution worked out for three types. If columns within a type differ, replace twelve by a certain breakdown. Gerhard "Working On Permanent Mental Arithmetic" Paseman, 2020.06.18.
|
2025-03-21T14:48:31.265491
| 2020-06-18T13:39:04 |
363431
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"MCjr",
"Michael Albanese",
"https://mathoverflow.net/users/104669",
"https://mathoverflow.net/users/21564",
"https://mathoverflow.net/users/68790",
"https://mathoverflow.net/users/9449",
"roy smith",
"ssx"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630185",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363431"
}
|
Stack Exchange
|
Non existence of stable vector bundles on $\mathbb{P}^4$ with $c_1=0$ and $c_2=1$
The Horrocks–Mumford bundle is the only known rank 2 vector bundle on $\mathbb{P}^4$ which is not split.
My question is:
How does one prove that there is no rank 2 stable vector bundle on $\mathbb{P}^4$ with Chern classes $c_1=0$ and $c_2=1$?
Let $\mathrm{E}$ be a vector bundle of rank $2$ with $c_1(\mathrm{E})=0$ on $\mathbf{P}^4$. Then the Chern character of $\mathrm{E}$ can be written as
$$\mathrm{ch}(\mathrm{E})=\sum_{m=0}^\infty \frac{2(-1)^m}{(2m)!}c_2(\mathrm{E})^m=2-c_2+\frac{1}{12}c_2^2+\ldots.$$
The degree $4$ part of the Todd class of $\mathbf{P}^4$ is $35h^2/12$,
and the Hirzebruch-Riemann-Roch theorem says that the holomorphic Euler characteristic of $\mathrm{E}$ is
$$\chi(\mathrm{E})=2\chi(\mathrm{O}_{\mathbf{P}^4})+\frac{1}{12}\int_{\mathbf{P}^4} \big(-35c_2 h^2+c_2^2).$$
Integrality requires that (identifying Chern classes with integers) $-35 c_2+c_2^2$ is divisible by $12$; in other words, that $c_2+c_2^2$ is divisible by $12$. In particular, there are no such $\mathrm{E}$ with $c_1=0$ and $c_2=1$ or $2$. The case $c_2=3$ was ruled out by Barth and Elencwajg; the latter is known to frequent this site, no doubt he has much more to say.
Schwarzenberger noticed that the Hirzebruch-Riemann-Roch theorem imposes divisibility constraints on the Chern classes, and what I explain above is merely a special case. For rank $2$ bundles on $\mathbf{P}^3$ you get the (probably most known) constraint that $c_1 c_2$ must be even. You can use this to show that $\mathrm{T}_{\mathbf{P}^2}$ cannot be the restriction of a rank $2$ bundle $\mathrm{E}$ on $\mathbf{P}^3$.
(A comprehensive discussion of such questions can be found in the standard reference `Vector Bundles on Complex Projective Spaces' by Okonek-Schneider-Spindler.)
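For illustration (not part of the original answer), the divisibility constraint can be tabulated for small values of $c_2$:

```python
# c_2 + c_2^2 = c_2(c_2 + 1) must be divisible by 12
# (identifying c_2 with an integer multiple of h^2)
allowed = [c2 for c2 in range(1, 13) if c2 * (c2 + 1) % 12 == 0]
print(allowed)  # -> [3, 8, 11, 12]
```

So $c_2 = 1, 2$ are excluded by integrality alone, while $c_2 = 3$ passes this test and must be ruled out separately (Barth and Elencwajg, as noted above).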
What exactly does this condition mean? The expression $c_2(c_2 + 1 - 3c_1 - 2c_1^2)$ has terms in degrees 4, 6, and 8.
I am a novice here, but it seems that since these Chern classes are on projective space, they can be written as integer multiples of powers of the hyperplane class, and the $c$'s here are those integers. At least that is what I get from appendix I, Thm. 22.4.1, pp. 165-166, of Hirzebruch, Topological methods in algebraic geometry. For this reason Schwarzenberger himself writes them with the letter $d$.
Dear @Simpleton , Thank you very much for your answer. All the best.
@MCjr I hope the updated version is more useful! Roy Smith's interpretation is spot-on.
@Simpleton Yes, your updated version is very useful! Thank you!
|
2025-03-21T14:48:31.265688
| 2020-06-18T13:50:59 |
363432
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Robert Bryant",
"Vít Tuček",
"https://mathoverflow.net/users/13972",
"https://mathoverflow.net/users/6818"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630186",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363432"
}
|
Stack Exchange
|
Complex structures on Hermitian symmetric space
Let $(M_1,g_1,J_1)$ and $(M_2,g_2,J_2)$ be two simply-connected Hermitian symmetric spaces, which are isometric as two Riemannian manifolds.
Can we find an isometry $\varphi:M_1 \to M_2$ such that
$$
\varphi^* J_2=J_1?
$$
The answer is 'yes, we can'.
Since we are in the simply-connected case, by the deRham Theorem, we can assume that $(M_i,g_i)$ for $i=1,2$ are isometric to
$$
(\mathbb{C}^m,h_0)\times (N_1,h_1)\times\cdots \times (N_k,h_k)
$$
where $h_0$ is the standard flat metric on $\mathbb{R}^{2m}$ and, for $1\le \ell\le k$, $(N_\ell,h_\ell)$ is an irreducible symmetric space that has a $h_\ell$-parallel, $h_\ell$-orthogonal complex structure $A_\ell$. (I can't use $J_\ell$ because $J_1$ and $J_2$ have already been taken.) In fact, because the holonomy group of $h_\ell$ acts irreducibly on the tangent space at any point with commuting ring isomorphic to $\mathbb{C}$, the only $h_\ell$-parallel, $h_\ell$-orthogonal complex structures on $N_\ell$ are $A_\ell$ and $-A_\ell$. It follows that we can assume that
$$
(M_1,g_1,J_1) = (\mathbb{R}^{2m},h_0,A_0)\times (N_1,h_1,A_1)\times\cdots \times (N_k,h_k,A_k),
$$
as Hermitian symmetric spaces while
$$
(M_2,g_2,J_2) = (\mathbb{R}^{2m},h_0,B_0)\times (N_1,h_1,\epsilon_1A_1)\times\cdots \times (N_k,h_k,\epsilon_kA_k),
$$
for some choice of $\epsilon_i = \pm 1$, while $A_0$ and $B_0$ are orthogonal complex structures on $\mathbb{R}^{2m}$ (not necessarily inducing the same orientation on $\mathbb{R}^{2m}$, of course).
The two flat factors are clearly isometric as Hermitian symmetric spaces, so the only question is whether there is an isometry $c_\ell:(N_\ell,h_\ell)\to (N_\ell,h_\ell)$ that satisfies $c_\ell^*A_\ell = -A_\ell$.
This may not be immediately obvious from the definitions, but it can be proved abstractly from the standard symmetric representation as $G/U$ or simply checked case by case using Cartan's classification of the irreducible Hermitian symmetric spaces.
For the cases AIII (the complex Grassmannians and their duals, including the projective spaces) and BDI (the complex quadrics and their duals) there is an obvious anti-holomorphic isometry.
For the case DIII, the set of positively oriented orthogonal complex structures on $\mathbb{R}^{2n}$ and its noncompact dual, an antiholomorphic involution (in the compact case) is given by conjugation by an orientation-reversing isometry.
For the case CI, the set of complex Lagrangian subspaces of $\mathbb{C}^{2n} = \mathbb{H}^n$ (and its dual), an antiholomorphic involution (in the compact case) is simply taking the orthogonal complex Lagrangian subspace.
In the first exceptional case EIII, one can rely on the fact that $\mathrm{E}_6\subset\mathrm{SU}(27)$ is defined as the stabilizer of a cubic form $C$ with real coefficients (as per Cartan), so complex conjugation preserves the singular locus of the cone $C=0$, and EIII is just the projectivization of this singular locus (a complex manifold of dimension $16$), which is invariant under the conjugation.
The second exceptional case, EVII (compact type), a complex manifold of dimension $27$, has a similar description as a projectivized orbit of a singular locus of the quartic form $Q$ with real coefficients on $\mathbb{C}^{56}$ stabilized by $\mathrm{E}_7\subset\mathrm{Sp}(28)\subset\mathrm{SU}(56)$. See Cartan's paper on the classification of the real forms for details.
Is this the correct reference? http://www.numdam.org/item/?id=ASENS_1914_3_31__263_0 (And does there exists English translation?)
@Vit: Yes, that's the reference that I had in mind. (I couldn't remember whether it appeared in 1913 or 1914 and I didn't have my Cartan Collected Works handy.) I'm not aware of an English translation of this paper, but, of course, there have been lots of follow-ups and expositions in English.
At this point I am mostly interested in explicit calculations. I've checked the paper and one doesn't have to know that much French to get formulae from Cartan's paper. :)
|
2025-03-21T14:48:31.265948
| 2020-06-14T18:42:28 |
363062
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Andrej Bauer",
"VS.",
"https://mathoverflow.net/users/1176",
"https://mathoverflow.net/users/136553"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630187",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363062"
}
|
Stack Exchange
|
Constructivity of two problems on a standard simplex?
Maximizing a linear function $\sum_i a_ix_i$, where the $a_i\in\mathbb R$ are fixed and non-negative and the $x_i$ are variables over a standard simplex $\sum_i x_i\leq 1$ with $0\leq x_i$, always produces a vertex point on the simplex, and the maximum corresponds to $\max_i a_i$.
In infinite dimensions, is such a proof considered constructive, or does it hold only in classical logic? It seems that we would have to show that at the maximizing point there is an $i\in\mathbb N$ such that $x_i=1$ holds, and perhaps we cannot do this without invoking LLPO?
Suppose we are looking for a $0/1$ vector with integer entries (not reals as in 1.) on the standard simplex, and we know the optimal vector has either its even coordinates or its odd coordinates summing to $1$; then in finite dimensions it is a process of enumerating vertices.
In infinite dimensions, is such a proof considered constructive, or does it hold only in classical logic? It seems that we would have to show that at the optimizing point there is an $i\in\mathbb N$ such that $x_i=1$ holds, and perhaps we cannot do this without invoking LLPO?
In general, are proofs of optimization over infinite dimensions considered constructive?
3a. How about when the $a_i$ are fixed and positive?
3b. How about when the $a_i$ are fixed, distinct and non-negative, thus guaranteeing a unique vertex point?
3c. How about when the $a_i$ are fixed, distinct and positive, thus guaranteeing a unique vertex point?
How do you show that the claim in the first sentence of your question holds constructively in finitely many dimensions?
You need to say that the $a_i$ are positive. Them being non-negative is not good enough. Also, you're going to destroy the question and my answer if you edit the question this way. It's better to ask another question, I think, or add an extra question at the end.
@AndrejBauer Added 3a, 3b and 3c.
3a changes nothing. For 3b and 3c a good rule of thumb is that unique solutions can generally be computed constructively, unless they're unstable (they move a lot when the parameters are perturbed slightly).
@AndrejBauer Even in infinite dimensions? Looks like we have to show there is a $1$ in even or odd coordinates which is LLPO.
Well, for infinite dimensions you're just asking for something like finding the index at which an infnite seqence attains its maximum, and that's not going to be computable in general. In any case, the method from my answer can be extended, I am pretty sure.
@AndrejBauer So you think it will not be constructive?
You make an erroneous assumption in your question, as already in dimension 1 you need LLPO to know that the maximum is actually attained at some point.
We work constructively.
Theorem: LLPO is equivalent to the statement that every affine map $[0,1] \to \mathbb{R}$ attains its maximum
Proof. The general form of an affine map on $[0,1]$ is $f_{a,b}(x) = a \cdot (1 - x) + b \cdot x$. Suppose then that for every such $f_{a,b}$ there exists $x_0 \in [0,1]$ such that $f_{a,b}(x) \leq f_{a,b}(x_0)$ for all $x \in [0,1]$.
Let us first show that LLPO implies attainment of maximum. Given any $f_{a,b}$, by LLPO either $a \leq b$ or $b \leq a$:
If $a \leq b$ then the maximum of $f_{a,b}$ is attained at $x_0 = 1$.
If $b \leq a$ then the maximum of $f_{a,b}$ is attained at $x_0 = 0$.
The converse is more interesting. First note that the following holds: if $f_{a,b}(0) \leq f_{a,b}(t)$ for some $t > 0$ then $f_{a,b}(0) \leq f_{a,b}(1)$. Similarly, if $f_{a,b}(t) \geq f_{a,b}(1)$ for some $t < 1$ then $f_{a,b}(0) \geq f_{a,b}(1)$.
Consider any two reals $a, b \in \mathbb{R}$. We shall decide $a \leq b \lor b \leq a$, which implies LLPO. By assumption, the map $f_{a,b}$ attains its maximum at some $x_0 \in [0,1]$. Either $x_0 < 2/3$ or $x_0 > 1/3$:
If $x_0 > 1/3$ then from $f(0) \leq f(x_0)$ it follows that $a = f(0) \leq f(1) = b$.
If $x_0 < 2/3$ then from $f(x_0) \geq f(1)$ it follows that $a = f(0) \geq f(1) = b$. $\Box$
Of course, since affine maps are very simple, the maximal value of $f_{a,b}$ exists, but the above argument shows it takes LLPO to know where it is attained.
So there is no constructive proof?
That's what the theorem says. I improved it to show your claim in dimension 1 is equivalent to LLPO.
How about for 2., is it the same? 2. seems to be an inherently discrete problem. Perhaps for 2. a constructive proof works in finite dimensions, and LLPO is only needed in infinite dimensions?
For 2. we are in $\{0,1\}$ vectors not $[0,1]$ and thus I think discrete.
No, you still get LLPO. You have $f(0) = a$ and $f(1) = b$. If you can pick $x_0 \in \{0,1\}$ such that $f(x_0)$ is maximal, then you can decide whether $a \leq b$ or $b \leq a$: if $x_0 = 0$ then $a = f(0) \geq f(1) = b$, and if $x_0 = 1$ then $b = f(1) \geq f(0) = a$.
Interesting. So no useful optimization problem is in constructive logic?
Quite on the contrary. Constructive logic is telling you that it is numerically unstable to try to compute the point at which the maximum is attained. This is a fact of life that numerical analysts know very well. But under extra conditions you can do it constructively, for example, if you know that the function is non-constant in every dimension. You can also show that for every $\epsilon > 0$ you can find $x_0$ such that $f(x_0)$ is within $\epsilon$-error of true maximum.
Affine functions $\sum_i a_ix_i$ are non-constant in every dimension, which is what the problem was about.
The $a_i$ could all be zero, or just some of them.
Ok clarified on $a_i$.
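Andrej Bauer's point about numerical instability can be illustrated concretely (a Python sketch, not a constructive argument; the helper names are mine): the maximum value of $f_{a,b}$ on $[0,1]$ varies continuously with $(a,b)$, but the maximizing endpoint jumps as $a-b$ crosses $0$.

```python
def f(a, b, x):
    """The affine map f_{a,b}(x) = a*(1-x) + b*x from the answer above."""
    return a * (1 - x) + b * x

def argmax_on_endpoints(a, b):
    """Endpoint of [0,1] maximizing f_{a,b} (ties resolved to 0)."""
    return 1 if f(a, b, 1) > f(a, b, 0) else 0

eps = 1e-12

# The maximum VALUE is stable under a tiny perturbation of a ...
val_plus = max(f(1 + eps, 1, x) for x in (0, 1))
val_minus = max(f(1 - eps, 1, x) for x in (0, 1))
assert abs(val_plus - val_minus) <= 2 * eps

# ... but the maximizing POINT flips from one endpoint to the other:
assert argmax_on_endpoints(1 + eps, 1) == 0   # a > b: maximum at x = 0
assert argmax_on_endpoints(1 - eps, 1) == 1   # a < b: maximum at x = 1
```

This is exactly the behaviour numerical analysts call ill-conditioning of the argmax near ties, matching the constructive obstruction in the answer.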
|
2025-03-21T14:48:31.266300
| 2020-06-14T18:57:57 |
363063
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Jeremy Rickard",
"M. Cousto",
"https://mathoverflow.net/users/159597",
"https://mathoverflow.net/users/22989"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630188",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363063"
}
|
Stack Exchange
|
Induced map in K-theory by a "trivial" bimodule
Let $R$ be a ring (not necessary commutative) and let $P_{\bullet}$ be a perfect $R$-bimodule (chain complex). I will denote the category of perfect right $R$-chain complexes by $\textbf{Perf}(R)$. The endofunctor $-\otimes_{R}P_{\bullet} :\textbf{Perf}(R)\rightarrow \textbf{Perf}(R)$ induces a map in algebraic $K$-theory given by
$K_{\ast}(-\otimes_{R}P_{\bullet}):K_{\ast}(R)\rightarrow K_{\ast}(R)$.
If the class $[P_{\bullet}] \in K_{0}(R)$ is trivial $(=0)$ does it mean that
$K_{\ast}(-\otimes_{R}P_{\bullet})$ is a 0 map ?
When you write $[P_{\bullet}] \in K_{0}(R)$, do you mean the class of $P_\bullet$ considered as a complex of right $R$-modules (forgetting the left $R$-module structure)?
@JeremyRickard Yes
No. Let $R=\mathbb{Z}\times\mathbb{Z}$, let $P$ and $Q$ be the projective modules $\mathbb{Z}\times0$ and $0\times\mathbb{Z}$, and let
$$P_\bullet=\dots\longrightarrow0\longrightarrow P\otimes_\mathbb{Z}P
\stackrel{0}{\longrightarrow}Q\otimes_\mathbb{Z}P\longrightarrow0\longrightarrow\dots$$
|
2025-03-21T14:48:31.266509
| 2020-06-14T19:00:31 |
363064
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Geoffrey Irving",
"Nate Eldredge",
"Will Sawin",
"https://mathoverflow.net/users/129185",
"https://mathoverflow.net/users/18060",
"https://mathoverflow.net/users/22930",
"https://mathoverflow.net/users/4832",
"mathworker21"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630189",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363064"
}
|
Stack Exchange
|
Natural way to thicken Brownian motion to 2D?
If we have a smooth plane curve (Hausdorff dimension 1), we can thicken it by a small amount to get a 2D set (all points within distance $\epsilon$ to the curve).
What if we start with the graph of a scalar Wiener process, which has Hausdorff dimension 1.5? We can again thicken to get a 2D set, but in some sense this feels like overkill: we’re already halfway from 1 to 2.
Question: Is there a natural way to “thicken less” to enlarge a Brownian motion graph into a Hausdorff dimension 2 set?
The "obvious" thickening is known as (not joking) the Wiener sausage. There are results as to how the volume scales as the thickness $\epsilon$ goes to zero, and some of those asymptotics might describe what happens if you thicken "infinitesimally".
@NateEldredge the picture on the left side of the wikipedia page doesn't fit with the naming...
Let $Z$ be a set on the line of Hausdorff dimension $1/2$, e.g., the middle-half Cantor set or the zero set of another Brownian motion. Now let $W_t$ be a one-dimensional Brownian motion, and consider the set $\Lambda:=\{(t+z,W_t): t \in [0,1], z \in Z\}$. This set will have Hausdorff dimension 2 (e.g. because the sum of two independent Brownian zero sets has Hausdorff dimension 1, and then apply the Marstrand slicing theorem), though in the examples I mentioned its 2-dimensional measure will be 0.
Nice! Though intuitively, it feels like this doesn’t quite correspond to the notion of a “thickening”, since it maps each point to a disconnected set. Not sure how to formalize that, however.
@GeoffreyIrving Every connected set with more than two points has Hausdorff dimension at least $1$, since it projects to an interval with dimension 1. This suggests that every similar idea with a connected set will be an overkill.
|
2025-03-21T14:48:31.266663
| 2020-06-14T19:07:06 |
363065
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Qmechanic",
"https://mathoverflow.net/users/13917"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630190",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363065"
}
|
Stack Exchange
|
Physical and mathematical significance of the NS-2 brane
This question is about topological string theory and it was also posted in Physics Stack Exchange.
The existence of a new brane called "an NS-2 brane" is predicted in the second paragraph on page 14 of the paper N=2 strings and the twistorial Calabi-Yau and confirmed to exist in S-duality and Topological Strings.
The argument that confirms the existence of such objects (last paragraph on page eight of S-duality and Topological Strings) is based on the fact that the A and B models are S-dual to each other over the same Calabi-Yau space. It is argued that an F1-string ending on a lagrangian submanifold in the A model (S-)dualizes to a D1-brane ending on the aforementioned NS-2 brane in the B model.
My problem: Although I understand that the NS-2 brane must exist in the B-model as the S-dual of a lagrangian submanifold in the A model, I can't understand the physical and mathematical significance of such objects.
Question 1 (Physical significance): My naive intuition says that because the NS-2 brane has a real three dimensional worldvolume, it should descend from the M-theory membrane (by embedding the topological string into M-theory). Is this true? And if the answer is positive, how can I check that? (I'm asking for a chain of dualities that explicitly transforms the M2-brane into the NS-2 brane).
I'm unsure about the M2-NS2 identification, probably because I don't understand the physical origin of a lagrangian submanifold in the A-model. Strings can end on lagrangian subspaces, but as far as I understand, lagrangian submanifolds are also three dimensional submanifolds but not M2 branes, aren't they?
Question 2 (Mathematical significance): The following quote can be read in the first paragraph on page nine of S-duality and Topological Strings:
"Their geometric meaning (talking about the NS-2 brane) is that they
correspond to a source for lack of integrability of the complex
structure of the Calabi-Yau in the B-model."
Does that mean that the NS-2 brane is "charged" under the Nijenhuis tensor of the target space? A little bit more precisely, can an NS-2 brane be defined as any three dimensional geometric locus at which the integral of the (pullback of the) Nijenhuis tensor is non-zero?
Any advice/comment/reference is very welcome.
Crossposted from https://physics.stackexchange.com/q/556462/2451
|
2025-03-21T14:48:31.266852
| 2020-06-14T19:24:14 |
363066
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Daniel Li",
"Iosif Pinelis",
"dohmatob",
"https://mathoverflow.net/users/116621",
"https://mathoverflow.net/users/33741",
"https://mathoverflow.net/users/36721",
"https://mathoverflow.net/users/71233",
"https://mathoverflow.net/users/78539",
"leo monsaingeon",
"user135520"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630191",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363066"
}
|
Stack Exchange
|
Uniform distribution in Euclidean ball is sub-gaussian
Consider the $n$-dimensional Euclidean ball centred at $0$ with radius $\sqrt{n}$. We want to show that the uniform distribution $X$ in this ball is sub-gaussian and $||X||_{\psi_2}<C$ where $C$ is an absolute constant.
To clarify: $X$ is sub-gaussian if $\langle X,x \rangle$ is sub-gaussian for any $x \in \mathbb{R}^n$, and $||X||_{\psi_2}=||\sup_x\langle X,x\rangle||_{\psi_2}$ where the sup is over all unit vectors $x$.
Attempt:
Uniform distribution on ball can be represented by $R,\varphi_1,..,\varphi_{n-1}$ jointly where $R$ is a uniform distribution on $[0, \sqrt{n}]$ representing radius, $\varphi_i$ representing the angles in spherical coordinates and they are uniform on $[0,\pi]$. All these variables are independent.
By symmetry, I only need to show $||\langle X, (1,0,0,0,...)\rangle||_{\psi_2}=||X_1||_{\psi_2}=||R\cos\varphi_1||_{\psi_2}<C$. Then it is not clear to me how to proceed
This sounds like a homework question... MO is for research-level questions only, perhaps Math.SE will be better suited.
$\newcommand\Ga\Gamma$
For each unit vector $x$, the random variable (r.v.) $\langle X,x\rangle$ equals
$$V:=\sqrt n\,W_nR$$
in distribution, where
$$W_n:=\frac{Z_1}{\sqrt{Z_1^2+\dots+Z_n^2}},$$
$Z_1,\dots,Z_n$ are iid $N(0,1)$ r.v.'s, and $R$ is a r.v. (independent of $W_n$ and) such that $0\le R\le1$.
So, it suffices to show that for some real $c>0$
$$\sup_{n\ge2}Ee^{cnW_n^2}<\infty. \tag{1}$$
Note that $W_n^2$ has the beta distribution with parameters $1/2,(n-1)/2$. So, for any $c\in(0,1/2)$ and $n\ge3$
\begin{align}
Ee^{cnW_n^2}
&=\frac{\Ga(n/2)}{\sqrt\pi\,\Ga((n-1)/2)}\int_0^1 e^{cnw^2}w^{-1/2}(1-w)^{(n-3)/2}\,dw \\
&\le\frac{\Ga(n/2)}{\sqrt\pi\,\Ga((n-1)/2)}\int_0^1 e^{cnw}w^{-1/2}e^{-(n-3)w/2}\,dw \\
&=O(\sqrt n)O(1/\sqrt n)=O(1).
\end{align}
Also, clearly $Ee^{cnW_n^2}<\infty$ for $n=2$. Thus, (1) holds, as desired.
Thank you! But would you mind explaining how in the last step the integral is of order $O(1/n)$. I tried to upper bound it by changing upper limit to infinity and write it as gamma function but this only gives me $O(1/\sqrt{n})$.
@DanielLi : That was a mistake, which is now corrected. Thank you for your comment.
OK, sure. I read your answer in a hurry. Please disregard my comment (deleted).
This maybe a dumb question, but isn't the uniform distribution on the ball bounded?
@user135520 : What do you mean by a bounded distribution?
I'm confused about why this problem is non-trivial when we have that bounded random variables are automatically sub-Gaussian. I'm confused whether that fact applies to $X$ in the question.
@user135520 : That the expectation under the $\sup$ in (1) is finite is trivial. That the supremum of this expectation over all $n\ge2$ is finite is not so trivial.
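For what it's worth, the uniform bound (1) can be sanity-checked by Monte Carlo (a numerical sketch only, not part of the proof; the choice $c=0.2<1/2$, the sample size, and the helper name are mine):

```python
import math
import random

def estimate_mgf(n, c=0.2, samples=10000, seed=0):
    """Monte Carlo estimate of E[exp(c * n * W_n^2)], where
    W_n = Z_1 / sqrt(Z_1^2 + ... + Z_n^2) with Z_i iid N(0,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        w2 = z[0] ** 2 / sum(zi * zi for zi in z)
        total += math.exp(c * n * w2)
    return total / samples

# The estimates stay bounded as n grows, consistent with the uniform
# bound (1) for c < 1/2:
for n in (2, 5, 20, 100):
    est = estimate_mgf(n)
    assert 1.0 < est < 3.0
```

The constant $3$ in the assertion is just a loose cap; the point is only that the estimates do not blow up with $n$.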
|
2025-03-21T14:48:31.267065
| 2020-06-14T20:54:01 |
363070
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"HenrikRüping",
"Ivan Meir",
"Todd Trimble",
"https://mathoverflow.net/users/2926",
"https://mathoverflow.net/users/3969",
"https://mathoverflow.net/users/7113"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630192",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363070"
}
|
Stack Exchange
|
A natural $\mathbb Q\times \mathbb P$ subset of $\mathbb R$?
I would like a simple description of a dense subset of $\mathbb R$ which is homeomorphic to $\mathbb Q\times \mathbb P$. Preferably the description will be of an algebraic nature, and perhaps the set will even be a subgroup of $\mathbb R$. Here $\mathbb R$ is all real numbers, $\mathbb Q$ is all rational numbers, and $\mathbb P$ is all irrational numbers.
The following characterization by Jan van Mill may be useful: $\mathbb Q\times \mathbb P$ is the unique zero-dimensional separable metrizable space which is strongly $\sigma$-complete, nowhere complete, and nowhere $\sigma$-compact.
It's not particularly natural, but I'm thinking the set of irrational numbers whose continued fraction expansion $[a_0; a_1, a_2, \ldots]$ is such that $[a_0; a_2, a_4, \ldots]$ is quadratic irrational should be an example.
I made a comment earlier, but let me try converting it to an answer. It's similar in flavor to Ivan's.
By a back and forth argument, all countable dense subsets of $\mathbb{R}$ are homeomorphic to $\mathbb{Q}$, which allows one to replace $\mathbb{Q}$ with the space of quadratic irrationals, which I'll denote by $Q'$. The (regular) continued fraction expansions of elements of $Q'$ are precisely infinite continued fractions that are eventually periodic. By taking continued fractions, we have a homeomorphism $\mathbb{Z} \times \mathbb{N}^\mathbb{N} \cong \mathbb{P}$ defined by
$$(a_0, a_1, a_2, \ldots) \mapsto a_0 + \frac1{a_1 + \frac1{a_2 + \ldots}}$$
and from there we easily get a homeomorphism $\mathbb{N}^\mathbb{N} \cong \mathbb{P}$. Since $\mathbb{N}^\mathbb{N}$ is homeomorphic to its square via the interleaving map
$$((a_0, a_2, \ldots), (a_1, a_3, \ldots)) \mapsto (a_0, a_1, a_2, a_3, \ldots)$$
we get a homeomorphism $\mathbb{P} \times \mathbb{P} \to \mathbb{P}$ by interleaving continued fractions. The subset $Q' \times \mathbb{P}$ maps homeomorphically onto its image under this map, and this image is of course dense (it contains for example the dense set $Q'$ of numbers with eventually periodic cf's).
Take the dense subset to be those irrationals whose binary expansion $a_m 2^m+a_{m-1}2^{m-1}+\cdots$ is such that the binary number formed by the coefficients in the even places is a quadratic irrational and the one formed by the coefficients in the odd places is irrational (for alternatives see below).
As observed by Henrik in the comments, there are continuity issues mapping directly from $\mathbb{Q}$, so we replace it by a homeomorphic set of irrationals $\mathbb{Q}'$, which could be the quadratic irrationals as in Todd's answer or simply $\lambda \mathbb{Q}$ for some $\lambda \notin \mathbb{Q}$.
Then the map $M$ from $\mathbb Q'\times \mathbb P$ to a dense subset of $\mathbb R$ given by interleaving the binary expansions should work. More formally, $M(q,p)=T_e(q)+T_o(p)$, where $T_e(r)$ and $T_o(r)$ are defined by taking the binary expansion of $r$ and mapping $2^n\rightarrow 2^{2n}$ and $2^n\rightarrow 2^{2n+1}$ respectively, i.e.
$T_e(11)=T_e(2^0+2^1+2^3)=2^0+2^2+2^6=69$, $T_o(11)=T_o(2^0+2^1+2^3)=2^1+2^3+2^7=138$.
This map is clearly continuous both ways, 1-1 and maps to a dense subset since any real number has an arbitrarily close rational approximation.
There are of course many possible such examples as for any infinite fixed subset S of the integers we can take the dense subset to be those irrational numbers whose binary expansion over $S$ or $\mathbb Z\setminus S$ is infinite and represents a value in $\mathbb{Q}'$ or is irrational respectively.
why is e.g. $T_e$ continuous? For example $a_n=1-2^n=0,1...1$ converges to one, but $T_e(a_n)$ converges to $0.0101010101...\neq T_e(1)=1$?
@HenrikRüping We require that $M(p,q)=T_e(q)+T_o(p)$ is continuous not $T_e$ or $T_o$ individually.
I still don't get it. If $M(p,q)$ is continuous, then $M(0,q)=T_e(q)$ would also be continuous.
@HenrikRüping (I think you mean $M(q,0)=T_e$) The problem with setting $p=0$ is that $p\notin \mathbb{P}$, the set of irrational numbers. However I certainly do appreciate your point which is relevant for fractions which have a finite binary representation and when you use this finite form in the construction. I believe you simply need to ensure you always use the infinite binary representation when you construct the mapping and the dense subset. I have updated my answer with this clarification. Thank you for your interesting observation.
Isn't there still a problem with a sequence of the form $a_n=(1+(-1/2)^n,\pi)$. What should H(1) be? The limit of $H(a_{2n})$ forces us to take the finite representation and the limit of $H(a_{2n+1})$ forces us to take the infinite representation. I do not see how to fix it.
Thank you Henrik you are definitely correct but I believe we can just replace $\mathbb{Q}$ by say the quadratic irrationals or even simpler $\sqrt{2} \mathbb{Q}$ and this should work. Let me know what you think - I have updated my answer to reflect this.
Sure multiplication with an irrational number should do the trick.
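For nonnegative integers (i.e. finite binary expansions) the digit maps $T_e$ and $T_o$ above can be sketched as bit manipulations; this is only an illustration, since the actual construction applies the maps to infinite binary expansions, and the helper names are mine.

```python
def spread_bits(m, offset):
    """Map each bit 2^i of m to 2^(2*i + offset)."""
    result, i = 0, 0
    while m:
        if m & 1:
            result |= 1 << (2 * i + offset)
        m >>= 1
        i += 1
    return result

def T_e(m):  # 2^n -> 2^(2n): digits go to the even places
    return spread_bits(m, 0)

def T_o(m):  # 2^n -> 2^(2n+1): digits go to the odd places
    return spread_bits(m, 1)

def unspread(x, offset):
    """Recover the bits sitting at places 2*i + offset."""
    result, i = 0, 0
    x >>= offset
    while x:
        if x & 1:
            result |= 1 << i
        x >>= 2
        i += 1
    return result

# The worked example from the answer: 11 = 2^0 + 2^1 + 2^3.
assert T_e(11) == 2**0 + 2**2 + 2**6 == 69
assert T_o(11) == 2**1 + 2**3 + 2**7 == 138

# The even and odd places never collide, so M(q, p) = T_e(q) + T_o(p)
# is injective and both arguments can be recovered from the sum:
for q in range(50):
    for p in range(50):
        s = T_e(q) + T_o(p)
        assert unspread(s, 0) == q and unspread(s, 1) == p
```

The recovery loop is the finite analogue of $M$ being a bijection onto its image; the continuity issues in the comments only arise once infinite expansions are in play.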
|
2025-03-21T14:48:31.267394
| 2020-06-14T21:06:33 |
363072
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Jochen Glueck",
"Lo Celso",
"dohmatob",
"https://mathoverflow.net/users/102946",
"https://mathoverflow.net/users/158029",
"https://mathoverflow.net/users/78539"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630193",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363072"
}
|
Stack Exchange
|
Frobenius inner product of a zero line-sum matrix and a doubly stochastic matrix
Let $A$, $B$ be two $n\times n$ real matrices.
Let $A$ be a zero line-sum matrix where each row sum and each column sum equals zero, i.e., $$\sum_{i=1}^{n}a_{ij}=\sum_{j=1}^{n}a_{ij}=0 $$ (it seems there is no name for such a matrix, see Name for matrices with vanishing row and column sums).
Let $B$ be a matrix where every entry is non-negative and each row sum and each column sum equals 1, i.e.,
$$\sum_{i=1}^{n}b_{ij}=\sum_{j=1}^{n}b_{ij}=1, b_{ij}\geq 0 $$ ($B$ is a doubly stochastic matrix).
An index set $S\subset[n]\times[n]$ is a permutation set if $S$ has $n$ elements and if for every $(i,j),(i',j')\in S$ with $(i,j)\neq(i',j')$ we have $i\neq i'$ and $j\neq j'$.
Now suppose there exists a permutation set $S$ such that $a_{ij}\geq 0$ for every $(i,j)\in S$, $a_{ij}\leq 0$ for every $(i,j)\notin S$ and $\sum_{(i,j)\in S} b_{ij}\geq \sum_{(i,j)\in S'} b_{ij}$ for every permutation set $S'$. Is it true that $\langle A,B\rangle_F\geq 0$?
Note this is obviously true if $B$ is a permutation matrix or if $b_{ij}=1/n$ for every $(i,j)$. These seem like the two extreme cases. But how to prove it in general?
A small correction: $B$ is called a doubly stochastic matrix, not a Birkhoff polytope. The Birkhoff polytope is the set of all doubly stochastic matrices (in fixed dimension).
@JochenGlueck Thanks! I miswrote it because I was thinking about minimizing $\langle A,B\rangle_F$ over a Birkhoff polytope.
Going beyond your two examples, a computer might be of help finding potential counterexamples here. Data enters the problem in a very combinatorial way, and it's hard to get an intuition for why the statement should be true in general. So my 2 cents would be to try to finding some genuine counterexamples (e.g using a computer), and only in case of failure to find such, should you have more confidence in the claim, and try proving it.
@dohmatob Thanks for your advice! I used Sage to try the $3 \times 3$ case and I found that the Birkhoff polytope, with the additional property I required on $B$ (I let $S=\{(i,i)\}$), has 24 vertices, which is indeed hard to analyze. I have changed the question to a weaker one which I believe is true.
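Following dohmatob's suggestion, here is a minimal random-search sketch in Python (the helper names and all sampling choices are mine, and it does not settle the question): it draws $A$ with the required sign pattern and zero line sums by choosing a symmetric nonnegative off-diagonal part, draws a doubly stochastic $B$ as a convex combination of permutation matrices, keeps the trials in which the diagonal permutation set $S=\{(i,i)\}$ maximizes $\sum_{(i,j)\in S}b_{ij}$, and records the Frobenius inner products.

```python
import itertools
import random

def random_instance(n, rng):
    """A: zero line sums, a_ii >= 0, a_ij <= 0 off the diagonal
    (symmetric off-diagonal part, so rows AND columns sum to zero).
    B: doubly stochastic, a convex combination of permutation matrices."""
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m[i][j] = m[j][i] = rng.random()
    A = [[-m[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        A[i][i] = sum(m[i])
    B = [[0.0] * n for _ in range(n)]
    weights = [rng.random() for _ in range(2 * n)]
    total = sum(weights)
    for w in weights:
        perm = list(range(n))
        rng.shuffle(perm)
        for i, j in enumerate(perm):
            B[i][j] += w / total
    return A, B

def diagonal_is_maximal(B):
    """Is S = {(i,i)} a maximizing permutation set for B?"""
    n = len(B)
    diag = sum(B[i][i] for i in range(n))
    return all(sum(B[i][p[i]] for i in range(n)) <= diag + 1e-12
               for p in itertools.permutations(range(n)))

def search(n=3, trials=2000, seed=0):
    """Collect <A,B>_F over random admissible instances."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        A, B = random_instance(n, rng)
        if diagonal_is_maximal(B):
            results.append(sum(A[i][j] * B[i][j]
                               for i in range(n) for j in range(n)))
    return results

# Structural sanity checks: zero line sums for A, doubly stochastic B.
A, B = random_instance(3, random.Random(1))
for i in range(3):
    assert abs(sum(A[i])) < 1e-9
    assert abs(sum(row[i] for row in A)) < 1e-9
    assert abs(sum(B[i]) - 1.0) < 1e-9
    assert abs(sum(row[i] for row in B) - 1.0) < 1e-9

results = search()
if results:
    print(len(results), "admissible trials; min inner product:", min(results))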
|
2025-03-21T14:48:31.267566
| 2020-06-14T21:31:14 |
363076
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Joshua Mundinger",
"https://mathoverflow.net/users/125523"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630194",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363076"
}
|
Stack Exchange
|
Hochschild cohomology of an Azumaya algebra
Let $k$ be a field. Given a commutative $k$-algebra $Z$ and an associative algebra $A$ that is Azumaya over $Z$, do we have an isomorphism of Hochschild cohomologies: $HH^*(A) \cong HH^*(Z)$?
This is true in characteristic zero by Weibel and Cortiñas, but their proof doesn’t generalise to prime characteristic. Is there a characteristic-independent proof, or is there a counter example in positive characteristic?
The paper Noncommutative motives of Azumaya algebras by Tabuada and Van den Bergh proves this when the rank of $A$ is invertible in $k$. I do not know a counterexample in the case when the characteristic divides the rank.
It is true that $HH^*(A) \cong HH^*(Z)$ when $Z$ is a commutative $k$-algebra (i.e. in the setting of affine varieties). The following argument is due to Vadim Vologodsky, although any errors are mine:
$\DeclareMathOperator{\Spec}{Spec}\DeclareMathOperator{\Hom}{Hom}$
Work in schemes over $k$. Let $X = \Spec Z$, and let $\Delta \subseteq X \times X$ be the diagonal. The Hochschild cohomology of $Z$ is $R\Hom_{X \times X}(\mathcal O_\Delta, \mathcal O_\Delta),$ but in fact this agrees with $R\Hom_{\widehat{X \times X}}(\mathcal O_\Delta, \mathcal O_\Delta)$ where $\widehat{X \times X}$ is the formal neighborhood of $\Delta \subseteq X \times X$. Similarly, $HH^*(A) = R\Hom_{\widehat{A \boxtimes A^{op}}}(A,A)$, where $\widehat{A \boxtimes A^{op}}$ means completion of $A \boxtimes A^{op}$ on $X \times X$ along the diagonal. So, it suffices to show $\widehat{A \boxtimes A^{op}}$ is a split Azumaya.
Note that $\widehat{A \boxtimes A^{op}}$ is split on $\Delta \subseteq \widehat{X \times X}$, with splitting bundle $A$. Viewing $A$ as defining a $\mathbb G_m$-gerbe, the obstruction to extending a splitting from $\Delta$ to $\widehat{X \times X}$ is in
$$ H^2(\widehat{X \times X}, ker(\mathbb G_{m, \widehat{X \times X}} \to \mathbb G_{m,\Delta}))$$
But by filtering according to powers of the ideal of $\Delta$, we see $ker(\mathbb G_{m,\widehat{X \times X}} \to \mathbb G_{m,\Delta})$ has a complete filtration where each piece of the associated graded is coherent. As $X$ is affine, this cohomology vanishes, and the splitting on $\Delta$ extends to $\widehat{X \times X}$.
Remark: if $X$ is not affine, the result is false, see e.g. Counterexamples to Hochschild-Kostant-Rosenberg in
characteristic p by Antieau, Bhatt, and Mathew.
|
2025-03-21T14:48:31.267737
| 2020-06-14T22:14:53 |
363079
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Harry Gindi",
"Somatic Custard",
"https://mathoverflow.net/users/1353",
"https://mathoverflow.net/users/94086"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630195",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363079"
}
|
Stack Exchange
|
How to understand the "boundary" of subscheme, as defined in "An elementary characterisation of Krull dimension"
In An elementary characterisation of Krull dimension and A short proof for the Krull dimension of a polynomial ring, Coquand, Lombardi, and Roy give an elementary characterization of Krull dimension, which inductively makes use of one of two notions of the "boundary" of a subvariety, given as follows:
Let $R$ be a commutative ring, and $x\in R$.
\begin{align*}
& \text{upper boundary: } R^{\{x\}} \mathrel{:=} R/I^{\{x\}},
&& I^{\{x\}} \mathrel{:=} xR + (\sqrt{0}:x) \\
& \text{lower boundary: } R_{\{x\}} \mathrel{:=} S_{\{x\}}^{-1}R,
&& S_{\{x\}} \mathrel{:=} x^{\mathbb{N}}(1+xR)
\end{align*}
where $(\sqrt{0}:x)$ is the ideal quotient of the nilradical, and $x^{\mathbb{N}}(1+xR) = \{x^n(1+rx) \mathrel\vert \text{$n\in\mathbb{N}$, $r\in R$}\}$.
Upon inspection, $\mathrm{Spec}(R^{\{x\}})$ is $V(x) \cap \overline{\mathrm{Spec}R\setminus V(x)}$, and $\mathrm{Spec}(R_{\{x\}})$ is a localization (not quite open) that is disjoint from the locus $V(x)$. Also, both are trivial exactly when $x\in R^\times \cup \sqrt{0}$.
However, I do not have good intuition for these subschemes.
How to think about these boundary schemes? Do they represent anything in particular?
Do these constructions appear anywhere else in the literature? I have not been able to find anything.
Are they commutative, in that $R^{\{x\}\{y\}} = R^{\{y\}\{x\}}$ and $R_{\{x\}\{y\}} = R_{\{y\}\{x\}}$?
I suspect they are commutative, but am unable to prove it, and I have reservations stemming from the fact that permutations of a regular sequence are not necessarily regular.
Are these very natural constructions? I.e. would it be worth studying them in more detail, in specific cases, or are they primarily instrumental in the characterization of Krull dimension?
I am willing to restrict to cases where $R$ is integral and Noetherian or has a finitely generated function field.
It seems best to consider first the $R^{\{x_0\}...\{x_k\}}$, where $x_0, ..., x_k$ form a regular sequence, but I was not able to get much further with this assumption.
The sense in which these are 'boundaries' is purely order-theoretic (with respect to the specialization ordering on the spectrum), if that helps. I suggest working this out in the case where R is a valuation ring.
@HarryGindi I got a sense that something like this was the case, from the first reference, but I'm not used to thinking about lattices and partial orders, so I didn't gain much from this. I know a la here this captures most of Zariski topology, but somehow that doesn't console me.
This isn’t really an answer, just some musings.
First, the lower boundary. $S_{\{x\}}$ is the multiplicative set generated by the multiplicative sets $T_{\{x\}} := \{x^n : n\in \mathbb{N}\}$ and $U_{\{x\}} := \{1+rx : r\in R\}$. Localizing at a multiplicative set has the effect of “turn these elements into units”, or “cut out any prime ideals that contain these elements”. So localizing at $T_{\{x\}}$ has the effect of turning $x$ into a unit — i.e. replace $R$ with $R[x^{-1}]$. On the other hand, it is well-known that an element $x\in R$ is in the Jacobson radical of a commutative ring if and only if every element of $U_{\{x\}}$ is a unit. So localizing at $U_{\{x\}}$ has the effect of putting $x$ into the Jacobson radical of the ring — i.e. all the maximal ideals of the resulting ring will now contain $x$. This is puzzling, since it means that localizing at $S_{\{x\}}$ should simultaneously turn $x$ into a unit (i.e. throw out any prime ideals that contain it) and an element of the Jacobson radical of the ring (i.e. put it into all the maximal ideals). These would seem contradictory.
Next, the upper boundary. As far as I can tell, the effect of modding out $I^{\{x\}}$ should be to make $x$ nilpotent, since it will identify $rx^2$ with a nilpotent element for all $r\in R$. So that is like putting $x$ into all the prime ideals of $R$. I guess what it does to the prime spectrum is to cut out any prime ideals that don’t contain $x$.
Maybe it’s instructive to see what these operations do to to a prime number $p$ of $\mathbb Z$. We have $I^{\{p\}} = p{\mathbb Z}$, so ${\mathbb Z}^{\{p\}} = {\mathbb F}_p,$ the field of $p$ elements. On the other hand, $S_{\{p\}}$ inverts $p$ and also every number that is congruent to $-1$ mod $p$. By Fermat’s Little Theorem, for any prime number $q$ other than $p$, we have that $-q^{p-1}$ is such a number, whence $q$ is a unit in the resulting ring. Hence, ${\mathbb Z}_{\{p\}} = \mathbb Q$.
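A small sanity check of this computation (my own sketch; the helper names are hypothetical). Note that elements of $1+p\mathbb{Z}$ are congruent to $1$ mod $p$, and by Fermat's little theorem $q^{p-1}\in 1+p\mathbb{Z}$ for every prime $q\neq p$; since $q$ divides $q^{p-1}$, every such $q$ becomes a unit in $S_{\{p\}}^{-1}\mathbb{Z}$:

```python
# Sketch: for S_{p} = p^N (1 + p Z), every prime q != p divides an element
# of S_{p} (namely q^(p-1), which lies in 1 + p Z by Fermat), hence q
# becomes a unit in the localization S_{p}^{-1} Z.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def becomes_unit(q: int, p: int) -> bool:
    """True iff q^(p-1) = 1 (mod p), i.e. q^(p-1) lies in 1 + p Z."""
    return pow(q, p - 1, p) == 1

p = 7
for q in range(2, 200):
    if is_prime(q) and q != p:
        assert becomes_unit(q, p)
print("every prime q != 7 becomes a unit, consistent with Z_{7} = Q")
```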
Next, let’s do this for the element $x$ of the ring $R=\mathbb{Z}[x]$. Similar to the above, we have $I^{\{x\}} = xR$, so $R^{\{x\}} = \mathbb Z$. On the other hand, inverting $S_{\{x\}}$ has the effect of both inverting $x$ and inverting every polynomial in $x$ that has constant term $1$. Kind of a weird ring. I think it has a lot more prime ideals than $\mathbb Z$ does (like the principal ideal generated by $2x^2 + x + 2$?), but I can prove that it has Krull dimension 1 anyway.
I’m pretty sure they are commutative.
I haven’t quite seen these constructions before. They remind me a bit of computations regarding multiplicity and analytic spread.
Thanks. Still need to think a bit more about your remarks, but it's helpful. It doesn't change your conclusion, but wouldn't $S_{\{p\}}$ invert $p$ and the numbers congruent to $1$ mod $p$? I also suspected they were commutative, but couldn't prove it, and had reservations relating to regular sequences (I'm editing to add).
This is also not really an answer, but it explains an alternative version of the characterisation that I find easier to work with. Let $P_d(R)$ be the polynomial ring over $R$ in variables $x_0,\dotsc,x_d$, and order the monomials lexicographically. Say that $f\in P_d(R)$ is comonic if the lowest monomial has coefficient one. Say that an $R$-algebra homomorphism $\phi\colon P_d(R)\to R$ is thin if the kernel contains a comonic polynomial. After a little translation, the results of Coquand and Lombardi say that $R$ has dimension $\leq d$ iff every such homomorphism is thin.
|
2025-03-21T14:48:31.268156
| 2020-06-14T22:20:30 |
363080
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630196",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363080"
}
|
Stack Exchange
|
A maximization problem with permutations
Consider a partition $f:S_n\rightarrow [n]$ of $S_n$ into $n$ parts. Denote by $s(j,k)$ the set of permutations that map $j$ to $k$. Set $S(f):=\sum_{1\leq i,j\leq n}\max_{1\leq k\leq n}|f^{-1}(i)\cap s(j,k)|$. I would like to choose a partition $f$ that maximizes $S(f)$.
My intuition says that the maximum $S(f)$ is $2n!$ and that it is only achieved by partitioning the permutations of $S_n$ either according to where they map $c$, or according to what they map to $c$, where $c$ can be any specific element of $[n]$. In fact, it appears that this technique naturally extends to partitions of $S_n$ into any number of parts. However, I do not have a proof that this is indeed so.
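A brute-force sketch (my own; the function name `S` is hypothetical) that computes $S(f)$ directly and confirms that the canonical partition, colouring each permutation by the image of a fixed element $c$, attains $S(f)=2\cdot n!$ for small $n$:

```python
from itertools import permutations

def S(f, n):
    """S(f) = sum over (i, j) of max over k of |f^{-1}(i) ∩ s(j, k)|,
    where s(j, k) is the set of permutations mapping j to k."""
    perms = list(permutations(range(n)))
    total = 0
    for i in range(n):
        part = [p for p in perms if f[p] == i]
        for j in range(n):
            total += max(sum(1 for p in part if p[j] == k)
                         for k in range(n))
    return total

# Canonical partition: colour each permutation by the image of a fixed c.
n, c = 4, 0
f = {p: p[c] for p in permutations(range(n))}
print(S(f, n))  # 48, i.e. 2 * 4!
```

For this partition the row $j=c$ contributes $n\cdot(n-1)! = n!$ and each of the other $n-1$ rows contributes $n\cdot(n-2)!$, for a total of $2\cdot n!$.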
The above problem seems very natural to me. Though I only thought it up very recently, it would not surprise me if it was a known problem and there were some results about it, or even a complete solution. So, if anyone could point to a paper with some work or a solution of this problem, I would be grateful.
Edit: I have shown that my intuition was wrong by a factor of about $\frac{\log n}{\log\log\log n}$.
|
2025-03-21T14:48:31.268261
| 2020-06-14T23:37:25 |
363083
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Robert Israel",
"Will Sawin",
"domotorp",
"https://mathoverflow.net/users/13650",
"https://mathoverflow.net/users/18060",
"https://mathoverflow.net/users/24463",
"https://mathoverflow.net/users/955",
"srossd"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630197",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363083"
}
|
Stack Exchange
|
Hamming distance to primes
There is a positive density of odd numbers which are of the form $2^n+p$ (due to Romanoff), and a positive density which are not of this form (due to van der Corput and Erdos, see this paper for a review and some results on the density). So, for some but not almost all odd numbers, we can get to a prime by subtracting a power of two.
I'm curious about a related question: given an odd integer $m$, is there always a prime number with Hamming distance 1 to $m$? For example, $127 = 1111111_2$ is not of the form $2^n+p$, but it has Hamming distance 1 to a prime, since $383 = 101111111_2$ is prime.
A related question, which implies the first: given an odd integer $m$, does the set $\{m+2^n\mid n\in \mathbb{N}\}$ contain infinitely many primes (or at least one for which $2^n>m$, so that this corresponds to flipping a bit in $m$)?
The sum over $n$ of $1/\log(2^n+m)$ is infinite, but I can't imagine trying to prove the existence of such primes with current technology. Even getting density 1 seems impossible to me: you would need to look at $n$ exponentially large in $m$.
The sequence "least prime with Hamming distance 1 from the k'th odd integer" starts
$3, 2, 7, 3, 11, 3, 5, 7, 19, 3, 5, 7, 17, \ldots$. It doesn't seem to be in the OEIS yet, but should be. Are you interested in contributing it? If you don't wish to, I can (with a link to this question).
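A sketch reproducing the sequence from the comment above (my own code; the bit-length cutoff `max_bits` is an assumption needed to make the infinite search finite, and is harmless here since the least prime is always found among the low-bit flips):

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def least_prime_flip(m, max_bits=16):
    """Least prime at Hamming distance 1 from odd m: flip one of the
    low max_bits bits (clearing a set bit or setting an unset one)."""
    candidates = [m ^ (1 << i) for i in range(max_bits)]
    primes = [c for c in candidates if is_prime(c)]
    return min(primes) if primes else None

seq = [least_prime_flip(2 * k - 1) for k in range(1, 14)]
print(seq)  # [3, 2, 7, 3, 11, 3, 5, 7, 19, 3, 5, 7, 17]
```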
Thanks very much for your answer, I wasn't aware of the (dual) Sierpiński numbers. I'll go ahead and add this to the OEIS.
I think this question has already been asked in a slightly different form and answered here: https://mathoverflow.net/questions/316867/is-there-a-2-power-twinless-prime
See OEIS sequences A067760 and A076336. If $n$ is a dual Sierpiński number, there is no $k$ such that $n+2^k$ is prime. There is no prime with Hamming distance $1$ to the Sierpiński number $2131099$, and this may be the least positive integer with this property.
|
2025-03-21T14:48:31.268523
| 2020-06-14T23:50:41 |
363084
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"https://mathoverflow.net/users/158968",
"https://mathoverflow.net/users/35520",
"ofer zeitouni",
"user158968"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630198",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363084"
}
|
Stack Exchange
|
What is the Onsager-Machlup function for $dX(t)=f(B(t)) dt+dB(t)$?
What is the Onsager-Machlup function for $dX(t)=f(B(t)) dt+dB(t)$?
I know that the Onsager-Machlup function for $dX(t)=f(X(t))dt+dB(t)$ is $$L(x,v)=\frac12\left[v-f(x)\right]^2+\frac12f'(x)$$
But what if instead of $f(X)$ it is $f(B)$? I have a strong suspicion that it is
$$L(x,v)=\frac12 E[(v-f(B(t)+x))^2]$$
I have tried finding the SDE for $X(t)$ but it is quite difficult. I tried doing this through infinitesimal generator, but I can't find it.
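Not an answer, but a minimal Euler-Maruyama sketch of the SDE (my own illustration; the drift $f=\tanh$ is an arbitrary choice). It makes visible why the standard formula does not apply: $X$ is a functional of the whole driving path $B$, not of $X$ alone.

```python
import numpy as np

# Simulate dX = f(B_t) dt + dB_t: first generate the driving Brownian
# path B, then integrate; note X depends on the path of B, so (X_t) on
# its own is not Markov, unlike the classical dX = f(X_t) dt + dB_t case.
rng = np.random.default_rng(0)
T, N = 1.0, 1000
dt = T / N
f = np.tanh  # example drift, an assumption for illustration

dB = rng.normal(0.0, np.sqrt(dt), N)
B = np.concatenate(([0.0], np.cumsum(dB)))
X = np.zeros(N + 1)
for k in range(N):
    X[k + 1] = X[k] + f(B[k]) * dt + dB[k]
```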
Definitely not. I do not have the time right now to write a full answer, so here are hints, maybe you or someone else can complete them. You need to replace $f(B_t)$ by a functional of the solution, i.e. write $f(B_t)=g(X_0^t)$ where now $g$ is a functional. (In the classical case, $f=g$ is a pointwise function. When you now do the Girsanov, you will write $g(X_0^t)=g(\phi_0^t+X_0^t-\phi_0^t)$ ($\phi$ is the function you are trying to compute the OM functional at). Expand, and you see that the term $\int_0^T|f'(\phi_t)|^2 $ is replaced by something else.
The something else is simply the integral of the square of the functional derivative along the path, something like $\int_0^T \int_0^T ds ds' \int_0^{s'}\int_0^s \partial_u g(\phi_0^s) du \partial_{u'} g(\phi_0^{s'}) du'$. Writing a full proof will require a few pages...
@oferzeitouni Would it be easier to find the Onsager-Machlup function for just $dX(t)=f(B(t))dt$? I know this is a different problem.
Sorry, too hasty. The scaling of this problem is off. Indeed, if $|B-\phi|<\epsilon$ you have $|X-\int_0^T f(\phi) dt|\sim C\epsilon$ with $C\neq 1$, so I would not expect the OM functional to make sense.
What is $\mu_0$? and are you averaging on $B$ but not $X$? they are of course dependent, so I suspect this is not correct, but I am not sure. There should not be an expectation in the definition of $g$
@oferzeitouni Can you give me another hint? Sorry, I have tried to compute it the way you have said but it is difficult...
http://users.sussex.ac.uk/~md326/MAP.pdf actually I found this. Equation 2.1 has the answer.
|
2025-03-21T14:48:31.268690
| 2020-06-15T02:45:47 |
363090
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Tomo",
"Will Sawin",
"https://mathoverflow.net/users/18060",
"https://mathoverflow.net/users/37110"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630199",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363090"
}
|
Stack Exchange
|
Galoisian perspective on local system tamely ramified along a smooth divisor
This question is about (1.7.8) and (1.7.11) in Deligne’s Weil II paper.
Let $X$ be a regular scheme and $D\subset X$ a smooth principal divisor cut out by the function $t$. Let $\mathcal F$ be a locally constant étale sheaf of finite sets on $X-D$ tamely ramified along $D$. Let $n$ be an integer invertible on $X$ and let $\pi:X_n\to X$ be obtained by adjoining $t^{1/n}$ to the ring of functions on $X$; $X_n$ is regular and $\pi$ is totally ramified over $D$ and étale away from $D$. The identification $\pi^{-1}(D)_{\mathrm{red}}=D$ gives a section of the projection $\pi^{-1}(D)\to D$; on functions this section sends $t^{1/n}$ to zero. The group $\mu_n$ of $n^{\text{th}}$ roots of unity acts on $X_n$: given $r\in\mu_n$, multiplication $t^{1/n}\mapsto rt^{1/n}$ gives an automorphism of $X_n$. By Abhyankar’s lemma, there is an $n$ invertible on $X$ so that $\pi^*\mathcal F$ extends to a local system on all of $X_n$. The restriction of $\pi^*\mathcal F$ to $D$ then gives a local system on $D$ called $\mathcal F[D]$.
The action of $\mu_n$ on $X_n$ fixes $\pi^{-1}(D)$ pointwise and induces the identity on $\pi^{-1}(D)_{\mathrm{red}}$. Therefore the action of $\mu_n$ on $X_n$ induces an action on $\mathcal F[D]$ by transport of structure.
The Galoisian perspective on this is to replace $X$ by the spectrum of its henselization at the generic point of $D$ with function field $K$ and residue field $k$ and let $X_1$ denote the spectrum of the strict henselization with function field $K_1$ and residue field $k_1$ a separable closure of $k$. Let $\mathbf{L}$ denote the set of primes not equal to the characteristic of $k$. There is an exact sequence
$$1\to\hat{\mathbf{Z}}_{\mathbf L}(1)\to\pi_1^{\text{mod}}(X-D,\operatorname{Spec}(\overline K))\to\operatorname{Gal}(k_1/k)\to1.$$
If $K_2$ denotes the extension of $K_1$ obtained by adjoining all $n^{\text{th}}$ roots of $t$ for $n$ invertible on $X$, then $\pi_1^{\text{mod}}(X-D,\operatorname{Spec}(\overline K))=\operatorname{Gal}(K_2/K)$, and compatible choices of $t$ and $t^{1/n}$ split this extension, as $\operatorname{Gal}(k_1/k)$ can then be identified with the subgroup of $\operatorname{Gal}(K_2/K)$ fixing the $t^{1/n}$. If one identifies $\mathcal F$ with the action of $\operatorname{Gal}(\overline K/K)$ on its stalk $F$ at $\operatorname{Spec}(\overline K)$, then in this way one obtains an action of $\operatorname{Gal}(k_1/k)$ on $F$; i.e. a sheaf on $\operatorname{Spec}(k)$. Deligne says moreover from this data one obtains a $\operatorname{Gal}(k_1/k)$-equivariant action of $\hat{\mathbf{Z}}_{\mathbf L}(1)$ on $F$.
How to deduce this action without proceeding via the geometric construction?
Whenever we have an exact sequence of groups $1 \to H \to G \to K \to 1$, if the exact sequence splits, we can express $G$ as the semidirect product $H \rtimes K$, and then a $G$-action on something is equivalent to a $K$-action together with a $K$-equivariant $H$-action. Do you want an explanation of why the exact sequence you wrote splits? If not, what do you want?
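A finite sanity check of this decomposition (my own example, not from the thread): for the split extension $D_4 = C_4 \rtimes C_2$ acting on the four vertices of a square, the equivariance relation $\rho(khk^{-1})\rho(k) = \rho(k)\rho(h)$ holds, so the full $G$-action is recovered from the $C_2$-action together with an equivariant $C_4$-action.

```python
def act(g, v):
    """(r, s) in C4 ⋊ C2 acts on vertex v by v -> r + (-1)^s v (mod 4)."""
    r, s = g
    return (r + (-1) ** s * v) % 4

def mul(g1, g2):
    """(r1, s1)(r2, s2) = (r1 + (-1)^{s1} r2, s1 + s2) in C4 ⋊ C2."""
    (r1, s1), (r2, s2) = g1, g2
    return ((r1 + (-1) ** s1 * r2) % 4, (s1 + s2) % 2)

G = [(r, s) for r in range(4) for s in range(2)]

# act is a genuine G-action: act(g1 g2, v) == act(g1, act(g2, v)).
assert all(act(mul(g1, g2), v) == act(g1, act(g2, v))
           for g1 in G for g2 in G for v in range(4))

# K-equivariance: for h in H = C4 and the reflection k (of order 2, so
# k^{-1} = k), acting by k then by h agrees with acting by khk^{-1}
# after k, i.e. rho(k h k^{-1}) rho(k) == rho(k) rho(h).
k = (0, 1)
for r in range(4):
    h = (r, 0)
    conj = mul(mul(k, h), k)  # k h k^{-1}
    assert all(act(conj, act(k, v)) == act(k, act(h, v)) for v in range(4))
print("D4 action decomposes as C2-action + equivariant C4-action")
```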
I see why the sequence splits, but I don’t understand why a $G$-representation is the same as a $K$-representation with a $K$-equivariant $H$-action. I understand $K$-equivariant to mean that $h^{-1}k^{-1}hk$ acts by the identity. By Mackey theory ($H$ is abelian) an irreducible $G$-representation is induced from the data of a character $\chi$ of $H$ and an irreducible representation of the subgroup of $K$ fixing $\chi$. When this subgroup is not all of $K$, there is a $k\in K$ so that $\chi(k^{-1}hk)\ne\chi(h)$ for some $h\in H$. Then how is the $H$-action $K$-equivariant?
That's not what $K$-equivariance means in this context. $K$-equivariance means that $k h k^{-1} \cdot k $ acts the same as $k \cdot h$ where $k$ is viewed as an element of $K$ and $k h k^{-1}$ and $h$ are viewed as elements of $H$.
This makes sense since the group scheme $\mu_n$ itself carries an action of $\operatorname{Gal}(K_1/K)$. Thanks!
|
2025-03-21T14:48:31.268951
| 2020-06-15T03:18:33 |
363091
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630200",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363091"
}
|
Stack Exchange
|
Negative moments of Steinhaus random variables
Let $f_i, i=1, \ldots, n$ be independent Steinhaus random variables, i.e. variables which are uniformly distributed on the complex unit circle. Let $a \in \mathbb{R}^n$.
1) Find $E\left(\sum_{i=1}^nf_i a_i\right)^{-k}$, for $k=1,2$
2) What would the following condition mean: $\|f\|_{\infty}> b \|f\|_2$, where $b<1$ and $\|f\|_{\infty}$ is the maximum of the real parts of the $f_i$?
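A Monte Carlo sketch for part 1 (my own illustration; the coefficients `a` are an arbitrary choice). Note the negative moment can diverge when the sum can concentrate near $0$; the example below takes $|a_1| > |a_2| + |a_3|$ so that $|\sum_i f_i a_i| \ge 0.5$ and both moments are finite.

```python
import numpy as np

rng = np.random.default_rng(1)

def neg_moment_estimate(a, k, samples=100_000):
    """Sample-mean estimate of E |sum_i f_i a_i|^{-k} for Steinhaus f_i."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(samples, len(a)))
    sums = (np.exp(1j * theta) * np.asarray(a)).sum(axis=1)
    return np.mean(np.abs(sums) ** (-k))

a = [1.0, 0.3, 0.2]  # |a_1| > |a_2| + |a_3|, so |sum| >= 0.5 always
print(neg_moment_estimate(a, 1), neg_moment_estimate(a, 2))
```

When the moment is infinite, estimates like this fail to stabilize as `samples` grows, which is itself a useful diagnostic.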
|
2025-03-21T14:48:31.269009
| 2020-06-15T04:16:19 |
363092
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"D.-C. Cisinski",
"David Carchedi",
"Moutand Mohammed",
"https://mathoverflow.net/users/1017",
"https://mathoverflow.net/users/144181",
"https://mathoverflow.net/users/4528"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630201",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363092"
}
|
Stack Exchange
|
Homology group of the étale homotopy type of a projective limit of schemes
Let $(X_i)$ be a projective system of schemes such that $\hat{X} := \varprojlim X_i$ exists as a scheme; let $Et(\hat{X})$ (resp. $Et(X_i)$) be the étale homotopy type of $\hat{X}$ (resp. of $X_i$); suppose that $H_n(Et(\hat{X}), \hat{\mathbb{Z}}) = 0$. Do we have $\varprojlim H_n(Et(X_i), \hat{\mathbb{Z}}) = 0$?
Are they affine schemes? If so, you can see Corollary 4.4 here: https://arxiv.org/pdf/1905.06243.pdf
We need the limit to be op-filtered with affine transition maps, and the $X_i$'s quasi-compact and quasi-separated. Then this is a standard result from the 1960's.
@Denis-CharlesCisinski thank you for your comment; please, I need specific references.
By a Yoneda type argument, the Formula is equivalent to saying that for a locally constant sheaf A wih finite fibers, the $j$th cohomology of the limit is isomorphic to the filtered colimit of the $H^j(X_i,A)$ (with $j=0$ if $A$ is a sheaf of sets, $j=1$ if $A$ is a sheaf of groups, $j\geq 0$ if $A$ is a sheaf of abelian groups). This is established as Theorem 5.7 in Exposé VII of SGA4 in the case where $A$ is abelian. The case of non abelian coefficients is described in Remark 5.14 of loc. cit.
By "The Formula" I meant the fact that the homology of the limit is the limit of the homologies of the $X_j$'s (where the second instance of limit means in the sense of pro-objects in the derived category of abelian groups). The vanishing result you seem to want follows then immediately from there whatever you mean by limit.
|
2025-03-21T14:48:31.269146
| 2020-06-15T08:51:24 |
363099
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alan",
"Liviu Nicolaescu",
"StopUsingFacebook",
"https://mathoverflow.net/users/137958",
"https://mathoverflow.net/users/13904",
"https://mathoverflow.net/users/20302"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630202",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363099"
}
|
Stack Exchange
|
Is there a better reference for existence/regularity for parabolic PDEs (and systems) than the book of Ladyzenskaja, Solonnikov, Uralceva?
The book of Ladyzenskaja, Solonnikov, Uralceva contains almost everything most people need, yet the typesetting and notation is disgusting to the eye. Is there any better text that covers the same type of content, including systems of parabolic equations (hence the book of Lieberman does not count) and Hölder regularity etc., and is modern?
Sometimes we need to be grateful for what we do have, don't you think?
Yes, but it's been 50 years since that book was released...
@StopUsingFacebook Oldie but goldie.
|
2025-03-21T14:48:31.269234
| 2020-06-15T09:24:25 |
363101
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alex Ravsky",
"LeechLattice",
"https://mathoverflow.net/users/100231",
"https://mathoverflow.net/users/125498",
"https://mathoverflow.net/users/129185",
"https://mathoverflow.net/users/43954",
"mathworker21",
"vidyarthi"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630203",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363101"
}
|
Stack Exchange
|
Independent sets in complement of Kneser graphs
Intuition strongly suggests that there exist $\left\lfloor\frac{\binom{n}{k}}{\lfloor\frac{n}{k}\rfloor}\right\rfloor$ independent sets in the complement of a Kneser graph, each having $\lfloor\frac{n}{k}\rfloor$ vertices in it. Is this true? If true, how can it be established?
A construction of such a set of cliques in the Kneser graph $K(6,2)$ is as follows:
$$(12)(34)(56)$$
$$(13)(25)(46)$$
$$(14)(26)(35)$$
$$(15)(24)(36)$$
$$(16)(23)(45)$$
Thus, in this example we have $5$ disjoint triangles in the Kneser graph $K(6,2)$ which correspond to an equitable $5$-coloring of the complement graph $\overline{K}(6,2)$. Can such a construction always be done? I think this is related to the number of order $2$ elements in the symmetric group of order $n$. Thanks beforehand.
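A quick computational check (my own sketch) that the five triples above really are disjoint triangles in $K(6,2)$ and that they partition its $\binom{6}{2}=15$ vertices:

```python
from itertools import combinations

# Vertices of K(6,2) are 2-subsets of {1,...,6}; two vertices are
# adjacent iff the subsets are disjoint, so a triangle is a triple of
# pairwise disjoint pairs (i.e. a perfect matching of {1,...,6}).
triples = [
    [{1, 2}, {3, 4}, {5, 6}],
    [{1, 3}, {2, 5}, {4, 6}],
    [{1, 4}, {2, 6}, {3, 5}],
    [{1, 5}, {2, 4}, {3, 6}],
    [{1, 6}, {2, 3}, {4, 5}],
]

# Each triple is a clique in K(6,2): its pairs are pairwise disjoint.
for t in triples:
    assert all(a.isdisjoint(b) for a, b in combinations(t, 2))

# Together they partition all 15 vertices.
vertices = {frozenset(p) for t in triples for p in t}
assert vertices == {frozenset(c) for c in combinations(range(1, 7), 2)}
print("5 disjoint triangles partition V(K(6,2))")
```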
How would you interpret the case of $n=7$ and $k=3$?
@LeechLattice edited. please see now
@RobPratt yes, that is what I have said in the post
According to [p. 8], Baranyai's theorem [B] implies that the vertex set of the Kneser graph $K(n,k)$ can be partitioned into $\left\lceil\frac{\binom{n}{k}}{\left\lfloor\frac{n}{k}\right\rfloor}\right\rceil$ cliques of size $\left\lfloor\frac{n}{k}\right\rfloor$.
References
[B] Zs. Baranyai, On the factorization of the complete uniform hypergraph, In: Eds. A. Hajnal, R. Rado, and V. T. Sós, Infinite and Finite Sets (Proc. Intern. Coll. Keszthely, 1973), Bolyai J. Mat. Társulat, Budapest & North-Holland, Amsterdam, 1975, 91–108.
[BP] Boštjan Brešar, Mario Valencia-Pabon, Independence number of products of Kneser graphs, (November 19, 2018).
i'm currently banned from MSE (for telling mods to stop removing the infowars link from my profile). what you said here is correct; thanks for catching those typos. feel free to make the appropriate edits
@mathworker21 Done.
|
2025-03-21T14:48:31.269386
| 2020-06-15T10:04:14 |
363103
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Jason Starr",
"Lennart Meier",
"https://mathoverflow.net/users/13265",
"https://mathoverflow.net/users/2039"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630204",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363103"
}
|
Stack Exchange
|
$\mathscr{M}_*$, the stack of generalized elliptic curves (with some additional conditions), is locally of finite presentation
Let $\mathscr{M}_*$ be the stack over $\mathbb{Z}$ which classifies generalized elliptic curves $E/S$, such that for every geometric point $\operatorname{Spec} k \to S$, the fibre $E_k$ is smooth or an $n$-gon, where $\operatorname{char} k$ does not divide $n$.
(see III.0. of Deligne-Rapoport's "Les schemas de modules de courbes elliptiques")
In III.2.5., the authors claim that $\mathscr{M}_*$ is locally of finite presentation as a fibred category, i.e., for any filtered direct limit of rings $A = \varinjlim A_i$, the canonical map $\varinjlim \mathscr{M}_* (A_i) \to \mathscr{M}_*(A)$ is an equivalence.
They say that this is EGA IV.8.8.
Using it (and some propositions in the Stacks Project), we can show, for a generalized elliptic curve $E/A$, that there exist an $i$, a proper flat curve $E_i / A_i$, and a morphism $E_i^\text{sm} \times E_i \to E_i$, which induce the ones of $E/A$.
But I can't show that for some $i$, this $E_i/A_i$ is a generalized elliptic curve.
If for some $i$ $E_i/A_i$ has reduced geometric fibres, then we can show that this $E_i/A_i$ has connected geometric fibres.
And it seems that I can show for some $i$, every geometric fibre of $E_i/A_i$ has the trivial dualizing sheaf.
(First assume $A_i$ is noetherian. Then $\operatorname{Spec} A_i$ has a connected open subset which intersects the image of $\operatorname{Spec}A$.
Thus using the Picard scheme of $E_i \times U / U$, we can show the subset "$\{\omega = \mathscr{O} \}$" of $U$ is clopen.)
So finally, I want to show that for some $i$, every geometric fibre of $E_i/A_i$ is reduced, with at worst ordinary double point singularities.
(If so, then by the argument above and II.1.13., we have that this $E_i/A_i$ is a generalized elliptic curve.)
I can't find this in EGA.
So please suggest some references for it, or provide a proof.
Remark 2.1.16 in Cesnavicius's article https://www.imo.universite-paris-saclay.fr/~cesnavicius/modular-description.pdf seems to go in that direction, though I have not checked.
See, for instance, Proposition 2.3 of the following: https://arxiv.org/pdf/0809.5224.pdf
|
2025-03-21T14:48:31.269551
| 2020-06-15T10:44:21 |
363105
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Vidit Nanda",
"https://mathoverflow.net/users/18263",
"https://mathoverflow.net/users/64302",
"user2520938"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630205",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363105"
}
|
Stack Exchange
|
An unpublished result of H. Hamm
Let $f$ be a polynomial on $\mathbb{C}^n$. Denote
$$X_{R,p} = \{|x|<R\}\cap \{|f(x)|<p\}.$$
In "On the polynomials of I. N. Bernstein" Malgrange writes that H. Hamm proved that $f^{-1}(0)\cap\{|x|<R\}$ is a deformation retract of $X_{R,p}$, for sufficiently large $R$ and small $p$.
Has a proof of this result since appeared in the literature somewhere?
Note that sets like ${|f(x)| < p}$ are equivalent to ${|f(x)|^2 < p^2}$ and the norm-squared function is much friendlier to work with (eg it is smooth). I don't know if your desired result has appeared anywhere, but I think a good starting point is Durfee's paper (https://www.jstor.org/stable/1999065). This does what you want globally, ie., without the ${|x|<R}$ part; to make the equivalence local, you have to show that the gradient vector field of $-|f|^2$ points inwards along the boundary ${|x|=R} - {f=0}$.
@ViditNanda Thanks for your comment. In the article you mention it seems though that the singular locus of $f^{-1}(0)$ has to be compact. Now I'm really looking for a general result, so I do not think that this article really contains what I'm looking for. I will take a look at citing literature though, thanks.
|
2025-03-21T14:48:31.269668
| 2020-06-15T11:03:56 |
363108
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"A beginner mathmatician",
"Vít Tuček",
"https://mathoverflow.net/users/136860",
"https://mathoverflow.net/users/6818"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630206",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363108"
}
|
Stack Exchange
|
Generalization of Killing form
I am reading Knapp's book "Lie groups beyond introduction". On page 369, he has described the following. Let $\mathfrak g$ be a real semisimple Lie algebra. Suppose $\theta\colon\mathfrak g\to \mathfrak g$ is a Cartan involution. Let $B$ be a nondegenerate symmetric bilinear form on $\mathfrak g$ which is $\theta $-invariant and $B_\theta(X,Y):=-B(X,\theta Y)$ is positive definite. Let $\mathfrak g=\mathfrak k\oplus\mathfrak p$ be the Cartan decomposition of $\mathfrak g$ where $\mathfrak k$ and $\mathfrak p$ are eigenspaces of $\theta$ corresponding to eigenvalues $1$ and $-1$ respectively. Then clearly, $\mathfrak k\oplus i\mathfrak p$ is a compact real form of $\mathfrak g^{\mathbb C}$ (complexification of $\mathfrak g).$ Hence $B$ is negative definite on a maximal abelian subspace of $\mathfrak k\oplus i\mathfrak p$. I understood up to this point. Now Knapp argues that from above one can conclude that for any Cartan subalgebra of $\mathfrak g^{\mathbb C}$, $B$ is positive definite on the real subspace where all the roots are real valued. I do not understand how to get this. However, if $B$ is in particular the Cartan-Killing form then I can check by hand that this claim holds. Can someone please help me out?
I don't understand your question. Please check for typos. I've consulted Knapp's book and page 369 in my edition (from 1996) is devoted to exercies. Which edition are you using?
@Vit. I am reading the second edition. Please help me out if you can. I am struggling to understand this point. As far as typos are concerned, I think that is alright.
No, it's not all right. What do you mean by "all roots are real values"?
@Vit. Corrected.
According to theorem 2.15, all Cartan subalgebras of a complex semisimple Lie algebra are conjugate, i.e. there exists $\alpha \in \mathrm{Inn}(\mathfrak{g})$ such that $\mathfrak{h}_1 = \alpha(\mathfrak{h}_2).$ Since any invariant form $B$ is also invariant with respect to the group of inner automorphisms (sorry, I don't know if or where Knapp proves this), it follows that any positivity property is preserved for the corresponding real forms of all Cartan subalgebras.
|
2025-03-21T14:48:31.269839
| 2020-06-15T11:35:51 |
363109
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Mens",
"Simone Virili",
"https://mathoverflow.net/users/159622",
"https://mathoverflow.net/users/24891"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630207",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363109"
}
|
Stack Exchange
|
Characterising exact sequence in terms of (quasi-)identities
First of all, hello everyone and thanks in advance of any kind of help.
I am currently working on automated proofs of diagram chases. To this end, I have to characterise the property of $A \overset{f}{\to}B \overset{g}{\to} C$ being an exact sequence, where $f,g$ are morphisms between $R$-modules, in terms of algebraic (quasi-)identities. So, I am trying to find identities or implications of identities involving $f$, $g$ (and possible other, auxiliary morphisms) that characterise $\text{im}(f) = \ker(g)$.
Characterising the inclusion $\text{im}(f) \subseteq \ker(g)$ is pretty straightforward. This simply translates into $gf = 0$. But so far, I could not characterise the other inclusion $\ker(g) \subseteq \text{im}(f)$ in terms of algebraic (quasi-)identities.
Maybe someone can help me out here.
I am not an expert, so I do not know if this observation helps in any way but, after you know that $Im(f)\subseteq Ker(g)$, you can say that $g$ induces a unique map $\bar g\colon B/Im(f)\to C$. Then, the equality $Ker(g)=Im(f)$ is equivalent to $Ker(\bar g)=0$, that is, "$\bar g(x)=0$ implies x=0" for all $x\in B/Im(f)$.
Thanks for the comment. Very interesting observation. The only problem here is that I cannot really work with information about single elements - meaning I cannot incorporate identities such as $x = 0$. I can only work with something like $\bar{g} = 0$.
Again, I do not know if this helps, but note that elements "are" very particular maps. Let me explain, for an $R$-module $M$, an element $x\in M$ is the same as a homomorphism $R\rightarrow M$ (uniquely determined by $1\mapsto x$). Then, in the setting of my previous comment, the last part of the sentence can be reformulated by saying that "$\bar g\circ \varphi =0$ implies $\varphi=0$, for all $\varphi\colon R\to B/Im(f)$"
Oohh yes of course. Right. I think this might really help. Thanks a lot!
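Summarizing the exchange above as a single statement (a sketch; it uses the standard identification of elements of a module with homomorphisms out of $R$, as in the comments):

```latex
% Exactness of A --f--> B --g--> C via (quasi-)identities, as discussed above.
\begin{itemize}
  \item The inclusion $\operatorname{im}(f) \subseteq \ker(g)$ is the
        identity $g \circ f = 0$.
  \item Given $g \circ f = 0$, let
        $\bar g \colon B/\operatorname{im}(f) \to C$ be the induced map.
        Then $\ker(g) \subseteq \operatorname{im}(f)$ is the quasi-identity
        \[
          \forall\, \varphi \colon R \to B/\operatorname{im}(f) : \quad
          \bar g \circ \varphi = 0 \;\Longrightarrow\; \varphi = 0 ,
        \]
        using that an element $x \in M$ is the same datum as the
        homomorphism $R \to M$ determined by $1 \mapsto x$.
\end{itemize}
```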
|
2025-03-21T14:48:31.269990
| 2020-06-15T11:41:20 |
363110
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630208",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363110"
}
|
Stack Exchange
|
Ideals of a $C^{\infty}$-ring
Let $M$ be a $n$-dimensional smooth manifold and $F:M\rightarrow\mathbb{R}^{k}$ a smooth map.
If $0\in\mathbb{R}^{k}$ is a regular value of $F$, then the level set $N:=F^{-1}(0)$ is a smooth submanifold in $M$ with dimension $n-k$. Moreover, a standard result in differential geometry states that the tangent space $T_{p}N$ ($p\in N$) is equal to the kernel of the tangent map at $p$, i.e.
$$T_{p}N=\ker\,(dF_{p}:T_{p}M\rightarrow T_{0}\mathbb{R}^{k}\cong\mathbb{R}^{k}).$$
If we drop the regularity condition, then the level set $N=F^{-1}(0)$ need not be a manifold. However, we can define a "smooth structure" on $N$ as follows. Given an open subset $U$ in $M$, then $U\cap N$ is open in $N$. We say that a function $h:U\cap N\rightarrow\mathbb{R}$ is smooth, if $h=H|_{U\cap N}$ for some smooth function $H$ on $U$. Denote by $C^{\infty}_{N}$ the sheaf of smooth functions on $N$. We get a ringed space $(N,C^{\infty}_{N})$. In fact, the stalks of $C^{\infty}_{N}$ are local $C^{\infty}$-rings and therefore $(N,C^{\infty}_{N})$ is a locally $C^{\infty}$-ringed space (cf. Definition 4.8 in "Algebraic Geometry over $C^{\infty}$-Rings" by D. Joyce). Given a point $p$ in $N$, let $\mathfrak{m}_{p}$ be the maximal ideal of $C^{\infty}_{N,p}$. The Zariski tangent space of $N$ at $p$ is
$$T^{Zar}_{p}N=\mathrm{Hom}_{C^{\infty}_{N,p}}(\mathfrak{m}_{p}/\mathfrak{m}^{2}_{p},C^{\infty}_{N,p})
\cong\mathrm{Der}_{\mathbb{R}}(C^{\infty}_{N,p},C^{\infty}_{N,p}).$$
My question is: does the equality
$$T^{Zar}_{p}N\cong\ker\,(dF_{p}:T_{p}M\rightarrow T_{0}\mathbb{R}^{k}\cong\mathbb{R}^{k})$$
still hold?
One direction is obvious, $T^{Zar}_{p}N$, as a subspace of $T_{p}M$, is contained in $\ker\,(dF_{p})$. To prove the assertion, it suffices to show that each vector in $\ker\,(dF_{p})$ (considered as a derivation on $C^{\infty}_{M,p}$) vanishes on the ideal
$$J_{p}=\{[h]_{p}\in C^{\infty}_{M,p} : h|_{N}=0\}.$$
In contrast to conventional algebraic geometry, the local $C^{\infty}$-ring $ C^{\infty}_{N,p}$ is not Noetherian. I don't know how to characterize the ideal $J_{p}$ explicitly.
|
2025-03-21T14:48:31.270140
| 2020-06-15T12:49:43 |
363114
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alapan Das",
"Dieter Kadelka",
"Gabe Conant",
"Roee",
"https://mathoverflow.net/users/100904",
"https://mathoverflow.net/users/156029",
"https://mathoverflow.net/users/159626",
"https://mathoverflow.net/users/38253"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630209",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363114"
}
|
Stack Exchange
|
Finding a variable P for which a sum converges
I need guidance in finding a value $p$ for which $ \sum _{n=4}^{\infty }\left(\frac{n\ln \left(n\right)-n}{\ln \left(n!\right)}\right)^p $ converges, or a proof that no such $p$ exists.
I've tried the direct comparison test, by defining the sequence $ b_n=\:\left(\frac{n\ln \left(n\right)-n}{\ln \left(n!\right)}\right)^p $ and the sequence $ a_n=\left(1-\frac{1}{\ln \left(n\right)}\right)^p $.
From here I'm not sure how to prove that $\sum _{n=4}^{\infty }\:a_n$ diverges.
Have you tried replacing $\ln(n!)$ with the Stirling formula?
Have you tried the divergence test on $a_n$?
@DieterKadelka I've tried using the Stirling formula, but I didn't see where it can lead.
@GabeConant I've tried using the direct comparison test on $a_n$ with the sequence $ (1-(1/n))^p $ which I know diverges, but I'm not sure how to prove it.
To show that $\Sigma (1-(1/n))^p = \infty$ assume that $p \in \mathbb{N}$ and use the $\zeta$-function.
@AlapanDas I've been taught that if $\lim _{n\to \infty }\left(\sqrt[n]{a_n}\right)=1$ the root test is inconclusive, can you please explain further?
Yes you are right. Take the two series for example. 1) $f(x)=\sum_{n=1}^{\infty} \frac{x^n}{n}$ and 2)$g(x)=\sum_{n=1}^{\infty} \frac{x^n}{n^2}$. For both the series $\lim \limits_{n \to \infty} \sqrt[n]{|a_n|}=1$. But, at $x=1$, $f(x)$ doesn't converge, but $g(x)$ does.
In your question, $\lim \limits_{n \to \infty} (1-\frac{1}{2n(1-\frac{1}{ln(n)})+1})^{\frac{-p}{n}}≈(1-\frac{1}{2n})^{\frac{-p}{n}}$. This becomes similar to asking whether $\sum_{n=0} (1-\frac{1}{2n})^p$ is convergent or not. And obviously this is divergent.
@AlapanDas I don't know how to prove that $ \left(1-\frac{1}{n}\right)^p $ diverges
For large $n$, $(1-\frac{1}{2n})≈1$, hence, it's equivalent to $\sum_{k=0} x^k$.
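A quick numerical sanity check (my own sketch, not part of the thread) supports the comments above: by Stirling, $\ln(n!) = n\ln n - n + O(\ln n)$, so the general term tends to $1$ for every fixed real $p$, and a series whose terms do not tend to $0$ diverges.

```python
import math

def term(n, p):
    """General term ((n ln n - n) / ln(n!))**p, using lgamma(n+1) = ln(n!)."""
    return ((n * math.log(n) - n) / math.lgamma(n + 1)) ** p

# The base is 1 - O(log(n)/ (n log n)), so the term tends to 1 for any fixed p.
for p in (1, 2, -3):
    print(p, term(10**6, p))
```

So the divergence test already rules out every exponent $p$; no comparison test is needed.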
|
2025-03-21T14:48:31.270534
| 2020-06-15T12:59:16 |
363115
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Daniel Hast",
"Glasby",
"https://mathoverflow.net/users/23827",
"https://mathoverflow.net/users/31308"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630210",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363115"
}
|
Stack Exchange
|
Number of solutions of a degree 4 polynomial equation over a finite field
Suppose that $q$ is a prime power and $\xi, \eta\in \mathbb{F}_q$ are nonzero. A computer calculation for $q<70$ suggests that the number $N$ of $4$-tuples
$(a,b,c,d)\in\mathbb{F}_q^{4}$ satisfying $(ac-\xi bd)^2-(a^2-\xi b^2+1)(c^2-\xi d^2-\eta)=0$ is $q^3-q$.
Question. Is there some theory, or nice method, for computing $N$?
It may help, when $\xi$ is a non-square, to observe that the norm of $a+b\sqrt{\xi}\in\mathbb{F}_{q^2}$ is $a^2-\xi b^2$. The non-homogeneous polynomial above seems hard to me, but I am not an expert in such matters. I am not surprised that $N$ is a cubic in $q$.
Why are you interested in that polynomial in particular?
@Daniel Hast. This polynomial arose from a geometric problem regarding quadratic forms. The sizes of its fibres seem to be piecewise polynomial. Indeed, this is the first of a series of polynomials that arise essentially from the determinant of a Gram matrix.
I will concentrate on the case of odd $q$ and $\xi,-\eta$ being squares, but the solution should be extendable to the remaining cases.
It is convenient to use the language of characters sums in order to compute $N$. To reduce your point counting problem to a character sum problem, the following observation is useful:
$$\#\{ x \in \mathbb{F}_q : ax^2+bx+c=0\} = 1 + \chi(b^2-4ac),$$
where $\chi$ is the unique non-trivial quadratic character of $\mathbb{F}_q^{\times}$, extended to give $0$ on $0$, and $a \neq 0$. This observation is proved by completing the square.
We will need the following well-known formula:
$$(\sim) \, \sum_{x \in \mathbb{F}_q} \chi(x^2+t) = -1$$
for $t\neq 0$. This formula encodes the fact that the number of points on the genus-0 curve $y^2=x^2+t$ is $q-1$. In your case we are definitely lucky, as the computation of $N$ reduces to point counting on genus-0 curves, while for higher genus these point counts are not polynomial in general.
(Cosmetic step) Replacing $(b,d)$ by $(b/\sqrt{\xi},d/\sqrt{\xi})$ we see that we may assume that $\xi = 1$. Further replacing $(c,d)$ by $(c\sqrt{-\eta},d\sqrt{-\eta})$ we see that we may assume that $-\eta = 1$ as well.
Let us expand your defining hypersurface and express it as a quadratic polynomial in $a$:
$$(*) \, a^2(d^2-1) + a(-2bcd) + b^2c^2+b^2-c^2+d^2-1 = 0.$$
The case $d^2 =1$ simplifies to $a(2bcd) = b^2c^2+b^2-c^2$. If $bc \neq 0$, this determines $a$ uniquely. If $bc=0$, we must have $b=c=0$, and $a$ can be arbitrary. This case yields $2((q-1)^2 + q)$ solutions.
Let us assume $d^2 \neq 1$. The discriminant of $(*)$ factorizes as $4(c^2+1-d^2)(b^2+d^2-1)$, which is a lucky coincidence, and possibly the heart of the matter. Hence we see that, for given $b,c,d$, equation $(*)$ has
$$1+\chi( (c^2+1-d^2)(b^2 +d^2-1) )$$
solutions in $a$. Summarizing, we have
$$(**)\, N=2(q^2-q+1) + q^2(q-2) + \sum_{d^2 \neq 1, \, b,c,d \in \mathbb{F}_q} \chi( (c^2+1-d^2)(b^2 +d^2-1) ).$$
By $(\sim)$,
$$\sum_{b \in \mathbb{F}_q} \chi( (c^2+1-d^2)(b^2 +d^2-1) ) = \chi(c^2+1-d^2) \sum_{b \in \mathbb{F}_q} \chi(b^2 +d^2-1 ) = -\chi(c^2+1-d^2)$$
when $d^2 \neq 1$, and so
$$\sum_{d^2 \neq 1, \, b,c,d \in \mathbb{F}_q} \chi( (c^2+1-d^2)(b^2 +d^2-1) )= -\sum_{d^2 \neq 1, \, c,d \in \mathbb{F}_q} \chi(c^2+1-d^2),$$
and applying $(\sim)$ once more this becomes
$$-\sum_{d^2 \neq 1, \, c,d \in \mathbb{F}_q} \chi(c^2+1-d^2) = \sum_{d^2 \neq 1} 1 = q-2.$$
Plugging this character sum evaluation in $(**)$, $N=q^3-q$ is obtained, confirming your empirical observation.
For general $\xi$ and $\eta$, a very similar argument will work, because the discriminant of your defining equation (considered as a quadratic polynomial in $a$) still factorizes nicely, specifically it is
$$4(c^2 - \eta-d^2\xi) (-b^2\xi\eta + d^2 \xi + \eta).$$
Even $q$ is easier. The defining equation can now be written as $$(a+1)^2 (\xi d^2 +\eta) = \xi b^2 (c^2+\eta) + c^2.$$ Recall that in $\mathbb{F}_q$ with even $q$, $x \mapsto x^2$ is a field automorphism.
If $\xi d^2+\eta \neq 0$ (happens for all but a unique $d$), $a+1$ is uniquely determined by $b,c$, giving $(q-1)q^2$ solutions.
If $\xi d^2+\eta = 0$, $d$ is determined uniquely, $a$ is arbitrary and it remains to count solutions $(b,c)$ to $c^2(\xi b^2+1) = \eta \xi b^2$. Specifying $b$ determines $c$ uniquely, unless $\xi b^2 + 1=0$ (happens for a unique $b$), in which case there are no solutions. So this case contributes $q(q-1)$ solutions.
All in all, $N = (q-1)q^2+q(q-1) = q^3-q$ for even $q$ as well.
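The empirical observation for $q<70$ is easy to re-check by brute force. Here is a minimal sketch, restricted to prime $q$ so that plain integer arithmetic mod $q$ suffices (prime powers would need a finite-field library):

```python
def count_solutions(q, xi, eta):
    """Brute-force count of (a, b, c, d) in F_q^4 (q prime) satisfying
    (ac - xi*bd)^2 - (a^2 - xi*b^2 + 1)(c^2 - xi*d^2 - eta) = 0 in F_q."""
    n = 0
    for a in range(q):
        for b in range(q):
            for c in range(q):
                for d in range(q):
                    lhs = ((a * c - xi * b * d) ** 2
                           - (a * a - xi * b * b + 1)
                           * (c * c - xi * d * d - eta))
                    if lhs % q == 0:
                        n += 1
    return n

# N = q^3 - q for all nonzero xi, eta, over a few small prime fields.
for q in (2, 3, 5):
    for xi in range(1, q):
        for eta in range(1, q):
            assert count_solutions(q, xi, eta) == q ** 3 - q
```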
|
2025-03-21T14:48:31.270926
| 2020-06-15T13:58:56 |
363116
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Dirk Werner",
"Jochen Wengenroth",
"https://mathoverflow.net/users/127871",
"https://mathoverflow.net/users/159631",
"https://mathoverflow.net/users/21051",
"https://mathoverflow.net/users/70478",
"pietro siorpaes",
"user159631"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630211",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363116"
}
|
Stack Exchange
|
Subspaces of $L_p([0,1])$ whose unit ball is compact for the topology of convergence in measure
Any information about the following questions would be welcome.
I wonder whether there are (well-known or easy) closed and infinite dimensional subspaces of $L_p([0,1])$ ($1<p<\infty$) whose unit ball is compact for the topology of convergence in measure.
If they do exist, can they be described or characterized? (Do they linearly embed into $\ell_p$ or into $\ell_2$?)
Note that, in the case $p=1$, such subspaces of $L_1([0,1])$ were already considered in the literature. For instance, in a paper called "On subspaces of $L^1$ which embed into $\ell_1$", G. Godefroy, N.J. Kalton and D. Li obtained a description of the subspaces whose unit ball is compact and locally convex in measure (Theorem 3.3 and Corollary 3.5 therein). In their words: "Corollary 3.5 somehow means that the subspaces of $L^1$ whose unit ball is $\tau_m$-compact locally convex are close to the trivial ones, that is, to w$^*$-closed subspaces of copies of $\ell_1$ generated in $L^1$ by a sequence of disjoint indicator functions."
However, there exist subspaces of $L^1$ whose unit ball is compact but not locally convex in measure (Theorem 4.1 therein).
Theorem 4.4 in the paper N. Kalton, DW, ``Property (M), M-ideals, and almost isometric structure of Banach spaces.'' J. Reine Angew. Math. 461, 137-178 (1995) says about a subspace $X\subset L_p[0,1]$, $1<p<\infty$, $p\neq2$, that $X$ embeds almost isometrically into $\ell_p$ if and only if $B_X$ is $L_1$-compact. (This paper is the predecessor of the one you are quoting.) For those spaces, the unit ball is compact in measure. Concerning embeddings into $\ell_p$ see also W.B. Johnson and E. Odell's paper.
@DirkWerner This is exactly the kind of result I was looking for. Thank you very much for your help (and for the precise references).
No, it does not: otherwise, the given closed subspace $V$ would be a subspace of $L^0$ (the space of measurable functions, metrised with the convergence in measure) whose unit ball would be compact, and thus $V$ would be a topological vector space in which every ball is compact.
This implies that $V$ is finite dimensional, since
Every locally compact topological vector space $X$ has finite dimension: see Theorem 1.22 in
Rudin, Walter. "Functional analysis." (1973).
That is not a valid argument. Locally compact means that some neighbourhood of $0$ is compact. But the unit ball of the $L^p$-norm on $V$ need not be a $L^0$-neighbourhood. (Your argument would imply that every compact operator between Banach spaces had finite dimensional range.)
ah ah! You are totally right, I got confused between the intersection of $V$ with the unit ball in $L^p$, and the unit ball in $L^0$, which are of course very different things
|
2025-03-21T14:48:31.271389
| 2020-06-15T14:39:08 |
363119
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"5th decile",
"Abdelmalek Abdesselam",
"Adam Rubinson",
"Alan Dixon",
"Alex M.",
"Allawonder",
"Arnold Neumaier",
"Asaf Karagila",
"Brondahl",
"David Handelman",
"Deane Yang",
"Denis Serre",
"Earthliŋ",
"Favst",
"Federico Poloni",
"Fedor Petrov",
"Gabe Conant",
"Gabe K",
"Geoff Robinson",
"Gerhard Paseman",
"Hailong Dao",
"Harry Gindi",
"Harry Wilson",
"Ivan Meir",
"Joel David Hamkins",
"Jon Bannon",
"Joseph O'Rourke",
"Joshua Grochow",
"Julian C",
"Kapil",
"LSpice",
"Lennart Meier",
"Liviu Nicolaescu",
"M. Vinay",
"Mark Meckes",
"Mark Wildon",
"Martin Sleziak",
"Michael",
"Michael Engelhardt",
"Michael Greinecker",
"Michael Hardy",
"Mitch",
"Moishe Kohan",
"Pablo Zadunaisky",
"Per Alexandersson",
"Piero D'Ancona",
"Piyush Grover",
"Pop",
"R. van Dobben de Bruyn",
"RaphaelB4",
"Rauni",
"Robert Furber",
"Rodrigo A. Pérez",
"SBK",
"Sam Hopkins",
"Simon Rose",
"Simon Wadsley",
"Sohail Si",
"Somatic Custard",
"Sophie Swett",
"Stanley Yao Xiao",
"Steve Jessop",
"Steven Gubkin",
"Stig Hemmer",
"Terry Tao",
"Timothy Chow",
"Todd Trimble",
"Tyrone",
"Ville Salo",
"Vincenzo Zaccaro",
"Will R",
"Will Sawin",
"YCor",
"Yemon Choi",
"bof",
"copper.hat",
"darij grinberg",
"dohmatob",
"https://mathoverflow.net/users/1044",
"https://mathoverflow.net/users/1056",
"https://mathoverflow.net/users/10898",
"https://mathoverflow.net/users/1106",
"https://mathoverflow.net/users/11084",
"https://mathoverflow.net/users/111389",
"https://mathoverflow.net/users/120801",
"https://mathoverflow.net/users/121144",
"https://mathoverflow.net/users/121595",
"https://mathoverflow.net/users/122587",
"https://mathoverflow.net/users/123634",
"https://mathoverflow.net/users/124391",
"https://mathoverflow.net/users/124862",
"https://mathoverflow.net/users/125275",
"https://mathoverflow.net/users/127905",
"https://mathoverflow.net/users/134299",
"https://mathoverflow.net/users/1353",
"https://mathoverflow.net/users/13923",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/141351",
"https://mathoverflow.net/users/14450",
"https://mathoverflow.net/users/15316",
"https://mathoverflow.net/users/155308",
"https://mathoverflow.net/users/160511",
"https://mathoverflow.net/users/160522",
"https://mathoverflow.net/users/1703",
"https://mathoverflow.net/users/17353",
"https://mathoverflow.net/users/18060",
"https://mathoverflow.net/users/1898",
"https://mathoverflow.net/users/1946",
"https://mathoverflow.net/users/20302",
"https://mathoverflow.net/users/2039",
"https://mathoverflow.net/users/2083",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/25028",
"https://mathoverflow.net/users/2530",
"https://mathoverflow.net/users/2926",
"https://mathoverflow.net/users/30684",
"https://mathoverflow.net/users/3106",
"https://mathoverflow.net/users/31084",
"https://mathoverflow.net/users/31729",
"https://mathoverflow.net/users/3237",
"https://mathoverflow.net/users/33927",
"https://mathoverflow.net/users/3402",
"https://mathoverflow.net/users/345",
"https://mathoverflow.net/users/35357",
"https://mathoverflow.net/users/37071",
"https://mathoverflow.net/users/38253",
"https://mathoverflow.net/users/38434",
"https://mathoverflow.net/users/38448",
"https://mathoverflow.net/users/39654",
"https://mathoverflow.net/users/41291",
"https://mathoverflow.net/users/42278",
"https://mathoverflow.net/users/4312",
"https://mathoverflow.net/users/43266",
"https://mathoverflow.net/users/43395",
"https://mathoverflow.net/users/50073",
"https://mathoverflow.net/users/50818",
"https://mathoverflow.net/users/52842",
"https://mathoverflow.net/users/54415",
"https://mathoverflow.net/users/54780",
"https://mathoverflow.net/users/54788",
"https://mathoverflow.net/users/56920",
"https://mathoverflow.net/users/5736",
"https://mathoverflow.net/users/58807",
"https://mathoverflow.net/users/6094",
"https://mathoverflow.net/users/613",
"https://mathoverflow.net/users/61785",
"https://mathoverflow.net/users/6269",
"https://mathoverflow.net/users/6316",
"https://mathoverflow.net/users/70681",
"https://mathoverflow.net/users/7113",
"https://mathoverflow.net/users/71136",
"https://mathoverflow.net/users/7206",
"https://mathoverflow.net/users/7294",
"https://mathoverflow.net/users/7410",
"https://mathoverflow.net/users/75761",
"https://mathoverflow.net/users/763",
"https://mathoverflow.net/users/76498",
"https://mathoverflow.net/users/766",
"https://mathoverflow.net/users/7709",
"https://mathoverflow.net/users/78539",
"https://mathoverflow.net/users/80084",
"https://mathoverflow.net/users/82179",
"https://mathoverflow.net/users/8250",
"https://mathoverflow.net/users/8799",
"https://mathoverflow.net/users/94086",
"https://mathoverflow.net/users/9449",
"https://mathoverflow.net/users/99045",
"inkievoyd",
"lalala",
"lightalchemist",
"msh210",
"qwr",
"roy smith",
"tomasz",
"user21820",
"wlad",
"yess",
"მამუკა ჯიბლაძე"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630212",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363119"
}
|
Stack Exchange
|
Every mathematician has only a few tricks
In Gian-Carlo Rota's "Ten lessons I wish I had been taught" he has a section, "Every mathematician has only a few tricks", where he asserts that even mathematicians like Hilbert have only a few tricks which they use over and over again.
Assuming Rota is correct, what are the few tricks that mathematicians use repeatedly?
A mathematician never reveals their tricks.
Going to MO, because that way all the tricks are pooled.
Tangentially related: What do named "tricks" share?.
Polycamelism (thanks Pietro Majer). See https://mathoverflow.net/a/349456 for more. Gerhard "Called Trick For A Reason" Paseman, 2020.06.15.
I'm not posting this as an answer because I don't remember the details. In grad school my office mate told me a story about a famous(?) mathematician who would try the same trick (in their head) on every problem they heard, and then only say anything out loud if it worked. Even if it only worked 1/100 times, they still came off looking like a genius. Maybe someone else has heard this and knows the mathematician and/or the trick. (Or maybe it's apocryphal.)
See "Use the Feynman Method" from "Ten Lessons I Wish I Had Been Taught"!
embarrassed emoji
Just to clarify, what is expected in an answer. Do you expect something along the lines: "Erdős used variations and generalizations of the following tricks in many of his works. 1. ... 2. .... 3. ..." I.e., should the answer contain the name of some mathematician and then some of their trick? Or is this question asking simply for collection of tricks? (If it's the latter, probably we should not bother too much with checking which mathematicians has a particular trick in his bag of tricks.)
@MartinSleziak If you can attach a mathematician to the trick that would be great but not essential especially as I suspect some tricks are common to many. Ones that you personally use or have learnt from other mathematicians would be interesting.
Your question has a slightly negative spin to it: it is probably unarguable that any given Mathematician has a finite supply of genuinely new and innovative ideas, but it may be that in some cases, people later realised that ideas they have had earlier in their career are applicable in contexts they had not originally foreseen
@GeoffRobinson Agreed, and also mathematical ideas I think may often be distinct from tricks which paradoxically could be much more broadly useful. For example "linearity of expectation" is a very powerful trick to use in many different scenarios but as a mathematical idea it's quite basic. Erdos' "Probabilistic Method" was a great mathematical idea as applied to Ramsey Theory for example and also a widely applicable trick.
I once heard a Fields Medallist say that his research consisted of interchanging the order of summation and applying the Cauchy-Schwarz inequality.
@SimonWadsley: I think you may be remembering a possibly apocryphal story about Peter Lax- https://mathoverflow.net/a/60908/25028.
@SamHopkins I believe this comment is about either Terence Tao or Tim Gowers (I have heard both make roughly the same quip.)
A typical cw I would say
@GabeConant Although it has been mentioned that Rota attributed this to Feynman, I believe Feynman describes it himself in Surely You're Joking, Mr Feynman. The details change in each person's telling of it, so it's important to find the original. Unfortunately I lent my copy of that book to someone who never gave it back.
I am not sure what to expect from answers to this question. I always thought that the point Rota was trying to make was that a mathematician has only a small set of "tricks" that the mathematician has personalized deeply enough to always reach for and use them. Certainly we all "know" a lot more mathematics than a few tricks, but we natively are all truly fluent in a much narrower range than one would naively expect. I thought the point was that the set of techniques was particular to each mathematician. What am I missing here? @MartinSleziak's suggestion for an answer format seems reasonable.
Also, though, even if I knew all of Hilbert's tricks...I don't think I could be Hilbert. There is something to be said for the collected experience of a mathematician. I thought this was the point of Rota's passage.
@JonBannon Rota was also trying to defend Erdos' work from the charge that he only used a few tricks by noting that Hilbert and others did the same. I think the challenge with any technique or trick is to know when and how to apply it which is where experience comes into play.
A math professor once told me that all you ever do in analysis is interchange the order of limiting operations and integrate by parts.
Well, for a mathematician, a few just means finite.
I think this question misunderstands the quote; I'm surprised at so many upvotes. I think the point is that every mathematician has a few tricks that are her own. We don't necessarily know each other's tricks. That's why the observation has some real content.... i.e. it's not that everyone else is super clever and you are average because they too only have a few tricks, it's just that you do not understand their tricks, you only understand your tricks
@T_M I didn't say that all mathematicians use the same tricks. Anyway my question was not about the meaning of the quote but rather the tricks themselves and from the responses there are clearly many tricks that are common across mathematicians and fields of study.
@IvanMeir the phrase "what are the few tricks that Mathematicians use..." (emphasis my own of course) does indeed ask for one collection of tricks used by 'mathematicians'. And the question patently pertains to the meaning of the quotation because you also say "Assuming Rota is correct...." just before you ask the main question. I (and some others) think the question is not really correctly related to what Rota was getting at. It doesn't matter anyway, as you say there are plenty of answers...maybe they are what you were looking for??
It has always seemed to me that Rota was getting at sort of 'real', research-level tricks that are more unique to the mathematician or at least the subfield in which they worked. From outside the field, the different uses of the trick each seem like a clever leap, but with the correct knowledge in the field, they are really multiple uses of the same trick. So to my mind (and I'm sure many others), the quotation isn't really about swapping summation or the triangle inequality (two most upvoted answers!?) or things that any mathematician can easily recognise
@T_M As I said I'm more interested in the tricks themselves. Rota never went into any specifics to my knowledge so I guess we will never know exactly what he meant. It would have been interesting in particular to know which tricks he thought Hilbert used. If you have any good ones please contribute!
I see the title of the question has changed. For what it is worth, I think the other interpretation of the question is far more interesting. To find the list of generic tricks common in maths you can indeed look at the tricki. But what would be more fun is to have a list of mathematicians together with their favorite tricks. Perhaps I or someone else can provide that alternative question. If I have a minute later I will reincarnate it...
To prove a theorem, a useful and quite often used trick is to prove a few lemmas before.
@JonBannon Hi thanks for letting me know that, I've just changed the title back as I agree with you and I also don't think the title of a post should be changed in this way, quite late and not by the OP.
I posted the other interpretation just in case you wanted to stick to your change. If that other question gets shut down, please someone (@LSpice) transport @LSpice's lovely answer there over to this question.
@JonBannon Thanks Jon no problem at all. I didn't actually change the post title, someone else did which is why I reverted it as I like the slightly provocative but admittedly perhaps ambiguous original!
@IvanMeir But Rota does give some context. We will have to agree to disagree, because when you consider the comments he refers to about Erdos.... If they are referring to the common tricks that all mathematicians use, it doesn't really make sense to disparage someone's work by saying that they relied on these common tricks. The point is surely that that person thought they were clever by having realised Erdos repeated his own set of non-trivial combinatorial/probabilistic tricks.
FWIW I agree completely with @T_M -- as with most things in Indiscrete Thoughts the context in which Rota is using his rhetoric is important, and I really don't think his words should be treated as oracular truth
I actually think Rota, in the process of trying to debunk the comments about Erdos, discovered to his own surprise that Hilbert genuinely used a small set of repeated tricks. So his opinion on the matter was really independent of the comments about Erdos. In terms of the Number Theorist's remarks it is very plausible that his opinion was that Erdos used a very small set of tricks compared to other great mathematicians, not that they were particularly unique to Erdos himself.
Erdos had a large number of joint papers and worked very closely with other mathematicians so I think it's unlikely that his own tricks were not known and used by other mathematicians and vice versa.
Perhaps though the Number Theorist was just jealous of Erdos' ability to prove difficult theorems using simple and elegant combinatorial arguments, for example, rather than lots of heavy machinery as is often the case in modern number theory!
"Illusion", Michael. A trick is something a physicist does for money.
When you find out Shelah's trick, let me know.
@IvanMeir The editor did nothing wrong when changing the title of the post; this is not bad etiquette and you should not consider it so and get offended. It was an honest attempt at clarification. Actually, I find the original title click-bait and I would prefer a more descriptive title that describes the question.
"There's all kinds of tricks in the world." "It's all one trick, man - a giant induction in outer space."
Unfortunately, it seems not to be a good question, because it seems we won't learn much from the answers.
In his sixties, Rota wrote a whole slew of top-ten lists. One of them was the one about differential equations that's been mentioned on mathoverflow a number of times. One that he intended to write was "Ten problems in probability no one likes to bring up", which, while he was writing it became a list of 14 and then a list of 12, which is where he left it. Here's the one on differential equations: https://web.williams.edu/Mathematics/lg5/Rota.pdf
$$
\sum_{i=1}^m\sum_{j=1}^n a_{i,j}=\sum_{j=1}^n\sum_{i=1}^m a_{i,j}
$$
(and its variants for other measure spaces).
I still get misty-eyed whenever I read something that capitalizes on this trick in an unpredictable way.
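A concrete payoff of this identity (my own toy example, not from the answer): the divisor-count sum $\sum_{n\le N} d(n)=\sum_{n\le N}\sum_{d\mid n}1$ collapses to the single sum $\sum_{d\le N}\lfloor N/d\rfloor$ once the order of summation is exchanged.

```python
# Toy illustration of exchanging the order of summation:
#   sum_{n<=N} d(n) = sum_{n<=N} sum_{d|n} 1 = sum_{d<=N} floor(N/d),
# since each d <= N divides exactly floor(N/d) of the integers n <= N.

def divisor_sum_slow(N):
    # Direct double sum: for each n, count its divisors.
    return sum(sum(1 for d in range(1, n + 1) if n % d == 0)
               for n in range(1, N + 1))

def divisor_sum_fast(N):
    # Same double sum with the order of summation exchanged.
    return sum(N // d for d in range(1, N + 1))
```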
In some ways, Fubini's theorem is just a fancy version of this.
@GabeK Yes, Fubini is indeed one of the variants. Somehow the discrete version seems much more adept at sneaking up on me though.
I tell my students in basically every class I teach always to pay attention to the order in which a double sum (or double integral, or sum-of-an-integral) first shows up, because chances are, the best thing to do next is change the order of summation/integration.
Hah, my professor Leo Goldmakher made the exact same remark in our additive combinatorics class. Something about every mathematician working with character sums having just one trick that is repeatedly used: the discrete Fubini's principle.
People have made millions on algorithm speedups based on that. Find a first integral that's stable over repeated calculations, change the order of integration, and voila! your program is several orders of magnitude faster than competitors'.
Gelfand, in one of his papers about integral geometry, mentioned the fundamental trick, which is a version of what you wrote. If $$A\stackrel{\alpha}{\leftarrow} C\stackrel{\beta}{\rightarrow} B$$ and $f$ is a function on $C$, then $$\sum_{a\in A} \sum_{\alpha(c)=a} f(c)=\sum_{b\in B}\sum_{\beta(c)=b} f(c). $$ The inner sums are pushforwards and they have counterparts in other categories. Grothendieck used it successfully when applied to the (derived) category of coherent sheaves. E.g., you can get the Poincare-Hopf theorem this way.
@LiviuNicolaescu This is very neat. What I wrote is the special case where $A=[m]$, $B=[n]$, $C=[m]\times[n]$, $\alpha$ and $\beta$ are projections, and $f\colon (i,j)\mapsto a_{i,j}$. Thanks Liviu!
I've been meaning to give an undergrad maths talk "correspondences in algebra, geometry, and analysis" for some time. These objects are ubiquitous! (As are the slightly more general kernels as in the answer and @LiviuNicolaescu's comment.)
I had the fortune to have Sir Tim Gowers as one of my first-year lecturers. At the end of the first lecture he finished by pointing this trick out, and said it was one of the most practically useful techniques he'd ever come across.
@Michael can you elaborate or link? That sounds really interesting but I can't really visualize what you're saying.
@JulianC, I am currently working on computational lithography in a company that makes software for semiconductor manufacturing. This area is extremely computationally intensive: for each of the trillions of target pixels you need to integrate over all pixels on the source and all the pixels on the mask between the source and the target. One of the breakthroughs was simply the change of integration order in that computation; see Chris Mack's "Fundamental Principles of Optical Lithography" for that and much more.
A variant of this is to use counterexamples to this when the indexing sets are not finite to give counterexamples for various questions!
@Kapil, indeed, my research (computing characters of supercuspidal representations) is, in some sense, interesting precisely because of a failure of a naïve application of the Fubini theorem.
How can something that is so obvious be considered "a trick"?
A very useful generic trick:
If you can't prove it, make it simpler and prove that instead.
An even more useful generic trick:
If you can't prove it, make it more complicated and prove that instead!
The first is sometimes attributed to Polya. A related piece of wisdom due to de Giorgi: "If you can't prove your theorem, keep shifting parts of the conclusion to the assumptions, until you can."
"In dealing with mathematical problems, specialization plays, as I believe, a still more important part than generalization. Perhaps in most cases where we unsuccessfully seek the answer to a question, the cause of the failure lies in the fact that problems simpler and easier than the one in hand have been either incompletely solved, or not solved at all. Everything depends, then, on finding those easier problems and on solving them by means of devices as perfect as possible and of concepts capable of generalization...
...This rule is one of the most important levers for overcoming mathematical difficulties; and it seems to me that it is used almost always, though perhaps unconsciously" —David Hilbert, “Mathematical Problems”
These methods can be combined. First generalize the problem, making it more complicated. Then simplify along a different axis.
J. L. Alperin used to tell us first-year algebra students the second problem-solving technique, although, as far as I know, he did not claim to have invented it.
This reminds me of Zeilberger's paper on the method of undetermined generalization and specialization.
Another related approach which I have seen (from afar) to be successful, is to relax the axioms on some axiomatic system to make the system less rigid, yet retaining enough structure to be interesting (sometimes more so than before).
Dennis Sullivan used to joke that Mikhail Gromov only knows one thing, the triangle inequality. I would argue that many mathematicians know the triangle inequality but not many are Gromov.
Just in case somebody takes Sullivan's joke literally: I was always amazed how much analysis (including PDEs and functional analysis) Gromov knows. (As well as topology, dynamical systems,...)
In 1994, Vladimir Arnold was upset that among the Fields medalists, "three were inequalities manipulators".
Did Arnold think that number was too high, or too low?
In his book Partial Differential Relations, he showed that he also knows elementary linear algebra.
@HarryWilson: just that it's less than or equal to the sum of two other numbers of Fields medallists.
The only thing painters use is a brush
Sullivan even gave a talk where he made this triangle inequality point with Gromov in the audience. He asks Gromov if he agrees; unfortunately, the recording doesn't pick up Gromov's answer. See 4:10.
https://www.youtube.com/watch?v=ixc0TNfT0ks&ab_channel=Simplicityconference
In combinatorics: shove it into OEIS, and see what's up.
Also, add more parameters!
Note: the Macdonald polynomials were introduced by adding more parameters to the Jack and the Hall-Littlewood polynomials.
The introduction of Macdonald polynomials unified a lot of cool stuff,
and they are now essential in the field of Diagonal harmonics.
Does "snake oil" count as a trick?
That's basically the source of my PhD.
More parameters, meaning "catalytic variables"?
@SomaticCustard Well, the recent example I worked with, https://mathoverflow.net/questions/362265/proof-of-certain-q-identity-for-q-catalan-numbers was solved by first generalizing the problem, (adding two additional parameters a and c).
@PerAlexandersson Nice!
@PerAlexandersson Say, switching from a modular form to a Jacobi form? Would this count as adding more parameters in your sense?
Didn't Alexander Grothendieck mention "extra parameters" as a trick, or rather insisting that one ought to think about parametrized families of mathematical objects in stead of single objects.
Integration by parts has allegedly earned some people big medals.
Perhaps a reference to: https://mathoverflow.net/questions/53122/mathematical-urban-legends/60908#60908
@SamHopkins: Strange, I had heard this about Laurent Schwartz: somebody in his entourage jokingly said "so now one gets the Fields medal for integrating by parts"?
@AlexM.: apocryphal stories like this can often shift and evolve, involving different people, etc.
Transferring that derivative from one function to the other can be life changing.
Is the Atiyah-Singer Index Theorem not "just integration by parts"?
For a finite set of real numbers, the maximum is at least the average and the minimum is at most the average.
Of course this is just the real version of the Pigeonhole Principle, but Dijkstra had an eloquent argument as to why the usual version is inferior.
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1094.html
I'm glad I came here today if for nothing else than to have read that piece by Dijkstra. Thank you for the link.
I'm not convinced of the superiority of the other version. It only makes sense for real numbers. What's wrong with "if $A$ has more elements than $B$, then there is no injection $A\to B$"? This version works even for infinite sets (and indeed, it's a commonly used trick).
The pigeonhole principle wins my vote, partly because it has such a memorable name and because you can explain it to your six year old. And it really is useful.
Although Erdős was mentioned in the comments as perhaps having prompted this whole discussion, I'm surprised not to see the basic trick of "try a random object/construction" posted as an answer, which he used so often to such great success.
what do you mean by "try a random object/construction"?
E.g. to prove that some graph exists satisfying a given property, show this holds with positive (or even high) probability for the random graph $G(n,p)$. This is also often called the "probabilistic method."
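To make that concrete, here is a toy run of the argument (my code; the counting bound is the classical Erdős estimate): a uniformly random 2-coloring of the edges of $K_6$ has expected number of monochromatic $K_4$'s equal to $\binom{6}{4}2^{1-\binom{4}{2}}=15/32<1$, so some coloring of $K_6$ has no monochromatic $K_4$ — and random sampling quickly exhibits one.

```python
import itertools
import random

n, k = 6, 4
edges = list(itertools.combinations(range(n), 2))

# Expected number of monochromatic K_4's under a uniform random 2-coloring:
# each of the C(6,4) quadruples is monochromatic with probability 2^(1-6).
expected = len(list(itertools.combinations(range(n), k))) * 2 ** (1 - k * (k - 1) // 2)

def has_mono_clique(coloring):
    # Does some k-subset have all its edges the same color?
    for quad in itertools.combinations(range(n), k):
        if len({coloring[e] for e in itertools.combinations(quad, 2)}) == 1:
            return True
    return False

random.seed(0)
witness = None
for _ in range(1000):            # succeeds quickly since E[#mono] < 1
    c = {e: random.randint(0, 1) for e in edges}
    if not has_mono_clique(c):
        witness = c
        break
```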
If an integer-valued function is continuous, it has to be constant.
This trick shows up in many places, such as the proof Rouché's theorem, and basic results about the Fredholm index.
...as long as the domain is connected! I happen to have a continuous, integer-valued, non-constant function in my pocket right now. (It's the function which maps each point inside of a piece of currency to the face value of that piece of currency in cents.)
@TannerSwett How is your function not constant?
@Allawonder Tanner must have two coins of distinct values in their pocket. Different coins are different connected components of the domain. Each coin maps to its own value, so the function is only locally constant.
@Allawonder How about the continuous integer-valued non-constant function $\operatorname{id} \colon \mathbb Z \to \mathbb Z$?
Whenever you find yourself trying to implement inclusion–exclusion by hand ... stop immediately and start over using the Möbius $\mu$-function.
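One standard instance (my example, one of many): Euler's totient via Möbius inversion of $n=\sum_{d\mid n}\varphi(d)$, i.e. $\varphi(n)=\sum_{d\mid n}\mu(d)\,n/d$, which packages the inclusion–exclusion over the prime factors into $\mu$.

```python
from math import gcd

def mobius(n):
    # mu(n) = 0 if a square divides n, else (-1)^(number of prime factors).
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def phi_via_mobius(n):
    # Mobius inversion of n = sum_{d|n} phi(d).
    return sum(mobius(d) * (n // d) for d in range(1, n + 1) if n % d == 0)

def phi_direct(n):
    # Definition: count of k <= n coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
```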
Very appropriate for a question based on a Rota quotation!
Those of us who are old enough may remember http://www.tricki.org/
Localize + complete, taking a hypersurface section, and using the socle are useful tricks in commutative algebra.
I scoffed at your first sentence, then quickly had a terrible realisation.
I've been looking for a reference on the 'localize and complete' type of trick. Can you give an example of the technique in action (or a paper where I should take a look)?
@HarryGindi: there are countless examples. Often, a property can be checked locally; then completion allows one to use Cohen's structure theorem, thus working concretely over power series. One not very simple but powerful result where that procedure works is the following: if $R$ is a regular Noetherian algebra containing a field and $M,N$ are f.g. modules, then $Tor_i^R(M,N)=0$ implies $Tor_j^R(M,N)=0$ for all $j\geq i$ (the so-called rigidity of Tor).
@HailongDao Thanks! Nice one!
Am I the only person who has to wilfully resist pronouncing tricki as "tritsky"?
@RobertFurber Is that you, comrade?
@RobertFurber Whatever happened to Leon Tritsky? But seriously, it depends on the distance from English. For those for whom this distance is bigger than the distance to Russian, it is rather "treeskee".
Find a duality. Play duals against each other.
Duality is everywhere. It often appears in subtle and unexpected ways. Surprisingly powerful for something that sounds so simple.
And yet, curiously, triality doesn't seem to be 50% more powerful as a trick. (Or maybe I just haven't grokked how to use it properly!)
@LSpice Maybe because playing three guys against each other is much more tricky
@LSpice: Well... every vector space has a dual, but triality only exists in a few dimensions.
@darijgrinberg, fair enough; but perhaps one could also suspect that structure that's available all the time must be less powerful than structure that is available in only a few cases.
If $1-x$ is invertible, then its inverse is $1 + x + x^2 + \cdots $. This is the second most useful "trick" I know, after "look for the [symmetric] group acting on your thing", but someone else already mentioned it.
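A minimal sketch of the cleanest case (my example, prompted by the nilpotent remark below): when $x$ is nilpotent the series terminates, so the inverse of $1-x$ is an honest polynomial in $x$. Here $X$ is strictly upper triangular, so $X^3=0$ and $(I-X)^{-1}=I+X+X^2$ exactly.

```python
# x nilpotent => 1 - x invertible, and the geometric series terminates.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
X = [[0, 2, 5],
     [0, 0, 3],
     [0, 0, 0]]                         # strictly upper triangular => nilpotent

X2 = mat_mul(X, X)                      # X^3 = 0, so the series stops here
inv = mat_add(mat_add(I, X), X2)        # I + X + X^2
one_minus_X = [[I[i][j] - X[i][j] for j in range(3)] for i in range(3)]
```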
So for x=2.....
@Pablo Care to share useful instances of your trick?
@lalala, for $x = 2$, the inverse is exactly as stated. In $\mathbb Q_2$, anyway.
If $x$ is nilpotent, then $1-x$ is invertible
@RodrigoA.Pérez parametrices constructions
@Rodrigo A. Perez: this is essentially the usual contraction mapping proof of the local inverse mapping theorem in Banach spaces, as given, say, in Lang, Analysis I. I.e. if f = Id-o, where o is a little-oh function, then x, x+o(x), x+o(x+o(x)), x+o(x+o(x+o(x))), ... converges to the inverse image of x under f, for x near 0.
Hölder's inequality
and the special cases, Cauchy-Buniakovski-Schwarz
Cauchy-Schwarz, arguably, is the only trick in analytic number theory
Cauchy Schwarz Master Class granted it isn't only about Cauchy Schwarz...
Just imagine how much more number theorists would prove if they used the general Hölder inequality instead.
I couldn't resist adding one of my own: "Apply linearity of expectation".
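A standard first example (mine, not necessarily what the answerer had in mind): the expected number of fixed points of a uniform random permutation of $\{0,\dots,n-1\}$ is exactly $1$, since linearity gives $\sum_i P(p(i)=i)=n\cdot\frac1n=1$ even though the indicator events are far from independent.

```python
import itertools

# Exact brute-force check of the linearity-of-expectation computation for n = 6:
# averaging the number of fixed points over all 6! permutations gives exactly 1.

n = 6
perms = list(itertools.permutations(range(n)))
total_fixed = sum(sum(1 for i, v in enumerate(p) if i == v) for p in perms)
num_perms = len(perms)
# total_fixed / num_perms == 1 exactly
```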
For example in Barbier's incredibly elegant approach (Buffon's Noodle) to Buffon's Needle Problem.
Jordan Ellenberg gives a nice popular exposition of this (linearity of expectation in general, and the Buffon's noodle problem in particular) in "How not to be wrong".
This is arguably a special case of Conant's answer.
@GabeConant's answer referenced by @TerryTao.
What worked very well for the French school of algebraic geometry (but it seems to predate them!) is the "French trick" of turning a theorem into a definition. See e.g. this post for some examples and background on the term.
If $r,s $ are elements of a ring, then $1-rs$ invertible implies $1-sr$ is invertible (and it is a trick: you can make an educated guess for the formula for the inverse of $1-sr$ from that for $1-rs$). This can be used to find quick proofs of: (a) in a Banach algebra, ${\rm spec\ } rs \cup \{0\} = {\rm spec}\ sr \cup \{0\}$ (which in turn yields the nonsolvability of $xy-yx = 1$---all one needs is boundedness and nonemptiness of the spectrum); (b) the Jacobson radical (defined as the intersection of all maximal right ideals) is a two-sided ideal; and probably some other things I can't think of right now ...
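The educated guess alluded to above: formally, $(1-sr)^{-1} = 1 + sr + srsr + \cdots = 1 + s(1+rs+rsrs+\cdots)r = 1 + s(1-rs)^{-1}r$. A sketch verifying this exactly (my code), with $r$ of size $2\times 3$ and $s$ of size $3\times 2$, so the inverse we compute is $2\times 2$ while the one we get for free is $3\times 3$:

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def eye(n):
    return [[F(1) if i == j else F(0) for j in range(n)] for i in range(n)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix (exact, via Fractions).
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

r = [[F(1), F(2), F(0)],
     [F(0), F(1), F(3)]]        # 2x3
s = [[F(1), F(0)],
     [F(2), F(1)],
     [F(0), F(4)]]              # 3x2

one_minus_rs = sub(eye(2), mul(r, s))   # 2x2: invert this one directly
one_minus_sr = sub(eye(3), mul(s, r))   # 3x3: its inverse comes for free
candidate = add(eye(3), mul(mul(s, inv2(one_minus_rs)), r))
```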
This trick is due to Halmos, right?—or at least he wrote up a nice explanation of it.
Halmos did explain the motivation for the method (power series) in one of his books. Jacobson in his 1930s (?) book (Theory of rings) must have included it. But I think there is an 1910s paper of either Burnside or Wedderburn which deals with this and a generalization.
Percy Deift has called the Sylvester determinant identity $\mathrm{det}(1+AB)=\mathrm{det}(1+BA)$ "the most important identity in mathematics".
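A quick exact check of the identity (my example), with $A$ of size $2\times 3$ and $B$ of size $3\times 2$: the two sides are determinants of matrices of different sizes ($2\times 2$ versus $3\times 3$), yet they agree.

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def plus_I(M):
    # 1 + M for a square matrix M.
    return [[M[i][j] + (1 if i == j else 0) for j in range(len(M))]
            for i in range(len(M))]

def det(M):
    # Cofactor expansion along the first row (fine for tiny matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2, 3], [4, 5, 6]]      # 2x3
B = [[1, 0], [0, 1], [1, 1]]    # 3x2
lhs = det(plus_I(mul(A, B)))    # determinant of a 2x2 matrix
rhs = det(plus_I(mul(B, A)))    # determinant of a 3x3 matrix
```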
In the course of working with Hervé Jacquet and reading many of his papers on automorphic forms and the relative trace formula, I feel like he got an amazing amount of mileage out of clever use of change of variables.
I remember a conference where all the speakers gave extremely hard-to-follow talks using very sophisticated machinery, and then Jacquet gave a talk with a very nice result and about 45 minutes of it was going through an elementary proof (once you knew the setup) that boiled down to a clever sequence of change of variables.
I like this answer as it is more in the spirit of what Rota seemed to be referring to
Classical physics is all about this! Hamiltonian/Lagrangian mechanics, Hamilton-Jacobi theory, canonical transformations, finding conserved quantities etc. involve clever choice of variables.
Maybe more than a "trick," but if you want to investigate a sequence
$a_0,a_1,\dots$, then look at a generating function such as $\sum
a_nx^n$ or $\sum a_n\frac{x^n}{n!}$. If you are interested in a
function $f:\mathrm{Par}\to R$, where $R$ is a commutative ring and
$\mathrm{Par}$ is the set of all partitions $\lambda$ of all integers
$n\geq 0$, then look at a generating function $\sum_\lambda
f(\lambda) N_\lambda b_\lambda$, where $\{b_\lambda\}$ is one of the standard
bases for symmetric functions and $N_\lambda$ is a normalizing factor
(analogous to $1/n!$). For instance, if $f^\lambda$ is the
number of standard Young tableaux of shape $\lambda$, then
$\sum_\lambda f^\lambda s_\lambda = 1/(1-s_1)$, where $s_\lambda$ is a
Schur function. If $f(\lambda)$ is the number of square roots
of a permutation $w\in\mathfrak{S}_n$ of cycle type $\lambda$,
then
$$ \sum_\lambda f(\lambda)z_\lambda^{-1} p_\lambda = \sum_\lambda
s_\lambda = \frac{1}{\prod_i (1-x_i)\cdot \prod_{i<j} (1-x_ix_j)},
$$
where $p_\lambda$ is a power sum symmetric function and
$z_\lambda^{-1}$ is a standard normalizing factor.
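A down-to-earth computational instance of the same philosophy (my example): the partition numbers $p(n)$ have generating function $\sum_n p(n)x^n = \prod_{i\ge 1}(1-x^i)^{-1}$, and truncating the product at degree $N$ turns it into a short dynamic program.

```python
# Multiply the truncated series by 1/(1 - x^i) = 1 + x^i + x^{2i} + ...
# for i = 1..N; in coefficient form this is coeffs[n] += coeffs[n - i]
# taken in increasing n.  Afterwards coeffs[n] = p(n).

N = 20
coeffs = [0] * (N + 1)
coeffs[0] = 1
for i in range(1, N + 1):
    for n in range(i, N + 1):
        coeffs[n] += coeffs[n - i]
```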
The Renormalization Group trick:
Suppose you have some object $v_0$ and you want to understand a feature $Z(v_0)$ of that object. First identify $v_0$ as some element of a set $E$ of similar objects. Suppose one can extend the definition of $Z$ to all objects $v\in E$. If $Z(v_0)$ is too difficult to address directly, the renormalization group approach consists in finding a transformation $RG:E\rightarrow E$ which satisfies $\forall v\in E, Z(RG(v))=Z(v)$, namely, which preserves the feature of interest. If one is lucky, after infinite iteration $RG^n(v_0)$ will converge to a fixed point $v_{\ast}$ of $RG$ where $Z(v_{\ast})$ is easy to compute.
Example 1: (due to Landen and Gauss)
Let $E=(0,\infty)\times(0,\infty)$ and for $v=(a,b)\in E$ suppose the "feature of interest" is the value of the integral
$$
Z(v)=\int_{0}^{\frac{\pi}{2}}\frac{d\theta}{\sqrt{a^2\cos^2\theta+b^2\sin^2\theta}}\ .
$$
A good transformation one can use is $RG(a,b):=\left(\frac{a+b}{2},\sqrt{ab}\right)$.
Example 2: $E$ is the set of probability laws of real-valued random variables say $X$ which are centered and with variance equal to $1$. The feature of interest is the limit law of $\frac{X_1+\cdots+ X_n}{\sqrt{n}}$ when $n\rightarrow\infty$. Here the $X_i$ are independent copies of the original random variable $X$.
A good transformation here is $RG({\rm law\ of\ }X):={\rm law\ of\ }\frac{X_1+X_2}{\sqrt{2}}$.
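A numerical sanity check of Example 1 (my code, not the answerer's): one RG step leaves $Z$ unchanged, and iterating converges to the arithmetic–geometric mean $M(a,b)$, at which point $Z=\pi/(2M(a,b))$.

```python
from math import cos, sin, sqrt, pi

def Z(a, b, steps=100000):
    # Midpoint rule for the integral over [0, pi/2].
    h = (pi / 2) / steps
    return h * sum(1.0 / sqrt((a * cos((j + 0.5) * h)) ** 2
                              + (b * sin((j + 0.5) * h)) ** 2)
                   for j in range(steps))

def rg(a, b):
    # One renormalization step: Z(rg(a, b)) = Z(a, b).
    return (a + b) / 2, sqrt(a * b)

def agm(a, b):
    # Iterate to the fixed point a = b = M(a, b).
    for _ in range(60):
        a, b = rg(a, b)
    return a
```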
Fantastic. I wish you could also explain Conformal Symmetry in a similarly simplified language.
https://mathoverflow.net/questions/266921/how-is-the-conformal-prediction-conformal
just gave it a try
https://mathoverflow.net/questions/394335/what-is-a-simplified-intuitive-explanation-of-conformal-invariance/394363
Andre Weil's slogan that where there is a difficulty, look for the group (that unravels it).
I take this to mean something more aggressive than a truism to note and use group structure; more like "exploit the full potential of representation theory in all its manifestations after seeking out whatever obvious and hidden symmetries exist in the problem".
The chapter ‘A Different Box Of Tools’ of Surely You're Joking, Mr Feynman was named for a particular trick Richard Feynman used:
[Calculus For The Practical Man] showed how to differentiate parameters under the integral sign — it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasise it. But I caught on how to use that method, and I used that one damn tool again and again.
(pp.86–87)
I asked about Feynman's trick elsewhere on MO and got some interesting responses.
(1) Double-counting, which can also be described as counting the same thing in two ways. Very useful, and at least as powerful as interchanging summation order.
(2) Induction. When there is a natural number size parameter, one can always consider trying this.
(3) Extremal principle, which is ultimately based on induction, but looks very different. For example, the Sylvester-Gallai theorem has an extremely simple proof using this.
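For (1), the archetype is the handshake lemma (my toy example): summing the vertex degrees counts every edge exactly twice, once from each endpoint, so $\sum_v \deg(v) = 2|E|$.

```python
# Count incidences (vertex, edge) in two ways: by edge (2 per edge)
# and by vertex (deg(v) per vertex).

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (0, 4)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# sum(degree.values()) == 2 * len(edges)
```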
(2) can maybe be generalised to structural induction, so that one need not even artificially wrangle a natural-number parameter at all (e.g., directly inducting on trees, by, say, deducing a property of a tree if the same property holds when one leaf is removed, rather than on their height).
@LSpice: Indeed. Structural induction should even be taught in that way (top-down). =)
Existence as a property: You want to find an object that solves a given equation or a given problem. Generalize what you mean by object so that existence becomes easy or at least tractable. Being an object is now a possible property you might prove about your generalized object.
Having already something you can prove properties about is often both mathematically and psychologically easier than searching in the void.
Some examples:
Algebraic closures: In your original field, you don't know whether your polynomial has zeros, but in the algebraic closure it does. If you can show that it is Galois invariant, then it is actually in the original field. (Given that complex numbers are an algebraic closure (though unknown at the time of their conception), this is maybe the most classical of these examples.)
Representability of moduli problems: Often it is hard to show that a moduli problem is representable by a quasi-projective variety. This is what led Weil to define general varieties so that he could represent a moduli problem. If your moduli problem does not have automorphisms and you can produce an ample line bundle, you can show afterwards that it is actually represented by a quasi-projective variety.
Partial differential equations: Often it is much easier to find generalized solutions (Sobolev functions or a distribution). Then the existence of a classical solution is a regularity property of your generalized solution.
A great one! Reminds of Bertrand Russell's famous "The method of "postulating" what we want has many advantages; they are the same as the advantages of theft over honest toil." (Introduction to Mathematical Philosophy), 1919
For your algebraic closure example, better check that your field is perfect before concluding rationality by Galois descent!
A common trick is compactification. First prove that a space admits a compactification, e.g.
Gromov's compactness theorem for manifolds with positive Ricci curvature and bounded diameter
Gromov's compactness theorem for pseudoholomorphic curves
the integers with the profinite topology has compactification the profinite integers $\hat{\mathbb{Z}}$
The space of hyperbolic 3-manifolds with basepoints in the thick part with respect to the Gromov-Hausdorff topology is compact
Geometrically finite Kleinian groups may be compactified (sometimes) by adjoining the domain of discontinuity (the conformal compactification)
the space of curves on a surface compactifies to projective measured lamination space
The space of probability measures on a compact space is compact with respect to weak* convergence (I guess this is not a compactification, but really a proof of compactness in the right topology)
Surfaces in $R^3$ with bounded area and fixed boundary may be compactified by the space of integral currents with the flat distance.
Once one has a compact space, one can analyze the objects one is interested in by taking infinite sequences, extracting a subsequence in the limit, and analyzing this limit, sometimes obtaining a contradiction if the limit does not lie in the original space one was considering. E.g. I used this approach to analyze exceptional Dehn fillings of cusped hyperbolic 3-manifolds.
In some sense, this is a particularly geometric way of the technique I describe in my answer, namely to generalize the kind of objects one is considering to make existence easy. In the compactification, existence of a converging subsequence is suddenly automatic; and once one has a point in the compactification, one has techniques to possibly show it's in the original space.
One might add to this list compactifications of moduli spaces - say, adding to moduli spaces of smooth curves those with nodal singular points. Among other miracles, this ties together moduli spaces of curves of different genera.
The second derivative test (i.e. "a smooth function has a local maximum at a critical point with non-positive second derivative.") is endlessly useful.
When you first see this fact in Calculus, it might not seem so powerful. However, there are countless generalizations (e.g. the maximum principle for elliptic and parabolic PDEs), which play an important role in analysis.
If the powerful tool of linearity isn’t good enough, the basic concept of convexity is amazingly powerful.
@DeaneYang, indeed, it amazes me how a first-order approximation is good, a second-order approximation often allows one to wring out a little more power … and yet, outside hard analysis (and I guess the higher reciprocity laws of number theory?), it seems that third-order approximations are either not so useful, or so hard to use that we aren't able to elevate them to the status of tricks yet.
There's the quote in Bell's Men of Mathematics attributed to Jacobi: "You must always invert", as Jacobi said when asked the secret of his mathematical discoveries. Sounds apocryphal but it is certainly a nice suggestion.
Buffett and Munger are also known to have incorporated this principle into their investing philosophy.
Presumably this is related to elliptic integrals giving inverse elliptic functions, just as integrals like $\int \frac{1}{x} dx$ gives the inverse to the exponential function and $\int \frac{1}{1+ x^2} dx$ gives the inverse to the tangent function.
Sure, but since most problems can be written as a map to invert, the advice sounds comical
@Favst, how does one use this advice to drive investment? (Or is it a joke?)
@LSpice, it's mentioned in "Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger" and many websites that seek to understand and emulate Buffett and Munger. Some details are given here: https://seekingalpha.com/article/4040474-invert-always-invert . I'm no investor though, so I'm not entirely sure how this philosophy applies practically in day-to-day investments.
@Favst, thanks. So the 'invert' here isn't 'compositional inverse', but literally 'turn upside-down', or whatever (even in Jacobi's sense, as the expanded quote they include indicates).
(I made a mistake: the expanded quote is from Munger. The Jacobi quote is just as quoted, and probably indeed means what a mathematician thinks it means rather than what Munger's quote suggests it means.)
Jacobi: 'man muss immer umkehren' is reported in Edward B. Van Vleck, Current tendencies of mathematical research, Bull. Amer. Math. Soc. 23 (1916), 1–13.
Scott Aaronson has taken a stab at articulating his own methodology for upper-bounding the probability of something bad. He was inspired by a blog post by Scott Alexander bemoaning how rarely experts write down their expert knowledge in detail.
If, on a probability space, $\int_\Omega X\,dP = x$, then there is some $\omega$ such that $X(\omega)\ge x$.
A useful generalization of this trick: https://mathoverflow.net/a/363178/35357
In homotopy theory: if something is hard to compute, build an infinite tower that converges to it and induct your way up the tower. This includes spectral sequences, Postnikov towers, and Goodwillie calculus.
In category theory: apply Yoneda's Lemma.
Other common tricks in category theory:
Swap the order of colimits.
Embed into a presheaf category (e.g., Giraud's Theorem).
Reduce to the case of representable functors.
In an old mathoverflow answer, I wrote several more common tricks in category theory, including
Localization: shifting view so that two objects you previously viewed as different are now viewed as the same.
Replacing an object by one which is easier to work with but has the same fundamental properties you are trying to study.
Mapping an object to a small bit of information about the object. Showing that two are different because they differ on this bit.
Good that you mentioned representability and Giraud! Making non-representable functors representable is an extremely powerful trick. In fact something similar is omnipresent in all of mathematics: if something you want to exist does not exist, make it exist!
Concerning localization - more specific advantage of this trick is that instead of identifying objects one adds an isomorphism between them: extending your domain is usually technically easier than quotienting it by an equivalence relation.
My favorite is perhaps the "commutator trick", i.e. "take commutators and see what happens". Some general things that may happen 1) the commutator touches less than the commutatorands 2) the commutator defies your abelian intuition.
I'm mostly familiar with 1) in the context of infinite groups, in particular finding generators for complicated groups, and 2) blew my mind to pieces as Barrington's theorem before I even knew any math.
I counted that a seventh of my papers use some type of commutator trick, but what really sold commutators to me was when I got a Rubik's cube as a christmas present.
The obvious example for 1) is how you prove simplicity/perfection of alternating simple groups and others like it.
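A sketch of point 1) in the symmetric group (my code; this observation underlies many Rubik's cube algorithms): if two permutations each move five points but their supports share only a single point, their commutator moves just three points.

```python
# Permutations as dicts on their support; compose(p, q)(x) = p(q(x)).

def compose(p, q):
    keys = set(p) | set(q)
    return {x: p.get(q.get(x, x), q.get(x, x)) for x in keys}

def inverse(p):
    return {v: k for k, v in p.items()}

def cycle(points):
    return {points[i]: points[(i + 1) % len(points)] for i in range(len(points))}

def support(p):
    return {x for x, y in p.items() if x != y}

a = cycle([0, 1, 2, 3, 4])       # moves 5 points
b = cycle([4, 5, 6, 7, 8])       # moves 5 points, overlaps a only at 4
comm = compose(compose(a, b), compose(inverse(a), inverse(b)))  # a b a^-1 b^-1
```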
From a physicist point of view I want to mention this trick and its generalization for operators:
"Two commuting matrices are simultaneously diagonalizable"
(for physicists all matrices are diagonalizable). Of course the idea is that if you know the eigenvectors of one matrix/operator then diagonalizing the other one is much easier. Here are some applications.
1) The system is translation invariant: Because the eigenvectors of the translation operator are $e^{ik.x}$, one should use the Fourier transform. This solves the wave equations for light, acoustics, and free quantum electrons, as well as the heat equation in homogeneous media.
2) The system has a discrete translation symmetry: The typical example is the atoms of a solid forming a crystal. We have a discrete translation operator $T_a\phi(x)=\phi(x+a)$, with $a$ the lattice spacing, and we should try $\phi_k(x+a)=e^{ik.a}\phi_k(x)$, as it is an eigenvector of $T_a$. This gives Bloch-Floquet theory, where the spectrum splits into a band structure. It is one of the most famous models of condensed matter, as it explains the difference between conductors and insulators.
3) The system is rotation invariant: One should then diagonalize the rotation operator first. This allows us to find the eigenvalues/eigenvectors of the hydrogen atom. By the way, we notice that the eigenspaces of the hydrogen atom are stable under rotation and are therefore finite-dimensional representations of $SO(3)$. The irreducible representations of $SO(3)$ have dimensions 1,3,5,... and they appear, taking the spin of the electron into account, as the columns of the periodic table of the elements (2,6,10,14,...).
4) $SU(3)$ symmetry: Particle physics is extremely complicated. However, physicists have discovered that there is an underlying $SU(3)$ symmetry. Considering the representations of $SU(3)$, the zoology of particles looks much more organized (A, B).
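A finite toy model of point 1) (my code): the cyclic shift $S$ on $\mathbb{C}^6$ and the circulant matrix $A=S+S^{-1}$ commute, and every discrete Fourier mode is a simultaneous eigenvector — "translation invariance, hence Fourier" in miniature.

```python
import cmath

N = 6
# Cyclic shift: (S v)_i = v_{i-1};  A = S + S^{-1} is circulant, so [S, A] = 0.
S = [[1 if (i - j) % N == 1 else 0 for j in range(N)] for i in range(N)]
A = [[1 if (i - j) % N in (1, N - 1) else 0 for j in range(N)] for i in range(N)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

commutator = [[x - y for x, y in zip(rx, ry)]
              for rx, ry in zip(mul(S, A), mul(A, S))]

k = 1
omega = cmath.exp(2j * cmath.pi * k / N)
v = [omega ** m for m in range(N)]   # Fourier mode
Sv, Av = apply(S, v), apply(A, v)    # eigenvector of both simultaneously
```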
If "for physicists, all matrices are diagonalizable", what about the matrix $\begin{pmatrix} 0 & 1 \ 0 & 0 \end{pmatrix}$?
Is it for physicists "almost all" $\approx$ "all"?
@ogogmad For physicists all matrices are diagonalizable except for the matrices than are not.
Terence Tao wrote a paper, Exploring the toolkit of Jean Bourgain. The abstract reads:
Gian-Carlo Rota once asserted that "every mathematician only has a few tricks". The sheer breadth and ingenuity in the work of Jean Bourgain may at first glance appear to be a counterexample to this maxim. However, as we hope to illustrate in this article, even Bourgain relied frequently on a core set of tools, which formed the base from which problems in many disparate mathematical fields could then be attacked. We discuss a selected number of these tools here, and then perform a case study of how an argument in one of Bourgain's papers can be interpreted as a sequential application of several of these tools.
The Fundamental Theorem of Calculus, that is
$$\int_0^1 \frac{d}{dt} \psi_t \, dt =\psi_1 - \psi_0.$$ This "trick" is used throughout differential topology/geometry, for example in showing that de Rham cohomology is homotopy invariant, or for a uniform bound on the period of a negative gradient flow line of the Rabinowitz action functional in constructing Rabinowitz–Floer homology. Actually, the trick consists of cleverly bringing the statement in question into a form where one can apply the fundamental theorem of calculus. Also, in Floer theory in general, the Arzelà–Ascoli theorem is used exceedingly often.
The fundamental theorem of calculus has served me well in my research.
I went through all the responses, and I'm surprised that the following trick has not already been posted, given its ubiquity: Use antisymmetry in a partially ordered set. That is, if $a\le b$ and $b\le a$ then $a=b.$
Examples:
Real numbers: $a\le b$ and $b\le a$ implies $a=b$ (can be helpful when directly showing equality is difficult)
Divisibility of positive integers: $a\mid b$ and $b\mid a$ implies $a=b$ (extremely common in number theory)
Subsets of a set: $a\subseteq b$ and $b\subseteq a$ implies $a=b$ (for example, in locus proofs of classical geometry)
Find something that can be computed (a special case, a simplification, or just something of the same flavor as the real problem). Then stare at the data and look for patterns.
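A tiny illustration of this trick (my own example, not from the answer): compute the partial sums of cubes for small $n$, stare at the data, and notice that they are the squares of the triangular numbers.

```python
# "Compute special cases and stare at the data": the partial sums of cubes
# 1, 9, 36, 100, ... are all perfect squares -- in fact the squares of the
# triangular numbers n(n+1)/2, which strongly hints at the closed formula
# (to be proved afterwards, e.g. by induction).
for n in range(1, 30):
    s = sum(k**3 for k in range(1, n + 1))
    assert s == (n * (n + 1) // 2) ** 2
```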
Using some form of the Yoneda Lemma, see your particular structure as a representable functor on an abstract category where your desired constructions are obvious.
This functorial point of view is very nice in algebraic geometry.
John Allen Paulos has referred to the parable of the Texas sharpshooter a number of times. To paraphrase:
A man driving across the US notices that in a west Texas town, there are a lot of barns with targets painted on their sides. Each of the targets has bullet holes directly within the bullseye. Impressed, the man inquires in a local diner about the ace shooter who lives in the town. The townsfolk inform him that a local resident just likes to shoot randomly at barns, and then later on paints the target right where the bullets land!
Although the parable is usually presented as a fallacy of logical reasoning, there appears to be a genuine mathematical trick in the logic: namely, conditioning on a random event may be fruitful, even if the probability of that specific event is low.
This Texas sharpshooter parable to me seems similar to, for example, the post-selection tricks of scatter-shot boson sampling and variants used in recent (2020) experiments. Therein one has limited control over the generation of individual photons, so one merely post-selects on the particular $M$ crystals that generated the photons.
This trick is called “stupid argument” by some of my collaborators and me.
Let’s say you have a property that is defined by testing with cubes on all scales. Now you have some regular set (say again a cube or a ball) in which the property holds, that is, it holds whenever you test with a cube contained in this regular set. You might end up in this situation, for example, after a local transformation of sets with this property, like flattening the boundary in PDE. Now, to get the property for all cubes, you make the following case distinction: if the concentric cube of half the size is contained in the regular set, you use your assumption. Otherwise, the original cube contains a cube of 1/4 its side length that lies completely outside the regular set, and for this one you usually get the property trivially.
Since this is a trick, I have kept it somewhat mysterious. Applications include things like measure-theoretic dimensions, metric properties like porosity, and so on; the exact details of why it is trivial "outside", and why testing with comparably smaller objects is enough, depend a bit on the specific property one aims for.
If a function with connected domain is locally constant, then it is constant.
Connectedness doesn't need to be understood topologically: one manifestation of the trick is that a sequence whose consecutive terms are equal must be constant.
If the domain is nice enough, it extends to periodic functions as well (a locally periodic function has the same periodic pattern everywhere).
If any two horses are of the same color, then all horses are of the same color.
There is but one major trick, and furthermore, many of the other answers are applications of it. Let's call it
T R A N S L A T I O N
The idea is very simple; you translate your problem to a language in which it is simple to solve, so you solve it, and then (if necessary) translate your solution back to the original language. Alternatively, you can think of this as finding the right angle of attack to solve your problem.
Conjugation? Say $W$ is a sequence of Rubik's cube moves that twists two corners. Find moves $V$ that put the corners you want to twist into the correct position. Then simply apply $VWV^{-1}$.
Change of variables? Translate your integral in $x$ to an integral in $u$ (with translated differential and bounds of course). Find the antiderivative as a function of $u$, and translate back to a function of $x$ (or evaluate if definite).
Diagonalization? Find an eigenbase (essentially a convenient language to understand your matrix). Change basis by conjugation, and your matrix action suddenly looks much simpler.
Analytic geometry? Translate your geometric problem to convenient algebraic manipulations.
etc.
This is usually some form of abstraction, and indeed it can be quite powerful. In some sense, this is even the definition of mathematics. Finding an abstraction that unifies and answers easily many apparently unrelated questions.
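As a concrete sketch of the diagonalization instance of this trick (an illustration of mine, not part of the answer): to compute a high power of a matrix, translate to an eigenbasis, act there where the action is trivial, and translate back.

```python
import numpy as np

# Diagonalization as "translation": to compute A**10, translate to an
# eigenbasis (A = P D P^{-1}), take the (trivial) power of the diagonal
# matrix there, and translate back.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
vals, P = np.linalg.eig(A)
A10 = P @ np.diag(vals**10) @ np.linalg.inv(P)
assert np.allclose(A10, np.linalg.matrix_power(A, 10))
```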
I also mention the Rubik's cube in my answer, so let me note that when I got my first Rubik's cube as a kid, I certainly invented conjugation immediately. But I did not invent commutators, and was not able to solve the cube. (I don't disagree with your answer though.)
There's a book about this trick, called Bypass Operations.
Two come to mind for me.
"When in doubt, differentiate!" -Chern (or so I've heard).
As a result, it's been useful for me to check the implications of $d^2=0$ on differential forms.
I have not used it in research (I have moved away from analysis somewhat), but I love trying to use Jensen's inequality when I come across an analysis problem. If I recall correctly, I solved two problems on my analysis prelim exam using said inequality.
When working in coordinates, as with PDEs, checking what happens when you commute partial derivatives is indeed quite useful.
Jensen’s inequality is really just the definition of convexity. So it encompasses all other inequalities based on convexity. Looking for convexity and then applying the direct consequence of its definition is an impressively useful and powerful “trick”
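A quick numerical sanity check of the comment's point (a sketch of mine): for a convex $f$ and convex weights $w$, $f\bigl(\sum_i w_i x_i\bigr)\le\sum_i w_i f(x_i)$.

```python
import numpy as np

# Jensen's inequality as the definition of convexity: for convex f and
# weights w_i >= 0 summing to 1, f(sum w_i x_i) <= sum w_i f(x_i).
rng = np.random.default_rng(0)
f = np.exp                        # a convex function
for _ in range(100):
    x = rng.normal(size=5)
    w = rng.random(5)
    w /= w.sum()                  # convex weights
    assert f(np.dot(w, x)) <= np.dot(w, f(x)) + 1e-12
```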
Geometrise!
It worked well for Newton in his Principia, when he didn't think that mathematicians would swallow the results he had found using calculus.
For Lie, when he considered PDEs and their solutions.
It worked well for Minkowski when he geometrised special relativity, and this was helpful for Einstein's work on GR; although Einstein said that he didn't think of his theory as geometrical, he thought of it physically, as a unification of inertia and gravity.
Also for Noether, when she left a note about how Betti numbers were better understood as groups, and also lectured on them.
It also worked well for Zariski and Grothendieck when they geometrised lots of number theory.
Also Mechanise!
It worked for Archimedes when calculating various volumes.
It also worked for Witten, when Atiyah asked him to discover a physical understanding of the HOMFLY knot invariant.
Taylor expansion. Much of the classical theory of statistics (and some of its modern extensions) revolves around performing a second-order Taylor expansion of the likelihood.
Proof "tricks" that are routinely used by many:
Induction
Contradiction
Computational and proof "tricks":
Interchanging the order of summation/integration
Counting the same thing in multiple ways
Looking for patterns (compute special cases, etc.)
Less applicable to as many problems, but still applicable to a wide range of problems in fields like Computer Science, we have the Repertoire Method.
More specialized in mathematics, there are also various methods related to exponential sums, e.g., van der Corput's, Vinogradov's, etc.
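A small illustration of mine (not from the answer) combining two of the listed tricks, interchanging the order of summation and counting the same thing in two ways: counting lattice points under a hyperbola by products or by first factors gives $\sum_{n\le N} d(n)=\sum_{k\le N}\lfloor N/k\rfloor$.

```python
# Counting the same thing in two ways / interchanging a double sum:
# pairs (k, m) with k*m <= N, grouped by the product n = k*m, give
# sum_{n<=N} d(n); grouped by the factor k, they give sum_{k<=N} floor(N/k).
def d(n):
    """Number of divisors of n (naive)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

N = 200
assert sum(d(n) for n in range(1, N + 1)) == sum(N // k for k in range(1, N + 1))
```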
A closed discrete subset of a compact space is finite!
Closed discrete subset...
Oh, you are right! Thanks :)
The trivial cohomology box trick
When trying to solve a problem, prove that:
-- the obstruction to the existence of a solution lives in a cohomology box
(a cohomology space, or group, or set),
-- prove that the box is empty (more precisely: is trivial).
Let us give a few examples:
(1) Mackey obstruction for extending a projective group representation to a linear representation.
(2) Galois descent in algebraic geometry.
(3) Use of $H^1$ cohomology group to prove that a sequence of modules (or sheaves) is exact.
(4) Use of de Rham cohomology to prove that a closed differential form is locally exact (which is actually Example (3)).
(5) Use of ${\rm Ext}^1$ group to prove that a sequence of modules is split.
And so on ...
Let $\mathcal{M}$ be the set of all mathematicians of all times. When you write:
Assuming Rota is correct, what are the few tricks that mathematicians use repeatedly,
it seems that you interpreted Rota's words as follows:
There is a set of tricks $\mathcal{T}$, with $|\mathcal{T}|\ll 10^{10}$, such that every $m\in\mathcal{M}$ uses only tricks from $\mathcal{T}$,
when in fact he meant:
For every $m\in\mathcal{M}$, there is a set of tricks $\mathcal{T}_m$, with $|\mathcal{T}_m|\ll 10^{10}$, such that $m$ uses only tricks from $\mathcal{T}_m$.
Therefore, you should first specify $m$ to get a description of $\mathcal{T}_m$. Most of the posted answers address indeed this kind of question after having selected a suitable subset $S\subset \mathcal{M}$.
|
2025-03-21T14:48:31.275711
| 2020-06-15T14:47:14 |
363120
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alec Rhea",
"Sam Hopkins",
"Vepir",
"domotorp",
"https://mathoverflow.net/users/25028",
"https://mathoverflow.net/users/88524",
"https://mathoverflow.net/users/92164",
"https://mathoverflow.net/users/955"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630213",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363120"
}
|
Stack Exchange
|
Players alternate moving a $\{\swarrow,\uparrow,\rightarrow\}$ piece on a chessboard
Edit $4.$ $-$ Proposing to reopen the question (the related competition should be over by now).
Edit $3.$ $-$ I have just found out that the linked competition (see the "Edit $1$.") is still ongoing. Please close and hide this question and its counterpart (see the "Remark."), until the competition is over.
Edit $2$. $-$ I've added strategies for solving some simple squares (tiles).
Edit $1$. $-$ This problem is equivalent to problem $20.\space b)$ from Завдання ХХIIІ ТЮМ (2020 р.) (problems of the XXIII Tournament of Young Mathematicians, 2020), which talks about the divisors of $10^{2k}$: the legal moves $\{\div 10, \times 2,\times 5\}$ take the current divisor to the next one, and you can't revisit a divisor. But it appears they did not publish the solutions. $-$ Thank you Witold for the reference.
Remark. This problem is equivalent to the even case of the exponent $n=2k$ of the following problem: Winning strategy in a game with the positive divisors of $10^n$ from MSE. (This is a partial cross-post.)
The $\{\swarrow,\uparrow,\rightarrow\}$ piece game
Consider an odd sized chessboard $(2k+1)\times(2k+1)$ with WLOG bottom
left corner square at $(0,0)$ and top right corner square at
$(2k,2k)$. A piece whose allowed moves are $\{(-1,-1),(0,+1),(+1,0)\}$
is placed on one of the squares.
Two players then alternate moving the piece, such that the piece never
stands on the same square twice. (The piece can't revisit the
squares.) The player that can't move the piece, loses. (The player
that last moved, wins.)
On which starting squares will the first player have a winning
strategy?
I have brute-forced the game for boards of sizes $(2k+1)=3,5,7,9,11$ with C++ (run it on repl.it). If the second player has a winning strategy, the square is colored blue. Otherwise, the first player has a winning strategy and the square is colored green.
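The brute force is easy to reproduce. Here is a minimal Python sketch of such a solver (the OP used C++; the function name and structure below are mine), together with the observed 3×3 pattern:

```python
def first_player_wins(start, size):
    """True iff the player to move first wins with the piece starting on
    `start` in a `size` x `size` board.  Moves are (-1,-1), (0,+1), (+1,0);
    no square may be visited twice; a player who cannot move loses.
    Plain depth-first search -- fine for 3x3 and 5x5 boards (the post's
    C++ version goes up to 11x11)."""
    moves = ((-1, -1), (0, 1), (1, 0))

    def wins(pos, visited):
        x, y = pos
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in visited):
                if not wins(nxt, visited | {nxt}):
                    return True   # this move leaves the opponent losing
        return False              # every move (if any) lets the opponent win

    return wins(start, frozenset([start]))

# The observed pattern on the 3x3 board: the second player wins exactly on
# the squares with both coordinates even (here, the four corners).
for sx in range(3):
    for sy in range(3):
        assert first_player_wins((sx, sy), 3) != (sx % 2 == 0 and sy % 2 == 0)
```

The same search should also reproduce the 5×5 table, just more slowly; beyond that one would memoize or port to C++ as the OP did.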
It seems that if the piece starts on a square $(x,y)$ where $x,y$ are both even, then the second player can always force a win. Can we prove this?
Otherwise, if $x,y$ are not both even, then it seems the first/green player can force a win, unless the square is one of the "exceptions". (Additional blue squares.)
WLOG due to symmetry we list only the exceptions below the main diagonal:
If $n\le2$ (writing $n=2k$, so the board is $(n+1)\times(n+1)$), there are no exceptions.
If $n=3$, the exceptions below the diagonal are $(4,3),(6,3)$.
If $n=4$, the exceptions below the diagonal are $(8,3),(5,4),(8,5)$.
If $n=5$, the exceptions below the diagonal are $(6,1),(7,4),(6,5),(10,5),(7,6)$.
But what will be the exceptions for $n\ge 6$? I do not see a pattern.
I do not know how to solve all squares in general, but I can solve specific examples.
Let $n=2k$ and consider some $(n+1)\times(n+1)$ chessboard.
$i)$ Solving the corners
It is easy to show that if the piece starts on any of the four corners $(0,0),(n,0),(0,n),(n,n)$ then the second player has a winning strategy.
Starting on $(0,0)$, the second player can keep returning to the main diagonal until the first player can no longer move. ("The main diagonal climber".)
Starting on $(n,n)$, the first player is forced to move diagonally. Then, the second player can keep forcing the diagonal move by repeating either the $(+1,0)$ or the $(0,+1)$ move until the first player can no longer move. ("The wall crawler".)
Starting on $(0,n)$ we have a forced $(+1,0)$ move for the first player. This can be forced again and again, by responding with the $(-1,-1)$ move, until we reach $(0,0)$. Now the moves to $(1,0),(2,0)$ are forced. Finally, the second player can keep returning to the corresponding diagonal until the piece reaches $(n,n)$ where we apply "The wall crawler".
Strategy for starting on $(n,0)$ is symmetric to the previous corner $(0,n)$.
$ii)$ Extending the solutions around the $(0,0)$ corner
It can be shown that $(0,0),(2,0),(0,2),(2,2)$ are always a win for the second player, and that $(1,0),(0,1),(1,1),(2,1),(1,2)$ are always a win for the first player.
When starting on $(0,2)$ or $(2,0)$, the second player can stay on the diagonal to force the piece to reach $(n,n)$ where they apply "The wall crawler" to beat the first player. Consequently, if we start on $(0,1)$ or $(1,0)$ then the first player can move to $(0,2)$ or $(2,0)$ respectively, and apply the same strategy to beat the second player.
When starting on $(2,2)$, the first player must use the diagonal move. (Otherwise, the second player can apply "The main diagonal climber".) But, then the second player moves to $(0,0)$ and forces the first player to move to either $(1,0)$ or $(0,1)$. This is losing for the first player because then the second player moves to $(2,0)$ or $(0,2)$ respectively, and applies the previous winning strategy.
Consequently (to the previous strategy), we have that $(1,1)$ is a win for the first player.
Consequently (to the previous bullet points), we have that the $(1,2)$ and $(2,1)$ are wins for the first player if they decide to move to the $(2,2)$ square.
It looks like extending these strategies to $(4,4)$ is not as simple as the previous extension to $(2,2)$, because the $(3,4),(4,3)$ squares depend on $(n+1)$ (the chessboard dimension) itself. These two squares appear to be a win for the first player, unless the dimension is $(2k+1)=7$ where they are a win for the second player. (According to my brute force solutions.)
That is, the general strategy needs to account for the observed "exceptions".
Remark. Although the question is about the odd sizes of the chessboard, the strategies mentioned above work for both the odd and the even dimensions of the chessboard. If we want to extend these strategies further (to more squares), we do need to also account for the parity of the dimension. (According to my brute force solutions.)
A similar game in which fractal-like patterns emerge was discussed by Jordan Ellenberg in his blog here: https://quomodocumque.wordpress.com/2019/10/15/the-quarter-circle-game/
Interesting question; inb4 Joel answers this question for boards of size $(2\alpha+1)\times(2\alpha+1)$ for $\alpha\in O_n$.
If you anyhow have the code, do you get a similar pattern for $(2m+1)\times (2n+1)$ size boards? That might help to prove things by induction. Also, what happens for other shapes, like if you remove a 'French notation shaped Young diagram' from the bottom-left where every row/column is doubled? Seems to me that the patterns remain similar.
Problems of this contest are given to the school students for long-term work, and work is in progress. The final stage of the competition is planned for October 2020. Please remove the solution of this problem from the site (or hide it until November 2020).
There is still no full solution. I will vote to close both questions (here and on MSE), until the competition is over. Thank you for letting me know that the competition is still ongoing. They should not have posted it on MSE in the first place (which was more than a month ago!)
November is over and the question has been reopened. If someone solved the corresponding problem (or if a solution was given out), I am still interested in the complete solution.
|
2025-03-21T14:48:31.276238
| 2020-06-15T15:26:43 |
363129
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"A. Maarefparvar",
"Kimball",
"https://mathoverflow.net/users/6518",
"https://mathoverflow.net/users/98582"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630214",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363129"
}
|
Stack Exchange
|
prime decomposition in even dihedral extensions
Let $L/K$ be a finite extension of number fields of degree $n$, with $n$ an even integer, such that the normal closure of $L$ over $K$ has Galois group isomorphic to $D_n$, the dihedral group of order $2n$. Is there any result about the decomposition form of a prime $\mathfrak{p}$ of $K$ in $L$?
Can you be a little more specific as to what you're looking for? Do you understand the general situation for relating splitting in $L/K$ to splitting in the Galois closure? If not, I'd say this is more appropriate for MathStackExchange.
Many thanks for your response. I'm looking for an explicit decomposition form in "even dihedral extensions L/K" (or in the Galois closure of L over K) as in Cohen's book (Advanced Topics in Computational Number Theory, Proposition 10.1.26) the decomposition form of primes in a "prime order" dihedral extensions is formulated.
|
2025-03-21T14:48:31.276597
| 2020-06-15T15:30:55 |
363130
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Daniel Robert-Nicoud",
"Sam Gunningham",
"https://mathoverflow.net/users/15629",
"https://mathoverflow.net/users/44134",
"https://mathoverflow.net/users/7762",
"paul garrett"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630215",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363130"
}
|
Stack Exchange
|
When is the dual of a limit the same as the colimit of the duals?
We all know that the dual of the colimit of a diagram in the category of chain complexes (and similar categories) is the limit of the dual diagram. This follows immediately from the general fact that the $\hom$ functor sends colimits in the first slot to limits. I am confronted with a situation where I would like the opposite to be true, which sparked my interest in the most general context where this would happen.
Consider a diagram $D$ in the full subcategory of the category of chain complexes given by chain complexes "of finite type", by which I will mean chain complexes that are bounded below (or: bounded above) in degree and finite dimensional in every degree. In particular, these chain complexes have the property that they are isomorphic to their double duals via the canonical inclusion $v\mapsto\langle-, v\rangle$.
Suppose that this diagram $D$ has a limit $\lim D$ in the category of chain complexes and that this limit is in the full subcategory of chain complexes of finite type. Examples would be finite products (trivial in what I want to do) or kernels (which interest me much more).
Then, the dual of the limit of the diagram is the colimit of the dual of the diagram. Indeed, we can consider the colimit $\operatorname{colim}D^\vee$ of the dual diagram $D^\vee$ and take its dual, which gives us the limit of the double dual diagram. But
$$\left(\operatorname{colim}D^\vee\right)^\vee\cong\lim D^{\vee\vee}\cong\lim D$$
since $D$ is in the subcategory of finite type. Notice that this in particular implies that $\operatorname{colim}D^\vee$ is of finite type (as taking duals can only increase dimensions). Then
$$\left(\lim D\right)^\vee\cong\left(\operatorname{colim}D^\vee\right)^{\vee\vee}\cong \operatorname{colim}D^\vee.$$
Is there a nice and general category theoretical explanation for this phenomenon? What are (reasonably) general situations where something like this occurs, and could someone provide a reference?
Also more generally: when are duals of limits the colimit of the duals?
Perhaps I am missing something, but is this phenomenon not just happening because the dual functor gives an equivalence of categories $C_{ft} \simeq C_{ft}^{op}$? So it takes (co)limits in $C_{ft}$ to (co)limits in $C_{ft}^{op}$.
@SamGunningham The point is that I am not taking the limits and colimits in the category of finite type chain complexes but in the category of all chain complexes instead and asking for $\lim D$ to have finite type. I'll try to make this clearer in the OP.
Also a question would be what are possible good definitions of "duality" and "objects of finite type" in other categories.
@SamGunningham Of course, limits in $C_{ft}$ are the same as limits in $C$ (when they exist). But what happens in more general cases? Is there a clean explanation of this phenomenon? (Am I doing something wrong? Am I missing something obvious?)
Not an answer in the context you want, but, in the category of locally convex topological (complex) vector spaces, I think it is not always true that the dual of a limit is the colimit of the duals (even just as sets or as vector spaces without topologies). Namely, the (only) proof that I know seems to require that the limitands in the limit be Banach spaces. I do not have a counter-example to prove the necessity of this condition, because I've not needed a more general statement (yet?) Anyway, it does not seem true for "general" reasons.
@paulgarrett Thanks, that's an interesting example :)
|
2025-03-21T14:48:31.276873
| 2020-06-15T15:31:03 |
363131
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"AG learner",
"Zach Teitler",
"https://mathoverflow.net/users/108274",
"https://mathoverflow.net/users/74322",
"https://mathoverflow.net/users/88133",
"user267839"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630216",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363131"
}
|
Stack Exchange
|
Saturated ideals in computational algebra
Let $R$ a commutative ring with one and $I, J \triangleleft R$ two ideals.
The saturated ideal $I^{sat}_J$ with respect to $J$ is the ideal
$$(I : J^\infty )= \cup_{n \geq 1} (I:J^n)$$
where $(I:J^n)= \{r \in R \mid rJ^n \subset I \}$. Fine.
What intuition should one have about this construction,
and why is it so fruitful? Is there any way to understand what happens to an ideal after saturation? On the geometric side it is known that
$$V((I : J^\infty )) = \overline{V(I) \setminus V(J)} \subset
\operatorname{Spec} R,$$ where the bar denotes the closure with respect to the Zariski topology on
$\operatorname{Spec} R$. Is this the only geometric way one should think about these ideals?
If we assume that $I$ is saturated with respect to $J$, i.e.
$(I : J^\infty )=I$, this leads to the odd implication
$V(I)= \overline{V(I) \setminus V(J)}$. What does this mean, and what do such ideals look like? Is there
any "picture" one should have in mind? How do they behave with respect to localizations and taking radicals?
Another, as the title suggests more important, facet of my question: what advantages do saturated
ideals have over non-saturated ones from the viewpoint of
computational algebra, i.e., when one performs concrete computations
of, e.g., radicals or minimal generating systems of ideals in quotient rings
$k[x_1,x_2,...,x_n]/I$? Do saturated ideals have nice features from this point
of view?
For example in $k[x,y]$ with $I=(x^m,y)$ and $J=(x)$, do you know what are $(I:J^n)$ and $(I:J^\infty)$? I apologize if this is too basic, but just in case you haven’t done examples like these, they might help.
@ZachTeitler: if I am not wrong then $(I:J^\infty)$ and $(I:J^n)$ for $n \ge m$ is $(y)$ and $(I:J^n)$ for $n < m$ is $(x^{m-n},y)$. Geometrically this reminds me also of strict transforms in the theory of blow ups where $J$ is the ideal defining the closed locus where the blow up isn't an isomorphism.
do you know if and why the saturated ideals are interesting for people working with computer algebra systems (eg Macaulay2) mainly interested on fast implementations of algorithms useful for computation of interesting ring/ ideal properties?
One situation is to solve a polynomial system in affine space. You can complete to projective space, but there might be unwanted solutions at infinity. So remove them by saturating wrt the hyperplane at infinity. More generally instead of projective space, maybe a different completion: eg, maybe your affine space is a product (the polynomials are bihomogeneous or whatever), so you decide to complete to a product of projective spaces. Or your original system was on some other affine variety, not necessarily affine space.
@user267839 No, the satuation is the entire ring $k[x,y]$ since $1\cdot x^m\in (x^m,y)$.
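Following up on the comments: for a principal $J=(g)$ the saturation can be computed by the standard elimination trick $(I:g^\infty)=\bigl(I+(1-tg)\bigr)\cap k[x_1,\dots,x_n]$. A small sketch of mine in SymPy (illustrative only; systems like Macaulay2 or Singular have built-in, far more efficient saturation commands):

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

def saturate(I_gens, g):
    """(I : g^infinity) for principal J = (g) in k[x, y], via the
    elimination trick: adjoin 1 - t*g, compute a lex Groebner basis with
    t largest, and keep the basis elements not involving t."""
    G = groebner(list(I_gens) + [1 - t * g], t, x, y, order='lex')
    return [p for p in G.exprs if not p.has(t)]

# The example from the comments: I = (x**m, y), J = (x).  Since x**m lies
# in I, the saturation is the unit ideal, as the last comment points out.
assert saturate([x**3, y], x) == [1]

# A proper example: I = (x*y), J = (x) saturates to (y).
assert saturate([x*y], x) == [y]
```

For non-principal $J=(g_1,\dots,g_r)$ one can use the standard identity $(I:J^\infty)=\bigcap_i (I:g_i^\infty)$.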
|
2025-03-21T14:48:31.277102
| 2020-06-15T15:56:26 |
363134
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Invariance",
"Lev Soukhanov",
"Will Sawin",
"https://mathoverflow.net/users/141609",
"https://mathoverflow.net/users/18060",
"https://mathoverflow.net/users/33286"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630217",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363134"
}
|
Stack Exchange
|
Can logarithmic connection on holomorphic vector bundle induce logarithmic connection on dual bundle?
Let $(X,\omega)$ be a compact Kähler manifold of dimension $n$, and let $D=\sum_{i=1}^r D_i$ be a simple normal crossing divisor on it, i.e., a divisor with smooth components $D_i$ intersecting each other transversally in $X$.
Let $\mathcal{V}$ be a locally free coherent sheaf on $X$ and let
$$\nabla:\mathcal{V}\to \Omega^1_X(\log D)\otimes\mathcal{V}$$ be a $\mathbb{C}$-linear map satisfying
\begin{align}\nabla(f\cdot e)=f\cdot\nabla e+df\otimes e.\end{align}
One defines
$$\nabla_a:\Omega^a_X(\log D)\otimes \mathcal{V}\to \Omega^{a+1}_X(\log D)\otimes \mathcal{V}$$
by the rule
$$\nabla_a (\omega\otimes e)=d\omega\otimes e+(-1)^a \omega\wedge \nabla e.$$
We assume that $\nabla_{a+1}\circ\nabla_a=0$ for all $a$. Such $\nabla$ will be called an integrable logarithmic connection along $D$, or just a connection.
My question is: given a connection $\nabla$ on $\mathcal V$, does it induce a connection on its dual bundle $\mathcal V^*$?
Here is my thought:
Locally, $\nabla$ can be written in the form $\nabla=d+\omega$, where $\omega$ is a holomorphic section of $\Omega_{X}^1(\log D)\otimes\text{End}(\mathcal{V})$. For $\mathcal{V}$ and its dual bundle $\mathcal{V}^*$, the dual pairing
$$\langle\,,\,\rangle:\mathcal{V}_x^*\times\mathcal{V}_x\longrightarrow\mathbb{C}$$ induces a dual pairing $$\langle\,,\,\rangle:A^0(\mathcal{V}^*)\times A^0(\mathcal{V})\rightarrow A^0.$$
Given a connection $\nabla$ in $\mathcal V$, we define a connection, also denoted by $\nabla$, in $\mathcal V^{*}$ by the following formula:
$$
d\langle \eta, \xi\rangle=\langle \nabla \eta, \xi\rangle+\langle\eta, \nabla \xi\rangle \text { for } \xi \in H^{0}(\mathcal V),\ \eta \in H^{0}\left(\mathcal V^{*}\right).
$$
Given a local frame field $s=\left(s_{1}, \ldots, s_{r}\right)$ of $\mathcal V$ over an open set $U,$ let $t=$ $\left(t^{1}, \cdots, t^{r}\right)$ be the local frame field of $\mathcal V^{*}$ dual to $s$, so that
$$
\left\langle t^{i}, s_ j\right\rangle=\delta^{i}_j, \text { or }\left\langle t, s\right\rangle=I_{r},
$$
where $s$ is considered as a row vector and $t$ as a column vector. If $\omega=(\omega_{j}^{i})$ denotes the connection form of $\nabla$ with respect to $s$, so that
$$
\nabla s_{j}=\sum_i s_{i} \omega_{j}^{i} \qquad \text{or} \quad \nabla s=s \omega,
$$
then
$$
\nabla t^{i}=-\sum_j \omega_{j}^{i}\, t^{j} \qquad \text{or} \quad \nabla t= -\omega\, t.
$$This follows from
$$
0=d \delta_{j}^{i}=\left\langle \nabla t^{i}, s_{j}\right\rangle+\left\langle t^{i}, \nabla s_{j}\right\rangle=\left\langle \nabla t^{i}, s_{j}\right\rangle+\omega_{j}^{i}.
$$
In short, $\omega_{\mathcal V^*}=-\omega_{\mathcal V}$.
Am I right? Any advice and suggestion will be appreciated. Thanks a lot.
yes, the dual meromorphic connection is indeed logarithmic, which is readily seen from the formula for the dual connection in the local coordinates
Dear @LevSoukhanov, thanks for your comment, so my method is right?
Your method looks right to me. You may want to check that your "product rule" formula holds for every pair of sections, and not just your frame fields, so that you know the dual does not depend on the choice of basis. On the other hand if you already know a connection satisfying that formula exists then you don't need to do this.
@WillSawin, thanks for your comment!
Yes your method is correct.
Dear @LevSoukhanov, I'd appreciate it if you can see my another recent post and give me some suggestion, https://mathoverflow.net/questions/363202/can-logarithmic-connection-operate-on-currents ,thanks again.
Diophantine equation that has infinitely many positive integer solutions
Let us consider a sequence of continuous functions $g_{q}:\mathbb{R}^2\to \mathbb{R}^2$. Let $(A_{q})_{q\geq 1}$ be a sequence of compact sets in $\mathbb{R}^2$. Assume that each function $g_{q}$ is topologically mixing in $A_{q}$ for all $q\geq 1$, i.e., for all open subsets $U,V$ of $\mathbb{R}^2$ such that $U\cap A_{q}$ and $V\cap A_{q}$ are non-empty, there exists $k_0=k_0(q,U,V)$ such that for every $k\geq k_0$ the set $g_{q}^{k}(U)\cap V\cap A_{q}$ is non-empty.
My problem is about finding a common set $A$ in which all the functions $g_{q}$ are topologically mixing. My approach, based on several techniques (symbolic dynamics), leads to this Diophantine equation:
$$16(n+1)^2q^8+16(n+1)^2q^6+1=m^2$$
and the main problem is equivalent to the fact that the above Diophantine equation has infinitely many positive integer solutions $q,n,m$. So, the question is how one can prove that the above Diophantine equation has infinitely many positive integer solutions $q,n,m$.
It is just Pell's equation $m^2-Ny^2=1$, for $N=16q^6(q^2+1)$ and $y=n+1$. Thus even for fixed $q$ it has infinitely many solutions; that was proved by Lagrange, and you may find the proof in many textbooks.
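To see this concretely, here is a short sketch (not part of the answer): for $q=1$ we have $N=32$; a brute-force search finds the fundamental solution of $m^2-32y^2=1$, and the standard Pell recurrence then produces further solutions, each of which solves the original equation with $n=y-1$.

```python
# Sketch (not from the answer): solutions of m^2 - N*y^2 = 1 with
# N = 16*q^6*(q^2 + 1); here q = 1, so N = 32.  Setting y = n + 1
# recovers solutions of 16(n+1)^2 q^8 + 16(n+1)^2 q^6 + 1 = m^2.
import math

q = 1
N = 16 * q**6 * (q**2 + 1)

# Brute-force the fundamental solution (m1, y1).
y = 1
while True:
    m2 = N * y * y + 1
    m = math.isqrt(m2)
    if m * m == m2:
        m1, y1 = m, y
        break
    y += 1

# Pell recurrence (m, y) -> (m1*m + N*y1*y, m1*y + y1*m) yields
# infinitely many further solutions.
sols = [(m1, y1)]
for _ in range(3):
    m, y = sols[-1]
    sols.append((m1 * m + N * y1 * y, m1 * y + y1 * m))

for m, y in sols:
    n = y - 1
    assert 16 * (n + 1)**2 * q**8 + 16 * (n + 1)**2 * q**6 + 1 == m * m
```

For $q=1$ the fundamental solution is $(m,y)=(17,3)$, i.e. $n=2$: indeed $16\cdot 9+16\cdot 9+1=289=17^2$.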
This is an amazing link between dynamical systems and number theory.
Your curve can be written as $Y^2=X^4+X^3-2,$ where $Y=4n$ and $X=q^2.$ This Diophantine equation satisfies Runge's condition, so this is relatively easy to handle and one obtains that there are only finitely many integral solutions (see Poulakis-Quartic). You may also consider it as a genus 1 curve and there are techniques to determine all integral points on such curves (see Tzanakis-Quartic).
To make it a bit more explicit, let $P(X)=X^4+X^3-2, P_1(X)=4X^2+2X$ and $P_2(X)=4X^2+2X-1.$ Here we get that $16P(X)-P_1(X)^2=-4X^2-32$ and $16P(X)-P_2(X)^2=4X^2 + 4X - 33.$ Hence $$(4X^2+2X-1)^2<16P(X)=(4Y)^2<(4X^2+2X)^2$$ if $X\notin [-4..3].$ That is we have a contradiction, since $(4Y)^2$ is supposed to be between two consecutive squares. It remains to deal with the values $X\in [-4..3].$ The only solution is given by $$(X,Y)=(1,0).$$ Thus $n=0$ and $q=\pm 1$ (you look for positive solutions only, so $q=1$ remains). That was the Runge approach.
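The finitely many residual cases $X\in[-4..3]$ of the Runge argument can be checked mechanically; a quick sketch (not part of the answer):

```python
# Check the residual cases X in [-4..3] of the Runge argument:
# Y^2 = X^4 + X^3 - 2 should have an integer solution only at (X, Y) = (1, 0).
import math

solutions = []
for X in range(-4, 4):
    P = X**4 + X**3 - 2
    if P >= 0:
        Y = math.isqrt(P)
        if Y * Y == P:
            solutions.append((X, Y))

assert solutions == [(1, 0)]
```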
The elliptic curve part can be done by the program package Magma (see Magma), you simply type
IntegralQuarticPoints([1,1,0,0,-2],[1,0]);
and you get
[
[ 1, 0 ]
].
Here $[1,1,0,0,-2]$ comes from the degree 4 polynomial, these are the coefficients and $[1,0]$ is a point on the curve.
Your polynomial is $q^8+q^6-2$ and it can be written as $X^4+X^3-2$ with $X=q^2.$
@Safwane Can you provide a quartic Diophantine equation for which every prime is a solution? :P The point here is that any integer solution to your equation would ALSO provide a solution to the equation that castor treats here and shows to have only a finite number of solutions.
Sorry, the true equation is: $$16(n+1)^2q^8+16(n+1)^2q^6+1=m^2$$
Shellability and order filters in the partition lattice
Choose $n\in\mathbb N$. Let $B$ be a non-empty subset of $[n]:=\{1,2,\dots,n\}$. Consider the set of partitions of the set $[n]$ with exactly $|B|$ parts such that each part has exactly one member of $B$. Say this set has exactly $t$ elements.
Is there a way to list the elements of this set ($\sigma_1,\dots,\sigma_t$) so that the following holds?
If $i,k\in\{1,2,\dots,t\}$ and $i<k$ and if there exists $\tau$ in the
partition lattice such that $\sigma_i,\sigma_k<\tau$, then there
exists $j\in\{1,2,\dots,k-1\}$ and there exists $\chi$ in the
partition lattice such that $\sigma_j,\sigma_k\lessdot \chi\le\tau$, where
"$\lessdot$" denotes the covering relation.
Recall that if $\alpha$ and $\beta$ are partitions, then $\alpha\lessdot\beta$ if and only if $\beta$ is obtained from $\alpha$ by joining together two parts of $\alpha$, and $\alpha\le\beta$ if every part of $\alpha$ is a subset of a part of $\beta$.
Generalization of Tucker circle, Conway circle and van Lamoen circle
Theorem 9.1 in this paper, stated below, is a generalization of the Tucker circle. Tucker circles generalize many circles, such as the cosine circle, circumcircle, first Lemoine circle, Gallatly circle, Kenmotu circle, Taylor circle, and Apollonius circle.
The Conway circle and the Floor van Lamoen circle are also special cases of Tucker circles.
I also give some other nice special cases of Theorem 9.1, for example Theorems 9.2, 9.3, 9.6, and 9.7 of that paper.
Theorem 9.1. Let $ABC$ be a triangle, let points $D$, $G$ be chosen on side $AB$, points $I$, $F$ be chosen on side $BC$, points $E$, $H$ be chosen on side $CA$, and let $k$, $l$ be real numbers such that:
$\begin{cases} \angle EDA =kA+lB+(1-k-l)C\\ \angle FEC =(1-l)A+(k+l)B-kC\\ \angle GFB = (1-k-l)A+kB+lC\\ \angle HGA =-kA+(1-l)B+(k+l)C \\ \angle IHC=lA+(1-k-l)B+kC \end{cases}$
Then six points $D, E, F, G, H, I$ lie on a circle and $ \angle DIB = (k+l)A-kB+(1-l)C$
Converse of Theorem 9.1: Let $ABC$ be a triangle, let points $D$, $G$ be chosen on side $AB$, points $I$, $F$ be chosen on side $BC$, points $E$, $H$ be chosen on side $CA$, and let the six points $D, E, F, G, H, I$ lie on a circle. Then there exist two real numbers $k$, $l$ such that:
$\begin{cases} \angle EDA =kA+lB+(1-k-l)C\\ \angle FEC =(1-l)A+(k+l)B-kC\\ \angle GFB = (1-k-l)A+kB+lC\\ \angle HGA =-kA+(1-l)B+(k+l)C \\ \angle IHC=lA+(1-k-l)B+kC \\ \angle DIB = (k+l)A-kB+(1-l)C \end{cases}$
My question: Is the converse of Theorem 9.1 true?
See also: Carnot theorem
The converse is not true, for $\hat{A}=\hat{B}=\hat{C}$, as one can choose for example $\widehat{EDA}\neq 60°$...
@ToniMhax Your answer is exactly
But otherwise, the theorem seems to hold (to seek i guess), i may put a complete answer...
@ToniMhax Thank to You
For the converse, first take a triangle $GIE$, then define the side lines of $ABC$ by the angles formed at the vertices.
In notation $(\vec{IE},\vec{IG})=N, (\vec{GI},\vec{GE})=L$ and $(\vec{EG},\vec{EI})=M$.
Take line $(GG')$ such that $(\vec{GE},\vec{GG'})=s$,
line $(II')$ with $(\vec{IG},\vec{II'})=t$
and line $(EE')$ with
$(\vec{EI},\vec{EE'})=u$.
Finally $(GG')\cap (EE')=A$, $(GG')\cap (II')=B$, and $(II')\cap (EE')=C$. Also say $(\vec{AB},\vec{AC})=\hat{A}$, $(\vec{CA},\vec{CB})=\hat{C}$ and $(\vec{BC},\vec{BA})=\hat{B}$.
This is just the given figure but with directed angles in a more general way.
Assuming we have the points on the side segments of $ABC$ (there is a version where the points can be on the lines of the sides)
So $\begin{cases}\hat{A}=M+u-s\\\hat{B}=L+s-t\\\hat{C}=N+t-u\end{cases}$
Assuming the cocyclicity of the points
(and from the given figure) we want to prove that there exist $k,l$ so that $\begin{cases}s-u=k\hat{B}+l\hat{C}-(k+l)\hat{A}\\u-t=k\hat{A}+l\hat{B}-(k+l)\hat{C}\end{cases}$ for any $s,u,t$ and the given $\hat{A},\hat{B},\hat{C}$.
For that the linear system should be invertible; a direct calculation shows that this is the case when $\hat{A}^2+\hat{B}^2+\hat{C}^2\neq \hat{A}\hat{C}+\hat{B}\hat{C}+\hat{A}\hat{B}$
The rest is angle chasing of the given arcs and lines, for example in the figure $\begin{cases}\widehat{FEC}=N+M-C\\\widehat{HGA}=L+M-A\\\widehat{DIB}=L+N-B\end{cases}$
which are the given values.
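The determinant computation above can be sanity-checked numerically: the $2\times 2$ system in $(k,l)$ has determinant $(B-A)(B-C)-(C-A)(A-C)$, which expands to exactly $\hat{A}^2+\hat{B}^2+\hat{C}^2-\hat{A}\hat{B}-\hat{B}\hat{C}-\hat{C}\hat{A}$. A small sketch (not part of the answer):

```python
import random

# The 2x2 system in (k, l):
#   s - u = k*(B - A) + l*(C - A)
#   u - t = k*(A - C) + l*(B - C)
# Its determinant should equal A^2 + B^2 + C^2 - A*B - B*C - C*A.
random.seed(0)
for _ in range(100):
    A, B, C = (random.uniform(-10, 10) for _ in range(3))
    det = (B - A) * (B - C) - (C - A) * (A - C)
    rhs = A * A + B * B + C * C - A * B - B * C - C * A
    assert abs(det - rhs) < 1e-9
```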
When (i.e., under what additional conditions) is the converse theorem right?
@ĐàoThanhOai, the converse holds here if $\hat{A}^2+\hat{B}^2+\hat{C}^2\neq \hat{A}\hat{B}+\hat{B}\hat{C}+\hat{A}\hat{C}$, so when not all the angles are equal.
Thank you very much for your solution
How to define and compute the degree of congruence of two rigid polyhedra in same type with knowing vertex coordinates?
If I have two sets of points in 3-dimensional space, each set of points gives the coordinates of the vertices of a polyhedron. The two polyhedra have the same combinatorial type, so we don't need to consider their topology. Now I want to define and compute a degree of congruence of these two polyhedra such that the more congruent the two polyhedra are, the higher their degree of congruence, i.e., if one can be transformed into the other as closely as possible by a sequence of rotations, translations, and reflections (but not scalings), then they have a high degree of congruence.
For example, there are three tetrahedra $(A,B,C)$ with the coordinates:
$$A:(0,0,0),(10,0,0),(0,10,0),(0,0,10)$$
$$B:(0,0,0),(1,0,0),(0,1,0),(0,0,1)$$
$$C:(0,0,0),(10,0,0),(0,10,0),(0,0,9)$$
then:
$A$ and $B$ have low degree of congruence
$A$ and $C$ have high degree of congruence
Is there any mathematical theory could define and compute this degree of congruence?
By the way, we don't know the vertex correspondence between two polyhedra.
Here is an idea, though I have to make some assumptions.
Suppose you have two sets of points $p_1,...,p_n\in\Bbb R^d$ and $q_1,...,q_n\in\Bbb R^d$ (for example, the vertices of your polyhedra, but with a fixed order).
Assume that they are translated to be centered at the origin, i.e. $p_1+\cdots +p_n=0$, and respectively for the $q_i$, so that we can ignore translations.
In a first step you could compute the covariance matrices of both point clouds and compare them. That is
$$C_p:=\sum_{i=1}^n p_ip_i^\top,\quad C_q:=\sum_{i=1}^n q_i q_i^\top.$$
These are positive semi-definite matrices, and you can compare their lists of eigenvalues, say $\lambda_i^p$ and $\lambda_i^q$ for all $i\in\{1,...,d\}$, sorted in descending order. They tell you how unevenly these point clouds are distributed direction-wise.
The next step is to remove this unevenness from the point clouds. If we assume that the point clouds are full-dimensional (i.e. $\mathrm{span}(p_1,...,p_n)=\Bbb R^d$), then we can define
$$p_i':=C_p^{-1/2} p_i,\qquad q_i':=C_q^{-1/2} q_i.$$
Both point sets can now no longer be distinguished by translations or directional unevenness.
The last step is to consider the correlation matrix
$$C_{pq}:=\sum_{i=1}^n p_i'q_i^{\prime \top}.$$
You could e.g. compute $\delta:=\det(C_{pq})$.
This value lies between $-1$ and $1$.
We can use it as follows:
if $\delta=\pm1$, then the point clouds are just reorientations of each other, that is, there exists an orthogonal matrix $X\in\mathrm{O}(\Bbb R^d)$ with $\det(X)=\delta$ and $p_i=X q_i$ for all $i\in\{1,...,n\}$.
if $\delta=0$, then these point sets are as distinct as possible.
in general, the smaller the value of $|\delta|$, the more different these point sets are.
In the end you have to somehow use the numbers $\delta,\lambda_i^p,\lambda_i^q$ for $i\in\{1,...,d\}$ to quantify the difference between the point sets. I do not have a recipe for this. All I can tell you is that if $\lambda_i^p=\lambda_i^q$ for all $i\in\{1,...,d\}$ and if $\delta=\pm 1$, then these point sets are the same up to some (possibly orientation-reversing) orthogonal transformation.
This of course assumes that your point sets have a predefined order (which might be given by the isomorphism between your polyhedra).
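The recipe above can be sketched in code (a sketch, not from the answer; it assumes a known vertex correspondence, which is the caveat just mentioned, and the helper names are my own):

```python
import numpy as np

def inv_sqrt(C):
    # Inverse square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def congruence_data(p, q):
    # p, q: (n, d) arrays of corresponding points.
    p = p - p.mean(axis=0)                      # center both clouds
    q = q - q.mean(axis=0)
    Cp, Cq = p.T @ p, q.T @ q                   # covariance matrices
    lam_p = np.sort(np.linalg.eigvalsh(Cp))[::-1]
    lam_q = np.sort(np.linalg.eigvalsh(Cq))[::-1]
    pw, qw = p @ inv_sqrt(Cp), q @ inv_sqrt(Cq)  # remove directional unevenness
    delta = np.linalg.det(pw.T @ qw)             # correlation determinant
    return lam_p, lam_q, delta

# A rigidly rotated copy of a point cloud should give |delta| = 1
# and identical eigenvalue lists.
p = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [-1, -1, -1]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0.0, 0.0, 1.0]])
lam_p, lam_q, delta = congruence_data(p, p @ R.T)
assert np.allclose(lam_p, lam_q) and abs(abs(delta) - 1) < 1e-8
```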
There are two phrases that may help in your search:
Point-set
registration.
The link is to a (long) Wikipedia article, which includes "rigid
registration," which seems closest to your case.
Geometric shape matching. For example:
Alt, Helmut, and Leonidas J. Guibas. "Discrete geometric shapes: Matching, interpolation, and approximation." In Handbook of Computational Geometry, pp. 121-153. North-Holland, 2000. Handbook link.
In the early 1990's it was established that exact matching under rigid motions could be solved in polynomial time in the number of points $n$, but the algorithms were
impractically complicated. The recent emphasis has been on fast approximation algorithms.
Here is an algorithm specifically for convex polytopes under rigid motion, which
guarantees (under certain conditions)
achieving within $(1-\epsilon)$ of the optimal volume overlap,
with high probability:
Ahn, Hee-Kap, Siu-Wing Cheng, Hyuk Jun Kweon, and Juyoung Yon. "Overlap of convex polytopes under rigid motion." Computational Geometry 47, no. 1 (2014): 15-24. Journal link.
Honda-Tate theorem and prescribing roots of $L$-functions
I'm currently working with Artin $L$-functions in function field extensions (in one variable, over finite fields). I have heard about a theorem of Honda-Tate which vaguely states that a polynomial $P \in \mathbb Z[T]$ satisfying certain properties (Riemann hypothesis, functional equation, etc.) is the zeta function of such a function field.
I know very little about algebraic geometry, and when I look for Honda-Tate I only find texts about isogenies of abelian varieties, so it is of little use to me stated that way.
Would someone be kind enough to provide a statement of the Honda-Tate theorem in the form above, or give a reference to such a statement? My goal would be to prescribe (inverse) roots of Artin $L$-functions of a certain shape, say $\sqrt q$ with a given multiplicity, $\sqrt q e^{2i\pi/3}$ with a given multiplicity, and so on.
For abelian varieties you can get the desired characteristic polynomial of Frobenius on the nose (this is Honda's paper). Then use that every abelian variety is dominated by a Jacobian of a curve (e.g. take a smooth complete intersection curve inside $A$).
This gives a curve where the characteristic polynomial of Frobenius contains the given one (as a factor), but it's a very difficult problem to determine which exact polynomials occur. For example, it is only known in characteristic $2$ that $(1-t\sqrt{q})^{2g}$ occurs on a genus $g$ curve for every $g$ (this is due to van der Geer and van der Vlugt). (And even then I think it's only an asymptotic result, i.e. for $q = 2^r \to \infty$; for smaller $q$ you'd get some roots of unity that might be hard to control.)
Thank you for these references. So we cannot prescribe anything more than lower bounds on multiplicities, if I understood correctly?
In some sense, but you cannot really fix $g$. Given a $q$-Weil polynomial of degree $2g$, it shows up in some curve of genus $g' \gg g$. There might be an upper bound on $g'$ depending on some stuff, but I'm not entirely sure.
|
Simplicial matrices and the nerves of weak n-categories II, III, and IV
Duskin introduced his nerve functor (see the nLab or Kerodon) in the paper
Duskin, John W. Simplicial matrices and the nerves of weak n-categories I: nerves of bicategories. [Link].
While three other sequel papers were announced as "to appear", they never did. On the other hand, some papers (e.g. [1], [2]) cite drafts/preprints of the second part of Duskin's work. Is it available somewhere?
Well, an obligatory question: have you tried emailing Duskin himself?
@DmitriPavlov Not yet; I thought asking here first might be a good idea. (Would it be better in this kind of situation to try emailing the author before asking here?)
Considering that these papers were announced a long time ago and nothing like these papers appears in the list of papers actually published by Duskin, the most likely answer is that these papers were never written. And Duskin is much more likely to give a definitive answer in this case than anybody on MathOverflow.
I'll email Duskin soon then. Thanks, @Dmitri!
|
Universal property of induced representation
Let $H$ be a closed subgroup of the compact Lie group $G$. Let $E$ be a continuous representation of $H$. In the book "Representations of Compact Lie Groups" by Bröcker and tom Dieck, the induced representation of $E$ is defined as the vector space $iE$ of all continuous functions $f:G\to E$ satisfying $f(g\cdot h)=h^{-1}f(g)$ for all $g\in G$ and $h\in H$. They show that, as in the finite case, this construction satisfies the Frobenius reciprocity theorem.
Now I wonder whether this construction also satisfies the universal property that we know from the case of finite groups (or, more generally, finite index), i.e., my question is whether the following is true:
There exists an $H$-linear map $j:E\to iE$ such that for all
$H$-linear maps $g:E\to E'$ to a $G$-module $E'$ there is a unique
$G$-linear map $g':iE\to E'$ such that $g'\circ j=g$.
Moreover, is $g'$ continuous if $g$ is? If the answer is "No", is there a better notion of induced representation that makes this true? Or does it help when we restrict to unitary representations?
By unwinding the Frobenius reciprocity theorem, which says the functors of restriction and induction are adjoint, you’ll get exactly this sort of universal property. However, some care is needed - since this definition of induced representation imposes a continuity condition, it will only have the universal property among continuous representations.
In the case of finite groups, induction is both left and right adjoint to restriction. Is that true in general? In the book I am looking at, they only prove one of these two. In particular, I only get a natural map $iE\to E$ from that theorem. What is the natural map $E\to iE$?
Oh sorry, I was being careless! The definition you give is naturally the right adjoint to restriction (in the category of continuous representations). However, since the category of continuous representations of a compact Lie group is semisimple (i.e. there’s always a unitary structure), you can apply Frobenius reciprocity to the duals and then dualize back to get the other universal property. This isn’t as natural as the other statement, because it’s relying crucially on semisimplicity - for more general groups, the right and left adjoints are indeed sometimes different.
Maybe I am overlooking something but it seems to me that one needs that $iE$ is reflexive or something in that direction. Is that true?
You are writing a right adjoint to restriction so you have a natural $H$-module map
$$
iE\rightarrow E, \ (f:G\rightarrow E) \mapsto f(1) .
$$
To cook up a map in the opposite direction, you need to use the fact that the category of $H$-modules is semisimple and choose a splitting map.
Now you use the fact that the category of $G$-modules is semisimple. Because of this, $E\rightarrow iE$ gives your left adjoint "locally", for this particular $E$ only. This is the so-called SSC (solution set condition) in Freyd's theorem.
At this point you will need to work slightly harder. Essentially you will need to use Freyd's Theorem. You can choose $E\rightarrow iE$ for each simple module but your task is to extend it functorially to all modules. Each module is canonically a direct sum of simples
$$
V = \oplus_{S} \mathrm{Hom}(S,V) \otimes S
$$
but it does not help because it is a coproduct and you will need products. So it boils down to understanding limits in the category of continuous modules and whether restriction preserves them. My guess is that the left adjoint (that you are looking for) exists if and only if $H$ is of finite index in $G$ (this means $H$ is open, not just closed).
Here is a recent paper that I can find where a similar question has been treated. It has no answer to your question but has all the necessary techniques to attack it.
Your question can be rephrased as "When is the induction the same as coinduction?" This has appeared on MathOverflow before and fancy answer to your question can be found here: When are induction and coinduction of representations of Lie groups isomorphic? When they are compact? Semisimple?
See also Induction and Coinduction of Representations
A direct elementary proof could be perhaps gleaned from https://math.stackexchange.com/questions/225730/left-adjoint-and-right-adjoint-nakayama-isomorphism/226493#226493 as it mentions averaging over group which works equally well for compact Lie groups as it does for finite groups.
Sorry, but you are in the wrong category. What you say will work for finite-dimensional representations but not for continuous representations. To disprove it you just need a family of continuous representations $V_i$ of a compact group $G$ such that $\prod_i V_i$ is no longer continuous. Then the restriction to the trivial group does not preserve limits.
@BugsBunny You mean the averaging trick?
No, if only I knew what I meant :-)) Suppose there is a left adjoint to RES. Then RES must preserve limits. Consider restriction from $G$ to $1$. A product of vector spaces is a limit in the category of representations of the trivial group. Hence there ought to be a continuous action on any product of representations. This boils down to the definition of a continuous representation. A "standard" definition is a representation such that every representable function $G\rightarrow{\mathbb C}$ is continuous. With this definition there will be no continuous action.
|
Why does this construction give a (homotopy-invariant) suspension (resp. homotopy cofiber) in an arbitrary pointed model category?
In their text Foundations of Stable Homotopy Theory, Barnes and Roitzheim define the suspension of a cofibrant object X of a pointed model category to be the pushout of the diagram $*\leftarrow X\coprod X\to Cyl(X)$, where the second map is the structure map of the cylinder object. By contrast, there is a more manifestly homotopy-invariant definition of suspension given in e.g. Dwyer and Spalinski, which is the homotopy pushout of the diagram $*\leftarrow X\to *$. It is not clear to me why these definitions agree; if we don't assume properness, I don't even see why the first is homotopy-invariant! (If we assume the model category is proper, then the first diagram's pushout is equal to its homotopy pushout.) There is a similar issue with the cofiber, which they define for a cofibration of cofibrant objects $f:A\to X$ as the pushout of $*\leftarrow A\to X$: again, it is not clear why this is homotopy-invariant (with respect to maps between such $f$ in the comma category) unless the model category is proper. Can we drop the properness assumption and still get homotopy colimits or at least homotopy invariance? Even if so, why are the definitions of suspension equivalent?
Shouldn't the first of these diagrams be $\ast \amalg \ast \leftarrow X \amalg X \to \text{Cyl}(X)$?
@JohnKlein In a category with a zero object, we have $*\coprod *\cong *$, so the diagrams are naturally isomorphic.
Ah...I didn't notice that you were assuming a pointed model structure. I wrote my comment and answer without that assumption.
I changed my answer to reflect that we are in the pointed case.
if we don't assume properness, I don't even see why the first is homotopy-invariant!
The pushout of a diagram A←B→C in which all objects are cofibrant and one of the maps is a cofibration
is always its homotopy pushout in any model category,
see Proposition A.2.4.4 in Lurie's Higher Topos Theory.
This is the case for both of your examples, since the initial object is cofibrant.
An argument showing that the two models of suspension are equivalent will probably be based on something like the following:
Assertion: Suppose we are given a commutative diagram of the form
$\require{AMScd}$
\begin{CD}
\ast @<<< C @= C \\
@VVV @VVV @VV V \\
Y @<<< A @>g>> X \\
@| @VVV @VVV\\
Y @<<< A/C @>>h > X/C
\end{CD}
in which the vertical directions form cofibration sequences (when I write $A/C$, I mean $A \amalg_C \ast$, where $\ast$ is the zero object), and the maps $g$ and $h$ are cofibrations.
Then the map of pushouts
$$
Y \cup_A X \to Y \cup_{A/C} X/C
$$
is a weak equivalence, or better still, it is an isomorphism.
It seems to me that this is true by the assumption of properness, since we have a cofibration sequence given by the pushouts
$$
\ast\cup_C C \to Y \cup_A X \to Y \cup_{A/C} X/C
$$
in which the first term is isomorphic to $\ast$.
Let's call the first suspension $SX$ and the second one $\Sigma X$.
Given the assertion, we can show that the two models for suspension are weakly equivalent as follows:
Apply the assertion to the diagram
\begin{CD}
\ast @<<< \ast\amalg X @= X \\
@VVV @VVV @VVV \\
\ast @<<< X\amalg X @>g >> \text{Cyl}(X) \\
@| @VVV @VVV\\
\ast @<<< X @>>h > CX
\end{CD}
(where $CX = \text{Cyl}(X)/X$)
to get that the map
$$
SX\to \Sigma X
$$
is a weak equivalence.
If you're looking to learn more about homotopy colimits, I strongly recommend:
Dugger's Primer on Homotopy colimits
Shulman's Homotopy limits and colimits and enriched homotopy theory
Rehmeyer's 1997 master's thesis (under Mike Hopkins), "Homotopy Colimits"
Homotopy Limit Functors on Model Categories and Homotopical Categories by Dwyer, Hirschhorn, Kan, Smith
Riehl's book Categorical Homotopy Theory
I note that the first four predate Lurie's books, and the fifth works out many examples. The fact that the pushout and homotopy pushout agree for a span diagram when all objects are cofibrant and one leg is a cofibration (even without left properness) is 13.10 in Dugger's manuscript. A detailed treatment of the cofiber is in Rehmeyer's thesis. Shulman handles your other question, about why these two ways of computing the homotopy colimit agree (e.g., Section 5, drawing on Dwyer, Hirschhorn, Kan, Smith).
|
Inequality for difference of consecutive atom probabilities for binomial distribution
Edit: This post was originally two questions, the first of which has been answered, but a reference would still be appreciated if existent. The second question has been removed and migrated to its own post here.
I would not be particularly surprised if the inequalities I want are readily available in several standard texts. Unfortunately, these days all of my probability books are stuck in my office (and I'm stuck at home). So thanks for the help in advance.
Let $B_{n,p}$ denote the usual binomial random variable (i.e., the probability that it equals $k$ is given by ${n \choose k} p^k (1-p)^{n-k}$). I would like some references (or short proofs) for the following fact:
For all integers $n, k$, and all $0 < p < 1$, we have $\mathbb{P}(B_{n,p} = k) - \mathbb{P}(B_{n,p} = k+1) \leq \dfrac{100}{n p (1-p)}$
[I'd be happy if the number "100" is replaced by whatever universal constant is convenient.]
I was having trouble coming up with a particularly good proof of this, so that would be welcome. But ideally, I would prefer a reference if possible. Thanks!
(If curious, this claim could be proven by looking at the left-hand-side as a function of $k$, noting when it's increasing [e.g., by taking consecutive differences], and checking the value at this max. Unsurprisingly, this is maximized when $k$ is one standard-deviation above the mean [this corresponds to the inflection point in the normal distribution])
I second the suggestion by Iosif Pinelis: It is best to avoid stating multiple questions in one post.
Alright. I’ll move the second one to another post. I’d be happy for a reference for the first fact, by the way... That one feels like it’s already known.
Concerning your first question:
Let $p_k:=P(B_{n,p}=k)$. We have to show that
\begin{equation*}
p_k-p_{k+1}\ll\frac1{npq}, \tag{1}
\end{equation*}
where $q:=1-p$ and $a\ll b$ means that $a\le Cb$ for some universal real constant $C>0$.
Clearly, without loss of generality (wlog)
\begin{equation*}
1\ll npq.
\end{equation*}
Since $p_{k+1}=\frac{n-k}{k+1}\frac pq\,p_k$, we rewrite (1) as
\begin{equation*}
\frac{k+1-(n+1)p}{(k+1)q}\,p_k\ll\frac1{npq}. \tag{2}
\end{equation*}
It is now clear that wlog $k+1\ge(n+1)p$, so that $(k+1)q\ge npq$. Therefore and because $k+1-(n+1)p=k-np+q\le k-np+1$, it suffices to show that
\begin{equation*}
a_k:=(k-np)\,p_k\ll1. \tag{3}
\end{equation*}
So, wlog $k>np$.
For such $k$, it is easy to see that $a_{k+1}\ge a_k$ iff $k<k_*$, where $k_*$ is an integer such that $|k_*-np-\sqrt{npq}|\ll1$. So, the integer $k_*$ is a maximizer of $a_k$ in $k$. So,
wlog $|k-np-\sqrt{npq}|\ll1$ and hence
$$k-np\ll \sqrt{npq}.$$
Also, as is well known (see e.g. Proposition 2),
\begin{equation*}
p_k\ll\frac1{\sqrt{npq}}.
\end{equation*}
Now (3) immediately follows.
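As a sanity check, the bound in (1) and the location of the maximizer are easy to verify numerically; the following stdlib-only sketch (names are mine) tabulates $p_k-p_{k+1}$ for a sample $(n,p)$:

```python
import math

def binom_pmf(n, p, k):
    # P(B_{n,p} = k) computed from exact binomial coefficients
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.3
q = 1 - p
diffs = [binom_pmf(n, p, k) - binom_pmf(n, p, k + 1) for k in range(n)]
max_diff = max(diffs)
k_star = diffs.index(max_diff)

# The claimed bound holds with plenty of room to spare ...
assert max_diff <= 100 / (n * p * q)
# ... and the maximizer sits about one standard deviation above the mean.
assert abs(k_star - (n * p + math.sqrt(n * p * q))) < 3
```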
Alright, thanks. Can we get a similarly short proof of the second point in the original post?
@PatDevlin : I think the inequality in your second question can be proved (if it's true) in a straightforward (but likely tedious) way based on Stirling's formula. However, I don't see any reasons for this inequality to be known or to have a short and nice proof. Also, I think asking multiple questions in one post (especially such rather unrelated ones as in this case) should be avoided, for your own sake and for the sake of others. Therefore, I'd suggest you post your second question separately .
|
2025-03-21T14:48:31.279501
| 2020-06-15T18:37:27 |
363154
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"LSpice",
"https://mathoverflow.net/users/2383"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630227",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363154"
}
|
Stack Exchange
|
Conic sections are to cones as quadric surfaces are to what?
I just started my Calculus 3 class, which deals mainly with multi-variable Calculus. We went over the different types of 3-D surfaces, mainly Quadric Surfaces (ellipsoids, paraboloids, hyperboloids, etc.) which according to the textbook are "3D analogs of conic sections". I know for conic sections you slice cones with a plane to get ellipses, parabolas, hyperbolas, etc, so I was wondering what is the equivalent for the sliced cone when dealing with these kind of surfaces? How is such a body and the operation of "slicing" it described? And can this be generalized for higher dimensions?
I didn't quite know how to word the problem in a sentence or what field of math it's part of to do a quick google search so if anyone could point me in the right direction I would appreciate it.
Thanks
This is not a research-level mathematics question, but it's a really, really good question. I'm not voting to close, but, if it does get closed, please don't take it personally; it would definitely do well at MSE, which you may find a more suitable site for a while.
The thing that makes quadric surfaces "3D analogs of conic sections" is just that they are defined by a single equation of degree 2. It's not a particularly helpful characterization though, I would say. It strikes me more as something a pedagogue would say in a (poor) attempt to relate a new concept to one already known.
[One could see quadric surfaces as "slices" of a certain geometric object, analogously to conic sections, but only if you are only interested in them up to isomorphism (as algebraic varieties). Then a quadric surface can be regarded as a hyperplane section of the image $V \subset \mathbb{P}^9$ of the Veronese embedding $\mathbb{P}^3 \to \mathbb{P}^9$. Note that this is only partially analogous to the representation of a conic section (considered as a plane curve) as the intersection of cone and plane, since the conic section is related to the aforementioned intersection by a projective transformation, which is a lot stronger than saying they are isomorphic as algebraic varieties.]
|
2025-03-21T14:48:31.279660
| 2020-06-15T18:56:22 |
363157
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Evgeny Shinder",
"https://mathoverflow.net/users/108274",
"https://mathoverflow.net/users/111491",
"user267839"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630228",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363157"
}
|
Stack Exchange
|
Smoothness of Hilbert scheme of rational normal curves
I'm trying to solve Exercise 1.26 from the book "Moduli of Curves"
by Harris and Morrison on page 14:
Exercise (1.26) Determine the normal bundle to the rational normal
curve $C \subset \mathbb{P}^r =: X$ and show, by computing its $h^0$,
that the Hilbert scheme
parameterizing such curves is smooth at any point corresponding to
a rational normal curve.
The part I can't solve is how the dimension of global sections of the normal bundle, $h^0 = \dim_{\mathbb{C}} H^0(C, N_{C/X})$, implies that the Hilbert scheme $\mathcal{H}$ is smooth at $[C]$.
Since $C$ is a rational normal curve, $C \cong \mathbb{P}^1$, and the long exact cohomology sequence of the exact sequence $0\rightarrow \mathcal{T}_{C} \rightarrow \mathcal{T}_{X}\otimes\mathcal{O}_{C} \rightarrow \mathcal{N}_{C/X} \rightarrow 0$ gives
$$\operatorname{dim} H^0 (C,N_{C/X}) = \operatorname{dim} H^0 (C,T_X \otimes O_C) -3 = (r+1) \cdot \dim \ H^0 (C,O_C(1)) - 1 - 3$$
(the last equality is a consequence of the Euler sequence; the $-1$ comes from $h^0(\mathcal{O}_C)$). Fine, now we know $h^0$. Immediately before, it was shown that the tangent space of
$\mathcal{H}$ at $[C]$ is
$$T_{[C]}\mathcal{H}= H^0(C, N_{C / X})$$
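For concreteness, here is the dimension count made fully explicit (my own computation; note the $-1$ contributed by $h^0(\mathcal{O}_C)$ in the Euler sequence). Since $\mathcal{O}_C(1)\cong\mathcal{O}_{\mathbb{P}^1}(r)$ has $r+1$ sections,

```latex
h^0(C, \mathcal{T}_X \otimes \mathcal{O}_C)
  = (r+1)\,h^0(C,\mathcal{O}_C(1)) - h^0(C,\mathcal{O}_C)
  = (r+1)^2 - 1,
\qquad
h^0(C, N_{C/X}) = (r+1)^2 - 1 - 3 = r^2 + 2r - 3.
```

For the twisted cubic ($r=3$) this gives $h^0 = 12$, matching the $12$-dimensional component of the Hilbert scheme.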
By definition a scheme $X$ is smooth at a point $P$ if the dimension of the tangent space at this point coincides with the local dimension: i.e., there exists an open affine subscheme $U \subset X$ with $P \in U$ and $\dim_P T= \dim \ U$.
Working through the previous pages I found no information about the local dimension of the Hilbert scheme parametrizing rational normal curves, so I have no idea how to compare the dimension of $T_{[C]}\mathcal{H}= H^0(C, N_{C / X})$, which I calculated above, with the local dimension of $\mathcal{H}$.
Can anybody give me some hints on how to attack this part of the exercise?
You can represent the open subscheme of the Hilbert scheme as a GIT quotient. For instance for twisted cubics, one needs to choose 4 homogeneous polynomials of degree $3$ in $X$, $Y$, up to common scaling and then quotient out the $\mathbf{PGL}_2$-action on $X$, $Y$. This gives a $12$-dimensional component of the Hilbert scheme.
@EvgenyShinder: Could you explain why the Hilbert scheme of rational normal curves can be modeled locally by such a quotient? The reason why exactly 4 homogeneous polynomials of degree $3$ do the job isn't clear to me. Or could you give a reference where this construction you have described can be looked up?
At least on the level of points of the moduli space, to specify a rational normal curve of degree $r$, you can first parametrize it as a map from $\mathbf{P}^1$ to $\mathbf{P}^r$ of degree $r$ (Veronese embedding). This amounts to choosing the basis of degree $r$ polynomials in $X$, $Y$. Two maps give the same curve if they differ by linear action on $X$, $Y$.
|
2025-03-21T14:48:31.279865
| 2020-06-15T19:02:51 |
363158
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Kevin",
"Will Sawin",
"https://mathoverflow.net/users/141277",
"https://mathoverflow.net/users/1508",
"https://mathoverflow.net/users/18060",
"pinaki"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630229",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363158"
}
|
Stack Exchange
|
Degree inequality of a polynomial map distinguishing hyperplanes
Let $H_1, \ldots, H_n$ be $n$ linearly independent hyperplanes in $k^n$, for some arbitrary field $k$. Let $X = H_1 \cup H_2 \cup \cdots \cup H_n$. Is it true that if $F=(f_1, \ldots, f_n)$ is a polynomial map from $k^n$ to $k^n$, such that $F(X) \cap F(k^n - X) = \emptyset$, then $\sum \deg(f_i) \ge n$?
This holds under the stronger condition that for all $a \in X$, $F(a)$ has at least one coordinate equal to zero, and for all $a \notin X$, $F(a)$ has all coordinates nonzero: $\prod f_i$ then cuts out $X$, but since any polynomial cutting out $X$ has degree at least $n$, the conclusion follows.
More generally, for a variety $X \subseteq k^n$, define $C(X)$ to be the minimum of the sum of the degrees of the coordinate functions over all polynomial maps $F$ where $F(X) \cap F(k^n - X) = \emptyset$. Is this quantity equivalent to something that's well known? You trivially have $C(X) \le n$ by taking $F$ to be the identity map. Also, if $X$ is defined by equations whose degree-sum equals $m$, $C(X) \le m$. Is one of these inequalities always sharp?
With regard to "Is one of these inequalities always sharp?": For variety defined by $x^{100} +y^{100}= 1$ , the polynomial map $(x,y)$ satisfies this property, even though its degree $m$ is much greater than $2$, and the number of variables $n$ can be arbitrarily large.
Edit: The following argument does not work, as pointed out by the OP in the comments.
Proceed by induction on $n$. By your reasoning it holds for $n = 1$. For arbitrary $n \geq 2$, take a generic hyperplane $H$ distinct from the $H_1, \ldots, H_m$, and apply induction on the restriction of $F$ to $H$.
I'm having trouble understanding the induction. Say $n=m=2$. Restricting $F$ to a generic line through the origin, by the $n=1$ case the sum of the degrees of the restriction is at least 1. But you want to say this sum is at least 2. How do you get the +1?
$m$ does not change. Since $H$ is generic, $H'_j := H \cap H_j$ will be nonempty for each $j$. Linear independence of $H_1, \ldots, H_m$ in $k^n$ "should" imply linear independence of $H'_1, \ldots, H'_m$ in $k^{n-1}$. In particular if $n = m = 2$, after restricting to a generic line you will have $n = 1$ and $m = 2$.
How can you have 2 linearly independent points in $k$?
Aah, I understand. I was assuming by "linear independence" you actually meant "in general position." You are right, with "linear independence" there is a problem with induction. But with "in general position" there is not. But then there is a problem with the base case. I did not think it through - my apologies.
|
2025-03-21T14:48:31.280160
| 2020-06-15T21:26:48 |
363166
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630230",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363166"
}
|
Stack Exchange
|
Equivalence between integrals over a reduced space
Context: I have been trying to understand this paper by Y. Cho and K. Kim. More precisely, a specific argument in Lemma 2.2, where they say that the ABBV localization formula applied to an integral over the symplectic cut $N_c$ at the extrema of the interval $[c - \varepsilon, c + \varepsilon]$ gives the following:
$$0 = \int_{N_c} 1 = \sum_{p \in N^{S^1}\cap \phi^{-1}(c)} \frac{1}{d_p}\frac{1}{p_1p_2 \lambda^2}+ \int_{N_{c-\varepsilon}}\frac{1}{\lambda + e_-} + \int_{N_{c+ \varepsilon}} \frac{1}{-\lambda - e_+}$$
is equivalent to
$$0 =\sum_{p \in N^{S^1}\cap \phi^{-1}(c)} \frac{1}{d_p}\frac{1}{p_1p_2 \lambda^2} = \int_{N_{c-\varepsilon}} e_- - \int_{N_{c+ \varepsilon}} e_+$$
where $d_p$ is the order of the group acting on $p$, $p_1$ and $p_2$ are the isotropy weights of the $S^1$-representation on $T_pN$, $\lambda$ is the weight of the $S^1$-action on the fiber of the line bundle $L$, and $e_-$ (respectively $e_+$) is the Euler class of $\phi^{-1} (c- \varepsilon)$ (respectively $\phi^{-1} (c + \varepsilon)$).
Question: I don't understand the 'is equivalent' part. More precisely, why is the integral $\int_{N_{c-\varepsilon}}\frac{1}{\lambda + e_-}$ equivalent to $\int_{N_{c-\varepsilon}} e_-$.
Remark: I believe there is a typo in the paper where they write $M_{c - \varepsilon}$ where it should be $N_{c- \varepsilon}$.
My thoughts: The first equation is actually written $ -\int_{N_c} (-1) = 0$ where the Euler class of the bundle $\phi^{-1}(c) \to N_c =\phi^{-1}(c)/S^1$ is $-1$, since $N_c \cong B^2$ and the fibers are homeomorphic to $S^3$. The second equation comes from applying ABBV localization formula for orbifolds which is quite straightforward.
This gives us that
$$0=\int_{N_{c-\varepsilon}}\frac{1}{\lambda + e_-} - \int_{N_{c+ \varepsilon}} \frac{1}{\lambda + e_+} = -\sum_{p \in N^{S^1}\cap \phi^{-1}(c)} \frac{1}{d_p}\frac{1}{p_1p_2 \lambda^2}.$$
This is as far as I got. Any additional comments or thoughts?
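If I am reading the degree bookkeeping correctly (this is my own gloss, not taken from the paper): on the two-dimensional reduced spaces $N_{c\pm\varepsilon}$ the Euler classes $e_\pm$ have degree two, so $e_\pm^2=0$ and the geometric series terminates:

```latex
\frac{1}{\lambda + e_\pm}
  = \frac{1}{\lambda}\left(1 + \frac{e_\pm}{\lambda}\right)^{-1}
  = \frac{1}{\lambda} - \frac{e_\pm}{\lambda^{2}},
\qquad
\int_{N_{c\pm\varepsilon}} \frac{1}{\lambda + e_\pm}
  = -\frac{1}{\lambda^{2}} \int_{N_{c\pm\varepsilon}} e_\pm,
```

where the last step uses that $\int_{N_{c\pm\varepsilon}} 1 = 0$ (a degree-$0$ class integrates to zero over a surface). Substituting this into the first identity and clearing the factor $\lambda^{-2}$ yields the second one.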
|
2025-03-21T14:48:31.280291
| 2020-06-15T21:29:08 |
363167
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630231",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363167"
}
|
Stack Exchange
|
Sum of squares of polynomials in one variable with missing powers
As we known, a positive polynomial in $\mathbb{R}\left[x\right]$ can be expressed as a sum of squares of polynomials.
The problem is whether this still holds if some powers are missing.
Let $A$ be a finite set of natural numbers with $0\in A$. Let $\mathcal{P}=Lin\left \{x^n: n\in A\right\}$ be the linear span of monomials and $\mathcal{P}^2=Lin\left\{x^{m+n}:m,n\in A\right\}$. Let $f$ be a positive polynomial in $\mathcal{P}^2$, that is, $f\in \mathcal{P}^2$ and $f\geq 0$.
Questions:
(i) When can $f$ be expressed as a sum of squares of polynomials in $\mathcal{P}$? I.e., $f=\sum_{i}f_i^2, \quad f_i\in \mathcal{P}$.
(ii) Is there another modified form for a sum of squares of polynomials in $\mathcal{P}$?
I will give an example.
Let $A=\left\{0,2,3\right\}$. Then $\mathcal{P}=Lin\left\{1,x^2,x^3\right\}$ and $\mathcal{P}^2=Lin\left\{1,x^2,x^3,x^4,x^5,x^6\right\}$. Take $f=x^2$. We see that the only way to write $f$ as a sum of squares is $f=\left(x\right)^2$. But in this case, $x\notin\mathcal{P}$. This means $f$ cannot be expressed as a sum of squares of polynomials in $\mathcal{P}$.
The first question is not much difficult. I focus on the second one.
Idea: a positive polynomial in $\mathcal{P}^2$ is a finite sum of terms $gp^2$, where $p,gp \in \mathcal{P}$ and $g\geq 0$.
Example: Let $A=\left\{0,2,3\right\}$ as above. Take $f=x^4-x^2+1$. We can write
$$f=\left(x^2-1\right)^2+x^2=1.\left(x^2-1\right)^2+x^2.1$$
(we have here $g=1\geq 0,\ p=gp=x^2-1\in \mathcal{P}$ and $g=x^2\geq 0, \ p=1\in \mathcal{P}, \ gp = x^2\in \mathcal{P}$)
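The displayed decomposition can be checked mechanically: both sides are degree-$4$ polynomials, so agreeing at more than four points already forces the identity (a stdlib-only check):

```python
# Verify x^4 - x^2 + 1 == (x^2 - 1)^2 + x^2 as a polynomial identity.
# Two degree-4 polynomials that agree at 5+ points are equal, so an
# integer grid check is a complete proof here.
lhs = lambda x: x**4 - x**2 + 1
rhs = lambda x: (x**2 - 1)**2 + x**2
assert all(lhs(x) == rhs(x) for x in range(-10, 11))
```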
This idea may be right or not; I have no counter-example so far. I checked some cases of low degree and saw that it really works, but I don't know how to prove it. The condition $p\in \mathcal{P}$ (and also $gp\in \mathcal{P}$) is required, because we want an expression as a sum of squares of polynomials in $\mathcal{P}$.
Do you think it works? I think it's true and am trying to prove it.
|
2025-03-21T14:48:31.280424
| 2020-06-15T22:31:42 |
363172
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"A Stasinski",
"Aurel",
"Benjamin Steinberg",
"Keivan Karai",
"https://mathoverflow.net/users/15934",
"https://mathoverflow.net/users/2381",
"https://mathoverflow.net/users/3635",
"https://mathoverflow.net/users/40821"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630232",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363172"
}
|
Stack Exchange
|
Intrinsic characterisation of a class of rings
This may be well known, but I was unable to find an answer browsing literature. Let us temporarily call a commutative (unital) ring $R$ an O-ring if there exists an integer $n \ge 1$, a local field of zero characteristic (that is, a finite extension of $ \mathbb{Q}_p$ for some prime $p$) with ring of integers $ \mathcal{O}$ and the unique maximal ideal $\mathfrak{p}$ such that $R$ is isomorphic (as a ring) to $ \mathcal{O}/\mathfrak{p}^n$. Now, it is clear that an O-ring is a finite local ring. It is also easy to see that not all local rings arise in this way. My question is: is there a purely ring-theoretic way of characterising O-rings, without making any reference to local fields at all? I would appreciate any reference to the literature as well.
I think the term Galois ring is used but maybe it is just a special case. I'm not an expert but ran into this kind of thing.
@YCor: Thanks! It's fixed now.
About the terminology: related but not literally the same are Deligne's "truncated valuation rings" ("anneau de valuation tronqué", in "Les corps locaux de caractéristique p, limites de corps locaux de caractéristique 0"). Definition: local ring whose maximal ideal is principal and nilpotent. Equivalently, complete DVR modulo a power of the maximal ideal.
The following criterion came up when I was writing this answer (but I did not end up using it there):
Lemma. Let $R$ be a commutative ring. Then $R$ is of the form $\mathcal O_K/\mathfrak p^n$ for a finite extension $\mathbf Q_p \subseteq K$ and $n \in \mathbf Z_{>0}$ if and only if $R$ is finite, local, and $\dim_{R/\mathfrak m} \mathfrak m/\mathfrak m^2 \leq 1$.
Proof. Clearly any $R$ of the form $\mathcal O_K/\mathfrak p^n$ is finite, local, and has $\dim_{R/\mathfrak m}\mathfrak m/\mathfrak m^2 \leq 1$ (with equality if and only if $n > 1$). Conversely, suppose $R$ is finite, local, and has $\dim_{R/\mathfrak m} \mathfrak m/\mathfrak m^2 \leq 1$. Write $k = R/\mathfrak m$, and set $p = \operatorname{char} k$ and $q = |k|$, so that $k = \mathbf F_q$ with $q = p^r$ for some $r \in \mathbf Z_{>0}$. Write $\mathbf Z_q = W(\mathbf F_q)$ for the Witt vectors (the unique unramified extension of $\mathbf Z_p$ of degree $r$), which is a Cohen ring for $k$.
If $t \in \mathfrak m$ is a generator, then (the proof of) the Cohen structure theorem (Tag 032A) constructs a surjection
$$\phi \colon \mathbf Z_q[[t]] \to R$$
taking $t$ to $t$. Let $n = \operatorname{length}(R)$, so that $R \supsetneq \mathfrak m \supsetneq \ldots \supsetneq \mathfrak m^n = 0$, where $\mathfrak m^i$ is generated by $t^i$ for all $i$. Let $e \in \{1,\ldots,n\}$ be the integer such that $(p) = \mathfrak m^e$. Then there exists $u \in \mathbf Z_q^\times$ such that $\phi(up) = \phi(t^e)$, i.e. $t^e-up \in \ker\phi$. Thus, $\phi$ factors through
$$\mathbf Z_q[[t]] \twoheadrightarrow \mathbf Z_q\big[\sqrt[e\ \ ]{up}\big] \twoheadrightarrow R,$$
which realises $R$ as $\mathcal O_K/\mathfrak p^n$ where $K = \mathbf Q_q\big(\sqrt[e\ \ ]{up}\big)$ (and $n = \operatorname{length}(R)$ as above). $\square$
Remark. So in fact, it suffices to take $K$ of the form $\mathbf Q_q\big(\sqrt[e\ \ ]{up}\big)$.
Thanks for this answer. It's a very nice characterisation!
Note that this shows that restricting to finite extensions of $\mathbb{Q}_p$ is artificial and the class of finite local rings obtained is the same if you allow $K$ to be any non-Archimedean local field with finite residue field.
Note also that by Nakayama's lemma, the condition on the dimension of $\mathfrak{m}/\mathfrak{m}^2$ (for local Noetherian rings, as here) can be replaced by "principal ideal ring", so the rings in question can also be characterised as finite local principal ideal rings.
|
2025-03-21T14:48:31.280682
| 2020-06-15T22:59:00 |
363174
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630233",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363174"
}
|
Stack Exchange
|
What are the corners of this polytope?
Let $f$ be a non-negative function on the positive integers such that $f(s+t)\geq f(s) + f(t)$ for all $s,t\in\mathbb{Z}^+$. Consider the polytope consisting of all $x\in \mathbb{R}^n$ such that $$\sum_{i\in S} x_i \geq f(|S|)$$ for all subsets $S\subseteq \{1,\dots,n\}$. My question is: is there a compact description of the corners of this polytope? My conjecture is that the corners simply correspond to permutations $\sigma$ where one sets $x_{\sigma(1)} = f(1)$, $x_{\sigma(2)} = f(2) - f(1)$, $x_{\sigma(3)} = f(3) - f(2)$, and so on.
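For small $n$ the conjecture can be tested by brute force: enumerate all choices of $n$ tight constraints, solve exactly over the rationals, and keep the feasible solutions. A stdlib-only sketch (all names are mine), run on the superadditive sample $f(s)=s^2$:

```python
from fractions import Fraction
from itertools import combinations, permutations

def solve(A, b):
    # Gauss-Jordan elimination over the rationals; None if A is singular.
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                fac = M[r][col] / M[col][col]
                M[r] = [M[r][c] - fac * M[col][c] for c in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def vertices(n, f):
    # Vertices = feasible points where n linearly independent constraints are tight.
    subsets = [s for k in range(1, n + 1) for s in combinations(range(n), k)]
    verts = set()
    for tight in combinations(subsets, n):
        A = [[1 if i in S else 0 for i in range(n)] for S in tight]
        x = solve(A, [f(len(S)) for S in tight])
        if x is not None and all(sum(x[i] for i in S) >= f(len(S)) for S in subsets):
            verts.add(tuple(x))
    return verts

n, f = 3, lambda s: s * s   # f(s) = s^2 is superadditive
conjectured = {p for p in permutations([f(1), f(2) - f(1), f(3) - f(2)])}
assert vertices(n, f) == conjectured   # both equal the 6 permutations of (1, 3, 5)
```

Of course this only probes one instance; it is evidence for the conjecture, not a proof.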
|
2025-03-21T14:48:31.280749
| 2020-06-15T23:03:01 |
363176
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"David Roberts",
"Jiří Rosický",
"fosco",
"https://mathoverflow.net/users/4177",
"https://mathoverflow.net/users/73388",
"https://mathoverflow.net/users/7952"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630234",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363176"
}
|
Stack Exchange
|
Accessible 2-category of presheaves
Let $A$ be a locally small category and let $\mathbf{Cat}$ be the 2-category of small categories, functors and natural transformations and let $Ps(A)$ be the 2-category of presheaves (the objects are functors, 1-cells are natural transformations and the 2-cells are modifications).
My question is:
Is $Ps(A)$ an accessible category?
I have read some things about enriched categories and 2-categories, so I think it is true, but I am not sure.
Can somebody help me with a reference?
Presheaves of categories?
Your category of presheaves does not need to be legitimate. So, one cannot expect that is is accessible.
You can solve the issue in the comment above by either using small presheaves https://ncatlab.org/nlab/show/small+presheaf (and I bet that's what you meant when you wrote the definition of $PA$), or by using class-accessible categories https://www.sciencedirect.com/science/article/pii/S0022404912000321
|
2025-03-21T14:48:31.280845
| 2020-06-15T23:57:59 |
363180
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Jochen Glueck",
"Paul",
"https://mathoverflow.net/users/102946",
"https://mathoverflow.net/users/126827",
"https://mathoverflow.net/users/129185",
"mathworker21"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630235",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363180"
}
|
Stack Exchange
|
Convergence for a non-linear second order difference equation
In my work, I need to study the convergence of sequence defined by the non-linear recurrence relation
$$
u_0,u_1>0, \qquad \forall n\in \mathbb N, \; u_{n+2}=a\ln(1+u_n)+b\ln(1+u_{n+1})
$$
with fixed $a,b> 0$.
For $a=b=1$ Wolfram gives the limit $-1-2 W_{-1}\left(-\frac{1}{2\sqrt{e}}\right)$, where $W$ is the Lambert $W$ function (https://en.m.wikipedia.org/wiki/Lambert_W_function). I'm looking for ideas to study the convergence of this sequence. I suspect that non-classical arguments are needed to do so.
Note: I have already asked the question on Mathematics Stack Exchange, but it was closed quickly for a reason that I do not understand.
Something seems to be odd about the claimed limit: the corresponding fixed point equation is $u = (a+b)\ln(1+u)$. If $a+b \le 1$, this equation has no solution in $[0,\infty)$ except for $0$; for $a+b > 1$ it has precisely one more solution in $[0,\infty)$ (besides $0$), but this solution does certainly depend on $a+b$.
I forgot to say that wolfram gives this limit in the case a = b = 1
Thank you for the clarification. I made an edit to the question to tidy it up a bit. I also changed the title to make it a bit more informative. (The wording "Needing proof for..." in the title of a question is not really helpful to readers who browse a list of questions.)
That said, I am not sure why the question was downvoted. I might be overlooking something, but at first glance the question whether the sequence always converges does not seem obvious to me. (By the way, one more remark to the OP: I would suggest that you also add an explanation of the notation $W_{-1}$ to the question.)
@JochenGlueck Thank you for supporting
the question
I would suspect that non-classical arguments are needed to do so.
All you need to know is that $t\mapsto \frac 1{1+t}$ is a decreasing function, so for $0<x\le x'$ we have $\frac{\log(1+x')}{\log(1+x)}\le \frac {x'}x$. This immediately implies that the mapping $T:(x,y)\mapsto (y,a\log(1+x)+b\log(1+y))$ is non-expanding in the metric $d((x,y),(x',y'))=\max(|\log x-\log x'|,|\log y-\log y'|)$ on $(0,+\infty)^2$ and $T^2$ is a weak contraction ($d(T^2p,T^2q)<d(p,q)$ if $p\ne q$).
The next step is to consider the equation $x=(a+b)\log(1+x)$ and notice that either $a+b\le 1$ (in which case the iterations trivially converge to $(0,0)$, i.e., "escape to infinity" in our metric) or $a+b>1$ in which case there is a positive solution $x_0$ of that equation and $T$ maps every compact ball $B(p_0,r)$ into itself where $p_0=(x_0,x_0)$ is a fixed point of $T$. If $(u_0,u_1)$ lies in that ball, we can apply the usual result about weak contractions on compact sets to conclude that we have convergence to $p_0$.
but it was closed quickly for a reason that I do not understand.
The closure reason cited on MSE is totally ridiculous IMHO. The very fact that you ask a well-posed mathematical question is a sufficient proof of "relevance to you" and nobody is obliged to verify the "relevance to the community" (whatever it might mean) when asking. The only possible reason for closure I see is that the question is rather trivial, but, given the usual amount of total junk floating on MSE, I doubt that it was what determined its fate. So, please, accept my apologies for the MSE users behavior, ignore this incident and keep asking :-)
Thank you so much. I see more clearly how to attack these kinds of questions. thank you for your support
@fedja it clearly says the question was closed for not providing context. You can't just ask a mathematical question with nothing else (e.g. context, attempts,...) on MSE
If $f(x,y) = \ln(1+x) + \ln(1+y)$, and $p = -1 - 2 W_{-1}(-1/(2 \sqrt{e}))$, it is easy to verify that $f(p,p) = p$. Moreover, it appears numerically that $\|(x_3,x_4) - (p,p)\| < \|(x_1, x_2) - (p,p)\|$ where $x_3 = f(x_1, x_2)$, $x_4 = f(x_2, x_3)$, and $(x_1, x_2)$ is sufficiently close to $(p,p)$.
Here is a plot of $\|(x_3,x_4) - (p,p)\|^2/\|(x_1,x_2) - (p,p)\|^2$ as a function of $(x_1,x_2)$ for $0.1 \le x_1 \le 5$, $0.1 \le x_2 \le 5$.
If $(x_1,x_2)$ is in some circle centred at $(p,p)$ which is contained in the region where $\|(x_3,x_4) - (p,p)\| < \| (x_1, x_2) - (p,p)\|$,
we will have $x_n \to p$ as $n \to \infty$.
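A stdlib-only numerical check of the case $a=b=1$ (function names are mine): the iteration should converge to the positive root of $x = 2\log(1+x)$, which equals $-1-2W_{-1}\left(-\frac{1}{2\sqrt e}\right)$:

```python
import math

def fixed_point(s, lo=1e-9, hi=100.0, steps=200):
    # Bisection for the positive root of x = s*log(1+x), assuming s > 1:
    # g(x) = s*log(1+x) - x is positive near 0 and negative for large x.
    g = lambda x: s * math.log(1 + x) - x
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Iterate u_{n+2} = ln(1+u_n) + ln(1+u_{n+1})  (the case a = b = 1).
u = [1.0, 1.0]
for _ in range(500):
    u.append(math.log(1 + u[-2]) + math.log(1 + u[-1]))

p = fixed_point(2.0)   # equals -1 - 2*W_{-1}(-1/(2*sqrt(e))), approx. 2.5129
assert abs(u[-1] - p) < 1e-9
```

(The Lambert-$W$ form follows from rewriting $1+x = e^{x/2}$ as $-(1+x)/2 \cdot e^{-(1+x)/2} = -\tfrac{1}{2\sqrt e}$.)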
|
2025-03-21T14:48:31.281142
| 2020-06-16T00:25:42 |
363181
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Marcelo Pedro",
"https://mathoverflow.net/users/24463",
"https://mathoverflow.net/users/98507",
"srossd"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630236",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363181"
}
|
Stack Exchange
|
Intersection of a vector subspace with a cone
Given a set of vectors $S=\{v_1, v_2,\ldots,v_d\} \subset \mathbb{R}^{N}, \, N>d$, is there an algorithm to decide whether there exists a vector with all coordinates strictly positive in the spanned subspace $\langle S \rangle$?
I am aware of results like the Farkas Lemma (or variants as Gordan's Theorem, etc...).
Or some papers like: Ben-Israel, Adi. "Notes on linear inequalities, I: The intersection of the nonnegative orthant with complementary orthogonal subspaces." Journal of Mathematical Analysis and Applications 9.2 (1964): 303-314.
But I am looking for an algorithm to decide yes or no.
Let
$$M = \begin{pmatrix} | & | & \cdots & | \\ v_1 & v_2 & \cdots & v_d \\ | & | & \cdots & | \end{pmatrix}$$
and let $e_1, \ldots, e_N$ be the standard unit vectors. Then consider the linear programs indexed by $i = 1, \ldots, N$:
$$\begin{aligned}
&\text{maximize }\langle e_i,Mx\rangle \\
&\text{subject to }\langle e_j, Mx\rangle \ge 0,\quad j=1,\ldots,N
\end{aligned}$$
Clearly if any of these programs is unsatisfiable, then all of them are, and moreover $\langle S\rangle$ is disjoint from the positive orthant. So assume all programs are satisfiable.
If the $i$th program is bounded, let $x^{(i)}$ be its optimal solution and $y^{(i)}$ be its optimal value. If it is unbounded, add the constraint $\langle e_i, Mx\rangle \le 1$, and let $x^{(i)}$ be the optimal solution to this modified program, and the optimal value becomes $y^{(i)} = 1$.
If any of the $y^{(i)}$ are zero, then clearly $\langle S\rangle$ is disjoint from the positive orthant.
In the remaining case, where the programs are satisfiable and all optimal values are positive, let $x^{(i)}$ be the optimal solution for the $i$th program and let $y^{(i)}$ be the optimal value for the $i$th program. Since $\langle e_i, Mx^{(i)}\rangle = y^{(i)} > 0$ and $\langle e_j, Mx^{(i)}\rangle \ge 0$, all by construction, we can take any linear combination $\sum a_i Mx^{(i)}$, with all $a_i > 0$, to obtain a point in the intersection of $\langle S\rangle$ with the positive orthant.
In summary, if all the programs are satisfiable and have positive optimal values, then yes; otherwise, no. Note that adding constraints to make all programs bounded is for convenience of proving correctness, but unimportant for implementing the algorithm (one needs only check that all optimal values are either positive or infinite).
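Here is a sketch of the procedure with SciPy's `linprog` (assuming SciPy is available; note that `linprog` defaults to bounds $x \ge 0$, so free variables must be requested explicitly — the function name is mine):

```python
import numpy as np
from scipy.optimize import linprog

def intersects_positive_orthant(M, tol=1e-9):
    """Does the column span of M contain a vector with all coordinates > 0?

    M is N x d; its columns are the spanning vectors v_1, ..., v_d.
    Runs the N linear programs described in the answer above.
    """
    N, d = M.shape
    for i in range(N):
        c = -M[i]                            # maximize (Mx)_i == minimize -(Mx)_i
        A_ub = np.vstack([-M, M[i:i + 1]])   # (Mx)_j >= 0 for all j, and (Mx)_i <= 1
        b_ub = np.concatenate([np.zeros(N), [1.0]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * d)
        if res.status != 0 or -res.fun <= tol:
            return False                     # coordinate i can never be made positive
    return True
```

For example, the span of $(1,1,1)^T$ meets the positive orthant, while the span of $(1,-1,0)^T$ does not.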
srossd, can you point out any reference about the algorithm that you described? Or even, any reference that you think can help me? Thanks.
I don't have any references for this algorithm in particular. It's a direct application of LP, though. Each program asks the question "does this subspace intersect the positive $e_i$ half-space?". If each answer is yes, then by linearity, the subspace intersects the positive orthant. Otherwise, no.
Here's a quick python implementation, if that helps. https://repl.it/@srossd/IntersectsPositiveOrthant
|
2025-03-21T14:48:31.281353
| 2020-06-16T01:01:58 |
363182
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Daniel Loughran",
"Max Alekseyev",
"Stanley Yao Xiao",
"https://mathoverflow.net/users/10898",
"https://mathoverflow.net/users/47795",
"https://mathoverflow.net/users/5101",
"https://mathoverflow.net/users/7076",
"individ"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630237",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363182"
}
|
Stack Exchange
|
Solving a pair of ternary quadratic form equations
Let $Q_1(x_0, x_1, x_2), Q_2(x_0, x_1, x_2) \in \mathbb{Z}[x_0, x_1, x_2]$ be two primitive, non-singular ternary quadratic forms (possibly indefinite). Suppose we want to solve the simultaneous equations
$$\displaystyle u_1 = Q_1(x_0, x_1, x_2), u_2 = Q_2(x_0, x_1, x_2), u_1, u_2 \in \mathbb{Z}, \gcd(x_0, x_1, x_2) = 1.$$
Then according to this paper, a theorem of Mordell states that the solutions can be found as follows: first consider the conic defined by
$$\displaystyle C_{u_1,u_2} : Q_{u_1, u_2}(x_0, x_1, x_2) = u_1 Q_2(x_0, x_1, x_2) - u_2 Q_1(x_0, x_1, x_2) = 0.$$
If there is a solution at all, this conic must have a rational point, which can be detected using the local-to-global principle. Having found a rational point, one can then use the "slope/intersect" trick to produce a parametrization. That is, draw lines emanating from a global point which must exist by Hasse's principle: this line will intersect the conic at another rational point and this sweeps out all of the rational points on the conic. This produces binary quadratic forms $f_0(u,v), f_1(u,v), f_2(u,v)$ so that the conic $C_{u_1,u_2}$ is exactly parametrized by $z_i = f_i(u,v), i = 0,1,2$.
One then inserts these quadratic forms into $Q_1, Q_2$, producing binary quartic forms, and then solve the two Thue equations given by $u_i = Q_i(f_0(u,v), f_1(u,v), f_2(u,v))$.
What I am not quite sure is the number of parametrizations. That is, there could be inequivalent triples of quadratic forms $(f_0, f_1, f_2)$ which parametrize the points on $C_{u_1, u_2}$. How do we bound the number of distinct parametrizations?
https://mathoverflow.net/questions/208158/isotropic-ternary-forms/208494#208494
Are $u_1,u_2$ fixed, or are they also variables?
A different approach (based on reduction to Thue equations) is described in Theorem 6 of my paper.
Part of this is finding the primitive integer null vectors of an indefinite ternary quadratic form $f(x,y,z).$ Mordell points out that these occur in a finite number of parametrizations. The process you mention, stereographic projection around a rational point, does not do a good job of finding primitive integral solutions. Instead, it immediately finds all rational solutions, with no bound on denominators.
The Hessian matrix $H$ of an isotropic ternary form has this feature: there is an integer matrix $P$ and an integer $n$ such that
$$ P^T HP = nG \; , $$
where $G$ is the Hessian matrix of $g(x,y,z) = y^2 - zx \; . \;$ Indeed, there are infinitely many of these. For a fixed $n,$ there are typically several such $P$, if any.
Let's see, the primitive null vectors of $y^2 - zx$ are precisely $(p^2,pq,q^2).$ Applying $P$ to this (as a column vector) gives a null vector for $H,$ and we get some ability to say when this will be primitive.
I worked this out for isotropic forms of the sort $A(x^2 + y^2 + z^2) - B(yz+zx+xy).$ The number of inequivalent $P$ matrices needed to produce all primitive null vectors can be arbitrarily large. I kept a list somewhere...
The original proof is in Fricke and Klein (1897), where it is mentioned in passing. Different versions have been published over the years. I eventually wrote down a proof using just matrices, gcd and the like.
The twelve matrices $P$ needed for
$$ 100(x^2 + y^2 + z^2) -541(yz + zx + xy) =0 $$
This includes something about the order of the (very symmetric) solutions.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
A = 100 B = 541
445 1009 430
430 -149 -134
-134 -119 445
478 1003 394
394 -215 -131
-131 -47 478
514 985 349
349 -287 -122
-122 43 514
529 973 328
328 -317 -116
-116 85 529
541 961 310
310 -341 -110
-110 121 541
574 913 253
253 -407 -86
-86 235 574
580 901 241
241 -419 -80
-80 259 580
604 835 184
184 -467 -47
-47 373 604
610 811 166
166 -479 -35
-35 409 610
616 781 145
145 -491 -20
-20 451 616
625 709 100
100 -509 16
16 541 625
628 643 64
64 -515 49
49 613 628
count was 12 end of A = 100 B = 541
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
my proof
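As a quick stdlib-only check of the mechanism described above: the first matrix $P$ in the list sends the primitive null vectors $(p^2, pq, q^2)$ of $y^2 - zx$ to integral null vectors of $100(x^2+y^2+z^2) - 541(yz+zx+xy)$ (the matrix entries are copied from the list):

```python
# Check that the first matrix P above maps null vectors (p^2, pq, q^2) of
# y^2 - zx to integral null vectors of f = 100(x^2+y^2+z^2) - 541(yz+zx+xy).
P = [[445, 1009, 430],
     [430, -149, -134],
     [-134, -119, 445]]

def f(x, y, z):
    return 100 * (x * x + y * y + z * z) - 541 * (y * z + z * x + x * y)

def image(p, q):
    # apply P to the column vector (p^2, pq, q^2)
    v = (p * p, p * q, q * q)
    return tuple(sum(P[i][j] * v[j] for j in range(3)) for i in range(3))

for p in range(-6, 7):
    for q in range(-6, 7):
        assert f(*image(p, q)) == 0
```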
That there can be arbitrarily many different parametrizations does not surprise me, but I am wondering whether one can obtain a divisor like bound for the number of them. I believe in such a case one can relate the number of parametrizations to the 2-torsion part of the class group of some quadratic order associated with the pair of forms and integers?
|
2025-03-21T14:48:31.281662
| 2020-06-16T01:56:18 |
363184
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"MathMath",
"https://mathoverflow.net/users/152094"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630238",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363184"
}
|
Stack Exchange
|
Representation of an arbitrary element on a fermionic Fock Space
Let $\mathcal{H}$ be a Hilbert space with orthonormal basis $\{\varphi_{k}\}_{k\in I}$. Take $\mathcal{H}^{\otimes n} := \overbrace{\mathcal{H}\otimes\cdots\otimes \mathcal{H}}^{\mbox{$n$ times}}$. An element of $\mathcal{H}^{\otimes n}$ can be expressed as:
$$\psi = \sum_{k_{1},...,k_{n}\in I}\alpha_{k_{1},...,k_{n}}(\varphi_{k_{1}}\otimes \cdots \otimes \varphi_{k_{n}})$$
with $\alpha_{k_{1},...,k_{n}} = \langle \varphi_{k_{1}}\otimes\cdots\otimes\varphi_{k_{n}},\psi\rangle$. Let us define $\sigma^{*}$ as an operator on $\mathcal{H}^{\otimes n}$ which acts on the basis elements as:
$$\sigma^{*}(\varphi_{k_{1}}\otimes \cdots \otimes \varphi_{k_{n}}) := \varphi_{k_{\sigma(1)}}\otimes\cdots \otimes \varphi_{k_{\sigma(n)}}$$
where $\sigma$ is a permutation of the set $\{1,...,n\}$. We extend $\sigma^{*}$ to all $\mathcal{H}^{\otimes n}$ by linearity. Now, one can define:
$$A_{n}:= \frac{1}{n!}\sum_{\sigma}\epsilon_{\sigma}\sigma^{*}$$
an antisymmetrization operator on $\mathcal{H}^{\otimes n}$. Here $\epsilon_{\sigma}$ is the sign of the associate permutation $\sigma$. Then $A_{n}$ is an orthogonal projection and, if $A_{n}\mathcal{H}^{\otimes n}$ denotes its range, the fermionic Fock space is defined to be:
$$\mathcal{F}_{f}(\mathcal{H}) := \bigoplus_{n=0}^{\infty}A_{n}\mathcal{H}^{\otimes n}$$
with $A_{0}\mathcal{H}^{0} := \mathbb{C}$.
Alternatively, let us say that a tensor $\psi \in \mathcal{H}^{\otimes n}$ is antisymmetric if $\sigma^{*}\psi = \epsilon_{\sigma}\psi$ for every permutation $\sigma$. Take $\wedge^{n}\mathcal{H}$ to be the subspace of all antisymmetric tensors of $\mathcal{H}^{\otimes n}$ and $\wedge^{0}\mathcal{H} := \mathbb{C}$.
Question: Can I use the second approach to define fermionic Fock spaces in an equivalent way, as before? In other words, if I set $\mathcal{F}'_{f}(\mathcal{H}) := \bigoplus_{n=0}^{\infty}\wedge^{n}\mathcal{H}$, does it follow that $\mathcal{F}_{f}(\mathcal{H}) = \mathcal{F}'_{f}(\mathcal{H})$? Equivalently: is it possible to prove that every $\psi \in \wedge^{n}\mathcal{H}$ can be expressed as $\psi = \frac{1}{n!}\sum_{\sigma}\epsilon_{\sigma}\sigma^{*}\varphi$ for some $\varphi \in \mathcal{H}^{\otimes n}$?
[This is not research level, so probably does not belong on MO, but I think the question is well-asked.]
If $\psi\in\wedge^n\mathcal{H}$ then by definition $\sigma^*\psi = \epsilon_\sigma \psi$ for each permutation $\sigma$ and so as $\epsilon_\sigma \in \{\pm 1\}$ we have
$$ A_n\psi = \frac{1}{n!} \sum_\sigma \epsilon_\sigma \sigma^*\psi
= \frac{1}{n!} \sum_\sigma \epsilon_\sigma \epsilon_\sigma \psi
= \psi. $$
Actually, the converse is much more interesting, as it really involves a little bit of representation theory. By the converse, I mean: show that if a tensor is in the range of $A_n$ then it is anti-symmetric. So, $\psi = A_n\varphi$ for some arbitrary $\varphi$. Then for a permutation $\sigma$,
$$ \sigma^*\psi = \frac{1}{n!} \sum_\tau \epsilon_\tau \sigma^*\tau^*\varphi. $$
Set $\rho = \tau\sigma$ and notice that $\sigma^*\tau^*(\otimes_i \varphi_{k_i}) = \sigma^*(\otimes_i \varphi_{k_{\tau(i)}}) = \sigma^*(\otimes_i \varphi_{l_i})$ say, where thus $l_i = k_{\tau(i)}$. Then $l_{\sigma(i)} = k_{\tau(\sigma(i))}$ and so $\sigma^*\tau^*(\otimes_i \varphi_{k_i}) = \otimes_i \varphi_{k_{\tau\sigma(i)}} = (\tau\sigma)^*(\otimes_i \varphi_{k_i})$. Thus we have an anti-representation of the symmetric group. As $\epsilon$ is a group homomorphism, $\epsilon_\tau = \epsilon_{\rho\sigma^{-1}} = \epsilon_\rho\epsilon_\sigma$. Thus
$$ \sigma^*\psi = \frac{1}{n!} \sum_\rho \epsilon_\rho\epsilon_\sigma \rho^*\varphi
= \epsilon_\sigma A_n\varphi = \epsilon_\sigma \psi. $$
So $\psi\in\wedge^n\mathcal{H}$.
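Both directions can be sanity-checked numerically. Here is a small stdlib-only sketch for $n=3$ tensor factors over a $3$-dimensional $\mathcal{H}$, with exact rational arithmetic; the dimensions and the test tensor are arbitrary choices for illustration:

```python
import itertools
from fractions import Fraction
from math import factorial

d, n = 3, 3                       # dim H = 3, three tensor factors
basis = list(itertools.product(range(d), repeat=n))
index = {k: i for i, k in enumerate(basis)}

def sign(perm):
    # sign of a permutation via inversion count
    s = 1
    for i in range(n):
        for j in range(i + 1, n):
            if perm[i] > perm[j]:
                s = -s
    return s

def sigma_star(vec, perm):
    # sigma^* e_{(k_1,...,k_n)} = e_{(k_{perm(1)},...,k_{perm(n)})}
    out = [Fraction(0)] * len(vec)
    for k, c in zip(basis, vec):
        out[index[tuple(k[perm[i]] for i in range(n))]] += c
    return out

def antisymmetrize(vec):
    # A_n = (1/n!) sum_sigma eps_sigma sigma^*
    out = [Fraction(0)] * len(vec)
    for perm in itertools.permutations(range(n)):
        s = sign(perm)
        for i, c in enumerate(sigma_star(vec, perm)):
            out[i] += s * c
    return [c / factorial(n) for c in out]

psi = [Fraction(2) ** i for i in range(d ** n)]   # a generic tensor
phi = antisymmetrize(psi)
assert any(c != 0 for c in phi)                   # the example is nontrivial

for perm in itertools.permutations(range(n)):
    # the range of A_n is antisymmetric: sigma^* phi = eps_sigma phi
    assert sigma_star(phi, perm) == [sign(perm) * c for c in phi]

assert antisymmetrize(phi) == phi                 # and A_n fixes wedge^n H
```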
Great answer! Really got it! Thanks!
|
2025-03-21T14:48:31.282246
| 2020-06-16T02:42:06 |
363187
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Giorgio Metafune",
"https://mathoverflow.net/users/150653",
"https://mathoverflow.net/users/5656",
"kenneth"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630239",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363187"
}
|
Stack Exchange
|
first order derivative of the parabolic equation
Assume $b, \ell \in C_b^{1,2}(\mathbb R^2)$. We consider the parabolic PDE
$$(P1)\quad \partial_t v = b \partial_x v + \partial_{xx} v + \ell, \ \forall (t, x) \in \mathbb R^+\times \mathbb R; \quad v(0, x) = 0, \forall x\in \mathbb R.$$
It is standard that there is a classical solution.
Assuming $\partial_{xxx} v$ exists, applying $\partial_x$ to the equation and writing $\hat v = \partial_x v$, we obtain
$$\partial_t \hat v = b \partial_x \hat v + \partial_{xx} \hat v + \partial_x \ell + \hat v \partial_x b, \ \forall (t, x) \in \mathbb R^+\times \mathbb R; \quad \hat v(0, x) = 0, \forall x\in \mathbb R.$$
Therefore, by uniqueness of the solution, we conclude that
The solution $\hat u$ of
$$(P2) \quad \partial_t \hat u = b \partial_x \hat u + \partial_{xx} \hat u + \partial_x \ell + \hat u \partial_x b, \ \hbox{ on } \mathbb R^+\times \mathbb R; \quad \hat u(0, x) = 0, \hbox{ on } \mathbb R,$$
satisfies $\hat u = \partial_x v$.
Now, I want to know if the above conclusion holds without assuming $v\in C^{1,3}$, that is:
Let $b, \ell \in C_b^{1,2}(\mathbb R^2)$, do $\hat u$ of (P2) and $v$ of (P1) satisfy $\hat u = \partial_x v$?
This is true since $u$ has 3 derivatives with respect to $x$. For example, if $b=0$, solve first (P2) and call $v$ the solution. Then $u(t,x)=u(t,0)+\int_0^x v(t,y)\,dy$, since both solve (P1). One can also use difference quotients with respect to $x$ to show that $u_{xx}$ is Lipschitz, but then one needs an argument for the existence of the third derivative at any point. For general $b$ it should be the same.
Could you explain in more detail? I could not understand. I tried a similar way. Due to non-degeneracy, $\hat u\in C^{1,2}$. Set $u(t, x)= u(t, 0) + \int_0^x \hat u(t, y) dy$. Try to verify $u$ satisfies (P1), but the last step did not go through.
Sure. Let v be the solution of (P2) with b=0 and set $w(t,x)=u(t,0)+\int_0^x v(t,y)dy$. Then $w(0,x)=0$ and $w_t(0,x)=u_t(t,0)+\int_0^x v_t(t,y) dy$. Using the equation for $v$ this last term equals $u_t(t,0)+w_{xx}+\ell-(v_x(t,0)+\ell (t,0))=w_{xx}+\ell$. This says that $w$ solves (P1) and then coincides with $u$.
We define
\begin{equation*}
u(t, x) = g(t) + \int_{0}^{x} \hat u (t, y) \, d y, \quad \forall (t, x) \in (\mathbb{R}^{+}, \mathbb{R})
\end{equation*}
where $g(\cdot)$ is the function we want to find to make $u(t, x)$ the solution of equation (P1). Suppose $u(t, x)$ is the solution of equation (P1); for the initial condition, we need
\begin{equation*}
u(0, x) = g(0) + \int_{0}^{x} \hat u (0, y) \, d y = g(0) = 0.
\end{equation*}
And for $(t, x) \in (\mathbb{R}^{+}, \mathbb{R})$, we have
\begin{eqnarray*}
\partial_{t} u & = & g^{'} (t) + \int_{0}^{x} \partial_{t} \hat u(t, y) \, d y \\
& = & g^{'} (t) + \int_{0}^{x} (b \partial_{x} \hat u + \partial_{xx} \hat u + \partial_{x} l + \hat u \cdot \partial_{x} b ) (t, y) \, d y \\
& = & g^{'} (t) + (b \hat u + \partial_{x} \hat u + l) |_{0}^{x} \\
& = & g^{'} (t) + b \hat u + \partial_{x} \hat u + l - (b \hat u + \partial_{x} \hat u + l)(t, 0).
\end{eqnarray*}
By the definition of $u(t, x)$, we can get that
\begin{equation*}
\partial_{x} u (t, x) = \hat u (t, x), \quad \forall (t, x) \in (\mathbb{R}^{+}, \mathbb{R}),
\end{equation*}
then we have
\begin{equation*}
\partial_{t} u - (b \partial_{x} u + \partial_{xx} u + l) = g^{'}(t) - (b \hat u + \partial_{x} \hat u + l)(t, 0).
\end{equation*}
Thus the sufficient condition that $u(t, x)$ is the solution of equation (P1) is
\begin{equation} \label{C1}
(C1) \quad
\begin{cases}
g^{'}(t) = (b \hat u + \partial_{x} \hat u + l)(t, 0) \\
g(0) = 0
\end{cases}
\end{equation}
where $\hat u(t, x)$ is the solution of equation (P2). We define
\begin{equation}
g(t) = \int_{0}^{t} (b \hat u + \partial_{x} \hat u + l)(s, 0) \, d s, \quad \forall t \in \mathbb{R}^{+},
\end{equation}
which satisfies the condition (C1). Thus
\begin{equation*}
u(t, x) = \int_{0}^{t} (b \hat u + \partial_{x} \hat u + l)(s, 0) \, d s + \int_{0}^{x} \hat u (t, y) \, d y, \quad \forall (t, x) \in (\mathbb{R}^{+}, \mathbb{R})
\end{equation*}
is the solution of (P1) and it satisfies
\begin{equation*}
\partial_{x} u (t, x) = \hat u (t, x), \quad \forall (t, x) \in (\mathbb{R}^{+}, \mathbb{R}).
\end{equation*}
What's more, we have $u(t, x) \in C_{b}^{1,3} (\mathbb{R}^{+}, \mathbb{R})$ and it is the unique solution of (P1).
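As a quick sanity check of the conclusion $\partial_x v = \hat u$, one can pick a concrete classical solution by hand and verify the (P2) residual pointwise. Below, $v(t,x)=t\sin x$ and $b(x)=x$ are chosen purely for illustration, with $\ell := \partial_t v - b\,\partial_x v - \partial_{xx} v$ so that $v$ solves (P1); this $b$ ignores the boundedness assumption, which does not affect the pointwise identity:

```python
import math

# Hand-picked classical solution: v(t, x) = t*sin(x), b(x) = x, and
# l defined so that v solves (P1). Then \hat u := v_x should solve (P2).
def u(t, x):    return t * math.cos(x)          # \hat u = \partial_x v
def u_t(t, x):  return math.cos(x)
def u_x(t, x):  return -t * math.sin(x)
def u_xx(t, x): return -t * math.cos(x)
def b(x):       return x
def b_x(x):     return 1.0
# l = v_t - b v_x - v_xx = sin x - x t cos x + t sin x,
# hence l_x = cos x + x t sin x
def l_x(t, x):  return math.cos(x) + x * t * math.sin(x)

for t in (0.1, 0.5, 2.0):
    for x in (-1.0, 0.3, 1.7):
        residual = u_t(t, x) - (b(x) * u_x(t, x) + u_xx(t, x)
                                + l_x(t, x) + u(t, x) * b_x(x))
        assert abs(residual) < 1e-9   # \hat u satisfies (P2) pointwise
```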
|
2025-03-21T14:48:31.282486
| 2020-06-16T02:45:15 |
363189
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Jacob.Z.Lee",
"https://mathoverflow.net/users/42816"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630240",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363189"
}
|
Stack Exchange
|
Matching book thickness of the wheel graph $W_n$
In a book embedding of a graph $G$, the vertices of $G$ are placed on the spine and each edge is assigned to a page so that edges on the same page do not cross. If every vertex has degree at most one in each page, the book embedding is called matching. The minimum number of pages in which a graph can be matching book embedded is called the matching book thickness. For convenience, we denote the matching book thickness of a graph $G$ by $\mathrm{mbt}(G)$.
For the wheel graph $W_n$ with $n$ vertices $O,1,2,3,...,n-1$, I want to know $\mathrm{mbt}(W_n)$. For the case $n$ odd, it is not hard to see that $\mathrm{mbt}(W_n)=\Delta(W_n)=n-1.$ For the case $n$ even, I guess $\mathrm{mbt}(W_n)=n$. For example, when $n=6$, I have tried some matching book embeddings of $W_6$ with different orderings $\omega$ of the vertices on the spine, and I always get a matching book embedding on 6 pages. For $W_4=K_4$, $\mathrm{mbt}(W_4)=4.$ But I was wondering whether the equality holds for every even $n$, i.e. $\mathrm{mbt}(W_n)=n$.
I will appreciate it if someone could give any suggestions.
I believe this is a counterexample:
Let the vertices be ordered 1, 2, 3, 4, 5, 6 on the spine.
Page 1: 4-6, 2-1
Page 2: 6-1, 2-4
Page 3: 1-5, 2-3
Page 4: 5-3, 2-6
Page 5: 3-4, 2-5
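One can verify mechanically that these five pages form a matching book embedding of $W_6$ (hub $2$, outer cycle $1,6,4,3,5,1$, spine order $1,\dots,6$); a short stdlib-only check:

```python
from itertools import combinations

# spine order 1..6; the five pages from the answer above
pages = [
    [(4, 6), (2, 1)],
    [(6, 1), (2, 4)],
    [(1, 5), (2, 3)],
    [(5, 3), (2, 6)],
    [(3, 4), (2, 5)],
]

def crosses(e, f):
    # two chords cross iff their endpoints interleave on the spine
    a, b = sorted(e)
    c, d = sorted(f)
    return a < c < b < d or c < a < d < b

for page in pages:
    verts = [v for e in page for v in e]
    assert len(verts) == len(set(verts))        # matching: degree <= 1 per page
    for e, f in combinations(page, 2):
        assert not crosses(e, f)                # no crossings within a page

# the pages together cover exactly the edges of W_6 (hub 2)
wheel = {frozenset((2, v)) for v in (1, 3, 4, 5, 6)} | \
        {frozenset(e) for e in [(1, 6), (6, 4), (4, 3), (3, 5), (5, 1)]}
assert {frozenset(e) for page in pages for e in page} == wheel
```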
Good job. Thanks a lot.
How do you think about this problem?
|
2025-03-21T14:48:31.282600
| 2020-06-16T03:49:38 |
363192
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Erik Walsberg",
"https://mathoverflow.net/users/152899"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630241",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363192"
}
|
Stack Exchange
|
Linear programming with exponential inequalities and rational variables
If we are given a set of real linear inequalities, then using elimination theory or just linear programming we can decide feasibility. If the program also has inequalities of the form $2^x\leq g$ in addition to linear inequalities, then is there a decision procedure when the base of the exponentiation is constant and all involved variables (including the exponent) are always rational? If so, what is the complexity?
In essence we have $n$ formulas of type
$$x_i R_1 a$$
$$x_i R_1 x_j$$
$$x_i R_2 2^{x_j}$$
where $a$ is rational and $R_1,R_2\in\{<,>,=,\leq,\geq\}$, and we want to decide whether there exist $x_1,\dots,x_m\in\mathbb Q$ satisfying all of them, where $n=O(m)$.
What if $R_2\in\{=\}$?
You might want to look at the paper "Deciding polynomial transcendental problems" by McCallum and Weispfenning.
|
2025-03-21T14:48:31.282683
| 2020-06-16T04:26:51 |
363195
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630242",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363195"
}
|
Stack Exchange
|
Perturbation of a spacetime in general relativity
In general relativity one has the Schwarzschild metric for a non-rotating black hole
$g_{SC} = -\phi^2 \: dt^2 + \Bigg(1 + \frac{m_0}{2r} \Bigg)^4 \delta $
and from this one has the spacelike Schwarzschild metric
$g = \Bigg(1 + \frac{m_0}{2r} \Bigg)^4 \delta $
which corresponds to a spacelike slice of the 4D manifold with vanishing extrinsic curvature where $t$ has been set to $0$. If one takes instead a perturbation of the 4D manifold by some metric $h_{\alpha \beta}$ with small components, is the perturbation of the spacelike Schwarzschild metric by the spatial components $h_{i j}$ of the above perturbation a spacelike slice of the perturbed 4D Schwarzschild spacetime, and if so, what is the extrinsic curvature of the hypersurface?
In the unperturbed case the extrinsic curvature is zero, but in the perturbed case it should take some general form.
For example, the 3D Riemann tensor on a hypersurface $\Sigma$ can be expressed in terms of the 4D tensor evaluated on $\Sigma$, so I'm wondering if something similar can be stated here.
|
2025-03-21T14:48:31.282772
| 2020-06-16T05:17:52 |
363198
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alex Ravsky",
"https://mathoverflow.net/users/100231",
"https://mathoverflow.net/users/43954",
"vidyarthi"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630243",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363198"
}
|
Stack Exchange
|
Clique cover number of a generalized Kneser graph $K(n,4,2)$
Recently I attacked this combinatorial question. The value of $m(n)$ introduced in it equals to a clique cover number
of a generalized Kneser graph $KG_{n,4,1}=K(n,4,2)$ (or the chromatic number of its complement). I tried to find references with bounds for this number, but failed. On the other hand, the chromatic number of generalized Kneser graphs has been investigated, see the references. For instance, if $n = (k-1)s+r$, $0\le r<k-1$, then Proposition 2.6 from [AJ] implies that $\chi(K(n,k,2))\le (k-1){s\choose 2}+rs$, and Frankl [F] showed that if $n >10k^3e^k$ then this bound is exact.
RobPratt calculated the values of $m(n)$ for $n\le 9$. Max Alekseyev showed that $m(n)\ge \frac{(n-2)(n-3)}2$. If $n$ is a power of an odd prime, then using a finite field of order $n$ we can show that $m(n)\le n^2$. This observation implies an upper bound $m(n)\le (n+n^{0.525})^2$ for sufficiently large $n$, because for sufficiently large $x$ there is a prime in $[x-x^{0.525}, x]$, see [BHP].
Thanks.
References
[AJ] Sharareh Alipour, Amir Jafari, On the chromatic number of generalized Kneser graphs, Contributions to discrete mathematics 12:2 69–76.
[BCK] József Balogh, Danila Cherkashin, Sergei Kiselev, Coloring general Kneser graphs and hypergraphs via high-discrepancy hypergraphs.
[BHP] R. Baker, G. Harman, J. Pintz, The difference between consecutive primes. II.
Proc. Lond. Math. Soc., (3) Ser. 83 (2001) 532–562.
[F] Peter Frankl, On the chromatic number of the general Kneser-graph, Journal of Graph Theory, 9:2 (1985) 217–220.
[FF] Peter Frankl, Zoltán Füredi, Extremal problems concerning Kneser graphs, Journal of Combinatorial Theory, Series B, 40:3 (1986) 270–284.
so, finally, what is the main question? You meant the title is itself your question?
@vidyarthi The main question is about tighter bounds for $m(n)$.
Cant we use a version of Baranyai's theorem here?
@vidyarthi I'm not aware on a suitable version.
|
2025-03-21T14:48:31.282930
| 2020-06-16T07:36:28 |
363204
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"https://mathoverflow.net/users/142929",
"user142929"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630244",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363204"
}
|
Stack Exchange
|
On $(\prod_{\substack{1\leq s\leq X\\s\text{ semiprime}}}s)(\sum_{\substack{1\leq s\leq X\\s\text{ semiprime}}}\frac{1}{s})$ as $X\to\infty$
A few weeks ago a user from Mathematics Stack Exchange answered my question On an inequality that involves products and sums related to the sequence of semiprimes (asked May 26). It seems that disproving my conjecture required only cheap statements of analytic number theory (cheaper than the prime number theorem). If my post is good, I would like to dedicate it to the excellence in (real and complex analysis, functional analysis, topology and other subjects such as) analytic number theory of the user who refuted my conjecture.
A semiprime $s$ is a positive integer that is the product of two prime numbers, see Semiprime from the encyclopedia Wikipedia, thus corresponding to the sequence A001358 of the OEIS. I wondered if it is possible to deduce a statement at research level of the asymptotic
$$\Bigl(\prod_{\substack{1\leq s\leq X\\s\text{ semiprime}}}s\Bigr)\Bigl(\sum_{\substack{1\leq s\leq X\\s\text{ semiprime}}}\frac{1}{s}\Bigl)=\text{main term}+\text{error term},\tag{1}$$
where $\text{main term}=\text{main term}(X)$ is a function of the real variable $X$, let's say $X\geq 1$, and $\text{error term}=\text{error term}(X)$ is also a function of $X$ and represents a suitable error term in our asymptotic formula $(1)$ as $X\to\infty$.
Question. I would like to know what work can be done with the purpose to get a statement at research level for $(1)$ as $X\to\infty$, for a suitable error term expressed in big-O notation or little-o notation as you want (if it is feasible, you can to express your answer as an asymptotic identity $\sim$). Many thanks.
I don't know if this question concerning $(1)$ is in the literature; to ask it I was inspired by a statement from [1]. If there is literature that provides an explicit answer to my question, then refer to it, answering my question as a reference request, and I will try to search and read those statements from the literature. I think that this post can be interesting to me as a companion of the post on MSE, and I don't know if expressions similar to $(1)$ (I mean inequalities as in the Lemma from [1], or asymptotics as in our Question) are in the literature for other constellations of primes, or if the question is interesting for other prime constellations, for example Ramanujan primes, primes in arithmetic progressions,... I evoke these problems in case you want to explore some of them at home; our case study here is the semiprimes.
References:
[1] Takashi Agoh, Paul Erdös and Andrew Granville, Primes at a (Somewhat Lengthy) Glance, The American Mathematical Monthly, Vol. 104, No. 10 (December, 1997), pp. 943-945.
I can read [1] from my JSTOR account (I also add, in case you as a professional mathematician need it, that the site has very generous expanded support these months).
No, such an asymptotic formula is too much to ask for. The reason, morally, is that we shouldn't be looking at the product of the integers itself, but rather its logarithm—the sum of the logarithms of the integers. Being on this exponential scale amplifies oscillations enough to make asymptotic formulas impossible.
To see why, suppose we did have an asymptotic formula of the form $\prod_{s\in S,\, s\le x} s = \text{main term}+\text{error term}$. (Here $S$ can be the set of semiprimes, but this argument holds for any set.) By "main term" we presumably mean some continuous function $m(x)$; by "error term" we mean some function that is $o(m(x))$. In other words, suppose we did have a formula of the form $\prod_{s\in S,\, s\le x} s = m(x) + o(m(x)) = m(x) \big( 1 + o(1) \big)$. Then taking logarithms would give
$$
\sum_{\substack{s\in S \\ s\le x}} \log s = \log m(x) + \log \big( 1 + o(1) \big) = \log m(x) + o(1).
$$
In other words, an asymptotic formula for your product would imply an asymptotic formula for this sum with an incredibly strong error term $o(1)$. That's impossible even just from the jump discontinuities in the sum alone.
For example, suppose that instead of semiprimes we looked just at the set of all integers! Then $\prod_{s\le x} s = \lfloor x! \rfloor$ and $\sum_{s\le x} \log s = x\log x - x + O(\log x)$, but that error term cannot be improved as can be seen by looking at $x\to N-$ and $x\to N+$ for large integers $N$. That means that the product has arbitrarily large jump discontinuities even.
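The jump-discontinuity obstruction is easy to see computationally. Here is a small stdlib-only sketch for the semiprime sum $\sum_{s\le x}\log s$; trial division is enough at this scale:

```python
import math

def is_semiprime(m):
    # exactly two prime factors counted with multiplicity
    count, d = 0, 2
    while d * d <= m and count <= 2:
        while m % d == 0:
            m //= d
            count += 1
        d += 1
    if m > 1:
        count += 1
    return count == 2

def log_sum(x):
    # sum_{s <= x, s semiprime} log s: the quantity an asymptotic formula
    # for the product would have to approximate with o(1) error
    return sum(math.log(s) for s in range(2, int(x) + 1) if is_semiprime(s))

semis = [s for s in range(2, 30) if is_semiprime(s)]
# the sum jumps by log s at each semiprime s, and log s is unbounded,
# so no continuous main term can track it to within o(1)
jump_at_25 = log_sum(25) - log_sum(24)
assert abs(jump_at_25 - math.log(25)) < 1e-12
```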
Many thanks I'm going to read and study your excellent answer. And thanks for the remark in first paragraph, is lucid and concise.
|
2025-03-21T14:48:31.283206
| 2020-06-16T08:53:32 |
363209
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630245",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363209"
}
|
Stack Exchange
|
Non-integer conditional moment of exponential functional of Brownian motion
Let $B_t$ be a standard Brownian motion.
I want to solve the following:
$$
\mathbb{E}\left[\left(\int_0^1 e^{\sigma B_t}dt \right)^{1/(1-\beta) }\mid e^{\sigma B_1}=z \right],
$$
for some fixed $0<\beta<1$ and $\sigma>0$.
This type of integral is important in finance, for example to price path-dependent options like Asian options.
What I have tried.
Yor (1992) obtained the density as follows:
$$
\mathbb{P}\left\{ \int_0^1 e^{\sigma B_t} dt=u \mid e^{\sigma B_1} = z \right\}
=\sqrt{\frac{\pi}{2}}\frac{\sigma}{u} \exp \left( \frac{(\log z)^2}{2\sigma^2}-\frac{2(1+z)}{\sigma^2 u} \right) \theta_{\frac{4\sqrt{z}}{\sigma^2 u}}\left(\frac{\sigma^2}{4} \right),
$$
where $\theta_r(s)$ is the (normalized) Hartman-Watson density characterized via its Laplace transform:
$$
\int_0^{\infty} \exp\left(-\frac{1}{2}\alpha^2 s \right) \theta_r(s)\,ds=I_{\alpha}(r),
$$
where $I_{\alpha}$ is the usual modified Bessel function. We have also the integral representation due to Marc Yor,
$$
\theta_r(s) = \frac{r}{\sqrt{2\pi^3 s}} \int_0^{\infty} \exp\left(\frac{\pi^2-y^2}{2s}-r \cosh(y) \right)\sinh(y) \sin\left(\frac{\pi y}{s}\right)\,dy.
$$
Thus, I tried to find direct integration using Fubini's theorem as follows:
$$
\mathbb{E}\left[\left(\int_0^1 e^{\sigma B_t}dt \right)^{1/(1-\beta) }\mid e^{\sigma B_1}=z \right] =\\
\frac{4\sqrt{z}}{\pi\sigma^2}\exp\left( \frac{(\log z)^2}{2\sigma^2}\right)\int_0^{\infty} \exp\left( \frac{\pi^2-y^2}{\sigma^2/2} \right) \sinh(y) \sin\left(\frac{\pi y}{\sigma^2 /4}\right) \int_0^{\infty} x^{-\frac{1}{1-\beta}} e^{-\left(\frac{2(1+z)}{\sigma^2}+\frac{4\sqrt{z}}{\sigma^2}\cosh(y)\right) x } \, dx \, dy \\
=\frac{4\sqrt{z}}{\pi\sigma^2}\exp\left( \frac{(\log z)^2}{2\sigma^2}\right)\int_0^{\infty} \exp\left( \frac{\pi^2-y^2}{\sigma^2/2} \right) \sinh(y) \sin\left(\frac{\pi y}{\sigma^2 /4}\right) \left(\frac{2(1+z)}{\sigma^2}+\frac{4\sqrt{z}}{\sigma^2}\cosh(y)\right)^{\frac{\beta}{1-\beta}} \Gamma\left(-\frac{\beta}{1-\beta}\right) \, dy,
$$
where $\Gamma$ is the gamma function.
But for $0<\beta<1$ the exponent $-\frac{1}{1-\beta}$ is $\le -1$, so the inner integral $\int_0^\infty x^{-1/(1-\beta)} e^{-cx}\,dx$ diverges at $0$; correspondingly, the gamma function is evaluated at the negative argument $-\frac{\beta}{1-\beta}$, where its integral representation is not valid. Hence this formal computation does not give a correct answer.
So is there an alternative approach to find the conditional moment?
Thanks.
|
2025-03-21T14:48:31.283349
| 2020-06-16T09:12:35 |
363212
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Maanroof",
"Simon Henry",
"https://mathoverflow.net/users/22131",
"https://mathoverflow.net/users/89498"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630246",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363212"
}
|
Stack Exchange
|
Pushout of quasi-categories with finite coproducts
Suppose we have a commutative square
$\require{AMScd}$
\begin{CD}
A @>{f}>> B\\
@V{h}VV @V{k}VV \\
C @>{g}>> D
\end{CD}
of quasi-categories, such that $f,g,h,k$ are cofibrations in the Joyal model structure (i.e. these are monomorphisms of simplicial sets). Suppose further that $A,B,C,D$ have all finite coproducts, and that $f,g,h,k$ preserve these finite coproducts. Let $E$ be the homotopy pushout of $f$ along $h$ in the Joyal model structure. Assume that $E$ is a quasi-category. Does $E$ have all finite coproducts, and do the functors $B\to E, C\to E, E\to D$ preserve these?
I think the answer should be yes. A model for $E$ could be the following. We know that $B\cup C \subset D$ is the ordinary pushout, hence a homotopy pushout, since the Joyal model structure is left proper. Then we can take $E$ to be the smallest sub-simplicial set of $D$ such that all squares
\begin{CD}
\Lambda^k[n] @>{}>> E\\
@V{}VV @V{}VV \\
\Delta[n] @>{}>> D
\end{CD}
for $0<k<n$ have a diagonal filler $\Delta[n] \to E$. I think an $n$-simplex $\sigma:\Delta[n] \to D$ is then in $E$ if and only if there is some surjection $\delta:[m] \to [n]$ such that all edges of the spine of $\delta^*\sigma:\Delta[m] \to D$ are in $B\cup C$.
We have an initial object $0\in A$ which is also initial in $B,C$ and $D$. This should give the initial object of $E$, but I have a hard time proving that $E(0,X)$ is contractible, even for $X \in B \setminus C$. It seems to get quite messy to show this by hand, and it would be nice if there is either a slick argument or a reference I could use.
Edit Simon's example convinced me the answer to the question is probably no, and that at least my construction of $E$ is wrong as stated. I'd like to add the condition $B \cap C = A$, to more closely reflect the situation I am interested in.
Perhaps I should've said: take $E$ to be a homotopy pushout such that $E$ is a quasi-category.
Why in your example does $D$ have all finite coproducts? And wouldn't we have $E=D$ in your example? And I would say $\emptyset$ does not have all finite coproducts, since it does not have an initial object.
Right, I agree, that's a good counter-example, if one is willing to say that $\emptyset$ has all finite coproducts. But at least I don't believe my original idea anymore, and in particular your example shows that my construction of $E$ is wrong. I am going to edit the question to reflect the situation I'm interested in more closely.
Hum sorry, my counter-example isn't quite working... As you're editing the question I'm removing my comments that are not quite correct. My point was mostly that the assumption of the existence of $D$ does not give anything (you can always complete a pushout diagram with such a $D$ that will have all colimits and such that the arrows preserve coproducts), while in general the pushout has no reason to have coproducts of an object in $B$ with an object in $C$.
Well, I think I understood the gist of it, and it definitely showed that I was thinking wrongly about $E$ in the original formulation, so thanks anyway. I guess the point of $D$ is that the ordinary pushout can be computed as $B\cup C$ (under the added assumption), and this should be equivalent to $E$ in the Joyal model structure, although in general $B\cup C$ need not be a quasi-category itself.
A pushout $B \coprod_A C$ will almost never have all coproducts. The problem is that objects in $B \coprod_A C$ are all either objects of $B$ or objects of $C$, so if $B \coprod_A C$ has coproducts, it means that every time you take the coproduct of $b \in B$ with $c \in C$, it would have to be either in $B$ or in $C$.
To give a concrete example. Take $D$ to be the quasi-category of pairs of spaces, so $D=\mathcal{S} \times \mathcal{S}$.
Take $B$ to be the full subcategory of pairs whose first component is empty, and $C$ the full subcategory of objects whose second component is empty. $B$ and $C$ are equivalent to the quasi-category $\mathcal{S}$ of spaces, their intersection $A$ is equivalent to the terminal category $\Delta[0]$ (it only contains the empty space).
The homotopy pushout $B \coprod_A C$ can be shown to identify with the full subcategory of $D = \mathcal{S} \times \mathcal{S}$ of pairs of spaces $X \times Y$ where either $X$ or $Y$ is empty (this is not completely trivial). And it is not closed under coproducts in $D$.
|
2025-03-21T14:48:31.283632
| 2020-06-16T09:54:02 |
363214
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"ABIM",
"S.Surace",
"https://mathoverflow.net/users/35520",
"https://mathoverflow.net/users/36886",
"https://mathoverflow.net/users/69603",
"ofer zeitouni"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630247",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363214"
}
|
Stack Exchange
|
Kalman filter distribution of observation process
Let $(X_t,Y_t)$ be a pair of stochastic processes such that
$$
\begin{aligned}
dX_t =& A_t X_t dt + C_t dW_t,\\
dY_t = & H_t X_t dt + K_tdB_t
\end{aligned}
$$
for some non-random matrix-valued functions $A,C,H,K$ of appropriate dimension satisfying the usual conditions of the Kalman-Bucy filter. It's clear that $X_t$ follows a (multidimensional) Ornstein-Uhlenbeck process, so it is distributed as described in this wiki post. However, what is the distribution of $Y_t$? Obviously it's Gaussian (see standard proofs on Kalman filtering), so the meat of the question is: what are its mean and covariance?
Of course it is Gaussian.... Linear transformation of Gaussian process is Gaussian.
@oferzeitouni Yes, I have noticed this after posting (in fact it's in any standard control-theoretic derivation of the Kalman-Bucy filter) but I can't find a clear expression of its mean and covariance.
Note that the drift part and the martingale part of $Y_t$ are independent. Use Ito isometry to find the variance of the martingale part. The variance of the drift part can be turned into a double integral involving the covariance function of the multivariate OU process, which if I recall does not have a closed-form expression.
If we assume $A$ constant, $$\frac{d}{dt}\mathbb{E}(X_t )=A \mathbb{E}(X_t ),$$ so $\mathbb{E}(X_t)=e^{tA}X_0$ and $\mathbb{E}Y_t = Y_0 + \int_0^t H_s e^{sA}X_0\,ds$.
For the variance, we can assume $X_0=0$ and $Y_0=0$. And we have $$\frac{d}{dt}\mathbb{E}(X_tX_t^T )=A\mathbb{E}(X_tX_t^T )+\mathbb{E}(X_tX_t^T )A^T+C_tC_t^T $$so $$\mathbb{E}(X_tX_t^T )=\int_0^t e^{A(t-s)}C_sC_s^Te^{A^T(t-s)}ds.$$ Moreover for $u>t$ $$\mathbb{E}(X_tX_u^T )=\int_0^t e^{A(t-s)}C_sC_s^Te^{A^T(u-s)}ds.$$
For $Y$ we have $$\mathbb{E}(Y_tY_t^T)= \int_0^t\int_0^t H_s\mathbb{E}(X_sX_u^T)H_u^Tdsdu +\int_0^t K_s K_s^Tds \\ = 2\int_0^t\int_0^t\int_0^t H_se^{A(s-v)}C_vC_v^Te^{A^T(u-v)}H_u^T 1_{v\leq s \leq u}dsdudv +\int_0^t K_s K_s^Tds $$
$$\quad$$
In the case where $A$ is non-constant, we have to replace $e^{(t-s)A}$ by $U_tU_s^{-1}$, where $U_t$ is the solution of $\frac{d}{dt}U_t=A_tU_t$. Unfortunately, if the family $(A_t)$ doesn't commute, then there is no simple formula for $U_t$.
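As a numerical sanity check of these formulas (my addition, not part of the original answer), here is a scalar, constant-coefficient example in pure Python. It evaluates the drift part of $\mathbb{E}(Y_TY_T^T)$, i.e. the double integral of $H_s\mathbb{E}(X_sX_u^T)H_u^T$, by midpoint-rule quadrature and compares it with the closed form obtained by integrating the OU covariance explicitly. All parameter values and helper names are mine.

```python
from math import exp

# Scalar, constant-coefficient case: dX = a X dt + c dW, dY = h X dt + kq dB,
# with X_0 = 0 for the variance computation.  Parameter values are arbitrary.
a, c, h, kq = -0.7, 0.5, 1.3, 0.4
T, N = 2.0, 400
dt = T / N

def cov_X(s, u):
    """E[X_s X_u] for the OU process with X_0 = 0 (from the formula above):
    c^2 e^{a(s+u)} (1 - e^{-2 a min(s,u)}) / (2a)."""
    m = min(s, u)
    return c * c * exp(a * (s + u)) * (1.0 - exp(-2.0 * a * m)) / (2.0 * a)

# Drift part of Var(Y_T): double integral of h^2 E[X_s X_u], midpoint rule.
grid = [(i + 0.5) * dt for i in range(N)]
var_drift = h * h * sum(cov_X(s, u) for s in grid for u in grid) * dt * dt

# Independent closed form, obtained by integrating the covariance directly:
#   Var(int_0^T X_s ds) = c^2/(2a) * [ g^2 - (2/a)(g - T) ],  g = (e^{aT}-1)/a.
g = (exp(a * T) - 1.0) / a
var_drift_closed = h * h * (c * c / (2.0 * a)) * (g * g - (2.0 / a) * (g - T))

var_Y = var_drift + kq * kq * T  # add the Ito-isometry (martingale) part
print(var_drift, var_drift_closed, var_Y)
```

The two drift values should agree to quadrature accuracy, which supports the double-integral expression for $\mathbb{E}(Y_tY_t^T)$ given above.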
Do you happen to have a citeable reference to this? (Especially for the first part).
|
2025-03-21T14:48:31.283798
| 2020-06-16T10:31:47 |
363219
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Henri Cohen",
"Johannes Trost",
"Mark Wildon",
"VS.",
"https://mathoverflow.net/users/136553",
"https://mathoverflow.net/users/35959",
"https://mathoverflow.net/users/37436",
"https://mathoverflow.net/users/7709",
"https://mathoverflow.net/users/81776",
"user64494"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630248",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363219"
}
|
Stack Exchange
|
A good estimate for a binomial sum
Are there good estimates for the sums
$$1.\quad\quad\quad\quad\quad\sum_{i=1}^k\frac{\binom{2k}{i}}{i!}$$
$$2.\quad\quad\quad\quad\quad\sum_{i=1}^k\frac{\binom{2k}{i}\binom{2k}{2k-i}}{i!(2k-i)!}=\sum_{i=1}^k\frac{\big(\binom{2k}{i}\big)^2}{i!(2k-i)!}$$
in the form of $e^{f(k)}$ where $f(k)$ is a suitable function of $k$?
look at the largest term and apply Stirling's formula. For instance, for 1. you can take $f(k)=\sqrt{8k}$.
$\sqrt{8}$ factor? ok.
Henri Cohen's comment tells you how to get started. Almost always with binomial sums the number of summands is far less than the contribution from the largest summand, and the largest summand alone often gives a good asymptotic estimate.
Mathematica finds an asymptotic expression for the logarithm of the first sum, resulting in $$ \log\left[-1+\frac{e^{-2 \sqrt{2} \sqrt{k}-\frac{1}{2}} \left(e^{4 \sqrt{2} \sqrt{k}} \left(48 \sqrt{2} \sqrt{k}+31\right)-48 i \sqrt{2} \sqrt{k}+31 i\right)}{96\ 2^{3/4} \sqrt{\pi } k^{3/4}}\right],$$ and this is confirmed numerically.
For the first sum: for large $k$ the summands have a single sharp maximum at $i=\sqrt{2 k}$ (up to $O((2k)^{0})$). This can be seen, e.g., by equating the ratio of the summands for $i$ and $i+1$ to 1. The summand is approximated by a Gaussian function and the sum is approximated by an integral over that Gaussian function.
For this, write the summand as an exponential, approximate the exponent in terms of logarithms of $\Gamma$-functions and exploit Stirling's formula for large arguments of the $\Gamma$-functions. For $i$ insert $\sqrt{2 k}+\tau$ and expand the exponent up to second order in $\tau$, leading to said Gaussian. The sum over $i$ can be transformed into an integral over $\tau$. It does no harm to extend the integration limits from $\tau=-\infty$ to $\tau=+\infty$; only an exponentially small error is made. The Gaussian integral is readily calculated. Finally the result is expanded to order $O((2k)^{-1/2})$. The final result is:
$$
\sum_{i=1}^{k} \frac{{2 k} \choose i}{i!}\sim e^{2 \sqrt{2k}-\frac{1}{2}} k^{-1/4} \pi^{-{1/2}}2^{-5/4}
$$
The error for $k$ as small as 5 is already only about 6%. It is exactly the exponentially dominant part of the result mentioned in user64494's comment.
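A quick numerical comparison of the exact sum with this leading-order formula (my addition; the function names are mine):

```python
from math import comb, factorial, sqrt, pi, exp

def exact_sum(k):
    """Exact value of sum_{i=1}^{k} binom(2k, i) / i!."""
    return sum(comb(2 * k, i) / factorial(i) for i in range(1, k + 1))

def asym(k):
    """Leading-order Laplace approximation from the answer:
    e^{2 sqrt(2k) - 1/2} / (2^{5/4} sqrt(pi) k^{1/4})."""
    return exp(2.0 * sqrt(2.0 * k) - 0.5) / (2.0 ** 1.25 * sqrt(pi) * k ** 0.25)

for k in (5, 20, 50):
    s = exact_sum(k)
    print(k, s, asym(k), 1.0 - asym(k) / s)
```

The printed relative error shrinks as $k$ grows, consistent with the Laplace-method derivation (the precise error figure at a given small $k$ depends on how one measures it).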
The second sum is somewhat easier, since one immediately sees that the summand is symmetric around $i=k$, which is also its maximum. The maximum is very sharp for large $k$. One can exploit exactly the same recipe as for the first sum, except that one takes only half of the Gaussian integral into account, since the maximum lies at the edge of the summation range (the Gaussian is symmetric around $\tau=0$). The result is
$$
\sum_{i=1}^{k} \frac{{2 k} \choose i}{i!} \frac{{2 k} \choose {2 k-i}}{(2k-i)!}\sim e^{2k} k^{-2k-\frac{3}{2}}\pi^{-{3/2}} 3^{-1/2} 2^{-2+4 k}
$$
The error is about 12% for $k=50$.
Edit: The error of the second sum can be reduced considerably by making use of Euler-Maclaurin. The conversion of the sum to an integral underestimates the contribution at the summation limits. Euler-Maclaurin suggests adding half of the value of the summand for $i=k$. This overshoots a bit, but reduces the absolute relative errors to about a quarter of the original ones. The contribution of the lower limit can still be neglected, though; it is exponentially small.
In the approximation of the first sum, the contributions from both ends of the summation range are exponentially small.
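The second asymptotic formula and its Euler-Maclaurin correction can also be checked numerically (my addition; exact values are computed with `fractions.Fraction`, since the intermediate factorials overflow floating point at $k=50$):

```python
from math import comb, factorial, sqrt, pi, exp
from fractions import Fraction

def exact_sum2(k):
    """Exact value of sum_{i=1}^{k} binom(2k,i)^2 / (i! (2k-i)!) as a Fraction."""
    return sum(Fraction(comb(2 * k, i) ** 2, factorial(i) * factorial(2 * k - i))
               for i in range(1, k + 1))

def asym2(k):
    """Half-Gaussian (Laplace) approximation from the answer."""
    return exp(2.0 * k) * k ** (-2.0 * k - 1.5) * pi ** -1.5 / sqrt(3.0) * 2.0 ** (4 * k - 2)

def asym2_em(k):
    """Euler-Maclaurin correction: add half of the i = k summand."""
    peak = Fraction(comb(2 * k, k) ** 2, factorial(k) ** 2)
    return asym2(k) + 0.5 * float(peak)

k = 50
s = float(exact_sum2(k))
print(s, asym2(k), asym2_em(k))
```

The corrected value overshoots slightly, as described in the edit above, but lands noticeably closer to the exact sum.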
My guess is that these errors do not exceed $50\%$ even as $k\rightarrow\infty$; is that correct?
The errors (=1 - asymptotic value /exact value) monotonically decrease for increasing $k$. So the error for the first asymptotic formula is always less than 6% for $k \ge 5$. For the second asymptotic formula it is less than 26 % (12 %) for $k \ge 5$ ($k \ge 50$).
What I am asking is: if a Stirling-type estimate is the best estimate for a 'natural' sum or product, then is $50\%$ an upper bound?
@VS I am very sorry, but I am afraid I do not understand your second question. Can you please rephrase it? Do you mean that any general sum or product over the naturals can be approximated by the described method with error at most $50\%$? For this I would not know an answer, but I doubt it.
|
2025-03-21T14:48:31.284210
| 2020-06-16T10:45:32 |
363220
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"https://mathoverflow.net/users/142929",
"user142929"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630249",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363220"
}
|
Stack Exchange
|
The role of a combination of Eneström-Kakeya and Gauss-Lucas theorems: reference request or soft question, asking for this combination as tool
Over the past few days I have been trying to create problems or direct applications invoking the Eneström-Kakeya and Gauss-Lucas theorems for certain arithmetic functions that I know from analytic number theory. These are some of my combinations (I don't know if they are interesting in the context of the arithmetic functions that I'm using as coefficients of my polynomials, or if they are in the literature).
Example 1. For each integer $n\geq 1$ we define the polynomials $\mathcal{F}_n(z)=a_0+\sum_{k=1}^n\frac{z^k}{\sqrt[k]{p_k}}$, where $0<a_0\leq \frac{1}{2}$ is a real number (say we take $a_0=\frac{1}{4}$) and $p_k$ denotes the $k$-th prime number, and we consider the zeros of the polynomial $P_n(z):=\frac{d}{dz}\mathcal{F}_n(z)$. Then, under the assumption of Firoozbakht's conjecture, the polynomial $P_n(z)=\sum_{k=1}^n \frac{k }{\sqrt[k]{p_k}}z^{k-1}$ has all its zeros in $|z|\leq 1$, as a direct application of the Eneström-Kakeya theorem and the Gauss-Lucas theorem.
Example 2. Other similar examples, for which one gets a proposition by combining an application of the Eneström-Kakeya theorem with the Gauss-Lucas theorem, are the polynomials of degree $n$, for integers $n\geq 1$: $P_n(z)=\sum_{k=0}^n\frac{z^k}{\zeta(k)}$, where $\zeta(s)$ denotes the Riemann zeta function (here I need to take the Eneström-Kakeya theorem, for example, from [3]), and $P_n(z)=1+\sum_{k=1}^n\frac{z^k}{|G_k|}$, where $G_k$ denotes the Gregory coefficients.
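The Eneström-Kakeya hypothesis in Example 1 (a positive, nondecreasing coefficient sequence $a_k = k/\sqrt[k]{p_k}$, which Firoozbakht's conjecture guarantees for all $k$) can be checked directly for small $k$; here is a quick sketch (my addition, with helper names of my own):

```python
# Check that the coefficient sequence a_k = k / p_k^{1/k} of P_n in Example 1
# is positive and strictly increasing for the first few primes, which is the
# hypothesis the Eneström-Kakeya theorem needs to put the zeros in |z| <= 1.

def primes(n):
    """First n primes by trial division (fine for small n)."""
    ps = []
    cand = 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

n = 30
ps = primes(n)
# coeffs[k-1] = a_k = k / p_k^{1/k}, the coefficient of z^{k-1} in P_n.
coeffs = [(k + 1) / ps[k] ** (1.0 / (k + 1)) for k in range(n)]
increasing = all(c1 < c2 for c1, c2 in zip(coeffs, coeffs[1:]))
print(increasing)
```

For these small indices no conjecture is needed; the check is unconditional.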
Question. We denote by $C_n$ the convex hull of the zeros (working over $\mathbb{C}$ identified with $\mathbb{R}^2$, as usual) of a generic polynomial $P_n(z)$. We assume that its coefficients satisfy the assumptions of the Eneström-Kakeya theorem. As usual, you can denote your disks (or other subsets of $\mathbb{C}$) by $D(a)=\{z\in\mathbb{C}:|z|<a\}$, as you prefer. I would like to know in which contexts or scenarios of mathematics (those I have in mind are complex analysis and polynomials, maybe functional analysis, the theory of convex hulls, others...) a suitable combination of the Eneström-Kakeya and Gauss-Lucas theorems can be potentially interesting. Can you illustrate your words with an original application from your field, or can you add a reference, answering this question as a reference request? Many thanks.
I'm asking this as a soft question, to elucidate in a concise discussion what the role of a combination of both theorems could be (you can use higher-order derivatives of your polynomials, of order greater than $1$, in your applications). In case there is some application in the literature, I mean an explicit combination of the Eneström-Kakeya and Gauss-Lucas theorems, please answer my question as a reference request and I will try to find and read it in the literature.
I know these classical theorems, and their proofs, just from an informative point of view ([1] and [2] in Spanish), and my previous examples were attempts to get an application of the mentioned combination of both theorems.
References:
[1] Manuel Bello Hernández, PROBLEMA 371 of the section Problemas y Soluciones, La Gaceta de la Real Sociedad Matemática Española, Vol. 23 (2020), Núm. 2, Pages 331-332.
[2] Armengol Gasull, El teorema de Gauss-Lucas, Miniaturas matemáticas de La Gaceta de la RSME, La Gaceta de la Real Sociedad Matemática Española, Vol. 22 (2019), Núm. 3, Pág. 550.
[3] K. K. Dewan and N. K. Govil, On the Eneström—Kakeya theorem, Journal of Approximation Theory, Volume 42, Issue 3, November 1984, Pages 239-244.
[4] The Wikipedia page with title Gauss–Lucas theorem.
My question is whether you know from the literature, or can explain, whether a combination of both theorems, Eneström-Kakeya and Gauss-Lucas, can be potentially interesting in some subject of mathematics. I know that polynomials play a special role in many subjects of mathematics, for example the characteristic polynomial in algebra and the solution of (vectorial) differential equations, or polynomials associated to other objects, for example graphs. On the other hand, I think that the convex hull of a set of zeros of polynomials is also interesting.
I add this unrelated comment as a companion to the previous theorems: another result which I think is incredible is the following, which I know as On a Theorem of Abel by Shui-Hung Hou, published in The American Mathematical Monthly, Vol. 116, No. 7 (Aug.-Sep. 2009), pp. 629-630.
|
2025-03-21T14:48:31.284609
| 2020-06-16T12:44:32 |
363226
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alec Rhea",
"Andreas Blass",
"Andy Putman",
"Basj",
"Brendan McKay",
"DamienC",
"David White",
"Gabe K",
"Geoff Robinson",
"Gerhard Paseman",
"Gerry Myerson",
"Harry Wilson",
"Hollis Williams",
"Ivan Meir",
"Jochen Wengenroth",
"Jon Bannon",
"LSpice",
"Michael Lugo",
"Nate Eldredge",
"Nik Weaver",
"Peter LeFanu Lumsdaine",
"Timothy Chow",
"Willie Wong",
"Yemon Choi",
"Z. M",
"darij grinberg",
"https://mathoverflow.net/users/11540",
"https://mathoverflow.net/users/119114",
"https://mathoverflow.net/users/125275",
"https://mathoverflow.net/users/143",
"https://mathoverflow.net/users/14450",
"https://mathoverflow.net/users/158000",
"https://mathoverflow.net/users/160253",
"https://mathoverflow.net/users/176381",
"https://mathoverflow.net/users/21051",
"https://mathoverflow.net/users/2273",
"https://mathoverflow.net/users/23141",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/2530",
"https://mathoverflow.net/users/3106",
"https://mathoverflow.net/users/317",
"https://mathoverflow.net/users/3402",
"https://mathoverflow.net/users/3948",
"https://mathoverflow.net/users/43395",
"https://mathoverflow.net/users/4832",
"https://mathoverflow.net/users/491587",
"https://mathoverflow.net/users/6269",
"https://mathoverflow.net/users/6794",
"https://mathoverflow.net/users/7031",
"https://mathoverflow.net/users/7113",
"https://mathoverflow.net/users/763",
"https://mathoverflow.net/users/85239",
"https://mathoverflow.net/users/9025",
"https://mathoverflow.net/users/92164",
"marober",
"no upstairs"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630250",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363226"
}
|
Stack Exchange
|
Each mathematician has only a few tricks
The question "Every mathematician has only a few tricks" originally had approximately the title of my question here, but originally admitted an interpretation asking for a small collection of tricks used by all mathematicians. That question now has many answers fitting this "there exist a small set of tricks used by all mathematicians" interpretation. I find that swapping the quantifiers gives a better question. I.e. I am more interested in hearing about the small collections of tricks of individual mathematicians. Pointing back to the other question above, and Rota's article, what are the few tricks of Erdős, or of Hilbert?
Question: What are the few tricks of some individual mathematicians?
Of course, as the comment in the earlier question quips, a mathematician never reveals tricks...but one can hope. In your answers, please include the name of the mathematician, and their few tricks...perhaps some cool places where the tricks are used, i.e. some "greatest hits" applications of the tricks.
Note, I don't think that knowing these tricks can make you into Erdős or Hilbert, but a long time ago a friend told me that a talented mathematician he knew would approach research problems by asking himself how other mathematicians would attack the problem. This is sort of like writing in another author's style, which can be a useful exercise. Wouldn't it be neat to be able to ask yourself "How would Hilbert have attacked this problem?"
MO is a good place to collect these, because it often takes extended reading (as intimated by Rota) to realize the few tricks used by a certain mathematician. As a community, we may be able to do this.
I've answered, so obviously I have some investment in the question; but, now that I look at it, do we really need two nearly identical questions less than a day apart? Discussion in the comments on that question, and the favourably received answers, suggests that individual users' tricks are welcomed, not just universal tricks relevant for all mathematicians.
@LSpice: I'm totally cool if this gets deleted or closed down. My reason for posting is because I was quite disappointed that the other question ended up interpreted differently from this one. If somehow the answers over there collect the info here, let's close this one down.
I'd really like to know what Hilbert's tricks were, if there is any truth to Rota's comment. I hope this question stays open.
I much prefer this question to the one which looks similar but is (IMO) based on a misreading of what Rota wrote; so I have voted to close the other one, and if I could vote to keep this one open I would do so. (Also: hi Jon)
@YemonChoi, whatever the intent of the questions, the answers seem so similar (in spirit, not in actual content) that it's silly to have them in two places. If only one question survives, whichever it is, then is it possible to have the answers migrated from the other?
@LSpice Well I think that the other question is actively inviting answers that do not achieve what Jon is asking for here. I mean, we've already had generic answers to the other question like "interchange the order of summation" or "the Cauchy--Schwarz inequality", and TBH I foresee the quality of answers over there going down rapidly, as every random user goes "oh hai what about this trick I saw"
@LSpice if a question is closed, then, as long as it isn't also deleted, its answers will still survive and be visible to all.
So in other words, this question is $\forall \exists$ and the other is $\exists \forall$?
@Nate Eldredge: this is why I couldn't resist asking this question. The uniform assumption in many of the answers to the other question bothered me.
@JonBannon Hi Jon my original intention was definitely not to ask for only widely known common tricks although I suspected that many individual's tricks were probably more widely known than they personally suspected. If I was to edit my original question to make it clear that the answers posted here are welcome would you be happy that this question was closed and the answers migrated? Note that quite a few answers to my original question do refer to mathematicians individual tricks anyway. Just a thought in the interests of simplicity and consolidation :-)
@JonBannon I However if you are happier to keep them separate with yours more focused on your particular interest then no problem as I appreciate the benefits of this as well.
@Ivan Meir: I originally also thought it may be good to simply edit your question and just merge this one. (Remember that I asked this when someone who misinterpreted your question changed its title in a way that explicitly justified the misinterpretation found in many of the answers to it). The presence of so many "tricks used by every mathematician" answers there makes it annoying to find the answers I want to see. So I'm also not sure whether it is a good idea to merge these.
It's quite a weird situation. You can certainly edit your question and it would be no offense to me if answers to this question appeared there. The proliferation of answers of the other type are annoying, though.
I also made a mild edit to this question in order to clarify why it exists at all...in case that helps.
@JonBannon Thanks, sounds good.
Not sure it fits the question, but one demoralizing experience is to be pleased at proving a "new" result, and then to discover that you proved the same result 30 years earlier, and furthermore the earlier proof was much better than the "new" one.
@GeoffRobinson 30 years is longer than enough. Usually 1-2 years suffices.
We need more of these big lists. They should probably change it to all big lists.
The question is worded in a way that seems to imply we might speak of other mathematician's tricks, but I'm not sure I know the tricks of even my closest collaborators, except by osmosis; so I hope it's OK if I specify my own "one weird trick". The entirety of my research centres around the idea that, if $\chi$ is a non-trivial character of a compact group $K$ (understood either in the sense of "homomorphism to $\mathbb C^\times$", or the more general sense of $k \mapsto \operatorname{tr} \pi(k)$ for a non-trivial, irreducible representation $\pi$ of $K$), then $\int_K \chi(k)\mathrm dk$ equals $0$.
It's amazing the mileage you can get out of this; it usually arises for me when combining Frobenius formula with the first-order approximation in Campbell–Baker–Hausdorff. Combining it with the second-order approximation in CBH gives exponential sums, which in my field we call Gauss sums although that seems to intersect only loosely with how number theorists think of the matter. Curiously, I have never found an application for the third-order approximation.
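A toy finite-group analogue of this trick (my illustration, not from the answer): for the cyclic group $\mathbb{Z}/n$, the characters are $\chi_j(k)=e^{2\pi i jk/n}$, and the average of $\chi_j$ over the group vanishes unless $j\equiv 0 \pmod n$, the discrete version of "the integral of a nontrivial character over a compact group is $0$".

```python
import cmath

def char_average(n, j):
    """Average of the character chi_j(k) = e^{2 pi i j k / n} over Z/n."""
    return sum(cmath.exp(2 * cmath.pi * 1j * j * k / n) for k in range(n)) / n

print(abs(char_average(12, 0)))   # trivial character: average is 1
print(abs(char_average(12, 5)))   # nontrivial character: average is 0 (up to rounding)
```
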
This is exactly the kind of thing I'm looking for, and I think it is delightful to see things like this. It feels like hearing this from you over coffee at a chalkboard between talks at a conference. Thanks!
Btw, @LSpice, I only phrased the question to ask for other mathematician's tricks because "a mathematician never reveals his tricks". Your answer proves this wrong. Thanks again! I hope to see more of these autobiographical ones.
In an effort to get the ball rolling, and to illustrate why I think several answers on the other question don't really work as answers to this one, let me offer an attempt which I think is in the spirit that Jon intended — although I'm too rusty on the details to provide a proper analysis/explanation/justification.
The late Charles Read was (in)famous for constructing counterexamples in functional analysis, specifically in the world of Banach spaces and then later in the world of Banach algebras. While I don't think Rota's phrase "only a few tricks" does justice to Charles (or indeed was ever meant as being particularly accurate, given Rota's fondness for the soundbite), anyone who's had to study some of Charles's papers in detail will have noticed two themes that recur throughout his work.
"very rapidly" increasing sequences, which somehow encode the intuition that one builds a counterexample in stages, and in between each stage you need to go "far enough towards infinity to avoid intefering with what you did previously". These come up in his construction of an operator on $\ell_1$ with no non-trivial closed invariant subspaces, but if memory serves correctly they also turned up in the Loy–Read–Runde–Willis paper Amenable and weakly amenable Banach algebras with compact multiplication on constructing commutative radical amenable algebras with various seemingly opposing properties, and also came up in one of his later papers on Frechet algebras. Obviously the notion of separating out building blocks of moderately growing size along a lacunary sequence is an ancient one, but for reasons that I confess I don't fully understand, Charles was able to push this idea much further, usually using combinatorial arguments to keep control of the "localized construction at each stage" so that a sufficiently fast growing sequence would separate them out.
When $N$ is large "or infinite", the algebra of upper-triangular $N\times N$ matrices has a very large (Jacobson) radical, and so looks very different from Banach algebras such as $L^1(G)$ or ${\rm C}^\ast$-algebras which had tended to drive a lot of (over-)optimistic conjectures. There were several papers that seemed, underneath the formidable technical details, to have in mind this mental image: this is explicit in his "Commutative, radical amenable Banach algebras" paper, and implicit in his paper with Ghlaio Irregular abelian semigroups with weakly amenable semigroup algebra that constructs commutative semigroups which are far from being groups yet whose convolution algebras are weakly amenable. My point is that Charles did not just view the fact at the start of this paragraph as a known result to be quoted or used as a black box, he seemed to have a deep appreciation of how to use "identity + strictly upper triangular = invertible, albeit with a large inverse" as a guiding principle in his constructions.
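The linear-algebra principle behind the second theme, "identity + strictly upper triangular = invertible", can be made concrete in a few lines (my illustration, not from Read's papers): a strictly upper triangular matrix $N$ is nilpotent ($N^n = 0$ for an $n\times n$ matrix), so the Neumann series for $(I+N)^{-1}$ terminates after finitely many terms.

```python
# (I + N)^{-1} = I - N + N^2 - ... +/- N^{n-1} when N is strictly upper
# triangular, since then N^n = 0 and the Neumann series is finite.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def inv_I_plus_N(N):
    """Inverse of I + N for strictly upper triangular N, via the finite Neumann series."""
    n = len(N)
    term, total = identity(n), identity(n)
    for p in range(1, n):
        term = matmul(term, N)                # term = N^p
        sign = -1.0 if p % 2 else 1.0         # coefficient of N^p is (-1)^p
        total = [[total[i][j] + sign * term[i][j] for j in range(n)] for i in range(n)]
    return total

N = [[0.0, 2.0, 5.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]]
I_plus_N = [[N[i][j] + (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]
Inv = inv_I_plus_N(N)
print(matmul(I_plus_N, Inv))  # should print the 3x3 identity matrix
```

Note also how the entries of the inverse can grow with the size of the entries of $N$, in line with the "albeit with a large inverse" caveat above.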
There have been very few papers which seek to explain what is going on in Charles's constructions, either in an expository sense or in an "extend or refine" sense. Two that come to mind are: S. Grivaux and M. Roginskaya's paper
A general approach to Read's type constructions of operators without non-trivial invariant closed subspaces; or Chapter 5 of R. Skillicorn's PhD thesis Discontinuous homomorphisms from Banach algebras of operators
(This answer is difficult to write because I feel conscious that I've only managed a very superficial account of what is going on in the papers I refer to. Improvements and corrections would be very welcome.)
Thank you, Yemon! I hope the ball will keep rolling. I look forward to more answers like this one. I think it is ideal that such an answer invites clarification/correction and simultaneously serves as a kind of "book review" for a favorite theme/trick of the mathematician in question. This answer and the other by @LSpice are good prototype answers to this question.
A superficial account, in this sense of a million-mile overview, is surely what a question about tricks looks for—if you want the details, then read the paper!
“Most mathematicians know one method. For example, Norbert Wiener had mastered Fourier transforms. Some mathematicians have mastered two methods and might really impress someone who knows only one of them. John von Neumann had mastered three methods: 1) A facility for the symbolic manipulation of linear operators, 2) An intuitive feeling for the logical structure of any new mathematical theory; and 3) An intuitive feeling for the combinatorial superstructure of new theories.” - Ulam
So I guess that covers Wiener and von Neumann
I'm pleased that two of von Neumann's three methods are of the form "An intuitive feeling ...", because this agrees with my opinion that intuitive feelings (when not misleading) are extremely valuable --- and correspondingly hard to acquire.
I have two tricks: Dehn filling and drilling. I've used the former to study subgroup separability, as a technical trick to reduce the proof of tameness of Kleinian groups in the cusped case to the non-cusped case, to produce non-Haken 3-manifolds, as well as study exceptional (non-hyperbolic) Dehn fillings on a cusped manifold. I've used drilling also in the proof of tameness, to relate the volume of closed hyperbolic manifolds to cusped ones, and in the solution of Simon's conjecture about epimorphisms between knot groups.
As you might guess, these are really the same trick (one is the inverse operation of the other), but I like to think of them as two ;).
I like the idea of trying to recognize a mathematician by their tricks. It reminded me of the Brachistochrone problem, posed by Johann Bernoulli and solved by five mathematicians, including an anonymous solution by Newton. This is the source of Bernoulli's famous quote "tanquam ex ungue leonem," Latin for "we know the lion by his paw." What was it that made Newton's approach so immediately recognizable? It was his use of the Calculus of Variations, which he had used ten years earlier to solve the Minimal Resistance Problem. This approach uses in a fundamental way: intuition from physics, approximating infinitesimal curves by infinitesimal lines, and the use of truncated power series expansions. I'd say those tricks were quintessentially Newton's.
Isn't it "… by his claw"?
I checked this carefully, but know little about Latin. From what I understand, it does not literally translate to either phrase (throw it in Google translate if you don't believe me). I learned "paw" but a few sources I found said "claw". Probably depends who you heard the story from.
@LSpice, @ David: The story of ex ungue leonem is more tangled than I’d expected — the best-informed discussion I can find is in the Nature correspondence page, Stigler, Handley, Huxley, Bloemendal, Nature Vol. 333(6174), 1988, p592 — and traces it back to a Greek proverb, probably known to Bernoulli via Erasmus (hence in Latin); but in any case the literal translation seems to be from the claw, [we recognise/know] the lion, not paw under any reading I can see.
correction to previous comment, too late to edit: the linked letters suggest the phrase was probably known to Bernoulli via either Erasmus or Plutarch
When I was an undergraduate, I attended a talk by Peter Lax in Budapest. He had recently been awarded the Abel Prize, but attributed all his success to "integration by parts." It seems he has said this publicly a few times.
+1 though since Lax first allegedly said it, it has become rather cliche that this trick is the trick for many people who work in partial differential equations or harmonic analysis. :)
This is from the obituary in Notices of the AMS: In 1948 Laurent Schwartz visited Sweden to present his distributions to the local mathematicians. He had the opportunity of conversing with Marcel Riesz. Having written on the blackboard the integration-by-parts formula to explain the idea of a weak derivative, he was interrupted by Riesz saying, "I hope you have found something else in your life."
It feels a bit presumptuous to talk about another mathematician's favorite tools. However, there is something known as Uhlenbeck's trick, which definitely deserves mentioning.
One recurring theme in Karen Uhlenbeck's work is to use gauges in clever ways which make analysis tractable. For example, Terry Tao wrote a blog post about a deep result about connections with small curvature that she proved by combining the right choice of gauge with the continuity method.
The named version of this trick uses this idea in the context of Ricci flow. In simple terms, one uses an orthonormal frame which evolves in time and in which the curvature evolution equations greatly simplify. From a more conceptual standpoint, the idea is to consider a vector bundle $V$ which is isometric to the tangent bundle $TM$ and has a fixed metric $h$. Then, the Ricci flow acts to evolve the isometry between $V$ and $TM$. Although this is conceptually more complicated, the use of the fixed metric $h$ simplifies the evolution equations and allows one to find invariant curvature conditions, which play an essential role in the analysis.
You are right! I have removed the word "favorite" from the body of the question. The idea is, though, to associate mathematicians to the tools they tend to use and the way they used these tools. It would be very funny to claim that a trick was among someone's favorites and for that person to find out about it via that claim. Thank you for the nice answer.
Thanks. As far as named tricks go with the Ricci flow, there's also the Deturck trick. I'm not sure how that fits into his larger body of work so I didn't mention it in the answer.
As far as I'm aware, the DeTurck trick was something particular to Ricci flow, I think the trick has found other uses since then, although I can't remember if its used elsewhere by him (wouldn't surprise me if it was though).
My understanding of the Deturck trick is that you conjugate Ricci flow by a time-dependent diffeomorphism which produces a parabolic flow (and so bypasses the need for Hamilton's technical proof of existence). If Deturck used this type of idea elsewhere, it would be a good answer for this question.
In my field (symmetric functions and representation theory) there are a few tricks that some people are quite notorious for.
S. Assaf - Introduce new families of polynomials/(quasi)symmetric functions, and use dual equivalence.
P. Brändén - Generalize real-rootedness to the notion of stability.
A. Garsia - Introduce new operators acting on symmetric functions.
M. Haiman - Use super-hardcore algebra stuff to prove things about symmetric functions.
C. Krattenthaler - Compute a determinant.
D. Zeilberger - Use computer algebra (the WZ-algorithm in particular) and let S.B Ekhad do all the actual work!
I love "compute a determinant".
A. Zelevinsky: show that there is only One Ring.
D. Knuth: rewrite all algorithms in your own assembly language.
S. Fomin, A. Kirillov: derive everything from formal consequences of partially commuting operators.
A. Vershik, A. Okounkov: find enough $\mathfrak{sl}_2$-triples in the algebra.
I. G. Macdonald: introduce 9 new families of polynomials in one paper.
half the community: Wave your hands if the proof doesn't simplify. If that too fails, leave it to the reader. (SCNR.)
@darijgrinberg Would you have a link to the "9 families of polynomials" paper? I'd enjoy read this!
@Basj: I. G. Macdonald, Schur Functions: Theme and Variations.
Erdős' trick is discussed at length in Gowers' classic essay Two Cultures of Mathematics, where he describes it as follows:
If one is trying to maximize the size of some structure under certain constraints, and if the constraints seem to force the extremal examples to be spread about in a uniform sort of way, then choosing an example randomly is likely to give a good answer.
This is often combined with the following trick introduced by Shannon:
The expected value of a random variable lies between its minimum and its maximum. Therefore you can prove lower bounds on the largest possible value of a function on a set of objects by examining the expected value of that function on a random object.
One example of combining these techniques is the following well-known result:
Theorem: Every 3-SAT instance has an assignment of variables that satisfies 7/8ths of the clauses.
Proof: A random assignment of values satisfies 7/8ths of the clauses in expectation, since any particular clause (on three distinct variables) is false only when all three of its literals are falsified, which happens with probability 1/8.
We can even convert this into an efficient, deterministic, constructive proof! Let $S$ be the random variable that returns the number of clauses satisfied by a random assignment. Set the value of $x_0$ to $0$ (resp. $1$) and call the restricted version of $S$ that satisfies this condition $S_0$ (resp. $S_1$). Then $\frac{7}{8}=\mathbb{E}S = \frac{1}{2}\mathbb{E}S_0 + \frac{1}{2}\mathbb{E}S_1$, so at least one of the expected values on the right is $\geq 7/8$. That one tells you the correct value for $x_0$; now iterate.
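The iteration in the previous paragraph is the method of conditional expectations. A minimal Python sketch of this derandomization (the clause encoding `(variable, is_positive)` and the function names are my own, not from the source):

```python
def expected_satisfied(clauses, assignment):
    """E[# satisfied clauses] when variables in `assignment` (var -> bool)
    are fixed and the remaining variables are set uniformly at random.
    A clause is a list of (variable, is_positive) literals."""
    total = 0.0
    for clause in clauses:
        p_all_false = 1.0
        for var, pos in clause:
            if var in assignment:
                # a true literal kills p_all_false; a false one is certain
                p_all_false *= 0.0 if assignment[var] == pos else 1.0
            else:
                p_all_false *= 0.5
        total += 1.0 - p_all_false
    return total

def derandomized_assignment(clauses, variables):
    """Fix variables one by one, always keeping the conditional expectation
    at least as large as before (method of conditional expectations)."""
    assignment = {}
    for v in variables:
        e_true = expected_satisfied(clauses, {**assignment, v: True})
        e_false = expected_satisfied(clauses, {**assignment, v: False})
        assignment[v] = e_true >= e_false
    return assignment
```

Since the initial expectation is $\frac{7}{8}m$ for $m$ clauses and each greedy choice never decreases it, the final deterministic assignment satisfies at least $\frac{7}{8}m$ clauses.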
Cool! Thank you!
Erdős' trick is expanded upon at book length in "The Probabilistic Method" by Alon and Spencer.
Although Erdős largely pioneered this "trick" and used it effectively many times, this is not an example of a mathematician having only a few tricks. Erdős is rightly regarded as great because he had many wide-ranging "tricks".
Tao has recently submitted a preprint on exactly this topic in the case of the mathematician Jean Bourgain. The tricks in question are quantification of qualitative estimates, dyadic pigeonholing, random translations, and metric entropy and concentration of measure. As you say, he points out that knowing these tricks does not automatically give you the intellectual firepower of Bourgain, but that they are very useful nonetheless.
This is really nice! It complements the answers to another MO question about Bourgain.
Not only this, but the article you link to gives a perfect answer to my question here, interpreting the Rota quotation as I believe it was meant to be interpreted...
I want to mention a trick of Gilles Pisier. It is an extrapolation method: suppose you have some kind of inequality in an $L^p$ space and you want to get a reverse Hölder type inequality for $q<p$. Using this he has done much interesting work on Sidon sets, the Grothendieck inequality and the noncommutative Khintchine inequality. The trick is originally attributed to Rudin's famous paper "Trigonometric Series with Gaps".
Here is Jon's reply and some more explanations. In the paper "Trigonometric Series with Gaps", Rudin deals with the following kind of sets. Let $0<r<s<\infty.$ A set $E\subseteq \mathbb Z$ is of type $(r,s)$ if $\|f\|_s\leq B\|f\|_r$ for all trigonometric polynomials on $\mathbb T$ with Fourier coefficients supported on $E.$ Rudin proves that for $0<r<s<t<\infty,$ $E$ is of type $(r,t)$ if and only if it is of type $(s,t).$ The proof uses a reverse Hölder kind of inequality. It is an extrapolation trick, i.e. knowing something for $(s,t)$, one extrapolates to $(r,t).$ The same kind of trick was used to prove the noncommutative Khintchine inequality (https://arxiv.org/abs/1412.0222) for $p<1$. In every case the trick involves new technical difficulties, but the philosophy is the same. Pisier used the same kind of trick to obtain a new upper bound for the complex Grothendieck constant (https://www.sciencedirect.com/science/article/pii/0022123678900381). There are many other instances; looking carefully into his papers, one sees that he used this trick many times.
Thank you for the extended answer!!
Not me, but Donald Ervin Knuth:
Use clever notation! Especially for sums, recurrences, binomials, etc. he developed very useful variations (Concrete Mathematics [Graham, Knuth, Patashnik], The Art of Computer Programming [Knuth])
The notations he proposes are clear, and, more importantly, lead to an amazing amount of intuition, which wouldn't be possible otherwise.
Unfortunately one person's clear, intuitive notation is another person's awkward mess—or even, as I discovered when re-visiting my own notation later, the same person's awkward mess when the purpose to which it is to be applied shifts ever so slightly.
Ervin, not Edwin. Gerhard "Not Talking Evil Twin Here" Paseman, 2020.08.08.
@GerhardPaseman Fixed. What an embarrassing mistake.
@LSpice While I certainly mostly agree with your point of view, I kind of feel Knuth is a bit of an exception in that regard: He (usually) doesn't re-invent the wheel, but 'fixes' the exact problems you mention in existing notation by giving it a small, but clever, twist. - But it always comes down to personal taste and the problem you are trying to solve.
Gabe's answer, about Uhlenbeck's trick, reminded me of the Rabinowitsch trick in algebraic geometry. However, I don't know if Rabinowitsch used this trick in other work, or if it was indicative of his approach to mathematics. Good thing this is community wiki! I encourage anyone who knows more to edit with more details.
In all 200+ pages of my category theory notes, there were essentially three tricks I used in proofs:
Prove that two arrows are both the arrow induced by a universal property, so they're the same arrow.
Reason under the image of a faithful functor between two objects, then conclude that because they're the same under the image of a faithful functor they're equal in the original category.
Use diagrammatic coherence conditions to write an arrow in a new representation.
Over the course of hundreds of proofs, I don't think I ever had to do more than these three things.
I would be curious to see a theorem/proof in pure $1$-category theory that uses any method besides these three.
The proof of the adjoint functor theorem does not use these three trivial ideas, which is perhaps why it has so many nontrivial consequences.
@AndyPutman The proof on the nlab uses precomposition with an epi, which falls under category $2$ above -- are you aware of a different proof?
I never look at the nLab since whenever I’ve given it a chance in the past it ended up being totally unhelpful, and they might be proving a different statement than the one I’m thinking of. The proof I have in mind uses the proof technique of “construct an object with some mapping property by collecting together all possible answers while being careful that the result is a set”. I find it easier to appreciate this proof when you specialize it to a specific concrete application, and I wrote up one such account in Section 3 of https://www3.nd.edu/~andyp/notes/ConstructFree.pdf
It looks like you’re still using postcomposition with an epi, but I’m in a movie right now and can’t get specific — I’ll respond in greater detail later tonight.
@AndyPutman Yes, when you say that $\Phi$ is clearly unique because $\eta$ generates its codomain this is an epi argument in disguise; precomposition with an epi (or postcomposition with a mono) are both special cases of $2$.
That’s the trivial part of the proof. The actual idea of the proof (ie, the thing that gives it enough juice to have nontrivial consequences) is the construction, versions of which show up throughout mathematics.
@AndyPutman So it is a key ingredient? Are you aware of a way to finish the proof without it? (also I suspect the rest of the categorical parts of your proof cane reduced to the above three tricks, but don't have time at the moment to explore it).
I would not characterize it as a "key ingredient". Proofs of important theorems typically have some kind of important non-formal idea supported by routine formal manipulations. The non-formal idea is what is important. In this case, that idea shows up not just in pure category theory contexts like Freyd's adjoint functor theorem, but in many other areas as well (e.g., in the proof of existence of injective resolutions, in the proof of the Brown representability theorem, in Quillen's "small object argument" in the theory of model categories, etc).
@AndyPutman Sorry for the delay, things have been hectic over here; honestly I'm confused as to your point here. You begin by saying that the adjoint functor theorem 'doesn't use these trivial ideas'; I point out that a standard publicly available proof does use it, so you point out another proof you personally wrote up. I point out that you're still using these 'trivial' tricks, to which your response now seems to be 'well yes I do have to use them but I don't consider them important'? This seems pretty far afield from your original comment, unless I miss something.
What you claimed is that you don't know any result in this area that "uses any method besides these three". The adjoint functor theorem uses a very important method that is not on your list, and that idea is the main new idea in its proof. If you only knew the three trivial ideas you listed, you would not be able to even conceive of its proof (and probably you wouldn't even be able to figure out how to formulate it correctly).
@AndyPutman Where did I claim that? Also you have things backwards imo; the ubiquity of these tricks (which you admittedly can't avoid using) are what make them beautiful. (btw, the nlab proof of the general adjoint functor theorem subsumes/is shorter than your concrete group theoretic proof -- it's a great resource once you get comfortable with the abstract categorical language!)
The thing in quotation marks is a verbatim quotation from the end of your post. I'm perfectly happy with abstract categorical language, by the way. I've been doing this a long time, and have had to learn many languages.
@AndyPutman I said I'd be curious to see a result that (your quoted bit), not that I don't know any result that uses anything besides these three. It's been a few years since I wrote those notes and I'm fuzzy on how much did really amount to these three tricks (and I suspect there is more than just them in my notes, I just don't remember off-hand); I think in general I used them to prove arrows equal, but constructing the arrows in the first place had more to do with understanding the context they existed in. This is consistent with the 'other piece' of the adjoint functor theorem's proof.
A trick/technique that I like (and used) a lot is the formal geometry approach (after Gelfand-Kazhdan) for passing from a local to a global result.
Let $X$ be a $d$-dimensional manifold. There is an infinite-dimensional manifold $X^{coord}$ whose points are pairs $(x,\varphi)$, where $x\in X$ and $\varphi$ is a formal coordinate system around $x$ (in other words, $\varphi$ is an isomorphism between the formal neighborhood $\hat{X}_x$ of $x$ in $X$ and the formal neighborhood $\hat{\mathbb{R}}^d_0$ of $0$ in ${\mathbb{R}}^d$).
$X^{coord}\to X$ is a $G_d$ principal bundle, where $G_d$ is the automorphism group of $\hat{\mathbb{R}}^d_0$; hence $G_d$ is actually homotopy equivalent to $GL(d)$. Somehow, working $GL(d)$ equivariantly on $X^{coord}$ amounts to work on $X$.
This is of course not the whole story: $X^{coord}$ is in fact universal among spaces $Y\to X$ satisfying $Y\times_{X}\widehat{X\times X}\simeq Y\times \hat{\mathbb{R}}^d_0$, where $\widehat{X\times X}$ denotes the formal neighborhood of the diagonal in $X\times X$.
This allows one to give a precise meaning to the idea that $X$ is obtained as a kind of gluing of all formal neighborhoods of its points.
In derived geometry, these formal geometry methods have been subsumed by techniques involving Simpson's de Rham stack.
Edit : Fedosov used such methods to prove the existence of a star-product on a general symplectic manifold in A simple geometrical construction of deformation quantization (locally, existence is given by the Moyal star-product). Similarly, Kontsevich's proof of his famous formality theorem, in Deformation quantization of Poisson manifolds, goes in two steps: (a) prove a local formula, (b) globalize. The second step is proven using formal geometry methods. But one has to say that (a) is way more difficult than (b) for the formality theorem (whereas in Fedosov's paper, local existence is trivial).
Can you give a concrete example of a theorem about manifolds that makes no reference to formal geometry and is proved by this trick?
@AndyPutman yes. The existence of star products on symplectic manifolds was proven by Fedosov using formal geometry methods (locally we have existence, given by the Moyal star-product). Similarly, Kontsevich's proof of his famous formality theorem goes in two steps: (a) prove a local formula, (b) globalize. The second step is proven using formal geometry methods.
Thanks! You might think about adding those to your answer. I think this will help a reader appreciate what kinds of things this trick can be used for.
Characterizing a class of integers sharing some property $P$ by defining an arithmetic function taking a single value $k_{P}$ at those integers and then give an equivalent of this arithmetic function.
Finding properties of an object that are invariant under the action of some natural involution.
Saying mathematicians have only "a few tricks" makes mathematicians seem rather limited. But I recall some saying that great philosophers are engaged with only one big question. Perhaps this is true for all fields, after all there is that old adage which underlines this: jack of all trades, master of none.
It's also worth pointing out that writers only know 26 letters. But out of that has poured out all our literature. Some trick!
I recall reading somewhere that Ramanujan had a 'master technique'. According to Wikipedia this was the Mellin transformation of a function expressed as a power series.
Feynman in one of his popular books mentioned that he could often do integrals that his colleagues couldn't because he knew how to differentiate under the integral sign. Some people have begun to call it Feynman's trick.
In my personal field, applied optimal transport for PDEs, we often play the following game, so much so that some of my colleagues and I actually call it Brenier's trick (after Yann Brenier): when minimizing a convex functional $\rho\mapsto F(\rho)$ over some convex subspace (typically the space of probability measures) with linear constraints $L(\rho)=0$, write the constraints as a supremum of linear functionals over auxiliary multipliers, and use a convex/concave minmax theorem (Rockafellar, often, or any variant of von Neumann's minmax theorem in infinite dimension) to swap $\inf\sup=\sup\inf$ as
\begin{multline*}
\inf\limits_\rho\Big\{ F(\rho):\quad L(\rho)=0\Big\}=\inf\limits_{\rho} F(\rho)+
\begin{cases}
0 & \text{if }L(\rho)=0\\
+\infty&\text{else}
\end{cases}
\\
=\inf\limits_\rho\Big\{ F(\rho)+\sup\limits_\phi \langle L(\rho),\phi\rangle\Big\}
=\inf\limits_\rho\sup\limits_\phi F(\rho)+\langle L(\rho),\phi\rangle
\\
=\sup\limits_\phi\inf\limits_\rho F(\rho)+\langle L(\rho),\phi\rangle.
\end{multline*}
One can then solve the free optimization problem in $\rho$ and next in $\phi$ to retrieve a lot of significant information about the joint optimizers.
For example this can be used to retrieve in just a few lines, and at least heuristically, the right equations for Wasserstein geodesics (a forward continuity equation coupled with a backward Hamilton-Jacobi equation), the so-called Otto's calculus, and many other variants thereof for a whole variety of models.
Of course this is nothing but a Lagrange multiplier method for constrained optimization, but it shows up so often in applied optimal transport problems that I thought it would be worth mentioning here.
|
2025-03-21T14:48:31.287357
| 2020-06-16T12:56:50 |
363228
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630251",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363228"
}
|
Stack Exchange
|
Can every $\mathfrak{S}_n$-linear map be realized by a multiplication?
Let $G=\mathfrak{S}_n$ be the symmetric group on $n$ elements. Via permuting the variables, the polynomial ring $S=\mathbb{C}[x_1,\ldots,x_n]$ becomes a $G$-module. It is not hard to see that every irreducible representation of $G$ appears in $S$: for example, the span of the orbit of the monomial $\prod_{i=1}^nx_i^i$ is the regular representation of $G$. We consider the natural map $M: S\to\textrm{Hom}(S,S)$ that assigns to a polynomial $f$ the map $S\to S,\,g\mapsto f\cdot g$. Clearly, $M$ is $G$-linear. I am interested in how general this map is.
More precisely, let $U,V,W$ irreducible $G$-modules and let $F:U\to\textrm{Hom}(V,W)$ a $G$-linear map. Are there $G$-linear maps $A: U\to S$, $B: V\to S$ and $C: S\to W$ such that for all $u\in U$ we have $F(u)=C\circ M(A(u))\circ B$?
Yes. Take an injective map $A : U \to S$ and an injective map $B' : V \to S$.
Now compose $B'$ with the map that sends $x_i$ to $x_i^m$ for all $i$, where $m$ is greater than the degree of any polynomial in the image of $A$.
Then the composed map $A \otimes B : U \otimes V \to S$ will be injective because, as a module, $S$ is the tensor product of polynomials of degree $<m$ in each variable with polynomials in $x_1^m,\dots, x_n^m$, and tensor products of injective maps are injective.
Hence the map $U \otimes V \to W$ factors through the image of $A \otimes B$ in $S$. Since $S_n$ is a finite group, by semisimplicity, we can extend this map from the image to the whole space.
|
2025-03-21T14:48:31.287497
| 2020-06-16T13:12:50 |
363229
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Scott Armstrong",
"https://mathoverflow.net/users/5678"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630252",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363229"
}
|
Stack Exchange
|
Modified variational formulation of heat equation
The heat kernel $u:\mathbb{R}^n\times (0,\infty)\to\mathbb{R}$ is defined as the solution to
$$
u_t = \Delta u,
$$
subject to certain boundary conditions and can alternatively be described, in variational form, as the minimizer of
$$
\int_{t,x} \|u_t - \Delta u\|^2 dtdx,
$$
over all $C^{1,2}((0,\infty)\times \mathbb{R}^n)$ functions when those boundary conditions are enforced.
What can be said about the modified variational problem
$$
\min_{u \in C^{1,2}}\int_{t,x} \|u_t(t,x) - \Delta u(t,x)\|^2
+
\|f(x) - u(t,x)\|^2
dt dx ?
$$
Here $f$ is a fixed $C^{2}(\mathbb{R}^n)$ function (note that it does not depend on $t$). In general, can we describe $u$ explicitly (or at least with a power series, though less preferably)?
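For what it's worth, a formal first-variation computation (only a sketch: boundary terms in $t$ and $x$ are discarded, and no claim is made about the correct function space) suggests that a minimizer $u$ should satisfy the fourth-order Euler–Lagrange equation
$$
-(\partial_t+\Delta)\bigl(u_t-\Delta u\bigr)+(u-f)=0,
\qquad\text{i.e.}\qquad
-u_{tt}+\Delta^2 u+u=f,
$$
which couples the heat operator $\partial_t-\Delta$ with its formal adjoint $-\partial_t-\Delta$.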
I think there is a better/more natural variational formulation of parabolic equations due to Brezis and Ekeland. This theory is revisited in a paper of mine here: https://arxiv.org/pdf/1705.07672.pdf
This alternative variational formulation allows for a systematic functional analytic development in the natural parabolic analogue of $H^1$, which mirrors the classical (variational) theory of elliptic equations found in, e.g., Evans' book. The variational problem is uniformly convex in this space, etc.
|
2025-03-21T14:48:31.287610
| 2020-06-16T13:22:49 |
363230
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Aditya Guha Roy",
"Dieter Kadelka",
"https://mathoverflow.net/users/100904",
"https://mathoverflow.net/users/109471"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630253",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363230"
}
|
Stack Exchange
|
Decaying probabilities
A coin $C$ is tossed $n$ times. The coin $C$ is known to have the following properties :
Let $p_i$ denote the probability of showing heads in the $i$-th toss, and $q_i$ the probability of showing tails in the $i$-th toss, so that $p_i + q_i = 1$ for all $i$,
If the $i$-th toss gives a heads, then $p_{i+1} = k \cdot p_i$ (where $k \in (0,1]$ is a fixed real number) and $q_{i+1} = 1- p_{i+1},$
If the $i$-th toss gives a tails, then $q_{i+1} = \ell \cdot q_i$ (where $\ell \in (0,1]$ is a fixed real number) and $p_{i+1} = 1- q_{i+1}.$
What is the probability that the sequence of outcomes doesn't have two consecutive heads in it ?
The case when $k= \ell = 1$ simply leads us to the Fibonacci recursion, and is therefore easy to handle.
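For small $n$, the probability can be computed by direct recursion over the state (current heads-probability, whether the last toss was heads). A sketch, assuming the intended rules $p_{i+1}=k\,p_i$ after heads and $q_{i+1}=\ell\,q_i$ after tails with $p_i+q_i=1$ throughout, and taking $p_1=1/2$ as a default (the question does not specify $p_1$; the function name is mine):

```python
def prob_no_HH(n, k, l, p1=0.5):
    """Probability that n tosses contain no two consecutive heads.

    State: (next toss index, current P(heads), whether last toss was heads).
    Rules assumed: after heads, p' = k*p; after tails, q' = l*q and p' = 1 - q'.
    Exhaustive recursion, exponential in n -- fine for a small demo.
    """
    def rec(i, p, last_heads):
        if i > n:
            return 1.0
        total = 0.0
        if not last_heads:                       # heads allowed only after a non-head
            total += p * rec(i + 1, k * p, True)
        q = 1.0 - p
        total += q * rec(i + 1, 1.0 - l * q, False)
        return total

    return rec(1, p1, False)

# k = l = 1 reduces to the Fibonacci count F(n+2)/2^n:
prob_no_HH(2, 1, 1)   # 3/4
prob_no_HH(3, 1, 1)   # 5/8
```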
Motivation behind considering the problem :
There are several situations where one can have either a failure or a success, and the probabilities of success or failure change. For instance, consider a seasonal ailment which makes its incidence in humans at most once each year. Now, given that a person has already suffered from the ailment in a certain year, his/her immunity may change, so that the probability that the person faces the ailment in the next year changes.
Only for my interest: What do you mean by "Fibonacci recursion"? (Of course $k = \ell = 1$ are easy to handle.)
Take out the first two outcomes. If the first one is tails, then the remaining length-$(n-1)$ sequence just has to avoid two consecutive heads; and if the first outcome is heads, then the second must be tails, and thereafter the remaining length-$(n-2)$ sequence has to avoid two consecutive heads. So, the Fibonacci recursion.
Building on a hint that was provided by prof. Arnab Chakraborty from Indian Statistical Institute, Kolkata, India (where I study) I am writing this answer.
Let us consider a more general problem :
suppose we have a coin $C$ which shows heads with probability $x$ and tails with probability $1-x$, and suppose $y \in [0,1].$ Let our interest be to generate a random variable $X$ which takes the value $1$ with probability $y$ and the value $0$ with probability $1-y$ using the given coin.
We first construct an unbiased coin tossing random variable (that is a binomial$(1,0.5)$ random variable).
For this we do the following: toss the coin $C$ twice. If the sequence of outcomes is $H,T$ then declare $Y=1$. Otherwise, if the sequence of outcomes is $T,H$ then declare $Y=0.$ For any other sequence of two outcomes, we just reject those two tosses and toss $C$ twice again.
So, this would give us a random variable $Y \sim \text{binomial}(1,0.5).$
Next we consider the binary expression for the probability $y,$ and accordingly use the random variable $Y$ constructed above, to get a random variable $X \sim \text{binomial}(1,y).$
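The two steps above can be sketched as follows. This is a hedged illustration assuming a coin of fixed bias; `flip`, `fair_bit`, and `bernoulli` are my own names, the first function being the classical von Neumann trick described in the answer:

```python
import random

def fair_bit(flip):
    """von Neumann trick: toss the biased coin twice; HT -> 1, TH -> 0,
    otherwise discard the pair and repeat."""
    while True:
        a, b = flip(), flip()
        if a != b:
            return a        # (1, 0) gives 1 and (0, 1) gives 0

def bernoulli(y, fair):
    """Return 1 with probability y, using only fair bits: compare a
    uniform U in [0, 1), revealed bit by bit, against the binary digits of y."""
    while True:
        y *= 2
        digit = int(y >= 1)
        y -= digit
        bit = fair()
        if bit != digit:
            # U < y iff U's first differing binary digit is the smaller one
            return int(bit < digit)
```

Each fair bit costs $1/(2x(1-x))$ coin pairs on average, and the comparison against the binary expansion of $y$ terminates after two fair bits in expectation.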
|
2025-03-21T14:48:31.288212
| 2020-06-16T13:24:55 |
363231
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alexandre Eremenko",
"Ben",
"LSpice",
"Steven Gubkin",
"https://mathoverflow.net/users/1106",
"https://mathoverflow.net/users/159700",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/25510"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630254",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363231"
}
|
Stack Exchange
|
Is there a name for $f(M, x) = x^\top M x$?
I often encounter things of the form $x^\top M x$, where $M$ is symmetric positive (semi-)definite. Is there a term for that? I know related terms:
We can say $M$ induces a bilinear form, $M(x,y) = x^\top M y$, with an induced norm $\|x\|_M := \sqrt{x^\top M x}$, so I could call $x^\top M x = \|x\|_M^2$ "the squared norm of $x$ with respect to $M$", I guess?
This is related to Mahalanobis distance, although that's very particular to statistics.
This is related to the idea of a metric tensor, where $M$ plays the role of the metric tensor.
It looks like this can be called a "change of basis" of a bilinear form.
So: In general, is there a name for $f(M, x) = x^\top M x$? There must be, right?
Change of basis would replace $M$ by $A^\top M A$ for a square matrix $A$, whereas I assume you mean $x$ to be a vector.
I was picturing $x$ being either a matrix or a vector. It seems like the common nomenclature makes a distinction when I was ideally looking for a term that doesn't make that distinction.
Yes, there is a name: quadratic form, and I think this question is not appropriate for this site.
You can say that $x^\top M x$ is the quadratic form associated to $M$.
Would you say that even if $x$ is a matrix?
No, I don't think I would. Didn't realize you were permitting $x$ to be a matrix!
On the other hand, $A^\top A$ is a quadratic form, right? It is "a thing squared".
Read the definition of real quadratic form here:
https://en.wikipedia.org/wiki/Quadratic_form#Real_quadratic_forms
I think "quadratic form" is it. So when writing code, would M.quadraticForm(x) then be reasonable? Part of the confusion for me of naming $f(M, x)$ (as a binary function) is I don't see a clear ordering, and clearly $M^\top x M$ is different from $x^\top M x$.
Upon further consideration, from a programming standpoint, I think the binary version should take $M$ first then $x$, since $M$ is more important – it's the quadratic form associated to $M$. That is: template <typename Matrix, typename VecOrMatrix> auto quadraticForm(Matrix&& M, VecOrMatrix&& v) { return v.transpose() * M * v; } Like this: https://godbolt.org/z/dSBTXW
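For completeness, a small NumPy sketch of the convention discussed in these comments ($M$ first, then $x$, with $x$ either a vector or a matrix); the function name is my own:

```python
import numpy as np

def quadratic_form(M, x):
    """Evaluate x^T M x, taking M first as suggested above.
    Works for a vector x (scalar result) or a matrix x (matrix result)."""
    x = np.asarray(x)
    return x.T @ M @ x

M = np.array([[2.0, 0.0], [0.0, 3.0]])   # symmetric positive definite
quadratic_form(M, np.array([1.0, 2.0]))  # 2*1 + 3*4 = 14.0
quadratic_form(M, np.eye(2))             # congruence by the identity recovers M
```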
|
2025-03-21T14:48:31.288427
| 2020-06-16T13:32:25 |
363232
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"David White",
"MoreauT",
"Simon Henry",
"https://mathoverflow.net/users/11540",
"https://mathoverflow.net/users/152969",
"https://mathoverflow.net/users/22131"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630255",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363232"
}
|
Stack Exchange
|
Do limits in Waldhausen categories commute with ordinary limits?
Disclaimer : I asked this question on MSE, I have no answer and I think it's better to ask it here.
Let $(A,\mathcal{W}, \mathcal{C})$ be a Waldhausen category with $A$ an additive category.
On one hand, we can define the ordinary limits $lim_A$ of the underlying category $A$.
On other hand, we can define limits of Waldhausen categories via the universal property of a diagram with some arrows in $\mathcal{C}$ .
For example we can define $ker_{\mathcal{C}}(f) \stackrel{i}{\rightarrow}X\stackrel{f}{\rightarrow}Y$ where $i \in \mathcal{C}$ has the universal property of the kernel for $j \in \mathcal{C} | fj=0$.
My question is:
if they exist, do ordinary limits and Waldhausen limits commute?
In particular, do ordinary countable products and Waldhausen kernels commute?
Or :
do we have some conditions such that ordinary limits and Waldhausen limits commute?
(I'm interested by the second case, but the first one implies the second one.
And in my particular case $\mathcal{W}=Iso$)
Unless I'm misunderstanding your question, the answer is yes. Your second way of defining limits, via "the universal property of a diagram with some arrows in $\mathcal{F}$," is actually a special case of the normal definition of a limit, and limits commute. For your specific question of interest, the key observation is that $ker_{\mathcal{F}}(f)$ is the pullback of the co-span below.
$$ \require{AMScd} \begin{CD}
ker_{\mathcal{F}}(f) @>>> X
\\ @VVV @VVV
\\ 0 @>>> Y
\end{CD}
$$
where $0$ is the zero object.
I edited my question, I'm looking for Waldhausen categories and not coWaldhausen...
Since we don't know that cofibrations are closed under pullback,
in your diagram we want $ker_\mathcal{C}(f) \rightarrow X$ to be a cofibration. I agree that limits commute, but I'm not seeing why $ker_\mathcal{C}$ is a limit in $A$...
In my answer, nowhere is it used that $f: X\to Y$ is a fibration. It works just as well if $f$ is a cofibration instead. The point is that the kernel is a pullback, hence a limit, regardless of the (co)fibration structure. You could choose any class of (co)fibrations and the kernel would still be a limit. Check out Weibel's "The K Book" to learn more.
You seem to be claiming that $ker_\mathcal{C} = \ker$? If I understand the question correctly, $ker_\mathcal{C}$ is essentially the kernel in the category "$\mathcal{C}$" (the non-full subcategory of all objects and arrows that are in $\mathcal{C}$).
Hmm, maybe. He edited the question, and I'm swamped with a July 1 deadline, so not really following the edits.
|
2025-03-21T14:48:31.288625
| 2020-06-16T13:42:25 |
363234
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Evgeny Shinder",
"Hacon",
"Mobius",
"Piotr Achinger",
"https://mathoverflow.net/users/111491",
"https://mathoverflow.net/users/117078",
"https://mathoverflow.net/users/19369",
"https://mathoverflow.net/users/3847"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630256",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363234"
}
|
Stack Exchange
|
A criterion for isotriviality of families of varieties of general type
Let $f:X\rightarrow Y$ be a surjective map between smooth varieties with connected fibers. Assume that the generic fiber of $f$ is of general type, and everything is over $\mathbb{C}$. $f$ is said to be isotrivial if all its smooth fibers are isomorphic.
When $\mathrm{dim}\,X=2$ and $\mathrm{dim}\,Y=1$, if $\mathrm{deg}(f_*\omega_{X/Y})=0$, then $f$ is isotrivial; see BPV, Chapter III, Theorem 17.3.
My question is: does this remain true for $\mathrm{dim}\,X\geq 3$ and $\mathrm{dim}\,Y=1$?
The tools used to prove this are the period map and Torelli's theorem for curves.
In case $\mathrm{dim}X=2$ and $\mathrm{dim}Y=1$:
If $\mathrm{deg}(f_*\omega_{X/Y})=0$, then the period map $$\mathcal{P}:Y\rightarrow \overline{\Gamma/D}$$ is constant, where $D$ is the period domain and $\Gamma$ is the monodromy group. Thus $\mathcal{P}(Y^0)$ is a point in $\Gamma/D$, where $Y^0\subseteq Y$ is such that $f^0:X^0=f^{-1}(Y^0)\rightarrow Y^0$ is smooth. By Torelli's theorem, all nonsingular fibers will be isomorphic.
In higher-dimensional cases, it seems that Torelli's theorem is not true in general. Is it true that $\mathrm{deg}(f_*\omega_{X/Y})=0$ implies the associated period map is constant?
The title of the question has general type in it, but not the question itself! Without general type assumption, e.g. for del Pezzo surfaces, the statement will be definitely false because the canonical class of the fibers will be negative, so the pushforward will be zero, but del Pezzo surfaces have moduli.
Thank you for the reminder! The assumption that the generic fiber of $f$ is of general type is necessary for my question.
Couldn't this sheaf be zero?
A stupid counterexample would be to blow up a moving family of curves inside a product of two fake projective planes, in which case $f_* \omega_{X/Y} = 0$. I guess that a corrected conjecture would be that if $\deg f_* \omega_{X/Y}^\nu = 0$ for all $\nu>0$ then the family is birationally isotrivial.
I agree with Piotr; in fact the "corrected conjecture" seems to be a theorem in great generality (log case; any dimension fiber/base) see Theorem 1.3 https://arxiv.org/pdf/1503.02952.pdf. For surfaces, I would expect $\nu =5$ or $4$ should suffice (as $5K_X$ recovers the canonical model and $4K_X$ gives a birational map if I remember correctly).
|
2025-03-21T14:48:31.288810
| 2020-06-16T13:50:04 |
363237
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630257",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363237"
}
|
Stack Exchange
|
Can an orderless set of inner products between N vectors determine a unique structure of the vectors?
Suppose we have $n$ vectors $\{a_1,a_2,a_3,\dots,a_n\}$ whose sum is the zero vector:
$$a_1+a_2+a_3+\dots+a_n=0.$$
Now, we compute the inner product of each pair of these vectors, i.e. we compute their Gram matrix and take the strictly upper triangular part.
Then, we make those inner products into an orderless set.
Now, can we determine a unique structure of the vectors from the set?
For example, when $n=3$ it's feasible.
Suppose we have 3 vectors $\{a,b,c\}$ such that $a+b+c=0$. Then $c=-a-b$, and the set of inner products of them is
$$\{a\cdot b,\ a\cdot c,\ b\cdot c\} = \{a\cdot b,\ a\cdot(-a-b),\ b\cdot(-a-b)\} = \{a\cdot b,\ -|a|^2-a\cdot b,\ -|b|^2-a\cdot b\}.$$
Then we can find
$$|a|^2 = -\big((-|a|^2-a\cdot b)+a\cdot b\big),\qquad |b|^2 = -\big((-|b|^2-a\cdot b)+a\cdot b\big),\qquad \cos\angle(a,b) = \frac{a\cdot b}{|a|\,|b|}.$$
So we know the lengths of $a$ and $b$ and the angle between $a$ and $b$, and by the condition $c=-a-b$, the structure of the 3 vectors is unique.
Even if we don't know the order of the set of inner products, say it is $\{k_1, k_2, k_3\}$, we just need to choose any one of them, say $k_2$, and compute
$$i=-(k_2+k_1),\qquad j=-(k_2+k_3),\qquad \cos\angle = \frac{k_2}{\sqrt{i}\,\sqrt{j}}.$$
Then the structure is determined and unique.
So what about $n>3$?
Note: if one structure can be transformed into the other by reflections, we regard them as the same structure.
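As a quick numerical sanity check of the $n=3$ argument above (my own illustration, not from the post; the dimension and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=2)
b = rng.normal(size=2)
c = -a - b                      # enforce the zero-sum constraint

# the three pairwise inner products (the "orderless set" for n = 3)
k_ab, k_ac, k_bc = a @ b, a @ c, b @ c

# recover squared lengths and the angle, exactly as in the argument above:
# a.c = -|a|^2 - a.b  =>  |a|^2 = -(k_ac + k_ab), and similarly for b
a2 = -(k_ac + k_ab)
b2 = -(k_bc + k_ab)
cos_ab = k_ab / np.sqrt(a2 * b2)

assert np.isclose(a2, a @ a)
assert np.isclose(b2, b @ b)
assert np.isclose(cos_ab, (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The recovered lengths and angle, together with $c=-a-b$, pin down the configuration up to an isometry.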
|
2025-03-21T14:48:31.288941
| 2020-06-16T13:51:53 |
363239
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Alexandre Eremenko",
"Andrew Peter Prifer",
"Dave L Renfro",
"Iosif Pinelis",
"J.J. Green",
"LSpice",
"Lee Mosher",
"Michael Bächtold",
"Michael Renardy",
"Will Sawin",
"YCor",
"darij grinberg",
"https://mathoverflow.net/users/12120",
"https://mathoverflow.net/users/14094",
"https://mathoverflow.net/users/15780",
"https://mathoverflow.net/users/159703",
"https://mathoverflow.net/users/18060",
"https://mathoverflow.net/users/20787",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/2530",
"https://mathoverflow.net/users/25510",
"https://mathoverflow.net/users/35508",
"https://mathoverflow.net/users/36721",
"https://mathoverflow.net/users/5734",
"https://mathoverflow.net/users/745",
"https://mathoverflow.net/users/84768",
"reuns",
"user1504"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630258",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363239"
}
|
Stack Exchange
|
Why did mathematical notation stay so hard to read?
One of the first things you learn in a programming 101 course is to write readable code, and to name your variables properly. This notion has seemingly never translated into mathematics. Everywhere you look, there are one letter constants, variables and functions, and an abundance of hard to remember symbols for operators, crammed together into tight, linearly laid out expressions. Characters are often borrowed from Greek, being short of Latin ones.
As soon as you get into college-level mathematics, any non-trivial mathematical expression starts to look like signal noise that programmers would instantly ridicule if it were a programming language.
Was there ever a movement to make mathematics more readable? Is being so succinct really worth it? Does it get better with enough experience? Would using memorable names for mathematical symbols and operators have any downside aside from length? Would a neatly indented, airier layout/syntax for expressions help?
I'm not trying to stir up an argument, I'm genuinely curious, having been frustrated by this for a long time, and I'd be keen to hear what actual mathematicians think about this.
Specific examples might help your question.
Typically motivating this meta-post: Primarily opinion-based questions
Often in math variables don't actually have any practical meaning. I don't think it will help to state the Baker-Campbell-Hausdorff formula as $\operatorname{LogOfProdOfExps}(\operatorname{FirstLieAlgElem}, \operatorname{SecondLieAlgElem} ) = \operatorname{FirstLieAlgElem} + \operatorname{SecondLieAlgElem}.+ \frac{1}{2} \operatorname{Commutator} (\operatorname{FirstLieAlgElem}, \operatorname{SecondLieAlgElem} ) + \dots$ or a less intentionally-unnecessarily-wordy example along the same lines.
What @WillSawin said. Keep in mind that mathematics is often done on paper, where there is no tab-completion.
Was there ever a movement to make mathematics more readable? My understanding is that that's at least part of what motivated Iverson's work, and … well, not everyone agrees that readability is what resulted.
Strongly related are Why are symbols not written in words? and Why do mathematicians use single-letter variables? Somewhat related are Does notation ever become “easier”? and How to avoid getting “lost in notation”
The elementary answers available to some of your questions indicate that this post was not thought through very carefully. "Was there ever a movement to make mathematics more readable?" Yes, it's the movement called "refereeing"; every referee insists on improved readability. "Does it ever get better with enough experience?" Yes, every paper a mathematician writes is improved by their continued experience with reading other papers and writing their previous papers.
I think mathematical formulas are much easier (for humans) to grasp and read than programming code. Indeed, mathematical formulas are written for humans, whereas programming code is written for computers.
@IosifPinelis: programming code has to read by computers, but that does not preclude it being written for humans, for many modern languages the latter is the primary goal of the design
It is a question of habit. As a mathematician, I find computer programmers' notations awkward and hard to read and memorize.
@J.J.Green : Following your link, I have found this sample of Ruby code, supposedly written for humans: "a = "\nThis is a double-quoted string\n"
a = %Q{\nThis is a double-quoted string\n}
a = %{\nThis is a double-quoted string\n}
a = %/\nThis is a double-quoted string\n/
a = <<-BLOCK
This is a double-quoted string
BLOCK"
I will probably have a headache if I try to read 50 lines of such code. Also, I see many people asking questions online such as "Is Ruby dead?"
Have you ever tried to read mathematics from the middle ages? No one-letter constants and variables, all ASCII symbols. But ...
@IosifPinelis: Nice example. Suppose you want a quote in a string, like this is "a string". You can do that in C with "this is \"a string\"", not nice. The %Q construct allows you to write %Q[this is "a string"], your string has square brackets in it? %Q(this [is] a "string"), and so on. Why go to this effort? Because code is read more than written, and by humans.
@YCor I do see how the post can be perceived as primarily opinion based, however I don't think that the "rant" in this case invalidates the question.
@LSpice nice example, thank you!
@MattF. I understand your point, however I feel that removing all even somewhat sentimentally charged words would make any discussion much less enjoyable, while keeping them doesn't preclude the rigor with which we evaluate each other's opinions. ;) I also think that the context I gave ("being frustrated", "what actual mathematicians think") makes it obvious that this post reflects my problems specifically.
@IosifPinelis I am a software engineer by trade. I can confidently say that modern programming languages are primarily created for humans, so much so that we are willing to make trade-offs in performance to accommodate this. I personally agree that Ruby can be unreadable in the wrong hands, but the sentiment of the language's author was the point here, which is to make it more pleasant to write and read.
@WillSawin good point!
@AndrewPeterPrifer : I certainly understand the desire to make programming code as readable for humans as possible. However, the inherent limitation here is that computers, too, have to be able to understand the code. So far, any computer software seems to be very fussy about things like a missing parenthesis or a semicolon. This limitation may be overcome by AI in the future. However, given Ruby suggested as the best example of a human-oriented computer language, we seem to be very very far from that goal yet. In mathematics, fortunately we are already there. :-)
@IosifPinelis for the sake of not giving you the wrong impression about programming languages, I'd just like to say that Ruby is definitely not the best example of a human-oriented language, even if the purpose was explicitly there, and is in fact one of the languages that is often derided for being "write-only". If you want to see a modern language, take a look at Swift or Kotlin. I'm of the opinion that a well-defined syntax actually benefits humans too, I'm sure an omitted parenthesis would leave you confused too. I think we can agree there is always room for improvement in notation. ;)
Mathematics uses very few variable names in any proof compared to the number of variable names occurring in typical programming languages. Variable names survive only for short passages, except for a small number (less than a dozen) of global variables. The names are subject to a host of conventions (for example, $\varepsilon$ is a small number, $N$ is a large number) which are not found in programming languages. More distinct symbols are available, as we are not constrained to use ASCII. Try to rewrite a serious piece of mathematics in the style of a programming language, and you will quickly see that it is unreadable.
We are not constrained to use ASCII, but, as Halmos pointed out (not in these terms), we don't take nearly enough advantage of the full spread afforded to us by Unicode—a lot of Greek, a bit of Hebrew, and the most horrible abuse of diacritics ….
Is it really the case that the typical computer program needs more variables than the typical mathematical text?
Maths symbols (and vocabulary) are a compromise between rigor and readability, and generally it works from the concept of a relevant non-trivial example (i.e. easy to generalize); the latter is what many people unfortunately forget: without relevant examples most theorems/proofs are just abstract nonsense.
|
2025-03-21T14:48:31.289588
| 2020-06-16T14:03:46 |
363240
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630259",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363240"
}
|
Stack Exchange
|
Generalized differential geometry based on Penrose's abstract tensor systems?
Penrose graphical notation has been an important precursor of string diagrams for monoidal categories. It was introduced in Penrose's paper Applications of negative dimensional tensors with intended use in differential geometry.
In modern language, his definition of abstract tensor system has been related to traced symmetric monoidal categories, but for Penrose's original purposes it seems better to work with a compact closed category enriched in vector spaces and whose underlying monoid of objects is $(\mathbb{N},+)$. Now string diagrams have been used in all sorts of ways since Penrose, but I have hardly seen any use in differential geometry since Penrose. Hence my question:
Has anyone attempted to develop a synthetic generalization of differential geometry based on abstract tensor systems, or similarly on a suitable type of monoidal category in which the morphisms play the role of tensors on the manifold? Where can I read about this?
Of course, an obvious challenge here is to talk about Lie brackets and Lie derivatives in terms of the category, and to state and prove the existence and uniqueness of the Levi-Civita connection. I have some ideas for how to go about this, but they seem obvious enough that I'd first like to see what has been done already.
|
2025-03-21T14:48:31.289705
| 2020-06-16T14:31:19 |
363243
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Angelo",
"Sam Gunningham",
"Simon Lentner",
"https://mathoverflow.net/users/22709",
"https://mathoverflow.net/users/4790",
"https://mathoverflow.net/users/7762"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630260",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363243"
}
|
Stack Exchange
|
Ext-Ring of (equivariant) sheaves over a variety
Apologies if this is a standard question for algebraic geometry colleagues: suppose I have a variety; what is the ring $\mathrm{Ext}(1,1)$ of self-extensions of the unit object (trivial sheaf) in the category of sheaves over the variety? Same question for the category of equivariant sheaves under some action. Is it connected to the cohomology of the variety?
Thanks for any hint or reference,
Simon
What kind of sheaves do you have in mind? What's the "trivial" sheaf?
Whatever kind of sheaf theory you have in mind, the expression Ext(1,1) is almost tautologically the cohomology of the variety (with "trivial" coefficients). So for coherent sheaves you get coherent cohomology (of the structure sheaf), for etale l-adic sheaves you get etale cohomology, for D modules you get de rham cohomology, ditto for equivariant versions...
That was fast and exactly what I wanted to know. I appreciate it a lot, thank you!
|
2025-03-21T14:48:31.289815
| 2020-06-16T14:36:32 |
363244
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"DatCorno",
"LSpice",
"https://mathoverflow.net/users/159706",
"https://mathoverflow.net/users/2383"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630261",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363244"
}
|
Stack Exchange
|
Unknown notation in "Boolean function complexity" by Stasys Jukna
I am currently reading Boolean Function Complexity - Advances and Frontiers by Stasys Jukna and on page 7 of the latest edition there is a paragraph titled Boolean functions as set systems with the following quote:
By identifying subsets $S$ of $[n] = \{1, \cdots, n\}$ with their characteristic $0–1$ vectors $v_S$, where $v_S(i) = 1$ iff $i \in S$, we can consider boolean functions as set-theoretic predicates $f: 2^{[n]} \longrightarrow \{0, 1\}$.
Now, I have never seen the notation $2^{[n]}$ before and I am not sure how to interpret it. I have looked into the notation section of the book but there is no mention of that particular usage. If anyone could help me, it would be most appreciated.
The notation $[n]$ denotes an $n$-element set of integers. Usually it means $[n] = \{1,2,\ldots,n\}$, but sometimes it goes from 0 to $n-1$ (this will basically never matter; just assume it’s the first one).
In general, $2^{S}$ could mean either the powerset of $S$ (i.e., all the subsets of $S$), or could mean the set of functions mapping from $S$ to the set $\{0,1\}$. People use these interchangeably via characteristic functions as in your quote.
People also call $2^{[n]}$ the $n$-cube or the Boolean cube. It’s just the collection of subsets of $[n]$.
You’ll know what folks mean from the surrounding notation (if ever unsure).
For instance, $A \in 2^{[n]}$ would suggest that $A$ is a set (since it’s a capital letter near the beginning of the alphabet), so in this context, $2^{[n]}$ seems to be the set of subsets of $[n]$.
On the other hand, if someone wrote $\forall f \in 2^{[n]}$, I would say this must be referring to the collection of functions. But this second notation is sort of odd.
Let me spice it up once more!
Say $\Omega = \{0,1\}^n$ (now there is no ambiguity since the exponent is a number. This must be the collection of length $n$ strings consisting of $0$’s and $1$’s) Then $\Omega$ is also called the Boolean cube and there’s a natural correspondence between these vectors, subsets of $[n]$ and functions from $[n]$ to $\{0,1\}$. The corresponding vector is called the “characteristic vector” of the corresponding set.
Then we can say things like $f : \Omega \to \{0,1\}$, which is saying $f$ is a function that takes 01-strings of length $n$ and outputs a value of 0 or 1. (I.e., $f$ is a “boolean function”).
Good luck with it all. You’ll get the hang of it soon enough, I’m sure!
Added:
By the way, looking at your quote, I’m guessing boolean functions were defined as mapping from strings (aka vectors) to $\{0,1\}$, i.e., maps from $\Omega$ (as above) to $\{0,1\}$. And the author is using $2^{[n]}$ to mean the collection of subsets of $[n]$. So the author’s point is that instead of viewing the inputs as strings, you could view them as sets, and in that way you could say boolean functions take in subsets of $[n]$, and they spit out a number.
Example:
Say $n=3$, and $f(x_1 x_2 x_3) = x_3$ [this is called a dictator function, since the output is determined by only one bit of the input string (they like social choice metaphors)].
Then a string like $011$ corresponds to the set $\{2,3\}$ (where the $011$ can be interpreted as describing a set one element at a time as “it’s missing 1, it contains 2, it contains 3”).
Then viewed as a function on sets, the same map $f$ would be $f(S) = \begin{cases} 1, \qquad \text{if $3 \in S$}\\ 0, \qquad \text{otherwise}\end{cases}$.
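To make the correspondence concrete, here is a tiny script (my own illustration, following the dictator example above) checking that the string view and the set view of $f$ agree on all of $\{0,1\}^3$:

```python
n = 3

def to_set(bits):
    """Characteristic-vector reading: '011' -> {2, 3}
    (element i+1 is in the set iff bit i of the string is '1')."""
    return {i + 1 for i, b in enumerate(bits) if b == '1'}

def f_string(bits):
    # the dictator function f(x1 x2 x3) = x3, viewed on strings
    return int(bits[-1])

def f_set(S):
    # the same function viewed on subsets of [n]
    return 1 if 3 in S else 0

# the two views agree on every point of the Boolean cube
for x in range(2 ** n):
    bits = format(x, '03b')
    assert f_string(bits) == f_set(to_set(bits))
```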
Thank you, I've seen and used all the other notations you have mentioned but I have never met the powerset one; this is quite enlightening!
"Now there is no ambiguity" is always optimistic in mathematics. :-) I'd argue that there's some ambiguity even in ${0, 1}^n$, since some authors use just $n$ for ${0, \dotsc, n - 1}$ (in accord with the von Neumann construction), so it could equally be interpreted as a function space. This is the same trick as one plays when using $2$ for ${0, 1}$, as in $2^{[n]}$; and, of course, the apotheosis of this sort of notation would be writing $2^n$ for a power set, with the consequence that $\lvert2^n\rvert = 2^n$, which is a true statement in both set theory and real arithmetic ….
|
2025-03-21T14:48:31.290143
| 2020-06-16T14:52:05 |
363248
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Achim Krause",
"Bob",
"LSpice",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/39747",
"https://mathoverflow.net/users/53199"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630262",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363248"
}
|
Stack Exchange
|
Is the average associator over a finite subloop of octonions necessarily zero?
For any three octonions $a,b,c$, their associator is defined as \begin{equation*} [a,b,c]=a(bc)-(ab)c \end{equation*} and measures their non-associativity so to speak.
Now suppose that $L$ is a finite non-associative subloop of $\mathbb{O}$ and that $\{x_\ell\}_{\ell\in L}$ is a collection of arbitrary (distinct) octonions, indexed by $L$. If $h$ is any element of $L$, is the "average associator"
\begin{equation} \frac{1}{|L|}\sum_{\ell\in L} [h,\ell^{-1},x_\ell]\end{equation}
necessarily zero?
Some motivation for the question: the sum $\frac{1}{|L|}\sum_{\ell\in L} \ell^{-1} x_\ell$ is something like a Fourier transform for an injection $f\colon L\to \mathbb{O}$. A zero average associator would mean that $\sum_\ell \ell^{-1}f(\ell h)=h\sum_\ell \ell^{-1}f(\ell)$, so that the transform is equivariant with respect to $L$.
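For anyone who wants to experiment with such averages numerically, octonion arithmetic is easy to sketch via the Cayley–Dickson construction (a sketch of my own, not from the question; the nested-pair representation, sign convention, and tolerances are my choices):

```python
import random

# Cayley-Dickson tower: reals -> complexes -> quaternions -> octonions.
# An element is either a float or a pair (a, b) of lower-level elements.

def neg(x):
    return -x if isinstance(x, float) else (neg(x[0]), neg(x[1]))

def conj(x):
    return x if isinstance(x, float) else (conj(x[0]), neg(x[1]))

def add(x, y):
    return x + y if isinstance(x, float) else (add(x[0], y[0]), add(x[1], y[1]))

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    # (a, b)(c, d) = (ac - d*b, da + bc*), one standard convention
    if isinstance(x, float):
        return x * y
    (a, b), (c, d) = x, y
    return (sub(mul(a, c), mul(conj(d), b)),
            add(mul(d, a), mul(b, conj(c))))

def associator(a, b, c):
    return sub(mul(a, mul(b, c)), mul(mul(a, b), c))

def norm2(x):
    return x * x if isinstance(x, float) else norm2(x[0]) + norm2(x[1])

def rand_oct(depth=3):
    if depth == 0:
        return random.uniform(-1.0, 1.0)
    return (rand_oct(depth - 1), rand_oct(depth - 1))

random.seed(0)
a, b, c = rand_oct(), rand_oct(), rand_oct()
# octonions are alternative ([a, a, b] = 0) but not associative in general
assert norm2(associator(a, a, b)) < 1e-18
assert norm2(associator(a, b, c)) > 1e-9
```

From here one can sum $[h,\ell^{-1},x_\ell]$ over a chosen finite subloop and inspect the average directly.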
This is a neat question! By the way, TeX-style quotes don't work here; you can use dumb quotes "" or Unicode quotes “”, but ``'' doesn't get fixed up and so looks strange. I have edited accordingly.
Maybe I am misunderstanding the question, but the $x_l$ all look independent from each other, so what prohibits you to take all of them 0 except for one of them, and choose the remaining one so that the corresponding associator is nonzero?
This is a good point. I guess I should give some sort of genericity condition on the $x_\ell$. Still, your observation leads me to think the average needn't be zero even generically.
I edited so that the $x_\ell$ are all distinct
Since the expression is linear in the vector formed by the $x_l$, and the collection of such vectors with distinct entries spans the space of all such vectors ($\mathbb{O}^L$), the distinctness assumption doesn't change anything.
|
2025-03-21T14:48:31.290289
| 2020-06-16T15:21:15 |
363250
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Clement",
"https://mathoverflow.net/users/136573"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630263",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363250"
}
|
Stack Exchange
|
Getting more out of Minkowski's convex body theorem in the case of non-convex bodies
Problem. In number theory one generally proves the finiteness of the Picard group of a number field using Minkowski's convex body theorem. The actual body $S_p$ of interest in the proof, depending on some parameter $p$, is symmetric but not convex. We proceed by isolating the largest symmetric convex subset $C$ of $S_p$. The result is a worse bound on the $p$ for which we can find non-zero lattice points in $S_p$, than if we were allowed to apply Minkowski to $S_p$ instead of $C$.
Question. Are there general techniques to get better results than such a 'wasteful' application of Minkowski's theorem for symmetric but non-convex bodies? By general I mean applicable to some lattices and bodies, but not necessarily to the above problem.
The case I am particularly interested in is asymptotic: I have for every dimension $d$ a lattice $\Lambda_d$ and a symmetric body $S_d$ such that $\log(\text{vol}(S_d)/\text{det}(\Lambda_d))=\Theta(d^2)$ and want to find a $d$ and a non-zero lattice point in $\Lambda_d\cap S_d$.
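To illustrate why one cannot simply apply Minkowski's theorem to a non-convex body: symmetry plus arbitrarily large volume alone does not force a non-zero lattice point. A brute-force check of a standard counterexample (the slab shape and bounds are my own choice, not from the question):

```python
# S = { (x, y) : 0.1 < |x| < 0.9, |y| < N } is symmetric and non-convex,
# has area 2 * 0.8 * 2N (arbitrarily large), yet contains no non-zero
# point of Z^2: no non-zero integer x satisfies 0.1 < |x| < 0.9.
N = 50

def in_S(x, y):
    return 0.1 < abs(x) < 0.9 and abs(y) < N

area = 2 * 0.8 * 2 * N
assert area > 4  # far beyond Minkowski's threshold 2^2 * det(Z^2)

hits = [(x, y)
        for x in range(-N, N + 1)
        for y in range(-N, N + 1)
        if (x, y) != (0, 0) and in_S(x, y)]
assert hits == []
```

So convexity is genuinely needed, and for non-convex $S$ one must either pass to a convex subset or exploit extra structure of $S$ and the lattice.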
Dear MadPidgeon, could you please remind us what the geometrical properties of this symmetric but non-convex body are? Or maybe give us a reference where we could find them? Many thanks in advance.
|
2025-03-21T14:48:31.290407
| 2020-06-16T15:24:46 |
363251
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Mark Grant",
"https://mathoverflow.net/users/8103",
"https://mathoverflow.net/users/8631",
"Łukasz Lew"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630264",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363251"
}
|
Stack Exchange
|
Questions about a structure related to simplicial complexes
While researching some superficially unrelated theory, a structure similar to the one described below presented itself to me. I'm having trouble identifying the structure's name. It seems to be related to simplicial homology, or may even be isomorphic to it. I am unfamiliar with simplicial homology; perhaps the answer to this question is "Yes, what you're asking about is exactly simplicial homology." I also realize the questions I ask at the bottom are not entirely well defined. I would be grateful for any pointers that would allow me to identify the structure in question or a similar one.
Let's assume ambient undirected graph universe.
Level 1 structures are surfaces defined by (sets of) directed cycles (i.e. sets of edges with empty boundary) in the graph. One can compose two or more cycles if their directed edge sets have empty intersection.
The sum is defined by taking the union and, if the resulting set contains edges of opposite orientation, removing them. This effectively merges the cycles.
It is easy to show that the result will still be a (set of) cycles.
Orientation of the surface is defined by the orientation of all edges of its boundary. There is a unique involution operation on surfaces defined by reversing orientation of all the edges of its boundary.
Level 2 structures are (sets of) volumes defined by sets of oriented surfaces. Similarly, for a volume to be well defined, the boundary of the boundary must be empty. Volume summing is defined in the same way: the sets of boundaries (oriented surfaces; different orientations are distinct) of the summands must be disjoint, and we remove surfaces of opposite orientation. The resulting volumes are oriented as well.
That construction can be iterated.
Questions from the most general to most specific:
Does this structure have a name in literature?
Is it plain isomorphic to some subset of oriented simplicial complexes or simplicial homology or some generalization of them? could you give hints on how to construct said isomorphism?
I am not sure whether the definition should insist on disjointness in the sum operation, or just focus on the "boundary of the boundary is empty" condition.
Can this structure be generalized so that the above would be just a particular presentation of it? The naive way of adding equalities on each level does not feel particularly elegant ... perhaps an axiomatic approach can give a better theory, but I'm not sure how to define it.
Can this structure "model" "higher order" linear spaces along these lines:
Orientation involution corresponds to taking a dual linear space.
Summing of structures with disjoint undirected boundaries corresponds to tensor product.
Empty structure corresponds to scalar.
General summing corresponds to general composition.
Shapes in general correspond to "types/shapes/names" of the sub-spaces / coordinates.
I apologize for the fuzziness. What I'm trying to accomplish with this question is to remove that exact fuzziness.
Thank you.
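For what it's worth, the "boundary of the boundary is empty" condition imposed at each level is exactly the identity $\partial\circ\partial = 0$ of simplicial homology. A minimal check on a single tetrahedron (my own illustration, not from the question):

```python
import numpy as np
from itertools import combinations

# boundary maps of the simplicial complex on one tetrahedron {0,1,2,3}
verts = list(range(4))
edges = list(combinations(verts, 2))   # 6 oriented edges
tris  = list(combinations(verts, 3))   # 4 oriented triangles

# d1: edges -> vertices, with boundary(u, v) = v - u
d1 = np.zeros((len(verts), len(edges)), dtype=int)
for j, (u, v) in enumerate(edges):
    d1[v, j] += 1
    d1[u, j] -= 1

# d2: triangles -> edges, dropping the i-th vertex with sign (-1)^i
d2 = np.zeros((len(edges), len(tris)), dtype=int)
for j, t in enumerate(tris):
    for i in range(3):
        face = t[:i] + t[i + 1:]
        d2[edges.index(face), j] += (-1) ** i

# "the boundary of the boundary is empty"
assert not (d1 @ d2).any()
```

Reducing the entries mod 2 gives the unoriented version of the same condition.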
What exactly is meant by the sentence "Let's assume ambient undirected graph universe."?
I realize now that a more standard phrasing would be: "Let's fix an undirected graph G".
This is the graph that the further definitions refer to. The phrase is meant to indicate that all levels or dimensions share the same underlying graph.
You talk about "surfaces" and "volumes", which are 2- and 3-dimensional concepts, whereas the underlying graph is inherently 1-dimensional. What is your schema for associating these higher-dimensional objects to the underlying graph? Once you pin that down, it does sound like simplicial homology (maybe with coefficients mod 2).
|
2025-03-21T14:48:31.290661
| 2020-06-16T16:09:08 |
363253
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"https://mathoverflow.net/users/43326",
"https://mathoverflow.net/users/99414",
"user101010",
"user43326"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630265",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363253"
}
|
Stack Exchange
|
Using the Serre spectral sequence - moving between $\mathbb{Z}/2$ and $\mathbb{Z}$ information
I am trying to understand the computation of $\pi_5(S^3)$ and $\pi_6(S^3)$ using the Serre spectral sequence. I know already that $\pi_5(S^3)$ is only 2-torsion and $\pi_6(S^3)$ is 2-torsion together with a $\mathbb{Z}/3$ summand. For what it's worth, I'm looking at Hatcher's notes on the subject (starting on page 573). I'm sure that my confusion is fairly banal and probably will be alleviated in the comments, but I appreciate the help nevertheless. I'll start with the setup.
We look at the part of the Postnikov tower $K(\mathbb{Z}/2, 4) = K(\pi_4(S^3), 4) \to X_4 \to X_3 = K(\mathbb{Z}, 3)$ where here $X_i$ is the $i$-th term in the Postnikov tower for $S^3$. For the fiber and the base space, we know already the cohomology with $\mathbb{Z}/2$ coefficients and using Bockstein cohomology and some Adem relations, we can also work out what the integral cohomology is in low degree groups (modulo odd torsion). Hatcher draws a nice picture of the $E^2$-page of this spectral sequence (page 574). He uses the convention that an element labeled with an open circle comes from a reduction of an integral class and that an element with a closed circle does not. The Bockstein cohomology is computed for the fiber and the base in low degrees and this allows the circles in these dimensions to be filled in correctly.
My first confusion is: how do we know how to fill in the terms that are not on the x- or y-axis? I imagine the convention here is to consider the reduction $H^p(B;H^q(F; \mathbb{Z})) \to H^p(B;H^q(F; \mathbb{Z}/2))$ and to fill elements in if they do not lie in its image. In particular, the element $\iota_3 \iota_4$ and the element $\iota_3 Sq^1 \iota_4$ have a solid dot and a filled dot respectively, and I do not know why.
Now we compute lots of differentials of this spectral sequence, since we know that the cohomology of the total space vanishes through dimension 5, using that various elements are transgressive and that the transgression commutes with Steenrod squares. Once we have done this, we can say exactly what the $\mathbb{Z}/2$-cohomology of the total space is up to degree 8. Apparently, we can also say what the $\mathbb{Z}$-cohomology of the total space (modulo odd torsion) is in these dimensions as well. Why do we know this? It looks like we just look at the things on the $E^2$-page, see what lives on to the $E^\infty$-page, and then looking at whether the dots are filled or not tells us everything, but I do not understand why.
As to your first question, since we are talking about mod 2 cohomology, we can simply use the Künneth formula to fill in. The reason why Hatcher bothered to mark these classes is, presumably, because they hit the class in the bottom line.
@user43326 What do you mean by using the Künneth formula here? The only Künneth that comes to mind for me here is for computing (co)homology of products. The reason these classes are drawn is not really because they hit the bottom (see the next spectral sequence that is pictured in that section), but because they are involved in the computation of the cohomology of the total space through degree 8.
OK, it wasn't really Künneth, but the thing is that $H^*(F;\mathbb{Z}/2)$ is a (graded) vector space over $\mathbb{Z}/2$, so taking cohomology with $H^*(F;\mathbb{Z}/2)$ coefficients is the same as taking cohomology with $\mathbb{Z}/2$-coefficients and then tensoring with $H^*(F;\mathbb{Z}/2)$.
|
2025-03-21T14:48:31.290937
| 2020-06-16T16:12:19 |
363254
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"AMath91",
"Dieter Kadelka",
"Gerald Edgar",
"Martin Hairer",
"Michael Greinecker",
"https://mathoverflow.net/users/100904",
"https://mathoverflow.net/users/127739",
"https://mathoverflow.net/users/131781",
"https://mathoverflow.net/users/35357",
"https://mathoverflow.net/users/38566",
"https://mathoverflow.net/users/454",
"user131781"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630266",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363254"
}
|
Stack Exchange
|
Tight sequence of measures
This is probably a very easy question for experts in probability or measure theory.
I have a sequence of finite measures $\mu_{n}$ on a non-compact metric space $X$ such that $\mu_{n}$ converges to $\mu$ in the following sense:
$$ \int_{X}fd\mu_{n} \to \int_{X}fd\mu \ \ \ \ \ \text{ for all f continuous with compact support}
$$
I would like to say that $\mu_{n}(X)\to \mu(X)$.
I know this is false in general, but I have the additional condition that for every $\epsilon>0$ there is $n_{0}\in \mathbb{N}$ and $K\subset X$ compact such that $\mu_{n}(K^{c})\leq \epsilon$ for every $n\geq n_{0}$. This looks very similar to the definition of tight sequence (which guarantees the result I would like). Is this equivalent?
Additional assumptions: X is Polish and locally compact; precisely, it is a closed surface with finitely many points removed. All measures $\mu_{n}$ and $\mu$ are area measures of Riemannian metrics (with singularities at the points removed) on X.
I would say yes, but I'm not sure what the "official" definition of tightness is you are using.
The definition of tightness I found is the following: a sequence $\mu_{n}$ is tight if for every $\epsilon>0$ there is a compact set $K$ such that $\mu_{n}(K^c)<\epsilon$ for all $n$. The difference with what I have is that for me this condition holds only for $n \geq n_{0}$ with $n_{0}$ depending on $\epsilon$.
Is your metric space by any chance separable and complete (or each of your measures regular)? Then you can find for each $m<n_0$ a compact set $K_m$ such that $\mu_m(K_m^c)\leq \epsilon$. Replacing $K$ with $K\cup K_1\cup\cdots\cup K_{n_0-1}$ would show that the sequence is tight in the usual sense.
Say the metric space is $\mathbb Q$ with the usual topology. It has no nonzero continuous functions with compact support. So $$\int_{X}fd\mu_{n} \to \int_{X}fd\mu \quad\text{ for all f continuous with compact support}$$is vacuously true, but $\mu_{n}(X)\to \mu(X)$ could easily fail.
Even $X$ being Polish, which is pretty much as nice as it gets, doesn't guarantee that you have non-zero continuous functions with compact support, think of $X = \ell^2$. You probably want $X$ to be Polish and locally compact, and the $\mu_n$'s to be positive, but then the claim is of course trivial.
X is locally compact and Polish. Actually, all these measures in my problem are area measures of (singular) Riemannian metrics on X. I wasn't completely sure that my assumptions guarantee convergence of the areas.
You should edit your question.
Since your space is Polish, $\mu$ is regular, so for each $\epsilon>0$ there exists a compact set $C$ such that $\mu(X\setminus C)<\epsilon$. Since your space is locally compact, there is another compact set $C'$ such that $C$ is a subset of the interior of $C'$. The function given by $f_n(x)=\max\{0,1-n\cdot d(C,x)\}$ is continuous and supported on $C'$ whenever $n>d(C,X\setminus C')^{-1}$. Moreover, the sequence $\langle f_n\rangle$ decreases pointwise to the indicator function $1_C$. It follows that $\liminf_n \mu_n(X)\geq\mu(C)>\mu(X)-\epsilon$ for each $\epsilon>0$, and hence $\liminf_n\mu_n(X)\geq\mu(X)$. A similar argument applied to the compact set in your tightness version shows that $\limsup_n\mu_n(X)\leq\mu(X)$.
That the sequence is tight follows directly from inner regularity of measures on Polish spaces. If there are $n_{0}\in \mathbb{N}$ and $K\subset X$ compact such that $\mu_{n}(X\setminus K)\leq \epsilon$ for every $n\geq n_{0}$, you can just pick a compact set $K_m$ for each $m< n_0$ such that $\mu_m(X\setminus K_m)\leq\epsilon$. Take $K'=K\cup\bigcup_{m<n_0}K_m$; then $\mu_n(X\setminus K')\leq\epsilon$ for all $n$.
As has been pointed out in the comments, tightness is not enough if you only have convergence of integrals for compactly supported continuous functions.
This is really just a comment to add context to your question and the reactions but it will be too long so I am disguising it as an answer. Stripping away all trappings the relevant fact is that an equicontinuous set $A$ in the dual of a locally convex space $E$ is relatively compact for the corresponding weak topology and so that latter coincides on $A$ with the weak topology induced by any dense subspace $E_0$ of $E$. In your case, $E$ is the space of bounded, continuous functions on a locally compact space, provided with the so-called strict topology which was introduced by R.C. Buck in the 50‘s. This has the following three relevant properties which set up the connection to your question.
The dual space is the space of finite tight (or Radon) measures;
A family of measures is equicontinuous if and only if it is bounded and uniformly tight;
The space $E_0$ of continuous functions with compact supports is dense.
This shows that many of the assumptions given above are irrelevant. One can even replace local compactness by complete regularity. The former condition is required to ensure the denseness of $E_0$. In the general case, one can use any subalgebra of $E$ which separates points and is such that for each element of the underlying space there is a function in the subalgebra which does not vanish there.
The original formulation did not come with the assumption that the measures are Radon.
The formulation in the question is imprecise—it wasn’t clear whether the measures are probability measures (presumably not), positive, or signed (in which case there is an absolute value sign missing in the definition of uniform tightness). This means that there is some ambiguity in the statement. It seemed reasonable to me to assume that since the OP was using the phrase tightness for sequences of measures, he was tacitly assuming that the individual measures were tight. If not, what are they? Maybe he could specify what regularity condition he understands in the term “measure”.
The point of asking whether it is a Polish space is exactly to ensure they are Radon. The test functions used are not bounded continuous functions but compactly supported functions, which is why local compactness matters. You write that "many of the assumption given above are irrelevant" but simply replace them by others.
No. As I tried to explain, the original formulation is not precise enough to specify what concept of measure is intended. Is it finite additivity, $\sigma$- or $\tau$-additivity, tightness, or .....? I chose what seemed the most natural interpretation but I could, of course, be wrong. Only the OP can know—perhaps he could put us out of our misery by editing his post.
They are finite measures on a Polish locally compact space; as is written in yesterday's edit. The only thing that is missing is the $\sigma$-algebra, which is of course obvious in this context. There is no misery left.
Then they are tight— my contribution was to indicate for tight measures on any completely regular space a result holds which is more general and, I hope, more transparent and enlightening since it displays the role of equicontinuity and weak compactness in the result. But that is just my wee opinion and you are, of course, entitled to disagree.
I have no problem with the result and it does answer the "official question." But it is not enough for the stated goal of showing that $\mu(X)=\lim_n\mu_n(X)$. For this one needs local compactness, so all these assumptions are ultimately necessary. And that was the content of my first comment.
|
2025-03-21T14:48:31.291585
| 2020-06-16T16:28:17 |
363257
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Leray Jenkins",
"https://mathoverflow.net/users/143607"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630267",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363257"
}
|
Stack Exchange
|
anti-holomorphic Hilbert modular forms as global sections
The classical definition of Hilbert modular cuspforms as given in, say, Hida's "On $p$-adic Hecke algebras for $\mathrm{GL}_2$ over totally real fields", defines them as holomorphic functions $f: \mathbb{H}^d \to \mathbb{C}$ satisfying the usual Möbius transformation properties with respect to a chosen level subgroup and weight. (Here $\mathbb{H}$ is the usual upper half-plane, and $d=[F:\mathbb{Q}]$ with $F$ your totally real field.)
There are also, some slight variants of this construction (as can also be found in Hida's paper), where one picks some subset $J$ of the set of embeddings of $F$ into $\mathbb{C}$ and then require that these functions be holomorphic in the $J$ components and anti-holomorphic in the $J^c$ components (here ${}^c$ is complement). Let me call them $J$-holomorphic. These show up for example in the Eichler--Shimura isomorphism for Hilbert modular forms.
Now, I know the holomorphic forms can be seen as the global sections of some sheaf on some Hilbert modular variety (à la Katz). But what about these $J$-holomorphic ones? Are they $H^0$s of some sheaf? Or maybe $H^k$s for some $k>1$ of the modular sheaf? If it's the second option, then is $k=|J|$?
Thanks
Your second guess is spot on: these $J$-holomorphic forms are exactly $H^{k}$ of a sheaf on (a smooth compactification of) the Hilbert modular variety, where $k = |J^c|$. This is an instance of a much more general theory which is largely due to Harris. The canonical reference is
Harris, Michael. Automorphic forms of $\bar{\partial}$-cohomology type as coherent cohomology classes. J. Differential Geom. 32 (1990), no. 1, 1--63.
Thanks, that is what I hoped for.
|
2025-03-21T14:48:31.291740
| 2020-06-16T16:37:24 |
363258
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630268",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363258"
}
|
Stack Exchange
|
Analog of Cartan model for equivariant homology
Let $X$ be a manifold, acted on by a Lie group $G$.
(For example $X$ real-even-dimensional acted on by $G=U(1)$ with only finitely many isolated fixed points.) The Cartan model for $G$-equivariant cohomology of $X$ with real coefficients is built using the differential
$d_e := d - i_v$, where $v$ is the vector field generated by an element of $\operatorname{Lie} G$ and $i_v$ is contraction.
Is there a similar construction for equivariant homology, where one introduces a boundary operator of the form $\partial_e = \partial - j$ (for some operator $j$) and writes an equivariant cycle in terms of usual cycles multiplied by polynomials of some Lie algebra parameter?
I'm looking for a construction that is equivalent to Borel-Moore equivariant homology, in the same spirit as the Cartan model above is equivalent to the Weil model of equivariant cohomology.
|
2025-03-21T14:48:31.291832
| 2020-06-16T16:52:54 |
363259
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Pat Devlin",
"dohmatob",
"https://mathoverflow.net/users/22512",
"https://mathoverflow.net/users/78539"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630269",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363259"
}
|
Stack Exchange
|
Approximate any point of the interval $[-1/2,1/2]$ by the sum of $n$ iid uniform random variables from $[-1,1]$
Let $x \in [-1/2,1/2]$ and $X_1,\ldots,X_n$ be drawn iid from the uniform distribution on $[-1,1]$.
Question. Given $\varepsilon \ge 0$ and an integer $k \in [1,n]$, what is a good lower bound on the probability $p_{n,k,\varepsilon}$ that: for every $x \in [-1/2,1/2]$, there exists $S \subseteq \{1,2,\ldots,n\}$ with $1 \le card(S) \le k$ such that $|\sum_{i \in S} X_i-x| \le \varepsilon$?
I'm particularly interested in the case $k=n$.
Notes.
If one could get a good lower-bound for the probability of approximating a fixed $x \in [-1/2,1/2]$ as a sum $\sum_{i \in S}X_i$, then one could use a covering argument on the interval $[-1/2,1/2]$ to get a uniform bound.
One could generalize the above problem as follows. Let $0 \le r < 1$ and $B_r^m$ be the origin-centered ball of radius $r$ in $\mathbb R^m$, and let $X_1,\ldots,X_n$ be drawn iid uniformly from the unit ball $B_1^m$.
Question. What is a good lower bound for the probability $p_{m,r,n,k,\varepsilon}$ that:
for every $x \in B_r^m$, there exists $S \subseteq \{1,2,\ldots,n\}$ with $1 \le card(S) \le k$ such that $\|\sum_{i \in S} X_i-x\| \le \varepsilon$ ?
As before, I'm particularly interested in the case $k=n$.
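For what it's worth, the one-dimensional $k=n$ case can be probed empirically for small $n$. The sketch below is my own illustration, not from the question (the helper name `covers` and all parameter choices are assumptions): it brute-forces all nonempty subset sums and tests an $\varepsilon$-net of $[-1/2,1/2]$, which approximates the "for every $x$" condition once the grid spacing is below $\varepsilon$, and then estimates the probability by Monte Carlo.

```python
import bisect
import itertools
import random

def covers(xs, eps, grid=50):
    """Brute-force check (small n only): is every x in [-1/2, 1/2],
    sampled on a grid of spacing 1/grid, within eps of some nonempty
    subset sum of xs?"""
    sums = sorted({sum(S) for k in range(1, len(xs) + 1)
                   for S in itertools.combinations(xs, k)})
    for j in range(grid + 1):
        x = -0.5 + j / grid
        i = bisect.bisect_left(sums, x)
        # only the two neighbours of x in the sorted list can be closest
        if not any(0 <= t < len(sums) and abs(sums[t] - x) <= eps
                   for t in (i - 1, i)):
            return False
    return True

# Monte Carlo estimate of p_{n, n, eps} for small n
random.seed(0)
n, eps, trials = 10, 0.05, 100
hits = sum(covers([random.uniform(-1, 1) for _ in range(n)], eps)
           for _ in range(trials))
print(f"n={n}, eps={eps}: estimated probability {hits / trials:.2f}")
```

This only scales to small $n$ (there are $2^n-1$ subset sums), but it is enough to compare empirical decay rates against candidate lower bounds.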
What sort of dependence does $\varepsilon$ have on $n$? [If $n \varepsilon$ is bounded, this is going to be much different than the case where say $\varepsilon n > 10 \log(n)$.]
You may assume $\varepsilon \ge e^{-Cn}$, for some fixed constant $C>0$, independent of $n$.
Naturally, we'd need $\varepsilon > 2^{-n}$ or there won't be enough different sums to land in the $1/\varepsilon$ intervals needed. Can we assume $\varepsilon > e^{-n/100}$ if we feel like? Otherwise, if we have no control over $C$, then $\varepsilon$ might be much too small for this to be possible.
Take $C= 1/100$ if you like.
Even still, you're cutting it kind of close! :-) What kind of lower bound would you be happy with? Anything that converges to 1?
I'm interested in how far one can go with the given data. Obviously, the tighter the bound, the better. One bound is not as good as another...
|
2025-03-21T14:48:31.292011
| 2020-06-16T18:29:04 |
363262
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630270",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363262"
}
|
Stack Exchange
|
Examples of strictification of a weak category obtained from a generalisation of a strict category
I have made the following observation (hopefully a correct one) when reading the paper Orbifolds as stacks:
They start with the strict $2$-category of Lie groupoids, functors, and natural transformations between functors. After realising (I think) that it is rigid and not of much interest, they construct a weak $2$-category of Lie groupoids, bibundles and isomorphisms of bibundles. Then, they embed this weak $2$-category into a strict $2$-category of stacks.
So, we start with a strict $2$-category, realise that it is difficult to work with, embed it in a weak $2$-category, and then strictify it to get a strict $2$-category. Surprisingly, this strict $2$-category is “interesting” and “good enough to work with”.
Are there other “interesting” strict $2$-categories obtained from strictification of a weak $2$-category?
Are there other “interesting” strict $2$-categories obtained from strictification of a weak $2$-category obtained from “generalising” a strict $2$-category?
|
2025-03-21T14:48:31.292142
| 2020-06-16T19:07:41 |
363263
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Anton Mellit",
"Fedor Petrov",
"Ira Gessel",
"LSpice",
"Suvrit",
"Terry Tao",
"darij grinberg",
"https://mathoverflow.net/users/10744",
"https://mathoverflow.net/users/2383",
"https://mathoverflow.net/users/2530",
"https://mathoverflow.net/users/4312",
"https://mathoverflow.net/users/766",
"https://mathoverflow.net/users/8430",
"https://mathoverflow.net/users/89514"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630271",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363263"
}
|
Stack Exchange
|
Is this formal noncommutative power series identity known?
I recently discovered the following cute formal noncommutative power series identity: if $(x_i)_{i \in I}$ is some finite collection of noncommuting variables, then the formal power series
$$ 1 + \sum_{m=1}^\infty \sum_i x_i^m = 1 + \sum_i x_i+ \sum_i x_i^2 + \sum_i x_i^3 + \dotsb$$
is the reciprocal of the formal power series
\begin{multline*}
\sum_{k=0}^\infty (-1)^k \sum_{i_1 \neq \dotsb \neq i_k} x_{i_1} \dotsb x_{i_k} \\
= 1 - \sum_i x_i + \sum_{i \neq j} x_i x_j - \sum_{i \neq j \neq k} x_i x_j x_k + \dotsb
\end{multline*}
where summation indices are understood to range in $I$ if not otherwise specified. (Note that we do not require the $i_1,\dots,i_k$ to all be distinct from each other; it is only consecutive indices $i_j, i_{j+1}$ that are required to be distinct. So this isn't just the Newton identities relating power sums with elementary symmetric polynomials, though it seems to be a cousin of these identities.)
For instance, if $\lvert I\rvert=n$ and $x_i=x$, this identity amounts (after summing the geometric series) to the (formal) assertion
$$ \left(1 + \frac{nx}{1-x}\right)^{-1} = 1 - \frac{nx}{1+(n-1)x}$$
which follows from high school algebra.
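For the record, this scalar identity is easy to machine-check with exact rational arithmetic (a throwaway sketch of mine; both sides are rational functions of $x$, so agreement at enough sample points is already conclusive):

```python
from fractions import Fraction

# lhs and rhs of the scalar identity; exact rationals avoid floating-point noise
def lhs(n, x):
    return 1 / (1 + n * x / (1 - x))

def rhs(n, x):
    return 1 - n * x / (1 + (n - 1) * x)

for n in range(1, 6):
    for num in range(1, 10):          # sample points x = 1/10, ..., 9/10
        x = Fraction(num, 10)
        assert lhs(n, x) == rhs(n, x)
print("scalar identity holds at all sampled points")
```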
Once written down, the general identity is not hard to prove: multiply the two power series together and observe that every non-constant term with a coefficient of $+1$ is cancelled by a term with a coefficient of $-1$ and vice versa. But I am certain that an identity this basic must already be in either the enumerative combinatorics or the physics literature (EDIT: it is very implicitly in the free probability literature, which is how I discovered it in the first place, but to my knowledge it is not explicitly stated there). Does it have a name, and where is it used? Presumably there is also some natural categorification (or at least a bijective or probabilistic proof).
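Since the cancellation argument is finitary in each degree, the identity can also be machine-checked up to any fixed degree by storing each formal series as a map from words (tuples of variable indices) to integer coefficients. This is a sketch of my own, not taken from the literature mentioned above:

```python
from itertools import product

N = 4                  # truncation degree
letters = [0, 1, 2]    # indices of noncommuting variables x_0, x_1, x_2

def mul(a, b, cap):
    """Multiply two word-indexed series, dropping words longer than cap."""
    c = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = wa + wb
            if len(w) <= cap:
                c[w] = c.get(w, 0) + ca * cb
    return {w: v for w, v in c.items() if v != 0}

# first series: 1 + sum_i (x_i + x_i^2 + ... + x_i^N)
A = {(): 1}
for i in letters:
    for m in range(1, N + 1):
        A[(i,) * m] = 1

# second series: sum_k (-1)^k over words whose adjacent letters differ
B = {}
for k in range(N + 1):
    for w in product(letters, repeat=k):
        if all(w[j] != w[j + 1] for j in range(k - 1)):
            B[w] = (-1) ** k

assert mul(A, B, N) == {(): 1}
print(f"product is 1 up to degree {N} in {len(letters)} variables")
```

Raising `N` or enlarging `letters` checks higher truncations of the same identity.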
This cancellation of terms $\pm x_i^s\cdot x_jx_k\ldots$ and $\mp x_i^{s-1}\cdot x_ix_jx_k\ldots$ is what I would call bijective.
Fair enough. I guess I want a different bijective proof. For instance, the second power series looks vaguely like an inclusion-exclusion formula applied to some (noncommutative) random set, so I feel like there should be some "bijection" (possibly (noncommutative) probabilistic in nature) involving some sort of residual set arising from inclusion-exclusion. Related to this, there should be a combinatorial or probabilistic proof that the second power series is non-negative whenever it converges and the $0 < x_i < 1$ are real numbers.
The commutative projection of your formula is Exercise 5.22 in Eric Egge, An Introduction to Symmetric Functions and Their Combinatorics, AMS 2019. It also appears in the Second Proof of Proposition 5.3 in Richard P. Stanley, A Symmetric Function Generalization of the Chromatic Polynomial of a Graph. I wouldn't be surprised if it goes back further to Carlitz.
I'm hoping that a version of this can be found in: https://arxiv.org/abs/hep-th/9407124 (Noncommutative symmetric functions, Israel Gelfand, D. Krob, Alain Lascoux, B. Leclerc, V. S. Retakh, J.-Y. Thibon), but haven't checked in there myself yet.
Chromatic symmetric functions have been lifted to the noncommutative realm in D. Gebhard and B. Sagan, A chromatic symmetric function in noncommuting variables, J. Algebraic Combin. 13 (2001), 227--255, but I don't see the formula there.
@Suvrit: Pretty sure it's not in that paper.
Note that you can replace the $\neq$ relation by any binary relation $R$ in the second series, as long as you correspondingly replace the first series by $1 + \sum_i x_i + \sum_{i S j} x_i x_j + \sum_{i S j S k} x_i x_j x_k + \cdots$, where $S$ is the relation complementary to $R$ (that is, $i S j$ holds if and only if $i R j$ does not). For example, $R$ could be $\leq$, and $S$ would then be $>$. (This would yield the antipode relation in $\operatorname{NSym}$ with the standard embedding into noncommutative power series.)
This sounds like the following vaguely Nim-ish game: Players 1 and 2 are faced with $I$ spots, and alternate choosing one of the spots, different from the previous one chosen, and placing a marker on it. Eventually they agree to stop, and then Player $\omega$ chooses a spot $i$ and a non-negative integer $m$, and places $m$ markers on spot $i$. Then there is a bijection between games (not just final states) "Player $\omega$ played after Player 2, or at the beginning of the game, but at least one of them did something" and "Player $\omega$ played after Player 1."
Section 2.4.16 of I. P. Goulden and D. M. Jackson, Combinatorial Enumeration, Wiley 1983 also gives the commutative projection of your formula. It appears noncommutative power series were insufficiently known (or popular) back when these were written...
@darijgrinberg In fact there is an even more general version: the power series $1 + \sum_{m=1}^\infty \sum_{i_1,\dots,i_m} x_{i_1} a_{i_1 i_2} \dots a_{i_{m-1} i_m} y_{i_m}$ is the reciprocal of the power series $1 + \sum_{k=1}^\infty (-1)^k \sum_{i_1,\dots,i_k} x_{i_1} b_{i_1 i_2} \dots b_{i_{k-1} i_k} y_{i_k}$ for arbitrary noncommutative variables $x_i,y_j,a_{ij}, b_{ij}$ for which $a_{ij}+b_{ij}=y_i x_j$. This begins to look like some power series expansion of the Woodbury formula https://en.wikipedia.org/wiki/Woodbury_matrix_identity .
... indeed, the Woodbury identity implies the general noncommutative matrix identity $1 - V (A+UV)^{-1} U = (1+VA^{-1} U)^{-1}$, which when applied to $A = 1 - (a_{ij})$, $U = (x_i)^T$, $V = (y_i)$ and using the geometric series formula gives the previous identity. So technically the original identity is a "special case" of the Woodbury identity, though I would say that this is not explicitly obvious.
Nice argument!!
Here is another proof. No need to think. Set $\frac{x_i}{1-x_i}=y_i$. Then $x_i=\frac{y_i}{1+y_i}$. The first series is $1+y_1+y_2+\cdots$. The second series is the alternating sum of all words in ${y_i}$.
Ah, yes, this is the bijective proof I was looking for! Setting $z_i = -y_i$ the identity becomes $\sum_{k=0}^\infty \sum_{i_1 \neq \dots \neq i_k} \prod_{j=1}^k \sum_{m=1}^\infty z_{i_j}^m = \sum_{l=0}^\infty (\sum_i z_i)^l$ which is the generating function of the fact that words in the alphabet $z_i$ are in bijection with words in the alphabet $z_i^{m_i}$ in which adjacent letters in the word have different $i$ indices.
Anton Mellit's proof is actually the same as the one in Goulden and Jackson (but in the noncommutative setting).
According to Goulden and Jackson (p. 76), the commutative version of the original formula is due to MacMahon, though they only refer to his book Combinatory Analysis and don't give a more specific reference. (Words with adjacent letters different are often called Smirnov words.)
The generalization that Darij describes in his fourth comment was first proved, as far as I know (though stated in a weaker form) by Ralph Fröberg, Determination of a class of Poincaré series, Math. Scand. 37 (1975), 29–39, https://www.mscand.dk/article/view/11585. The full noncommutative version was proved (independently) shortly thereafter in L. Carlitz, R. Scoville, and T. Vaughan, Enumeration of pairs of sequences by rises, falls, and levels, Manuscripta Math. 19 (1976), 211–243 (Theorem 7.3).
@IraGessel The Carlitz et al. reference does indeed contain the full noncommutative form of the identity! If you will post it as a formal answer to this question (also mentioning the commutative precursors of course) I will happily accept it.
Where exactly is this in Carlitz/Scoville/Vaughan?
@darijgrinberg It is Theorem 7.3, page 30.
OK, here's an excessively long expansion of my comment.
According to Goulden and Jackson's Combinatorial Enumeration (p. 76), the commutative version of the original formula is due to MacMahon, though they only refer to his book Combinatory Analysis and don't give a more specific reference. I was not able to find this formula in MacMahon's book, but on pages 99–100 of Volume I (Section III, Chapter III) MacMahon gives the related generating function (in modern notation)
$$\frac{1}{1-e_2 -2e_3 -3e_4-\cdots},$$
for counting derangements of a multiset. Here $e_i$ is the $i$th elementary symmetric function. (MacMahon uses $p_i$ for the elementary symmetric function, which is the modern notation for the power sum symmetric function.) It's not hard to show that (with $p_i$ the power sum symmetric function $x_1^i+x_2^i+\cdots$) we have
$$
\frac{1}{1-p_1+p_2-\cdots} = \frac{1+e_1+e_2+\cdots}{1-e_2 -2e_3 -3e_4-\cdots}.
$$
A combinatorial interpretation of the connection between these two generating functions has been given by J. Dollhopf, I. Goulden, and C. Greene, Words avoiding a reflexive acyclic relation, Electronic J. Combin. 11, no. 2, 2004–2006.
Words with adjacent letters different were called waves by L. Carlitz, who gave the (commutative) generating function for them in Enumeration of sequences by rises and falls: a refinement of the Simon Newcomb problem, Duke Math. J. 39 (1972), 267–280. This is probably the first appearance of the generating function, unless it's hiding somewhere in MacMahon. (Carlitz actually solved the more general problem of counting words by rises, falls, and levels.) Nowadays words with adjacent letters different are usually called Smirnov words or Smirnov sequences. This term was introduced by Goulden and Jackson; apparently Smirnov suggested the problem of counting these words though it's not clear that he did anything to solve the problem. According to the review in Mathematical Reviews of O. V. Sarmanov and V. K. Zaharov, A combinatorial problem of N. V. Smirnov (Russian),
Dokl. Akad. Nauk SSSR 176 (1967) 530–532 (I didn't look up the actual paper),
“The late N. V. Smirnov posed informally the following problem from the theory of order statistics: Given $n$ objects of $s+1$ distinct types (with $r_i$ objects of type $i$, $r_1+\cdots+r_{s+1}=n$), find the number of ways these objects may be arranged in a chain, so that adjacent objects are always of distinct types.”
When considered as compositions, i.e., when the entries are added together, Smirnov words are often called Carlitz compositions, as they were studied from this point of view by L. Carlitz, Restricted compositions,
Fibonacci Quart. 14 (1976), no. 3, 254–264.
The generalization that Darij describes in his fourth comment was first proved, as far as I know (though stated in a weaker commutative form) by Ralph Fröberg, Determination of a class of Poincaré series, Math. Scand. 37 (1975), 29–39 (page 35). It was proved (independently) shortly thereafter in L. Carlitz, R. Scoville, and T. Vaughan, Enumeration of pairs of sequences by rises, falls, and levels, Manuscripta Math. 19 (1976), 211–243 (Theorem 7.3). Their statement of the theorem doesn't seem to use noncommuting variables, though their proof contains a formula—equation (7.7)—which is essentially the noncommutative version. (I am not sure that this really makes any difference.) Just to be clear, I'll restate the theorem here, more or less, though not exactly, the way Carlitz, Scoville and Vaughan state it, with some comments in brackets.
Let $S$ be a finite set of objects and let $A$ and $B$ be complementary subsets of $S\times S$. Let $F_A$ be the generating function for all paths [today we would call them words, or possibly sequences] which avoid relations from $A$. [This is referring to a partition of $A$ which is related to their applications of the theorem, but is not really relevant to the theorem.] More specifically, define
$$F_A = 1+\sum s_{i_1}+\sum s_{i_1}s_{i_2}+\sum s_{i_1}s_{i_2}s_{i_3}+\cdots,$$
where, for example, the last sum is taken over all $i_1,i_2,i_3$ such that $s_{i_1} \mathrel{B}s_{i_2}$ and $s_{i_2}\mathrel{B} s_{i_3}$. (We use lower-case $s_i$'s for both the members of the set $S$ and for the indeterminates [which are presumably commuting] in the enumeration.)
We also introduce
$$\tilde F_B = 1-\sum s_i +\sum s_{i_1}s_{i_2}-\cdots$$
where the signs alternate and the relations must be from $A$ instead of $B$.
7.3 THEOREM. The functions $F_A$ and $\tilde F_B$ are related by $F_A\cdot \tilde F_B = 1$.
Both Fröberg and Carlitz–Scoville–Vaughan prove this by showing that all the terms in $F_A\cdot \tilde F_B$ except 1 cancel in pairs. However there is another way to prove it: expand $\tilde F_B^{-1}$ as
$\sum_{k=0}^\infty (1-\tilde F_B)^k$ and use inclusion-exclusion.
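(For concreteness — my own illustration, not part of the answer — the theorem can be checked coefficient by coefficient for a small alphabet: in $F_A\cdot \tilde F_B$ the coefficient of a word $w$ is the signed sum over splittings $w=uv$ with the consecutive pairs of $u$ in $B$ and those of $v$ in $A$, weighted by $(-1)^{|v|}$, and this should be 1 for the empty word and 0 otherwise.)

```python
from itertools import product

def check_csv(q, A, max_len):
    """Verify F_A * F~_B = 1 coefficientwise for words of length <= max_len
    over the alphabet {0, ..., q-1}. A is a set of ordered pairs; B is its
    complement in S x S."""
    B = {(a, b) for a in range(q) for b in range(q)} - A
    chain = lambda w, rel: all(p in rel for p in zip(w, w[1:]))
    for n in range(max_len + 1):
        for w in product(range(q), repeat=n):
            # sum over splittings w = w[:i] + w[i:], sign (-1)^len(v)
            coeff = sum((-1) ** (n - i)
                        for i in range(n + 1)
                        if chain(w[:i], B) and chain(w[i:], A))
            assert coeff == (1 if n == 0 else 0)
    return True

# Example: A = "levels" (equal adjacent letters), so F_A counts Smirnov words
check_csv(3, {(a, a) for a in range(3)}, 4)
```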
Carlitz, Scoville, and Vaughan then apply the theorem to counting Smirnov words.
The Carlitz–Scoville–Vaughan theorem is one of my favorite formulas in enumerative combinatorics, and my 1977 Ph.D. thesis has many applications of it. The slides from a talk I gave about this theorem can be found here.
An expansion to be sure, but by no means excessively long. This is an awesome answer!
|
2025-03-21T14:48:31.292998
| 2020-06-16T19:30:31 |
363264
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Asaf Karagila",
"Monroe Eskew",
"https://mathoverflow.net/users/11145",
"https://mathoverflow.net/users/7206"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630272",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363264"
}
|
Stack Exchange
|
Distributivity of certain infinite products
Suppose we have a sequence of posets $\{\mathbb P_n : n\in\omega\}$ such that for each $n$, $\mathbb P_{n+1}$ is $|\mathbb P_n|^+$-distributive. Is $\prod_{n>0} \mathbb P_n$ necessarily $|\mathbb P_0|$-distributive?
Is the product a full support one?
@AsafKaragila Yes, indeed.
And when you say $\kappa$-distributive, do you mean the intersection of $\kappa$ dense open sets is dense, or that for any $\alpha<\kappa$ the intersection of $\alpha$ dense open sets is dense?
The latter. It’s the “right” definition since $\kappa$-closed implies $\kappa$-distributive.
|
2025-03-21T14:48:31.293070
| 2020-06-16T20:25:36 |
363266
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Arno Fehm",
"https://mathoverflow.net/users/50351"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630273",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363266"
}
|
Stack Exchange
|
Class number of certain polynomials
Let $f_n(x)=x^n-\sum\limits_{i=0}^{n-1}{x^i}$ and $A_n$ the number field corresponding to $f_n$.
Question: Is the class number of $A_n$ always equal to one, or equivalently, is the ring of integers of $A_n$ a principal ideal domain?
This is true for $n \leq 16$ using MAGMA and also true for $n \leq 30$ assuming the GRH (according to MAGMA).
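(A side computation of mine, not from the thread: multiplying by $x-1$ telescopes the geometric sum, giving the identity $(x-1)f_n(x) = x^{n+1} - 2x^n + 1$, which is often convenient when working with these polynomials. A quick numerical check at enough integer points to pin down a polynomial identity of degree $n+1$:)

```python
def f(n, x):
    # f_n(x) = x^n - (x^{n-1} + ... + x + 1)
    return x ** n - sum(x ** i for i in range(n))

# (x - 1) * f_n(x) = x^{n+1} - 2*x^n + 1; 13 sample points suffice
# to verify a polynomial identity of degree <= 8
for n in range(2, 8):
    for x in range(-6, 7):
        assert (x - 1) * f(n, x) == x ** (n + 1) - 2 * x ** n + 1
```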
Just a side remark: These fields have been studied, e.g. Bary-Soroker, Shusterman and Zannier in the Appendix to https://arxiv.org/abs/1508.05363 prove that their maximal totally real subfield is $\mathbb{Q}$. I am not aware of results regarding their class number though.
|
2025-03-21T14:48:31.293146
| 2020-06-16T20:26:47 |
363267
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"bof",
"https://mathoverflow.net/users/43266"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630274",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363267"
}
|
Stack Exchange
|
Chromatic number of regular linear hypergraphs on $\omega$
For any cardinal $\alpha \in \omega\cup \{\omega\}$, let $[\omega]^\alpha$ denote the collection of subsets of $\omega$ having cardinality $\alpha$.
A linear hypergraph $H=(V,E)$ is a hypergraph such that whenever $e\neq e_1\in E$ we have $|e\cap e_1|\leq 1$.
A coloring of a hypergraph $H=(V,E)$ is a map $c:V \to \alpha$, where $\alpha \neq \varnothing$ is a cardinal, such that for all $e\in E$ with $|e|>1$ we have that the restriction $c{\restriction}_e$ is non-constant. We denote by $\chi(H)$ the smallest cardinal such that there is a coloring from $V$ to that cardinal.
If $\alpha \in (\omega\cup\{\omega\})\setminus \{0,1,2\}$, is there a linear hypergraph $H = (\omega, E)$ with $E\subseteq [\omega]^\alpha$ and $\chi(H)=\aleph_0$?
What does "regular" mean in your question title? If you're referring to the fact that all edges are the same size, I think those are usually called "uniform" hypergraphs; "regular" usually refers to vertex degrees. – bof
For $\alpha=\omega$ the answer is no. If $H=(\omega,E)$ is a linear hypergraph, then $E$ is countable; if $H=(\omega,E)$ is any hypergraph with $E\subseteq[\omega]^\omega$ and $E$ countable, then $\chi(H)\le2$.
For $3\le\alpha\lt\omega$ the answer is yes. It will be convenient to define the hypergraph on a countable vertex set $V\ne\omega$. Let $H=(V,E)$ where $V=[\omega]^{\alpha-1}$ and $E=\{[A]^{\alpha-1}:A\in[\omega]^\alpha\}\subseteq[V]^\alpha$. Then $H$ is a linear hypergraph, and $\chi(H)=\aleph_0$ because $\omega\to(\alpha)^{\alpha-1}_n\ (n\lt\omega)$ by the finite Ramsey theorem.
P.S. $\omega\to (m)^r_n$ is Rado's "arrow notation" for "partition relations"; it means that, for any $n$-coloring of the $r$-element subsets of $\omega$, there is an $m$-element subset of $\omega$ whose $r$-element subsets all have the same color.
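(A small brute-force check, my addition, that the construction above really is a linear hypergraph for $\alpha=3$ on a finite initial segment: vertices are the 2-subsets of $\{0,\dots,N-1\}$, each 3-subset $A$ contributes the edge $[A]^2$, and any two distinct edges share at most one vertex, since two distinct 3-sets intersect in at most 2 elements and hence have at most one common 2-subset.)

```python
from itertools import combinations

def construction_edges(N, alpha):
    """Edges of H = (V, E) with V = [N]^(alpha-1) and
    E = {[A]^(alpha-1) : A in [N]^alpha}, restricted to range(N)."""
    return [frozenset(frozenset(s) for s in combinations(A, alpha - 1))
            for A in combinations(range(N), alpha)]

E = construction_edges(7, 3)
for e1, e2 in combinations(E, 2):
    assert len(e1 & e2) <= 1  # linearity: distinct edges meet in <= 1 vertex
```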
|
2025-03-21T14:48:31.293301
| 2020-06-16T20:54:55 |
363272
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"LSpice",
"Rudyard",
"https://mathoverflow.net/users/159498",
"https://mathoverflow.net/users/2383"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630275",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363272"
}
|
Stack Exchange
|
Explicit Normalizer of SU(3) Cartan subalgebra
The normalizer $N(\mathfrak{h})$ of the Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{su}(3)$ is defined as
$$N=\left\{ x \in SU(3)\;|\; x^\dagger\mathfrak{h}x \in \mathfrak{h}\right\}$$
I would like to know explicitly what this subgroup looks like; for example, in terms of the exponentiation of generators of $\mathfrak{su}(3)$ in the Gell-Mann basis, where $\mathfrak{h}= \mathrm{Span}\{\lambda_3,\lambda_8\}$. Clearly $e^{i\mathfrak{h}}\subset N(\mathfrak{h})$; is there anything else?
I often see in the literature how $N(\mathfrak{h})/C(\mathfrak{h})$ is the discrete permutation group $S_3$ or the Weyl group $W(SU(3))$, where
$$ C(\mathfrak{h}) =\left\{ x \in SU(3)\;|\; x^\dagger h x = h, \quad \forall h \in \mathfrak{h}\right\} $$
is the centraliser. As all Cartan subalgebra elements commute with each other by definition, again clearly $e^{i\mathfrak{h}}\subset C(\mathfrak{h})$. This makes me wonder if $N(\mathfrak{h})$ is really just $e^{i\mathfrak{h}}$ plus some discrete elements.
Finally I will include here the application I have in mind for my purposes. Consider 2 general (8 component) $\mathfrak{su}(3)$ 'vectors' $r=r^i \lambda^i$ and $s=s^i \lambda^i$, where the $\lambda^i$ are the Gell-Mann basis. I want to see how many independent directions I can eliminate by the action
$$r\rightarrow x^\dagger r x, \quad s\rightarrow x^\dagger s x, \quad \mathrm{for \; some} \; x\in SU(3) .$$
If I start by focusing on $r$, it is known that one can make it so that $x^\dagger r x \in \mathfrak{h}$, thus eliminating 6 components. In order not to spoil this, while still trying to eliminate components of $s$, the residual freedom in the transformation is reduced to precisely
$$r\rightarrow x'^\dagger r x', \quad s\rightarrow x'^\dagger s x', \quad\mathrm{for \; some} \; x'\in N(\mathfrak{h}). $$
Hence my interest in this subgroup.
What does it mean to describe a subgroup of $\operatorname{SU}(3)$ starting with a basis of $\mathfrak{su}(3)$? Also, I can never remember real-group stuff like when the exponential is surjective, but it is true that $\operatorname N_{\operatorname{SU}(3)}(\mathfrak h)$ is an extension of $\operatorname C_{\operatorname{SU}(3)}(\mathfrak h)$ by a discrete group (which is the same as saying that $\operatorname W(\operatorname{SU}(3), \mathfrak h)$ is discrete).
For a more explicit description, you'll have to say which Cartan subalgebra $\mathfrak h$ you have in mind—they're all conjugate, but the explicit description will depend on which one you pick.
@LSpice I edited the question to address your comments.
"What does it mean to describe a subgroup of SU(3) starting with a basis of su(3)?" - I just meant that every subgroup should be representable in terms of a subalgebra via exponentiation.
The Cartan subalgebra I have in mind is spanned by $\lambda_3$ and $\lambda_8$ Gell-Mann matrices. This $\mathfrak{h}$ is easy to exponentiate explicitly.
Another way of putting part of my question is what is the dimension of the Lie subgroup $N(\mathfrak{h})$? It is at least 2 because it contains the subgroup generated by $\mathfrak{h}$. Are there added to this only discrete elements or are there also further continuous directions?
Since $\operatorname N_{\operatorname{SU}(3)}(\mathfrak h)$ is an extension of $\operatorname C_{\operatorname{SU}(3)}(\mathfrak h)$ by a discrete group, it has the same dimension as the centraliser; and the centraliser has Lie algebra $\mathfrak h$, so is 2-dimensional. (Now that I think about it, of course $\operatorname C(\mathfrak h)$ equals $C = e^{i\mathfrak h}$; it's generated by $C$, but $C$ is a group because $\mathfrak h$ is Abelian.) Because of this, you won't capture any more of $\operatorname N(\mathfrak h)$ by exponentiating a subalgebra.
On looking at https://en.wikipedia.org/wiki/Gell-Mann_matrices , $C = \operatorname C_{\operatorname{SU}(3)}(\mathfrak h)$ for your $\mathfrak h$ is the diagonal torus in $\operatorname{SU}(3)$, and $\operatorname N_{\operatorname{SU}(3)}(\mathfrak h)$ is generated by $C$ and the permutation matrices (which are unitary).
Reproduced from my comment; please let me know if it does not answer the question.
For your choice of $\mathfrak h$ (as the diagonal Cartan subalgebra in $\mathfrak{su}(3)$), we have that $C = \operatorname C_{\operatorname{SU}(3)}(\mathfrak h)$ is the diagonal torus in $\operatorname{SU}(3)$, which equals $e^{i\mathfrak h}$, and that $\operatorname N_{\operatorname{SU}(3)}(\mathfrak h)$ is generated by $C$ and the permutation matrices (which are unitary). (EDIT: As @Rudyard points out in the comments, half the permutation matrices are unitary but have determinant $-1$, so one must change the sign of an appropriate entry. This introduces a 2-torsion ambiguity, but in general one can't do any better; the fact that one can always do this well is called the "Tits lift". See Can we realize Weyl group as a subgroup? and Tits's article Normalisateurs de tores I linked there—sadly there was never a II—for more details.)
Thanks. I'm confused about one aspect. Are the permutation matrices special unitary? If I take the 3 dimensional representation as shown explicitly at the top of page 26 of http://www.math.lsa.umich.edu/~kesmith/rep.pdf, it seems to me that 2 out of 3 have determinant -1, so they are not an element of $SU(3)$ and I thought $N(\mathfrak{h})\subset SU(3)$
I'm sorry; you are right. Just change one of the +1s to a -1 to get determinant 1. (There's no well defined choice of lift, but that's usual for Weyl groups, where usually the best you can do is lift modulo 2-torsion. In any case, all the choices are congruent modulo $\operatorname C(\mathfrak h)$.)
I see. That works. "There's no well defined choice of lift" - by lift do you mean writing these 3 'permutation' matrices in exponentiated form in terms of $\mathfrak{su}(3)$ algebra elements? I thought exponentiating a compact simply-connected Lie group such as $SU(3)$ was a well defined mapping.
By lift, I mean choice of an element of $\operatorname N_{\operatorname{SU}(3)}(\mathfrak h)$ that projects to a given element of $\operatorname W(\operatorname{SU}(3), \mathfrak h)$. Exponentiation is well defined for any Lie group (not surjective in general, but I think so for compact Lie groups? As I say, I can never remember how real groups work).
|
2025-03-21T14:48:31.293776
| 2020-06-16T21:05:57 |
363273
|
{
"all_licenses": [
"Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/"
],
"authors": [
"Brian Hopkins",
"Max Alekseyev",
"Sam Zbarsky",
"Tom Solberg",
"https://mathoverflow.net/users/14807",
"https://mathoverflow.net/users/47135",
"https://mathoverflow.net/users/70190",
"https://mathoverflow.net/users/7076"
],
"include_comments": true,
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"provenance": "stackexchange-dolma-0006.json.gz:630276",
"site": "mathoverflow.net",
"sort": "votes",
"url": "https://mathoverflow.net/questions/363273"
}
|
Stack Exchange
|
Balls and bins, with cardinality constraints
Suppose I have $n$ sets of $k$ balls each, with each one of the $nk$ balls distributed uniformly at random among $m$ bins. Further suppose that I have a probability vector $p=(p_1,\dots,p_m)$. I am interested in selecting one ball from each of the $n$ sets, such that each bin $i$ contains roughly $p_i n$ selected balls. Is there an asymptotic expression for the number of ways this can be done, in the limit as $n\to\infty$?
So you distribute the $nk$ balls and then try to select one per set to match $p$?
@BrianHopkins yes, correct.
How do the other quantities depend on $n$? Are you keeping $k$, $m$, and $p$ constant as you take $n\to\infty$?
@SamZbarsky yes, everything else is fixed, and we can assume that $k\ll m$ if we want.
What "roughly" means? The answer may significantly depend on how one defines "roughly" here.
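To make the setup concrete (my own sketch, not from the thread; I interpret "roughly" as "within a fixed tolerance of $p_i n$", which is only one of several reasonable readings), here is a brute-force count for tiny parameters:

```python
from itertools import product

def count_selections(sets, p, tol=0):
    """sets: list of n lists, each giving the bin index of each of that
    set's k balls. Count the choices of one ball per set whose resulting
    bin counts are all within tol of p_i * n."""
    n, m = len(sets), len(p)
    total = 0
    for choice in product(*sets):  # one bin index chosen per set
        counts = [sum(1 for b in choice if b == i) for i in range(m)]
        if all(abs(counts[i] - p[i] * n) <= tol for i in range(m)):
            total += 1
    return total

# n = 4 sets, k = 2 balls each, every set having one ball in each of
# m = 2 bins: selections with exactly two per bin number C(4,2) = 6
assert count_selections([[0, 1]] * 4, [0.5, 0.5]) == 6
```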
|